Delaunay variables
Question: I've read a little bit about Delaunay variables, but I can't understand what they are good for. Do they make calculations easier? What is the advantage of using them? Where can I read a bit more about it? There's not much in the books. Answer: Long story short: Delaunay variables are what are called action-angle variables, which makes them extremely useful in perturbation theory. Let me first talk about what action-angle variables are in general and then mention a few facts about Delaunay variables themselves. Let us consider a Hamiltonian system of $N$ degrees of freedom with generic canonical coordinates $p_i,q^j,\ i,j=1,...,N$ and a general integrable Hamiltonian $H(p_i,q^j)$. Action-angle coordinates are a special type of canonical coordinates $J_\beta(p_i,q^i), \psi^\alpha(p_i,q^i),\, \alpha,\beta = 1,...,N$ such that the Hamiltonian is expressed only as a function of the actions $J_\beta$, $H = H(J_\beta)$. Consequently, the equations of motion are $$\dot{\psi}^\alpha = \frac{\partial H}{\partial J_\alpha} \equiv \Omega^\alpha(J_\beta)$$ $$\dot{J}_\beta = -\frac{\partial H}{\partial \psi^\beta} = 0$$ In other words, we can easily write the explicit (and general) solution of the equations of motion as $J_\beta(t) = J_{0\beta}, \,\psi^\alpha(t) = \psi^\alpha_0 + \Omega^\alpha(J_{0\beta})t$. Of course, the issue is that it is usually quite difficult to find the transformation $J_\alpha(p_i,q^i), \psi^\alpha(p_i,q^i)$ from the generic canonical coordinates $p_i,q^i$ you start with. In fact, it is typically about as hard as finding the explicit solution to the equations of motion in the original system of coordinates. So why make the transformation? The answer is that it becomes extremely useful once the original system is perturbed such that the new Hamiltonian is $H' = H(J_\beta) + \epsilon K(J_\beta,\psi^\alpha)$ with $\epsilon \ll 1$. 
There is then a straightforward algorithm to transform to new, slightly deformed coordinates $\psi^{\alpha'}, J'_\beta$ such that $H' = H(J'_\beta) + \epsilon \bar{K}(J'_\beta)$, where the new addition to the Hamiltonian is obtained by averaging over the angle coordinates and substituting the new actions: $$\bar{K}(J'_\beta) = \langle K(J_\beta,\psi^\alpha)\rangle_{\psi}\Big|_{J_\beta \to J'_\beta}$$ In other words, action-angle coordinates provide you not only with the exact solution but also with a way to describe the exact solution of any nearby system (up to special points called resonances...). For more details see e.g. Arnol'd, Kozlov & Neishtadt (English ed. 2006) - Mathematical Aspects of Classical and Celestial Mechanics. The Delaunay variables are action-angle variables for the Newtonian two-body problem. Even more, they are "the" action-angle variables, since any other set of action-angle variables is related to them by a discrete rotation. The three Delaunay action variables are usually denoted $L,G,H$, and the respective angles $l = M, g=\omega, h = \Omega$. $G$ and $H$ have the meaning of total and azimuthal angular momentum respectively, and $L$ has the meaning of a "total orbital action" including not only the angular momentum but also the "oomph" (also known as action) of the radial motion. The angles $\omega, \Omega$ are the argument of periapsis and the longitude of the ascending node, and in the unperturbed Keplerian problem they are degenerate (they have a zero associated frequency $\Omega^\alpha$). The final angle $M$ is just the mean anomaly, containing the only real dynamics in the Kepler problem. The nice thing about Delaunay variables is that one just needs to solve the Kepler equation to make the transform to action-angle coordinates. However, when passing to perturbation theory, I recommend making a simple discrete transform to a modified set of coordinates, because Delaunay variables are degenerate near circular and/or equatorial orbits. 
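The only nontrivial step in that transform is solving the Kepler equation $M = E - e\sin E$ for the eccentric anomaly $E$. As an illustration (a minimal sketch, not taken from the answer), Newton iteration converges quickly for moderate eccentricities:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton iteration, starting from the guess E = M."""
    E = M
    for _ in range(100):
        # Newton step for f(E) = E - e*sin(E) - M
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)
# Residual of Kepler's equation should be ~0:
print(E - 0.3 * math.sin(E) - 1.0)
```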
For the construction and properties of Delaunay variables, see the excellent textbook of Morbidelli (2002) - Modern Celestial Mechanics (this is also a good resource for perturbation theory mentioned earlier).
{ "domain": "physics.stackexchange", "id": 64355, "tags": "coordinate-systems, hamiltonian-formalism, integrable-systems" }
Calling jQuery plugin methods
Question: I have two code snippets and the following questions: Which uses the best practice and why? Which one is better for performance? Code 1 jQuery( function( $ ) { // Responsive video var $area = $( "#sidebar" ); $area.fitVids(); // Image gallery var $slider = $( ".owl-carousel" ); $slider.owlCarousel(); }); Code 2 // Global jQuery variable var $ = jQuery; /** * Responsive video */ var fitvidsInit = function() { var $area = $( "#sidebar" ); $area.fitVids(); }; /** * Slides */ var sliderInit = function() { var $slider = $( ".owl-carousel" ); $slider.owlCarousel(); }; /** * Execute code */ $( function() { fitvidsInit(); sliderInit(); } ) I also have to define the variable $ because this is in WordPress. Answer: Any performance difference would be too small to measure. Code 2 pollutes the global namespace with three variables: $, fitvidsInit, and sliderInit. Therefore, Code 1 is better. I suggest eliminating $area and $slider as well: jQuery( function( $ ) { // Responsive video $( "#sidebar" ).fitVids(); // Image gallery $( ".owl-carousel" ).owlCarousel(); });
{ "domain": "codereview.stackexchange", "id": 16628, "tags": "javascript, performance, jquery, comparative-review" }
Problems that are hard w.r.t UGC-hardness of VERTEX COVER
Question: Are there any problems $P$ such that an $\alpha$-approximation for $P$ (for some choice of $\alpha$) would imply a better than 2-approximation for VERTEX COVER, which would be hard via the UGC? Answer: Node-weighted multiway cut is a problem to which vertex cover can be reduced in an approximation-preserving way. Thus, a better than $2$-approximation for node-weighted multiway cut is not possible assuming the UGC. The reduction was shown by Garg, Vazirani and Yannakakis in their paper, which also established a $2$-approximation for node-weighted multiway cut.
{ "domain": "cstheory.stackexchange", "id": 1173, "tags": "cc.complexity-theory, unique-games-conjecture" }
Why does the temperature change during free expansion for a real gas but not for an ideal gas?
Question: While reading about free expansion, I came across this: Real gases experience a temperature change during free expansion. For an ideal gas, the temperature doesn't change, [...] I know that the temperature doesn't change during the free expansion of an ideal gas, since its internal energy is a function of temperature only and, since $\Delta U= 0\;,$ the temperature remains the same. But what about the latter statement? Why does the temperature change for a real gas? Does the internal energy of a real gas not depend on temperature? Can anyone please explain this fact? Answer: The internal energy of a real gas depends on both temperature and pressure. So, if U remains constant and the pressure changes, the temperature must change. In the ideal gas limit of very low pressures, this pressure dependence weakens and approaches zero.
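To make this concrete, here is a rough numerical sketch for a van der Waals gas, where $(\partial U/\partial V)_T = an^2/V^2$, so at constant $U$ we get $C_v\,dT = -(an^2/V^2)\,dV$. The constants (an approximate vdW $a$ for nitrogen, an ideal diatomic constant $C_v$) are illustrative assumptions, not from the answer:

```python
# Temperature change in a Joule (free) expansion of a van der Waals gas.
# Integrating Cv*dT = -(a*n^2/V^2) dV at constant U gives:
#   dT = (a*n^2/Cv) * (1/V2 - 1/V1)
R = 8.314            # J/(mol K), gas constant
a = 0.137            # Pa m^6 / mol^2, approximate vdW 'a' for nitrogen
n = 1.0              # mol
Cv = 2.5 * R         # J/K, ideal diatomic heat capacity (approximation)
V1, V2 = 1e-3, 2e-3  # m^3: expand from 1 L to 2 L

dT = (a * n**2 / Cv) * (1.0 / V2 - 1.0 / V1)
print(f"Temperature change: {dT:.2f} K")  # negative: the gas cools
```

A few kelvin of cooling for doubling the volume from 1 L; the effect vanishes as $V \to \infty$ (the low-pressure, ideal-gas limit).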
{ "domain": "physics.stackexchange", "id": 27955, "tags": "thermodynamics" }
Simple Bash Music Player - follow-up
Question: This question is a follow-up to this question. This is a year later, but it has the same context: I wanted new reviews for the updated code from the original question, but this time I wrote the code from scratch. I did this, not because the project is of any value or interest to me any more, as much as I wanted to compare my programming knowledge/style now to mine a year before. So, you will probably notice different algorithms and a different style of programming. I would appreciate it if you could review the new code, and possibly refer to previous bad habits I might still be having. Also, I would be extra thankful if you would state any progress notes between the two versions of the code. I've implemented new features, fixed bugs, and incorporated command line options. The program uses a bash utility library I've created for common tasks, called bash_lib: #!/bin/bash declare -rA _EXIT_CODES=( ['EX_OK']=0 # successful termination ['EX__BASE']=64 # base value for error messages ['EX_USAGE']=64 # command line usage error ['EX_DATAERR']=65 # data format error ['EX_NOINPUT']=66 # cannot open input ['EX_NOUSER']=67 # addressee unknown ['EX_NOHOST']=68 # host name unknown ['EX_UNAVAILABLE']=69 # service unavailable ['EX_SOFTWARE']=70 # internal software error ['EX_OSERR']=71 # system error (e.g., can't fork) ['EX_OSFILE']=72 # critical OS file missing ['EX_CANTCREAT']=73 # can't create (user) output file ['EX_IOERR']=74 # input/output error ['EX_TEMPFAIL']=75 # temp failure; user is invited to retry ['EX_PROTOCOL']=76 # remote error in protocol ['EX_NOPERM']=77 # permission denied ['EX_CONFIG']=78 # configuration error ['EX__MAX']=78 # maximum listed value ) # Displays a menu-based list of choices to screen # and echoes the associated value of the choice # @ENVIROMENT_VAR $_MENU_CHOICES the associative array for choice (key) / returned_value (value) menu() { select choice in "${!_MENU_CHOICES[@]}" ; do [ -n "$choice" ] || continue echo 
"${_MENU_CHOICES[$choice]}" return done } # Outputs error message and exits with an error code # @param $1 the error message, echo -e # @param $2 the error code. If $2 is empty, no exit happens. error() { echo -e "$1" >&2 log "error: $1" [ -n "$2" ] && _exit $2 } # Returns host name of given site. ex: http://google.com/whatever -> google.com # @param $1 the url url_get_host() { basename "$( dirname "$1" )" } # Returns the doceded url of given site. ex: http%3A%2F%2Fwww -> http://www # @param $1 the url # @param $2 the number of times to decode it. default: 2 url_decode() { local res="$1" local num="$2" if [ "$num" = auto ] ; then while egrep '%[0-9]+' -q <<< "$res" ; do res="$( sed 's/%\([0-9A-F][0-9A-F]\)/\\\x\1/g' <<< "$res" | xargs -0 echo -e)" done elif [ -z "$num" ] || ! is_num "$num" ; then num=2 fi if ! [ "$num" = auto ] ; then for ((i=0; i < $num; ++i)) ; do res="$( sed 's/%\([0-9A-F][0-9A-F]\)/\\\x\1/g' <<< "$res" | xargs -0 echo -e)" done fi echo "$res" } # Returns wether a text is a number # @param $1 the text is_num() { egrep '^[0-9]+$' -q <<< "$1" } # Returns wether a text is a confirmation # @param $1 the text. Or -p text: read -p $2 and check $REPLY instead # @param $2 used only if $1 is -p: prompt text is_yes() { REPLY="$1" [ "$1" = "-p" ] && read -p "$2" REPLY egrep '^([yY]|[Yy]es)$' -q <<< "$REPLY" } # Returns wether a program exists in $PATH or as a function is_prog_there() { [ -x "$( which "$1" )" ] || [ -n type "$1" ] } # Looks for program and if not found, exists with error an code # @param $1 the program name require() { for arg in "$@" ; do is_prog_there "$arg" || error "Required program not found: $arg" EX_UNAVAILABLE done } # Logs a string to a log file # @param $1 the string # @param $2 the log file path. default is ${_LOGFILE} # @ENVIRONMENT_VAR ${_LOGFILE} default location of logfile if $2 is empty log() { echo "[$( date )]: $1" >> "${_LOGFILE:-$2}" } # Exits after logging # @param $1 the exit code. 
default is 0 # @ENVIRONMENT_VAR ${_FORCED_EXIT} if set to true, does not check error code. _exit() { local code="$1" if [ -z "$code" ] ; then code=0 elif ! is_num "$code" ; then [ -n "${_EXIT_CODES["$1"]}" ] && code=${_EXIT_CODES["$1"]} || code=0 fi if ( ! [ "${_FORCED_EXIT}" = true ] ) && ([[ $code -eq 1 ]] || [[ $code -eq 2 ]] || ([[ $code -ge 127 ]] && [[ $code -le 165 ]]) || [ $code = 255 ]); then error "Wrong exit code reported: $code. Exit codes must NOT be \ 1,2,127-165,255, these are system reserved.\ Use the _EXIT_CODES associative array instead." ${_EXIT_CODES['EX_SOFTWARE']} fi log "Exiting with error code $code" exit "$code" } Here is the program source code (sorry if it's longer than you'd expect): #!/bin/bash declare -r SettingsFile="$HOME/.config/PlayMusic/settings.conf" declare -r _LOGFILE="$HOME/.config/PlayMusic/logfile.log" declare -r Version=0.3.2 source bash_lib log "Program started: $1" declare -A Settings=( ['Dropbox']=false # Wether to copy downloaded tracks to the dropbox folder ['TimesToPlay']=1 # Number of times to play a track ['Download']=false # Wether to download the track ['Play']=true # Wether to play the track ['All']=false # Wether to act on all tracks found, or just ask to specify track(s) ['Player']='mplayer' # Command to invoke a CLI sound player ['Downloader']='wget -c -O' # Command to invoke a CLI downloader ['Editor']="${EDITOR:-nano}" # Command to invoke a CLI editor ['MusicPlace']="$HOME/Music" # Directory to download music ) declare Sites=( 'http://ccmixter.org/view/media/samples/mixed' 'http://ccmixter.org/view/media/remix' 'http://mp3.com/top-downloads/genre/jazz/' 'http://mp3.com/top-downloads/' 'http://mp3.com/top-downloads/genre/rock/' 'http://mp3.com/top-downloads/genre/hip hop/' 'http://mp3.com/top-downloads/genre/emo/' 'http://mp3.com/top-downloads/genre/pop/' ) # Array of sites to search for tracks at declare Extensions=( 'mp3' 'ogg' ) # Array of extensions to look for # Key: Track title. Value: Track url. 
declare -A Tracks # Displays settings in a user-friendly manner on the screen info() { cat << __EOF_ # General Settings # Settings File: $SettingsFile Log File: ${_LOGFILE} Version: $Version # Behaviour Settings # Dropbox Support: ${Settings['Dropbox']} Music Place: ${Settings['MusicPlace']} Player: ${Settings['Player']} Editor: ${Settings['Editor']} Player: ${Settings['Player']} # Music Action Settings # Times to Play: ${Settings['TimesToPlay']} Act on all tracks: ${Settings['All']} Action: $( get_music_action ) __EOF_ } # Saves $Settings and $Sites to $SettingsFile in a valid format save_settings() { log "Creating settings file" mkdir -p "$( dirname "$SettingsFile" )" echo "Created on $( date )" > "$SettingsFile" || _FORCED_EXIT=true error "Fatal Error!" $? for option in "${!Settings[@]}" ; do echo "$option = ${Settings["$option"]}" >> "$SettingsFile" done echo "Sites" >> "$SettingsFile" for site in "${Sites[@]}" ; do echo "$site" >> "$SettingsFile" done log "Created settings file successfully" } # Extracts program name from a command # @param $1 the cmd. get_prog_name() { cut -d ' ' -f 1 <<< "$1" } # Load $Settings and $Sites from the $SettingsFile load_settings() { log "Parsing settings file" local line_number=0 line local key value local correct local is_site=false site # Check Settings file if ! ([ -r "$SettingsFile" ] || [ -f "$SettingsFile" ]) ; then save_settings return fi while read -r line; do ((line_number++)) if [ -z "$line" ] || [ $line_number = 1 ] ; then continue fi correct=false # Check line format if $is_site ; then Sites["${#Sites[@]}"]="$line" continue else if [ "$line" = "Sites" ] ; then is_site=true Sites=() continue elif ! 
egrep "^[a-zA-Z]+\s*=.+$" -q <<< "$line" ; then error "$SettingsFile:$line_number: Incorrect format of line" EX_CONFIG fi fi # Extract Key,Value pair key="$( egrep -o '^[a-zA-Z]+' <<< "$line" )" value="$( sed -E "s/^$key\s*=\s*(.+)$/\1/" <<< "$line" )" # Check if value is valid case "$key" in 'Dropbox'|'Download'|'All'|'Play' ) if egrep 'false|true' -q <<< "$value" ; then correct=true else error "Expected true or false" fi;; 'TimesToPlay' ) if is_num "$value" ; then correct=true else error "Expected a number" fi;; 'Player'|'Editor'|'Downloader' ) if is_prog_there "$( get_prog_name "$value" )" ; then correct=true else error "Expected an executable program" fi;; 'MusicPlace' ) if [ -d "$value" ] && [ -w "$value" ] ; then correct=true; else error "'$value' is not a writable directory" fi;; * ) error "$SettingsFile:$line_number: Invalid option: $key\nValid Options are:\n" \ "\tDropbox, Download, All, Play, TimesToPlay, Player, Editor" EX_CONFIG;; esac if ! $correct ; then error "$SettingsFile:$line_number: Invalid value: '$value' to option: '$key'" EX_CONFIG fi Settings["$key"]="$value" done < "$SettingsFile" log "Parsed settings file successfully" } # Displays program usage in a user-friendly manner to screen usage() { cat << _USAGE_ Usage: ./PlayMusic -v|--version Output version then exit. -h|--help View this help then exit. -x|--dropbox Allow copying downloaded files to $HOME/Dropbox. -t|--playtimes [num] Times to play a track. -d|--download To download a track without asking. -D|--no-download To not download a track without asking. -p|--play To play a track without asking. -P|--no-play To not play a track without asking. -k|--ask To force ask what to do with a track. -a|--all To act on all tracks found. -y|--player [cmd] The command to run the music player. -e|--edtor [cmd] The command to run the editor. -l|--downloader The command to run the downloader. -m|--music-place [dir] To specify a music directory other than the one found at the settings file. 
-r|--recreate-settings To recreate the settings file to default then exit. -E|--edit-settings To edit the settings file then exit. -s|--save To save the given settings (runs after analyzing all options). -i|--info To display given settings so far then exit. _USAGE_ } # Outputs the function name of what to do with a track get_music_action() { if ${Settings['Download']} && ! ${Settings['Play']} ; then echo "download" elif ! ${Settings['Download']} && ${Settings['Play']} ; then echo "play" elif ${Settings['Download']} && ${Settings['Play']} ; then echo "download_then_play" # else # echo "ask" fi } # Parses program arguments # @param $1 program arguments parse_args() { log "Parsing Arguments" local save=false local args=`getopt -o vhxt:dpDPy:e:amX::islk --long version,help,dropbox,playtimes:,download,play,no-download,no-play,player:,editor:,all,music-place:,extensions::,info,save,downloader,ask -n 'PlayMusic' -- "$@"` || error "Internal Error!" EX_SOFTWARE eval set -- "$args" while true ; do case "$1" in -v|--version ) echo "$Version"; _exit;; -h|--help ) usage; _exit;; -i|--info ) info; _exit;; -x|--dropbox ) Settings['Dropbox']=true;; -t|--playtimes ) if is_num "$2" ; then Settings['TimesToPlay']="$2" else if [ -n "$2" ] ; then error "'$2' is not a number" EX_CONFIG else error "Please provide a number for the '$1' option" EX_CONFIG fi fi; shift;; -d|--download ) Settings['Download']=true;; -p|--play ) Settings['Play']=true;; -D|--no-download ) Settings['Download']=false;; -P|--no-play ) Settings['Play']=false;; -y|--player ) require "$( get_prog_name "$2" )" 2; Settings['Player']="$2"; shift;; -e|--editor ) require "$( get_prog_name "$2" )" 2; Settings['Editor']="$2"; shift;; -a|--all ) Settings['All']=true;; -A|--selective ) Settings['All']=false;; -m|--music-place ) if ! 
([ -d "$2" ] && [ -w "$2" ]) ; then error "'$2' is not a writable directory to store music in" EX_CONFIG fi;; -X|--extensions ) if [ -n "$2" ] ; then Extensions=(${2//,/ }) else echo "Extensions: ${Extensions[@]// /,}" fi;; -s|--save ) save=true;; -l|--downloader ) require "$( get_prog_name "$2" )" 2; Settings['Downloader']="$2"; shift;; -k|--ask ) Settings['Play']=false; Settings['Download']=false;; -- ) break;; * ) error "Unknown argument: $1" EX_CONFIG;; esac shift done if ${Settings['All']} ; then local act=$( get_music_action ) [ -z act ] && act="ask what to do with" is_yes -p "Are you sure you want to $act all tracks [y/n] ? " || Settings['All']=false fi if $save ; then log "Saving settings" save_settings exit fi log "Parsed Arguments successfully" } # Fills $Tracks by looking through $Sites for tracks ending in $Extensions. This is the core backend functionality. find_tracks() { log "Looking for tracks" local exts="${Extensions[@]// /|}" local num for site in "${Sites[@]}" ; do log "Checking site: '$site'" num=0 [ "$1" = '-v' ] && echo "Parsing $site" for track in $( curl -Ls "$site" | egrep -o "\bhttp://.*\.("$exts")\b" ) ; do name="$( url_decode "$( basename "$track" )" auto )" Tracks["${name//+/ }"]="$track" ((num++)) done log "Found $num track(s) at the site" done [ ${#Tracks[@]} = 0 ] && error "Couldn't find any tracks!" 0 } # Edits $SettingsFile by ${Settings['Editor']} edit_settings() { log 'Edit settings requested' ${Settings['Editor']} "$SettingsFile" } # Handles music action # @param $1 action # @param $2 track url # @param $3 track name (with extension) handle_action() { ([ -n "$1" ] && [ -n "$2" ] && [ -n "$3" ]) || return [ "$1" = 'download' ] || local name="$( sed -E 's/\.(\w+)$//' <<< "$3" )" $1 "$2" "$([ "$1" = 'play' ] && echo "$name" || echo "$3")" "$([ "$1" = 'download_then_play' ] && echo "$name")" } # Downloads a track to ${Settings['MusicPlace']} by ${Settings['Downloader']} # If $3 is -v, Outputs the track location on disk. 
# @param $1 the track url # @param $2 the track name (with extension) # @param $3 -v. optional. download() { local download_to="${Settings['MusicPlace']}/$2" log "Action: Download [$1] to [$download_to]" wget -c -O "$download_to" "$1" log "wget returned $?" if $Settings['Dropbox'] ; then cp "$download_to" "$HOME/Dropbox/Music" log "Copying file to dropbox" fi [ "$3" = '-v' ] && echo "$download_to" } # Plays a track by ${Settings['Player']} # @param $1 the track location # @param $2 the track name (without extension). optional. play() { log "Action: Play [$1]" [ -n "$2" ] && notify-send "PlayMusic: Playing $2" ${Settings['Player']} "$1" } # Downloads then plays a track # @param $1 the track url # @param $2 the track name (with extension). # @param $3 the track name (without extension). optional. download_then_play() { log "Action: Download then Play [$1]" play $( download "$1" "$2" -v ) "$3" } # Asks the user what to do with a track # @param $1 the other option, outputs nothing when chosen. ask() { local -A _MENU_CHOICES=(['Download']='download' ['Play']='play' ['Download then Play']='download_then_play' ["$1"]='') menu } # Main entry point main() { local site_number=0 local com="$( get_music_action )" echo "Looking for Tracks in ${#Sites[@]} Site$([[ ${#Sites[@]} -gt 1 ]] && echo "s") .." find_tracks -v if ${Settings['All']} ; then [ -n "$com" ] || com="$( ask 'Exit' )" [ -n "$com" ] || _exit for track in "${Tracks[@]}" ; do handle_action "$com" "${Tracks["$track"]}" "$track" done else echo "Choose Track .." select track in "${!Tracks[@]}" 'Quit' ; do [ -n "$track" ] || continue [ "$track" = 'Quit' ] && _exit [ -n "$com" ] || com="$( ask 'Return back' )" [ -n "$com" ] || continue handle_action "$com" "${Tracks["$track"]}" "$track" done fi } # Handles signales. Should not be used directly. forced_exit() { _FORCED_EXIT=true error "Signal [$1] forced an exit." 
"$((128+$1))" } # Initializes the environment for processing init_env() { trap 'forced_exit 2' SIGINT trap 'forced_exit 3' SIGQUIT trap 'forced_exit 4' SIGABRT trap 'forced_exit 15' SIGTERM for i in "$@" ; do if [ "$i" = "-r" ] || [ "$i" = '--recreate-settings' ] ; then save_settings _exit fi if [ "$i" = "-E" ] || [ "$i" = '--edit-settings' ] ; then edit_settings _exit fi done mkdir -p "${Settings['MusicPlace']}" load_settings parse_args "$@" if [ ${#Sites[@]} = 0 ] ; then error "No Sites were found!" 0 fi } ARGS="$@" init_env "$ARGS" require "$( get_prog_name "${Settings['Player']}" )" "$( get_prog_name "${Settings['Editor']}" )" "$( get_prog_name "${Settings['Downloader']}" )" main "$ARGS" I would like to get reviews for anything and everything: The documentation style, the coding style, the choice of language, variable naming, algorithms, etc. Answer: This is just a review on bash_lib... # Returns the doceded url of given site. ex: http%3A%2F%2Fwww -> http://www # @param $1 the url # @param $2 the number of times to decode it. default: 2 I'm curious as to why you need to specify the number of times (attempts?) to decode such values... Also, do you have access to perl? Because if you do then it's easier and shorter to do via a simple Perl script. ;) To check for yes (or equivalent) inputs, I think a better approach is to normalize the casing first, then apply a case-insensitive egrep: $ for i in y Y Yes YES no; do egrep -i '^y(es|)$' -q <<< $i \ && echo $i - OK; done y - OK Y - OK Yes - OK YES - OK Compare that with your current method: $ for i in y Y Yes YES no; do egrep '^([yY]|[Yy]es)$' -q <<< $i && echo $i - OK; done y - OK Y - OK Yes - OK For your log() function, you can use date to format the output directly: date +"[%c]: $1" # this echo "[$( date )]: $1" # instead of this ([[ $code -eq 1 ]] || [[ $code -eq 2 ]] || ([[ $code -ge 127 ]] && [[ $code -le 165 ]]) || [ $code = 255 ]) That can be simplified too, by putting them all within one [[ ... 
]]: $ for code in 1 2 120 127 160 165 240 255; do if ([[ $code -eq 1 ]] || [[ $code -eq 2 ]] || ([[ $code -ge 127 ]] && [[ $code -le 165 ]]) || [ $code = 255 ]); then echo $code - Y1; fi; [[ $code -eq 1 || $code -eq 2 || ($code -ge 127 && $code -le 165) || $code -eq 255 ]] \ && echo $code - Y2; done 1 - Y1 1 - Y2 2 - Y1 2 - Y2 127 - Y1 127 - Y2 160 - Y1 160 - Y2 165 - Y1 165 - Y2 255 - Y1 255 - Y2 A step further is to combine it with ! [ "${_FORCED_EXIT}" = true ], I'll leave that as an exercise for the reader. Finally, a small nitpick: wether is spelled wrongly... it should be whether. edit Further review on url_decode() and the actual 'program'... The other weird things about url_decode() are that you are repeating your decoding across two different loops, and the careful interpreting and handling of $num when $2 = auto (which isn't 'documented'). Also, since you mentioned that you prefer a more bash-like solution, perhaps you can consider the following too, so that you don't depend on grep and sed at all: url_decode() { local s="$1"; local n=${2:-0}; while [[ $((n--)) -gt 0 || (-z $2 && $s =~ %[[:xdigit:]]{2}) ]]; do s=$(echo -e ${s//%/\\\x}); done; echo $s } Rather than defaulting to 2, you might as well incrementally apply the decoding until you don't see %XX, where X is a hexadecimal character. This is (better) represented using the regex character class [[:xdigit:]]. =~ replaces the egrep and ${s//%/\\\x} replaces the sed. The loop condition simply says: countdown $n until it reaches 0, i.e. [$n - 1, 0], or $2 is not specified and we still have 'leftover' values to decode. If you still prefer to stick to a safe default such as 2, replace n=${2:-0} with n=${2:-2}, and then you can drop the || condition. 
A quick test: $ url_decode 'You%2BWon%2527t%2BBe%252BMissed' You+Won't+Be+Missed $ url_decode 'You%2BWon%2527t%2BBe%252BMissed' 1 You+Won%27t+Be%2BMissed $ url_decode 'You%2BWon%2527t%2BBe%252BMissed' 2 You+Won't+Be+Missed As for your actual 'program', I don't see many major problems with it...
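For comparison, the same "decode repeatedly until no %XX escape remains" idea can be sketched in Python (a hypothetical helper, not part of bash_lib; it uses a proper hex-digit class rather than the [0-9]+ pattern of the original auto loop):

```python
import re

def url_decode(s, times=None):
    """Decode %XX percent-escapes in s. If times is None, repeat
    until no escapes remain (the 'auto' mode of the bash version);
    otherwise apply exactly `times` decoding passes."""
    def once(t):
        # Replace each %XX with the character of hex code XX.
        return re.sub(r'%([0-9A-Fa-f]{2})',
                      lambda m: chr(int(m.group(1), 16)), t)
    if times is None:
        while re.search(r'%[0-9A-Fa-f]{2}', s):
            s = once(s)
        return s
    for _ in range(times):
        s = once(s)
    return s

print(url_decode('You%2BWon%2527t%2BBe%252BMissed'))     # You+Won't+Be+Missed
print(url_decode('You%2BWon%2527t%2BBe%252BMissed', 1))  # You+Won%27t+Be%2BMissed
```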
{ "domain": "codereview.stackexchange", "id": 14886, "tags": "bash, music" }
Optimize this code of adapter's overridden method getItemCount()
Question: How can I optimize this code of my adapter's overridden getItemCount() method into a simpler form? @Override public int getItemCount() { return super.getItemCount() == 0 && data != null ? 1 : super.getItemCount(); } Answer: Code beautification is subjective; this is how I would write your code. There is also a possible performance optimization in only calling super.getItemCount() once. Be consistent with parentheses (I'm using a different rule for methods and conditions, but feel free to use your own preference as long as you are consistent); create one variable to store the item count; get the item count you would like to return from the base class; then determine the predicate from the perspective of either (A) the edge-case value of the item count, when itemCount == 0, with an inner condition used to determine the new value, or (B) the combined condition that yields a different result, itemCount == 0 && data != null. snippet A @Override public int getItemCount() { int itemCount = super.getItemCount(); if (itemCount == 0) { itemCount = data != null ? 1 : 0; } return itemCount; } snippet B @Override public int getItemCount() { int itemCount = super.getItemCount(); if (itemCount == 0 && data != null) { itemCount = 1; } return itemCount; }
{ "domain": "codereview.stackexchange", "id": 34821, "tags": "java, performance, android" }
Different methods to determine DOF: Chebychev-Kutzbach-Grubler method vs. Screw method
Question: I'm familiar with the Chebychev-Kutzbach-Grubler method to determine the degrees of freedom of a robot arm. But it seems this method fails to calculate the mobility of some parallel robots, as explained here. However, I cannot understand screw theory well and I do not know how to apply it to determine DOF. So I want to know: what is the idea behind the screw method to determine DOF? And could anyone explain with a simple example how the screw method works? EDIT: Could you please explain how we can determine the total DOF of, for example, a SCARA arm via the screw method? Answer: I will try not to skip too many steps. Assume a global coordinate frame at the base and the arm fully extended along the Y-axis of the base frame. Since SCARA has four joints, we will create four 6D spatial vectors (screws) ${ξ}_{i}$ with respect to a global coordinate frame. Keep in mind that all spatial vectors are described with respect to the global frame. Also, ${ξ}_{i}$ will describe the axis around which rotation takes place and along which translation takes place. Basically, we assume a screw motion for every joint along its Z-axis. The first three elements of vector ${ξ}_{i}$ will be the linear velocity on the screw axis; let's call it ${v}_{i}$. The last three elements are the rotation of the joint with respect to the global coordinate frame, regardless of whether there are other joints preceding it. Let's call it ${ω}_{i}$; it will always be a unit vector. $${ξ}_{i}=\left[{\begin{matrix}{v}_{i}\\{ω}_{i} \end{matrix}}\right]$$ For the first joint the axis of rotation coincides with the global Z-axis. Therefore ${ω}_{1}$ is $${ω}_{1}=\left[{\begin{matrix}0\\0\\1 \end{matrix}}\right]$$ If you are not sure how I came up with it, consider the following: if the z-axis of the joint were along the y-axis of the global frame, then ${ω}_{1}$ would be $${ω}_{y}=\left[{\begin{matrix}0\\1\\0 \end{matrix}}\right]$$. 
If it were along the x-axis, then it would be $${ω}_{x}=\left[{\begin{matrix}1\\0\\0 \end{matrix}}\right]$$. And if it were along the z-axis but pointing downwards, then: $${ω}_{-z}=\left[{\begin{matrix}0\\0\\-1 \end{matrix}}\right]$$. Now let's focus on ${v}_{1}$. We need a point ${q}_{1}$ that will give us the distance of the joint from the origin. The distance is L1 along the z-axis, therefore: $${q}_{1}=\left[{\begin{matrix}0\\0\\L1 \end{matrix}}\right]$$ We rotate ${q}_{1}$ with the unit angular velocity ${ω}_{1}$ (cross product) and we have $${v}_{1}=-{ω}_{1} \times {q}_{1} = - \left[{\begin{matrix}0\\0\\1 \end{matrix}}\right] \times \left[{\begin{matrix}0\\0\\L1 \end{matrix}}\right] = \left[{\begin{matrix}0\\0\\0 \end{matrix}}\right] $$ Therefore $${ξ}_{1}=\left[{\begin{matrix}{v}_{1}\\{ω}_{1} \end{matrix}}\right]=\left[{\begin{matrix}0\\0\\0\\0\\0\\1 \end{matrix}}\right]$$. Similarly we have $${ξ}_{2}=\left[{\begin{matrix}{v}_{2}\\{ω}_{2} \end{matrix}}\right]=\left[{\begin{matrix}L2\\0\\0\\0\\0\\1 \end{matrix}}\right]$$ For the third joint, ${ω}_{3}$ is 0 because it is prismatic, and ${v}_{3}$ is the unit vector $${v}_{3}=\left[{\begin{matrix}0\\0\\1 \end{matrix}}\right]$$ making ${ξ}_{3}$ $${ξ}_{3}=\left[{\begin{matrix}{v}_{3}\\0 \end{matrix}}\right]=\left[{\begin{matrix}0\\0\\1\\0\\0\\0 \end{matrix}}\right]$$. For joint 4 $${v}_{4}=-\left[{\begin{matrix} 0\\0\\1\end{matrix}}\right] \times \left[{\begin{matrix}0\\L2+L3\\L1-d1\end{matrix}}\right] = \left[{\begin{matrix}L2+L3\\0\\0\end{matrix}}\right]$$ $${ξ}_{4}=\left[{\begin{matrix}L2+L3\\0\\0\\0\\0\\1\end{matrix}}\right]$$ Now the spatial vectors ${ξ}_{i}$ describe the axes of rotation of all the joints. Keep in mind that some authors have $v$ and $ω$ switch places, for example the paper you are mentioning. This is fine as long as you are consistent throughout. 
This example is worked out in "A mathematical introduction to robotic manipulation", which I linked in my previous answer, on page 88. These are the basics of the screw theory description. Similarly, we can create a 6D wrench vector $φ$ with the linear forces and the angular forces (torques). As you can tell by now, screw theory is a method to describe the manipulator (like the DH parameters) and not a mobility formula like the Chebychev-Kutzbach-Grubler method. The paper you are referencing stacks the spatial vectors (it calls them twists) ${ξ}_{i}$ and the wrenches ${φ}_{i}$ to create a twist matrix and then analyzes their constraints using the union and the reciprocal of the twists. Those are vector functions that screw theory can take advantage of. I am not sure if my answer helps you, but the paper you are mentioning requires a relatively strong understanding of screw theory and vector algebra to make sense. Please let me know if you need more clarifications. Cheers.
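As a quick numerical cross-check of the construction $v_i = -ω_i \times q_i$ (a plain-Python sketch with made-up link lengths; not from the answer):

```python
def cross(a, b):
    # 3D cross product of two vectors given as lists
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def revolute_twist(omega, q):
    # Twist xi = [v; omega] with v = -omega x q, for a revolute joint
    # whose unit axis omega passes through point q (global frame).
    v = [-c for c in cross(omega, q)]
    return v + list(omega)

def prismatic_twist(v):
    # Prismatic joint: pure translation along unit axis v, zero rotation.
    return list(v) + [0.0, 0.0, 0.0]

# Illustrative SCARA dimensions (L1, L2, L3, d1 as in the answer; values made up)
L1, L2, L3, d1 = 1.0, 0.5, 0.4, 0.2

xi1 = revolute_twist([0, 0, 1], [0, 0, L1])             # [0, 0, 0, 0, 0, 1]
xi2 = revolute_twist([0, 0, 1], [0, L2, L1])            # [L2, 0, 0, 0, 0, 1]
xi3 = prismatic_twist([0, 0, 1])                        # [0, 0, 1, 0, 0, 0]
xi4 = revolute_twist([0, 0, 1], [0, L2 + L3, L1 - d1])  # [L2+L3, 0, 0, 0, 0, 1]
```

Note that the cross product puts $v_4$ along the global x-axis, matching the pattern of $ξ_2$.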
{ "domain": "robotics.stackexchange", "id": 1369, "tags": "mechanism, manipulator, screw-theory" }
Prove that the "6-rule" CFG for arithmetic expressions below is unambiguous
Question: Prove that the 6-rule CFG for arithmetic expressions below is unambiguous. The CFG is as follows. $G = (V:=\{E,T,F\}, \Sigma:=\{+, \times,(,),x\},R,E)$ where $R$ consists of 6 rules: $E\rightarrow E+T ~|~ T~~~~~~ T\rightarrow T\times F|F~~~~~~ F\rightarrow (E) | x$ My thoughts: I think we should start by showing each string $s$ in the language has a unique derivation, by strong induction on the length of $s$, but I'm not sure how to proceed. Could you please help? Many thanks! Answer: Sometimes explicit induction can be avoided by using contradiction. Assume there is a smallest pair of different derivation trees that derive the same string. Since the pair is different, we can also assume that the roots of the trees are the same, but the applied productions differ. So, can we have two derivation trees that start with the productions $F\to (E)$ and $F\to x$ and derive the same string? Obviously not. The same should be checked for the pair $E\to E+T$, $E\to T$ and the pair $T\to T\times F$, $T\to F$.
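While not a proof, you can sanity-check unambiguity by brute force: count leftmost derivations (which correspond one-to-one with parse trees) for every derivable string up to some length. A small sketch, writing `*` for × and pruning on the fact that no production of this grammar shortens a sentential form:

```python
from collections import deque, Counter

# Nonterminals are the dict keys; every other character is a terminal.
RULES = {
    'E': ['E+T', 'T'],
    'T': ['T*F', 'F'],
    'F': ['(E)', 'x'],
}

def count_leftmost_derivations(max_len):
    """Count leftmost derivations of every terminal string up to max_len.

    No production shortens a sentential form, so any form longer than
    max_len can never derive a string of length <= max_len and is pruned.
    """
    counts = Counter()
    queue = deque(['E'])
    while queue:
        form = queue.popleft()
        # position of the leftmost nonterminal, if any
        i = next((k for k, c in enumerate(form) if c in RULES), None)
        if i is None:               # all terminals: one more derivation found
            counts[form] += 1
            continue
        for rhs in RULES[form[i]]:
            new = form[:i] + rhs + form[i + 1:]
            if len(new) <= max_len:
                queue.append(new)
    return counts

counts = count_leftmost_derivations(7)
# Every derivable string of length <= 7 has exactly one leftmost derivation.
assert all(c == 1 for c in counts.values())
```

Finding any count above 1 would disprove unambiguity; finding none is merely evidence, and the contradiction argument above supplies the actual proof.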
{ "domain": "cs.stackexchange", "id": 20269, "tags": "automata, context-free, pushdown-automata, computation-models" }
What do you think of this improvement of Linq's GroupBy method?
Question: Linq already has a set of GroupBy methods for enumerables, which return IEnumerable<IGrouping<TKey, TSource>>. I dislike this return type because it means I have to do a Where query every time I want to find a group. I tried to make something that returns a dictionary instead, as below: public static class GroupByExtension { public static Dictionary<TKey, List<TSource>> GroupToDictionary<TSource, TKey>( this IEnumerable<TSource> source, Func<TSource, TKey> keySelector) { return source.GroupBy(keySelector).ToDictionary (grouping => grouping.Key, grouping => grouping.ToList()); } } I can test it like this List<int> list = new List<int>() { 1, 2, 33, 4, 20, 43, 21, 93, 26, 31, 113 }; Dictionary<int, List<int>> numbersGroup = list.GroupToDictionary(i => i / 10); //group numbers by the result of division by ten. So, I can call numbersGroup[1], numbersGroup[3], etc. as opposed to the clumsier and less efficient Where clauses if I only have IEnumerable<IGrouping<TKey, TSource>> IEnumerable<IGrouping<int, int>> numbersGroup2 = list.GroupBy(i => i / 10); IGrouping<int, int> group = numbersGroup2.Where(grouping => grouping.Key == 3).First(); //and this only returns an IGrouping, not a proper list, //and Where can be expensive for a large list because it is O(n) IIRC //the First() could have also been avoided because we know that the keys are unique What do you think of this code? What drawbacks do you see? And what is the reason Microsoft uses IEnumerable<IGrouping<TKey, TSource>>? Answer: I think the reason it's done this way is to make GroupBy() consistent with the rest of LINQ: everything is as lazy as possible, so that following queries can be efficient. And there already is a method that does almost the same thing as your GroupToDictionary(), it's called ToLookup().
{ "domain": "codereview.stackexchange", "id": 2001, "tags": "c#, .net, linq" }
Retrieving data from cursor associated with a list item inside an anonymous listener method declaration
Question: I've run into a problem twice just recently. Though I'm getting around it, I can't help but feel that it's a rather unconventional method and that there's a better one. This app is an exercise in using different types of persistent storage. I have 1 table to store short notes. In a ListView those notes are displayed along with the date they were created and a button to delete that note. Somehow, inside an anonymous class's method override declaration for View.OnClickListener.onClick, I am left unable to get an item's _id. But I DO have the view, and so I place an invisible text view into the layout, and every time I bind that view I setText to the _id of the row that is associated with that item. When the button is clicked the item will be deleted using the _id from the invisible view. YAY, it works; however, it seems a little weird to be doing it that way. Source Code app/src/main/java/se/frand/app/onetableapp/DateListAdapter.java @Override public void bindView(final View view, final Context context, final Cursor cursor) { ... 
viewHolder.deleteButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { String where = MyContract.NoteEntry.COLUMN_NAME_ID + "=?"; // Retrieve the id of the row to delete from an invisible textview in the item layout // view is from bindView String[] args = new String[] { ""+((TextView)view.findViewById(R.id.note_id)).getText() }; context.getContentResolver().delete( MyContract.NoteEntry.CONTENT_URI, where, args); swapCursor(MainActivity.getLogTimes(context)); } }); viewHolder.noteView.setText(cursor.getString(MainActivity.COL_DATETIME_NOTE)); // setText of invisible textView to the id from the cursor viewHolder.invisibleView.setText(cursor.getString(MainActivity.COL_DATETIME_ID)); } app/src/main/res/layout/list_item_selected_layout.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> ... <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/item_delete_button"/> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/note_id" android:visibility="invisible"/> </LinearLayout> My question is, how would you retrieve the _id? Here's how I did it in the master branch originally, but it didn't work. I caught the bug when I was working on my first refactor. String where = MyContract.DateEntry.COLUMN_NAME_ID + "=?"; String[] args = {""+cursor.getInt(MainActivity.COL_DATETIME_ID)}; db.delete(MyContract.DateEntry.TABLE_NAME, where, args); The problem here is it always gets the first row's id and deletes that row. It appears cursor is starting at the front even though cursor.getString(MainActivity.COL_DATETIME_NOTE) on the other side of the anonymous method is adding the correct data (not all items repeat the first row). 
Below is an additional occurrence of me resolving the same problem in the same way. I had hit this problem just two days prior doing a similar app. In this app users can view a set of nutritional ingredient items for a given meal. Long clicking an ingredient item brings up an AlertDialog with optional replacements. Clicking one should replace the ingredient_id in a reference table with the _id of the ingredient selected. The same problem occurred here. When I selected a dialog choice, the ingredient of the first item in the list would be replaced, not the one that was clicked. Below is the code where I display the AlertDialog and handle the click of a dialog choice. Source Code app/src/main/java/se/frand/app/dietplan/IngredientsActivity.java builder.setTitle(R.string.pick_replace) .setItems(ingredients, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int position) { // Once again here is where I pull the _id from the view String mealitemid = ""+((TextView) view.findViewById(R.id.meal_item_id)).getText(); SQLiteDatabase db = openOrCreateDatabase(MealsDbHelper.DATABASE_NAME,0,null); ContentValues values = new ContentValues(); values.put(MealItemsContract.MealItemEntry.COL_NAME_INGREDIENT_ID, ids[position]); int rows = db.update( MealItemsContract.MealItemEntry.TABLE_NAME, values, MealItemsContract.MealItemEntry.COL_NAME_ID+"=?", new String[] {mealitemid}); cursor = db.rawQuery(query, new String[]{meal_id}); adapter.swapCursor(cursor); db.close(); } }); builder.show(); I have a theory now as to why: before the click notifies the listener, the cursor has been iterated through entirely. Thus, when clicked, the cursor is positioned beyond the final row, which in this behavior appears to act the same as the beginning. So what do you say? How should I be getting these _id values into an inner anonymous listener method declaration? I'd really appreciate any general code review tips, even unrelated to this problem in particular. 
Answer: Retrieve id outside listener I think your problem is that you were trying to use the cursor inside the listener, but when the button was pressed, the cursor had already moved to a different row. What you could do is retrieve the id outside the listener, like this: @Override public void bindView(final View view, final Context context, final Cursor cursor) { ... final int id = cursor.getInt(MainActivity.COL_DATETIME_ID); viewHolder.deleteButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { String where = MyContract.NoteEntry.COLUMN_NAME_ID + "=?"; String[] args = new String[] {Integer.toString(id)}; context.getContentResolver().delete( MyContract.NoteEntry.CONTENT_URI, where, args); swapCursor(MainActivity.getLogTimes(context)); } }); viewHolder.noteView.setText(cursor.getString(MainActivity.COL_DATETIME_NOTE)); }
{ "domain": "codereview.stackexchange", "id": 15906, "tags": "java, android" }
Applying pressure at an angle
Question: When I apply a force to an area, say at an angle $\theta$ with the vertical, is the pressure defined as $$ P =\frac{ \Delta F}{\Delta A_{Perpendicular} } =\frac{ \Delta F}{\Delta A \cos\theta} $$ Or $$ P =\frac{ \Delta F_{Perpendicular}}{\Delta A} =\frac{ \Delta F \cos\theta}{\Delta A}$$ Please also explain why, because both appear to be correct, but obviously give very different answers. Is it because of the way pressure has been defined, or is there some mathematical explanation for it too? I often get confused between these forms. Answer: The definition of pressure $P$ in terms of vector quantities is $d\vec F_{\rm n} = (-)P\, dA\,\hat n$ where $d\vec F_{\rm n}$ is the force whose line of action is along the direction of the normal, $\hat n$, to the surface of area $dA$. In terms of magnitudes, $dF_{\rm n} = dF\, \cos \theta \Rightarrow dF\, \cos \theta = P\,dA \Rightarrow P = \dfrac{dF\cos\theta}{dA} = \dfrac{dF}{\left ( \frac{dA}{\cos \theta}\right)}$
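A quick numeric check with made-up numbers makes the difference concrete; only the second expression matches the definition $dF_{\rm n} = P\,dA$:

```python
import math

# Hypothetical numbers: a 10 N force on a 2 m^2 element, 30 degrees off-normal
dF, dA, theta = 10.0, 2.0, math.radians(30)

# Matches the definition dF_n = P dA: project the force onto the normal.
P = dF * math.cos(theta) / dA

# The other candidate divides by a "projected" area instead:
P_other = dF / (dA * math.cos(theta))

ratio = P_other / P   # the two candidates differ by a factor of 1/cos^2(theta)
```

The two expressions coincide only at θ = 0, and the discrepancy grows as 1/cos²θ as the force tilts away from the normal.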
{ "domain": "physics.stackexchange", "id": 58801, "tags": "pressure" }
In addition to the reward function, which other functions do I need to implement Q-learning?
Question: In general, the $Q$ function is defined as $$Q : S \times A \rightarrow \mathbb{R}$$ and its values are updated by $$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha[r_{t+1} + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t)] $$ $\alpha$ and $\gamma$ are hyper-parameters. $r_{t+1}$ is the reward at the next time step. $Q$ values are initialized arbitrarily. In addition to the reward function, which other functions do I need to implement Q-learning? Answer: In addition to the RF [*], you also need to define an exploratory policy (an example is the $\epsilon$-greedy policy), which allows you to explore the environment and learn the state-action value function $\hat{q}$. Moreover, although you don't need to know the details (i.e. the specific probabilities of transitioning from one state to the other) of the transition model, often denoted by $p$, you need a function that returns the next state $s'$ for each action $a$ that you take in the current state $s$. You may not need to define this function yourself; for example, the next state could be given by some kind of simulator of the environment (in the case of Atari games, the Atari simulator may provide you with the next frame of the game, which you could use to build an approximation of the next state). You can read the Q-learning pseudocode here. [*] The reward function is defined for the problem and, specifically, for the Markov Decision Process (MDP) that models the problem/environment. The RF is not defined only for applying Q-learning to solve the problem (in fact, you could apply other algorithms, like SARSA), but you need the RF to use Q-learning; so, yes, you need to define/have the RF before applying Q-learning. You can think of the RF as the learning signal that is used to guide the agent towards the optimal policy, and that's why it's specific to each environment/problem. Note that, in theory, there could be more than one RF that leads to the optimal policy for an environment (see potential-based reward shaping for more details). 
(This paragraph was addressing what I originally thought was the question: I'm leaving it here because it may be relevant to the readers).
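Putting the pieces together (a reward signal, a next-state function, and an ε-greedy exploratory policy), here is a minimal tabular Q-learning sketch on a made-up 5-state chain; all names and numbers are illustrative, not from any particular library:

```python
import random

random.seed(0)

# A tiny deterministic chain MDP: states 0..4, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # left, right

def step(s, a):
    """The transition ("next state") function plus the reward function."""
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploratory policy
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # the Q-learning update from the question; no bootstrap past a terminal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

# The learned greedy policy should move right in every non-goal state.
assert all(Q[s][1] > Q[s][0] for s in range(GOAL))
```

Nothing here needs the transition probabilities themselves: `step` plays the role of the simulator, and the ε-greedy policy supplies the exploration.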
{ "domain": "ai.stackexchange", "id": 2897, "tags": "reinforcement-learning, q-learning, markov-decision-process, reward-functions, exploration-strategies" }
How should one clean the blades of a turbomolecular pump
Question: I have a turbomolecular pump from Osaka Vacuum that is quite dirty, and I'd like to clean the blades off to improve balance. I'm not quite sure how to approach this. I've tried carbon cleaner for car turbochargers with no success, and my next thought is to walnut-blast the blades, but I'm worried about bending them. What method should I use to clean these unknown deposits off the blades of this pump? Answer: Acetone or other ketones are used to clean auto engine internals of "varnish" (which looks similar to your deposits). This is for research purposes, not general engine cleaning. The material looks organic, so a solvent seems preferable to mechanical methods like nut-shell blasting. Corrosion would not be a factor for the metal if a solvent is used.
{ "domain": "engineering.stackexchange", "id": 3489, "tags": "turbines, turbomachinery" }
Wittig reaction with benzaldehyde
Question: Somehow I am told that my product is incorrect. Why is my product incorrect? As far as I can see this is a Wittig reaction resulting in a two carbon increase with removal of the oxygen. The only thing that strikes me as odd is that the Wittig is conjugated to a bromide salt. The only way this can happen is if the phosphorous part has an overall positive formal charge. Does this mean that the n-butyllithium will abstract a proton and leave the carbon backbone of the Wittig as $\ce{CH3CH}$? Even so this doesn't seem to affect my final product ... Answer: Below is the preparation of the Wittig reagent you are given. Yes, the intermediate is a phosphonium salt, with a formal positive charge on phosphorus. Treatment with base (BuLi) deprotonates at the adjacent carbon giving a phosphorus ylide. As for the Wittig reaction itself, one postulated mechanism (depicted in Organic Chemistry by Maitland Jones) is shown below. Nucleophilic attack of the ylide onto the aldehyde gives an open betaine intermediate, which closes to form an oxaphosphatane. The strained 4-membered ring decomposes readily to give an alkene and triphenylphosphine oxide. The stereochemistry of the Wittig is related to the nature of the ylide used. Unstabilized ylides, such as the one here, predominantly give the Z-configured alkene, while stabilized ylides (usually through conjugation to a carbonyl) give an E-configured alkene as the major product. With substrates similar to the ones here, I obtained a 4:1 ratio of Z:E products, so the selectivity is not perfect, as K_P points out. The stereochemical outcome is intimately related to the mechanism, and according to my copy of Jerry March's Advanced Organic Chemistry (4th ed), there is no evidence for the betaine intermediate. Instead, the oxaphosphatane is formed directly through a [2+2] cycloaddition mechanism. 
This mechanism is attractive because the stereochemical demands of the cycloaddition account for the preference of the (less stable) Z-product. However, this does not explain why different ylides give different selectivities.
{ "domain": "chemistry.stackexchange", "id": 2160, "tags": "organic-chemistry, synthesis" }
Product of a Transition System and a Finite Automaton
Question: I'm dealing with a question that asks me to compute the product of the following transition system and finite automaton. Compute the product between the transition system TS and the finite-word automaton A depicted below. I can't seem to find a good example of how to do this, so I sort of just freestyled my answer. The best I found was in Principles of Model Checking, but the book didn't really explain how the product was derived (it just provided a TS, FA and product), so I had to "guess" the logic. Can anybody have a quick look at my solution and see if there are any obvious mistakes I have made? Also I can't figure out whether or not I should indicate any terminal states. I know that a TS doesn't have any terminal states, but an FA can - so what about the product? Answer: In Section 4.2.2 of the book "Principles of Model Checking", there is a definition (Definition 4.16; Page 165) of the "Product of Transition System and NFA". You are right about the states (i.e., $S \times Q$) of the product but make mistakes about its transition relation. Below I focus on the transition relation. Definition 4.16 Product of Transition System $TS = (S, Act, \to, I, AP, L)$ and NFA $\mathcal{A} = (Q, \Sigma, \delta, Q_0, F)$. The transition relation $\to'$ of their product is the smallest relation defined by the rule $$\frac{s \to^{\alpha} t \; \land \; q \to^{L(t)} p}{(s,q) \to'^{\alpha} (t,p)}$$ Intuitively, the transition system TS generates atomic propositions and feeds them into the automaton $\mathcal{A}$, driving the automaton's run. This semantics can be used to verify whether the TS satisfies some property expressed by an automaton. Additionally, the start states of the product are $$I' = \{ (s_0, q) \mid s_0 \in I \land \exists q_0 \in Q_0. q_0 \to^{L(s_0)} q \}.$$ That is, $q_0$ in the automaton has moved one step forward, driven by the atomic propositions in $s_0$. 
Based on the definition above, I calculate the product as follows (please check it): As for the terminal states, please refer to "Remark 4.17" (immediately following the definition above) in the book. By the way, the labels for transitions (the action denoted by $\alpha$ above) in $TS$ are often ignored (because they are irrelevant).
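Definition 4.16 is mechanical enough to code up directly. Here is a sketch on a made-up two-state TS and two-state NFA (all names are hypothetical); the key point is that the NFA reads the label of the *target* state $L(t)$, and the initial states already consume $L(s_0)$:

```python
# Toy transition system: states, labelled transitions, initial states, labeling
TS = {
    "trans": {("s0", "a", "s1"), ("s1", "b", "s0")},
    "init": {"s0"},
    "label": {"s0": "p", "s1": "q"},   # L(s) as a single proposition for brevity
}

# NFA reading the labels; delta maps (state, label) -> set of successor states
NFA = {
    "states": {"q0", "q1"},
    "delta": {("q0", "p"): {"q0"}, ("q0", "q"): {"q1"},
              ("q1", "p"): {"q1"}, ("q1", "q"): {"q1"}},
    "init": {"q0"},
    "final": {"q1"},
}

def product(ts, nfa):
    """Per Definition 4.16: (s,q) -alpha-> (t,p)  iff  s -alpha-> t and q -L(t)-> p."""
    trans = set()
    for (s, act, t) in ts["trans"]:
        for q in nfa["states"]:
            for p in nfa["delta"].get((q, ts["label"][t]), set()):
                trans.add(((s, q), act, (t, p)))
    # I' = {(s0, q) | s0 in I, q0 in Q0, q0 -L(s0)-> q}
    init = {(s0, q)
            for s0 in ts["init"]
            for q0 in nfa["init"]
            for q in nfa["delta"].get((q0, ts["label"][s0]), set())}
    return trans, init

trans, init = product(TS, NFA)
assert init == {("s0", "q0")}
assert (("s0", "q0"), "a", ("s1", "q1")) in trans
```

Reachable product states whose NFA component is in `final` are the ones of interest for the accepting runs; unreachable pairs in $S \times Q$ can simply be dropped.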
{ "domain": "cs.stackexchange", "id": 4444, "tags": "automata, finite-automata, model-checking" }
Why does $i ( LK-KL )$ represent a real quantity?
Question: My textbook says that $i( LK-KL )$ represents a real quantity when $K$ and $L$ represent real quantities. $K$ and $L$ are matrices. It says that this is because of basic rules. However, I was not able to recall those rules. Can anyone show the proof of this? Answer: The statement is that if $K$ and $L$ are Hermitian operators – which means $$ K = K^\dagger, \quad L = L^\dagger$$ and it implies that the eigenvalues of $K,L$ are real and the eigenvectors with different eigenvalues are orthogonal to each other – then $i(KL-LK)$ (which differs from your $i(LK-KL)$ only by an overall sign, so the argument is identical) is also Hermitian. This is easily proved by computing the Hermitian conjugate of $i(KL-LK)$, because the result is the same as the operator itself: $$ [i(KL-LK)]^\dagger = i^\dagger (KL-LK)^\dagger = (-i) (L^\dagger K^\dagger - K^\dagger L^\dagger) = (-i)(LK-KL) = i(KL-LK).$$ I used $(AB)^\dagger = B^\dagger A^\dagger$ and $i^\dagger = i^* = -i$.
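The identity is also easy to verify numerically. A quick plain-Python sketch with random 3×3 Hermitian matrices (built as $(A + A^\dagger)/2$):

```python
import random

random.seed(1)
N = 3

def dagger(M):
    """Conjugate transpose."""
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def random_hermitian():
    A = [[complex(random.random(), random.random()) for _ in range(N)]
         for _ in range(N)]
    Ad = dagger(A)
    # (A + A^dagger)/2 is Hermitian by construction
    return [[(A[i][j] + Ad[i][j]) / 2 for j in range(N)] for i in range(N)]

K, L = random_hermitian(), random_hermitian()
KL, LK = matmul(K, L), matmul(L, K)
C = [[1j * (KL[i][j] - LK[i][j]) for j in range(N)] for i in range(N)]
Cd = dagger(C)

# i(KL - LK) equals its own conjugate transpose, hence it is Hermitian
assert all(abs(C[i][j] - Cd[i][j]) < 1e-12 for i in range(N) for j in range(N))
```

Being Hermitian, its diagonal entries (and all its eigenvalues) are real, which is the "real quantity" claim in the textbook.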
{ "domain": "physics.stackexchange", "id": 4786, "tags": "quantum-mechanics, heisenberg-uncertainty-principle, operators, observables, commutator" }
Which face filter algorithms can work on CPU or integrated GPU?
Question: I see many realtime face swap filters and appearance enhancement filters in smartphone apps. Even apps that can make you look like a granny or show you having a frown, no matter what your actual facial expression is. On searching for open source algorithms/code to apply such effects to my desktop PC's webcam, all I find are hugely resource-heavy programs that require a dedicated GPU or TPU. May I know how smartphone apps process face filters? If they send the data to a server for processing, how does it work in realtime? Or do they use scaled-down CNN models or GANs that can run on the phone's processor? If so, are there lightweight models or algorithms that can run on a desktop PC's CPU or integrated GPU (APU)? Answer: Face filters work by first detecting and localizing the face, then predicting the so-called facial landmarks (a set of points that depict the geometry of the face, like its contour and the shape of the eyes, nose, mouth, etc.), and lastly applying the filter, potentially yielded by some generative model. These are all heavy workloads. For example, you can have a pipeline that first detects a face; if there is no face, you stop the processing. Otherwise you can have a multi-task model that yields the face bounding-box and landmarks directly, improving the runtime because you reuse most of the model for both predictions. However, the part related to the filters is probably always separate; moreover you have to deal with computer graphics to render the filter and also apply some nice FX. What I want to say is that both solutions are possible, and even a combination of them. Many camera apps (especially on mobile) have a very fast implementation of face detection (probably an improvement of the Viola-Jones method). So once you detect a face, you can send that frame to the cloud for fast processing. Once you have the landmarks and, I guess, also the correct geometry of the filter, tracking is usually used to enable real-time effects. 
There exist very efficient tracking algorithms that rely on classical computer vision, which can track objects and even landmarks. Basically you run the expensive part (or cloud processing) one frame every $N$, and for the others you use only tracking. Scaled-down CNN models or GANs are also possible. There exist optimization frameworks like tf-lite that allow you to compress (by quantization and pruning), optimize, and even deploy a DL model. Indeed, you need to care about specific optimizations for the CPU and GPU. If you plan to implement something similar on your own PC, you can use OpenCV (either with Python or C++) for, e.g., tracking, then compress a DL model with tf-lite in case you use TensorFlow and/or Keras: you can take a pre-trained model and compress it with such a tool. In that way, you can target optimized CPU and GPU models. For the graphics part you should find some GPU-optimized library, or write the code yourself in OpenGL or similar. You can also have a look at Dlib: it's a nice lib that has many image processing, computer vision, and ML implementations, and even pre-trained models.
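The "expensive detector one frame in $N$, cheap tracker in between" scheme can be sketched with stand-in functions. Everything below is hypothetical scaffolding: in practice `detect_face` would be a CNN or Viola-Jones cascade and `track_face` a KCF or optical-flow update (e.g. via OpenCV), but the scheduling logic is the same:

```python
# Stand-ins with call counters so the scheduling is visible
calls = {"detect": 0, "track": 0}

def detect_face(frame):
    """Stand-in for the expensive detector (CNN / Viola-Jones cascade)."""
    calls["detect"] += 1
    return (10, 10, 50, 50)               # a fake bounding box (x, y, w, h)

def track_face(box, frame):
    """Stand-in for a cheap tracker update (e.g. KCF or optical flow)."""
    calls["track"] += 1
    x, y, w, h = box
    return (x + 1, y, w, h)               # pretend the face drifted one pixel

def process(frames, every=5):
    """Run the heavy detector one frame in `every`; track on the rest."""
    box, boxes = None, []
    for i, frame in enumerate(frames):
        if box is None or i % every == 0:
            box = detect_face(frame)      # expensive path
        else:
            box = track_face(box, frame)  # cheap path
        boxes.append(box)
    return boxes

boxes = process(range(20), every=5)
assert calls["detect"] == 4 and calls["track"] == 16
```

With `every=5` the heavy model runs at 6 fps while the user still sees a 30 fps filter, which is essentially the trade-off mobile apps exploit.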
{ "domain": "ai.stackexchange", "id": 3839, "tags": "convolutional-neural-networks, generative-adversarial-networks, filters, face-detection, deep-face" }
Which tools are available to call somatic mutations on non-coding regions from whole genome sequencing data?
Question: I have a set of whole genome sequencing samples, some of which have a matched normal while some do not. I want to call somatic mutations in non-coding regions. I was looking at GATK best practices (like this) but it seems it is not yet ready for production. Which alternative tools do you suggest? Answer: Those will have to be treated as separate analyses. You can use Mutect or VarScan for the matched samples and the tools listed in the other question for the unmatched samples; however, that means that you won't be able to compare the results easily. If the inter-sample comparison is important you will have to run all samples as unmatched. Personally I have used LoFreq for somatic variant detection with some success. It is very sensitive, so the false positive rate can be high depending on how noisy your data is.
{ "domain": "bioinformatics.stackexchange", "id": 333, "tags": "wgs, mutations, somatic, non-coding" }
Build Hash with 2 input variables
Question: I have a case that looks like this: John has 120 candies. He needs some plastic bags; he marks each plastic bag with a number and puts 50 candies in each bag. So John can only fill up to 3 plastic bags: plastic bag 1 = 50 candies plastic bag 2 = 50 candies plastic bag 3 = 20 candies I want to create a Hash from the above case. The first variable is 120 candies and the second variable is 50 candies per plastic bag. I have written a method to create such a Hash: def make_a_hash(candies, each_candy) mark_number = 1 arry = [] begin put_candy = candies / each_candy > 0 ? each_candy : candies candies = candies - put_candy arry << [mark_number, put_candy] mark_number += 1 end while candies > 0 arry.to_h end #=> make_a_hash(120, 50) #=> {1=>50, 2=>50, 3=>20} That method works, and I would like to know if there is a better practice than my method. Answer: The first remark I'd like to make about your function is that the names of things are not very clear as to what it is supposed to do. Imagine presenting only the code to someone who doesn't know what problem you are trying to solve. Do you think it would be very clear? We can make it more general, so that it can be applied to other problems. Instead of candies and plastic bags, let's use units and containers. That way, we can also calculate practically any "number of \$x\$ in \$y\$..." problems that involve integers: Number of boxes in delivery trucks Number of passengers in train carts Etc. Since we are looking to get as many full containers as we can, and then the remainder in another container, let's name the function that way as well. Here is what it looks like so far, with just the names changed: def get_full_containers_and_remainder(units, container_capacity) container_number = 1 containers = [] begin units_in_this_container = units / container_capacity > 0 ? 
container_capacity : units units -= units_in_this_container containers << [container_number, units_in_this_container] container_number += 1 end while units > 0 containers.to_h end get_full_containers_and_remainder(120, 50) # {1=>50, 2=>50, 3=>20} So far, so good. The logic is identical, but the function is easier to understand with more general names. The logic can be simplified so that it does not rely on manually filling one container at a time in a loop. For such small numbers this is trivial, but if you had to divide up really big numbers this would take very long. We can instead use a bit of simple math to calculate the remainder as well as the number of containers without a loop. Using the modulo operator we can get the remainder: remainder = units % container_capacity # remainder of 120 ÷ 50 = 20 Then we can calculate the number of full containers by subtracting the remainder from the total units, and then dividing by the container size. count_full_containers = (units - remainder) / container_capacity # (120 - 20) ÷ 50 = 2 From there it's simply a matter of adding that many full containers, and then, if anything is left over, one more container with the remainder. Note that I added a check at the beginning to return an empty hash early if either of the values is zero, to avoid division-by-zero errors. (A plain while loop is used, rather than begin/end while, so that nothing is added when there are zero full containers.) Working demo on repl.it def get_full_containers_and_remainder(units, container_capacity) if units == 0 || container_capacity == 0 return [].to_h end remainder = units % container_capacity count_full_containers = (units - remainder) / container_capacity container_number = 1 containers = [] while container_number <= count_full_containers containers << [container_number, container_capacity] container_number += 1 end containers << [container_number, remainder] if remainder > 0 return containers.to_h end puts get_full_containers_and_remainder(120, 50) # {1=>50, 2=>50, 3=>20} puts get_full_containers_and_remainder(0, 50) # {} puts get_full_containers_and_remainder(120, 0) # {}
{ "domain": "codereview.stackexchange", "id": 25702, "tags": "beginner, ruby, ruby-on-rails" }
Is the velocity gradient of a homogeneous viscous fluid constant?
Question: The formula for the viscosity of a fluid (or even a gas) is \begin{align*} \frac{F}{A} =\mu\frac{du}{dz} \end{align*} where $F$ is the force applied to the top layer of the fluid, $A$ is the surface area between the top and bottom layers of the fluid, $\mu$ is the viscosity of the fluid, and $du/dz$ is the velocity gradient of the fluid. In general, is the velocity gradient of a homogeneous viscous fluid a constant, or are there cases where higher order terms are needed? Answer: This formula is applicable only in the case of a stationary fluid sheared between two plates (plane Couette flow), in the limit that the shear rate is small. In that case the shear rate is automatically spatially constant. If the shear rate is large (or the fluid is non-Newtonian) then $\mu$ will depend on the shear rate. The general formula is that the stress tensor is given by $$ \Pi_{ij} = P\delta_{ij} +\rho u_iu_j - \mu\left( \nabla_iu_j+\nabla_j u_i -\frac{2}{3}\delta_{ij} (\nabla\cdot u) \right) - \zeta \delta_{ij} (\nabla\cdot u) + O(\nabla^2) $$ where $O(\nabla^2)$ denotes terms that become important at higher strain rate. The force per area on a surface element is $$ dF_i = \Pi_{ij}dA_j $$ We can quickly check that this reproduces Newton's formula. Take a flow $u_x(z)$ bounded by plates in the $xy$ plane. The tangential force per area on the plates is $$ F_x/A=-(\hat{e}_z)_j \Pi_{xj} = -\Pi_{xz}=\mu\nabla_z u_x $$
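As a quick sign-convention check, a sketch that evaluates the viscous stress for a linear shear profile $u_x(z) = \dot\gamma z$; the numbers are hypothetical, and for this flow only the $\mu\,\nabla_z u_x$ term of the stress tensor survives ($u_z = 0$ and $\nabla\cdot u = 0$):

```python
mu, g = 1.8e-5, 50.0          # hypothetical viscosity (Pa*s) and shear rate (1/s)

def u_x(z):
    """Plane Couette profile: linear in z, so the shear rate is constant."""
    return g * z

def dudz(z, h=1e-6):
    """Central finite-difference derivative of u_x."""
    return (u_x(z + h) - u_x(z - h)) / (2 * h)

# Pi_xz = -mu * (du_x/dz) for this flow; the force per area on the lower
# plate is then F_x/A = -Pi_xz = mu * du_x/dz, i.e. Newton's formula.
Pi_xz = -mu * dudz(0.001)
assert abs(Pi_xz + mu * g) < 1e-8
```

For a profile that is not linear (say $u_x \propto z^2$), the same stress formula gives a $z$-dependent shear stress, which is the answer to the "is the gradient constant?" part: only for this idealized geometry.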
{ "domain": "physics.stackexchange", "id": 54748, "tags": "fluid-dynamics, velocity, flow, viscosity" }
Integrate 3D sensing in robot_localization package
Question: Hi, I came across this image from the robot_localization package, and I wonder if it is possible to integrate 3D sensing data in the sensor fusion like in the picture. I didn't find anything about incorporating 3D sensing data in the documentation of this package. Best, Originally posted by andrestoga on ROS Answers with karma: 188 on 2019-08-15 Post score: 0 Answer: The package robot_localization is just a Kalman filter that fuses odometry data to provide a filtered estimate from the fusion of sensors. The package can currently accept messages of type: Odometry IMU Pose Twist from a plethora of sources. Where those sources come from is up to you. The package does have support for 3D localization, and for 2D localisation with two_d_mode set to true. Please see the documentation as to how to use this. Specifically referring to your picture: Odometry would be supplied from a tachometer relating the sensor to kinematic constraints IMU would come from an Inertial Measurement Unit, for which the manufacturer most likely provides a driver GPS provides a pose in a global reference frame, which is determined by the navsat_transform_node, also a part of the package Camera using some visual odometry package such as rtabmap or viso 3D sensing appears to represent an Xbox Kinect in this image, on which you could perform the above visual odometry or even Iterative Closest Point (ICP) odometry by interpreting the depth sensing as a point cloud Based on the sources you supply to the state estimation nodes, it will provide a 3D (or 2D) estimate of your position. This package is often used in collaboration with gmapping for Simultaneous Localisation And Mapping (SLAM). Thanks, Grant. Originally posted by PapaG with karma: 161 on 2019-08-15 This answer was ACCEPTED on the original site Post score: 3
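As a concrete illustration, a hypothetical, abridged parameter file for the package's `ekf_localization_node` wiring up two of the sources above might look like the following. The topic names are placeholders, and the `*_config` arrays are the package's 15 booleans selecting x, y, z, roll, pitch, yaw, their velocities, and the linear accelerations; check the robot_localization documentation for the exact semantics before copying anything.

```yaml
frequency: 30
two_d_mode: false            # set true to constrain the estimate to the plane

odom0: /wheel/odometry       # e.g. tachometer-based wheel odometry
odom0_config: [true,  true,  false,
               false, false, true,
               false, false, false,
               false, false, false,
               false, false, false]

imu0: /imu/data
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              true,  true,  true]
```

A visual-odometry or ICP source from the 3D sensor would be added the same way, as another `odom*` or `pose*` input with its own config matrix.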
{ "domain": "robotics.stackexchange", "id": 33629, "tags": "ros, navigation, ros-melodic, robot-localization" }
Operator precedence grammar
Question: According to this post https://stackoverflow.com/questions/28397767/computing-leading-and-trailing-sets-for-context-free-grammar, while constructing an operator precedence parser we have to create a table of precedence relations between operators. Actually I don't understand the rules which tell how to build such a table. For example, take the last production term -> '(' expr ')'. Do we use the rule terminal nonterminal because of ( expr, or do we use the rule nonterminal terminal because of expr ), or do we use the rule terminal nonterminal terminal? Should we divide the production or treat it as a whole? Also, is there any other rule applicable in the example from the link besides terminal nonterminal terminal? Answer: I think that answer is clear (although of course I would do, because I wrote it). What it says is: if you find $$nonterminal\; TERMINAL$$ in any production, then you add the precedence relations $TRAIL \gtrdot TERMINAL$ for every $TRAIL$ in $Trailing(nonterminal)$. Similarly, every occurrence of $$TERMINAL\; nonterminal$$ generates the relationships $TERMINAL \lessdot LEAD$ for every $LEAD$ in $Leading(nonterminal)$. So in $'(' expr ')'$, you have both an instance of $TERMINAL\; nonterminal$ and an instance of $nonterminal\; TERMINAL$, and you need to deal with both of them independently.
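To make the rules fully mechanical, here is a sketch (plain Python, writing `*` for × and `id` for operands) that computes Leading/Trailing by fixed point and then derives the relations: terminal-then-nonterminal gives ⋖, nonterminal-then-terminal gives ⋗, and two terminals separated by at most one nonterminal give ≐. The textbook-standard expression grammar is used as the example:

```python
NONTERMS = {"E", "T", "F"}
RULES = {
    "E": [("E", "+", "T"), ("T",)],
    "T": [("T", "*", "F"), ("F",)],
    "F": [("(", "E", ")"), ("id",)],
}

def leading_trailing():
    lead = {A: set() for A in NONTERMS}
    trail = {A: set() for A in NONTERMS}
    changed = True
    while changed:
        changed = False
        for A, rhss in RULES.items():
            for rhs in rhss:
                before = len(lead[A]) + len(trail[A])
                if rhs[0] in NONTERMS:
                    lead[A] |= lead[rhs[0]]          # A => B ... pulls in Leading(B)
                    if len(rhs) > 1 and rhs[1] not in NONTERMS:
                        lead[A].add(rhs[1])          # A => B t ...
                else:
                    lead[A].add(rhs[0])              # A => t ...
                if rhs[-1] in NONTERMS:
                    trail[A] |= trail[rhs[-1]]       # symmetric for Trailing
                    if len(rhs) > 1 and rhs[-2] not in NONTERMS:
                        trail[A].add(rhs[-2])
                else:
                    trail[A].add(rhs[-1])
                changed |= len(lead[A]) + len(trail[A]) != before
    return lead, trail

def relations():
    lead, trail = leading_trailing()
    rel = set()
    for rhss in RULES.values():
        for rhs in rhss:
            for x, y in zip(rhs, rhs[1:]):
                if x not in NONTERMS and y in NONTERMS:
                    rel |= {(x, "<", t) for t in lead[y]}    # TERMINAL nonterminal
                if x in NONTERMS and y not in NONTERMS:
                    rel |= {(t, ">", y) for t in trail[x]}   # nonterminal TERMINAL
                if x not in NONTERMS and y not in NONTERMS:
                    rel.add((x, "=", y))
            for x, m, y in zip(rhs, rhs[1:], rhs[2:]):
                if x not in NONTERMS and m in NONTERMS and y not in NONTERMS:
                    rel.add((x, "=", y))             # '(' expr ')' gives ( = )
    return rel

rel = relations()
assert ("(", "=", ")") in rel   # from F -> ( E )
assert ("+", "<", "*") in rel   # '*' is in Leading(T)
assert ("*", ">", "+") in rel   # '*' is in Trailing(E)
```

So for `'(' expr ')'` you really do apply several rules at once: `( expr` contributes ⋖ relations, `expr )` contributes ⋗ relations, and the terminal-nonterminal-terminal pattern contributes `( ≐ )`.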
{ "domain": "cs.stackexchange", "id": 6822, "tags": "terminology, formal-grammars, parsers" }
CFG-Infinite recursion
Question: As you see, the string production process never ends. Can someone explain to me whether this language is regular or not? $ S \to A B S $ $ A \to S $ $ B \to a B b $ Answer: The language is regular: $$ L = \{\}$$ It doesn't contain any word, not even the empty word $\varepsilon$. The automaton recognizing it has only one state, a reject state. Or, if you are using a DFA, the automaton just wouldn't have an accept state. Notice that the language is regular, but the grammar is written in a form that's usually used for context-free grammars; the language produced by it is still regular.
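The claim that $L = \{\}$ can be checked mechanically: a nonterminal is "productive" if some production rewrites it using only terminals and already-productive nonterminals, computed as a fixpoint; here no nonterminal ever becomes productive. A small Python sketch (the helper name is my own, not standard terminology from the question):

```python
def productive(grammar):
    """Fixpoint: a nonterminal is productive iff some production's RHS
    consists only of terminals and already-productive nonterminals."""
    prod = set()
    changed = True
    while changed:
        changed = False
        for nt, rhss in grammar.items():
            if nt in prod:
                continue
            for rhs in rhss:
                if all(sym not in grammar or sym in prod for sym in rhs):
                    prod.add(nt)
                    changed = True
                    break
    return prod

# The grammar from the question: S -> A B S,  A -> S,  B -> a B b
grammar = {"S": [["A", "B", "S"]], "A": [["S"]], "B": [["a", "B", "b"]]}
# No nonterminal can ever terminate a derivation, so L(G) is empty.
is_empty = "S" not in productive(grammar)
```

For contrast, adding a production like B -> ε would make B productive (though S would still not be).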
{ "domain": "cs.stackexchange", "id": 16607, "tags": "regular-languages, context-free, regular-expressions" }
Effect of bluetooth headphones on health
Question: There was an article on the internet claiming that there's a high chance that bluetooth headphones cause a lot of mental and physical damage to our body. They explained it in a pretty neat way, if I understood it correctly. They said that as bluetooth headphones use the second class of waves, which have an effective range of about 10 meters and an average amount of effect, they are capable of damaging a flow in the body. As they have a fairly high energy, and as electrical flows in our body are more primary than chemical flows, the magnetic field caused by the headphones affects the electrical flows in the body and disrupts them, causing several problems in our body. Is this explanation right? Do you know any trustworthy articles I can read to learn how bad bluetooth is for humans? Also, do the same things apply to wifi waves too? Thanks in advance for any help. Link: https://www.radiationhealthrisks.com/bluetooth-technology-radiation/ Answer: So physics only has a very little bit to say about this topic. Obviously, if there is a very large circuit, you can induce electrical currents in it with radio and microwaves—if this weren't true, your radio would not work. However, this has to be understood as a bulk motion of a lot of electrons over the wavelength of the wave, and so in terms of biological structures, if we take the typical biological structure as a cell of 100 microns, this starts to be an issue at frequencies of 3 terahertz or above. At frequencies much lower than that, like Bluetooth (2.4 GHz, I think?), the internal function of the cell is not so much being disturbed, but possibly a larger jiggling of a bunch of cells all together in an organ is possible. As you get larger, the danger would apparently get lower: your body might have limited resources to regulate a cell, but it has a whole circulatory system to cool down a warm liver.
In addition, as those frequencies get even higher, a new effect starts to appear: electromagnetic waves start to become chemically active due to quantum mechanics. The bonds that bind together molecules have a certain energy per bond, and quantum mechanics says that an individual particle of light has a certain kick to it, and we can ask when those energies are of the same scale. This turns out to be at the very high frequencies (petahertz) that you see in visible light, for example: visible light is chemically active, and that is why we have receptors to see it! As you go higher and higher in frequency, you get to ultraviolet rays, x-rays, and gamma rays. These are so chemically active, they have so much kick, that they can be powerful enough to disrupt the chemical bonds that are in your body; in particular they can potentially break apart DNA, which is thought to be a mechanism by which these rays cause mass cell death (sunburns) and occasionally mess with the wrong parts of the DNA to create mutant cells that do not stop replicating (cancer). It is extremely unlikely that Bluetooth is carcinogenic: the waves are just too big for that. People have worried about this for a long time, in the context of cell phone usage, and large studies have been carried out with no findings of a significant increase in cancer risk. For example, a study called Interphone found that across most of their categories increased cell phone use was associated with less cancer, which seemed strange. With that said, we physicists would not be able to rule out more sophisticated interactions where your body works like some sort of antenna and responds in a sophisticated way to external microwaves and radio waves. That is the role for medical practitioners and researchers. I can design a circuit that sits on the back of a mouse and if I am cruel I can make it listen for a microwave signal and then dump toxins into that mouse's bloodstream.
Therefore, I as a physicist cannot prove that that is not what happens inside your body anywhere. It is physically possible for such mechanisms to exist, and we have nothing to say about them as physicists. As a broader scientist, I can say that that sounds like a weird thing to evolve, but weird things do happen to evolve all the time, so I don't think that's necessarily a decisive argument against it.
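For what it's worth, the rough numbers quoted in the answer are easy to reproduce; a back-of-envelope Python check (the 100-micron cell size and the few-eV bond energy are the same order-of-magnitude assumptions used above):

```python
# Order-of-magnitude checks for the frequency scales quoted above.
c = 3.0e8            # speed of light, m/s
h = 6.626e-34        # Planck constant, J*s
eV = 1.602e-19       # joules per electronvolt

cell_size = 100e-6                      # assumed typical cell scale, 100 microns
f_cell = c / cell_size                  # wavelength comparable to a cell: ~3 THz

f_bluetooth = 2.4e9                     # Bluetooth carrier frequency, Hz
photon_energy_bt = h * f_bluetooth / eV # a Bluetooth photon's "kick", in eV
bond_energy = 3.0                       # typical chemical bond, a few eV (assumed)
```

The Bluetooth photon energy comes out around $10^{-5}$ eV, several orders of magnitude below a chemical bond, which is the quantitative content of "the waves are just too big for that".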
{ "domain": "physics.stackexchange", "id": 64807, "tags": "electromagnetic-radiation, everyday-life, biology, medical-physics" }
Can Gauss's law for magnetism be used for open surfaces?
Question: Suppose there's a loop of current-carrying wire in a plane. Then it's stated in my book that the magnetic flux through the area of the loop will be negative and of equal magnitude to the flux outside the loop and in the plane. The reason given supposedly follows Gauss's law: "since magnetic field lines are closed loops, each field line crosses both inside and outside the loop, so the flux is the same". This is highly unsatisfactory for me because, firstly, to the best of my knowledge Gauss's law is used for closed surfaces and not for planes; secondly, it doesn't make sense that just because "lines pass through both regions" the fluxes will necessarily be equal. After all, lines are just imaginations and flux is the dot product of the field and the area vector; how exactly will the dot products in both regions be equal, to be specific? Answer: One has to be very careful when trying to apply Gauss's law to open surfaces. Usually, you need to see the open surface as the limit of a closed surface. In our case, you can consider a half-sphere having the same center as the loop, and imagine its surface as the radius of the half-sphere goes to infinity: the flat circular part of the surface becomes a plane, so we need to show that the flux through the hemispherical part goes to zero as the radius goes to infinity. It can be shown this way: as the radius $R$ grows, the area of the hemisphere grows as $R^2$; on the other hand, at great distance from the current loop, its magnetic field is that of a magnetic dipole, which decays as $1/R^3$. Then the flux of the field through the hemisphere goes as $R^2\cdot 1/R^3 = 1/R$, and this means that as $R$ goes to infinity, the contribution to the flux coming from the hemisphere goes to zero. Since the flux through the hemisphere goes to zero, we can discard it and apply Gauss's law to the plane only. Anyway, you can also give a precise meaning to the flux lines.
In particular you can show the validity of the following statement: If $S_1$ and $S_2$ are two surfaces such that every magnetic field line going through $S_1$ crosses also $S_2$ and vice versa, then the flux of the magnetic field through the two surfaces is the same. To convince yourself of this fact, consider all the field lines going through a finite surface $S_1$. They occupy a portion of space that is called a flux tube (follow the link to Wikipedia for a useful visualization). The surface of this region of space is such that the magnetic field is parallel to it at every point. It is clear that $S_1$ is a section of this flux tube. And it is also evident that every surface $S_2$ that satisfies the hypotheses of the previous statement has to be a section of the same flux tube. Then we can consider the closed surface made by $S_1$, $S_2$ and the "lateral surface" of the flux tube comprised between $S_1$ and $S_2$. We can apply Gauss's law to this closed surface. The flux through the lateral surface is zero because there the field is by definition parallel to the surface. So, in order for the total flux to be zero, the flux through $S_1$ must cancel out with the flux through $S_2$.
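The $R^2\cdot 1/R^3 = 1/R$ scaling from the first part of the answer can be verified numerically; a Python sketch integrating only the radial component of an ideal dipole field over a hemisphere (the unit dipole moment is an arbitrary choice for illustration):

```python
from math import sin, cos, pi

def hemisphere_flux(R, m=1.0, mu0=4 * pi * 1e-7, n=2000):
    """Midpoint-rule integral of the radial dipole component
    B_r = (mu0/4pi) * 2 m cos(theta) / r^3 over a hemisphere of radius R."""
    flux = 0.0
    dtheta = (pi / 2) / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        B_r = (mu0 / (4 * pi)) * 2 * m * cos(theta) / R**3
        flux += B_r * (2 * pi * R**2 * sin(theta)) * dtheta   # B_r dA
    return flux

# Doubling R halves the flux (1/R scaling), so it vanishes as R -> infinity.
ratio = hemisphere_flux(1.0) / hemisphere_flux(2.0)
```

Analytically the integral gives $\mu_0 m/(2R)$, so the leftover flux through the hemispherical cap dies off exactly like $1/R$, as the answer argues.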
{ "domain": "physics.stackexchange", "id": 69724, "tags": "electromagnetism, magnetic-fields, gauss-law" }
Why is a conservative force defined as the negative gradient of a potential?
Question: I'm learning about work in my dynamics class right now. We have defined the work on a particle due to the force field from point A to point B as the line integral of the force field from point A to B. From math I know that if a vector field has a potential, we only need to evaluate the potential at point B minus the potential at point A to get the result of the line integral. In the text that I'm reading, it's explained that if the integral over a force field is path-independent, then the force field $F = -{\rm grad}(V)$, where $V$ is the potential. Why is it defined as the negative gradient? Doesn't one determine the potential from $F$ mathematically? Why do we impose the sign on the potential? Answer: We introduce a minus sign to equate the mathematical concept of a potential with the physical concept of potential energy. Take the gravitational field, for example, which we approximate as being constant near the surface of Earth. The force field can then be described by $\vec{F}(x,y,z)=-mg\hat{e_z}$, taking the up/down direction to be the $z$ direction. The mathematical potential $V$ would be $V(x,y,z) = -mgz+\text{Constant}$ and would satisfy $\nabla V=\vec{F}$. This would mean that decreasing in height increases the potential energy, which would force us to redefine mechanical energy as $T-V$ in order to maintain conservation. Instead of redefining mechanical energy, we introduce the minus sign $\vec{F} = -\nabla V$, which equates the physical notion of potential energy with the mathematical notion of the scalar potential.
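The sign convention is easy to verify numerically for the constant-gravity case; a minimal Python sketch (mass, height and time values are arbitrary) checking both $\vec F = -\nabla V$ and conservation of $T+V$:

```python
# Numerical check that F = -grad(V) reproduces gravity near Earth's surface,
# and that T + V is conserved along a free-fall trajectory.
m, g = 2.0, 9.81

def V(z):
    return m * g * z                              # potential energy, physicist's sign

def F(z, h=1e-6):
    return -(V(z + h) - V(z - h)) / (2 * h)       # central-difference -dV/dz

# Free fall from rest at z0: z(t) = z0 - g t^2 / 2,  v(t) = -g t
z0, t = 10.0, 1.0
z, v = z0 - g * t**2 / 2, -g * t
E = 0.5 * m * v**2 + V(z)                         # should equal V(z0) for all t
```

With the opposite sign (the "mathematical" potential $V=-mgz$), the conserved combination would indeed be $T-V$ instead, which is exactly what the answer is avoiding.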
{ "domain": "physics.stackexchange", "id": 83195, "tags": "forces, potential-energy, conventions" }
Oscillations of plane waves
Question: I am reading these notes on electromagnetism. On page 25 it reads the following, about plane waves: A plane wave propagating in the direction of z has no oscillations in the transverse plane $(k_x^2 + k_y^2 = 0)$, whereas, in the other limit, a plane wave propagating at a right angle to $z$ shows the highest spatial oscillations in the transverse plane $(k_x^2 + k_y^2 = k^2)$. But as far as I know, plane EM waves oscillate in a plane which is orthogonal to the direction of propagation, so if the direction of propagation was $z$, then the oscillations of E and H should actually take place in the $xy$ plane. Is this a mistake in the text, or am I getting something wrong? Answer: Your text is completely correct (though it arguably uses language that's more baroque than it really needs to, depending on whether the context around that passage warrants that construction or not). The passage is not talking about the direction of polarization at all; instead, it's talking about the spatial dependence of the wave on the position. If a plane wave is propagating along $z$, then there is no dependence on the $x$ or $y$ coordinates. If a plane wave is propagating at a right angle to $z$, then its propagation direction is in the $x,y$ plane, and the direction of highest spatial oscillations (i.e. the propagation direction, along $\vec k$), is in the $x,y$ plane.
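The distinction can be made concrete with the scalar factor $e^{i\vec k\cdot\vec r}$ of a plane wave; a small Python/NumPy sketch (the wavenumber and sample points are arbitrary):

```python
import numpy as np

# Spatial dependence of the plane-wave factor exp(i k.r).
k = 2 * np.pi  # arbitrary wavenumber magnitude

def wave(x, y, z, k_vec):
    kx, ky, kz = k_vec
    return np.exp(1j * (kx * x + ky * y + kz * z))

k_along_z = (0.0, 0.0, k)   # propagation along z: k_x^2 + k_y^2 = 0
k_along_x = (k, 0.0, 0.0)   # propagation at a right angle to z

# Moving around in the transverse (x, y) plane leaves the first wave unchanged,
flat = bool(np.isclose(wave(0.3, -1.2, 0.5, k_along_z),
                       wave(7.0, 4.0, 0.5, k_along_z)))
# while the second wave oscillates entirely within that plane.
jump = abs(wave(0.25, 0.0, 0.5, k_along_x) - wave(0.0, 0.0, 0.5, k_along_x))
```

This is the "spatial dependence on position" the answer refers to, with no reference to the polarization direction of E or H at all.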
{ "domain": "physics.stackexchange", "id": 47756, "tags": "electromagnetism, magnetic-fields, electric-fields, plane-wave" }
Does introducing a gauge field into the complex scalar field theory Lagrangian change its dynamics?
Question: I've been reading Lancaster & Blundell, and in Chapter 14 they focus on the Lagrangian $$ \mathcal{L}=(\partial^\mu\psi)^\dagger(\partial_\mu\psi) - m^2\psi^\dagger\psi. $$ To impose invariance under the transformation $\psi\rightarrow\psi\exp(i\alpha(x))$, where $\alpha(x)$ is a coordinate-dependent phase, they replace the derivatives in $\mathcal{L}$ with covariant derivatives $$ D_\mu = \partial_\mu + iqA_\mu. $$ Invariance then follows if we also admit the transformation $$ A_\mu\rightarrow A_\mu-\frac{1}{q}\partial_\mu\alpha(x). $$ Now, my question is a simple one: why are we 'allowed' to change the Lagrangian seemingly arbitrarily? I see how this change leads to the invariance of $\mathcal{L}$ with respect to the transformation $\psi\rightarrow\psi\exp(i\alpha(x))$, but surely in doing so we change the dynamics of the field $\psi$? Expansion of the 'new' Lagrangian would seem to suggest that the EL equations do indeed result in different dynamics. Many thanks for your help. Answer: This is indeed true and is what is called the gauge principle. It tells us that if we make a global symmetry local, we need to add a corresponding gauge field such that the total Lagrangian still remains invariant under this local gauge transformation. This is a new dynamical field which has its own equations of motion and can couple to the matter field, leading to interactions. In this case the original Lagrangian is invariant under $U(1)$ as $\psi \to \psi e^{i \alpha}$; note that also $\partial_\mu \psi \to \partial_\mu \psi e^{i \alpha}$. We say that these fields transform in the fundamental representation of $U(1)$.
Now after making our transformation local, $\alpha \equiv \alpha(x)$, it's easy to see that $\partial_\mu \psi \not\to \partial_\mu \psi e^{i \alpha(x)}$. To account for this, as we still want our field to transform in the fundamental representation, we have to introduce a gauge field $A_\mu(x)$ and a covariant derivative $\mathcal{D}_\mu$ such that $\mathcal{D}_\mu \psi \to \mathcal{D}_\mu \psi e^{i\alpha(x)}$. This last transformation dictates how $A_\mu(x)$ should transform.
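The covariance property that makes this work can be checked numerically in a 1-D toy model; a Python sketch with arbitrary smooth test functions (all the specific field choices here are illustrative assumptions):

```python
import cmath

# 1-D toy check: under psi -> psi e^{i alpha(x)} and A -> A - alpha'(x)/q,
# the covariant derivative D psi = psi' + i q A psi picks up the same
# local phase as psi itself, while the plain derivative does not.
q, h = 1.7, 1e-6

psi   = lambda x: (1 + 0.3 * x) * cmath.exp(1j * x)   # arbitrary test field
alpha = lambda x: 0.5 * x**2                          # local phase
A     = lambda x: 0.2 * x                             # arbitrary gauge field

def D(f, gauge, x):
    dfdx = (f(x + h) - f(x - h)) / (2 * h)            # central difference
    return dfdx + 1j * q * gauge(x) * f(x)

psi_new = lambda x: psi(x) * cmath.exp(1j * alpha(x))
A_new   = lambda x: A(x) - x / q                      # alpha'(x) = x

x0 = 0.8
covariance_error = abs(D(psi_new, A_new, x0)
                       - cmath.exp(1j * alpha(x0)) * D(psi, A, x0))

# The plain derivative alone fails the same test by a finite amount:
plain = lambda f, x: (f(x + h) - f(x - h)) / (2 * h)
plain_error = abs(plain(psi_new, x0)
                  - cmath.exp(1j * alpha(x0)) * plain(psi, x0))
```

The covariant-derivative mismatch vanishes (up to finite-difference error), while the ordinary derivative picks up an extra $i\alpha'(x)\psi$ term, which is precisely what the gauge field is introduced to cancel.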
{ "domain": "physics.stackexchange", "id": 72975, "tags": "lagrangian-formalism, field-theory, gauge-theory, gauge-invariance" }
Do orbitals overlap?
Question: Yes, as the title states: Do orbitals overlap? I mean, if I take a look at this figure... I see the distribution in different orbitals. So if for example I take the S orbitals, they are all just a sphere. So won't the 2S orbital overlap with the 1S orbital, making the electrons in each orbital "meet" at some point? Or have I misunderstood something? Answer: If you mean to ask "do the orbital radial probability distributions overlap?", the answer is yes: Image Credit making the electrons in each orbital "meet" at some point As you can see from the image, the electron orbitals are not position eigenstates. If you're imagining two point-like electrons in different orbitals colliding, you're not thinking "quantum mechanically".
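The overlap of the radial probability distributions is easy to see quantitatively for hydrogen; a Python sketch using the textbook 1s and 2s radial functions, in units where the Bohr radius is 1:

```python
import math

# Radial probability densities P(r) = r^2 |R_nl(r)|^2 for hydrogenic 1s
# and 2s states, in units where the Bohr radius a = 1.
def P_1s(r):
    R = 2.0 * math.exp(-r)                                              # R_10
    return r**2 * R**2

def P_2s(r):
    R = (1.0 / math.sqrt(2.0)) * (1.0 - r / 2.0) * math.exp(-r / 2.0)   # R_20
    return r**2 * R**2

# Both densities are strictly positive at r = 1 Bohr radius: the 1s and 2s
# radial distributions occupy overlapping regions of space.
overlap_at_1 = min(P_1s(1.0), P_2s(1.0))
```

Both curves are nonzero over the whole range $r > 0$ (apart from the 2s node at $r = 2$), so the distributions overlap everywhere, even though they peak at different radii.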
{ "domain": "physics.stackexchange", "id": 9884, "tags": "quantum-mechanics, electrons, atoms" }
Decrease in temperature of an aqueous salt solution decreases conductivity
Question: Why does the conductivity of a water solution drop as the temperature decreases? How are these two related? Answer: According to the Stokes-Einstein-Debye theory, and assuming the ionic composition remains constant (say for a fully dissociated salt), the main factor accounting for the response of the conductivity to temperature is the change in the viscosity of the solvent. In the SED theory the frictional drag coefficient $f$ of a charged particle is proportional to the viscosity $\eta$: $$ f \propto \eta$$ As a result the electrical mobility $\mu$ of an ion of charge $q$ is inversely proportional to the viscosity, since $$ \mu =q/f\propto \eta^{-1}$$ and since the specific conductance $ \kappa$ depends linearly on the mobilities (approximately, at constant ionic strength), $$ \kappa \propto \eta^{-1}$$ Since the viscosity usually increases with decreasing temperature, the conductivity decreases.
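A quick numerical illustration of $\kappa \propto \eta^{-1}$ in Python; the water viscosity values below are approximate handbook numbers (an assumption on my part, not given in the answer):

```python
# Sketch of the SED-based estimate: kappa is proportional to 1/eta, so the
# conductivity ratio between two temperatures is the inverse viscosity ratio.
# Water viscosities are approximate handbook values (assumption).
eta = {0: 1.79e-3, 25: 0.89e-3}   # Pa*s, pure water at 0 C and 25 C

def conductivity_ratio(T_cold, T_warm):
    """kappa(T_cold) / kappa(T_warm) under kappa ~ 1/eta."""
    return eta[T_warm] / eta[T_cold]

ratio = conductivity_ratio(0, 25)
```

The ratio comes out near 0.5: the roughly twofold increase in viscosity on cooling from 25 °C to 0 °C cuts the predicted conductivity roughly in half, consistent with the observed drop.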
{ "domain": "chemistry.stackexchange", "id": 11355, "tags": "aqueous-solution, solutions, temperature, conductivity" }
How to construct quantum circuit to count number of 0-qubits and 1-qubits
Question: Suppose we have a 3-qubit input; each bit is either 0 or 1. How to decide if there are more 1's than 0's? Only 1 extra qubit may be used for the output. (Yes I know this can be achieved using 3 Toffoli gates, but can it be done without Toffoli gates?) Now suppose we have an answer to the above question, then, how to extend the above circuit to deal with a 5-qubit input? Answer: Here's a general strategy that doesn't quite fulfil the brief: for an $n$-qubit input where $n+1=2^k$, $k$ an integer (e.g. $n=3,k=2$), it uses $k$ ancilla qubits but no Toffolis. (You can do something similar if $n+1$ is not a powe of 2, but you'd need some classical post-processing and I'd have thought you might as well just measure the input qubits!) The idea is to define a Hamiltonian $$ H=\left(\sum_{j=1}^nZ_j+nI\right)/2. $$ Note that this has eigenvalues $0,1,2,3,\ldots,n$ corresponding to the number of 1s in the string it's acting on. So, let $U=e^{2i\pi H/2^k}$. This is a unitary with the eigenvectors that we need. If you run a phase estimation procedure using $k$ ancilla qubits, it will exactly read out the number of 1s for you. This requires controlled-$U$, which is just a bunch of controlled-phase gates (i.e. all two-qubit gates) and the Fourier transform which, again, is two-qubit gates. Actually you only need the semi-classical Fourier transform, so it's just one-qubit gates with feed-forward of measurement results. So, once you know who many 1s there are, you can classically process that to decide if it's greater than $n/2$. In the case of $n+1$ being a power of 2, this is particularly simple. You just look at the bit representation of the output, and the most significant bit will give you the answer. So, this would be the only qubit you would need to measure. (Note that this is the last bit output by the Fourier transform, not the first).
{ "domain": "quantumcomputing.stackexchange", "id": 1751, "tags": "programming, circuit-construction" }
Sending HTML reports to a database project
Question: So, I have recently been heavily focusing on Front-End, for data validation, showing errors etc. And a big thank you to CertainPerformance for the many reviews he's done for me :) I have created a Report template using handlebars and a MySQL database to store the data, using Node JS to interact between the 2. I have now gained an understanding of how Node JS can interact with the front end, grab data and send it to a database. I'd like to hear some reviews of my current code - front and back end. For me, personally, it works and I am extremely happy that I am able to create such a thing, whereas before I never thought I could :D Just want to hear some reviews - where I can improve, where am I lacking, what can be simplified etc. Node JS Note: this function is triggered at the routing stage, I haven't included it here. // 21-TEMP-01a exports.TEMP01A21 = async function(req, res) { // Array to store actual temperature var actualTemp = []; // Array to store minimum temperature var minTemp = []; // Array to store maximum temperature var maxTemp = []; // Array to store actual humidity var actualHumid = []; // Array to store minimum humidity var minHumid = []; // Array to store maximum humidity var maxHumid = []; // Get drying room names var roomNames = []; // Loop thorugh each req.body values for(var key in req.body) { // Store req.body key to value var var value = req.body[key]; // For each key that starys with ActualTemp store into array if(key.startsWith("actualtemp")) { actualTemp.push(value); } // minTemp if(key.startsWith("mintemp")) { minTemp.push(value); } // maxTemp if(key.startsWith("maxtemp")) { maxTemp.push(value); } // actualHumidity if(key.startsWith("actualhumid")) { actualHumid.push(value); } // minHumidity if(key.startsWith("minhumid")) { minHumid.push(value); } // maxHumidity if(key.startsWith("maxhumid")) { maxHumid.push(value); } // Room Names if(key.startsWith("dryingroom")) { roomNames.push(value); } } // Get todays date var today = new Date(); // Format
the date var formatDay = today.toISOString().substring(0, 10); // Create an array to create custom MySQL query var values = []; // For each temperature value, store its output to values array for(var i = 0; i < actualTemp.length; i++) { values.push([roomNames[i] , actualTemp[i], minTemp[i], maxTemp[i], actualHumid[i], minHumid[i], maxHumid[i], formatDay, res.locals.user.id]); } db.query("insert into 21TEMP01a (room_name, actual_temperature, min_temperature, max_temperature, actual_humidity, min_humidity, max_humidity, time, user) values ?", [values], (error, result) => { if(error) { console.log(error); } else{ res.redirect('/reports/daily'); } }); } Front-End <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" integrity="sha384-JcKb8q3iqJ61gNV9KGb8thSsNjpSL0n8PARn9HuZOnIxN0hoP+VmmDGMN5t9UJ0Z" crossorigin="anonymous"> <div class="container mt-4"> <h3>Form:</h3> <form id="form" class="mt-4 mb-4" action="/reports_send/21-TEMP-01a" method="POST"> <div style="border: 1px solid black; padding: 40px; border-radius: 25px;"> <h4>Select Room</h4> <div id="RoomSelect"> <select id="RoomMenu" class="form-control mb-4"> <!-- Drying Room 1 --> <option value="dry-1">Drying Room 1</option> <!-- Drying Room 2 --> <option value="dry-2">Drying Room 2</option> <!-- Dry Store--> <option value="dry-3">Dry Store</option> </select> </div> <div id="RoomInputs"> <!-- Drying Room 1 --> <div class="form-group" id="dry-1"> <!-- Title --> <h4>Drying Room 1</h4> <input type="hidden" name="dryingroom1" value="Drying Room 1"> <!-- All temperatures --> <div class="temperatures"> <label>Temperature °C - <strong>Actual</strong></label> <input type="number" class="form-control" name="actualtemp1" step=0.01> <label>Temperature °C - <strong>Minimum</strong></label> <input type="number" class="form-control" name="mintemp1" step=0.01> <label>Temperature °C - <strong>Maximum</strong></label> <input type="number" class="form-control" name="maxtemp1" 
step=0.01> </div> <br> <br> <!-- All humidity --> <div class="humidity"> <label>Relative Humidity - <strong>Actual</strong></label> <input type="number" class="form-control" name="actualhumid1"> <label>Relative Humidity - <strong>Minimum</strong></label> <input type="number" class="form-control" name="minhumid1"> <label>Relative Humidity - <strong>Maximum</strong></label> <input type="number" class="form-control" name="maxhumid1"> </div> </div> <!-- Drying Room 2 --> <div class="form-group" id="dry-2"> <!-- Title --> <h4>Drying Room 2</h4> <input type="hidden" name="dryingroom2" value="Drying Room 2"> <!-- All temperatures --> <div class="temperatures"> <label>Temperature °C - <strong>Actual</strong></label> <input type="number" class="form-control" name="actualtemp2" step=0.01> <label>Temperature °C - <strong>Minimum</strong></label> <input type="number" class="form-control" name="mintemp2" step=0.01> <label>Temperature °C - <strong>Maximum</strong></label> <input type="number" class="form-control" name="maxtemp2" step=0.01> </div> <br> <br> <!-- All humidity --> <div class="humidity"> <label>Relative Humidity - <strong>Actual</strong></label> <input type="number" class="form-control" name="actualhumid2"> <label>Relative Humidity - <strong>Minimum</strong></label> <input type="number" class="form-control" name="minhumid2"> <label>Relative Humidity - <strong>Maximum</strong></label> <input type="number" class="form-control" name="maxhumid2"> </div> </div> <!-- Dry Store --> <div class="form-group" id="dry-3"> <!-- Title --> <h4>Dry Store</h4> <input type="hidden" name="dryingroom3" value="Dry Store"> <!-- All temperatures --> <div class="temperatures"> <label>Temperature °C - <strong>Actual</strong></label> <input type="number" class="form-control" name="actualtemp3" step=0.01> <label>Temperature °C - <strong>Minimum</strong></label> <input type="number" class="form-control" name="mintemp3" step=0.01> <label>Temperature °C - <strong>Maximum</strong></label> <input 
type="number" class="form-control" name="maxtemp3" step=0.01> </div> <br> <br> <!-- All humidity --> <div class="humidity"> <label>Relative Humidity - <strong>Actual</strong></label> <input type="number" class="form-control" name="actualhumid3"> <label>Relative Humidity - <strong>Minimum</strong></label> <input type="number" class="form-control" name="minhumid3"> <label>Relative Humidity - <strong>Maximum</strong></label> <input type="number" class="form-control" name="maxhumid3"> </div> </div> <div id="errors" class="mt-2 alert alert-danger" style="opacity: 0; transition: all .5s;"> <p>Please complete all required fields</p> </div> <button id="submit-btn" class="btn btn-primary">Submit</button> </div> </div> </form> </div> <!-- Errors --> <div id="myModal" class="modal" tabindex="-1" role="dialog"> <div class="modal-dialog" role="document"> <div class="modal-content"> <div class="modal-header"> <h5 class="modal-title">Targets not met</h5> <button type="button" class="close" data-dismiss="modal" aria-label="Close"> <span aria-hidden="true">&times;</span> </button> </div> <div class="modal-body"> <p>Some temperatures or humidity values have not met their targets.</p> <p>- Re-check or continue submitting current data.</p> </div> <div class="modal-footer"> <button id="submit-email" type="button" class="btn btn-primary">Submit</button> <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button> </div> </div> </div> </div> <script> // Store DOM Strings var DOMStrings = { room_options: '#RoomMenu' }; // On selected option, show specific div element showActiveElement = () => { for(const option of document.querySelector(DOMStrings.room_options).options) { document.querySelector(`#${option.value}`).style.display = "none"; } if(document.querySelector(DOMStrings.room_options).value === 'dry-1') { document.querySelector('#dry-1').style.display = "block"; } else if (document.querySelector(DOMStrings.room_options).value === 'dry-2') { 
document.querySelector('#dry-2').style.display = "block"; } else if (document.querySelector(DOMStrings.room_options).value === 'dry-3') { document.querySelector('#dry-3').style.display = "block"; } } // Show selected div element from options document.querySelector(DOMStrings.room_options).addEventListener('change', () => { showActiveElement(); }); // Validate data document.getElementById("form").addEventListener("submit", (e) => { const inputs = document.querySelectorAll('#form input'); let bool; // 1. Check if input fields are empty const empty = [].filter.call(inputs, e => e.value === ""); let notEmpty; if(empty.length === 0) { notEmpty = true; } else { e.preventDefault(); document.getElementById('errors').style.opacity = 1; empty.forEach(element => { element.style.borderColor = "red"; element.placeholder = "Required"; element.addEventListener("input", (e) => { if(element.value === "") { bool = false; element.style.borderColor = "red"; element.placeholder = "Required"; } else { element.style.borderColor = "#fff"; } const empty = [].filter.call(inputs, e => e.value === ""); if(empty.length === 0) { document.getElementById('errors').style.opacity = 0; } else { document.getElementById('errors').style.opacity = 1; } }); }); } // Validate Temperature and humidity values if(notEmpty == true) { for (var i = 0; i < inputs.length; i++) { // Validate Temperatures for Drying room 1 & 2 if(inputs[i].name.match(/^(actual|min|max)-temp-(1|2)$/)) { validateTempDryingRoom(parseFloat(inputs[i].value), inputs[i], e); } // Validate Humidity for Drying room 1 & 2 if(inputs[i].name.match(/^(actual|min|max)humid(1|2)$/)) { validateHumidityDryingRoom(parseFloat(inputs[i].value), inputs[i], e); } // Validate Temp. 
for Dry Store if(inputs[i].name.match(/^(actual|min|max)temp(3)$/)) { validateTempDryStore(parseFloat(inputs[i].value), inputs[i], e); } // Validate humidity for Dry Store if(inputs[i].name.match(/^(actual|min|max)humid(3)$/)) { validateHumidityDryStore(parseFloat(inputs[i].value), inputs[i], e); } } } }); function validateTempDryingRoom(value, item, e) { if(value < 39.4 || value > 49) { e.preventDefault(); item.style.backgroundColor = 'red'; $("#myModal").modal(); } else { item.style.backgroundColor = '#fff'; } } function validateHumidityDryingRoom(value, item ,e) { if(value < 14 || value > 30) { e.preventDefault(); item.style.backgroundColor = 'red'; $("#myModal").modal(); } else { item.style.backgroundColor = '#fff'; } } function validateTempDryStore(value, item, e) { if(value < 10 || value > 27.2) { e.preventDefault(); item.style.backgroundColor = 'red'; $("#myModal").modal(); } else { item.style.backgroundColor = '#fff'; } } function validateHumidityDryStore(value, item ,e) { if(value < 40 || value > 70) { e.preventDefault(); item.style.backgroundColor = 'red'; $("#myModal").modal(); } else { item.style.backgroundColor = '#fff'; } } document.getElementById('submit-email').addEventListener('click', () => { const form = document.getElementById('form'); form.submit(); }); // On load window.onload = () => { showActiveElement(); } </script> Answer: First, your code seems globally ok, and much of what I'll write here are details. Also of course, it does reflect my personnal opinion. While I try not to discuss points about my personnal opinion, but more generic matters, my personnals opinion may impact what I consider generic matters or not. About comments Far from me the idea to discourage you to comment your code, but comments may not always be a good idea. You've probably learnt that commenting code is good, it feels obvious, so obvious that you don't even understand why commenting may harm your code. 
The only drawback you probably see to commenting your code may be that you took time to do it. Well, if you never go back to your code again, you don't need to comment it. But most code is "maintained" and evolves anyway. And that's the problem. In general, when the code changes, the comments around the code tend not to change. When there is a problem in the code, you'll correct the code until the problem is resolved. When there is a problem in a comment, nothing will be done because no one will notice. Also, comments should be meaningful. Usually you can replace some useless comments with correct variable/function/class names. An example (that is much worse than your code, but to show the real difference): // store an empty array in the variable a let a = []; // [...] // if the string k starts with "at", append the content of the value v at the end of the array a if (k.startsWith("at")) { a.push(v); } And now compare it to the following uncommented code: let actualTemperatureArray = []; // [...] if (queryParameterKey.startsWith("actualTemperature")) { actualTemperatureArray.push(value); } The idea is that the name of your variable should reflect what the variable does. And if it does, you don't need to comment it. Same for the "parsing" code. The fact that you look at the query parameter key and check if it starts with a certain string is obvious; it's literally what the code is saying. An interesting comment would not be about what you're doing, but about why you're doing it, if it's not obvious. If it's obvious, perhaps the comment is useless. But comment your tricks. For example, if you're skipping one value in a loop because "the first value should be ignored", this is the kind of thing that should be commented. There is a trick here that the future reader must know. But still you may want to comment your variable declarations. So if you do so, at least use JSDoc (https://jsdoc.app/).
JSDoc lets you add static type definitions that: Can be used by type checkers to validate JavaScript like statically typed languages Are used by modern IDEs (VS Code, WebStorm, etc.) to provide information about the code. Now let's study examples from your code. // Array to store actual temperature var actualTemp = []; Do you think actualTemp is enough for someone to understand what the variable does? I'm not saying it isn't. But you have to decide. I'll take a conservative point of view, and consider that actualTemperatureArray is better (but you know the context better, and perhaps in your field you think that temp is enough, and nobody will think that, in that context, it may mean temporary). And because actualTemperatureArray carries all the meaning you need, a comment is not necessary. var actualTemperatureArray = []; Of course, you'll store actual temperature in that array. No need to say it. But, if you want to say it, use JSDoc. /** @type {string[]} Array to store actual temperature */ var actualTemperatureArray = []; In your code, VS Code will show me the following tooltip when I put my cursor over actualTemp: (local var) actualTemp: any[] So I know that this variable is a local one, and it's an array of I don't know what. In the last example, it will show me the following tooltip when I put my cursor over actualTemperatureArray: (local var) actualTemperatureArray: string[] @type — {string[]} Array to store actual temperature This said, I don't want to discourage you from commenting, but to encourage you to think about what you can bring to the reader that the code is not obviously providing. If you are just repeating what the code says, perhaps the comment is not necessary. If you bring insight about why you did this, it's probably a helpful comment.
Repetitions

The following block has a lot of similar lines:

```javascript
// For each key that starts with ActualTemp store into array
if (key.startsWith("actualtemp")) {
    actualTemperatureArray.push(value);
}
// minTemp
if (key.startsWith("mintemp")) {
    minimalTemperatureArray.push(value);
}
// maxTemp
if (key.startsWith("maxtemp")) {
    maximalTemperatureArray.push(value);
}
// actualHumidity
if (key.startsWith("actualhumid")) {
    actualHumidityArray.push(value);
}
// minHumidity
if (key.startsWith("minhumid")) {
    minimalHumidityArray.push(value);
}
// maxHumidity
if (key.startsWith("maxhumid")) {
    maximalHumidityArray.push(value);
}
// Room Names
if (key.startsWith("dryingroom")) {
    roomNames.push(value);
}
```

It may be a good idea to factor that out. Each block manipulates a different variable, so let's create an object containing all those variables; that way we can access the arrays generically. We need a data structure to describe the data we expect. I consider names like "actualtemp" or "mintemp" part of the protocol, so they can't be changed. I could use them directly as indices, but I'll use more explicit names (that part can be simplified). We first create a structure describing the data we need (note that the order is important — it's the order in the query, and I'll reuse it later to build the query).
```javascript
/**
 * @type {Object.<string, string>} The req.body key prefix for each parameter name
 */
var paramKeyPrefixesByParamName = {
    roomName: "dryingroom",
    actualTemperature: "actualtemp",
    minimalTemperature: "mintemp",
    maximalTemperature: "maxtemp",
    actualHumidity: "actualhumid",
    minimalHumidity: "minhumid",
    maximalHumidity: "maxhumid",
};

/** @type {string[]} The parameter names */
const parameterNames = Object.keys(paramKeyPrefixesByParamName);
```

Now let's create the arrays:

```javascript
/**
 * @type {Object.<string, string[]>} The array of values for each parameter, indexed by parameter name
 */
var valueArraysByParamName = {};
```

I haven't created the individual arrays here; they are created lazily below (I could have done it here).

```javascript
// Loop through each req.body entry
for (var key in req.body) {
    var value = req.body[key];
    // Iterate over all parameter names (roomName, actualTemperature, ...)
    parameterNames.forEach((parameterName) => {
        // Extract the prefix for this parameter (dryingroom, actualtemp, ...)
        const prefix = paramKeyPrefixesByParamName[parameterName];
        if (key.startsWith(prefix)) {
            // Make sure the array exists; if it doesn't, create it.
            if (!valueArraysByParamName[parameterName]) {
                valueArraysByParamName[parameterName] = [];
            }
            // Append the value to the correct array.
            valueArraysByParamName[parameterName].push(value);
        }
    });
}
```

We can now create the query:

```javascript
/** @type {string[][]} An array to create the custom MySQL query */
var values = [];
// For each temperature entry, store the values associated with it
for (var i = 0; i < (valueArraysByParamName.actualTemperature || []).length; i++) {
    values.push([
        ...parameterNames.map((parameterName) => valueArraysByParamName[parameterName][i]),
        formatDay,
        res.locals.user.id
    ]);
}
```

So now the server code looks like:

```javascript
// 21-TEMP-01a
exports.TEMP01A21 = async function (req, res) {
    /**
     * @type {Object.<string, string>} The req.body key prefix for each parameter name
     */
    var paramKeyPrefixesByParamName = {
        roomName: "dryingroom",
        actualTemperature: "actualtemp",
        minimalTemperature: "mintemp",
        maximalTemperature: "maxtemp",
        actualHumidity: "actualhumid",
        minimalHumidity: "minhumid",
        maximalHumidity: "maxhumid",
    };

    /**
     * @type {Object.<string, string[]>} The array of values for each parameter, indexed by parameter name
     */
    var valueArraysByParamName = {};

    // Loop through each req.body entry
    for (var key in req.body) {
        var value = req.body[key];
        // Iterate over all parameter names (roomName, actualTemperature, ...)
        Object.keys(paramKeyPrefixesByParamName).forEach((parameterName) => {
            // Extract the prefix for this parameter (dryingroom, actualtemp, ...)
            const prefix = paramKeyPrefixesByParamName[parameterName];
            if (key.startsWith(prefix)) {
                // Make sure the array exists; if it doesn't, create it.
                if (!valueArraysByParamName[parameterName]) {
                    valueArraysByParamName[parameterName] = [];
                }
                // Append the value to the correct array.
                valueArraysByParamName[parameterName].push(value);
            }
        });
    }

    // Get today's date and format it as YYYY-MM-DD
    var today = new Date();
    var formatDay = today.toISOString().substring(0, 10);

    /** @type {string[][]} An array to create the custom MySQL query */
    var values = [];
    // For each temperature entry, store the values associated with it
    for (var i = 0; i < (valueArraysByParamName.actualTemperature || []).length; i++) {
        values.push([
            ...Object.keys(paramKeyPrefixesByParamName).map((parameterName) => valueArraysByParamName[parameterName][i]),
            formatDay,
            res.locals.user.id
        ]);
    }

    db.query("insert into 21TEMP01a (room_name, actual_temperature, min_temperature, max_temperature, actual_humidity, min_humidity, max_humidity, time, user) values ?", [values], (error, result) => {
        if (error) {
            console.log(error);
        } else {
            res.redirect('/reports/daily');
        }
    });
}
```

Repetition on the front side

On the front side you also have some repetitions:

```javascript
function validateTempDryingRoom(value, item, e) {
    if (value < 39.4 || value > 49) {
        e.preventDefault();
        item.style.backgroundColor = 'red';
        $("#myModal").modal();
    } else {
        item.style.backgroundColor = '#fff';
    }
}

function validateHumidityDryingRoom(value, item, e) {
    if (value < 14 || value > 30) {
        e.preventDefault();
        item.style.backgroundColor = 'red';
        $("#myModal").modal();
    } else {
        item.style.backgroundColor = '#fff';
    }
}

function validateTempDryStore(value, item, e) {
    if (value < 10 || value > 27.2) {
        e.preventDefault();
        item.style.backgroundColor = 'red';
        $("#myModal").modal();
    } else {
        item.style.backgroundColor = '#fff';
    }
}

function validateHumidityDryStore(value, item, e) {
    if (value < 40 || value > 70) {
        e.preventDefault();
        item.style.backgroundColor = 'red';
        $("#myModal").modal();
    } else {
        item.style.backgroundColor = '#fff';
    }
}
```

It looks like you're doing the same thing again and again. Let's rewrite this.
```javascript
function createValidateElement(minValue, maxValue) {
    return function (value, item, e) {
        if (value < minValue || value > maxValue) {
            e.preventDefault();
            item.style.backgroundColor = 'red';
            $("#myModal").modal();
        } else {
            item.style.backgroundColor = '#fff';
        }
    };
}

const validateTempDryingRoom = createValidateElement(39.4, 49);
const validateHumidityDryingRoom = createValidateElement(14, 30);
const validateTempDryStore = createValidateElement(10, 27.2);
const validateHumidityDryStore = createValidateElement(40, 70);
```

I created a function that creates a function. Be careful: validateTempDryingRoom, validateHumidityDryingRoom, validateTempDryStore, and validateHumidityDryStore are now ordinary variables, not function declarations, so they must be declared before they are used. But the usage of those validators is also repetitive:

```javascript
// Validate Temperatures for Drying room 1 & 2
if (inputs[i].name.match(/^(actual|min|max)-temp-(1|2)$/)) {
    validateTempDryingRoom(parseFloat(inputs[i].value), inputs[i], e);
}
// Validate Humidity for Drying room 1 & 2
if (inputs[i].name.match(/^(actual|min|max)humid(1|2)$/)) {
    validateHumidityDryingRoom(parseFloat(inputs[i].value), inputs[i], e);
}
// Validate Temp. for Dry Store
if (inputs[i].name.match(/^(actual|min|max)temp(3)$/)) {
    validateTempDryStore(parseFloat(inputs[i].value), inputs[i], e);
}
// Validate humidity for Dry Store
if (inputs[i].name.match(/^(actual|min|max)humid(3)$/)) {
    validateHumidityDryStore(parseFloat(inputs[i].value), inputs[i], e);
}
```

So we need a data structure that describes the repeated items:

```javascript
const dryDatas = {
    TempDryingRoom: {
        pattern: /^(actual|min|max)-temp-(1|2)$/,
        minValue: 39.4,
        maxValue: 49,
    },
    HumidityDryingRoom: {
        pattern: /^(actual|min|max)humid(1|2)$/,
        minValue: 14,
        maxValue: 30,
    },
    TempDryStore: {
        pattern: /^(actual|min|max)temp(3)$/,
        minValue: 10,
        maxValue: 27.2,
    },
    HumidityDryStore: {
        pattern: /^(actual|min|max)humid(3)$/,
        minValue: 40,
        maxValue: 70,
    },
};
```

And then we can create the validators:

```javascript
const dryDataValidators = Object.fromEntries(
    Object.entries(dryDatas).map(([dryDataKey, dryDataValue]) =>
        [dryDataKey, createValidateElement(dryDataValue.minValue, dryDataValue.maxValue)]));
```

If you find that unreadable, you can use something more explicit:

```javascript
const dryDataValidators = {};
for (let dryDataKey of Object.keys(dryDatas)) {
    const dryDataValue = dryDatas[dryDataKey];
    dryDataValidators[dryDataKey] = createValidateElement(dryDataValue.minValue, dryDataValue.maxValue);
}
```

It does the same thing. And then replace

```javascript
// Validate Temperature and humidity values
if (notEmpty == true) {
    for (var i = 0; i < inputs.length; i++) {
        // Validate Temperatures for Drying room 1 & 2
        if (inputs[i].name.match(/^(actual|min|max)-temp-(1|2)$/)) {
            validateTempDryingRoom(parseFloat(inputs[i].value), inputs[i], e);
        }
        // Validate Humidity for Drying room 1 & 2
        if (inputs[i].name.match(/^(actual|min|max)humid(1|2)$/)) {
            validateHumidityDryingRoom(parseFloat(inputs[i].value), inputs[i], e);
        }
        // Validate Temp. for Dry Store
        if (inputs[i].name.match(/^(actual|min|max)temp(3)$/)) {
            validateTempDryStore(parseFloat(inputs[i].value), inputs[i], e);
        }
        // Validate humidity for Dry Store
        if (inputs[i].name.match(/^(actual|min|max)humid(3)$/)) {
            validateHumidityDryStore(parseFloat(inputs[i].value), inputs[i], e);
        }
    }
}
```

with

```javascript
// Validate Temperature and humidity values
if (notEmpty == true) {
    for (var i = 0; i < inputs.length; i++) {
        // Iterate over all kinds of dryDatas (TempDryingRoom, HumidityDryingRoom, TempDryStore, ...)
        for (let dryDataKey of Object.keys(dryDatas)) {
            const pattern = dryDatas[dryDataKey].pattern;
            const validator = dryDataValidators[dryDataKey];
            if (inputs[i].name.match(pattern)) {
                validator(parseFloat(inputs[i].value), inputs[i], e);
            }
        }
    }
}
```
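As a footnote to the server-side grouping refactor above, the core idea can be reduced to a small pure function that is easy to unit-test. This is a hedged, DOM-free sketch; the names groupByPrefix and prefixesByName are mine, not from the original code:

```javascript
// Group the values of a flat request body by parameter name,
// matching each body key against a per-parameter key prefix.
// (Illustrative sketch; function and argument names are hypothetical.)
function groupByPrefix(body, prefixesByName) {
    const grouped = {};
    for (const key of Object.keys(body)) {
        for (const name of Object.keys(prefixesByName)) {
            if (key.startsWith(prefixesByName[name])) {
                // Create the target array lazily on the first match.
                if (!grouped[name]) {
                    grouped[name] = [];
                }
                grouped[name].push(body[key]);
            }
        }
    }
    return grouped;
}
```

Because the function touches neither req nor the DOM, it can be exercised directly, e.g. groupByPrefix({ actualtemp1: "20" }, { actualTemperature: "actualtemp" }).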
{ "domain": "codereview.stackexchange", "id": 39516, "tags": "javascript, node.js" }
What is a problem, a task, and a solution?
Question: I am attending an algorithms and data structures course at my university, and my professor gave me an interesting question the other day and told me to think about it: what is a problem, a task, and a solution? It seems to me like it's more of a philosophical question. All three things correlate with one another. To me, it seems like if we have a problem, we then have a task to solve it, which we might be able to do with an algorithm, which is a solution. What do you think the answer to this question would be? Does it have some deeper meaning about algorithms, or am I just overthinking it? Answer: A problem is that you're trying to study algorithms (seriously, it's a real-world problem). By posting this question here, you've performed a task in order to solve a part of that problem. A complete solution would be when you understand the problem and perform a set of tasks in order to solve it. Sounds like fun? Actually, it is. Let's understand it with a better example. You want to become a good software developer (don't read too much into the term — replace it with "web" or whatever you like). That is a problem. Now you're attending some courses to fulfill that dream: that is a task toward the solution of the problem. Another task could be practicing and sharpening your CS skills on your own. And there are many more such tasks. All these tasks collectively form a solution.
{ "domain": "cs.stackexchange", "id": 7715, "tags": "algorithms, terminology" }
On-site repulsion and Pauli exclusion
Question: Been studying hopping conduction and something that everyone is taking for granted is bothering me. Let's say we have a bunch of sites that are either unoccupied, singly occupied, or doubly occupied. Due to on-site Coulomb repulsion the two electron levels are separated by U energy at a doubly occupied site. Now everyone is saying that the two electrons on the double site are in the spin singlet state due to, I assume, Pauli exclusion. However the two electrons are not in the same energy level - they are separated by U so why is there a restriction on their spins? Answer: 4tnemele's answer is great, but I thought I'd try to give a simpler explanation of the main point of confusion. Let's say (as in 4tnemele's answer) that there's only one relevant orbital (or "energy level") at each site. That means there's two states a single electron can occupy: spin up and spin down. Also, let's say there's no magnetic field. You say "Due to on-site Coulomb repulsion the two electron levels are separated by U energy at a doubly occupied site." This is wrong, because there is only one "level" (orbital), not two. The Coulomb energy penalty U only applies if there are two electrons in that level, one spin up and one spin down. At a doubly occupied site, the overall energy is increased by U, but that energy doesn't belong to one electron or the other. It comes from the interaction between the electrons. At a doubly occupied site, the two electrons still have exactly the same spatial wavefunction. The many-body wavefunction is spatially symmetric. This is why the spins have to be opposite. So the basic problem is that you shouldn't say "the two electron levels are separated by U". Instead you should say "the state with two electrons at the same site is increased in energy by U relative to what you'd otherwise expect (twice the single-electron energy)".
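To make the answer's single-orbital picture concrete, here is the standard on-site (Hubbard-type) interaction term and the resulting two-electron state. This is textbook notation added for illustration, not taken from the original answer:

```latex
% On-site repulsion: an energy U is paid only when BOTH spin states
% of the single orbital at site i are occupied.
H_U = U \sum_i n_{i\uparrow}\, n_{i\downarrow}

% Double occupancy of the one available orbital \phi makes the spatial
% part symmetric, forcing the antisymmetry into spin (the singlet):
|\text{pair}\rangle = \phi(\mathbf{r}_1)\,\phi(\mathbf{r}_2)\;
\tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\bigr)
```

Note that $U$ multiplies the product of occupations, not either electron's single-particle energy — exactly the point made above.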
{ "domain": "physics.stackexchange", "id": 526, "tags": "condensed-matter, quantum-spin" }
How to change number of particles in gmapping package?
Question: Is this the correct way to change the number of particles used by the gmapping package in a launch file?

```xml
<launch>
  <!-- Arguments -->
  <arg name="model" default="$(env TURTLEBOT3_MODEL)" doc="model type [burger, waffle, waffle_pi]"/>
  <arg name="configuration_basename" default="turtlebot3_lds_2d.lua"/>
  <arg name="set_base_frame" default="base_footprint"/>
  <arg name="set_odom_frame" default="odom"/>
  <arg name="set_map_frame"  default="map"/>

  <!-- Gmapping -->
  <node pkg="gmapping" type="slam_gmapping" name="turtlebot3_slam_gmapping" output="screen">
    <param name="base_frame" value="$(arg set_base_frame)"/>
    <param name="odom_frame" value="$(arg set_odom_frame)"/>
    <param name="map_frame"  value="$(arg set_map_frame)"/>
    <param name="particles" value="100"/>
    <rosparam command="load" file="$(find turtlebot3_slam)/config/gmapping_params.yaml" />
  </node>
</launch>
```

I'm adding `<param name="particles" value="100"/>` to the launch file as above. Thank you.

Originally posted by MirulJ on ROS Answers with karma: 15 on 2022-05-24
Post score: 0

Answer: The best place would be the file $(find turtlebot3_slam)/config/gmapping_params.yaml, since the line

```xml
<rosparam command="load" file="$(find turtlebot3_slam)/config/gmapping_params.yaml" />
```

is going to overwrite any ROS parameters that you have defined before it.

Originally posted by Martin Peris with karma: 5625 on 2022-05-24
This answer was ACCEPTED on the original site
Post score: 0
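For reference, the change the answer suggests would look roughly like this inside gmapping_params.yaml. A hedged sketch: particles is a real slam_gmapping parameter, but the rest of that file's contents are not shown here:

```yaml
# $(find turtlebot3_slam)/config/gmapping_params.yaml  (sketch)
particles: 100   # number of particles in the Rao-Blackwellized particle filter
```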
{ "domain": "robotics.stackexchange", "id": 37708, "tags": "ros, navigation, ros-kinetic, gmapping, turtlebot3" }
Invalid roslaunch XML syntax: no root tag
Question: I am trying to launch a file and getting this error every time. I was trying to use the TurtleBot3 map with a different robot, so I copied turtlebot3_house.world into my robot's workspace.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<launch>
  <include file="$(find robot)/world/turtlebot3_house.world">
  </include>
  <param name="robot_description" command="cat '$(find robot)/urdf/robot.xacro'"/>
  <arg name="x" default="0"/>
  <arg name="y" default="0"/>
  <arg name="z" default="0.5"/>
  <node name="robot_spawn" pkg="gazebo_ros" type="spawn_model" output="screen"
        args="-urdf -param robot_description -model robot -x $(arg x) -y $(arg y) -z $(arg z)" />
</launch>
```

My error:

    RLException: while processing /home/guru/first_robot/catkin_ws/src/robot/world/turtlebot3_house.world:
    Invalid roslaunch XML syntax: no root <launch> tag
    The traceback for the exception was written to the log file

The command I used:

    roslaunch robot spawn.launch

Originally posted by harish556 on ROS Answers with karma: 74 on 2021-03-31
Post score: 0

Original comments
Comment by Ranjit Kathiriya on 2021-03-31: Can you provide more description — for example: what command you are running to launch the file, the directory structure of your folders, etc. I think your launch file is correct, but also look at this link in case it helps.
Comment by harish556 on 2021-03-31: @ranjit I have just checked that; it seems that's not the problem, and I have also added the command here.

Answer: The error is pretty clear:

    RLException: while processing /home/guru/first_robot/catkin_ws/src/robot/world/turtlebot3_house.world:
    Invalid roslaunch XML syntax: no root <launch> tag

It states there is no root <launch> tag in turtlebot3_house.world, and that's correct: turtlebot3_house.world is a Gazebo .world file, not a .launch file you can include in another .launch file.
Originally posted by gvdhoorn with karma: 86574 on 2021-03-31 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by harish556 on 2021-03-31: so, how can i make it a launch file Comment by gvdhoorn on 2021-03-31: You can't. That's not how this works. Comment by harish556 on 2021-03-31: thanks, i got that Comment by tryan on 2021-03-31: This tutorial explains how to use a Gazebo world in a launch file.
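Following the tutorial linked in the comments, the usual pattern is to hand the .world file to Gazebo via gazebo_ros's empty_world.launch rather than include it directly. A hedged sketch: the world_name argument follows the standard gazebo_ros launch file, and the robot package path is the asker's:

```xml
<launch>
  <!-- Start Gazebo and load the custom world (sketch; not from the original answer) -->
  <include file="$(find gazebo_ros)/launch/empty_world.launch">
    <arg name="world_name" value="$(find robot)/world/turtlebot3_house.world"/>
  </include>
</launch>
```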
{ "domain": "robotics.stackexchange", "id": 36265, "tags": "ros" }
Is there a mistake in Lecture 5 of Stanford CS234 available on youtube?
Question: https://www.youtube.com/watch?v=buptHUzDKcE&list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u&index=5 At 53:45 the professor starts to describe temporal difference learning for linear value function approximation. At 56:20 the slide shows how the weights are updated. Is the equation for $ \Delta w $ correct? In my opinion, the term in brackets should be multiplied by $ X(s) - \gamma X(s') $ instead of $ X(s) $, because $ \frac {\partial ( \gamma X(s')^T w )} {\partial w} $ is not zero. Am I right? Answer: It's the notation that might be a bit confusing. Take a look at David Silver's slides, pages 10-15; he has a complete derivation. Do not forget that the term $r + \gamma V(s';w)$ is the target. She mentions that in the video, along with the fact that you are actually doing supervised learning with a target provided by a bootstrapped value (you try to minimize the Bellman error). In other words, you are trying to minimize the error between a target value and the estimate from your model. You do not know the target value, so you estimate it by bootstrapping. Then you have $SE=(y - \hat{y}(w))^2$. The target $y$ is considered known (as in supervised learning), so eventually you are searching for the weights $w$ that make the output $\hat{y}$ of your model close to $y$. What I wrote here applies to batch training (minimizing the mean squared error, MSE). So eventually, yes: the quantity $V(s';w)$ is treated as constant and has derivative 0.
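The semi-gradient point can be written out explicitly (standard TD(0) with linear features; the notation is mine, chosen to match the question's $X(s)$, and is not quoted from the lecture):

```latex
% Linear value estimate and the semi-gradient TD(0) update.
% The bootstrapped target r + \gamma X(s')^{\top} w is held fixed
% when differentiating, so only X(s) survives from the gradient.
\hat{v}(s; w) = X(s)^{\top} w, \qquad
\Delta w = \alpha \left( r + \gamma\, X(s')^{\top} w - X(s)^{\top} w \right) X(s)
```

Differentiating through the target as well would indeed produce the factor $X(s) - \gamma X(s')$ the asker expects, but that variant is the full-gradient (residual-gradient) method, which is a different algorithm from standard TD.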
{ "domain": "datascience.stackexchange", "id": 7803, "tags": "reinforcement-learning" }
If the $SU(3)$ Noether charge is not gauge-invariant how can we say Hadrons are colorless?
Question: The Noether charge associated with global $SU(3)$ invariance is \begin{equation} J^{\mu}_a=-f_{abc}F_c^{\nu\mu}A_{b\nu}-i\frac{\delta \mathcal{L}_{Matter}}{\delta D_{\mu}\psi}t_a\psi \end{equation} and it is conserved along the equations of motion, $\partial_{\mu}J^{\mu}_a=0$, therefore giving rise to the conserved charges \begin{equation} Q^a=\int d^3x\,\, J^0_a \end{equation} However, the current is not gauge invariant (nor gauge covariant), so how can we say that hadrons are colorless? In particular, can one say that the charge operator annihilates hadrons? \begin{equation} \hat{Q}^a|Hadron\rangle=0 \end{equation} Answer: $Q^a=Q^a_0 \neq 0$ for fixed $Q^a_0$ is not a gauge-invariant statement; $Q^a=0$ is gauge invariant. In words, being "red" is not a gauge-invariant statement, but being "colorless" is.
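To spell the answer out: under a global color rotation $U$, the charges transform in the adjoint representation (a schematic statement in conventions I am assuming, not quoted from the answer):

```latex
% The color charges rotate among themselves under a global SU(3) rotation,
% so any fixed nonzero value Q^a_0 is frame-dependent,
% while Q^a = 0 for all a is preserved by the rotation.
Q^a \;\longrightarrow\; R^a{}_{b}(U)\, Q^b , \qquad R(U) \in \mathrm{adj}\; SU(3)
```

A vanishing adjoint vector is the only value left invariant by all such rotations, which is why "colorless" ($Q^a = 0$) is a meaningful, gauge-invariant property of hadrons.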
{ "domain": "physics.stackexchange", "id": 45693, "tags": "gauge-theory, quantum-chromodynamics, noethers-theorem, confinement" }
tf view_frames syntax error
Question: I have ROS Indigo and Ubuntu 14.04.5 LTS. A few months ago, when I ran rosrun tf view_frames, it worked, but now it gives me a SyntaxError (as seen below). I can't fix the syntax myself, because I don't have permission to change files inside the ROS folder. Is there a solution to this problem?

    ubuntu@ubuntu-MS-7817:~$ rosrun tf view_frames
      File "/opt/ros/indigo/lib/tf/view_frames", line 57
        print "Listening to /tf for %f seconds"%duration
              ^
    SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Listening to /tf for %f seconds"%duration)?

Originally posted by Kri on ROS Answers with karma: 41 on 2018-02-28
Post score: 0

Original comments
Comment by gvdhoorn on 2018-02-28: I just let your post out of the moderation queue, but if I applied the support guidelines strictly, I should immediately close it. Was "Please DO NOT post a screenshot of the terminal or source file" not clear enough?
Comment by gvdhoorn on 2018-02-28: As to your issue: have you installed Anaconda or Python 3 between now and "a few months ago"? If so, it could be that python3 is now the default interpreter, and view_frames has not been made Python 3 compatible yet, leading to the syntax error.
Comment by Kri on 2018-03-01: @gvdhoorn I'm sorry, I didn't see the guidelines before. Luckily my issue isn't the sort that would require copying the error to find out what is wrong.
Comment by Kri on 2018-03-01: @gvdhoorn I don't remember ever doing anything with Python on this computer, but my roommates might; I wouldn't know. Is there a way to change the default Python interpreter version?
Comment by gvdhoorn on 2018-03-01: "I didn't see the guidelines before" — they were shown to you almost fullscreen when you created your post. "My issue isn't of that sort that would require copying the error" — I'd still like you to replace the image with the copy-pasted text of the error.
Comment by Kri on 2018-03-01: @gvdhoorn I have edited the question.
Is this done right?
Comment by gvdhoorn on 2018-03-01: That's great, thanks. As to your problem: what is the output of python --version?
Comment by simff on 2018-03-01: An alternative way to view tf frames is rosrun rqt_tf_tree rqt_tf_tree. This requires having that package in your workspace :)
Comment by gvdhoorn on 2018-03-01: That would be a possible work-around, yes, but it might be better to get to the bottom of this so we can make sure it's not an issue with the packages.

Answer: @gvdhoorn it used to output "python3". I changed it into "python2", but the error remained. I used the command which python to see the path and discovered it leads to a python2 inside a miniconda3 folder. I deleted the miniconda3 folder and now it works. Thank you for your patience and help.

Originally posted by Kri with karma: 41 on 2018-03-01
This answer was ACCEPTED on the original site
Post score: 0

Original comments
Comment by gvdhoorn on 2018-03-01: Let's make this the answer. I would still like it if you could check whether you are up to date, as the exact problem you are reporting should have been fixed (on 2017-07-24 already).
Comment by gvdhoorn on 2018-03-01: @Kri: afaict, ros/geometry#151 should have resolved some lingering python2/3 issues with tf, even on Indigo. Can you make sure you're up to date?
Comment by Kri on 2018-03-01: @gvdhoorn I don't know how to do that. I don't even understand what that means, which makes it hard to search for.
Comment by gvdhoorn on 2018-03-01: When was the last time you did a sudo apt-get update and sudo apt-get upgrade, or used the Ubuntu Software Centre to update your ROS packages (assuming you use Ubuntu)?
Comment by gvdhoorn on 2018-03-01: Note that I'm not saying you should upgrade now. I just want to figure out why you're running into an issue that should have been fixed quite some time ago.
Comment by Kri on 2018-03-01: That might be problematic.
I did that a few days ago, but since then the updated Ubuntu doesn't get past the login screen (the update broke something). Therefore I have to choose an older version of Ubuntu from GRUB. I don't know what else is out of date when I run the older version.
Comment by gvdhoorn on 2018-03-01: Ok. Let's just chalk this up to using an unsupported Python version for now, and assume it has been fixed in recent versions of tf.
{ "domain": "robotics.stackexchange", "id": 30172, "tags": "ros-indigo" }
Is there an accepted name for Ross Quinlan's adaptation of the ID3 decision algorithm to use a Pearson's chi-squared test for independence?
Question: In Ross Quinlan's seminal paper Induction of Decision Trees, Quinlan summarizes the state of machine learning in 1985 and loudly introduces the ID3 decision algorithm in the context of its peers in the field. However, he also states this halfway into the paper:

    An alternative method based on the chi-square test for stochastic independence has been found to be more useful.

And further in:

    Hart notes, however, that the chi-square test is valid only when the expected values of $p'_i$ and $n'_i$ are uniformly larger than four. This condition could be violated by a set C of objects either when C is small or when few objects in C have a particular value of some attribute, and it is not clear how such sets would be handled. No empirical results with this approach are yet available.

The name for this modification of the algorithm is not given, though. Is there an accepted name for this modification of ID3 (which usually depends on Shannon entropy to perform branching)? It seems like something that should be obvious (e.g., ID3.1.x), but I cannot find a formal source. I've also taken the liberty of perusing and studying the meta links within the FAQ of this fine site, and this didn't seem to fall into any of the bins for obviousness or non-research level, nor did it appear to be a good fit for other StackExchange sites. It is, however, a simple question that I could not find the answer to in any of the common sources. This is directly pertinent to my research: I'm compiling another optimization that bridges the gap between generalized Shannon mutual information and Pearson's chi-squared test for independence in the context of TDIDT, then applying it to multivariate characteristics in cancer data as an alternative to random forests. As such, working definitions for my terms are greatly useful, insofar as they might lead to further papers I am not familiar with. Answer: Unfortunately, I'm not able to go into the details of the ID3 paper you cited.
I read Quinlan's book about C4.5 and I don't remember any mention of a chi-square splitting procedure. However, there does exist a decision-tree induction method that uses chi-squares as its splitting procedure: CHAID (http://en.wikipedia.org/wiki/CHAID). By the way, do you really need a chi-square splitting procedure? The good performance of C4.5 is well known, and it uses information gain as its splitting criterion. Moreover, classical data mining books (such as Introduction to Data Mining by Tan et al.) state that better performance can be achieved with different pruning techniques rather than different splitting criteria. If you really want a chi-square splitting procedure, I would suggest using CHAID or modifying the Java implementation of C4.5 in WEKA.
{ "domain": "cstheory.stackexchange", "id": 1189, "tags": "reference-request, machine-learning, terminology, decision-trees" }
how to write italic script in Rstudio
Question: I have a problem writing italic text in RStudio. I used the script in the screenshot and got the error "Error in grobs[[i]]: subscript out of bounds." Answer: The title is misleading, as this error doesn't have anything to do with making the font italic. In Seurat v2, FeaturePlot does not return a ggplot2 object by default, so p in your case is NULL. You need to set do.return to TRUE in the FeaturePlot call. You should instead do:

```r
library(Seurat)
p <- FeaturePlot(pbmc_small, head(pbmc_small@var.genes), do.return = TRUE)
for (i in 1:length(p)) {
  p[[i]] <- p[[i]] + theme(plot.title = element_text(face = 'italic'))
}
cowplot::plot_grid(plotlist = p)
```
{ "domain": "bioinformatics.stackexchange", "id": 841, "tags": "rna-seq, single-cell" }
Connecting Bebop drone with ROS
Question: Hi, I want to control my Bebop 2 with ROS messages. I downloaded bebop_autonomy and built it with catkin. When I run the launch file, the driver gets killed. There is no log file, and I don't know why it fails. I have: ROS Indigo, Ubuntu 14.04, drone firmware 3.9. Has someone else had this problem? Is there another solution for using the drone with ROS?

PARAMETERS
/bebop/robot_description: <?xml version="1....
/rosdistro: indigo
/rosversion: 1.11.20

NODES
/bebop/
bebop_driver (bebop_driver/bebop_driver_node)
robot_state_publisher (robot_state_publisher/robot_state_publisher)

auto-starting new master
process[master]: started with pid [16121]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 829426e2-7cda-11e6-8573-0024e8cfd983
process[rosout-1]: started with pid [16134]
started core service [/rosout]
process[bebop/bebop_driver-2]: started with pid [16141]
process[bebop/robot_state_publisher-3]: started with pid [16152]
INFO [1474118862.405218917]: Initializing nodelet with 2 worker threads.
INFO [1474118862.468457685]: [BebopSDK] 15:27:42:468 | Bebop:230 - Bebop Cnstr()
INFO [1474118862.468590244]: Nodelet Cstr
INFO [1474118862.472868092]: Connecting to Bebop ...
INFO [1474118862.713580127]: [BebopSDK] 15:27:42:713 | CommandReceivedCallback:113 - Command Received Callback LWP id is: 16268 ERROR [1474118862.867989008]: [ARCONTROLLER_Network] 15:27:42:867 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co ERROR [1474118862.871437212]: [ARCONTROLLER_Network] 15:27:42:871 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co ERROR [1474118862.874543473]: [ARCONTROLLER_Network] 15:27:42:874 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co ERROR [1474118862.940212578]: [ARCONTROLLER_Network] 15:27:42:940 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 common.CommonState.UNKNOWN -> Unknown command INFO [1474118862.953768631]: [BebopSDK] 15:27:42:953 | Connect:326 - BebopSDK inited, lwp_id: 16141 INFO [1474118862.953857749]: Fetching all settings from the Drone .. . INFO [1474118863.071386661]: Value for GPSSettingsHomeTypeType recved: 0 INFO [1474118863.071473543]: [CB] 15:27:43:071 | Update:1703 - Checking if GPSSettingsHomeTypeType exists in params ... 
INFO [1474118863.073020388]: [CB] 15:27:43:072 | Update:1706 - No ERROR [1474118863.092875287]: [ARCONTROLLER_Network] 15:27:43:092 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co ERROR [1474118863.100761206]: [ARCONTROLLER_Network] 15:27:43:100 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co ERROR [1474118863.104702699]: [ARCONTROLLER_Network] 15:27:43:104 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co INFO [1474118863.189877636]: Value for PilotingSettingsMaxAltitudeCurrent recved: 30 INFO [1474118863.189995598]: [CB] 15:27:43:189 | Update:111 - Checking if PilotingSettingsMaxAltitudeCurrent exists in params ... INFO [1474118863.191580856]: [CB] 15:27:43:191 | Update:114 - No INFO [1474118863.193173377]: Value for PilotingSettingsMaxDistanceValue recved: 160 INFO [1474118863.193235326]: [CB] 15:27:43:193 | Update:352 - Checking if PilotingSettingsMaxDistanceValue exists in params ... INFO [1474118863.194297682]: [CB] 15:27:43:194 | Update:355 - No INFO [1474118863.200952648]: Value for PilotingSettingsNoFlyOverMaxDistanceShouldnotflyover recved: 0 INFO [1474118863.201026121]: [CB] 15:27:43:201 | Update:432 - Checking if PilotingSettingsNoFlyOverMaxDistanceShouldnotflyover exists in params ... INFO] [1474118863.202526521]: [CB] 15:27:43:202 | Update:435 - No INFO [1474118863.204033976]: Value for PilotingSettingsMaxTiltCurrent recved: 20 INFO [1474118863.204078953]: [CB] 15:27:43:204 | Update:191 - Checking if PilotingSettingsMaxTiltCurrent exists in params ... 
INFO [1474118863.204989544]: [CB] 15:27:43:204 | Update:194 - No INFO [1474118863.206983024]: Value for PilotingSettingsAbsolutControlOn recved: 0 INFO [1474118863.207043856]: [CB] 15:27:43:207 | Update:271 - Checking if PilotingSettingsAbsolutControlOn exists in params ... INFO [1474118863.207771113]: [CB] 15:27:43:207 | Update:274 - No INFO [1474118863.219866157]: Value for PilotingSettingsBankedTurnValue recved: 0 INFO [1474118863.219941096]: [CB] 15:27:43:219 | Update:513 - Checking if PilotingSettingsBankedTurnValue exists in params ... INFO [1474118863.221278557]: [CB] 15:27:43:221 | Update:516 - No INFO [1474118863.222701014]: Value for SpeedSettingsMaxVerticalSpeedCurrent recved: 1 INFO [1474118863.222754303]: [CB] 15:27:43:222 | Update:1075 - Checking if SpeedSettingsMaxVerticalSpeedCurrent exists in params ... INFO [1474118863.223785230]: [CB] 15:27:43:223 | Update:1078 - No INFO [1474118863.225613466]: Value for SpeedSettingsMaxRotationSpeedCurrent recved: 100 INFO [1474118863.225654602]: [CB] 15:27:43:225 | Update:1155 - Checking if SpeedSettingsMaxRotationSpeedCurrent exists in params ... INFO [1474118863.226362234]: [CB] 15:27:43:226 | Update:1158 - No INFO [1474118863.234484077]: Value for SpeedSettingsHullProtectionPresent recved: 0 INFO [1474118863.234542464]: [CB] 15:27:43:234 | Update:1235 - Checking if SpeedSettingsHullProtectionPresent exists in params ... INFO [1474118863.235710071]: [CB] 15:27:43:235 | Update:1238 - No INFO [1474118863.237468674]: Value for SpeedSettingsOutdoorOutdoor recved: 1 INFO [1474118863.237520497]: [CB] 15:27:43:237 | Update:1315 - Checking if SpeedSettingsOutdoorOutdoor exists in params ... INFO [1474118863.238472852]: [CB] 15:27:43:238 | Update:1318 - No INFO [1474118863.251575217]: Value for SpeedSettingsMaxPitchRollRotationSpeedCurrent recved: 300 INFO [1474118863.251644220]: [CB] 15:27:43:251 | Update:1395 - Checking if SpeedSettingsMaxPitchRollRotationSpeedCurrent exists in params ... 
INFO 1474118863.252927833]: [CB] 15:27:43:252 | Update:1398 - No INFO [1474118863.254569382]: Value for NetworkSettingsWifiSelectionType recved: 1 INFO [1474118863.254622043]: [CB] 15:27:43:254 | Update:1501 - Checking if NetworkSettingsWifiSelectionType exists in params ... INFO [1474118863.255588437]: [CB] 15:27:43:255 | Update:1504 - No INFO [1474118863.258823625]: Value for NetworkSettingsWifiSelectionBand recved: 0 INFO [1474118863.259016945]: [CB] 15:27:43:258 | Update:1522 - Checking if NetworkSettingsWifiSelectionBand exists in params ... INFO [1474118863.259923764]: [CB] 15:27:43:259 | Update:1525 - No INFO [1474118863.260750476]: Value for NetworkSettingsWifiSelectionChannel recved: 6 INFO [1474118863.260784000]: [CB] 15:27:43:260 | Update:1543 - Checking if NetworkSettingsWifiSelectionChannel exists in params ... INFO [1474118863.261342241]: [CB] 15:27:43:261 | Update:1546 - No INFO [1474118863.290460818]: Value for PictureSettingsVideoStabilizationModeMode recved: 0 INFO [1474118863.290513059]: [CB] 15:27:43:290 | Update:1623 - Checking if PictureSettingsVideoStabilizationModeMode exists in params ... 
INFO [1474118863.291447187]: [CB] 15:27:43:291 | Update:1626 - No ERROR [1474118863.293378298]: [ARCONTROLLER_Network] 15:27:43:293 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co [ERROR] [1474118863.296448102]: [ARCONTROLLER_Network] 15:27:43:296 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co [ERROR] [1474118863.315566944]: [ARCONTROLLER_Network] 15:27:43:315 | ARCONTROLLER_Network_ReaderRun:940 - ARCOMMANDS_Decoder_DecodeBuffer () failed : 2 ARDrone3.PictureSettingsState.UNKNOWN -> Unknown co INFO] [1474118863.330631988]: Value for GPSSettingsReturnHomeDelayDelay recved: 60 [ INFO] [1474118863.330693518]: [CB] 15:27:43:330 | Update:1783 - Checking if GPSSettingsReturnHomeDelayDelay exists in params ... INFO] [1474118863.333176795]: [CB] 15:27:43:333 | Update:1786 - No INFO] [1474118863.336192262]: Value for GPSSettingsHomeTypeType recved: 0 INFO] [1474118863.336302541]: [CB] 15:27:43:336 | Update:1703 - Checking if GPSSettingsHomeTypeType exists in params ... INFO] [1474118863.338623088]: [CB] 15:27:43:338 | Update:1711 - Yes INFO] [1474118866.113073247]: Dynamic reconfigure callback with level: -1 INFO] [1474118866.136688121]: Enabling video stream ... WARN] [1474118866.136829969]: [BebopSDK] 15:27:46:136 | StartStreaming:364 - Video streaming started ... INFO] [1474118866.136956032]: Nodelet lwp_id: 16141 INFO] [1474118866.137009601]: bebop_driver nodelet loaded. 
INFO] [1474118866.137100045]: [CameraThread] thread lwp_id: 16463 INFO] [1474118866.137190559]: [AuxThread] thread lwp_id: 16464 WARN] [1474118866.158027497]: [ARSTREAM2_H264Filter] 15:27:46:157 | ARSTREAM2_H264Filter_RtpReceiverNaluCallback:1201 - ARSTREAM2_RtpReceiver NALU buffer is too small, truncated AU (or maybe it's the fi WARN] [1474118866.167337480]: [ARSTREAM2_H264Filter] 15:27:46:167 | ARSTREAM2_H264Filter_enqueueCurrentAu:481 - AU output cancelled (waitForSync) WARN] [1474118866.201722369]: [ARSTREAM2_H264Filter] 15:27:46:201 | ARSTREAM2_H264Filter_enqueueCurrentAu:481 - AU output cancelled (waitForSync) WARN] [1474118866.231175626]: [ARSTREAM2_H264Filter] 15:27:46:231 | ARSTREAM2_H264Filter_enqueueCurrentAu:481 - AU output cancelled (waitForSync) WARN] [1474118866.266660026]: [ARSTREAM2_H264Filter] 15:27:46:266 | ARSTREAM2_H264Filter_enqueueCurrentAu:481 - AU output cancelled (waitForSync) WARN] [1474118866.292939904]: [ARSTREAM2_H264Filter] 15:27:46:292 | ARSTREAM2_H264Filter_sync:161 - SPS/PPS sync OK INFO] [1474118866.292996196]: [BebopSDK] 15:27:46:292 | DecoderConfigCallback:147 - H264 configuration packet received: #SPS: 27 #PPS: 9 (MP4? 0) WARN] [1474118866.293077003]: [ARSTREAM2_H264Filter] 15:27:46:293 | ARSTREAM2_H264Filter_generateGrayIFrame:568 - Waiting for a slice to generate gray I-frame WARN] [1474118866.293136577]: [ARSTREAM2_H264Filter] 15:27:46:293 | ARSTREAM2_H264Filter_generateGrayIFrame:568 - Waiting for a slice to generate gray I-frame WARN] [1474118866.293288622]: [ARSTREAM2_H264Filter] 15:27:46:293 | ARSTREAM2_H264Filter_generateGrayIFrame:618 - Gray I slice NALU output size: 1629 INFO] [1474118866.293336323]: [BebopSDK] 15:27:46:293 | FrameReceivedCallback:174 - Frame Recv & Decode LWP id: 16466 INFO] [1474118866.295966127]: [Decoder] 15:27:46:295 | InitCodec:133 - H264 Codec is initialized! INFO [1474118866.296011175]: [Decoder] 15:27:46:295 | Decode:235 - Updating H264 codec parameters (Buffer Size: 36) ... 
[bebop/bebop_driver-2] process has died [pid 16141, exit code -11, cmd /home/dennis/bebop_ws/devel/lib/bebop_driver/bebop_driver_node __name:=bebop_driver __log:=/home/dennis/.ros/log/829426e2-7cda-11e6-8573-0024e8cfd983/bebop-bebop_driver-2.log]. log file: /home/dennis/.ros/log/829426e2-7cda-11e6-8573-0024e8cfd983/bebop-bebop_driver-2*.log Originally posted by Trinto on ROS Answers with karma: 16 on 2016-09-17 Post score: 0 Original comments Comment by Trinto on 2016-09-19: It's about ARSTREAM2_H264. It's sending to the driver and that is killing the driver. Comment by jacobperron on 2016-09-19: Have you tried using this catkinized version of ARSDK? Try building that from source and then checkout and build the dev branch 2-parrot-sdk in bebop_autonomy Answer: I solved the problem myself by upgrading the SDK (the steps for it are in bebop_autonomy, under the hood). It's difficult, because there is no new revision key. You have to change the hash in generate.py to master/xml, change the xml tags at the end (ardrone3.xml and Command.xml), and in url-mustache it's not libARCommands, it's arsdk-xml; also change Xml to xml. Then delete ALL ".xml" in all filenames and in the files! After that, in ardrone3_setting_callbacks.h, change pilotingsettingsstate_maxdistance_value to .._current, both circling_xx values with current as well, and something like the bankedturn value to STATE. After that you can catkin the workspace. The next problem is that the driver dies because the new frames are too big, so give the frames more space in bebop_decode.cpp. Now you can communicate with the drone. Originally posted by Trinto with karma: 16 on 2016-09-19 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 25784, "tags": "ros, bebop-autonomy" }
A probabilistic set with no false positives?
Question: So, Bloom filters are pretty cool -- they are sets that support membership checking with no false negatives, but a small chance of a false positive. Recently though, I've been wanting a "Bloom filter" that guarantees the opposite: no false positives, but potentially false negatives. My motivation is simple: given a huge stream of items to process (with duplicates), we'd like to avoid processing items we've seen before. It doesn't hurt to process a duplicate, it is just a waste of time. Yet, if we neglected to process an element, it would be catastrophic. With a "reverse Bloom filter", one could store the items seen with little space overhead, and avoid processing duplicates with high probability by testing for membership in the set. Yet I can't seem to find anything of the sort. The closest I've found are "retouched Bloom filters", which allow one to trade selected false positives for a higher false negative rate. I don't know how well their data structure performs when one wants to remove all false positives, however. Anyone seen anything like this? :) Answer: One answer is to use a big hash table and when it fills up start replacing elements in it rather than finding (nonexistent) empty slots elsewhere for them. You don't get the nice fixed-rate of false answers that you do with Bloom filters, but it's better than nothing. I believe this is standard e.g. in chess software for keeping track of positions that have already been searched.
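The answer's hash-table-with-replacement idea fits in a few lines; here is a sketch (Python chosen for brevity; the structure is language-agnostic, and nothing below comes from the original answer beyond the idea itself). Because each slot stores the element itself rather than a fingerprint, a "yes" answer is always correct, i.e. no false positives; an element evicted by a colliding newcomer simply becomes a false negative later, which is the acceptable failure mode described in the question.

```python
class LossySet:
    """Fixed-size membership filter: no false positives, possible false negatives.

    Each slot stores the element itself, so contains() can only answer
    "yes" for an element that was actually added.  When two elements hash
    to the same slot, the newer one evicts the older; the evicted element
    then reads as "not seen" (a false negative).  Assumes items are never None.
    """

    def __init__(self, size):
        self.slots = [None] * size

    def add(self, item):
        self.slots[hash(item) % len(self.slots)] = item

    def contains(self, item):
        return self.slots[hash(item) % len(self.slots)] == item


seen = LossySet(1024)
seen.add("job-42")
assert seen.contains("job-42")      # a retained item is always found
assert not seen.contains("job-43")  # never a false positive: the slot holds the real key
```

Note that even when "job-43" happens to collide with "job-42", `contains("job-43")` still correctly returns False, because the stored key is compared, not just the hash.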
{ "domain": "cstheory.stackexchange", "id": 3403, "tags": "ds.data-structures" }
What kind of neural network should I build to classify each instance of a time series sequence?
Question: Let's say I have the time-series dataset below. I would like to train a model in such a way that, if I feed the model an input like the test sequence below, it should be able to classify each sample with the correct class label.

Training Sequence:
Time, Bitrate, Class
0, 312, 1
0.3, 319, 1
0.5, 227, 0
0.6, 229, 0
0.7, 219, 0
0.8, 341, 1
0.9, 281, 0
...

Test Sequence:
Time, Bitrate -----> Predicted Class
0, 234 -----> 0
0.2, 261 -----> 0
0.4, 277 -----> 0
0.7, 301 -----> 1
0.8, 305 -----> 1
0.9, 343 -----> 1
1.0, 299 -----> 0
...

So, what kind of neural network should I build to classify each instance of a time-series sequence? Answer: A time series usually requires regular time intervals, but from looking at your example it seems that's not the case. You could try using an MLP, giving it the Time and Bitrate pairs as input and making it output the Class. The activation function is what makes your neural network produce its output, i.e. activate the neurons. The loss function calculates the error of your model's predictions; this error is used by the backpropagation algorithm to adjust the model during training. If your data can only be in one of two classes, for example 0 or 1, you have a binary classification problem. I recommend using the Sigmoid as the activation function. If you look at the Sigmoid's plot, you'll notice that, no matter the input, the output will always be within 0 and 1. You can think of this as the probability of the input being in class 0 or 1. So, for example, any input that produces a probability under 0.5 is class 0. Of course, if you want to be more precise you can simply use the probabilities directly without mapping them to labels. For example, an output of 0.7 means there is a 70% probability of the input belonging to class 1. Another term for this is Logistic Regression. Binary cross-entropy calculates the error between the model's prediction and the real class label.
This loss function can be generalized to many classes, but the binary version keeps it to 0 and 1. Choosing the right activation function and loss function will depend on the problem you are trying to solve. Here are some useful guides on loss functions and activation functions. In addition to the above, try scaling (normalizing) the inputs before feeding them to the network, so that the network can generalize better; it looks like bitrate values above 300 are likely to be associated with class 1. Normalization may also improve performance.
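To make the sigmoid-and-threshold recipe concrete, here is a minimal sketch in plain Python using the question's own training rows (no framework; the single neuron's weight and bias are hand-picked for illustration, not trained, and the scale factor 350 is likewise an arbitrary choice, none of it from the original answer):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def binary_cross_entropy(y_true, y_prob):
    eps = 1e-12  # keep log() away from 0
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)
        total += t * math.log(p) + (1 - t) * math.log(1.0 - p)
    return -total / len(y_true)

# The question's training rows.
bitrate = [312, 319, 227, 229, 219, 341, 281]
labels  = [1,   1,   0,   0,   0,   1,   0]

x = [b / 350.0 for b in bitrate]   # scale the input, as suggested above
w, b = 20.0, -17.0                 # hand-picked weight/bias (cutoff near bitrate ~300)

probs = [sigmoid(w * xi + b) for xi in x]
preds = [1 if p >= 0.5 else 0 for p in probs]  # probability -> hard class label
assert preds == labels

bce = binary_cross_entropy(labels, probs)  # small for these hand-picked parameters
```

A real model would learn `w` and `b` by minimizing the binary cross-entropy with gradient descent; the point here is only the mapping from sigmoid output to probability to hard label.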
{ "domain": "ai.stackexchange", "id": 2801, "tags": "classification, long-short-term-memory, time-series, algorithm-request, model-request" }
publishing file stream with ros
Question: Hi, I would like to create a service which publishes, as a topic, a byte stream with the content of a file. Is it possible to define an ifstream data type variable inside a ROS service? Or would it be possible to use a primitive type for this purpose? Best Regards Originally posted by cmsoda on ROS Answers with karma: 31 on 2011-06-27 Post score: 0 Answer: You can only use some basic data types. For your application you would use a primitive type (probably a uint8[]). Originally posted by dornhege with karma: 31395 on 2011-06-27 This answer was ACCEPTED on the original site Post score: 0
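As a sketch of what the accepted answer suggests (Python shown; the message type std_msgs/UInt8MultiArray, the topic name, and the helper below are illustrative choices, not from the answer, and the rospy calls are left as comments so the byte-packing logic stands alone):

```python
import os
import tempfile

# In a real node this payload would go out as e.g. std_msgs/UInt8MultiArray:
#   import rospy
#   from std_msgs.msg import UInt8MultiArray
#   pub = rospy.Publisher("file_bytes", UInt8MultiArray, queue_size=1)
#   pub.publish(UInt8MultiArray(data=payload))
# (topic and helper names are illustrative, not standard)

def file_to_uint8_payload(path):
    """Read a file and return its contents as a list of ints in [0, 255],
    i.e. exactly what a uint8[] message field carries."""
    with open(path, "rb") as f:
        return list(f.read())

# Round-trip check with a temporary file:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello\x00\xff")
tmp.close()
payload = file_to_uint8_payload(tmp.name)
os.unlink(tmp.name)
assert payload == [104, 101, 108, 108, 111, 0, 255]
assert bytes(payload) == b"hello\x00\xff"  # the subscriber can reassemble the file
```

For large files you would normally chunk the payload into multiple messages rather than publishing the whole file at once.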
{ "domain": "robotics.stackexchange", "id": 5972, "tags": "ros, stream, rosmsg" }
Calculation of RGB values given min and max values
Question: I made a start at translating the accepted answer's code here into C#. Any suggestions on the correctness of the translation and the correctness of the original code would be very much appreciated. public class RgbValues { public int Red { get; set; } public int Green { get; set; } public int Blue { get; set; } } public static RgbValues GetRgbValues(float minimumValue, float maximumValue, float value) { var rgbValues = new RgbValues(); var halfmax = (minimumValue + maximumValue) / 2.0; rgbValues.Blue = (int) Math.Max(0.0, 255.0 * (1.0 - value/halfmax)); rgbValues.Red = (int)Math.Max(0.0, 255.0 * (value / halfmax - 1.0)); rgbValues.Green = 255 - rgbValues.Blue - rgbValues.Red; return rgbValues; } What is a bit worrying is that: GetRgbValues(10, 10113, 10113) should really calculate: red = 255, green = 0, blue = 0 rather than: red = 254, green = 1, blue = 0 I guess there is some rounding issue. Any ideas? Answer: Calculations like these can be hard to understand, so I would try to strive for something that is clearly correct, not just correct. 
What I would do: normalize the value from \$[min, max]\$ to \$[0, 2]\$ realize that the "strength" of a color is \$1 - |value - color|\$ (where \$color\$ is \$0\$ for blue, \$1\$ for green and \$2\$ for red), so compute that clip the computed strength to remove negative numbers finally multiply it by \$255\$ and convert to int (while also rounding the value, to avoid bias towards smaller numbers) In code: public static RgbValues GetRgbValues(float minimum, float maximum, float value) { var normalizedValue = Normalize(minimum, maximum, value); return new RgbValues { Blue = Distance(normalizedValue, 0), Green = Distance(normalizedValue, 1), Red = Distance(normalizedValue, 2) }; } private static float Normalize(float minimum, float maximum, float value) { return (value - minimum) / (maximum - minimum) * 2; } private static int Distance(float value, float color) { var distance = Math.Abs(value - color); var colorStrength = 1 - distance; if (colorStrength < 0) colorStrength = 0; return (int)Math.Round(colorStrength * 255); } This code is longer than the original, but I think it's also clearer about what it does. I didn't figure out what your code did until I read the SO question, I think it's much more likely I would have succeeded with this code.
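As a quick sanity check on the endpoint behaviour the question worried about, here is this answer's logic transliterated to Python (an addition for verification, not part of the original answer): with the normalize-then-clip approach, value == max comes out as pure red with no off-by-one.

```python
def normalize(minimum, maximum, value):
    return (value - minimum) / (maximum - minimum) * 2

def channel(value, color):  # color: 0 = blue, 1 = green, 2 = red
    strength = max(0.0, 1 - abs(value - color))
    return int(round(strength * 255))

def get_rgb(minimum, maximum, value):
    v = normalize(minimum, maximum, value)
    return channel(v, 2), channel(v, 1), channel(v, 0)  # (red, green, blue)

assert get_rgb(10, 10113, 10113) == (255, 0, 0)  # max -> pure red, no rounding loss
assert get_rgb(10, 10113, 10) == (0, 0, 255)     # min -> pure blue
assert get_rgb(0, 100, 50) == (0, 255, 0)        # midpoint -> pure green
```

The rounding problem in the question disappears because each channel is rounded independently from an exact strength, instead of green being computed as a remainder of two already-truncated channels.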
{ "domain": "codereview.stackexchange", "id": 9900, "tags": "c#" }
Use MOXIE on Earth
Question: I recently read about a machine called MOXIE that synthesised oxygen from carbon dioxide on Mars. I haven't been able to get an exact idea of how the thing works (though I've noticed analogies to trees). So my question is, why can't we use this machine on Earth? We know the crisis global warming is causing, and such a machine might be just what we need to tackle climate change. But strangely, I see that NASA has plans to use it for astronauts only. Why? Answer: By thermodynamics, producing O2 and C from CO2 requires more energy than was delivered by producing the same quantity of CO2 from burning O2 and C¹. It produces oxygen at a rate that is insignificant at planetary scales. The idea is to provide oxygen for a small habitat holding a few astronauts, not to change the composition of Mars's atmosphere, using energy obtained from sources other than burning fuel (solar, RTGs...). The utility for NASA would be if the mass of the machine plus its energy sources is less than that of the oxygen they would have to bring from Earth if they did not have this machine. On Earth the production would be irrelevant in the scope of global climate change. And if you powered it with a coal/gas/oil generator you would end up even worse than if you had done nothing (and that is without considering the ecological footprint, CO2 and otherwise, of building it). If you powered it with solar/wind/sea energy... well, you would do better by feeding that solar/wind/sea energy directly into the grid to reduce the amount of gas/oil/coal burnt to produce energy globally. From the energy point of view this could make sense once all of the energy on Earth is produced from carbon-free sources, and even in that case the problems of scale remain. Right now, the options we have are to rationalize the use of energy and to opt for carbon-free sources. 
¹There are some theoretical nitpicks about whether the law applies to reversed chemical reactions (if MOXIE produces the C in a form other than the original, the law would not strictly apply), but in practice the rule holds.
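To put the scale argument in rough numbers (every figure below is my own order-of-magnitude assumption from public reporting, not from the answer: MOXIE's output is on the order of 10 g of O2 per hour, and humanity emits on the order of 37 Gt of CO2 per year):

```python
# Back-of-envelope only; every input below is a rough assumption.
moxie_o2_g_per_hour = 10.0                # order of MOXIE's peak output (assumed)
hours_per_year = 365.25 * 24
moxie_o2_kg_per_year = moxie_o2_g_per_hour * hours_per_year / 1000.0  # ~88 kg/yr

global_co2_kg_per_year = 37e12            # ~37 Gt CO2/yr (assumed)
o2_mass_fraction_of_co2 = 32.0 / 44.0     # kg of O2 bound per kg of CO2
o2_locked_up_kg_per_year = global_co2_kg_per_year * o2_mass_fraction_of_co2

units_needed = o2_locked_up_kg_per_year / moxie_o2_kg_per_year
print(f"~{units_needed:.0e} MOXIE units just to keep pace")  # on the order of 1e11
```

Hundreds of billions of units, each needing its own carbon-free power source, which is the answer's point that the production rate is insignificant at planetary scale.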
{ "domain": "astronomy.stackexchange", "id": 5559, "tags": "mars, nasa" }
RSpec test for pagination
Question: This is an RSpec test for pagination in a Rails project. I'm not sure if I should write the test in spec/requests or spec/controllers. And there must be a lot of things that I could do better. Which part of the code should I refactor? spec/requests/companies_spec.rb describe "Companies" do describe "GET /companies" do before(:all) { 50.times { FactoryGirl.create(:company) }} describe "index" do context "with 50 companies" do it "has not second page" do visit root_path expect(page).to have_no_xpath("//*[@class='pagination']//a[text()='2']") end end context "with 51 companies" do before{ FactoryGirl.create(:company) } it "has second page" do visit root_path find("//*[@class='pagination']//a[text()='2']").click expect(page.status_code).to eq(200) end end end end end Answer: There are many ways to test this. Mostly, though, it'd be nice to avoid having to create 50+ records, since it slows down your tests. If you use a request spec, though, it's probably best to create 50+ records, since it's a high-level test, so you'll want to be close to the "real" usage scenario. But you can cheat a little in other places. For instance, if you have the records-per-page number defined in a way that's configurable, you can set it to something lower in your pagination test (or you can set it globally for the test environment). For instance, if the per-page is set to 2, you only need to create 3 records to test pagination. That'll be a lot faster than creating 51 records. If you're spec'ing the view itself, you can simply define the instance variables that'll trigger pagination links, and not bother with the actual records. Or you can use FactoryGirl.build_list to merely build the records and assign them to a view-accessible variable, without actually storing them in the database - again, faster. You can also look into mocking and stubbing to avoid actually creating the records. 
For your current code, you can do a couple of things, like: describe "Companies" do describe "GET /companies", order: :defined do before(:all) { FactoryGirl.create_list :company, PER_PAGE } context "with few records" do it "does not paginate records" do visit "/companies" expect(page).to have_no_xpath("//*[@class='pagination']//a[text()='2']") end end context "with many records" do it "paginates records" do FactoryGirl.create :company visit "/companies" expect(page).to have_xpath("//*[@class='pagination']//a[text()='2']") find("//*[@class='pagination']//a[text()='2']").click expect(page.status_code).to eq(200) end end end end Changes I've made: Using FactoryGirl.create_list to create a number of records at once. Using a PER_PAGE constant, just in case it isn't 50. This could also be an ENV var, an instance variable, or simply hard-coded. But naming it helps document the code. Using order: :defined to force the examples to be run in the order they're defined. This avoids the specs randomly failing because the 2nd test has been run before the first one. I've changed the visit path to /companies because that's what the spec is about. You used visit root_path, which no doubt worked fine, but the spec is about visiting /companies, so I find it nicer to keep it consistent. You might also want to check that the correct records actually show up on the page. I.e. attempt to find the name of the 51st company within the rendered page, when you've gone to the 2nd page's path. Lastly, you may want to add some specs for how the system should behave if you go to, say, page 4, but there aren't enough records to show anything. But again, I'd probably start with view/controller specs, before moving on to high-level request specs. Request specs are great because they test everything pretty close to actual usage. But that also makes them more complex, so the more you can check at a lower level, the better.
{ "domain": "codereview.stackexchange", "id": 11580, "tags": "ruby, pagination, rspec" }
Entity Framework - Add or Update Parent Entity with Children
Question: This code saves a DomainEntityRecord (parent) and DomainEntityDetailRecords (child), and it works. It feels odd that I get the record from the DB, then remove the entities that don't exist in the domain model version, then when I save, I save (AddOrUpdate) the domain entities, and not the EF version. Maybe this is the right way? public void Handle(SaveDomainEntity message) { var domainEntityRecord = mapper.Map<DomainEntityRecord>(message); var existingDomainEntity = dbContext.Set<DomainEntityRecord>() .Where(x => x.Id == domainEntityRecord.Id) .Include(x => x.DomainEntityDetailRecords) .SingleOrDefault(); if (existingDomainEntity != null) { // Delete detail records that no longer exist. foreach (var existingDetail in existingDomainEntity.DomainEntityDetailRecords.ToList()) { if (domainEntityRecord.DomainEntityDetailRecords.All( x => x.DetailId != existingDetail.DetailId)) { dbContext.Set<DomainEntityDetailRecord>().Remove(existingDetail); } } } dbContext.Set<DomainEntityRecord>().AddOrUpdate(domainEntityRecord); domainEntityRecord.DomainEntityDetailRecords.ForEach( record => dbContext.Set<DomainEntityDetailRecord>().AddOrUpdate(record)); dbContext.SaveChanges(); } Answer: The line ... dbContext.Set<DomainEntityRecord>().AddOrUpdate(domainEntityRecord); ... will always try to find an existing record in the database in order to determine whether domainEntityRecord should be marked as Added or as Modified. But you already fetch the existing record by the statement: var existingDomainEntity = dbContext.Set<DomainEntityRecord>() .Where(x => x.DomainEntityId == domainEntityRecord.DomainEntityId) .Include(x => x.DomainEntityDetailRecords) .SingleOrDefault(); So there is a redundant database roundtrip in your code. After this statement var existingDomainEntity = ... you know everything that AddOrUpdate is going to find out again. So you may as well do it yourself: if the record exists, modify it and its details; if it doesn't, add it. 
To modify the existing records, use CurrentValues.SetValues: var domainEntityRecord = mapper.Map<DomainEntityRecord>(message); var existingDomainEntity = dbContext.Set<DomainEntityRecord>() .Where(x => x.DomainEntityId == domainEntityRecord.DomainEntityId) .Include(x => x.DomainEntityDetailRecords) .SingleOrDefault(); if (existingDomainEntity != null) { // Delete detail records that no longer exist. foreach (var existingDetail in existingDomainEntity.DomainEntityDetailRecords.ToList()) { if (domainEntityRecord.DomainEntityDetailRecords.All( x => x.DomainEntityDetailId != existingDetail.DomainEntityDetailId)) { dbContext.Set<DomainEntityDetailRecord>().Remove(existingDetail); } } // Copy current (incoming) values to db entry: dbContext.Entry(existingDomainEntity).CurrentValues.SetValues(domainEntityRecord); var detailPairs = from curr in domainEntityRecord.DomainEntityDetailRecords join db in existingDomainEntity.DomainEntityDetailRecords on curr.DomainEntityDetailId equals db.DomainEntityDetailId into grp from db in grp.DefaultIfEmpty() select new { curr, db }; foreach(var pair in detailPairs) { if (pair.db != null) dbContext.Entry(pair.db).CurrentValues.SetValues(pair.curr); else dbContext.Set<DomainEntityDetailRecord>().Add(pair.curr); } } else { dbContext.Set<DomainEntityRecord>().Add(domainEntityRecord); // This also adds its DomainEntityDetailRecords } dbContext.SaveChanges(); As you see, for the details I use a GroupJoin (join - into which serves as an outer join) to determine the existing and the new details. The existing ones are modified, the new ones are added.
{ "domain": "codereview.stackexchange", "id": 27401, "tags": "c#, entity-framework" }
CSS mask clipping and overlay SVG to achieve a two effect
Question: I am currently attempting to create a two-SVG overlay/mask like the image below: I have created an SVG for the overlay. As it stands, I am trying to create two elements, one for the green side and one for the blue side. For what I am trying to achieve, is this the best approach? If not, what is? Is it worth creating two SVGs to achieve the overlay in the example below? .hero-overlay { position: absolute; top: 0; height: 100%; width: 100%; -webkit-mask: url("https://dl.dropboxusercontent.com/u/58412455/circle-mask.svg") no-repeat center center; mask: url("https://dl.dropboxusercontent.com/u/58412455/circle-mask.svg") no-repeat center center; clip: rect(0px, 580px, 500px, 0px); } .mask-left { background-color: red; } .mask-right { -webkit-transform: rotate(180deg); -ms-transform: rotate(180deg); transform: rotate(180deg); background-color: blue; } jsFiddle Answer: I would suggest using an inline svg to create these shapes. This will provide more browser support than using the CSS mask property (canIuse). Both sides are created with paths and filled with the images using the pattern element : DEMO * {margin: 0;padding: 0;} body { background: url('http://lorempixel.com/output/nature-q-g-800-600-5.jpg'); background-size: cover; } svg { display: block; width: 95vw; height: 47.5vw; margin: 2.5vw; } <svg viewbox="0 0 100 50"> <defs> <pattern id="img" patternUnits="userSpaceOnUse" x="0" y="0" width="100" height="50"> <image xlink:href="http://lorempixel.com/output/abstract-h-c-490-500-4.jpg" x="0" y="0" width="49.5" height="50" /> <image xlink:href="http://lorempixel.com/output/abstract-h-c-490-500-8.jpg" x="50.5" y="0" width="49.5" height="50" /> </pattern> </defs> <path d="M0 0 H49.5 V15 A10.02 10.02 0 0 0 49.5 35 V50 H0z" fill="url(#img)" fill-opacity="0.8" /> <path d="M50.5 0 H100 V50 H51 V35 A10.03 10.03 0 0 0 50.5 15z" fill="url(#img)" fill-opacity="0.8" /> </svg>
{ "domain": "codereview.stackexchange", "id": 13408, "tags": "html5, svg, css3" }
roscpp template programming: get statically the Stamped message type
Question: I would like to know if there is some method or trait class (in roscpp or another library) which allows getting the "associated stamped type" of a type T. For instance StampedType<T>::type. If it does not exist, IMO it should. It's a very interesting method for doing generic programming in ROS. BTW I've also missed message inheritance in ROS (but that is another issue where a lot of discussion is possible) Originally posted by Pablo Iñigo Blasco on ROS Answers with karma: 2982 on 2012-12-29 Post score: 1 Answer: I don't know of any implementation of that method. It might make an interesting library to contribute. However there's no way to do it automatically, as a stamped type is more of a convention, and not all messages have it. In the tf datatypes you will find a Stamped<> templated datatype for each type of data supported. Originally posted by tfoote with karma: 58457 on 2012-12-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Pablo Iñigo Blasco on 2012-12-31: Yep, I understand. It's a pity that Stamped<> is only for tf messages and the only allowed types are statically set. Moreover it forces you to use a tfBroadcaster instead of a generic ROS msg publisher. Comment by Pablo Iñigo Blasco on 2012-12-31: In any case it would be possible if there were a template mechanism to generate stamped msg datatypes. In that case Stamped<T> would be a rosmsg type, and all the StampedMsgTypes would be unnecessary. The method ros::Publisher::publish might accept T or Stamped<T>. Comment by Pablo Iñigo Blasco on 2012-12-31: The header data could be filled explicitly or implicitly by the ros::Publisher. At the same time, a subscriber could use either T or Stamped<T>. IMHO this could be possible while maintaining backwards compatibility. Comment by Pablo Iñigo Blasco on 2012-12-31: The header data could be filled explicitly or implicitly by the ros::Publisher... 
I think that this could be possible while maintaining backwards compatibility. Using the rosmsg infrastructure it would even be possible to make StampedPose compatible with Stamped<Pose>.
{ "domain": "robotics.stackexchange", "id": 12226, "tags": "ros, c++, message" }
Piston cylinder assembly with separate compartments
Question: Consider an ideal gas in a piston cylinder assembly, initially divided into 3 compartments by impermeable diathermal membranes. The compartments initially have the same mass and temperatures, but different pressures. The membranes are punctured, and the system settles to the final state with a uniform pressure, temperature, and volume. The goal is to find the final temperature $T_2$ in terms of the initial pressures $P_{1A}$, $P_{1B}$, $P_{1C}$, initial temperature $T_1$, and ideal gas properties. The first law of thermodynamics yields $\Delta E = Q-W=\frac{m}{3}c_v\Delta T + \frac{m}{3}c_v\Delta T + \frac{m}{3}c_v\Delta T$ But $Q=0$ so we have $-W=mc_v\Delta T$ Now, this is easy if the process is isobaric, so that $W=P_2(V_2-V_1)$ but is this a valid assumption? I'm going to assume that it is, and that the final pressure $P_2$ is the average of all the initial pressures, $P_2=\bar{P_1}=\frac{P_{1A}+P_{1B}+P_{1C}}{3}$. So we have $W=\bar{P_1}(V_2-V_1) = \bar{P_1}(\frac{mRT_2}{\bar{P_1}} - \frac{mRT_1}{3P_{1A}} - \frac{mRT_1}{3P_{1B}} - \frac{mRT_1}{3P_{1C}})$ Substituting this back into the first law will allow us to solve for $T_2$ in terms of $P_{1A}$, $P_{1B}$, $P_{1C}$, $T_1$, and gas properties. My solution, however, is contingent on assuming that $P_2=\bar{P_1}=\frac{P_{1A}+P_{1B}+P_{1C}}{3}$. I made this assumption because intuitively I think the pressures need to equilibrate to some value, and that value must be an average of all the pressures if the masses in each compartment are the same. Is this a good assumption? Is there a better way to solve this problem?
For the initial state, $$P_{1A}A=Mg+P_{atm}A\tag{1}$$where M is the mass of the piston, and the expansion work done by the system is $$W=\left(\frac{Mg}{A}+P_{atm}\right)\Delta V\tag{2}$$So, combining these equations, the work done by the system is just $$W=P_{1A}\Delta V$$
{ "domain": "physics.stackexchange", "id": 51962, "tags": "homework-and-exercises, thermodynamics, ideal-gas" }
Integrating Radial Vector Fields
Question: Given an integral $$\int_vd^3{r} \;\vec{r}\;\rho(r)$$ how do you convert it to the spherical coordinate system, noting that $\rho(r)$ is indeed as written, without a vector, i.e. it is spherically symmetric: $\rho(\vec{r})=\rho(r)$. $$\int_0^{\infty}\int_0^{2\pi}\int_0^\pi \dots \rho(r)r^2\sin(\phi)d\phi d\theta dr$$ I guess all I have to do is convert $\vec{r}$ from the radial vector coordinate system to the spherical coordinate system. But, I am stuck. ADDED There should not be $v$. $$\int d^3{r} \;\vec{r}\;\rho(r)$$ Actually, I was asked to calculate the dipole moment in this case. Answer: The integral of a vector, as you have written it, is just shorthand notation for a vector of integrals. Concretely, if we write $\vec r= (x,y,z)$ in cartesian coordinates, then \begin{align} \int d^3r\, \vec r\,\rho(r) &= \left(\int d^3 r\, x\,\rho(r),\int d^3 r\, y\,\rho(r),\int d^3 r\, z\,\rho(r) \right) \end{align} Now, we simply note the transformation between cartesian coordinates and spherical coordinates, and use this to evaluate each of these component integrals. In the convention with $\phi$ and $\theta$ as the polar and azimuthal coordinates respectively, we have \begin{align} x = r\sin\phi\cos\theta, \qquad y = r\sin\phi\sin\theta, \qquad z = r\cos\phi \end{align} so we get, for example, \begin{align} \int d^3 r\, x\,\rho(r) = \int dr\,d\phi\,d\theta \,(r^2\sin\phi)(r\sin\phi\cos\theta)\rho(r) \end{align} and similarly for the other components.
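Since the answer reduces the vector integral to three scalar integrals, one can check numerically that the angular factors kill every component when $\rho$ is spherically symmetric, so the dipole moment vanishes. A small sketch (pure NumPy; the grid sizes are arbitrary choices):

```python
# Numerically check that the angular factors in the x- and z-component
# integrals vanish; rho(r) and the radial integral factor out completely.
import numpy as np

n = 400
th = np.linspace(0, 2 * np.pi, n, endpoint=False)                # azimuthal theta
ph = np.linspace(0, np.pi, n, endpoint=False) + np.pi / (2 * n)  # polar phi, midpoints
dth, dph = 2 * np.pi / n, np.pi / n
TH, PH = np.meshgrid(th, ph)

ang_x = np.sum(np.sin(PH)**2 * np.cos(TH)) * dth * dph  # from x = r sin(phi) cos(theta)
ang_z = np.sum(np.sin(PH) * np.cos(PH)) * dth * dph     # from z = r cos(phi)
print(ang_x, ang_z)  # both vanish up to floating-point roundoff
```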
{ "domain": "physics.stackexchange", "id": 10126, "tags": "homework-and-exercises, electrostatics, coordinate-systems, dipole-moment" }
Construct a decidable set $B$ such that $B \neq A_w$ for any $w \in \Sigma^\star$
Question: I've been stuck on this problem for a while. Any hints would be appreciated! Let $A \subseteq \Sigma^\star$ be decidable. Given $w \in \Sigma^\star$, define $$A_w = \{x \in \Sigma^\star\:|\: \langle x, w \rangle \in A\}.$$ Construct a decidable set $B$ such that $B \neq A_w$ for any $w \in \Sigma^\star$. Answer: Here is a hint. Let $w\in B$ if $w\not\in A_w$ and let $w\not\in B$ if $w\in A_w$.
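The hint is a diagonalization. A toy Python sketch (the predicate $A$ and the word sample below are arbitrary illustrative choices) shows why $B$ cannot equal any slice $A_w$: they always disagree on the string $w$ itself.

```python
# Diagonal construction: B(w) = not A(w, w). For every w, B and the slice
# A_w disagree on the membership of w itself, so B != A_w.
def A(x, w):
    # An arbitrary decidable predicate on pairs <x, w>; any decidable A works.
    return len(x) % 2 == len(w) % 2

def B(w):
    return not A(w, w)

words = ["", "0", "1", "00", "01", "10", "11", "010"]
for w in words:
    A_w = {x for x in words if A(x, w)}   # (finite sample of) the slice A_w
    assert (w in A_w) != B(w)             # B and A_w differ at the witness w
print("B differs from every A_w at the witness w")
```

Note that $B$ is decidable because $A$ is: deciding $B(w)$ takes one call to the decider for $A$ on the pair $\langle w, w \rangle$.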
{ "domain": "cs.stackexchange", "id": 13712, "tags": "formal-languages, turing-machines, computability" }
Which algorithm to detect errors in 32bit data with 8bit parity
Question: I want to transmit a 32-bit message in eight groups of 5 bits each. This leaves me with 8 bits to use for error checking. Overall, a group is likely transmitted without error, but when there is an error transmitting a group, there are probably multiple bits wrong. If I use one parity bit per group, I have a 50% chance to detect a wrong group. But I don't need to know which group of a message is wrong, I want to check the entire message. I want a 100% chance of detecting if one group of the message is incorrect, regardless of how many bits are flipped in that group. If possible, I also want to be able to check whether two neighboring groups have been switched. Which algorithm should I use for error checking / How should I code the data? Answer: Think of your message as seven 5-bit numbers $x_2,\ldots,x_8 \in \{0,\ldots,30\}$. This gives you $7\log_2 31 \approx 34.68$ message bits. Calculate $x_1 = -\sum_{i=2}^8 ix_i \bmod{31}$, so that $$ \sum_{i=1}^8 ix_i \equiv 0 \pmod{31}. $$ It is easy to check that every single-symbol error is detected, since $31$ is prime. Now suppose that $x_i$ and $x_j$ are switched, but the equation above still holds. Then $ix_i + jx_j \equiv ix_j + jx_i$, and so $(i-j)(x_i - x_j) \equiv 0$, implying that either $i = j$ or $x_i = x_j$.
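The scheme in the answer is easy to sketch in code. Below is a minimal illustration (the function names are my own) of encoding seven data symbols plus the check symbol, then verifying that a corrupted group and an adjacent swap are both detected:

```python
# Mod-31 checksum from the answer: seven data symbols x2..x8 in {0..30},
# with x1 chosen so that sum(i * x_i) == 0 (mod 31).
def encode(data):                      # data = [x2, ..., x8]
    assert len(data) == 7 and all(0 <= x <= 30 for x in data)
    x1 = -sum(i * x for i, x in enumerate(data, start=2)) % 31
    return [x1] + data                 # transmitted message [x1, ..., x8]

def check(msg):                        # msg = [x1, ..., x8]
    return sum(i * x for i, x in enumerate(msg, start=1)) % 31 == 0

msg = encode([3, 14, 15, 9, 26, 5, 30])
assert check(msg)

bad = msg.copy(); bad[4] = (bad[4] + 7) % 31          # corrupt one symbol
assert not check(bad)                                  # detected (31 is prime)

swapped = msg.copy(); swapped[2], swapped[3] = swapped[3], swapped[2]
assert not check(swapped)              # adjacent swap detected (x3 != x4 here)
print("all checks pass")
```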
{ "domain": "cs.stackexchange", "id": 14602, "tags": "algorithms, coding-theory" }
Why are some protein sequences known but their 3D structure isn't?
Question: Why are there some proteins that have a known amino acid sequence, but their 3D structure is not known? Wouldn't finding the former in a lab lead to the discovery of the latter? Please correct me if I have misunderstood something. Answer: Protein sequencing is a nicely constrained problem: you have a one-dimensional sequence of amino acid members, which come from a limited set of options (made a bit more complicated by post-translational modifications, but not much more so). Because it's one-dimensional, it's a problem you can easily solve by chopping up a protein into little bits, using mass differences between amino acids to understand their constituents, and determining the order from that distribution. If a DNA (or mRNA) sequence is known, it becomes even easier - you can skip the protein sequencing process and get the amino acid sequence directly from the nucleic acid sequence and the genetic code. By comparison, protein folding is an absolute nightmare to solve for. Chemical bonds between amino acids are not rigid, they can bend and twist in all directions. The conformation of those bonds also depends not just on adjacent amino acids (as in a 1-D problem) but potentially on any other amino acid in the sequence (not to mention external influences...). In a large molecule like a protein there is a massive degrees-of-freedom problem. From Wikipedia, describing Levinthal's paradox, bold mine: In 1969, Cyrus Levinthal noted that, because of the very large number of degrees of freedom in an unfolded polypeptide chain, the molecule has an astronomical number of possible conformations. An estimate of $3^{300}$ or $10^{143}$ was made in one of his papers[1] (often incorrectly cited as the 1968 paper[2]). For example, a polypeptide of 100 residues will have 99 peptide bonds, and therefore 198 different phi and psi bond angles.
If each of these bond angles can be in one of three stable conformations, the protein may misfold into a maximum of $3^{198}$ different conformations (including any possible folding redundancy). Therefore, if a protein were to attain its correctly folded configuration by sequentially sampling all the possible conformations, it would require a time longer than the age of the universe to arrive at its correct native conformation. Now, of course that's not the actual process that proteins use to fold (they don't iterate through all possible combinations, they settle through an energy landscape where only certain intermediate conformations are realized), and we can use that in computational models to solve protein structures more quickly than the age of the universe, but it's still quite a slow process. Projects like Folding@home have aimed to distribute the computational load among unused processing power in devices around the world, including idle gaming consoles and personal computers, but there are many, many protein structures to solve. It's possible to get a general picture of protein shape using imaging techniques like X-ray crystallography or cryo-EM, and for some purposes these techniques give a lot of information, but these techniques are also by no means simple and can be prone to errors.
{ "domain": "biology.stackexchange", "id": 10870, "tags": "proteins, amino-acids, protein-structure, sequence-analysis" }
How to automate ANOVA in Python
Question: I am at the dimensionality reduction phase of my model. I have a list of categorical columns and I want to find the correlation between each column and my continuous SalePrice column. Below is the list of column names: categorical_columns = ['MSSubClass', 'MSZoning', 'LotShape', 'LandContour', 'LotConfig', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'Foundation', 'Heating', 'Electrical', 'Functional', 'GarageType', 'PavedDrive', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition', 'Street', 'CentralAir'] Because it's categorical vs. continuous, I've read that ANOVA is the best way to go, but I have never used it before and couldn't find a concise implementation of it in Python. I want to loop through and output the correlation between each element in the list and the SalePrice column. Answer: I am not sure ANOVA is the best and easiest way to find correlation between these categorical features and your target. You may see this great post where they propose many other methods along with ANOVA. If you persist in using the ANOVA test or the Kruskal-Wallis H test, you need to know how it works to give you that notion of correlation (variation of variance among groups of categoricals). It is nicely explained in that post: ANOVA estimates the variance of the continuous variable that can be explained through the categorical variable. One need to group the continuous variable using the categorical variable, measure the variance in each group and comparing it to the overall variance of the continuous variable. If the variance after grouping falls down significantly, it means that the categorical variable can explain most of the variance of the continuous variable and so the two variables likely have a strong association. If the variables have no correlation, then the variance in the groups is expected to be similar to the original variance.
Once you understand how it works, implementing it and automating it is not difficult. In fact, scipy and statsmodels have ANOVA. Check this post out, where they demonstrate in detail how to perform an ANOVA test on an actual dataset and estimate the correlation between a categorical variable and a continuous target. It is just a matter of putting these pieces together and changing a bit to make it work for your own dataframe.
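As a concrete starting point, here is one way the loop could look with scipy.stats.f_oneway (the tiny DataFrame and the helper name anova_scores are my own illustrative choices; in practice df would be the housing data with the real categorical_columns list):

```python
# Sketch: one-way ANOVA of SalePrice against each categorical column,
# collecting F statistics and p-values in a ranked table.
import pandas as pd
from scipy import stats

def anova_scores(df, categorical_columns, target="SalePrice"):
    rows = []
    for col in categorical_columns:
        # Split the continuous target into one group per category level.
        groups = [g[target].values for _, g in df.groupby(col)]
        f, p = stats.f_oneway(*groups)
        rows.append((col, f, p))
    out = pd.DataFrame(rows, columns=["feature", "F", "p_value"])
    return out.sort_values("F", ascending=False).reset_index(drop=True)

# Tiny made-up example with one categorical column:
df = pd.DataFrame({"Street": ["Pave", "Grvl", "Pave", "Grvl", "Pave", "Grvl"],
                   "SalePrice": [200, 100, 210, 95, 205, 110]})
print(anova_scores(df, ["Street"]))
```

A large F (small p-value) means the grouping explains much of the SalePrice variance, which is exactly the ANOVA notion of association described in the quote above.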
{ "domain": "datascience.stackexchange", "id": 5623, "tags": "machine-learning, python, feature-engineering, dimensionality-reduction, kaggle" }
Feature Extraction - calculate slope
Question: Having a bit of a mind-blank at the moment and am looking for some advice. I am extracting features from time series data for input into a classification algorithm; for example, I'm extracting average and variance from input X. For input Y, I have graphed the data and have seen that for class A there is an upward slope, for class B there is a downward slope, and for class C there is no slope, the line is more or less straight. For feature extraction, how can I best describe this? Would a calculation to get positive/negative slope be best? Answer: There are several ways to do this, here are a couple of options: Calculate different lag values (the difference between the value now and t time units ago) Calculate a linear regression for different time windows and store the slope and the bias You can also involve higher-order models to describe what is happening; if you think, for example, that the acceleration also matters, you could use a 2nd or 3rd degree polynomial over the past couple of observations.
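A sketch of the second option, a least-squares slope per sliding window (the window size and the toy series are arbitrary choices):

```python
# Slope of a least-squares line over a sliding window as a trend feature.
import numpy as np

def window_slopes(y, window):
    t = np.arange(window)
    # np.polyfit(..., 1) returns [slope, intercept]; keep the slope.
    return np.array([np.polyfit(t, y[i:i + window], 1)[0]
                     for i in range(len(y) - window + 1)])

up = np.arange(10, dtype=float)           # class A: upward trend
down = 10 - np.arange(10, dtype=float)    # class B: downward trend
flat = np.full(10, 5.0)                   # class C: no trend
print(window_slopes(up, 5)[0],            # ~ +1
      window_slopes(down, 5)[0],          # ~ -1
      window_slopes(flat, 5)[0])          # ~  0
```

The sign and magnitude of the slope then become the feature separating the three classes.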
{ "domain": "datascience.stackexchange", "id": 906, "tags": "time-series, feature-extraction, multiclass-classification" }
Inverse Discrete Time Fourier Transform of $1$
Question: $\textrm{DTFT}(\delta[n]) =1$, but $\textrm{IDTFT}(1) = \frac{\sin(\pi n)}{\pi n}$. Why is it not equal to the unit impulse $\delta[n]$? Answer: The IDTFT of $X(e^{j\omega})=1$ is indeed $$x[n]=\frac{\sin(n\pi)}{n\pi}\tag{1}$$ Now, what happens for indices $n\neq 0$? As it turns out, you can safely rewrite $(1)$ as $$x[n]=\delta[n]\tag{2}$$ where $\delta[n]$ is the discrete-time unit impulse. (HINT: think about where the zeros of $\sin(x)$ are).
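This is easy to check numerically: NumPy's np.sinc implements the normalized $\sin(\pi n)/(\pi n)$ (with the $n=0$ limit handled), and evaluating it at integer $n$ reproduces the unit impulse, up to floating-point roundoff:

```python
# Evaluate sin(pi*n)/(pi*n) at integer n; np.sinc uses the normalized
# definition and handles the n = 0 limit.
import numpy as np

n = np.arange(-5, 6)
x = np.sinc(n)
print(np.round(x, 12))  # 1 at n = 0, zero (to roundoff) at every other integer
```

The key point is that $x[n]$ is only defined at integer $n$, and $\sin(\pi n)$ vanishes at every nonzero integer, so the sequence is exactly $\delta[n]$.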
{ "domain": "dsp.stackexchange", "id": 7864, "tags": "discrete-signals, fourier-transform, digital-filters" }
Can upper motor neuron lesions cause hypotonia?
Question: I have been taught that hypotonia is always caused by lower motor neuron lesions while hypertonia is by upper motor neuron lesions. However, I recently learned of an entity called central hypotonia, which means hypotonia caused by a central nervous system lesion. This makes it so confusing. Answer: The rule-of-thumb you (and I) were taught reflects the role of the central and peripheral nervous systems in establishing muscle tone. Deprived of CNS regulation, alpha motor neurons increase in their responsiveness to spindle afferents, resulting in hypertonia. Deprived of PNS regulation, muscle spindles are less able to react to input, resulting in hypotonia. Beyond this rule-of-thumb, things aren't as black-and-white. As you have pointed out, there is such a condition as central hypotonia. In fact, the NINDS includes the CNS as an area where damage can cause hypotonia: "Hypotonia can happen from damage to the brain, spinal cord, nerves, or muscles." (source) Purves et al. also note that upper motor neuron syndrome involves an "initial period of 'hypotonia' after upper motor neuron injury" (Neuroscience, 2012, p. 395). To your question: Yes, the rule-of-thumb is not always accurate and upper motor neuron lesions can cause hypotonia. If you're looking for a hypothetical mechanism, consider this: Purves et al. also state that hypertonia is "probably caused by the removal of suppressive influences exerted by the cortex" (p. 396). So it is conceivable that overstimulation of these inhibitory control systems by some sort of unique upper motor neuron lesion could result in hypotonia.
{ "domain": "biology.stackexchange", "id": 9369, "tags": "neuroscience, medicine, neurology" }
Is this pseudo science or real: code found in superstring
Question: Article in question: http://humansarefree.com/2013/01/science-strange-computer-code.html Problem: no credible-looking or -sounding site has anything on it. Only a bunch of youtube videos. And some sites. Here is the relevant paper on ArXiv Answer: The work being described is by Prof S. James Gates and it has a serious basis. He has noted that the supersymmetric equations of string theory contain some binary codes built in. These are the same as codes sometimes used in computing for error correcting. This is the Hamming [Block] Code in particular. He constructs mysterious-looking diagrams from mathematical equations and the use of Adinkra symbols (named after symbols used by the Akan people of West Africa), as a way to show how these error correction codes create our universe / physical reality. Gates has hyped this quite a bit suggesting that it is a sign that we are living in a computer simulation as in the film "The Matrix". The video linked to is hyping this even further. In fact these codes are ubiquitous in several areas of mathematics. They are associated with sphere packings, lattices, reflection groups, octonions and exceptional Lie algebras (especially E8) It is not particularly remarkable to see these coming up in string theory. There are other string theorists looking at these structures in a less hyped way to understand the role of algebraic concepts such as octonions and E8. See e.g. papers by Mike Duff and collaborators. People working on quantum computing are also looking at these codes, which are examples of stabilizer codes that can be generated as eigenvectors of Pauli matrices. They hope that the codes can be used to prevent decoherence and that this would make multi-qubit quantum computation feasible. It is always possible that these codes could play some kind of error-correcting role in string theory preventing uncontrolled decoherence of spacetime, but this is pure speculation and it is not clear if such a mechanism is even needed.
In any case these are natural mathematical structures and there is certainly no indication that they have been programmed into the laws of physics as implied in the video. It is not as if they have discovered sequences of coded instructions that the laws of physics are following. It is an interesting intellectual exercise to think about the way the universe might run like a computer or quantum computer, but suggesting that we are living in a matrix-like simulation is unjustified.
{ "domain": "physics.stackexchange", "id": 10190, "tags": "string-theory" }
What is the strongest way to fasten 8020 extrusions in this arrangement?
Question: I am making a project using 10mm aluminum extrusions. I am trying to connect all three extrusions like the picture below shows and I am going to make my own bracket (in red) out of aluminum since the variety of choices is very scarce for this 10mm size, but I will include a picture of something similar so you can get an idea of what I'm making. I am wondering which arrangement is stronger and more rigid- the left one, or right one? Or is there an even better way to do it? Answer: If you consider the intersecting areas for the left side extrusions as being more evenly distributed, you should be able to expect that the load distribution will also be more even in that configuration. What's equally important is that you require equal strength over the three pieces. In the right side image, if the strength required on the angled portion is less, you can expect similar distribution of forces. The gusset method you are considering is a common joining mechanism for extrusions primarily because of its ease and strength. You can increase the strength incrementally by adding straps on the inside of the angles attached with the channel nuts.
{ "domain": "engineering.stackexchange", "id": 3155, "tags": "structural-engineering, beam, building-design" }
Why is nitrate anion substituted but bromide anion is not in Von Richter reaction?
Question: The Von Richter reaction is a nucleophilic aromatic substitution. According to this source, there is only a 37% yield of the product. I want to know what the competing side reactions are that result in such a low yield of the desired product. I searched for it on the internet but couldn't find anything related to it. Secondly, I want to know the reason behind the substitution of the nitrate anion (as the major product) although the bromide anion is a better leaving group according to this table taken from Wikipedia. Answer: The Von Richter reaction's accepted mechanism is given by- As you can see, this is not a simple case of, say, removal of the $\ce{NO2-}$ ion; instead it involves removal of $\ce{N2}$, which provides thermodynamic stability. The tables you use have been calculated on the basis of $\ce{pK_a}$ values, which don't give the right answers in many cases due to solvent effects and the intermediates involved in specific reactions. Wikipedia itself states: It is important to note that the list given above is qualitative and describes trends. The ability of a group to leave is contextual. For example, in $S_nAr$ reactions, the rate is generally increased when the leaving group is fluoride relative to the other halogens. This effect is due to the fact that the highest energy transition state for this two step addition-elimination process occurs in the first step, where fluoride's greater electron withdrawing capability relative to the other halides stabilizes the developing negative charge on the aromatic ring. The departure of the leaving group takes place quickly from this high energy Meisenheimer complex, and since the departure is not involved in the rate limiting step, it does not affect the overall rate of the reaction This order is a general trend and, as the mechanism in this reaction suggests, is not the end-all-be-all determining factor.
Coming to why this reaction gives a low yield: it is mainly due to the hydrolysis of the $\ce{CN-}$ ion by water/alcohol to $\ce{HCOO-}$/$\ce{CH3COO-}$. This was suggested because, according to this paper$^1$, the recovery of much unreacted starting material suggest that hydrolysis (or alcoholysis) competes with the nitro compound for cyanide ion to the extent that hydrolytic destruction of cyanide is the yield-limiting factor; it is known that cyanide ion is rapidly hydrolyzed in water solution at elevated temperatures There is one thing missing from Wikipedia: von Richter reactions were conducted at elevated temperatures and in sealed tubes $(\ce{150^oC-165^oC})$, both originally by von Richter himself and in subsequent studies of the mechanism*. There may be other competitive side reactions, but they were not documented for this reaction (because it is of low yield anyway, there was not much work on it). *The paper by J.F. Bunnett, J.F. Cormack and Frank C. McKay$^1$ (linked by @waylander) cites- von Richter's reactions were run in "alcohol" solution in sealed tubes at $\ce{180^oC-200^oC}$ or higher. and the letter by Prof. Myron Rosenblum$^2$, while explaining the mechanism with evidence from isotopic labelling, cites- 9.5 mmoles of p-chloronitrobenzene, when treated with 20 mmoles of potassium cyanide, and 6.2 mmoles of ammonium nitrate in a sealed tube at $\ce{160^oC}$ for 1.75 hours, gave nitrogen gas containing 0.75% $\ce{N_2^29}$ It is normal for nucleophilic $S_nAr$ to be conducted around this temperature**, so this is not a surprising thing to be missed by the wiki; it also means that $S_nAr$ of cyanide on bromine can probably be competitive, but experiments predict a majority of unreacted products according to this table in most solvent cases: interestingly, both this table and the letter by Prof. Rosenblum (46.5% yield for the same reaction but with chlorine) suggest different yields from what Wikipedia suggests.
TL;DR: The leaving-group order doesn't matter here because of the different mechanism involved, converting $\ce{-NO2}$ to $\ce{-N2-}$, which is a very good leaving group, and the yield is low because of hydrolysis of the $\ce{CN-}$ ion at the preferred temperature. **Ref: Morrison & Boyd, sec. 26.7 (Nucleophilic aromatic substitution: bimolecular displacement) Cited material: $1$: J.F. Bunnett, J.F. Cormack and Frank C. McKay, "Mechanism and reactivity in aromatic nucleophilic substitution reactions", J. Am. Chem. Soc. 1958, 5(3), 481-490; doi 10.1021/jo01149a007 $2$: Rosenblum, M. The Mechanism of the von Richter Reaction. J. Am. Chem. Soc. 1960, 82, 3796–3798; doi 10.1021/ja01499a090
{ "domain": "chemistry.stackexchange", "id": 15812, "tags": "organic-chemistry, reaction-mechanism, nucleophilic-substitution" }
Will the ever accelerating space expansion (like at the level of inflation) eventually break causality?
Question: I have read this question: requires that "for an action at one point to have an influence at another point, something in the space between the points, such as a field, must mediate the action". In view of the theory of relativity, the speed at which such an action, interaction, or influence can be transmitted between distant points in space cannot exceed the speed of light. How to understand locality and non-locality in Quantum Mechanics? As far as I understand, space is expanding at an ever accelerating rate, and it could eventually reach the level of the former inflation right after the big bang. Inflation, and its space expansion, was so extreme that it was able to separate otherwise bound/linked systems like particle-antiparticle pairs. At this level of expansion, the speed at which space is expanding can exceed the speed of light (speed of causality), and this might even be true for the space in between bound quantum objects, like quarks in a nucleon or electrons and protons in an atom. From the point of view of one such object, the spacetime is something like an inside-out Schwarzschild black hole—each object is surrounded by a spherical event horizon. Once the other object has fallen through this horizon it can never return, and even light signals it sends will never reach the first object (at least so long as the space continues to expand exponentially). https://en.wikipedia.org/wiki/Inflation_(cosmology) So quantum objects are separated by an event horizon, and they keep separating faster than the speed of light (causality), thus the fields in between them cannot transmit causality. The answer says that the fields have to mediate causality in between these quantum objects, but if space itself is expanding faster than light, then the field itself will not be able to catch up to the space expansion (for example the speed at which otherwise bound elementary particles are flying apart), so causality will not be transmitted.
So basically what I am asking is, does space expansion expand (stretch) the fields that propagate causality? Question: Will the ever accelerating space expansion (like at the level of inflation) eventually break causality? Answer: In General Relativity, you should only ever compare the speeds of two objects that are (at least approximately) in the same locally inertial frame. In other words, you only compare the speed of two objects if the distance between the objects is much smaller than the length scale over which the spacetime curvature is changing. Or -- in the context of inflation -- it makes no sense to compare the relative speed of two observers who are separated by a Hubble distance (or more). Locally near each observer, you can define a light cone (if you treat gravity semi-classically), and quantum fields on that background spacetime will commute if they are spacelike separated, just as in flat-space quantum field theory. If there is a spacelike hypersurface (e.g. if we take a "snapshot" of the Universe at a fixed time), then two quantum field operators at different points on this surface will commute. Additionally, the horizon means that if comoving points $A$ and $B$ are separated by more than one horizon distance at time $t$, then the future light cones of $A$ and $B$ will never intersect. So quantum fields will never be in causal contact, once they are outside each other's horizons. The $\delta N$ formalism (see, e.g.: https://arxiv.org/abs/1003.5057, https://arxiv.org/abs/astro-ph/0506262, https://arxiv.org/abs/astro-ph/9507001) in inflation theory takes advantage of this behavior to calculate the statistics of primordial fluctuations in patches of the inflationary Universe that are causally separated from each other. On a different note, these issues around causally disconnected regions in an inflating Universe create several problems with formulating quantum field theory on de Sitter spacetime (a fancier name for an exponentially inflating Universe).
We normally like to talk about the "S-matrix" in flat-space quantum field theory, where $N$ particles come in from infinity, scatter, and go out as $M$ outgoing particles approaching future infinity. The problem is that these outgoing particles will all eventually become separated from each other into different "Hubble patches", each one outside the horizon of the others, so there is no observer at asymptotic future infinity who can "collect" all the outgoing particles and observe what the final state of the scattering experiment was. Therefore it's not clear how to define an S-matrix in de Sitter. Even though you didn't ask about this specifically, I think your question is getting at a deep problem reconciling quantum field theory and de Sitter space; the S-matrix is one clear manifestation of that.
{ "domain": "physics.stackexchange", "id": 82059, "tags": "quantum-mechanics, general-relativity, particle-physics, cosmology" }
What are the problems in trying to interpret the Klein-Gordon equation as a single particle equation?
Question: What is the problem if we try to interpret the KG equation as a single-particle equation? Also, I wish to know whether the Born interpretation of the wavefunction is applicable in relativistic quantum mechanics. Answer: There is no part of the physics involved that requires particle number to be conserved. Combine this with $E=mc^2$, which allows for energy to be converted into particle-antiparticle pairs; even when there is not enough energy to create such a pair, virtual particles allow for the temporary appearance of particles in a system that doesn't have sufficient energy for them. As such we cannot treat relativistic quantum equations as being based on particles; instead we treat them as fields with a certain energy, and assume the particle number of this field to not be constant.
{ "domain": "physics.stackexchange", "id": 11686, "tags": "quantum-mechanics, quantum-field-theory, klein-gordon-equation, born-rule" }
Time of collision of two relativistic speed particles
Question: Suppose I have two particles, one moving at $0.9c$ to the right, starting at $(-0.9c,0,0)$ in the lab frame at $t=0$, and the second one moving at $0.9c$ to the left, starting at $(0.9c,0,0)$. In the lab frame at $t=1$ these particles are going to collide at the origin $(0,0,0)$ where we've placed a marker and annihilate. At the beginning the marker is $0.9c$ from the first particle and the second particle is $1.8c$ away from it in the lab frame. In the lab frame the annihilation will happen at the point of the marker exactly. We know that in the frame of the first particle, the marker is moving towards the particle at speed $0.9c$, and by the speed addition formula the second particle is moving towards the first particle at speed $\frac{0.9c+0.9c}{1+\frac{0.81c^2}{c^2}} = 0.995c$. We also have length contraction in the frame of the first particle, which sees the distance between its starting point and the marker as $0.9c\sqrt{1-0.81c^2/c^2} = 0.39c$. It also sees the distance between its starting point and the starting point of the second particle as twice this, so $0.78c$. The marker is moving towards it at $0.9c$, so from the reference frame of the particle the marker reaches it in $0.39c/0.9c = 0.43$ seconds. The second particle is moving towards the first particle at $0.995c$ and has a distance of $0.78c$ to cover, which takes $0.78c/0.995c = 0.785$ seconds to reach the first particle and annihilate both of them. However, when $0.785$ seconds will have elapsed, the marker will be well behind the particle, and so in this frame the annihilation event will not happen at the space location of the marker. To me this seems weird and my intuition isn't helping me see why it's possible for the location of the annihilation event to differ from the marker just based on the reference frame we're using. What's the right way to be thinking about this situation?
EDIT: Here is the spacetime diagram as requested: This actually confuses me more, if anything: it seems to show that the time of the collision in the reference frame of the first particle is at $t' > 1$, which means from the point of view of the particle it takes longer to collide than from the lab frame view, which is weird since distances are contracted in the particle frame compared to the lab frame, so the collision should occur in less time. Answer: In the lab frame, the first particle passes through 0.9 at the same time that the second particle passes through -0.9. But these events will not necessarily be simultaneous in other reference frames. Suppose you start in the location of the target, equidistant from each particle. You will receive a signal when each particle is at +/- 0.9. If you are in the lab frame, you will receive each signal at the same time and can conclude that these events were simultaneous. But if you are in the frame of one of the particles, then the signal from the other (approaching) particle will reach you first and therefore you will conclude that it reached 0.9 before the particle in your reference frame. Equivalently, suppose you will trigger emitters at +/- 0.9 with a signal when you are equidistant from them. Depending on your reference frame, your signal could reach the emitters at the same time or one could be triggered before the other. Either way, you will need to account for this head start when determining when the particles will strike the target.
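The relativity-of-simultaneity point can be checked with an explicit Lorentz boost of the three lab-frame events into particle 1's frame (units with $c=1$; the event coordinates are taken from the question):

```python
# Boost the lab-frame events (two launches at t = 0, collision at t = 1
# at the marker) into particle 1's frame, which moves at v = +0.9.
import math

def boost(t, x, v):
    g = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor
    return g * (t - v * x), g * (x - v * t)

v = 0.9
launch1 = boost(0.0, -0.9, v)     # particle 1 starts
launch2 = boost(0.0, +0.9, v)     # particle 2 starts
collision = boost(1.0, 0.0, v)    # annihilation at the marker
print(launch1, launch2, collision)
```

The output shows the launches are not simultaneous in this frame ($t' \approx +1.86$ s vs $-1.86$ s), the collision happens at the same $x'$ as particle 1's launch (the particle is at rest in its own frame), and the time between particle 1's launch and the collision is $\gamma(1) - \gamma(0.81) = \sqrt{0.19} \approx 0.44$ s, matching the questioner's $\approx 0.43$ s estimate. The collision's $t' = \gamma > 1$ only reflects the offset between the frames' clocks, not more elapsed time for the particle.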
{ "domain": "physics.stackexchange", "id": 96244, "tags": "special-relativity, spacetime, time, time-dilation" }
What are the reasons a perceptron is not able to learn?
Question: I'm just starting to learn about neural networking and I decided to study a simple 3-input perceptron to get started with. I am also only using binary inputs to gain a full understanding of how the perceptron works. I'm having difficulty understanding why some training outputs work and others do not. I'm guessing that it has to do with the linear separability of the input data, but it's unclear to me how this can easily be determined. I'm aware of the graphing line test, but it's unclear to me how to plot the input data to fully understand what will work and what won't work. There is quite a bit of information that follows. But it's all very simple. I'm including all this information to be crystal clear on what I'm doing and trying to understand and learn. Here is a schematic graphic of the simple 3-input perceptron I'm modeling. Because it only has 3 inputs and they are binary (0 or 1), there are only 8 possible combinations of inputs. However, this also allows for 8 possible outputs. This allows for training of 256 possible outputs. In other words, the perceptron can be trained to recognize more than one input configuration. Let's call the inputs 0 thru 7 (all the possible configurations of a 3-input binary system). But we can train the perceptron to recognize more than just one input. In other words, we can train the perceptron to fire for say any input from 0 to 3 and not for inputs 4 thru 7. And all those possible combinations add up to 256 possible training input states. Some of these training input states work, and others do not. I'm trying to learn how to determine which training sets are valid and which are not. I've written the following program in Python to emulate this Perceptron through all 256 possible training states. Here is the code for this emulation: import numpy as np np.set_printoptions(formatter={'float': '{: 0.1f}'.format}) # Perceptron math fucntions. 
def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_derivative(x): return x * (1 - x) # END Perceptron math functions. # The first column of 1's is used as the bias. # The other 3 cols are the actual inputs, x3, x2, and x1 respectively training_inputs = np.array([[1, 0, 0, 0], [1, 0, 0, 1], [1, 0, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 1], [1, 1, 1, 0], [1, 1, 1, 1]]) # Setting up the training outputs data set array num_array = np.array num_array = np.arange(8).reshape([1,8]) num_array.fill(0) for num in range(25): bnum = bin(num).replace('0b',"").rjust(8,"0") for i in range(8): num_array[0,i] = int(bnum[i]) training_outputs = num_array.T # training_outputs will have the array form: [[n,n,n,n,n,n,n,n]] # END of setting up training outputs data set array # ------- BEGIN Perceptron functions ---------- np.random.seed(1) synaptic_weights = 2 * np.random.random((4,1)) - 1 for iteration in range(20000): input_layer = training_inputs outputs = sigmoid(np.dot(input_layer, synaptic_weights)) error = training_outputs - outputs adjustments = error * sigmoid_derivative(outputs) synaptic_weights += np.dot(input_layer.T, adjustments) # ------- END Perceptron functions ---------- # Convert to clean output 0, 0.5, or 1 instead of the messy calculated values. # This is to make the printout easier to read. # This also helps with testing analysis below. for i in range(8): if outputs[i] <= 0.25: outputs[i] = 0 if (outputs[i] > 0.25 and outputs[i] < 0.75): outputs[i] = 0.5 if outputs[i] > 0.75: outputs[i] = 1 # End convert to clean output values. # Begin Testing Analysis # This is to check to see if we got the correct outputs after training. evaluate = "Good" test_array = training_outputs for i in range(8): # Evaluate for a 0.5 error. 
if outputs[i] == 0.5: evaluate = "The 0.5 Error" break # Evaluate for incorrect output if outputs[i] != test_array[i]: evaluate = "Wrong Answer" # End Testing Analysis # Printout routine starts here: print_array = test_array.T print("Test#: {0}, Training Data is: {1}".format(num, print_array[0])) print("{0}, {1}".format(outputs.T, evaluate)) print("") And when I run this code I get the following output for the first 25 training tests. Test#: 0, Training Data is: [0 0 0 0 0 0 0 0] [[ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0]], Good Test#: 1, Training Data is: [0 0 0 0 0 0 0 1] [[ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0]], Good Test#: 2, Training Data is: [0 0 0 0 0 0 1 0] [[ 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0]], Good Test#: 3, Training Data is: [0 0 0 0 0 0 1 1] [[ 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0]], Good Test#: 4, Training Data is: [0 0 0 0 0 1 0 0] [[ 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0]], Good Test#: 5, Training Data is: [0 0 0 0 0 1 0 1] [[ 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.0]], Good Test#: 6, Training Data is: [0 0 0 0 0 1 1 0] [[ 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5]], The 0.5 Error Test#: 7, Training Data is: [0 0 0 0 0 1 1 1] [[ 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0]], Good Test#: 8, Training Data is: [0 0 0 0 1 0 0 0] [[ 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0]], Good Test#: 9, Training Data is: [0 0 0 0 1 0 0 1] [[ 0.0 0.0 0.0 0.0 0.5 0.5 0.5 0.5]], The 0.5 Error Test#: 10, Training Data is: [0 0 0 0 1 0 1 0] [[ 0.0 0.0 0.0 0.0 1.0 0.0 1.0 0.0]], Good Test#: 11, Training Data is: [0 0 0 0 1 0 1 1] [[ 0.0 0.0 0.0 0.0 1.0 0.0 1.0 1.0]], Good Test#: 12, Training Data is: [0 0 0 0 1 1 0 0] [[ 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0]], Good Test#: 13, Training Data is: [0 0 0 0 1 1 0 1] [[ 0.0 0.0 0.0 0.0 1.0 1.0 0.0 1.0]], Good Test#: 14, Training Data is: [0 0 0 0 1 1 1 0] [[ 0.0 0.0 0.0 0.0 1.0 1.0 1.0 0.0]], Good Test#: 15, Training Data is: [0 0 0 0 1 1 1 1] [[ 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0]], Good Test#: 16, Training Data is: [0 0 0 1 0 0 0 0] [[ 0.0 0.0 0.0 1.0 0.0 0.0 0.0 0.0]], Good Test#: 17, Training 
Data is: [0 0 0 1 0 0 0 1] [[ 0.0 0.0 0.0 1.0 0.0 0.0 0.0 1.0]], Good Test#: 18, Training Data is: [0 0 0 1 0 0 1 0] [[ 0.0 0.0 0.5 0.5 0.0 0.0 0.5 0.5]], The 0.5 Error Test#: 19, Training Data is: [0 0 0 1 0 0 1 1] [[ 0.0 0.0 0.0 1.0 0.0 0.0 1.0 1.0]], Good Test#: 20, Training Data is: [0 0 0 1 0 1 0 0] [[ 0.0 0.5 0.0 0.5 0.0 0.5 0.0 0.5]], The 0.5 Error Test#: 21, Training Data is: [0 0 0 1 0 1 0 1] [[ 0.0 0.0 0.0 1.0 0.0 1.0 0.0 1.0]], Good Test#: 22, Training Data is: [0 0 0 1 0 1 1 0] [[ 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0]], Wrong Answer Test#: 23, Training Data is: [0 0 0 1 0 1 1 1] [[ 0.0 0.0 0.0 1.0 0.0 1.0 1.0 1.0]], Good Test#: 24, Training Data is: [0 0 0 1 1 0 0 0] [[ 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.0]], Wrong Answer For the most part, it appears to be working. But there are situations where it clearly does not work. I have labeled these errors in two different ways. The first type of error is "The 0.5 Error" which is easy to see. It should never return any output of 0.5 in this situation. Everything should be binary. The second type of error is when it reports the correct binary outputs but they don't match what it was trained to recognize. I would like to understand the cause of these errors. I'm not interested in trying to correct the errors as I believe these are valid errors. In other words, these are situations where the perceptron is simply incapable of being trained for. And that's ok. What I want to learn is why these cases are invalid. I'm suspecting that they have something to do with the input data not being linearly separable in these situations. But if that's the case, then how do I go about determining which cases are not linearly separable? If I could understand how to do that I would be very happy. Also, are the reasons why it doesn't work in specific cases the same? In other words, are both types of errors caused by linear inseparability of the input data? 
Or is there more than one condition that causes a Perceptron to fail in certain training situations? Any help would be appreciated. Answer: UPDATE: I found the answers on my own. To begin with I figured out how to plot the input data and output training data on a scatter plot using matplotlib. And once I was able to do that I could instantly see exactly what's going on. When the answers are correct the input data is indeed linearly separable based on what the training output is looking to recognize. The "0.5 error" occurs when a single violation of linear separability occurs. The "Wrong Answer" error occurs when linear separation is violated twice. In other words, there are conditions where linear separation is violated in two separate planes. Or when the data can be separated by planes but it would require more than one plane to do this. (see graphic examples below) I suspected that there would be a difference between these different types of errors and the answer is, yes, there is a difference. So I have solved my own question. Thanks to anyone who may have been working on this. If you'd like to see some of my graphs here are some specific examples: A graph of all possible binary inputs. An example of a good training_outputs = np.array([0, 0, 0, 0, 1, 1, 1, 0]). As you can see in this graph the red points are linearly separable from the blue points. An example of a 0.5 error, training_outputs = np.array([0, 0, 0, 1, 0, 0, 1, 0]). The points are not linearly separable in one plane. Example of a 2-plane wrong answer error, training_outputs = np.array([0, 0, 0, 1, 0, 1, 1, 0]). You can see that there are two planes that are not linearly separable. An example of a different kind of wrong answer, training_outputs = np.array([0, 0, 0, 1, 1, 0, 0, 0]). In this case the data can be separated by planes, but it would require 2 different planes to do this which a perceptron cannot handle. So this covers all possible error conditions. Aren't graphs great!
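The separability check the asker wanted can also be automated without plotting: the classic hard-threshold perceptron learning rule is guaranteed to converge on linearly separable data, so a training loop with a capped epoch count doubles as a separability test. This is a minimal sketch (the epoch cap and helper name are my own, not from the original post):

```python
import itertools

def is_linearly_separable(labels, max_epochs=1000):
    """Perceptron-convergence check for a 3-input binary truth table.
    labels[i] is the 0/1 target for the i-th binary input triple (in
    counting order 000, 001, ..., 111). The perceptron rule converges
    iff the labeled points are linearly separable, so we cap the epochs
    and treat non-convergence as 'not separable' (a heuristic cap)."""
    points = list(itertools.product([0, 1], repeat=3))
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(points, labels):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if y != t:
                errors += 1
                delta = t - y  # +1 or -1
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
        if errors == 0:
            return True
    return False

# 3-input AND (fires only for input 7) is separable; 3-bit parity is not.
and_labels = [0, 0, 0, 0, 0, 0, 0, 1]
parity_labels = [0, 1, 1, 0, 1, 0, 0, 1]
print(is_linearly_separable(and_labels))     # True
print(is_linearly_separable(parity_labels))  # False
```

For 8 points in three dimensions this is effectively exact: non-separable labelings such as 3-bit parity keep cycling and hit the cap, matching the "0.5" and "Wrong Answer" cases the asker observed.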
{ "domain": "ai.stackexchange", "id": 1343, "tags": "training, perceptron" }
Double Slit and Self Interference of Electrons
Question: I have not been able to get a clear notion of how particle waves are different than photon waves. So I'll take a different approach. With a diffraction grating, light can be shifted from a beam to a planar wave through a procedure analogous to a phase-array radar system: The light beam enters N double slits at a mildly obtuse angle (transverse to the grating direction), and more or less all of the energy (with some loss) winds up propagating in a transparent crystalline wave guide through the process of total internal reflection. This effect will work even if you fire one photon at a time at the diffraction grating. Therefore, this implies, through wave particle duality, that an electron-diffraction grating could be created which achieved the same result. One electron at a time. This implies that electrons must be capable of self interference. And yet it seems like there is a belief out there that electrons fly along a physical path... and that the apparent self interference is "just" an illusion -- a probability established when the electron exits the double slit. But if that is the case, an electron could not be phased via diffraction to fly in a particular direction without depositing some energy somewhere and collapsing the wave function. I use this example because, in the case of the diffraction grating or "phased array", self interference of particles is relied on to achieve a non-newtonian, deterministic trajectory. 
Therefore, it is impossible for an electron to be newtonian and non-newtonian and get "phased"; at the same time, it is impossible for any kind of "multi-universe" interpretation because the energy direction output by a phase array or diffraction grating is an extremely deterministic one to many to one function (a multiverse creation and collapse in a puny diffraction grating: I don't think so); and the electron is not interacting with virtual particles, as that would imply that a phase array would "wear out" a region of space in the sense that a laser's transmission through air can change over time because the air molecules are modified by the heat exchange. We are left with a self interfering wave-particle duality. Or so I thought. Anyhow, it seems like a newtonian interpretation, a posteriori, yields no phased-array electron emitters. No electron based diffraction gratings. Seems unlikely... So which is it: do electrons become waves and collapse to particles when diffracted and detected? Or are they always particles? Answer: Your question is a composite of several questions, which is not generally acceptable here. However: An electron, or any other small massive particle, is "only" a wavefunction until it is detected. It definitely can interfere with itself, just like a photon. The big difference between "particle waves" and photons is that particle waves can move at any speed, while photons (in a vacuum) only move at the speed of light. Electrons - even one at a time - definitely can form standing wave patterns and therefore diffraction patterns (though since an electron can be detected at only one point, it takes a LOT of electrons to form a usable diffraction pattern). The "standing wave" created by a single electron, single photon, etc., is not directly detectable; its existence can only be inferred from a large number of single-electron experiments. Similarly, an electron can be diffracted by a grating: an array of nanoscale metal lines, for example. 
And, the diffraction can be described in terms of Huygens Principle just as for light waves.
{ "domain": "physics.stackexchange", "id": 74509, "tags": "electrons, double-slit-experiment" }
What is an example of using aliasing to your advantage when recovering an input signal?
Question: Suppose you have an arbitrary analog input signal $x_a(t)$ guaranteed to have frequencies within a bandwidth $[f_1,f_2]$ Hz. Suppose your sampling frequency is $F_s$ Hz, and you sample $x_a(t)$ to produce $x(n)=x_a(\frac{n}{F_s})$. Then by the sampling theorem, you can successfully recover (or reconstruct) the signal so long as $f_2 < \frac{F_s}{2}$. Otherwise you will experience aliasing and cannot recover the original signal $x_a(t)$. It appears (from this article) that there are "tricks" that can be employed that take advantage of the aliasing scenario and can still recover the signal $x_a(t)$ completely. The specific part of the article of concern is pasted below: Using Nyquist aliasing as benefit The trick is to use the aliasing (or frequency folding) to your advantage. By undersampling the data converter, higher-frequency content will be aliased into all of the lower Nyquist zones (see Figure 2). You will need to make absolutely sure that nothing ends up in the lower bands – any noise or frequency components in the lower zones will also be aliased into the first Nyquist. The good news is that the data rate from the data converter is only a fraction of the required RF input sample rate if this were a first Nyquist system. Under sampling greatly reduces the data rate of the samples supplied to the digital signal processor (DSP) or FPGA. Could somebody explain this to me? Why is the Texas Instruments link not violating Nyquist's Theorem? Answer: you can successfully recover (or reconstruct) the signal so long as $f_2<F_s/2$ Nope. The sampling theorem says that you need 2 samples per Hz of bandwidth, so in this case you'd need $$F_s > 2(f_2 - f_1)$$ For more info on how this works google "bandpass sampling" or just read https://en.wikipedia.org/wiki/Undersampling. The basic idea is that sampling creates a periodic repetition of the original spectrum with a period of $F_s$. 
As long as none of the repeated spectra falls on the same frequencies as the original spectrum, you don't get aliasing.
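A quick numerical sketch of that folding (the 70 kHz tone and 50 kHz sample rate are made-up values for illustration): a tone above $F_s/2$ reappears at a predictable frequency in the first Nyquist zone, which is exactly what bandpass sampling exploits.

```python
import numpy as np

# Hypothetical numbers: a 70 kHz tone sampled at only 50 kHz.
Fs = 50_000.0
f_tone = 70_000.0

def aliased_freq(f, fs):
    """Frequency at which a real tone at f appears after sampling at fs:
    fold f into the first Nyquist zone [0, fs/2]."""
    f = f % fs
    return fs - f if f > fs / 2 else f

n = np.arange(4096)
x = np.cos(2 * np.pi * f_tone * n / Fs)  # sampled at Fs, no anti-alias filter
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(n))))
f_peak = np.fft.rfftfreq(len(n), d=1 / Fs)[np.argmax(spectrum)]

print(aliased_freq(f_tone, Fs))  # 20000.0
print(round(f_peak))             # within one FFT bin of 20000 Hz
```

As long as the whole band $[f_1, f_2]$ folds into one Nyquist zone without wrapping onto itself (and nothing else occupies the lower zones), this alias is a faithful, frequency-shifted copy of the original band.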
{ "domain": "dsp.stackexchange", "id": 8837, "tags": "nyquist, aliasing" }
Corrupting Java arithmetic through reflection
Question: Inspired by a PPCG answer, I wrote this code, which shuffles the autoboxing caches of Number subclasses. This leads to, among other things, outputting 2 + 2 = -72 after invocation of EvilStuff.doEvilStuff(Class<? extends Number>,String,int). (The result differs for different invocations, due to reshuffling). I use a standard Fisher-Yates shuffle. I use reflection to access the autoboxing caches by the designated cacheID and cacheFieldNumber parameters, which are cache and 0 for java.lang.Integer. java.lang.Integer is the only class which implements an effective autoboxing cache other than java.math.BigDecimal, and that is not used much for normal arithmetic. Also, the Integer stuff is better documented. I figured I would do this since no one would otherwise think of posting evil code on Code Review, and to serve to warn myself of the dangers of using reflection in the future. Welcome to the dark side! How would anyone else do this? /** * Messes up arithmetic for some cases using reflection to shuffle cache stuffs. Enjoy! */ public class EvilStuff{ /** * Does a Fisher-Yates shuffling of the input. * @param stuff The array to shuffle * @return Nothing, array is sorted in-place */ public static void shuffle(Object[] stuff){ int length = stuff.length; for (int i = 0; i < length; i++) { // Get a random index of the stuff past i. //truncation (flooring) is necessary to avoid array bounds violations int random = i + (int) (Math.random() * (length - i)); // Swap the random element with the present element. Object randomElement = stuff[random]; stuff[random] = stuff[i]; stuff[i] = randomElement; } } /** * Access the "cache" field of a {@link java.lang.Number} subclass object, * and shuffle that cache. * * Currently known to work only with {@link java.lang.Integer} * (See the special handling?) 
* * Ridiculous output can only be seen if int (in the range of a {@link java.lang.Byte} * calculations are autoboxed to java.lang.Integer * * Try {@link java.lang.System.out.format(String,Object...)} for best results! * * @param victim The class whose cache should be shuffled. Recommend {@link java.lang.Integer} * @return Nothing, the victim should have been dealt with in-place */ public static void doEvilStuff(Class<? extends Number> victim,String cacheID, int cacheFieldNumber){ Class cache = victim.getDeclaredClasses()[cacheFieldNumber]; java.lang.reflect.Field c; try{ c = cache.getDeclaredField(cacheID); }catch(NoSuchFieldException noCache){ throw new IllegalArgumentException(String.format("Can't mess with something (%s) without a cache (%s) at %d!" ,victim+"",cacheID,cacheFieldNumber), noCache); } c.setAccessible(true); Object cached; try{ cached=c.get(cache); }catch(IllegalAccessException noAccessPermit){ throw new IllegalArgumentException(String.format("Can't access the cache (%s) of %s at %d!", cacheID,victim+"",cacheFieldNumber), noAccessPermit); } Number[] array; try{ array = (Number[]) cached; shuffle(array); }catch(NullPointerException dataLoss){ throw new IllegalArgumentException("Catastrophic data loss (for this program only)!", dataLoss); } } } A main class for testing purposes (DISCLAIMER: The following class not up for review): public class TestEvilStuff { public static void main(String[] args) { EvilStuff.doEvilStuff(Integer.class, "cache", 0); System.out.println("Enter 2 numbers between -64 (-128/2) and 63 (127/2):"); java.util.Scanner in = new java.util.Scanner(System.in); int n1=Integer.parseInt(in.nextLine()), n2=Integer.parseInt(in.nextLine()); System.out.format("%d + %d = %d", new Integer(n1), new Integer(n2), n1 + n2); } } Sample outputs: Enter 2 numbers between -64 (-128/2) and 63 (127/2): 63 64 63 + 64 = -94 Please find more and suggest how I can improve this. Maybe even make an EvilStuff library? 
Answer: Regarding shuffle: Declare length in the for loop. Use Random.nextInt(int) instead of (int)(Math.random() * int). Rename random to swapIndex or something more descriptive. With these changes the comments are unnecessary. This method needn't be public. Also, I made things final although that's a matter of style. My version: private static void shuffle(final Object[] stuff){ final Random r = new Random(); for (int i = 0, length = stuff.length; i < length; i++) { final int swapIndex = i + r.nextInt(length - i); final Object randomElement = stuff[swapIndex]; stuff[swapIndex] = stuff[i]; stuff[i] = randomElement; } } Regarding doEvilStuff: The proliferation of try/catches really clutters the code, especially because you rethrow an IllegalArgumentException each time. Your code would be more readable if you wrapped the whole method in a try/catch and used a multi-catch. Actually I would be controversial and advocate catching Exception because no matter what goes wrong, you want to wrap the exception and provide diagnostic information and it's impractical (and impossible without depending on implementation details) to list every exception that could be thrown. Note that catching Exception is bad in most circumstances. However, with this approach of having just one catch block your error messages can't be quite as descriptive. That's okay as long as the thrown exception includes the parameters and the original exception - any more is IMO excessive. Another issue with your code is it puts the burden of specifying the index of the nested class on the caller. As all nested cache classes of a Number type X are named XCache, we may use this pattern in doEvilStuff to automatically find the correct nested class. My version: public static void doEvilStuff(final Class<? extends Number> victim, String cacheID) { try { final Class cache = Class.forName(victim.getName() + "$" + victim.getSimpleName() + "Cache"); final java.lang.reflect.Field c = cache.getDeclaredField(cacheID); c.setAccessible(true); shuffle((Number[])c.get(cache)); } catch (final Exception e) { throw new IllegalArgumentException("can't corrupt class " + victim, e); } }
{ "domain": "codereview.stackexchange", "id": 19075, "tags": "java, integer, reflection, shuffle" }
Meyniel's theorem + finding a Hamiltonian path for a specific graph family
Question: Let's say we have a directed graph $G = (V, E)$ for which $(v, w) \in E$ and/or $(w,v) \in E$ holds true for all distinct $v, w \in V$. My feeling is that this graph most definitely is Hamiltonian, and I want to find a Hamiltonian path in it (from any vertex to any other vertex, I don't care where to start or stop). I wanted to refer to Meyniel's theorem for this: A strongly connected simple directed graph with $n$ vertices is Hamiltonian if the sum of full degrees of every pair of distinct non-adjacent vertices is greater than or equal to $2n - 1$. There are two subtleties that I'm not sure about with this theorem: What is meant by "adjacent vertices". Does the order matter here? Is the pair $(v,w)$ adjacent even if $(w,v) \in E$ but not $(v,w)$ itself? If that is the case and the graph is strongly connected, then it must obviously be Hamiltonian, since there are no non-adjacent pairs of vertices at all. A graph with the above property is not necessarily strongly connected. I think this is easy to solve: We can just decompose the graph into SCCs. We will still have no non-adjacent pairs of vertices in the components and all of them are Hamiltonian. We can then construct a Hamiltonian path of the whole graph by connecting them in topological order. Is the above reasoning correct and the theorem applicable? Or is there some other argument we can use to show that there is a Hamiltonian path in the graph? In the end I want to actually find the Hamiltonian cycles in the SCCs, but haven't had much luck finding a constructive proof of the theorem, let alone an algorithm that solves this. Can it be done in a straightforward way? I feel like some kind of greedy approach could work, where we take the nodes in decreasing order of outdegree or something similar. Answer: It is enough to prove your claim for the case of a tournament, in which for every pair of vertices $v\neq w$, exactly one of the edges $(v,w),(w,v)$ is in the graph. 
Wikipedia has an algorithmic proof that every such graph has a Hamiltonian path. If the tournament is strongly connected, it has a Hamiltonian cycle.
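That algorithmic proof is essentially insertion sort: grow a path one vertex at a time, inserting each new vertex just before the first path vertex it has an outgoing edge to. A small sketch (the example `beats` relation is a hypothetical rotational tournament, chosen only for illustration):

```python
def tournament_hamiltonian_path(n, beats):
    """Insertion construction of a Hamiltonian path in a tournament on
    vertices 0..n-1. beats(u, v) is True iff the edge (u, v) is present;
    in a tournament exactly one of beats(u, v), beats(v, u) holds."""
    path = [0]
    for v in range(1, n):
        # Insert v before the first path vertex it has an edge into; the
        # preceding path vertex (if any) then necessarily has an edge to v.
        for i, w in enumerate(path):
            if beats(v, w):
                path.insert(i, v)
                break
        else:
            path.append(v)  # the last path vertex has an edge to v
    return path

# Hypothetical rotational tournament on 5 vertices: u -> v iff (v - u) mod 5 is 1 or 2.
beats = lambda u, v: (v - u) % 5 in (1, 2)
path = tournament_hamiltonian_path(5, beats)
print(path)  # [3, 4, 0, 1, 2]
assert all(beats(path[i], path[i + 1]) for i in range(4))
```

Applied to the asker's graphs, one would first pick one direction of each two-way edge to obtain a tournament, run this in $O(n^2)$, and the resulting path is Hamiltonian in the original graph as well.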
{ "domain": "cs.stackexchange", "id": 2513, "tags": "algorithms, graphs, hamiltonian-path" }
The mean direction of waves in a directional distribution
Question: When modelling ocean waves, a directional distribution $D_f(\theta)$ is used together with a frequency spectrum $S(f)$ to describe the energy of waves at a particular frequency $f$ and angle $\theta$. I understand that the directional distribution can be written as a Fourier series i.e. $$D_f(\theta) = \frac{1}{2\pi}\left[ 1 + 2\sum_{n=1}^{\infty}\{a_n\cos(n\theta) + b_n\sin(n\theta) \} \right] $$ where $a_n = \int_0^{2\pi} D_f(\theta)\cos(n\theta)\,d\theta$ and $b_n = \int_0^{2\pi} D_f(\theta)\sin(n\theta)\,d\theta$. In Kuik (1988), the mean wave direction, $\theta_0$, is found by calculating $$\theta_0 = \arctan\left(\frac{b_1}{a_1}\right)$$ where $b_1$ and $a_1$ are the first order Fourier coefficients. Alongside this definition, the author refers the reader to Borgman (1969) but I can't find this paper on the web. My question is: why are only the first-order Fourier coefficients $a_1$ and $b_1$ used in this calculation? EDIT: After giving this more thought, I think that the fact they are Fourier coefficients is somewhat of a coincidence. If the directional distribution is seen as the PDF (as the integral of it is equal to 1) then $a_1$ and $b_1$ are more like the expected values that the cosine and sine of the angle $\theta$ take. The average values can then be used in the $atan2$ function to determine the mean angle. Answer: It is helpful to understand the meaning of Fourier coefficients in the directional wave spectrum analysis. A pitch-and-roll buoy measures time series of water elevation $\eta$, and slopes in both Cartesian directions, $\eta_x$ and $\eta_y$. 
The Fourier components $a_0$, $a_1$, $b_1$, $a_2$, $b_2$, are related to cross-spectra of elevation and slope time series (introduced by Longuet-Higgins 1963): $$ a_0 = \dfrac{C_{11}}{\pi} $$ $$ a_1 = \dfrac{Q_{12}}{\pi} $$ $$ b_1 = \dfrac{Q_{13}}{\pi} $$ $$ a_2 = \dfrac{C_{22}-C_{33}}{k^2\pi} $$ $$ b_2 = \dfrac{2C_{23}}{k^2\pi} $$ where the cross-spectra are: $$ C_{11}(f) = C[\eta(f)\eta(f)] = \int_0^{2\pi} E(f,\theta)d\theta $$ $$ Q_{12}(f) = -Q[\eta(f)\eta_x(f)]k^{-1} = -\int_0^{2\pi} E(f,\theta)\cos{\theta}d\theta $$ $$ Q_{13}(f) = -Q[\eta(f)\eta_y(f)]k^{-1} = -\int_0^{2\pi} E(f,\theta)\sin{\theta}d\theta $$ $$ C_{22}(f) = C[\eta_x(f)\eta_x(f)]k^{-2} = \int_0^{2\pi} E(f,\theta)\cos^2{\theta} d\theta $$ $$ C_{23}(f) = C[\eta_x(f)\eta_y(f)]k^{-2} = \int_0^{2\pi} E(f,\theta)\sin{\theta}\cos{\theta} d\theta $$ $$ C_{33}(f) = C[\eta_y(f)\eta_y(f)]k^{-2} = \int_0^{2\pi} E(f,\theta)\sin^2{\theta} d\theta $$ From here you can see that the mean direction, as defined in the paper that you cite, is: $$ \theta_0 = \arctan\left({\dfrac{b_1}{a_1}}\right) = \arctan\left({\dfrac{Q_{13}}{Q_{12}}}\right) = \arctan\left({\dfrac{\int_0^{2\pi}E(f,\theta)\sin{\theta}d\theta}{\int_0^{2\pi}E(f,\theta)\cos{\theta}d\theta}}\right) $$ Because $Q_{12}$ and $Q_{13}$ are proportional to the integrated energy in the zonal and meridional directions, respectively, the arctangent of their ratio yields mean wave direction. The authors could use higher moments to calculate direction, but this would have different physical meaning. In this specific case, using $\theta = \arctan\left({b_2/a_2}\right)$ would yield peak (dominant) wave direction in the context of pitch-and-roll buoys. You see that the above analysis is inherently limited by the quantities that are measured, specifically $\eta$, $\eta_x$, and $\eta_y$. 
If higher order quantities, say $\eta_{xx}$, $\eta_{xy}$, $\eta_{yy}$ were available, the directional spectrum could be described at a higher accuracy even using just Fourier decomposition. Further reading: Oltman-Shay and Guza (1984) Nondirectional and directional wave data analysis procedures at NDBC
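The OP's EDIT can be checked numerically: treating $D(\theta)$ as a PDF, $a_1$ and $b_1$ are the expected values of $\cos\theta$ and $\sin\theta$, and $\operatorname{atan2}(b_1, a_1)$ recovers the mean direction. A sketch with a hypothetical von-Mises-shaped spread centered at 40 degrees (using atan2 rather than a plain arctangent so the quadrant comes out right):

```python
import numpy as np

# Discretize [0, 2*pi) and build a hypothetical directional spread D(theta):
# a von-Mises-shaped distribution centered on mu = 40 degrees, kappa = 2.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
dtheta = theta[1] - theta[0]
mu = np.deg2rad(40.0)
D = np.exp(2.0 * np.cos(theta - mu))  # unnormalized von Mises shape
D /= D.sum() * dtheta                 # normalize so D integrates to 1

# First-order Fourier coefficients = expected values of cos and sin:
a1 = (D * np.cos(theta)).sum() * dtheta
b1 = (D * np.sin(theta)).sum() * dtheta
theta0 = np.arctan2(b1, a1)           # atan2 resolves the quadrant
print(round(float(np.rad2deg(theta0)), 1))  # 40.0
```

Because the distribution is symmetric about its center, the circular mean lands exactly on the 40-degree peak; an asymmetric spread would pull $\theta_0$ toward the heavier tail, which is precisely the energy-weighted mean direction described above.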
{ "domain": "earthscience.stackexchange", "id": 1001, "tags": "ocean, waves" }
What's the terminology for the deformity between the halluces and index toes caused by sandals?
Question: In Japan, especially in the past, people tended to wear wooden sandals or the like, which separated their halluces from their index toes. As a result, there is often a gap between their halluces and index toes. What's the terminology for this kind of deformity? I searched a potential terminology, Bunion or hallux valgus, but all pictures I got were the ones where there was no gap between the hallux and the index toe. Besides, it's said that 'proposed factors include wearing overly tight shoes, high-heeled shoes, family history, and rheumatoid arthritis', where geta is not mentioned at all. I searched hallux varus; the pictures and the description I got are nearer to what I expected, yet when I search 'hallux varus' + 'japan' or 'japanese', the results I got were all about hallux valgus. To add yet another twist, in the abstract of The etiology of hallux valgus in Japan, the author even says: Until recent years, hallux valgus did not exist in Japan. I'm confused. What's the correct terminology? Answer: This is sometimes called a "sandal gap." See here. Gap between the 1st and 2nd toes (sandal gap). The above NIH link lists 58 known conditions with this feature+. ... Other sources emphasize "hallux varus" as a common cause for a larger interphalangeal gap between the first two toes. (Though, note, such a gap is unlikely to appear as in the OP's photo, which is lacking characteristic "bent" toes commonly seen in such deformations of the big toe and surrounding anatomy). Hallux varus is a deformity of the great toe that is characterized by adduction of the hallux and medial subluxation of the first MTP joint. [Source]1 See linked source for photos. See here2 for more technical information. Interestingly, Radiopaedia suggests that these two terms (sandal gap and hallux varus) are linked. 
A sandal gap deformity, also known as hallux varus, is an imaging observation in antenatal ultrasound (typically second trimester) where there is an expanded first interspace, i.e. the gap between the great toe of the foot from the rest of the toes (likened to the gap caused by a sandal). I think the intent to synonymize these terms is inappropriate based on the other more detailed and peer-reviewed sources I linked/cited. However, I also include this quote due to their usage of the phrase "expanded first interspace" to describe this gap -- However, I cannot find much reputable relevant usage of this phrase in a quick Google or Google Scholar search, so I don't think it's the broad-usage term the OP is looking for. Other conditions to rule out FYI: Plantar plate disruption (or injury) is related but not what you're looking for. It's a v-like gap due to injury vs a genetic anatomical characteristic that you're describing. Also, according to an article in the The Journal of Rheumatology3, "A gap between toes can be an indication of rheumatoid nodulosis, a relatively benign variant of rheumatic disease." However, this condition appears between any toe due to rheumatic disease during life, and images from the linked paper rule it out as the more general genetic morphological condition the OP is asking about. Citations: 1. Vanore, J.V., Christensen, J.C., Kravitz, S.R., Schuberth, J.M., Thomas, J.L., Weil, L.S., Zlotoff, H.J. and Couture, S.D., 2003. Diagnosis and treatment of first metatarsophalangeal joint disorders. Section 3: Hallux varus. The journal of foot and ankle surgery, 42(3), pp.137-142. 2. Boike, A.M. and Christein, G., 1994. Hallux varus. Hallux Valgus and Forefoot Surgery, pp.307-312. 3. Prati, C., Brion, B.B., Leclerc, G. and Wendling, D., 2014. Spacing of Toes Reveals Rheumatoid Nodulosis. The Journal of Rheumatology, 41(5), pp.973-974. + below is an unformatted quick list of the 58 conditions (listed alphabetically) copied in case the cited link dies. 
One could certainly explore the literature for each of these conditions to determine if some other common phrase/term is additionally used to describe this gap across these conditions. Acromesomelic dysplasia 4; Acrootoocular syndrome; Al-Raqad syndrome; ALG12-congenital disorder of glycosylation; Arthrogryposis, distal, type 2B2; Arthrogryposis, distal, with impaired proprioception and touch; Atelosteogenesis type II; Atelosteogenesis type III; Autosomal dominant intellectual disability-craniofacial anomalies-cardiac defects syndrome; Cardiac malformation, cleft lip/palate, microcephaly, and digital anomalies; Chromosome 6q24-q25 deletion syndrome; Clark-Baraitser syndrome; Cleft palate-stapes fixation-oligodontia syndrome; CLOVES syndrome; Coffin-Siris syndrome 1; Coffin-Siris syndrome 5; Congenital disorder of glycosylation, type iit; Coxopodopatellar syndrome; Cranioectodermal dysplasia 3; Cutis laxa with severe pulmonary, gastrointestinal and urinary anomalies; Desbuquois dysplasia 1; Developmental delay with autism spectrum disorder and gait instability; Duane-radial ray syndrome; Ectodermal dysplasia with facial dysmorphism and acral, ocular, and brain anomalies; Endocrine-cerebro-osteodysplasia syndrome; Growth delay due to insulin-like growth factor I resistance; Hydrocephalus-costovertebral dysplasia-Sprengel anomaly syndrome; Intellectual developmental disorder with dysmorphic facies and behavioral abnormalities; Intellectual disability, autosomal dominant 1; Intellectual disability-facial dysmorphism syndrome due to SETD5 haploinsufficiency; Kohlschutter-Tonz syndrome-like; Larsen-like syndrome, B3GAT3 type; Lethal acantholytic epidermolysis bullosa; Lethal hemolytic anemia-genital anomalies syndrome; Linear skin defects with multiple congenital anomalies 2; Mandibuloacral dysplasia progeroid syndrome; Meier-Gorlin syndrome 6; Menke-Hennekam syndrome 1; Menke-Hennekam syndrome 2; Microcephalus cardiomyopathy syndrome; Micrognathia-recurrent 
infections-behavioral abnormalities-mild intellectual disability syndrome; Microphthalmia with limb anomalies; Myofibrillar myopathy 10; Neurodevelopmental disorder with dysmorphic facies and distal limb anomalies; Neurodevelopmental, jaw, eye, and digital syndrome; Nicolaides-Baraitser syndrome; Oculofaciocardiodental syndrome; Orofaciodigital syndrome 18; Orofaciodigital syndrome V; Oto-palato-digital syndrome, type I; Seckel syndrome 1; Short stature, facial dysmorphism, and skeletal anomalies with or without cardiac anomalies 1; Short stature-optic atrophy-Pelger-Huët anomaly syndrome; Short-rib thoracic dysplasia 16 with or without polydactyly; Specific granule deficiency 2; Toes, space between first and second; X-linked intellectual disability Cabezas type; Zechi-Ceide syndrome;
{ "domain": "biology.stackexchange", "id": 12048, "tags": "human-anatomy, terminology, bone" }
How to rotate a rotation quaternion in the body frame to a rotation quaternion in the world frame?
Question: I have a sensor (3 axis gyroscope) which can rotate and measure angular velocity in 3 dimensions (aligned with the sensor). I know what its current orientation is with respect to the world frame. Call this quaternion qs. I take a reading from my gyroscope and integrate it to give me a rotation in the sensor frame. Call this quaternion qr. I now want to apply the rotation qr to the current orientation qs to obtain the new orientation, qs'. But I cannot use qr directly as it describes a rotation in the sensor body frame. I need to transform my rotation quaternion into the world frame, and then I could just apply it to the orientation i.e. qs' = qs * qr_world. But I am really struggling to understand how I can perform this transformation qr -> qr_world. Does this even make sense? I wonder if I have fundamentally misunderstood some concepts here. If it does make sense, then I am specifically interested in understanding how to do this using quaternion operations (if that is possible) rather than rotation matrices or Euler angles. Answer: As pointed out in my earlier comment, this is actually simpler than you may think. Remember, $qs$ and $qr$ are fundamentally different, where the former represents orientation (in reference to the outside world) and the latter represents rotation (irrespective of any reference coordinate system). You're right in saying that you don't need to do any additional transformations and that your new orientation $qs'$ will be given by: $qs' = qs \times qr$ where $qs$ is the previous orientation, and $qr$ is a single rotation by a given angle $\theta$ about a fixed axis (obtained from the gyroscope). 
Example Let's presume a simple analogy with straight line displacement; $\delta s = v \times \delta t$ Given an initial position $s$, it can be said that moving at a velocity $v$ for a period of time $t$ sees us ending up at a final position $s'$: $s' = s + \delta s$ Similarly, presume angular displacement (rotation in one axis); $\delta \theta = \omega \times \delta t$ Given an initial angular position $\theta$, it can be said that moving at an angular velocity $\omega$ for a period of time $t$ sees us ending up at a final angular position $\theta'$: $\theta' = \theta + \delta \theta$ Now, presume angular displacement in three-dimensional space (rotation in three axes) that, according to Euler's rotation theorem, is equivalent to a single rotation by a given angle $\theta$ about a fixed axis $\hat \omega$; $qr_w = \cos \dfrac{\theta}{2}$ $qr_x = \omega_x \times \sin \dfrac{\theta}{2}$ $qr_y = \omega_y \times \sin \dfrac{\theta}{2}$ $qr_z = \omega_z \times \sin \dfrac{\theta}{2}$ Given initial three-dimensional angular positions (orientation) $qs$, it can be said that moving at angular velocities $\omega_x$, $\omega_y$ and $\omega_z$ for a period of time $t$ sees us ending up at final three-dimensional angular positions (orientation) $qs'$: $qs' = qs \times qr$ Therefore, as straight line displacement $\delta s$ can be accumulated to track the position along a straight line $s$, so can angular displacements about three axes (or, rather, their single equivalent angular displacement about the Euler axis $qr$) be accumulated to track angular position in three-dimensional space (orientation) $qs$.
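The accumulation rule $qs' = qs \times qr$ can be sketched with a plain Hamilton product. This is a minimal, dependency-free Python sketch; the axis and angle values are made up for illustration:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def from_axis_angle(axis, theta):
    """Build a rotation quaternion qr from a unit axis and an angle (radians)."""
    s = math.sin(theta / 2)
    return (math.cos(theta / 2), axis[0]*s, axis[1]*s, axis[2]*s)

# Start at the identity orientation and apply two successive
# 90-degree body-frame rotations about z: qs' = qs * qr each step.
qs = (1.0, 0.0, 0.0, 0.0)
qr = from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
qs = qmul(qs, qr)
qs = qmul(qs, qr)
print(qs)  # two 90-degree turns accumulate to one 180-degree turn about z
```

The final quaternion is (0, 0, 0, 1) up to floating-point error, i.e. a 180-degree rotation about z, which is exactly the "accumulate angular displacement" picture from the answer.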
{ "domain": "robotics.stackexchange", "id": 1642, "tags": "rotation, frame" }
t-SNE Python implementation: Kullback-Leibler divergence
Question: t-SNE, as in [1], works by progressively reducing the Kullback-Leibler (KL) divergence, until a certain condition is met. The creators of t-SNE suggest using KL divergence as a performance criterion for the visualizations: you can compare the Kullback-Leibler divergences that t-SNE reports. It is perfectly fine to run t-SNE ten times, and select the solution with the lowest KL divergence [2] I tried two implementations of t-SNE: python: sklearn.manifold.TSNE(). R: tsne, from library(tsne). Both these implementations, when verbosity is set, print the error (Kullback-Leibler divergence) for each iteration. However, they don't allow the user to get this information, which looks a bit strange to me. For example, the code: import numpy as np from sklearn.manifold import TSNE X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]) model = TSNE(n_components=2, verbose=2, n_iter=200) t = model.fit_transform(X) produces: [t-SNE] Computing pairwise distances... [t-SNE] Computed conditional probabilities for sample 4 / 4 [t-SNE] Mean sigma: 1125899906842624.000000 [t-SNE] Iteration 10: error = 6.7213750, gradient norm = 0.0012028 [t-SNE] Iteration 20: error = 6.7192064, gradient norm = 0.0012062 [t-SNE] Iteration 30: error = 6.7178683, gradient norm = 0.0012114 ... [t-SNE] Error after 200 iterations: 0.270186 Now, as far as I understand, 0.270186 should be the KL divergence. However I cannot get this information, neither from model nor from t (which is a simple numpy.ndarray). To solve this problem I could: Calculate KL divergence by myself, Do something nasty in python for capturing and parsing TSNE() function's output [3]. However: it would be quite stupid to re-calculate KL divergence, when TSNE() has already computed it, and it would be a bit unusual in terms of code. Do you have any other suggestions? Is there a standard way to get this information using this library? 
I mentioned I tried R's tsne library, but I'd prefer the answers to focus on the python sklearn implementation. References [1] http://nbviewer.ipython.org/urls/gist.githubusercontent.com/AlexanderFabisch/1a0c648de22eff4a2a3e/raw/59d5bc5ed8f8bfd9ff1f7faa749d1b095aa97d5a/t-SNE.ipynb [2] http://homepage.tudelft.nl/19j49/t-SNE.html [3] https://stackoverflow.com/questions/16571150/how-to-capture-stdout-output-from-a-python-function-call Answer: The TSNE source in scikit-learn is in pure Python. The fit_transform() method actually calls a private _fit() function, which then calls a private _tsne() function. That _tsne() function has a local variable error which is printed out at the end of the fit. Seems like you could pretty easily change one or two lines of source code to have that value returned from fit_transform().
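(Note that more recent scikit-learn releases expose the fitted value directly as the `kl_divergence_` attribute of the fitted TSNE estimator, so it is worth checking your version before patching anything.) If recomputing the value yourself turns out to be the least-bad option after all, the discrete KL divergence itself is only a few lines. This is a generic sketch of the formula, not the exact P/Q construction t-SNE uses internally:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions given as probability lists.

    Terms with p_i == 0 contribute nothing by the usual 0*log(0) = 0 convention.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
print(kl_divergence([1.0, 0.0], [0.5, 0.5]))  # log(2), about 0.693
```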
{ "domain": "datascience.stackexchange", "id": 69, "tags": "machine-learning, python" }
Why is it apparently not dangerous to fire a shotgun (such as when "skeet shooting") into the air?
Question: If you fire a gun or rifle into the air, whether straight up or at an angle, as I understand physics, a metal projectile will gain surprising momentum on its way down again, more than capable of killing a grown human being, not to mention small animals and children, property, etc. I have never fired a gun, but I assume that one of the first things they tell you is to never point the gun at a living being unless you intend to kill them, and next after that, to never fire up in the sky. However, people "skeet shoot" all the time, and seem to not apply this basic safety measure whatsoever when it comes to shotguns. I understand that shotguns work in a different manner from a normal "bullet", instead causing tons of small particles to spread out, but still, won't those small particles also come back to Earth in the same manner as the lethal metal bullet? Answer: One of the other rules of firearm safety is "Know your target and what is beyond it." A firearm (of any sort) can be safely fired if there is nothing of value between the muzzle and the point where the projectile loses the last of its kinetic energy. Some of these safe places are called "firing ranges". "Firearms, The Law, and Forensic Ballistics" by Margaret-Ann Armour provides formulae for the maximum range of a shotgun pellet based on the pellet diameter ($PD$): $$r_\mathrm{yards} = 2200 \tfrac{\mathrm{yd}}{\mathrm{in.}} \times PD_\mathrm{inches}$$ or $$r_\mathrm{meters} = 100 \tfrac{\mathrm{m}}{\mathrm{mm}} \times PD_\mathrm{millimeters}$$ When shooting skeet as in your question, you might select a shot between .110 inch (2.79 mm) and .080 inch (2.03 mm). Doing the math, we come up with approximately 175-275 meters total projectile distance. That range (of values) is the maximum range (in meters) you need to make a (firing) range...and most shots will fall harmlessly to Earth much, much closer than the maximum. You can see this when you look at a satellite image of a shotgun range. 
There is nothing of value within 300 meters of the firing line, but most of the shot falls to Earth within 50m. Oh, and just to drive home the point that we are really, really certain there won't be any risk beyond those distances, what could that be in the top-left corner of the image? It's an airport!
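The Armour rule of thumb from the answer is easy to tabulate. A quick sketch using the metric form (roughly 100 m of maximum range per millimetre of pellet diameter); the pellet sizes are the two skeet loads mentioned above:

```python
def max_range_m(pellet_diameter_mm):
    """Rough maximum pellet range via the Armour rule of thumb: ~100 m per mm."""
    return 100.0 * pellet_diameter_mm

# Typical skeet shot diameters from the answer, in millimetres
for d in (2.03, 2.79):
    print(f"{d} mm pellet -> about {max_range_m(d):.0f} m maximum range")
```

This is only the ballistic ceiling; as the answer notes, most shot lands far closer than the maximum.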
{ "domain": "physics.stackexchange", "id": 87892, "tags": "newtonian-mechanics, projectile, estimation, drag" }
What is the effect of pressure differentials due to gravity on buoyancy force?
Question: First of all, I am aware that this question has been answered in the past, however I have some follow up questions, particularly regarding the argument posited in Why is Buoyant Force $V\rho g$? : When an object is removed, the volume that the object occupied will fill with fluid. This volume of fluid must be supported by the pressure of the surrounding liquid since a fluid can not support itself. When no object is present, the net upward force on this volume of fluid must be equal to its weight, i.e. the weight of the fluid displaced. When the object is present, this same upward force will act on the object. This argument conceptually makes sense to me but I was confused about the interplay between the pressure differentials caused by gravitational fields acting on a fluid and the pressure exerted by the surrounding fluid. I see how the pressure is equal to the weight of the fluid displaced, but what about the pressure differential? Shouldn't that still have to be considered? Following on, if an object was placed into a fluid of the same density, why doesn't it rise, since the pressure differential due to gravity is still there? Answer: Remember that the force is the gradient of an underlying potential. Here, this is the gravitational potential, and we can ask the question: What is the energy difference between an air parcel (density 0 for simplicity) of volume $V$ submerged in a fluid with density $\rho$ at depth $z$ and the same parcel at depth $z + dz$? Assuming a cubic volume (we can make the cube infinitesimal, integrate and obtain the same result for any shape), we have $V = h^3$. Now, the difference in energy between the two configurations is given by the difference in potential energy of a slab of water of volume $h^2dz$ above the volume of height $h$ or below it. Using $U=mgz$, we get $dU=mgh$ for the difference in potential energy. 
As $m = h^2\rho dz$, we finally obtain (up to a sign change) $$dU/dz = F = h^3\rho g = V\rho g.$$ This is the familiar expression, without any pressure differential present. The only circumstance under which this breaks down is if we have a compressible fluid, s.t. $\rho = \rho(z)$. Then there would be an additional force due to the differential. However, water is highly incompressible, so we usually don't consider this effect. Please let me know whether this answers your question!
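The pressure-differential worry in the question can also be checked numerically: taking the difference of the hydrostatic pressure over the bottom and top faces of a submerged cube gives exactly $\rho g V$, i.e. the differential *is* the buoyant force, not something on top of it. A sketch with made-up numbers:

```python
rho = 1000.0    # water density, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2
h = 0.2         # cube edge length, m
z_top = 1.0     # depth of the cube's top face, m
p0 = 101325.0   # surface pressure, Pa (cancels out of the difference)

def pressure(z):
    """Hydrostatic pressure at depth z."""
    return p0 + rho * g * z

# Net upward force = (pressure on bottom face - pressure on top face) * area
force_from_pressure = (pressure(z_top + h) - pressure(z_top)) * h**2

# Archimedes: rho * g * V with V = h^3
force_archimedes = rho * g * h**3

print(force_from_pressure, force_archimedes)  # the two agree
```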
{ "domain": "physics.stackexchange", "id": 98325, "tags": "pressure, fluid-statics, density, buoyancy" }
Does the data rate of ofdm depends on number of subcarriers
Question: I am trying to investigate the performance of an OFDM system practically. I am using a sampling rate of $1 GSps$, giving bandwidth $B = 0.5 GHz$. What I find is that when setting the number of subcarriers $N = 64$ or $N = 128$, the performance is OK. However, when increasing it to $N = 1024$, the performance becomes worse. Why does that happen? Is that related to the achieved data rate, as it increases when increasing the number of subcarriers? According to my understanding, the subcarrier spacing is $\Delta f = \frac{B}{N}$, so increasing the number of subcarriers $N$ will decrease the subcarrier spacing at the expense of performance. Is that analysis correct or are there other reasons? What's the effect of reducing the subcarrier spacing on the performance? Answer: According to my understanding, the subcarrier spacing is Δf=B/N so increasing the number of subcarriers N will decrease the subcarrier spacing and hence increase the achieved data rate at the expense of performance. Is that analysis correct or are there other reasons? No, it's not correct (by itself without a lot of assumptions that you'd need to make, which usually aren't true!) The DFT, no matter the length, just divides the overall band into orthogonal subbands. If your noise was white before, that means the average SNR is totally independent of the number of subcarriers. If we don't ignore the guard interval: On the contrary, given that your channel's impulse response is independent of what OFDM system you build, the guard interval/cyclic prefix has constant length. Since with more subcarriers, you get more samples per OFDM symbol, and hence longer symbols, that means with more subcarriers, you spend a smaller percentage of your transmit time and energy on cyclic prefix/guard interval, and your amortized symbol rate grows. So, it's not clear why you're seeing what you're seeing. Things that might happen: Your frequency synchronization is not perfect. 
Then, a fixed amount of frequency offset will lead to more inter-carrier interference. But: the more subcarriers you have, the longer your synchronization symbol can be, and typically, your frequency estimator variance drops by the same amount. If you have more subcarriers, adjust your frequency synchronization accordingly! This is inherent with techniques like Schmidl&Cox, as these use full OFDM symbols for estimation, and hence get more energy with longer OFDM symbols, but not for all possible frequency estimation methods. Your timing synchronization is not perfect. The phase of the same subcarrier in consecutive OFDM symbols rotates. Of course, that means that the longer the OFDM symbol, the more it rotates. But: for the same amount of data, you'd need proportionally more OFDM symbols with shorter symbols (=fewer subcarriers), so that's again a case of "amortized" identical error. What might be happening is that you do a per-subcarrier phase recovery (this is not usual in practical OFDM systems, but who knows what kind of synchronization you have!) and the PLLs on each subcarrier need to be adjusted to the length of the symbols, and you forgot to do that. Your channel is time-variant, so that with longer symbols, your pilots just don't occur often enough. But more subcarriers also allow you to put in proportionally more pilots without losing rate. So, that would be a design mistake: increasing symbol duration without adjusting pilot insertion.
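The cyclic-prefix argument above is easy to quantify. This sketch assumes a fixed channel-dictated CP of 16 samples (an illustrative value, not from the question) and shows how the overhead fraction shrinks as $N$ grows:

```python
def cp_overhead(n_subcarriers, cp_len=16):
    """Fraction of transmit time spent on the cyclic prefix.

    With a fixed CP length, each OFDM symbol carries n_subcarriers useful
    samples plus cp_len prefix samples.
    """
    return cp_len / (n_subcarriers + cp_len)

for n in (64, 128, 1024):
    print(f"N = {n:4d}: CP overhead = {cp_overhead(n):.1%}")
```

So going from N = 64 to N = 1024 drops the prefix overhead from 20% to about 1.5%, which is why the amortized symbol rate grows with longer symbols.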
{ "domain": "dsp.stackexchange", "id": 11346, "tags": "digital-communications, ofdm" }
Why do HOM measurements use a different definition of visibility?
Question: According to Wikipedia (and any standard textbook), visibility is defined as $\frac{\text{max - min}}{\text{max+ min}}$ But there are quite a few papers (here, and here for example) that exclude the minimum in the denominator, seen here. What is the motivation for this? Answer: It is probably misleading to refer to the quantity given by $$ V=\frac{P_{max}-P_{min}}{P_{max}} $$ as the visibility. However, it is defined in this way, because one wants to know the penetration of the dip as a fraction of the maximum. As such, it gives one information about how many uncorrelated photons are observed, which fill in the dip. Having said that, usually it is not actually the maximum that is used in these calculations. The reason is that the dip has the shape of an inverted sinc-function. It has oscillations on the sides making the maximum larger than the nominal value for uncorrelated photons. So the maximum could be misleading. Instead, one may use a value far away from the dip in the place of the maximum.
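The two conventions side by side, as a quick sketch with an illustrative coincidence-rate baseline and dip bottom (values invented for illustration):

```python
def visibility_standard(p_max, p_min):
    """Textbook fringe visibility: (max - min) / (max + min)."""
    return (p_max - p_min) / (p_max + p_min)

def visibility_hom(p_max, p_min):
    """HOM-dip convention: dip depth as a fraction of the maximum."""
    return (p_max - p_min) / p_max

# Illustrative rates: baseline (far from the dip) 1.0, dip bottom 0.1
print(visibility_standard(1.0, 0.1))  # about 0.818
print(visibility_hom(1.0, 0.1))       # 0.9
```

For a perfect dip ($P_{min}=0$) the two definitions agree at 1; for a partially filled dip the HOM convention reports a larger number, which is why comparing quoted visibilities across papers requires checking which denominator was used.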
{ "domain": "physics.stackexchange", "id": 51257, "tags": "optics, quantum-optics" }
High overestimation on prediction data
Question: I am building a lost sales estimation model for out-of-stock days etc. using XGBoost. I am using the simple logic of training the model on data from normal days with ample inventory (when sales and demand are the same) and then using the trained model to predict demand on out-of-stock days. For model building I am splitting the normal-days data into train and test datasets. However I am getting a peculiar problem of very highly overestimated sales values on out-of-stock days. Predictions for both train and test days are fine, but only the out-of-stock-day predictions give problems. Any tips on what might be going wrong and how I can debug the problem in a gradient-tree type of model? Answer: Disclaimer: I am not 100% sure if my solution will be good for all cases, but this solved my problem considerably. For me, changing to XGBoost-based imputation worked wonderfully. Earlier I was doing multiple different types of imputation, like mean, mode, etc. From those I retained only the imputations in which I was very confident. For imputations in which I didn't have much confidence, I let XGBoost do the job. After this change most of the overestimation cases vanished. Another benefit I got from this change was that the model fit parameters became directly reflective of the overestimation: models in which overestimation was still happening all had a bad model fit, whereas models with no overestimation always had a good fit. I believe this is happening because, with XGBoost also doing the imputation, it uncovers and leverages deeply hidden patterns in the data. With this knowledge it is able to do better imputation in the scenarios where my ordinary ways of imputation were not much use.
{ "domain": "datascience.stackexchange", "id": 1511, "tags": "machine-learning, r, xgboost" }
Folded Protein Chunk Dimensional Classification?
Question: Are there known dimensional measurements for the classification of folded proteins given a starting chunk/domain as defined by something like the clustering functionality of MSM Builder? Examples of what I would be looking for would be dimensions such as: pH Salinity Dissolved Sugars Reducing Agent Concentration Ambient Electronegativity With corresponding scales and precisions such as: logarithmic vs linear step size distribution (e.g. dissolved/flat vs fractal/inclusions) The above parameters would be included in folding software after a known starting conformation provided by MSM Builder or similar chunking software to derive a GUID for proteins without (or with extremely minimal and tuneable) collisions. For instance, if hundreds of GUIDs on average end up equating to the same folded protein that's fine, but if one GUID could map to two or more thermodynamically stable folded proteins at greater than some incredibly small rate (e.g. 0.0000001%) then it wouldn't fit what I'm looking for. I know this is inherently NP-hard, but the complete ID would be something like: { sequence, chunkId, pH, salinity, dissolvedSugar, reducingAgentConcentration } So I would expect the chunkId and dimensional parameters to play a significant role in reducing collisions for different starting parameters. It's worth noting I'm not seeking some magic bullet where you type an ID and get a fully folded protein, just a reliable system of GUIDs with enough starting information embedded in the ID to start the folding process. Answer: CATH The CATH database classifies proteins by fold: https://www.cathdb.info/ So the value from that is probably the most useful for you. Crystallographic conditions pH Salinity Dissolved Sugars Reducing Agent Concentration Your crystallising conditions do not mean much. They are a solution that is close to precipitating your protein but slowly enough for it to crystallise. 
A protein can crystallise under multiple different conditions, and a crystal may be chosen simply because it is a lower-index well in the Hampton screening conditions... These mean very, very little. And unless you are planning on doing ML to determine the best screening condition, these should be avoided. Data from the PDB You have probably noticed that the only stats PDB/PDBe give are crystallographic, like space group, unit cell dimensions, resolution, completeness in shells, Rfree, Rmerge etc. So nothing there is of much use... except for bound ligand. Often this is the cofactor, but often it's a cofactor analog or a product (as opposed to the substrate). But that information can come from Uniprot etc. pH A solution has a pH, a protein has a pI. At a pH equal to the pI it will be neutral —and less soluble. ProtParam can calculate the pI from a linear sequence. However, surface charge shows totally different spots. APBS can map the solution of the Poisson-Boltzmann equation onto your protein. There is a PyMOL wrapper, a PyRosetta wrapper and more. But you still would need to do some stats and 3D operations to convert it into a small vector of values —rototranslations and projections are not for the faint of heart or those in a rush. Melting temperature BRENDA has lots of enzymatic data on enzymes, including melting temperature or kinetics at different temperatures (which follow the Eyring equation (TST), until MMRT kicks in), which could be useful. For a given protein you can calculate the relative folding ∆∆G (e.g. with Rosetta Score), which is a proxy for melting temperature in a buffer it is happy in. Though it pains me to say... some fanciful Poisson-Boltzmann map derived vector and ∆∆G are probably all you could get from analysing structures. Whereas the rest are from CATH classification and other databases that are not structure based.
{ "domain": "bioinformatics.stackexchange", "id": 1628, "tags": "proteins, protein-structure, identifiers, thermodynamics" }
Does function type of callback work in class?
Question: Hi everyone, I am wondering if I can use a function-style callback in a class? This is my function-style callback signature: double current_joint_position[7]; double current_joint_velocity[7]; void controllerStateCB(const pr2_controllers_msgs::JointTrajectoryControllerState::ConstPtr& msg) { for (int joint_index = 0; joint_index < 7; joint_index++) { current_joint_position[joint_index]=msg->actual.positions[joint_index]; current_joint_velocity[joint_index]=msg->actual.velocities[joint_index]; } } This is where I define the subscriber: void move_group::MoveGroupMoveAction::executeMoveCallback_PlanOnly(const moveit_msgs::MoveGroupGoalConstPtr& goal, moveit_msgs::MoveGroupResult &action_res) { ... ... ... ros::NodeHandle handle; ros::Subscriber sub = handle.subscribe("r_arm_controller/state", 1000, &controllerStateCB); ... ... ... } The function signature is in the same cpp file as the class. However, it seems that the callback function is never called even though there are messages on the "r_arm_controller/state" topic. Thanks for any help in advance! Quan Originally posted by AdrianPeng on ROS Answers with karma: 441 on 2013-11-15 Post score: 0 Original comments Comment by AdrianPeng on 2013-11-17: Thank you tbh, I just missed spin in my code. Answer: Where is your spin? Callbacks won't happen until you reach a spin. So if your subscriber goes out of scope and is deleted before a spin happens, it won't do anything. Originally posted by thebyohazard with karma: 3562 on 2013-11-15 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 16174, "tags": "ros, moveit" }
Question about the refraction of Fresnel lenses
Question: If you line up the sun's rays parallel to a Fresnel lens, the light is concentrated, and the focus is directly underneath. However, what happens if the sun is off to the side, making the light hit at an angle (e.g. 45 degrees)? Will the only difference be the focal point, or will the light be less concentrated? If so, by how much? What about a convex lens? Thanks Answer: A Fresnel lens is essentially a "collapsed" plano-convex lens. If you were to raise each ring of the Fresnel lens so the outer diameter of the ring were at the height of the inner diameter of the next larger ring, you'd have reconstructed the original plano-convex lens. Obviously, there are some diffraction effects and various other higher-order aberrations, but at the simple level I think you're asking, the Fresnel lens produces a full image of the object. That's another way of saying Yes, it would produce an image of the sun even if the sun were not on the optic axis.
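For an ideal thin lens, parallel rays arriving at angle θ to the optic axis still come to a focus in the focal plane, just displaced laterally by roughly f·tan θ. A quick sketch of this relationship (the focal length is an invented illustrative value, and real Fresnel lenses will show extra off-axis aberration on top of this):

```python
import math

def focal_spot_offset(focal_length_m, angle_deg):
    """Lateral shift of the focal spot in the focal plane for off-axis parallel rays."""
    return focal_length_m * math.tan(math.radians(angle_deg))

print(focal_spot_offset(0.5, 0))   # on-axis sun: no shift
print(focal_spot_offset(0.5, 45))  # 45 degrees off-axis: shift of about f itself
```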
{ "domain": "physics.stackexchange", "id": 11079, "tags": "visible-light, temperature, sun, refraction, lenses" }
Problems installing Hydro
Question: Trying to install Hydro, I get the following problems. When I try to install the packages one by one and dig deeper, I find that libpcl-1.7 has problems. How can I solve this? The following packages have unmet dependencies: ros-hydro-desktop-full : Depends: ros-hydro-desktop but it is not going to be installed Depends: ros-hydro-mobile but it is not going to be installed Depends: ros-hydro-perception but it is not going to be installed Depends: ros-hydro-simulators but it is not going to be installed E: Unable to correct problems, you have held broken packages. I have Fuerte working and I need it, so I can't uninstall it. Any ideas will be welcome ;) PS: I'm installing Hydro only to use pcl-1.7, so if there is another way to use it without installing Hydro, that would work too Originally posted by ctguell on ROS Answers with karma: 63 on 2013-10-14 Post score: 0 Answer: The error suggests that you have packages being 'held' by apt. Try searching that error. I found several potential solutions Originally posted by tfoote with karma: 58457 on 2013-10-14 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 15860, "tags": "ros, installation, pcl, ros-hydro" }
Creating an object in Python with lots of long arguments
Question: See the proposed changes in this Pull Request under def add_talk.

date = self.talkDetailsWidget.dateEdit.date()
time = self.talkDetailsWidget.timeEdit.time()
presentation = Presentation(
    unicode(self.talkDetailsWidget.titleLineEdit.text()).strip(),
    unicode(self.talkDetailsWidget.presenterLineEdit.text()).strip(),
    unicode(self.talkDetailsWidget.descriptionTextEdit.toPlainText()).strip(),
    unicode(self.talkDetailsWidget.categoryLineEdit.text()).strip(),
    unicode(self.talkDetailsWidget.eventLineEdit.text()).strip(),
    unicode(self.talkDetailsWidget.roomLineEdit.text()).strip(),
    unicode(date.toString(Qt.ISODate)),
    unicode(time.toString(Qt.ISODate)))

There's a lot of boilerplate code (e.g. unicode(), self.talkDetailsWidget, text(), strip(), etc.) How could you reduce that and still have the code be easy to understand? My thinking is if something along the lines of this were possible:

map(str.strip, map(unicode, map(QLineEdit.text, map(self.talkDetailsWidget, fields))))

Answer: Your TalkDetailsWidget is underdeveloped, I think. You could say that you have a view but no model, and that is causing you problems. You want to be able to write

talk = self.talkDetailsWidget
presentation = Presentation(title=talk.title,
                            speaker=talk.presenter,  # ← Why the inconsistent vocabulary?
                            description=talk.description,
                            category=talk.category,
                            event=talk.event,
                            room=talk.room,
                            date=unicode(talk.date.toString(Qt.ISODate)),
                            time=unicode(talk.time.toString(Qt.ISODate)))

Therefore, you'll need to implement new properties in TalkDetailsWidget. To avoid copy-and-paste programming in TalkDetailsWidget, I suggest writing those getters using metaprogramming. 
class TalkDetailsWidget(QWidget):
    …

    def _field_reader(field, method='text'):
        return property(
            fget=lambda self: unicode(getattr(getattr(self, field), method)()).strip(),
            doc="Returns a Unicode string from the %s field with spaces stripped from each end" % (field))

    title = _field_reader('titleLineEdit')
    presenter = _field_reader('presenterLineEdit')
    description = _field_reader('descriptionTextEdit', method='toPlainText')
    category = _field_reader('categoryLineEdit')
    event = _field_reader('eventLineEdit')
    room = _field_reader('roomLineEdit')

    @property
    def date(self):
        return self.dateEdit.date()

    @property
    def time(self):
        return self.timeEdit.time()
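The property-factory trick in the review stands on its own without Qt. Here is a Python 3 sketch (str replaces the Python 2 unicode) with a stand-in field class; `_Field` and the widget attribute names are invented for illustration:

```python
class _Field:
    """Stand-in for a Qt line edit: just holds a string behind .text()."""
    def __init__(self, value):
        self._value = value

    def text(self):
        return self._value

def _field_reader(field, method='text'):
    """Build a read-only property that fetches a field, calls a method, strips."""
    return property(
        fget=lambda self: getattr(getattr(self, field), method)().strip())

class TalkDetails:
    title = _field_reader('titleLineEdit')
    presenter = _field_reader('presenterLineEdit')

    def __init__(self):
        self.titleLineEdit = _Field('  My Talk  ')
        self.presenterLineEdit = _Field(' Alice ')

talk = TalkDetails()
print(talk.title)      # 'My Talk'
print(talk.presenter)  # 'Alice'
```

Each class attribute is one generated property, so adding a field costs one line instead of a copy-pasted getter.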
{ "domain": "codereview.stackexchange", "id": 6753, "tags": "python, python-2.x, pyqt" }
PHP Login and Registration system using BCrypt
Question: I'm new to web development and this is my first website. I was wondering if my login and registration system is secure. I was also wondering how to handle viewing parts of a webpage when the user isn't logged in (ie. hide a section if the user isn't logged in). Sorry in advance if there's irrelevant stuff in the code. Login.php <?php define('DB_HOST', '127.0.0.1'); define('DB_NAME', 'users'); define('DB_USER','root'); define('DB_PASSWORD','db_password'); $con=mysql_connect(DB_HOST,DB_USER,DB_PASSWORD) or die("Failed to connect to MySQL: " . mysql_error()); $db=mysql_select_db(DB_NAME,$con) or die("Failed to connect to MySQL: " . mysql_error()); LogIn(); function LogIn() { $log_username = mysql_real_escape_string($_POST['username']); $log_password = mysql_real_escape_string($_POST['password']); $query = "SELECT password FROM users WHERE username = '$log_username'"; $storedPassword = mysql_query("SELECT password FROM users WHERE username = '$log_username'"); $row = mysql_fetch_row($storedPassword); $storedSalt = mysql_query("SELECT salt FROM users WHERE username = '$log_username'"); $saltrow = mysql_fetch_row($storedSalt); $options = [ 'cost' => 12, ]; $hash = password_hash($row, PASSWORD_BCRYPT); $data = mysql_query ($query)or die(mysql_error()); if($data) { if (password_verify($log_password, $row[0])) { //echo "Valid login"; session_start(); $_SESSION["uname"] = $log_username; } else { echo 'Invalid username or password.'; } } } ?> Register.php <?php define('DB_HOST', '127.0.0.1'); define('DB_NAME', 'users'); define('DB_USER','root'); define('DB_PASSWORD','db_password'); $con=mysql_connect(DB_HOST,DB_USER,DB_PASSWORD) or die("Failed to connect to MySQL: " . mysql_error()); $db=mysql_select_db(DB_NAME,$con) or die("Failed to connect to MySQL: " . 
mysql_error()); if(isset($_POST['submit'])) { NewUser(); } function NewUser() { $reg_email = mysql_real_escape_string($_POST['email']); $reg_username = mysql_real_escape_string($_POST['username']); $reg_password = mysql_real_escape_string($_POST['password']); $reg_repeatpassword = mysql_real_escape_string($_POST['repeatpassword']); $reg_email = mysql_real_escape_string($_POST['email']); $reg_hash = mysql_real_escape_string($_POST['password']); //$options = [ // 'cost' => 12, //]; $hash = password_hash($reg_hash, PASSWORD_BCRYPT); if($reg_password != $reg_repeatpassword) { echo "Passwords do not match"; } else { $query = "INSERT INTO users (id, username, password, salt, email) VALUES ('', '$reg_username','$hash','','$reg_email')"; $data = mysql_query ($query)or die(mysql_error()); if($data) { //echo "Successfully registered"; echo '<script type="text/javascript">alert("Registration successful.");</script>'; } } } ?> Index.php <body> <div class="container" align="center"> <?php if(!isset($_SESSION['uname'])) { echo "<div style='margin-bottom: 200px; text-align: center;'>Please log in to view uploads.<br>"; echo "</div>"; echo "<div class='container2' align='center'>"; echo "<form action='/login.php' method='post' enctype='multipart/form-data' >"; echo "<label for='username'>Username: </label>"; echo "<input type='text' id='username' name='username'>"; echo "<br>"; echo "<label for='password'>Password: </label>"; echo "<input type='password' id='password' name='password'>"; echo "<div id='lower'>"; echo "<input type='submit' id='submit' value='Log in'>"; echo "</div><!--/ lower-->"; echo "</form>"; echo "</div>"; } else { echo "<div style='margin-bottom: 50px; text-align: center;'>"; echo "<h3>"; echo "Welcome, ". 
$_SESSION['uname']; echo "</h3>"; echo "</div>"; echo "<div class='logoutbutton' align='center'>"; echo "<form action='/uploads' class='logoutbutton' method='post' enctype='multipart/form-data'><input type='submit' id='submit' value='View uploads'></form>"; echo "<form action='/logout.php' class='logoutbutton' method='post' enctype='multipart/form-data'><input type='submit' id='submit' value='Log out'></form>"; echo "</div>"; }?> </div> <!--Version 3.2--> </body> Answer: You are using deprecated mysql_* functions. The very first thing you should do is update to mysqli_* or, better yet, PDO. Ideally you would also move towards using parametrized prepared statements for these queries. It is good that you use password_hash() and password_verify(), however I don't know why you would want to specifically use bcrypt unless you always wanted to enforce this algorithm in the future. I would simply consider leaving this parameter empty and using the PHP default, which is subject to change over time. This would allow your application to take advantage of potentially better hashing methodologies as they become default with PHP version changes, without sacrificing backwards compatibility. You are doing no validation of the POSTed values at all. I could literally POST empty values for every field and create a new user in the database. I am not understanding what your scripts are supposed to be outputting. You are not creating valid HTML documents anywhere, just HTML and/or javascript fragments. Don't output error messages to the end user. Log them. Generate a user-friendly UI for presenting errors and be unspecific about the underlying platform. Don't give the user visibility to things such as "Failed to connect to MySQL". This is information an attacker can use against you. Get out of the habit of putting passwords into your code. These should ideally be derived from environmental configuration.
Since you are using password_hash(), you should not have a separate salt field in your database. The single string generated by password_hash() includes the salt (as well as the information needed by password_verify() to split apart the salt from the main hash). Don't use the root mysql user for your applications. For every application you have, you should create a MySQL user with appropriate permissions on only the resources that application needs. Much of your code is happy path. You just assume things are going to work. For example, what if your username lookup returns 0 results (a very common state for a login)? When doing the login lookup, you should consider returning all pertinent fields from the database that you may need to use and storing them in a "user" object or data structure. No need to query the database one field at a time. Get rid of your unused code, for example the $query and $options variables in login. Leaving around extra cruft in your code is going to make it harder to maintain. "Session" and "login" are not the same thing. I don't understand why you conditionally start a session only upon login. "Session" is used for server-side storage of data related to the session only and should not be used to convey login state. You do need to understand best practices about regenerating session id's and destroying session state based on login state. Review http://php.net/manual/en/features.session.security.management.php for a good starting point on managing sessions securely in PHP. Why are you using password_hash() at all in your login script? All you should be doing is getting the user record based on the username and then using password_verify() to compare the stored hash vs. the login input. Stylistically, you are not doing a good job of separating your PHP logic from your HTML display logic.
This can sometimes be a challenge in PHP, but even the simple step of dropping in and out of PHP (while something that should be minimized) can make your code much easier to read (and probably much easier to edit in your IDE). Compare your index page to the version I have made below. How much easier to read is this? Notice also how the StackExchange code highlighting works way better with code that is properly split apart like this. Example: <body> <div class="container" align="center"> <?php if(!isset($_SESSION['uname'])) { ?> <div style='margin-bottom: 200px; text-align: center;'>Please log in to view uploads.<br></div> <div class='container2' align='center'> <form action='/login.php' method='post' enctype='multipart/form-data'> <label for='username'>Username: </label> <input type='text' id='username' name='username'> <br> <label for='password'>Password: </label> <input type='password' id='password' name='password'> <div id='lower'> <input type='submit' id='submit' value='Log in'> </div> <!--/ lower--> </form> </div> <?php } else { ?> <div style='margin-bottom: 50px; text-align: center;'> <h3>Welcome, <?= $_SESSION['uname'] ?></h3> </div> <div class='logoutbutton' align='center'> <form action='/uploads' class='logoutbutton' method='post' enctype='multipart/form-data'> <input type='submit' id='submit' value='View uploads'> </form> <form action='/logout.php' class='logoutbutton' method='post' enctype='multipart/form-data'> <input type='submit' id='submit' value='Log out'> </form> </div> <?php } ?> </div> <!--Version 3.2--> </body> Since you are new to development, let me point you to a great resource for learning PHP - http://www.phptherightway.com/ . From what you are showing it looks like you are using very old, very poor examples of how to work with PHP.
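The parametrized-statement advice above is language-agnostic; as an illustration of the idea (not the PHP/PDO syntax itself), here is a sketch using Python's built-in sqlite3 module with hypothetical table and column names — the driver binds the user input, so it is never spliced into the SQL text:

```python
import sqlite3

# In-memory stand-in for the real users table (table/column names are illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
con.execute("INSERT INTO users VALUES (?, ?)", ("alice", "fake-bcrypt-hash"))

def find_user_hash(username):
    # The ? placeholder makes the driver bind the value; user input is never
    # concatenated into the SQL string, so quoting tricks cannot change the query.
    row = con.execute(
        "SELECT password_hash FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None
```

In PHP the analogous tool is PDO's prepare()/execute() with ? or named placeholders; a classic injection payload such as `' OR '1'='1` is then treated as a literal (and non-existent) username.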
{ "domain": "codereview.stackexchange", "id": 24016, "tags": "php, form, authentication" }
Deconvolution of Synthetic 1D Signals - How To?
Question: I convolved a square wave with a Gaussian wave using linear convolution. Can I get the original square wave back by deconvolving my output with the Gaussian function? I took the FFT of both signals, divided and then took the IFFT to get back the square wave. The output I got looked like random noise. I thought this might be due to the fact that I am doing a division and some denominator values might be very low, hence causing this error. I tried to set up a threshold and obtained the result depicted below. How can I improve the result? Edit 1 : Adding the code used to generate the Gaussian signal. I was using MATLAB. fs = 200; % Sampling frequency t1 = 1/fs : 1/fs : n1/fs; % Where n1 is the length of the square wave signal s2 = 4*gaussmf(t1, [ 0.4 4.5 ]); % Generating Gaussian signal Edit 2 : Adding code used to generate square wave. I am afraid, it's a clumsy code! fs = 200; % Sampling Frequency t = 1/fs : 1/fs : 3; n = length(t); sq(1 : round(n/3) ) = eps; sq( (round(n/3) + 1) : 2*round(n/3) ) = 3; sq( (2*round(n/3) + 1) : n ) = eps; sq( (3*round(n/3) + 1) : 4*round(n/3) ) = 3; sq( (4*round(n/3) + 1) : 5*round(n/3) ) = eps; sq( (5*round(n/3) + 1) : 6*round(n/3) ) = 3; sq( (6*round(n/3) + 1) : 7*round(n/3) ) = eps; sq( (7*round(n/3) + 1) : 8*round(n/3) ) = 3; sq( (8*round(n/3) + 1) : 9*round(n/3) ) = eps; Answer: You can't recover the original signal through deconvolution. A Gaussian kernel is in essence a lowpass filter, i.e. it will remove information at higher frequencies from the signal. Once it's gone, it's gone and you can't recover it. This problem shows up as "divide by zero" or "divide by a very small number", which then massively amplifies numerical noise from the original convolution. In order to recover the original signal the system must have "sufficient signal to noise ratio in the bandwidth of interest". What exactly that is, depends heavily on your specific application.
You can de-convolve from a low-shelf or high-shelf filter, but not from a low-pass or anything with a stop band or lots of attenuation at specific frequencies. You'd also lose some information at the beginning and end of the signal.
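The "set up a threshold" idea from the question is essentially a crude form of Wiener/Tikhonov regularization: rather than dividing by spectral values that may be tiny, add a small constant to the denominator. Here is a pure-Python toy sketch (8-sample circular signals, not the asker's MATLAB code) of regularized frequency-domain deconvolution:

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform, enough for a toy demo.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT; the inputs here are real signals, so keep the real part.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def circ_conv(x, h):
    # Circular convolution, so DFT(y) = DFT(x) * DFT(h) holds exactly.
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

def wiener_deconv(y, h, eps=1e-3):
    # Regularized division: X = Y * conj(H) / (|H|^2 + eps).
    # eps prevents blow-up where |H| is tiny, at the cost of a small bias.
    Y, H = dft(y), dft(h)
    X = [Yk * Hk.conjugate() / (abs(Hk) ** 2 + eps) for Yk, Hk in zip(Y, H)]
    return idft(X)
```

With a kernel whose spectrum has no exact zeros this recovers the input almost perfectly; push any |H(k)| toward zero and the corresponding component of x becomes unrecoverable no matter how eps is chosen, which is the information-loss point made in the answer.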
{ "domain": "dsp.stackexchange", "id": 7229, "tags": "matlab, convolution, deconvolution, inverse-problem" }
How to make a constant flow rate of water from a tank with a faucet at the bottom side?
Question: Consider a water tank as shown in the picture. Under normal conditions, the flow rate when the tank is full is higher than when the tank is almost empty, and the flow finally stops (in this case, what we consider is when the water level reaches the upper side of the faucet). But I want the water inside the tank to flow from the faucet at a constant flow rate without any additional electronic/microcontroller device. I want a purely mechanical solution, hopefully a simple one. My question then: is there any method to do it? Answer: I didn't know the answer to your question so I searched for "constant flow water valve" and found plenty of results. Here's one: Figure 1. When the pressure increases, the control rubber is pressed down into the conical seat and the orifice of the control rubber decreases. Image source: Berkfelt Teknik on YouTube.
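Why the flow rate drops as the tank drains follows from Torricelli's law, $v = \sqrt{2gh}$: the outlet speed scales with the square root of the water head above the faucet, which is what the constant-flow valve compensates for. A small sketch with illustrative numbers:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def outflow_speed(head_m):
    """Torricelli's law: ideal exit speed for a water head `head_m` above the outlet."""
    return math.sqrt(2 * G * head_m)

def flow_rate(head_m, orifice_area_m2):
    """Ideal volumetric flow rate Q = A * v (no discharge-coefficient correction)."""
    return orifice_area_m2 * outflow_speed(head_m)
```

Halving the head does not halve the flow: dropping from 1 m to 0.25 m of head only halves the outlet speed, since the dependence goes as the square root.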
{ "domain": "engineering.stackexchange", "id": 5176, "tags": "flow-control, hydrostatics, water-pressure" }
How can the Operating System run on the same chip it is supposed to be managing?
Question: From my readings about Operating Systems (reading the basic material on Wikipedia, tech sites, etc) I've learned that the Operating System is a program that allows programs and applications to interact with the hardware in an efficient and safe way. However I'm confused about how the Operating System oversees the computer's operation when it itself needs to be operated. What do I mean? Well, the way I would imagine an Operating System to work, is that on a computer, there would be two CPUs. One that runs the OS all the time, and another that the OS uses to run the computer. However, it turns out that the OS is running on the same CPU that the other processes are. This is like a manager having to work on the same production line as his employees, and only gets to use the power tools when another employee is done with them. He would not be a very effective manager, since he wouldn't have the ability to issue orders if his employee is even slightly undisciplined. So how can it be that the OS only runs part of the time on the same CPU that has to be shared between all the other processes? How does this end up working out? Answer: Modern CPUs are aware of the OS up to a certain degree. They provide some "power tools" for the first one who claims them. Usually this is the boot loader, which then hands over control to the OS. One usually speaks of "kernel mode" vs "user mode" or "ring 0" vs "ring 3" to distinguish between the one process with the extra privileges and the rest. These "power tools" are certain privileges for resource management: Control the memory, access to the hardware and how long user level code may be executed without interruption. The CPU executes the OS with its special privileges when one of the following events occurs: A user mode process explicitly hands over the control to the kernel mode process. This is called a syscall. The kernel mode process can use its special privileges to register for certain events (e.g. 
external hardware sends a special signal to the CPU or a user space process tries to access a reserved resource). When such an event occurs, the CPU stops the user mode process immediately and hands over control to the kernel mode process. Usually one speaks of an interrupt. So the OS can run on the same chip because the chip is built for this. It can reserve special privileges for itself. The CPU may interrupt all other pieces of code without these special privileges at any time and hand over control to the OS. Some chips with very limited capabilities (e.g. microcontrollers) don't have this support for specially privileged code. These chips usually run without an OS. There is only one big program running that can access the hardware directly, must respond to the hardware interrupts and can access any resource at any time. If that program makes one mistake, usually the whole thing crashes.
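The control-transfer pattern described above can be caricatured in a few lines. This is a toy model, not a real OS: "user programs" are generators that run until they trap back to the "kernel" by yielding a request, and the kernel alone decides who runs next (a hardware timer interrupt plays the same role for code that never traps voluntarily):

```python
def user_program(name, work_items):
    # A "user mode" task: computes on its items, trapping to the kernel
    # after each one (stand-in for a syscall or interrupt).
    for item in work_items:
        yield ("syscall", name, item)  # control returns to the kernel here

def kernel(programs):
    # The "kernel mode" scheduler: it regains control at every trap and
    # picks the next runnable program round-robin.
    log = []
    ready = list(programs)
    while ready:
        prog = ready.pop(0)
        try:
            event = next(prog)   # let user code run until its next trap
            log.append(event)
            ready.append(prog)   # still runnable: back of the queue
        except StopIteration:
            pass                 # program exited
    return log
```

The key property the toy shares with real hardware is that user code cannot keep the CPU: every trap (and, on real chips, every interrupt) returns control to the privileged code, which is why one CPU suffices.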
{ "domain": "cs.stackexchange", "id": 7746, "tags": "operating-systems" }
Work done in a weird scenario
Question: This was a question asked in JEE Mains 2020, an engineering exam in India. It's common knowledge for anyone writing the exam that work done is the area under the PV graph. I gave this question to a few friends, and according to a few of them, the answer is 48. According to me and a few others it should be 68, since there is area under this graph too, which is hidden by the question maker showing that the graph begins from (2,2). To explain the answer 48, some of my friends said you just shift the origin to (2,2), and I believe I have a much better grasp of mathematics, and claimed that that operation is wrong: you are changing the function being plotted in such a scenario. But, I can't seem to be able to convince them to not consider (2,2) as the origin of the graph, as they are changing the values of real life functions. I also believe they seem to have difficulty understanding the concept of what origin means. According to them, it refers to the point where both the axes meet. According to me, the origin is just the point where both functions are 0. Axes tell you nothing extraordinary, they just help you locate the value a point represents, and map a certain direction to increase or decrease of a function. And axes don't need to be marked from the origin; according to my understanding, they should just help us locate the value of the particular function they correspond to, for a certain point. So at the intersection of the two weird axes stated in the question, the pressure is 2 and the volume is 2. Shifting the origin changes the function we plot to Pressure - 2 and Volume - 2, which bear no relation to the real life functions. Could anyone please share a convincing solution as to why it's 68 (considering the complete view of the graph) or disprove me as to why the answer should be 48 (only the area under the graph given)? Thanks!
Answer: Calling an integral "the area under a graph" is a very convenient shorthand, but it is not a literally accurate description of what an integral is. It sounds like your friends have learned this shorthand without actually learning what an integral is. A more pedantically accurate but still informal way to describe the integral $\int_a^b f(x)\,dx$ might be something like "The signed area of the region bounded by the graph of $f$, the horizontal line through $(0,0)$, and the vertical lines through $(a,0)$ and $(b,0)$." Here "the graph of $f$" means the set of points $(x,f(x))$, and "signed area" is positive for subregions where $f(x)$ is positive and negative when $f(x)$ is negative. This description has its flaws, but it at least gets around your friends' misunderstanding. However, I think that rather than trying to convince your friends that this is what we mean when we talk about the "area under the graph," you could try to appeal to the uniqueness of the integral. When it's defined, $\int_a^b f$ should always be the same number regardless of who is calculating it; it depends on what the function $f$ is and not any pictures we may choose to draw of $f$. As long as they accept that calculating the area when the horizontal axis passes through the origin is a way to get an integral, they should be able to see that it's the only way. If your friends don't believe or can't be convinced that an integral is a unique number, you may want to consider finding new study partners.
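A toy version of the disagreement, with made-up numbers (not the actual exam figures): suppose the process happens to follow $P = V$ from $V = 2$ to $V = 6$ in some consistent units. The work is determined by the integrand $P$, not by where the plot's axes happen to be drawn:

$$W = \int_2^6 P\,\mathrm{d}V = \int_2^6 V\,\mathrm{d}V = \frac{6^2 - 2^2}{2} = 16.$$

Measuring only the area visible between the curve and the line $P = 2$ in a cropped plot would instead give $\int_2^6 (V - 2)\,\mathrm{d}V = 8$, which is the integral of a different function, $P - 2$, and therefore not the work.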
{ "domain": "physics.stackexchange", "id": 87223, "tags": "homework-and-exercises, thermodynamics, work" }
Initializing and populating a Python dictionary, key -> List
Question: I have a simple task that comes up often in my work. I need to create a dictionary for an id key to a list of data associated with that key. I'm wondering if there is a better way to do this than the one I am using. The pattern is so common that I feel there must be a better solution. value_list_dict = {} for line in f: line.strip() rec = line.split("\t") my_key = rec[0] important_value = rec[5] #or what ever it is # the repetitive pattern I find myself doing a lot of. if my_key not in value_list_dict: value_list_dict[my_key] = [] value_list_dict[my_key].append(important_value) While this solution is fairly concise and clear to me, I wonder if there is a better way. I think that writing a custom function for this might be less readable although I'm open to suggestions. Answer: line.strip() does not modify the variable, it returns a modified string. But you can chain it together with the split. I would also choose a better variable name than rec. As J.F.Sebastian suggested in the comments, you could go with items: for line in f: items = line.strip().split('\t') Your code is the perfect place for a collections.defaultdict. You can give it a type which it will use when the key is not defined. It makes implementing a counter a lot easier (just pass it int, whose default constructor returns an int with value 0) or, give it list and it will give you an empty list: from collections import defaultdict value_dict = defaultdict(list) with open("file.txt") as f: for line in f: items = line.strip().split('\t') key, value = items[0], items[5] value_dict[key].append(value) I also added the with..as construct in there, in case you are not yet using it. In python 3.x I would use the extended iterable unpacking: key, *other_vals, value = line.strip().split('\t')
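A runnable micro-demo of the two standard grouping idioms on made-up tab-separated rows — defaultdict(list) as recommended in the answer, plus dict.setdefault, a stdlib alternative that needs no import and gives a plain dict:

```python
from collections import defaultdict

# Hypothetical input rows: key in column 0, value of interest in column 5.
lines = ["k1\ta\tb\tc\td\tv1", "k2\ta\tb\tc\td\tv2", "k1\ta\tb\tc\td\tv3"]

# Variant 1: defaultdict(list) -- a missing key gets a fresh empty list.
grouped = defaultdict(list)
for line in lines:
    items = line.strip().split("\t")
    grouped[items[0]].append(items[5])

# Variant 2: dict.setdefault -- same result, one extra method call per line.
grouped2 = {}
for line in lines:
    items = line.strip().split("\t")
    grouped2.setdefault(items[0], []).append(items[5])
```

One quirk worth knowing: merely reading a missing key from a defaultdict creates an entry, so convert with dict(grouped) (or use the setdefault variant) if later lookups should not mutate the mapping.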
{ "domain": "codereview.stackexchange", "id": 21673, "tags": "python" }
What is definition of direction of induced EMF?
Question: This post consists of two questions, both relating to the directional aspects of emf in some way. I could not include both the questions in the title, so I chose the most troubling one. Introduction: To understand the questions we need to understand the meaning of induced emf for a loop at rest. For such a loop, induced emf is defined as the work done by the net NON-conservative electric field on a charge, divided by the charge. Hence it is a scalar quantity. Mathematically: $\epsilon =\oint_C\vec{E}\cdot d\vec{l}.$ But what is $d\vec{l}?$ It is a vector representing an element on the loop of length $dl$, whose direction is tangential to the curve. But there exist two such directions, opposite to each other (see the fig). Hence we can calculate the induced emf $\epsilon$ in two ways; one in a clockwise way and the other in an anticlockwise way. $1^{st}$ question: Faraday's law states that $\epsilon =\oint_C\vec{E}\cdot d\vec{l}=-\frac{d\phi_m}{dt}$ where $C$ represents a loop. Which emf of the two is used in Faraday's law? $2^{nd}$ question: In many books and articles "direction of induced emf" is used. What does the direction of induced emf mean? It's a scalar quantity and hence doesn't have direction in the sense that vectors have it. What does direction mean here? What does it mean, physically, when we say that the induced emf is in a clockwise direction as seen from a point in a loop? Does induced emf here mean current? Note: This question originally consisted of this question as well, but I found that it made my question too lengthy and confusing. Answer: You can get an induced emf with no resulting induced current and hence no "opposition", but then no work is done. Thus any mention of induced current should be thought of as considering what might happen if an induced current was allowed to flow. Your question about direction is resolved if you follow the right-hand convention as shown in the diagram below.
Decide on the direction of the normal unit vector to the area $\hat n$ and then the positive direction for the line integral around loop $C$ is decided by the right-hand rule, eg thumb of the right hand in direction of the normal unit vector and curled fingers of right hand give positive direction for the line integral. Suppose that the magnetic field is in the direction of $\hat n$ and increasing then $\vec B\cdot \hat n$ is positive and so the right-hand side of Faraday's law, $-\frac {d\phi}{dt}$, will be negative. The line integral on the left-hand side, the emf $(\displaystyle \oint_{\rm C} \vec E\cdot d \vec l)$ will thus be negative ie in a clockwise direction looking from the top in the diagram. That will also be the direction of the induced current which is consistent with Lenz. $\displaystyle \oint_{\rm C} \vec E\cdot d \vec l$ is the work done by the electric field on a unit positive charge when the charge goes around a complete loop which is at rest. In my example, the electric field direction is clockwise which means the induced current (movement of positive charges) is clockwise. This is produced by the induced emf which is also in a clockwise direction in that it drives positive charges that way ie in the opposite direction to the arrow with the $C$ by it.
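As a concrete (illustrative) worked case of this sign convention: take a circular loop of radius $R$, choose $\hat n$ along a uniform field that grows linearly in time, $\vec B = kt\,\hat n$ with $k > 0$. Then

$$\Phi = \pi R^2 k t, \qquad \varepsilon = \oint_C \vec E\cdot d\vec l = -\frac{d\Phi}{dt} = -\pi R^2 k.$$

By symmetry $\vec E$ is tangential with uniform magnitude, so $|\vec E|\,(2\pi R) = \pi R^2 k$, giving $|\vec E| = \tfrac12 kR$, directed in the negative sense of the line integral — clockwise as viewed from the tip of $\hat n$ — exactly the situation described above.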
{ "domain": "physics.stackexchange", "id": 86181, "tags": "electromagnetism, potential, voltage, definition, electromagnetic-induction" }
How to simulate a phase transition in Molecular Dynamics?
Question: I am trying to simulate the phase transition for gold and silicon separately using LAMMPS. I got the melting point for gold right using the below code. units metal atom_style atomic boundary p p p variable a equal 4.0782 lattice fcc 4.0782 region box block 0 10 0 10 0 10 create_box 1 box create_atoms 1 box mass 1 196.97 pair_style eam pair_coeff * * Au_u3.eam minimize 1.0e-8 1.0e-8 1000 100000 min_style cg timestep 0.001 velocity all create 300.0 454883 mom yes rot yes dist gaussian thermo 50000 thermo_style custom step pe ke etotal temp vol press density atoms fix 1 all press/berendsen iso 0.0 0.0 100.0 fix 2 all nvt temp 300.00 2400.00 1.0 run 10000000 However, when I apply this code to silicon, I don't get the right results. I feel like I am not understanding the code and the physics behind getting a phase transition graph using molecular dynamics. So I guess my request here is if you have recommendations of some papers or books I should read to be able to perform the simulation (phase transition in molecular dynamics). I have browsed through the book " Understanding Molecular Dynamics by Dan Frenkel and Berend Smit", but still I feel like I am missing something. edited : the output is shown below for both gold and silicon. Answer: Metals, such as gold, undergo phase transitions with very limited nucleation barrier, whereas silicon does not. For instance, if you have periodic boundary conditions (no surfaces) and no other defects, then silicon can be superheated very easily (at least on the typical time-scale of an MD simulation). If your aim is to measure the melting point then the correct way is to start off by creating two phases - crystalline and amorphous - in contact with each other, and then find at what temperature the two are in equilibrium. (At lower temperatures the crystal phase will grow, at higher temperatures the amorphous phase will grow). 
If, however, your aim is to simulate the nucleation event then this is a very difficult problem and a very active area of research. You could simply go to a higher temperature or fix your cell size (remove the press/berendsen fix) - that will cause a phase transition, but it won't necessarily be a realistic transition. There are much more complicated procedures using rare-event sampling methods like metadynamics [1] or seeding methods [2] for this.
{ "domain": "physics.stackexchange", "id": 46614, "tags": "computational-physics, phase-transition, molecular-dynamics" }
Recommended Modelling Technique for Influencer Marketing Scenario
Question: I have an approximately 90,000 row dataset with information on social media profiles, which has columns for biography, follower count, language spoken, name, username and the label (to identify whether the profile is that of an influencer, brand or news and media). Task: I have to train a model that predicts the label. I then need to produce a confidence interval for each prediction. As I have never come across a problem like this, I am just after some suggestions of what models I should be using for a situation like this. I am thinking Natural Language Processing (NLP), but I'm not sure. Also, for NLP (if a suitable method), any code or advice to help me implement this for the first time in Python would be greatly appreciated! Thanks in advance. Answer: It depends very much on the structure of the data. I would think about feature extraction first, which could be certain words occurring in the bio, and a class of user name ('real' name, numerical id, etc). Once you have a set of features for each data item, turn them into a list of feature vectors. Then run them through a number of machine learning algorithms. This is where the shape of the data matters, as some algorithms will work better than others. I would try eg decision trees (ID3), which are very efficient once trained (but they don't give you a confidence interval). But any other ML algorithm might work. They will all have trade-offs with speed of training, memory requirements, and speed of classification; some will give you a class-label probability, others will just give you one label. The best way would be to use a sample, and identify which algorithm works well and fits your specific requirements. Then use that for the full data set. Alternatively you could just use, for example, the Stanford ML classifier. That will give you a confidence interval, and will probably work reasonably well.
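One hedged sketch of the feature-extraction step suggested above — turning a raw profile row into a feature dict with a "class of user name" feature and bio-keyword flags. The keyword lists, thresholds and field names here are made up for illustration; in practice they would be curated or learned:

```python
import re

# Illustrative keyword lists -- placeholders, not a vetted vocabulary.
BRAND_WORDS = {"official", "shop", "store"}
NEWS_WORDS = {"news", "breaking", "daily"}

def username_class(username):
    """Crude 'class of user name' feature: real-looking name vs numeric-heavy id."""
    digits = sum(ch.isdigit() for ch in username)
    if digits >= len(username) / 2:
        return "numeric_id"
    if re.fullmatch(r"[A-Za-z]+([._][A-Za-z]+)?", username):
        return "real_name"
    return "other"

def extract_features(profile):
    bio_tokens = set(profile["biography"].lower().split())
    return {
        "username_class": username_class(profile["username"]),
        "has_brand_word": bool(bio_tokens & BRAND_WORDS),
        "has_news_word": bool(bio_tokens & NEWS_WORDS),
        # Rough order-of-magnitude bucket instead of the raw follower count.
        "log_bucket_followers": len(str(profile["follower_count"])),
    }
```

Vectors like these can then be fed to any classifier that reports class probabilities (e.g. scikit-learn models via predict_proba), which addresses the per-prediction confidence requirement in the question.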
{ "domain": "ai.stackexchange", "id": 509, "tags": "training, natural-language-processing, sentiment-analysis" }
Dynamic Robot Model Implementation
Question: I'm wondering if anyone can offer advice on how to approximate a non-rigid robot model. I'm implementing a model of a 3 DOF arm for a university independent study that uses a rollable prestressed tube (like this: http://www.youtube.com/watch?v=tJ5w8xoi1IU) as one of the extensible elements (this is the robot: https://www.youtube.com/watch?v=xlHGNTRlD1U). Essentially it is just a rolled fiber composite that can be unrolled to provide a rigid tube like a tape measure. I can't model the roll because it deforms from its rigid form, but I can make the assumption that the boom section is rigid. I initially set the model up with a prismatic joint so the boom could slide in and out of the arm, but the boom then retracts into the body of the arm and eventually out the back and looks terrible. This method also introduces problems with collision checking in moveit due to self collisions when the boom extends into the arm, and with the environment when it extends out the back. The next approach I'm researching is to dynamically set the length of the straight tube in the boom section so that the model renders properly within rviz/gazebo, dynamically resizes the collision elements, and correctly publishes transforms and link/joint states. I think I have a good handle on publishing the transforms and states with a custom robot state publisher (I'm working on an implementation based on the tutorial: http://wiki.ros.org/urdf/Tutorials/Using%20urdf%20with%20robot_state_publisher) but I'm getting bogged down in the implementation details trying to work out how to update the collision and visualization meshes. There are 3 possibilities that seem feasible: Publish the link as a marker with the transforms in a robot state publisher and skip the link in the urdf file that rviz parses so that the marker just fills in for the boom.
Modify a rviz node to update the representation of the mesh for the tube by subscribing to a robot state publisher along the lines of the answer to this thread: http://answers.ros.org/question/9425/gazebo-model-dynamically-modified-by-programming/ I'm still trying to understand how exactly rviz displays the model, but I assume it can be done programmatically similar to gazebo. It seems like a pointer to a robot model is passed around in Gazebo; if this is also true in rviz, it might be possible to subscribe to the pointer to the model in my state publisher, build the new model and switch it out with the old one. I'm not sure how to deal with race conditions that would be caused here without modifying rviz. Does anyone have a better idea for how this could be done? Do any of the ideas I have so far seem like they are the right way to go? Originally posted by St3am on ROS Answers with karma: 170 on 2014-02-25 Post score: 3 Answer: Pretty interesting tech :) The first thing I'd try is approximating the boom using multiple equal sized links that fit into the base without poking out of it when retracted and connecting them all via prismatic joints. This allows the following: Trivial to convert between the multiple prismatic joint values and a single "boom extension" joint value No geometry extending out of the back of the robot Self collision is not checked in rviz and Gazebo, so should work out of the box and look like the real thing Collision checking between select links can be switched off in the SRDF file used by MoveIt as described here (search for the disable_collisions tag), so that should also work Looking at the issue I just stumbled across the "mimic" joint tag in URDF that I vaguely remembered to exist (see URDF joint XML wiki page). If this works as expected, it probably is the way to go and you can build your model using this mechanism. There is support for mimic joints in MoveIt! (see JointModel documentation) so planning should work, too.
As the "mimic" tag is not widely used, you might run into a few issues when trying it out, but those hopefully would be solvable. The other option would be to implement some more specialized software. This would be relatively easy in rviz (Implementing your own plugin and using OGRE for rendering is not too difficult), but that would not be directly portable to Gazebo and MoveIt!, where collision detection would have to be performed. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-02-25 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by St3am on 2014-02-26: Thanks! I think your telescoping approach is definitely the best way to go, clean and robust. I'll report back in this thread on how the mimic approach works, if I have problems hopefully setting the joint limits will work as a fall-back. Comment by 130s on 2014-08-15: @St3am How did it go ;) ? Just curious since we're investigating a similar type of robot that has an extensible arm. Comment by St3am on 2014-08-15: Sorry about that, I forgot about this. The telescoping method worked well, although the mimic relationship did not. The bug is here: https://github.com/ros/robot_state_publisher/issues/1, but it looks like its been fixed. Comment by 130s on 2014-08-15: Cool. Thanks to this thread, my colleague was able to make a very simple model for a polar coordinate robot using slider joint and operate it by MoveIt! on RViz. https://www.youtube.com/watch?v=43HSaMrrLpQ&feature=youtu.be and https://www.youtube.com/watch?v=edgZP1T48pY&feature=youtu.be
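For reference, the mimic mechanism discussed above looks roughly like this in URDF (joint names, axis, and limits here are placeholders, not the asker's actual model):

```xml
<joint name="boom_seg2_joint" type="prismatic">
  <parent link="boom_seg1"/>
  <child link="boom_seg2"/>
  <axis xyz="1 0 0"/>
  <limit lower="0.0" upper="0.5" effort="10.0" velocity="0.1"/>
  <!-- This joint tracks boom_seg1_joint: value = multiplier * master + offset -->
  <mimic joint="boom_seg1_joint" multiplier="1.0" offset="0.0"/>
</joint>
```

Commanding the first segment's joint then moves every mimicking segment in lockstep, giving the single "boom extension" degree of freedom the answer describes.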
{ "domain": "robotics.stackexchange", "id": 17089, "tags": "rviz, urdf, robot-model, moveit, collision-object" }
Are molecules with bond orders less than one stable?
Question: Are molecules with bond orders less than one stable? My professor commented that they were "barely alive" but what does he mean, scientifically? I know that molecules with fractional bond orders greater than one can exist - i.e. nitrogen oxide has a bond order of 2.5 (through MO calculations) and it exists just fine. So are molecules with fractional bond orders below one just very unstable? Answer: Molecules possessing a bond of order below 1 can be perfectly stable, in the sense that their resulting molecular structure lies in an energetic potential well. Strictly speaking it is enough that at $T=\pu{0 K}$ and in the absence of any interactions with matter or fields, the molecule will not spontaneously disassemble. However, there is no need to go quite so far to protect such a molecule from decomposing; there are examples of species which are chemically important in regular laboratory conditions. All other things being equal, it is true that species with bond orders below 1 are relatively unstable. This is mainly because the fractional bond is comparatively weak (requiring comparatively little activation energy to break, i.e. a smaller $E_\mathrm{a}$), and because in most cases the molecule can react with other substances in such a way as to form products with all covalent bonds of bond order 1 or higher (increasing the exergonicity of most reactions, i.e. a more negative $\Delta_\mathrm{r}G$). Since the kinetic barrier for reaction is comparatively low, and the thermodynamic drive for reaction is comparatively high, species with bond order below 1 tend to need some extra protection against the world to stay in one piece. For example, diborane and trimethylaluminium are compounds possessing bonds of order 0.5, and while being indefinitely stable when pure, they spontaneously ignite in air from exposure to oxygen and moisture. 
As rightly pointed out in the comments, noble gas compounds require fractional bond orders, yet several compounds can be obtained and stored (especially xenon compounds), though they tend to be sensitive to moisture and heating. Boron is also responsible for a very interesting class of compounds in which many boron atoms attach to each other in cage-like structures with very complicated bonding, in which fractional order bonds are involved. Some of the larger and more symmetric structures can be relatively stable, especially with adequate substituents. In space, there's really not much of anything around, so you might expect to find some molecules with fractional bond orders floating about. Indeed, one can find the trihydrogen cation, which is actually one of the most common ions in the Universe!
{ "domain": "chemistry.stackexchange", "id": 1240, "tags": "bond" }
BFS implementation that walks a tile-based field
Question: I wrote a BFS implementation that walks a tile-based field. It takes a function that should return true for walkable tiles and false for walls, plus the start and end points. It currently takes about 5 seconds to find the shortest path from (0, 0) to (1000, 1000), which isn't bad, but it really isn't great.

    import qualified Data.HashSet as H
    import Data.Maybe (mapMaybe, isNothing)
    import Data.List (foldl')

    bfs :: (Int -> Int -> Bool) -> -- The field function. Returns True if tile is empty, False if it's a wall
           (Int, Int) ->           -- Starting position
           (Int, Int) ->           -- Final position
           Int                     -- Minimal steps
    bfs field start end = minSteps H.empty [start] 0
      where
        minSteps visited queue steps
          | end `elem` queue = steps + 1
          | otherwise        = minSteps newVisited newQueue (steps + 1)
          where
            (newVisited, newQueue) = foldl' aggr (visited, []) queue
            aggr (vis, q) node =
              if H.member node vis
                then (vis, q)
                else (H.insert node vis, neighbors node ++ q)
        neighbors (nx, ny) = filter (uncurry field) $
          map (\(x, y) -> (nx + x, ny + y)) [(1, 0), (0, -1), (-1, 0), (0, 1)]

    hugeField x y = x >= 0 && x <= 1000 && y >= 0 && y <= 1000

    main = print $ bfs hugeField (0, 0) (1000, 1000)

Answer: I managed to make it five times faster, but my solution uses some ad-hoc hacks. First, I replaced the queue with a set. This makes checking for the solution faster and allows updating visited with a single set-union operation.

    bfs field start end = minSteps H.empty (H.singleton start) 0
      where
        minSteps visited queue steps
          | end `H.member` queue = steps + 1
          | otherwise            = minSteps newVisited newQueue (steps + 1)
          where
            -- add whole frontier to visited set
            newVisited = queue `H.union` visited
            -- make next frontier from non-visited neighbors
            newQueue = H.fromList (concatMap neighbors $ H.toList queue)
                       `H.difference` newVisited

This change makes the program slightly slower on its own, but it allows further optimizations. Now to the ugly hack.
Assuming you are on a 64-bit machine and the coordinates of empty tiles are in the range [0..2^31), it is possible to pack a pair of coordinates into a single Int. This reduces the memory footprint, improves cache locality and simplifies hash calculation. Here are two functions to pack/unpack coordinates:

    enc :: (Int, Int) -> Int
    enc (x, y) = shiftL (x .&. 0xFFFFFFFF) 32 .|. (y .&. 0xFFFFFFFF)

    dec :: Int -> (Int, Int)
    dec z = (shiftR z 32, z .&. 0xFFFFFFFF)

Use them to store packed coordinates in visited and queue and this will give you a small improvement over the original code. So, we made some ugly changes but gained only a very small speedup. Now it's time for magic: changing a single line of code makes it 5 times faster. Just import Data.IntSet instead of Data.HashSet. Here is the full code:

    import qualified Data.IntSet as H
    import Data.Bits

    bfs field start end = minSteps H.empty (H.singleton $ enc start) 0
      where
        minSteps visited queue steps
          | enc end `H.member` queue = steps + 1
          | otherwise                = minSteps newVisited newQueue (steps + 1)
          where
            newVisited = queue `H.union` visited
            newQueue = H.fromList (concatMap (map enc . neighbors . dec) $ H.toList queue)
                       `H.difference` newVisited
        neighbors (nx, ny) = filter (uncurry field) $
          map (\(x, y) -> (nx + x, ny + y)) [(1, 0), (0, -1), (-1, 0), (0, 1)]

    enc :: (Int, Int) -> Int
    enc (x, y) = shiftL (x .&. 0xFFFFFFFF) 32 .|. (y .&. 0xFFFFFFFF)

    dec :: Int -> (Int, Int)
    dec z = (shiftR z 32, z .&. 0xFFFFFFFF)

    hugeField x y = x >= 0 && x <= 1000 && y >= 0 && y <= 1000

    main = print $ bfs hugeField (0, 0) (1000, 1000)

The reason for such a speedup is that IntSet represents dense sets in a much more compact form than HashSet, and this makes union and difference faster.
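The two ideas in this answer, a set-valued frontier and integer-packed coordinates, are language-agnostic. As a point of comparison, here is a minimal Python sketch of the same algorithm; the names enc/dec and the step-counting convention mirror the Haskell above, and this is an illustration rather than the reviewed code:

```python
def enc(x, y):
    # Pack two non-negative 32-bit coordinates into one int, like the Haskell enc.
    return (x << 32) | y

def dec(z):
    # Unpack a packed coordinate back into an (x, y) pair.
    return z >> 32, z & 0xFFFFFFFF

def bfs(field, start, end):
    # field(x, y) -> True if the tile is walkable.
    visited = set()
    frontier = {enc(*start)}
    target = enc(*end)
    steps = 0
    while frontier:
        steps += 1
        if target in frontier:
            return steps  # same "steps + 1" convention as the original
        visited |= frontier            # add the whole frontier in one union
        nxt = set()
        for z in frontier:
            x, y = dec(z)
            for dx, dy in ((1, 0), (0, -1), (-1, 0), (0, 1)):
                if field(x + dx, y + dy):
                    nxt.add(enc(x + dx, y + dy))
        frontier = nxt - visited       # drop visited tiles in one difference
    return -1  # target unreachable

huge_field = lambda x, y: 0 <= x <= 1000 and 0 <= y <= 1000
```

As in the Haskell version, visited bookkeeping happens with one union and one difference per frontier rather than per node, which is where the bulk of the win comes from.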
{ "domain": "codereview.stackexchange", "id": 7247, "tags": "optimization, haskell, breadth-first-search" }
Which alcohols change CrO3/H2SO4 from orange to green?
Question: Which of the following sets of compounds cannot turn a clear orange solution of $\ce{CrO3/H2SO4}$ to a greenish opaque solution? (More than one answer is correct) A) Ethanol B) 2-Propanol C) t-Butanol D) Phenol I think that for turning a clear orange solution of $\ce{CrO3/H2SO4}$ to a greenish opaque solution, the organic compound should be acidic, as $\ce{CrO3}$ turns green in basic medium. I am not sure about this. Hence, I think A and D are the answers. Am I right? Answer: This particular reaction is known as the Jones oxidation, in which primary ($1^\circ$) alcohols are oxidized to aldehydes, and then again to carboxylic acids, and secondary ($2^\circ$) alcohols are oxidized to ketones. The color change is due to the reduction of $\color{orange}{\ce{Cr^{+6}}}$ in $\ce{H2CrO4}$ to $\ce{Cr^{+4}}$ in $\ce{H2CrO3}$. This in turn disproportionates to $\ce{Cr^{+5}}$ and $\color{green}{\ce{Cr^{+3}}}$ in $\ce{[Cr(H2O)6]^{3+}}$, with $\ce{Cr^{+5}}$ further oxidizing the alcohols to also form $\color{green}{\ce{Cr^{+3}}}$. The answer to your question lies in which carbinol carbons are capable of accepting an additional bond to the alcoholic oxygen, and which are not; therefore C and D are correct.
{ "domain": "chemistry.stackexchange", "id": 5578, "tags": "organic-chemistry, redox, alcohols, color" }
Simple Tip Calculator with JavaScript Console App
Question: Simple console tip calculator app in JavaScript. I would like to get some feedback on improving my code. Thank you beforehand. Source code:

    //header
    const headerMessage = () => {
      console.log('Tip Calculator')
    }

    //service quality
    const serviceQuality = (billEntered, userOption) => {
      const userInput = Number(userOption);
      switch (userInput) {
        case 1:
          return (billEntered * 0.3)
          break;
        case 2:
          return (billEntered * 0.2)
          break;
        case 3:
          return (billEntered * 0.1)
          break;
        case 4:
          return (billEntered * 0.05)
          break;
      }
    }

    const tipCalulcator = () => {
      headerMessage()
      const enterBill = prompt('Please tell me bill amount: $ ');
      console.log(`Bill amount was $ ${enterBill}`)
      console.log(`How was your service ? Please enter number of the options`)
      const enterOption = prompt(`
        1) Outstanding (30%)
        2) Good (20%)
        3) It was ok (10%)
        4) Terrible (5%)`)
      const result = serviceQuality(enterBill, enterOption);
      console.log(`Tip amount ${result}`)
      const total = Number(enterBill) + result;
      console.log(`Overall paycheck ${total}`)
    }

    tipCalulcator();

Answer: A couple of things to consider: When dealing with a switch statement, if a case uses the return keyword then you do not need to also include a break; when the return is hit, no further code in that function will be reached. You should handle unexpected inputs; for example, serviceQuality should account for non-numeric inputs, or a number that is not 1-4. Look into try/catch statements and using a default case on your switch. Because you are not reusing headerMessage, this function is not really required. Your variable names are fairly unintuitive; consider changing them so that it's more obvious what each thing is, such as enterBill -> billAmount and enterOption -> serviceOption. It might be worth using those same names as the serviceQuality arguments too.
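For what it's worth, the "lookup table plus explicit validation" shape the review hints at looks like this in Python (hypothetical names; the reviewed code is JavaScript, but the idea carries over directly):

```python
# Map each service option to its tip rate; dict membership doubles as validation,
# replacing the switch statement and its missing default case.
TIP_RATES = {1: 0.30, 2: 0.20, 3: 0.10, 4: 0.05}

def service_quality(bill_amount, service_option):
    """Return the tip amount, rejecting non-numeric or out-of-range options."""
    try:
        option = int(service_option)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {service_option!r}")
    if option not in TIP_RATES:
        raise ValueError(f"option must be 1-4, got {option}")
    return bill_amount * TIP_RATES[option]
```

With the table in place, adding or changing a tier is a one-line data edit instead of a new switch case, and every invalid input fails loudly instead of silently returning undefined.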
{ "domain": "codereview.stackexchange", "id": 37092, "tags": "javascript" }
Algorithm optimisation to get first parent with specific kind of class
Question: I am writing an extension for UIViewController to search all the parents and return a specific parent which is a kind of a specific class. Here is my attempt:

    extension UIViewController {
        var rootViewController: RootViewController? {
            var parentController: UIViewController? = parent
            while let currentParent = parentController {
                if (currentParent is RootViewController) {
                    return (currentParent as? RootViewController)!
                } else if let parentOfParent = currentParent.parent, parentOfParent != currentParent {
                    parentController = parentOfParent
                } else {
                    parentController = nil
                }
            }
            return nil
        }
    }

Is there a better approach to solving this, maybe a recursive way to handle it? And is it better in terms of performance? Answer: I wouldn't care much about performance here, since a view controller hierarchy does not have hundreds of elements. But some things can be simplified. First,

    if (currentParent is RootViewController) {
        return (currentParent as? RootViewController)!
    }

can be simplified to

    if let rootVC = currentParent as? RootViewController {
        return rootVC
    }

which also has the advantage that no forced unwrapping operator is needed. Even if we know that it cannot fail here (because the type has been checked in the if-statement), it is better to write the code in a way that makes it obvious that it cannot crash. Next, the loop itself can be simplified:

    var rootViewController: RootViewController? {
        var currentViewController = self
        while case let parentController? = currentViewController.parent,
              parentController != currentViewController {
            if let rootVC = parentController as? RootViewController {
                return rootVC
            }
            currentViewController = parentController
        }
        return nil
    }

Here the "optional pattern" case let parentController? = currentViewController.parent is used to check that there is a parent, and the constraint parentController != currentViewController ensures that the parent is different from the current view controller.
But I don't think that a view controller can be its own parent. If you don't have a special reason to consider this case, the code can be simplified further using optional binding:

    var rootViewController: RootViewController? {
        var currentViewController = self
        while let parentController = currentViewController.parent {
            if let rootVC = parentController as? RootViewController {
                return rootVC
            }
            currentViewController = parentController
        }
        return nil
    }

Another suggestion is to make the method generic, so that it can be used to find a parent view controller of any given class:

    func parent<T: UIViewController>(ofType: T.Type) -> T? {
        var currentViewController = self
        while let parentController = currentViewController.parent {
            if let parent = parentController as? T {
                return parent
            }
            currentViewController = parentController
        }
        return nil
    }

Finally, you can replace the loop by recursion, using the flatMap method of Optional and the nil-coalescing operator ??:

    func parent<T: UIViewController>(ofClass: T.Type) -> T? {
        return parent.flatMap { ($0 as? T) ?? $0.parent(ofClass: T.self) }
    }

Here parent.flatMap { ... } returns nil if parent == nil, which eventually terminates the recursion. Otherwise $0 is the (unwrapped) parent. If the conditional cast $0 as? T succeeds then this will be the return value; otherwise the method is called recursively on the parent view controller. I doubt that this will be faster, and it is more difficult to read and to debug, so it's up to you which approach you prefer.
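The walk-the-parent-chain pattern applies to any object tree with a parent link, not just UIKit. A small Python sketch with a made-up Node class shows both the iterative and the recursive variant:

```python
class Node:
    """A tree node with an optional parent link (stand-in for UIViewController)."""

    def __init__(self, parent=None):
        self.parent = parent

    def ancestor_of_type(self, cls):
        # Iterative version: walk up until an ancestor of the wanted type appears.
        current = self.parent
        while current is not None:
            if isinstance(current, cls):
                return current
            current = current.parent
        return None

    def ancestor_of_type_rec(self, cls):
        # Recursive version, mirroring the flatMap/?? one-liner above.
        if self.parent is None:
            return None
        if isinstance(self.parent, cls):
            return self.parent
        return self.parent.ancestor_of_type_rec(cls)

class Root(Node):
    """Stand-in for RootViewController."""
```

As with the Swift version, the recursive variant is shorter but no faster; both stop at the first ancestor matching the requested type and return None when there is none.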
{ "domain": "codereview.stackexchange", "id": 26928, "tags": "algorithm, recursion, swift, ios" }