Specify Index Column in EF Model-First How do I define a property as an index column in EF Model-First? Five years after this question was first asked for EF 4.0, do we still have to manually edit the .edmx file to add an index? I believe this is a pretty trivial scenario and there must be a better way, isn't there? In 2020, in the edmx editor, I still don't see any visual way to specify whether a column should be unique or whether I want an index on it. Very disappointing; this is such a basic thing. Sorry, I didn't read the question properly. This applies only to Code First; the OP asked about Model-First. Starting with EF 6.1 you can use the Index attribute. See https://msdn.microsoft.com/en-us/data/jj591583.aspx for details like unique indexes and multicolumn indexes.
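For reference, the Index attribute mentioned above (EF 6.1+, Code First only) is applied as a declarative annotation like this; the entity and property names below are invented for illustration, and IndexAttribute lives in System.ComponentModel.DataAnnotations.Schema (shipped in EntityFramework.dll):

```csharp
using System.ComponentModel.DataAnnotations.Schema;

public class Customer
{
    public int Id { get; set; }

    // Plain non-unique index on a single column
    [Index]
    public string City { get; set; }

    // Unique index
    [Index(IsUnique = true)]
    public string Email { get; set; }

    // Multicolumn index: share a name, order columns by the second argument
    [Index("IX_FullName", 1)]
    public string FirstName { get; set; }

    [Index("IX_FullName", 2)]
    public string LastName { get; set; }
}
```

As the answer notes, this works only for Code First / Code First Migrations; with Model-First the .edmx still has to be edited by hand.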
common-pile/stackexchange_filtered
Searching and finding a word within a string in C++/CLI Hi everyone. I need your help. I am writing a GUI application in C++/CLI with Visual Studio 2017. I want to search for and find a word within a string. You're referring to managed String objects, so this is easy to look up. String^ text = "I love apple"; bool result = text->Contains("apple"); Full documentation is here in the MSDN
common-pile/stackexchange_filtered
Starting php-cgi: spawn-fcgi: child signaled: 25 Starting php-cgi: spawn-fcgi: child signaled: 25 Does anybody know what this could mean? I'm trying to start php-cgi on CentOS. The only thing I can say is that it was working before the hosting provider shut the server down for maintenance. I don't know if it's important, but nmap -v -sV localhost -p 9000 shows: 9000/tcp closed cslistener. Port 9000 should be open, right? Sorry to be so vague; I know almost nothing about this stuff. If you need more info, please ask in the comments. Edit: output of ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 32512
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 32512
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
According to man 7 signal, on x86 hardware signal 25 is SIGXFSZ, "File size limit exceeded". Update your question with the output of ulimit -a (from whatever shell you're trying to start php-fpm from). I added the output of ulimit -a. @p2ph8 ok, it's not a ulimit problem. Can you check your log files and make sure they haven't grown too large? (2GB is the maximum file size for any 32-bit program without largefile support) That was it, man! The error log was 2GB. Thanks!
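Since the root cause here turned out to be a log file hitting the 2 GB ceiling (signal 25, SIGXFSZ), a quick way to hunt for the culprit is a sketch like the following; the /var/log starting point is an assumption, so adjust it to wherever your logs actually live:

```shell
#!/bin/sh
# 2^31 - 1 bytes: the largest file a 32-bit process without
# large-file support can write before the kernel sends SIGXFSZ.
LIMIT=2147483647

# Print every regular file strictly larger than LIMIT bytes under
# a directory tree, i.e. files that have already hit the ceiling.
find_oversized() {
    find "$1" -type f -size +"$LIMIT"c 2>/dev/null
}

# e.g.:  find_oversized /var/log
```

If a log turns up, truncating it in place (`: > file.log`) is safer than deleting it, since a running php-cgi may still hold the file open.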
common-pile/stackexchange_filtered
Web messages interface - How do I control whether messages are sent as SMS or MMS? I've been using messages.android.com for a few weeks now and it's amazing, even if sluggish when "new conversation" is clicked. The only issue I have is that if I'm texting more than one person it switches to sending as MMS and I can't seem to control it. I tried this but that setting doesn't seem to be listed. I'm using the default Messaging app on a dual-SIM Moto G5. My phone recently upgraded itself to Android 8 but it hasn't solved the issue. It seems to happen even with very short messages, and even if it's just text without any subject or media. The message converts to MMS as soon as I add a second recipient. Any suggestions? I will add a Windows steps recorder when I have a bit more time. If I add two recipients to a text message on the phone itself it works as normal and remains an SMS. It would be useful if someone could see if they get the same behaviour. Another edit: the problem persists with the new interface, whether opened directly or through a redirection from the old one. When you add a media item such as a photo to your message, it automatically turns the message into an MMS. Thanks for your response, but I'm afraid it happens even if it's just text. The message converts to MMS as soon as I add a second recipient. However, what you're saying is true, so I won't mark your answer down. In fact, I will edit my question to make it clearer.
common-pile/stackexchange_filtered
Remove audio from video file with FFmpeg How can I strip the audio track out of a video file with FFmpeg? Simple: ffmpeg -i input.mp4 -map 0:v -c copy video.mp4 and ffmpeg -i input.mp4 -map 0:a -c copy audio.mp4 (you can also use -an and -vn to remove audio/video) You remove audio by using the -an flag: input_file=example.mkv output_file=example-nosound.mkv ffmpeg -i $input_file -c copy -an $output_file This ffmpeg flag is documented here. I'm a bash and ffmpeg newbie, but I put this answer together with some other pieces to create function ffsilent { ffmpeg -i $1 -c copy -an "$1-nosound.${1#*.}" } which you can use in your profile to quickly create a silent version of any video file. @Aaron nice, but it should be function ffsilent { ffmpeg -i "$1" -c copy -an "${1%.*}-nosound.${1#*.}" } or you'll end up with "file.mp4-nosound.mp4" when using it on "file.mp4". This doesn't carry over GPS coordinates. Using -c copy -an works on most video files, but it won't strip audio from SWF (Shockwave Flash) files. The solution is to shorten it to just -an; then it works. Are you sure that this avoids re-encoding the video? @rlittles Yes, -c copy always avoids re-encoding; if it can't, it will fail with an error. Be sure to put the -an flag before $output_file. You probably don't want to re-encode the video (a slow and lossy process), so try: input_file=example.mkv output_file=example-nosound.mkv ffmpeg -i $input_file -vcodec copy -an $output_file (n.b. some Linux distributions now come with the avconv fork of ffmpeg) This didn't make any difference for me compared to the accepted solution. vcodec is an alias for -c:v, so specifically it'd copy the video stream only. The only data you'd be dropping with this would be subtitles, metadata, etc. from what I can see. 
We can call this the "only video" solution :+1: I agree: This here is the "copy video only" solution, whereas the accepted answer is the "copy everything but audio" solution. avconv -i [input_file] -vcodec copy -an [output_file] If you cannot install ffmpeg because avconv is what's available instead, try that. You can also use the -map option of ffmpeg to get better control over what exactly will be put into the final video container. Let's say for example that your video file my_video.mp4 is composed this way:
Input #0
  Stream #0:0 Video: h264
  Stream #0:1 Audio: English
  Stream #0:2 Audio: German
  Stream #0:3 Audio: Japanese
  Stream #0:4 Audio: Spanish
  Stream #0:5 Audio: Italian
To remove all audio tracks (like the -an option does): ffmpeg -i my_video.mp4 -map 0 -map -0:a -c copy my_video.noaudio.mp4 -map 0 grabs the entire input (videos, audios, subtitles, metadata, chapters, etc.). -map -0:a removes all audio tracks from input 0 (notice the - sign). -c copy copies the streams as-is without re-encoding. To remove the Japanese and Spanish tracks: ffmpeg -i my_video.mp4 -map 0 -map -0:3 -map -0:4 -c copy my_video.nojap.noesp.mp4 -map -0:3 removes the 3rd track from input 0, which is the Japanese audio. -map -0:4 removes the 4th track from input 0, which is the Spanish audio. To remove all audio tracks but Italian: ffmpeg -i my_video.mp4 -map 0 -map -0:a -map 0:5 -c copy my_video.ita.mp4 -map -0:a removes all audio tracks from input 0. -map 0:5 inserts the 5th track from input 0, which is the Italian audio (notice NO - sign in this case). This is also very useful when dealing with more than one file. 
For example, when grabbing the video from one file, audio tracks from another one, and subtitles and metadata from a third one. To show the track map of a media file you can use the ffprobe utility bundled with ffmpeg: ffprobe my_video.mp4 I put together a short code snippet that automates the process of removing audio for a whole directory that contains video files:
FILES=/{videos_dir}/*
output_dir=/{no_audio_dir}
for input_file in $FILES
do
  file_name=$(basename $input_file)
  output_file="$output_dir/$file_name"
  ffmpeg -i $input_file -c copy -an $output_file
done
I hope this one helps! Out of interest, how would I use this snippet when there are spaces in the video dir (and output dir)? @PaulSkinner adding quotes should be enough, e.g.: file_name=$(basename "$input_file") I have taken @apolak's answer and turned it into a recursive loop for all folders underneath the input folder. It will retain the directory layout of the input folder, and you can set a max depth for it to recurse through. The output directory must not be a child of the input directory or it will error, to stop accidental infinite recursion. It should also be fine with spaces in filenames and paths. NOTE: all files within the input directory will be attempted to be processed, so make sure they're all video files. 
#!/bin/bash
process_files() {
  local current_dir="$1"
  local output_dir="$2"
  local max_depth="$3"
  local depth="${4:-0}"  # Set default value of 0 if $4 is not set

  if [ "$depth" -gt "$max_depth" ]; then
    return
  fi

  # Check if output directory is a subdirectory of the input directory and error
  # This should stop accidental recursive loops
  if [[ "$output_dir" == "$current_dir"* ]]; then
    echo "Error: Output directory is a subdirectory of the input directory"
    exit 1
  fi

  mkdir -p "$output_dir"

  for input_file in "$current_dir"/*
  do
    if [ -d "$input_file" ]; then
      # If the input file is a directory, recurse into it
      process_files "$input_file" "$output_dir/$(basename "$input_file")" "$max_depth" "$((depth+1))"
    elif [ -f "$input_file" ]; then
      # If the input file is a regular file, process it
      local file_name=$(basename "$input_file")
      local output_file="$output_dir/$file_name"
      ffmpeg -i "$input_file" -c copy -an "$output_file"
    fi
  done
}

# Call function with input and output directories and maximum depth
process_files "/Volumes/Storage/ORIGINAL" "/Volumes/Storage/MUTED" 2  # Set the maximum recursion depth to 2
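Going back to the ffsilent helper from the comments above: here is a quoting-safe version in full. It assumes ffmpeg is on your PATH, and it uses ${in##*.} (last dot) rather than the comment's ${1#*.} (first dot), so filenames containing extra dots keep the right extension:

```shell
#!/bin/sh
# Create a silent copy of a video without re-encoding:
#   "movie night.mp4" -> "movie night-nosound.mp4"
ffsilent() {
    in="$1"
    out="${in%.*}-nosound.${in##*.}"   # name minus extension + suffix + extension
    ffmpeg -i "$in" -c copy -an "$out"
}
```

As with the snippets above, -c copy -an leaves the video stream untouched, so the only cost is I/O.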
common-pile/stackexchange_filtered
Flags in general position. Let $V$ be a finite dimensional vector space. If I consider two complete flags $\mathcal{V}$ and $\mathcal{W}$, then I know that they are in general position if $V_i \cap W_{n-i}=0$ for all $i$. If I have a number $n >2$ of complete flags, when are they said to be in general position? It depends on the circumstances... absent any further information, I would interpret “general position” to mean an unspecified dense open subset of $F(V)^n$, where $F(V)$ is the flag variety. @JakeLevinson the circumstance is Pieri's formula. In particular I am talking about the proof given in "3264 and all that" at page 145. To compute the triple intersection $\sigma_b \sigma_a \sigma_{c^*}$, where $\sigma_b$ is a special class and $|a|+b=|c|$, the author considers the Schubert cycles $\Sigma_a(\mathcal{V}),\Sigma_b(\mathcal{U})$ and $\Sigma_{c^*}(\mathcal{W})$ with respect to three general flags $\mathcal{V},\mathcal{U},\mathcal{W}$. It is not clear to me why that is the right way to compute that product of Schubert classes... For that calculation, I think a lot of it comes down to just considering the two non-special Schubert cycles, which is good because it's easier for two flags to be in general position than for three. In particular, any two flags that satisfy the condition you gave are equivalent (under change of coordinates) to any two other such flags. Therefore they must be "general enough". @JakeLevinson But he starts by considering the intersection $\Sigma_a(\mathcal{V}) \cap \Sigma_{c^*}(\mathcal{U})$ to get the condition $a_i \leq c_i$, and then considers $\Sigma_a(\mathcal{V}) \cap \Sigma_{c^*}(\mathcal{W}) \cap \Sigma_{b}(\mathcal{U})$ to get $a_{i} \leq c_i \leq a_{i-1}$, where he only cares about the plane $U_{n-k+1-b}$ which (he says) must be general. He doesn't care about the transversality of $\Sigma_{b}(\mathcal{U})$ with respect to the irreducible components of $\Sigma_a(\mathcal{V})\cap \Sigma_{c^*}(\mathcal{W})$. 
So what I think is that for $U_{n-k+1-b}$ general enough, the Schubert cycle $\Sigma_b(\mathcal{U})$ is generically transverse to each irreducible component of $\Sigma_a(\mathcal{V})\cap \Sigma_{c^*}(\mathcal{W})$. Is this true? Yes, by Kleiman’s Theorem.
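As a sanity check on the two-flag condition in the question, it may help to record the standard dimensional reformulation (a well-known equivalence, stated here for reference rather than taken from the thread): for complete flags $\mathcal{V}, \mathcal{W}$ in an $n$-dimensional $V$,

```latex
V_i \cap W_{n-i} = 0 \ \text{ for all } i
\quad\Longleftrightarrow\quad
\dim\left( V_i \cap W_j \right) = \max(0,\; i + j - n)
\ \text{ for all } 0 \le i, j \le n .
```

That is, transverse flags intersect in the smallest dimension permitted by the inequality $\dim(V_i \cap W_j) \ge \dim V_i + \dim W_j - n$.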
common-pile/stackexchange_filtered
Unable to mount encrypted TimeMachine backupbundle using DiskImageMounter I've been backing up two of my MacBooks to a network-accessible USB disk. The backups have been done using TimeMachine and are encrypted. One of the machines was recently damaged and unrecoverable, and now I'm trying to pull some important files from the backup image. On the disk, I have a "lars-mbp.backupbundle", which seems to be an encrypted disk image. I've tried to mount the image with DiskImageMounter, but I get an error:
The following disk images couldn't be opened
Reason: no mountable file system
The thing is, I was able to open this bundle a few days ago and browse the files contained in the backup, but now I'm attempting to do it again and I've gotten this error. I've been browsing through a couple of threads that seem to be related: Repair Time Machine sparsebundle that will no longer mount. However, this doesn't appear to apply to me. Here's why: Running hdiutil attach -nomount -readwrite lars-mbp.backupbundle, I only get this: /dev/disk5. Unlike the examples in the linked thread above, there are no partitions or any mention of HFS. The com.apple.TimeMachine.MachineID.plist file has a different <key>VerificationState</key> value. While the linked thread assumes the value is <integer>2</integer>, in my case it is 1. Here's the complete plist file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>VerificationDate</key>
	<date>2021-01-17T13:54:45Z</date>
	<key>VerificationExtendedSkip</key>
	<false/>
	<key>VerificationState</key>
	<integer>1</integer>
	<key>com.apple.backupd.HostUUID</key>
	<string>REDACTED</string>
	<key>com.apple.backupd.ModelID</key>
	<string>MacBookPro14,3</string>
</dict>
</plist>
I've been trying to figure out what VerificationState=1 means, but I haven't had much luck. 
I came across one hint that suggests that it means it's in a locked state, but I haven't found any solution to clear this. My only real clue as to the root cause is the fact that I was able to mount the backup bundle successfully the other night, and as a result of that, something caused the bundle to get locked. Any help would be appreciated. Edit: Here's some extra context--an attempt to mount the bundle: $ hdiutil attach -nomount -noverify -noautofsck -debug -stdinpass lars-mbp.backupbundle/ Enter disk image passphrase: [hidden] 2021-02-10 16:03:09.576 hdiutil[5939:97512] DIHLDiskImageAttach: disabling legacy image format attach 2021-02-10 16:03:09.576 hdiutil[5939:97512] DIHLDiskImageAttach: creating DIHelperProxy 2021-02-10 16:03:09.576 hdiutil[5939:97512] with dictionary: { agent = hdiutil; "auto-fsck" = 0; debug = 1; "drive-options" = {length = 42, bytes = 0x62706c69 73743030 d0080000 00000000 ... 00000000 00000009 }; "image-options" = {length = 116, bytes = 0x62706c69 73743030 d2010203 045a7061 ... 
00000000 0000004f }; "main-url" = "file:///Volumes/LarsBackup/lars-mbp.backupbundle/"; "mount-attempted" = 0; "mount-required" = 0; operation = DIHelperAttach; quiet = 0; "skip-verify" = 1; verbose = 0; } 2021-02-10 16:03:09.576 hdiutil[5939:97512] [DIHelperProxy alloc] 2021-02-10 16:03:09.576 hdiutil[5939:97512] [DIHelperProxy alloc] returning self 0x7f8640a04110, retainCount 1 2021-02-10 16:03:09.576 hdiutil[5939:97512] DIHLDiskImageAttach: running DIHelperProxy 2021-02-10 16:03:09.576 hdiutil[5939:97512] [DIHelperProxy performOperationReturning] entry 2021-02-10 16:03:09.576 hdiutil[5939:97512] [DIHelperProxy performOperationReturning] detaching thread 2021-02-10 16:03:09.577 hdiutil[5939:97537] [DIHelperProxy workerThread] entry 2021-02-10 16:03:09.577 hdiutil[5939:97537] [DIHelperProxy workerThread] setting up server 2021-02-10 16:03:09.577 hdiutil[5939:97537] [DIHelperProxy threadSetupServer] entry 2021-02-10 16:03:09.577 hdiutil[5939:97537] XPC: created intermediaryConnection connection 2021-02-10 16:03:09.577 hdiutil[5939:97537] XPC: creating helperconnection connection 2021-02-10 16:03:09.578 hdiutil[5939:97537] [DIHelperProxy threadSetupServer] exiting 2021-02-10 16:03:09.578 hdiutil[5939:97537] [DIHelperProxy threadLaunchToolAuthenticated] entry (spawn version) 2021-02-10 16:03:09.578 hdiutil[5939:97537] launching helper tool at "/System/Library/PrivateFrameworks/DiskImages.framework/Resources/diskimages-helper". 
2021-02-10 16:03:09.580 hdiutil[5939:97537] [DIHelperProxy threadLaunchToolAuthenticated] exiting (spawn version) 2021-02-10 16:03:09.580 hdiutil[5939:97537] [DIHelperProxy workerThread] running runloop 2021-02-10 16:03:09.591 hdiutil[5939:97515] [DIHelperProxy connectToFramework] entry, helper 2021-02-10 16:03:09.591 hdiutil[5939:97515] [DIHelperProxy sendOperationToHelper] entry 2021-02-10 16:03:09.591 hdiutil[5939:97515] [DIHelperProxy sendOperationToHelper] starting operation with helper 2021-02-10 16:03:09.591 hdiutil[5939:97515] [DIHelperProxy sendOperationToHelper] exit 2021-02-10 16:03:09.591 hdiutil[5939:97515] [DIHelperProxy connectToFramework] exit 2021-02-10 16:03:09.602 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] entry status proc called: initialize Initializing… myStatusProc: returning 0 2021-02-10 16:03:09.602 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] exit 2021-02-10 16:03:09.603 diskimages-helper[5942:97543] updateImageAndDriveDictionaries: before update _imageOptions: { "enable-keychain" = 1; } 2021-02-10 16:03:09.603 diskimages-helper[5942:97543] _driveOptions: { "auto-fsck" = 0; autodiskmount = 0; "unmount-timeout" = 0; } 2021-02-10 16:03:09.603 diskimages-helper[5942:97543] DIHelperAttach: performOperation: initializing framework 2021-02-10 16:03:09.604 diskimages-helper[5942:97543] { agent = hdiutil; "allow-tty-prompt" = 1; "auto-fsck" = 0; "auto-fsck-failure-override-type" = callback; "auto-open-ro-root" = 0; "auto-open-rw-root" = 0; "auto-stretch" = 0; "bundlebs-checkpointing" = 0; "bundlebs-localcloseonflush" = 0; "bundlebs-localcloseonidle" = 0; "bundlebs-localcloseonsleep" = 0; "bundlebs-localfdcount" = 6; "bundlebs-remotecloseonflush" = 0; "bundlebs-remotecloseonidle" = 0; "bundlebs-remotecloseonsleep" = 0; "bundlebs-remotefdcount" = 3; "burn-no-underrun-protection" = 0; "burn-synthesize-content" = 1; "bzip2-level" = 0; "callback-with-sla" = 1; debug = 1; "disable-encrypted-images" = 0; 
"disable-kernel-mounting" = 1; "disable-owners" = 0; "drive-options" = {length = 42, bytes = 0x62706c69 73743030 d0080000 00000000 ... 00000000 00000009 }; "enable-owners" = 0; "hfsplus-stretch-parameters" = { "hfsplus-stretch-allocation-block-size" = 4096; "hfsplus-stretch-allocation-file-size" = 8388608; "hfsplus-stretch-threshold" = 524288; }; "idle-timeout" = 15; "ifd-format" = UDZO; "ifd-segment-size" = 0; "iff-format" = UDZO; "iff-fs" = "HFS+"; "iff-layout" = GPTSPUD; "iff-source-owners" = auto; "iff-spotlight-indexing" = 0; "iff-temp-sparse-band-size" = 20480; "iff-temp-use-rw-if-possible" = 1; "iff-usehelper" = 1; "ignore-bad-checksums" = 0; "image-options" = {length = 116, bytes = 0x62706c69 73743030 d2010203 045a7061 ... 00000000 0000004f }; "main-url" = "file:///Volumes/LarsBackup/lars-mbp.backupbundle/"; "mount-attempted" = 0; "mount-point" = "/Volumes/"; "mount-private" = 0; "mount-required" = 0; "mount-type" = in; "nbi-spotlight-indexing" = 0; operation = DIHelperAttach; "pthread-reader-cap" = 4; quiet = 0; "skip-auto-fsck-for-system-images" = 1; "skip-previously-verified" = 1; "skip-sla" = 0; "skip-verify" = 1; "skip-verify-locked" = 0; "skip-verify-remote" = 1; "sparsebundle-compactonidle" = 0; "suppress-uiagent" = 1; "unmount-timeout" = 0; "use-keychain" = 1; verbose = 0; "zlib-level" = 1; } DILoadDriver: checking for disk image driver DILoadDriver: DI_kextExists() returned 0x00000000 (0) DIIsInitialized: returning NO 2021-02-10 16:03:09.606 diskimages-helper[5942:97543] -checkForPreviouslyAttachedImage: entry 2021-02-10 16:03:09.606 diskimages-helper[5942:97543] imageURL file:///Volumes/LarsBackup/lars-mbp.backupbundle/ 2021-02-10 16:03:09.606 diskimages-helper[5942:97543] shadowURL (null) 2021-02-10 16:03:09.606 diskimages-helper[5942:97543] sectionStart (null) sectionLength (null) 2021-02-10 16:03:09.606 diskimages-helper[5942:97543] checkForPreviouslyAttachedImage: setting legacy-disabled to 1 DIIsInitialized: returning YES 
DIBackingStoreNewWithCFURL: entry with file:///Volumes/LarsBackup/lars-mbp.backupbundle/ skip-permissions-check: true legacy-disabled: true DIBackingStoreInstantiatorProbe: entry file:///Volumes/LarsBackup/lars-mbp.backupbundle/ skip-permissions-check: true legacy-disabled: true DIBackingStoreInstantiatorProbe: probing interface 0 CBSDBackingStore CBSDBackingStore::newProbe directory, not a valid image file. CBSDBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 0, score -1000, CBSDBackingStore DIBackingStoreInstantiatorProbe: probing interface 1 CBundleBackingStore CBundleBackingStore::newProbe: got bundle CFBundle 0x7fb23a304e10 </Volumes/LarsBackup/lars-mbp.backupbundle> (not loaded) band-size: 268435456 bundle-backingstore-version: 1 diskimage-bundle-type: com.apple.diskimage.sparsebundle size:<PHONE_NUMBER>256 CFBundleInfoDictionaryVersion: 6.0 CBundleBackingStore::newProbe score 1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 1, score 1000, CBundleBackingStore DIBackingStoreInstantiatorProbe: probing interface 2 CRAMBackingStore CRAMBackingStore::probe: scheme "file": not ram: or ramdisk: scheme. 
CRAMBackingStore::probe: score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 2, score -1000, CRAMBackingStore DIBackingStoreInstantiatorProbe: probing interface 3 CCarbonBackingStore CCarbonBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 3, score -1000, CCarbonBackingStore DIBackingStoreInstantiatorProbe: probing interface 4 CDevBackingStore CDevBackingStore::newProbe: not /dev/disk or /dev/rdisk (/Volumes/LarsBackup/lars-mbp.backupbundle).CDevBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 4, score -1000, CDevBackingStore DIBackingStoreInstantiatorProbe: probing interface 5 CCURLBackingStore CCURLBackingStore::probe: scheme is: file CCURLBackingStore::probe: not recognized URL scheme. CCURLBackingStore::probe: score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 5, score -1000, CCURLBackingStore DIBackingStoreInstantiatorProbe: probing interface 6 CVectoredBackingStore CVectoredBackingStore::newProbe not "vectored" scheme. 
CVectoredBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/ DIBackingStoreInstantiatorProbe: interface 6, score -1000, CVectoredBackingStore DIBackingStoreInstantiatorProbe: selecting CBundleBackingStore DIBackingStoreNewWithCFURL: CBundleBackingStore CBundleBackingStore::newWithCFURLandToken: perm is 0 DIIsInitialized: returning YES DIBackingStoreNewWithCFURL: entry with file:///Volumes/LarsBackup/lars-mbp.backupbundle/token legacy-disabled: true bs-no-follow: true skip-permissions-check: true use-filename-for-prompt: lars-mbp.backupbundle DIBackingStoreInstantiatorProbe: entry file:///Volumes/LarsBackup/lars-mbp.backupbundle/token legacy-disabled: true bs-no-follow: true skip-permissions-check: true use-filename-for-prompt: lars-mbp.backupbundle DIBackingStoreInstantiatorProbe: probing interface 0 CBSDBackingStore CBSDBackingStore::newProbe score 100 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 0, score 100, CBSDBackingStore DIBackingStoreInstantiatorProbe: probing interface 1 CBundleBackingStore CBundleBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 1, score -1000, CBundleBackingStore DIBackingStoreInstantiatorProbe: probing interface 2 CRAMBackingStore CRAMBackingStore::probe: scheme "file": not ram: or ramdisk: scheme. 
CRAMBackingStore::probe: score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 2, score -1000, CRAMBackingStore DIBackingStoreInstantiatorProbe: probing interface 3 CCarbonBackingStore CCarbonBackingStore::newProbe: setting initial rval to +100 CCarbonBackingStore::newProbe score 100 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 3, score 100, CCarbonBackingStore DIBackingStoreInstantiatorProbe: probing interface 4 CDevBackingStore CDevBackingStore::newProbe: not /dev/disk or /dev/rdisk (/Volumes/LarsBackup/lars-mbp.backupbundle/token).CDevBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 4, score -1000, CDevBackingStore DIBackingStoreInstantiatorProbe: probing interface 5 CCURLBackingStore CCURLBackingStore::probe: scheme is: file CCURLBackingStore::probe: not recognized URL scheme. CCURLBackingStore::probe: score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 5, score -1000, CCURLBackingStore DIBackingStoreInstantiatorProbe: probing interface 6 CVectoredBackingStore CVectoredBackingStore::newProbe not "vectored" scheme. 
CVectoredBackingStore::newProbe score -1000 for file:///Volumes/LarsBackup/lars-mbp.backupbundle/token DIBackingStoreInstantiatorProbe: interface 6, score -1000, CVectoredBackingStore DIBackingStoreInstantiatorProbe: selecting CBSDBackingStore DIBackingStoreNewWithCFURL: CBSDBackingStore CBSDBackingStore::setNoFollow: setting _noFollow to 1 DIBackingStoreNewWithCFURL: instantiator returned 0 DIBackingStoreNewWithCFURL: returning 0 CBundleBackingStore::setURL entry CBundleBackingStore::setURL CFURLStat passed CBundleBackingStore::setURL fPath is /Volumes/LarsBackup/lars-mbp.backupbundle CBundleBackingStore::isRemote: returning false CBundleBackingStore::setURL got unique identifier CBundleBackingStore::newWithCFURLandToken: setURL returned 0 CBundleBackingStore::setSecurityToken entry CBundleBackingStore::setSecurityToken proxy encoding not supported CBundleBackingStore::newWithCFURLandToken: setSecurityToken returned 0 copyInfoPlist: fcntl(F_OPENFROM, Info.plist) passed copyInfoPlist: total read 502 bytes band-size: 268435456 bundle-backingstore-version: 1 diskimage-bundle-type: com.apple.diskimage.sparsebundle size:<PHONE_NUMBER>256 CFBundleInfoDictionaryVersion: 6.0 CBundleBackingStore::processBundle: getting kDIBackingStoreSizeKey CBundleBackingStore::processBundle: dfLength<PHONE_NUMBER>256 CBundleBackingStore::processBundle: getting kDIBackingStoreBandSizeKey CBundleBackingStore::processBundle: _bytesPerBand 268435456 CBundleBackingStore::newWithCFURLandToken: processBundle returned 0 CBundleBackingStore::isRemote: returning false CBundleBackingStore::processOptions: _maxOpenBands is 6 entries CBundleBackingStore::processOptions: kMaxAsyncCloses is 16 CBundleBackingStore::processOptions: _bandTableCloseOnSleep is no CBundleBackingStore::processOptions: _bandTableCloseOnIdle is no CBundleBackingStore::processOptions: _bandTableCloseOnFlush is no CBundleBackingStore::processOptions: _bandTable initialized CBundleBackingStore::setPermission entry inPerm = 0 
CBundleBackingStore::newWithCFURLandToken: setPermission returned 0 CBundleBackingStore::newWithCFURLandToken: returning 0 DIBackingStoreNewWithCFURL: instantiator returned 0 DIBackingStoreNewWithCFURL: returning 0 2021-02-10 16:03:09.608 diskimages-helper[5942:97543] -checkForPreviouslyAttachedImage: resolving file:///Volumes/LarsBackup/lars-mbp.backupbundle/ returned 0 2021-02-10 16:03:09.608 diskimages-helper[5942:97543] -checkForPreviouslyAttachedImage: imageUID ( "d16777227:i161" ) shadowUID (null) ***** testing: 0: d16777227:i161 (null) (null) 2021-02-10 16:03:09.609 diskimages-helper[5942:97543] DIHelperAttach: performOperation: image previously attached as 2021-02-10 16:03:09.609 diskimages-helper[5942:97543] { "system-entities" = ( { "content-hint" = ""; "dev-entry" = "/dev/disk5"; } ); } 2021-02-10 16:03:09.609 diskimages-helper[5942:97543] -[DIHelperHDID initWithDiskImage:] 2021-02-10 16:03:09.609 diskimages-helper[5942:97543] enableCallbacks() entry 2021-02-10 16:03:09.609 diskimages-helper[5942:97543] enableCallbacks() call backs now enabled 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] DIHelperAttach: performOperation: attaching disk image 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] performAttach: entry 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] [DIHelperOperator frameworkCallbackWithDictionary] calling home { "status-stage" = attach; } 2021-02-10 16:03:09.610 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] entry status proc called: attach Attaching… myStatusProc: returning 0 2021-02-10 16:03:09.610 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] exit 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] [DIHelperOperator frameworkCallbackWithDictionary] exit 0 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] _needToAutoFsck: auto-fsck=NO detected 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] _needToAutoFsck: returning NO 2021-02-10 16:03:09.610 
diskimages-helper[5942:97543] setMountFlags: 0x0 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] performAttach: calling remountReturningDictionary 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] -remountReturningDictionary: 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] _buildReturnDictionary: 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] _dictionaryFromDisk: adding (disk5): { "dev-entry" = "/dev/disk5"; "potentially-mountable" = 0; } 2021-02-10 16:03:09.610 diskimages-helper[5942:97543] -remountReturningDictionary: _buildReturnDictionary returned 0 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] -remountReturningDictionary: kicked off hdiejectd 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] DIHelperAttach: performOperation: post-processing disk image 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] [DIHelperOperator frameworkCallbackWithDictionary] calling home { "status-stage" = "post-process"; } 2021-02-10 16:03:09.611 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] entry status proc called: post-process Finishing… myStatusProc: returning 0 2021-02-10 16:03:09.611 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] exit 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] [DIHelperOperator frameworkCallbackWithDictionary] exit 0 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] postProcessing: devEntry: /dev/disk5 mountPoint: (null) 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] performPostProcessing: returning 0. 
2021-02-10 16:03:09.611 diskimages-helper[5942:97543] [DIHelperOperator frameworkCallbackWithDictionary] calling home { "status-stage" = cleanup; } 2021-02-10 16:03:09.611 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] entry status proc called: cleanup Finishing… myStatusProc: returning 0 2021-02-10 16:03:09.611 hdiutil[5939:97515] [DIHelperProxy frameworkCallbackWithDictionary] exit 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] [DIHelperOperator frameworkCallbackWithDictionary] exit 0 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] -[DIHelperHDIDDA dealloc:] 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] disableCallbacks() entry 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] disableCallbacks() call backs now disabled 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] -[DIHelperHDID dealloc:] 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] DIHelperAttach performOperation: returning 0 2021-02-10 16:03:09.611 diskimages-helper[5942:97543] -decrementBackgroundThreadCount: _backgroundThreadCount is now 0. 2021-02-10 16:03:09.611 diskimages-helper[5942:97540] DIHelper _report_results: reporting { payload = { "system-entities" = ( { "dev-entry" = "/dev/disk5"; "potentially-mountable" = 0; } ); }; "result-code" = 0; } 2021-02-10 16:03:09.612 hdiutil[5939:97515] reportResultsToFramework: proxy has finished operation 2021-02-10 16:03:09.612 hdiutil[5939:97515] reportResultsToFramework: results are: { payload = { "system-entities" = ( { "dev-entry" = "/dev/disk5"; "potentially-mountable" = 0; } ); }; "result-code" = 0; } 2021-02-10 16:03:09.612 hdiutil[5939:97515] reportResultsToFramework: _threadResultsError is 0 2021-02-10 16:03:09.612 hdiutil[5939:97515] reportResultsToFramework: disconnecting from helper. 
2021-02-10 16:03:09.612 hdiutil[5939:97515] [DIHelperProxy disconnectFromHelper] entry 2021-02-10 16:03:09.612 hdiutil[5939:97515] disconnectFromHelper: terminating proxy 2021-02-10 16:03:09.612 hdiutil[5939:97537] [DIHelperProxy threadRunRunLoop] releasing connection 2021-02-10 16:03:09.612 hdiutil[5939:97537] [DIHelperProxy workerThread] after running runloop 2021-02-10 16:03:09.612 hdiutil[5939:97515] disconnectFromHelper: terminated proxy 2021-02-10 16:03:09.612 hdiutil[5939:97537] [DIHelperProxy workerThread] waiting for task to terminate to avoid zombies 2021-02-10 16:03:09.612 hdiutil[5939:97515] [DIHelperProxy disconnectFromHelper] exit 2021-02-10 16:03:09.612 diskimages-helper[5942:97541] DIHelper: terminateHelper: entry. 2021-02-10 16:03:09.612 diskimages-helper[5942:97541] [DIHelper frameworkConnectionDied] entry 2021-02-10 16:03:09.612 diskimages-helper[5942:97541] [DIHelper frameworkConnectionDied] releasing connection 2021-02-10 16:03:09.612 diskimages-helper[5942:97541] -DIHelperAgentMaster terminateUIAgentConnection. 2021-02-10 16:03:09.612 diskimages-helper[5942:97541] [DIHelper frameworkConnectionDied] marking _frameworkDisconnected 2021-02-10 16:03:09.612 diskimages-helper[5942:97541] [DIHelper frameworkConnectionDied] preparing to act like a daemon. 
2021-02-10 16:03:09.612 diskimages-helper[5942:97541] [DIHelper frameworkConnectionDied] exiting 2021-02-10 16:03:09.612 diskimages-helper[5942:97540] -setCanTerminate: main thread can exit 2021-02-10 16:03:09.612 diskimages-helper[5942:97540] helper: child_after_exec returning 0 2021-02-10 16:03:09.613 hdiutil[5939:97537] [DIHelperProxy workerThread] helper exited 2021-02-10 16:03:09.613 hdiutil[5939:97537] [DIHelperProxy workerThread] exiting 2021-02-10 16:03:09.613 hdiutil[5939:97512] [DIHelperProxy performOperationReturning] returning 0 2021-02-10 16:03:09.613 hdiutil[5939:97512] DIHLDiskImageAttach: DIHelperProxy returned 0 2021-02-10 16:03:09.613 hdiutil[5939:97512] [DIHelperProxy dealloc] DIHLDiskImageAttach() returned 0 system-entities: 0: dev-entry: /dev/disk5 potentially-mountable: false /dev/disk5 I ended up figuring out a solution. It took several tries and a lot of time, but in the end I was able to recover my files. TL;DR: Run fsck_hfs as root. I also think I figured out why this problem occurred in the first place: I tried to plug my backup disk directly to another machine. I was originally able to access my backup files, but was no longer able to after I had put my machine to sleep. I think it's the sleep that caused the disk image to get into a bad state. I believe the reason why my previous attempts failed was due to not being root. $ sudo su $ chflags -R nouchg lars-mbp.backupbundle $ hdiutil attach -nomount -noverify -noautofsck lars-mbp.backupbundle # this prompts me for the bundle encryption password Running diskutil list, I'm able to see the device associated with the HFS volume. Output below is truncated from the real output: $ diskutil list /dev/disk5 (external, physical): /dev/disk5 GUID_partition_scheme /dev/disk5s1 EFI /dev/disk5s2 Apple_HFS /dev/disk5s2 is the one we want. 
Then I ran fsck_hfs on the disk: $ fsck_hfs -drfy /dev/disk5s2 journal_replay(/dev/disk5s2) returned 0 ** /dev/rdisk5s2 Using cacheBlockSize=32K cacheTotalBlock=65536 cacheSize=2097152K. Executing fsck_hfs (version hfs-556.60.1). ** Checking Journaled HFS Plus volume. ** Detected a case-sensitive volume. The volume name is Time Machine Backups ** Checking extents overflow file. ** Checking catalog file. ** Rebuilding catalog B-tree. hfs_UNswap_BTNode: invalid node height (1) ** Rechecking volume. ** Checking Journaled HFS Plus volume. ** Detected a case-sensitive volume. The volume name is Time Machine Backups ** Checking extents overflow file. ** Checking catalog file. Incorrect block count for file shutdown_time (It should be 1 instead of 0) Incorrect number of thread records (4, 21684) CheckCatalogBTree: dirCount = 480561, dirThread = 480560 Incorrect number of thread records (4, 21684) CheckCatalogBTree: fileCount = 3833287, fileThread = 3833274 ** Checking multi-linked files. ** Checking catalog hierarchy. Missing thread record (id = 6297728) Invalid directory item count (It should be 53 instead of 80) Incorrect folder count in a directory (id = 5263039) (It should be 1 instead of 12) ** Checking extended attributes file. 
Overlapped extent allocation (id = 6297658, /.Spotlight-V100/Store-V2/3D6A1DA4-ABC0-4E0F-B58E-85DBB4D0546C/live.2.shadowIndexArrays) extentType=0x0, startBlock=0x3550cd5, blockCount=0x96, attrName=(null) extentType=0x0, startBlock=0x3550cd5, blockCount=0x1, attrName=(null) Overlapped extent allocation (id = 6297731) extentType=0x0, startBlock=0x3550cd8, blockCount=0x1, attrName=(null) Overlapped extent allocation (id = 6297732) extentType=0x0, startBlock=0x3550cd9, blockCount=0x1, attrName=(null) Overlapped extent allocation (id = 6297744) extentType=0x0, startBlock=0x3550cd6, blockCount=0x1, attrName=(null) Overlapped extent allocation (id = 6297757) extentType=0x0, startBlock=0x3550cd7, blockCount=0x1, attrName=(null) Overlapped extent allocation (id = 6297758) ** Checking multi-linked directories. privdir_valence=13410, calc_dirlinks=123458, calc_dirinode=13410 ** Checking volume bitmap. Volume bitmap needs minor repair for orphaned blocks Volume bitmap needs repair for under-allocation ** Checking volume information. Invalid volume free block count (It should be 234391157 instead of 236195590) invalid VHB nextCatalogID Volume header needs minor repair (2, 0) Verify Status: VIStat = 0xa800, ABTStat = 0x0000 EBTStat = 0x0000 CBTStat = 0x0800 CatStat = 0x00004020 ** Repairing volume. Look for links to corrupt files in DamagedFiles directory. 
GetCatalogRecord: No matching catalog thread record found Cannot create links to all corrupt files GetCatalogRecord: No matching catalog thread record found GetCatalogRecord: No matching catalog thread record found GetCatalogRecord: No matching catalog thread record found GetCatalogRecord: No matching catalog thread record found FixOrphanedFiles: Created thread record for id=6297756 (err=0) FixOrphanedFiles: Created thread record for id=6297742 (err=0) FixOrphanedFiles: Created thread record for id=6297734 (err=0) FixOrphanedFiles: Created thread record for id=6297741 (err=0) FixOrphanedFiles: Created thread record for id=6297740 (err=0) FixOrphanedFiles: Created thread record for id=6297735 (err=0) FixOrphanedFiles: Created thread record for id=6297732 (err=0) FixOrphanedFiles: Created thread record for id=6297733 (err=0) FixOrphanedFiles: Created thread record for id=6297739 (err=0) FixOrphanedFiles: Created thread record for id=6297738 (err=0) FixOrphanedFiles: Created thread record for id=6297736 (err=0) FixOrphanedFiles: Created thread record for id=6297737 (err=0) FixOrphanedFiles: Created thread record for id=6297757 (err=0) FixOrphanedFiles: Created thread record for id=6297730 (err=0) FixOrphanedFiles: Created thread record for id=6297729 (err=0) FixOrphanedFiles: Created thread record for id=6297731 (err=0) FixOrphanedFiles: Created thread record for id=6297728 (err=0) FixOrphanedFiles: Created thread record for id=6297754 (err=0) FixOrphanedFiles: Created thread record for id=6297746 (err=0) FixOrphanedFiles: Created thread record for id=6297753 (err=0) FixOrphanedFiles: Created thread record for id=6297752 (err=0) FixOrphanedFiles: Created thread record for id=6297747 (err=0) FixOrphanedFiles: Created thread record for id=6297744 (err=0) FixOrphanedFiles: Created thread record for id=6297745 (err=0) FixOrphanedFiles: Created thread record for id=6297751 (err=0) FixOrphanedFiles: Created thread record for id=6297750 (err=0) FixOrphanedFiles: Created 
thread record for id=6297748 (err=0) FixOrphanedFiles: Created thread record for id=6297749 (err=0) FixOrphanedFiles: Created thread record for id=6297758 (err=0) FixOrphanedFiles: Deleted thread record for id=6297642 (err=0) FixOrphanedFiles: Deleted thread record for id=6297649 (err=0) FixOrphanedFiles: Deleted thread record for id=6297650 (err=0) FixOrphanedFiles: Deleted thread record for id=6297675 (err=0) FixOrphanedFiles: Deleted thread record for id=6297676 (err=0) FixOrphanedFiles: Deleted thread record for id=6297677 (err=0) FixOrphanedFiles: Deleted thread record for id=6297678 (err=0) FixOrphanedFiles: Deleted thread record for id=6297679 (err=0) FixOrphanedFiles: Deleted thread record for id=6297681 (err=0) FixOrphanedFiles: Deleted thread record for id=6297683 (err=0) FixOrphanedFiles: Deleted thread record for id=6297684 (err=0) FixOrphanedFiles: Deleted thread record for id=6297685 (err=0) FixOrphanedFiles: Deleted thread record for id=6297690 (err=0) FixOrphanedFiles: Deleted thread record for id=6297691 (err=0) FixOrphanedFiles: Deleted thread record for id=6297692 (err=0) ** Rechecking volume. ** Checking Journaled HFS Plus volume. ** Detected a case-sensitive volume. The volume name is Time Machine Backups ** Checking extents overflow file. ** Checking catalog file. ** Checking multi-linked files. ** Checking catalog hierarchy. Incorrect folder count in a directory (id = 2) (It should be 8 instead of 7) ** Checking extended attributes file. ** Checking multi-linked directories. privdir_valence=13410, calc_dirlinks=123458, calc_dirinode=13410 ** Checking volume bitmap. ** Checking volume information. ** Repairing volume. ** Rechecking volume. ** Checking Journaled HFS Plus volume. ** Detected a case-sensitive volume. The volume name is Time Machine Backups ** Checking extents overflow file. ** Checking catalog file. ** Checking multi-linked files. ** Checking catalog hierarchy. ** Checking extended attributes file. 
** Checking multi-linked directories. privdir_valence=13410, calc_dirlinks=123458, calc_dirinode=13410 ** Checking volume bitmap. ** Checking volume information. ** Trimming unused blocks. ** The volume Time Machine Backups was repaired successfully. CheckHFS returned 0, fsmodified = 1 This ran for a long time, but afterward I was able to recover all files. Note: I actually had two different computers' backups on this backup disk, and both were corrupted due to this issue. Fixing the other required running fsck_hfs multiple times. Your results may vary. Some sources which helped me along the way: https://swissmacuser.ch/hfs-volume-data-recovery-diskutility-could-not-mount-error-49153/ Repair Time Machine sparsebundle that will no longer mount
How do I print the string which __FILE__ expands to correctly? Consider this program: #include <stdio.h> int main() { printf("%s\n", __FILE__); return 0; } Depending on the name of the file, this program works - or not. The issue I'm facing is that I'd like to print the name of the current file in an encoding-safe way. However, in case the file has funny characters which cannot be represented in the current code page, the compiler yields a warning (rightfully so): ?????????.c(3) : warning C4566: character represented by universal-character-name '\u043F' cannot be represented in the current code page (1252) How do I tackle this? I'd like to store the string given by __FILE__ in e.g. UTF-16 so that I can properly print it on any other system at runtime (by converting the stored UTF-16 representation to whatever the runtime system uses). To do so, I need to know: What encoding is used for the string given by __FILE__? It seems that, at least on Windows, the current system code page (in my case, Windows-1252) is used - but this is just guessing. Is this true? How can I store the UTF-8 (or UTF-16) representation of that string in my source code at build time? My real-life use case: I have a macro which traces the current program execution, writing the current source-file/line-number information to a file. It looks like this: struct LogFile { // Write message to file. The file should contain the UTF-8 encoded data! void writeMessage( const std::string &msg ); }; // Global function which returns a pointer to the 'active' log file. LogFile *activeLogFile(); #define TRACE_BEACON activeLogFile()->writeMessage( __FILE__ ); This breaks in case the current source file has a name which contains characters which cannot be represented by the current code page. @Roddy: I'm using MSVC9, but I'm also interested in a solution for g++ 4.x This is also totally busted in MSVC 2015. Why can't Microsoft just make a compiler that doesn't screw up? 
You can use the token pasting operator, like this: #define WIDEN2(x) L ## x #define WIDEN(x) WIDEN2(x) #define WFILE WIDEN(__FILE__) int main() { wprintf("%s\n", WFILE); return 0; } This looks very interesting! However, it triggers a follow-up question: what encoding does the wide-character string use? UTF-16? Or is it a plain, unencoded, UCS-2 string? Right now it seems to me that this merely 'delays' the issue. However, it's much better than my current code so +1 from me. Unfortunately, it doesn't seem to work as expected: it just prints '???????' in case the file has a Russian name. This is the same as what I see when listing the file with 'dir'. Maybe __FILE__ is really tied to the filesystem encoding, but it doesn't honour whatever field Windows Explorer uses to show the Russian characters? Works on my machine. Are you using a console mode program? Did you switch the console to a Cyrillic code page with a font that supports the glyphs? SetConsoleCP(1251) for example with, say, the Consolas font. The default console encoding is OEM; it doesn't have the glyphs. I'm using a console program (no /SUBSYSTEM:WINDOWS passed to the linker) but I'm actually printing the string via OutputDebugStringW. It really seems to be a font issue; printing the individual bytes of the string yields e.g. 0x043f 0x0440 0x043e which is certainly not the Unicode code point for '?'. Accepting this answer, thanks a lot! Hans, I'd just like to remark that defining __WFILE__ is technically undefined behavior, since it is a reserved symbol (it begins with two underscores). This could easily be solved by defining the macro with any other name (perhaps simply WFILE). Hm, visiting this answer after quite a while, it doesn't work at all for me. E.g. the format should be L"%ls\n" for wprintf: it expects a wide character string as a format, and since the return of the macro is a wide character string, too, there must be %ls and not %s. But even if I do so, I am only getting a ? for a special character. 
wprintf() is the wide version of printf(); it already expects a wide string as an argument for %s. Sounds to me you ought to click the Ask Question button and actually document what that "special character" is. Just for completeness here's the full reference to predefined macros in VS. __FILE__ will always expand to a character string literal, thus in essence it will be compatible with char const*. This means that a compiler implementation has not much choice other than to use the raw byte representation of the source file name as it presents itself at compile time. Whether this is something sensible in the current locale doesn't matter; you could have a source file name that contains basically garbage, as long as your runtime system and compiler accept it as a valid file name. If you, as a user, have a locale with a different encoding than is used in your file system, you will see a lot of ???? or the like. But if both your locales agree upon the encoding, a plain printf should suffice and your terminal (or whatever you use to look at the output) should be able to print the characters correctly. So the short answer is, it will only work if your system is consistent w.r.t. encoding. Otherwise you're out of luck, since guessing encodings is quite a difficult task. The best solution is to use source filenames in the portable filename character set [A-Za-z0-9._-]. Since Windows does not support UTF-8, there's no way for arbitrary non-ASCII characters to be represented in ordinary strings without dependence on your configured local language. gcc probably does not care; it treats all filenames as 8-bit strings, and so if the filename is accessible to gcc, its name will be representable. (I know Cygwin provides a UTF-8 environment by default, and modern *nix will normally be UTF-8.) For MSVC, you might be able to use the preprocessor to prepend L to the expansion of __FILE__ and use %ls to format it. 
As for the encoding, I'm going to guess it's what's used by the filesystem, probably Unicode. As for dealing with it, how about changing your code to something like: #define TRACE_BEACON activeLogFile()->writeMessage( FixThisString(__FILE__ )); std::string FixThisString(wchar_t* bad_string) { .....} (Implementation of FixThisString is left as an exercise for the student.) __FILE__ is a char string, not a wchar_t string. You'll need to use the preprocessor to prefix L to it if you want to do this. And then you can use the right printf-family function to print it. @R: The error he is getting is that the string he is printing contains a '\u043F', which would be a 16-bit, Unicode wchar_t. In MSVC, you can turn on Unicode and get UTF-16 encoded strings. It's in the project properties somewhere. In addition, you should just use wcout/cout, not printf/wprintf. Windows needed Unicode before Unicode existed, so they had a custom multi-byte character encoding, which is the default. However, Windows does support UTF-16; it's used by C#, for example. #include <iostream> int main() { std::wcout << __WFILE__; }
OnCurrentIndexChange handler triggering before I select a row on the table in Wix I'm trying to get the row data from a table which is connected to a dataset. So as per this article, since I need the whole data of the selected row, I'm using the OnCurrentIndexChange event handler to get the selected row's data from the dataset. But for some strange reason, the first row item will always be triggered on the load of the table/page. Basically, the first row's data will be selected on the load of the page/table. Is this a bug or am I doing something wrong? Any help will be greatly appreciated. Thanks, Jilu I couldn't get it to work with the OnCurrentIndexChange handler, but I found a workaround that fixes this bug/issue for my application. Solution: Instead of using the OnCurrentIndexChange handler of the dataset, I used the OnSelectRow event handler to get the row index of the table and used that index to get the whole row data from the dataset. You can find more on this getItems method here. Below is the code for your reference: export function tblCustomerList_rowSelect(event) { $w("#dsCustomer").getItems(event.rowIndex, 1) .then(result => { let selectedRow = result.items[0]; console.log(selectedRow); if (selectedRow !== null) { let returnObj = { "name": selectedRow.name, "id": selectedRow._id } wixWindow.lightbox.close(returnObj); } }) } Hope this will be helpful for people who are stuck with this kind of issue in Wix. Thanks, Jilu
What is the equivalent /proc/process_pid/environ file in AIX? On Linux I usually view this file using the strings -a command. Does AIX have something like that? I didn't find anything in /proc/pid_process I use this file to know what environment variables some process is seeing. For example, I have an Oracle Database installed on the server. If I'd like to know what environment variables the pmon process is seeing, I can find the process: [root@oracle-database 1664]# ps aux|grep pmon|grep -v grep oracle 8897 0.0 0.5 1133456 5312 ? Ss Nov27 0:18 ora_pmon_idbcloud [root@oracle-database 1664]# And look at the file /proc/process_pid/environ [root@oracle-database 1664]# strings -a /proc/8897/environ XDG_SESSION_ID=4689 HOSTNAME=oracle-database SHELL=/bin/bash TERM=xterm HISTSIZE=1000 USER=oracle ORACLE_SID=idbcloud ORACLE_BASE=/u01/app/oracle MAIL=/var/spool/mail/oracle PATH= PWD=/u01 LANG=en_US.UTF-8 HISTCONTROL=ignoredups SHLVL=1 HOME=/home/oracle LOGNAME=oracle LESSOPEN=||/usr/bin/lesspipe.sh %s ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1 NLS_DATE_FORMAT=DD/MM/YYYY HH24:MI:SS _=/bin/rlwrap OLDPWD=/u01 ORA_NET2_DESC=9,12 SKGP_SPAWN_DIAG_POST_FORK_TS=1606503159 SKGP_HIDDEN_ARGS=<FATAL/S/PMON/x0/x1/x0/x5AF86E15/8888/8888/x0/x2/x1/x5AF86E38/1606503159/1606503159/196609/0/(nil)> SKGP_SPAWN_DIAG_PRE_FORK_TS=1606503159 SKGP_SPAWN_DIAG_PRE_EXEC_TS=1606503159 ORACLE_SPAWNED_PROCESS=1 RDMAV_FORK_SAFE=1 RDMAV_HUGEPAGES_SAFE=1 As Thomas said, there is no such file in an AIX system, but the ps command does let you "know what environment variables some process is seeing" e Displays the environment as well as the parameters to the command, up to a limit of 80 characters. ew Wraps the display from the e flag one extra line. eww Wraps the display from the e flag and displays the ENV list until the flag reaches the LINE_MAX value. ewww Wraps the display from the e flag and displays the ENV list until the flag reaches the INT_MAX value. 
For example: $ ps ewww 1835516 PID TTY STAT TIME COMMAND 1835516 - A 7:02 /usr/sbin/syncd 60 _=/usr/bin/nohup LANG=C PATH=/usr/sbin:/etc:/usr/bin LC__FASTMSG=true ODMDIR=/etc/objrepos HOME=/ PWD=/ CFGLOG=default NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat LIBPATH=/usr/lib:/lib AIX does not provide the same style of pseudo-filesystem for /proc as Linux does. Referring to the manual pages: /proc file (AIX) proc - process information pseudo-filesystem (Linux) the latter documents a pseudo-file dedicated to environment: /proc/[pid]/environ This file contains the initial environment that was set when the currently executing program was started via execve(2). while the former describes a file containing process information psinfo Contains information about the process needed by the ps command. If the process contains more than one thread, a representative thread is used to derive the lwpsinfo information. The file is formatted as a struct psinfo type and contains the following members: In particular: prptr64_t pr_envp; /* address of initial environment vector in user process */ While it is possible to write a script which reads that data structure, you will not do it with just grep or strings: that address would be used in accessing the memory image file: as Contains the address space image of the process. The as file can be opened for both reading and writing. The lseek subroutine is used to position the file at the virtual address of interest. Afterwards, you can view and modify the address space with the read and write subroutines, respectively.
Permissions for List or Library Columns Is there any way to give permission to a specific field of a list or library? For example, I want one of my users to be unable to add or edit data in a specific column. Can I set permissions on the form and its fields? There exist third-party solutions though, such as SharePointBoost PermissionBoost. I'm not sure about promoting commercial products on this page so I'm adding this as a comment this time :) Hi, you can do that in InfoPath, not directly in a site or in Designer; in InfoPath just try to manipulate rules, it's pretty easy ;) Good luck Can you help me on this? How can I assign permissions to my InfoPath forms? A workaround for this is to create a lookup field that points to a list that only some users have access to. It works, but it is kind of a hack... No, it is impossible. You can do it through customization of SharePoint forms. It is not possible at column level. You can assign permission to a particular item. Should this field be editable to SPGroupA and not editable to SPGroupB? You can use the JavaScript client-side object model to find the SP group that the current user belongs to: https://stackoverflow.com/questions/22122139/check-if-current-users-belongs-to-sp-group-using-javascript-client-side-object-m And then make the field read-only: https://stackoverflow.com/questions/1362153/how-can-i-use-jquery-to-make-an-input-readonly You cannot do that with SharePoint's native features. If you want, you can find features on CodePlex, like this: http://sptoolbasket.codeplex.com/wikipage?title=SharePoint%20-%20List%20Columns%20Manager this one too: https://sppex.codeplex.com/
Centralised login system to redirect I have a web app which I need to duplicate for multiple clients. Each client will have his own server space and a database. Each database will have one user table per client, and the clients' data is independent from each other. Clients access their site via sub-domain URLs. One client's URL may look like this: www.abcgroup.example.com (client: abcgroup) I want to have one centralized login page at www.example.com/login. Once a client logs in, he will be redirected to abcgroup.example.com automatically, without having to type his own sub-domain URL. E.g. this site managed to do it: https://tictail.com/ The problem is that I have multiple databases with multiple user tables. So how can I authenticate users globally from one login page? I'm not sure whether I have to check against all the user tables in client databases. Do I need to read from all the user tables? (I'm using CodeIgniter 2) Thanks! You should have at least the user table in one single database so that you can handle login and then redirect. Please check the update. Thanks. 2 ways to do this: Have a central DB that has all the users. This makes sure all the users are unique. Having multiple DBs, you may run into duplicate users. Making the user select or type in the subdomain in the login page. Some web apps do this. Thanks for the comment komirad. The reason I have multiple databases is that I need to isolate data from one another. I can't have a one-database-for-all-clients architecture as it's too complicated for us to manage. Is there any workaround with this implementation, so I can redirect from one login page? You can either clone the users to a single DB. Moving forward, uniqueness validation should be queried against this DB. Or you can have separate "product owner" accounts which they use to manage their "subscription" with you. (That's how Atlassian/Jira handles it.) I don't think there's a scalable solution to query multiple databases.
How to gracefully disable the interrupt line without a kernel crash? I have implemented a program that reads from the keyboard, grabs the scan code, and puts it into the tasklet. The tasklet unblocks the read(). Thus, my QT-application can read the data and if it finds the scan code of l, it fires a callback to Qt-webkit. However, when I do rmmod of my character driver, the entire kernel crashes. What is the problem in my character driver? #include <linux/init.h> #include <linux/module.h> /** needed by all modules **/ #include <linux/kernel.h> /** This is for KERN_ALERT **/ #include <linux/fs.h> /** for file operations **/ #include <linux/cdev.h> /** character device **/ #include <linux/device.h> /** for sys device registration in /dev/ and /sys/class **/ /** for copy_to_user **/ #include <asm/uaccess.h> /** kernel thread */ #include <linux/kthread.h> /** spin lock essential here **/ #include <linux/spinlock.h> #include <linux/interrupt.h> #include <asm/io.h> /** For class registration to work, you need GPL license **/ MODULE_LICENSE("GPL"); #define NUMBER_OF_MINOR_DEVICE (0) /** to avoid namespace pollution, make everything static **/ static struct cdev basicCdev; static struct class *basicDriverClass; /** Created thread **/ static struct task_struct *basicCharThread = NULL; static wait_queue_head_t chrDriverQueue; static spinlock_t mLock; static int read_unblock_flag = 0; static int basicMajorNumber = 0; /** Just for debugging, whether bottom half (tasklet) is executed or not **/ static char my_tasklet_data[]="my_tasklet_function was called"; /** scan code that will be return by read **/ static unsigned char scancode; static unsigned char status; static void my_tasklet_function( unsigned long data ); DECLARE_TASKLET( my_tasklet, my_tasklet_function,(unsigned long) &my_tasklet_data ); /** Prototype for read, this will be invoked when the read function is done on to the driver **/ /** The declaration type is file operations based function pointer - read **/ static 
ssize_t basicRead(struct file *filp, char *buffer, size_t length,loff_t *offset); static int basicOspen(struct inode *inode, struct file *file); /** File Operations function pointer table **/ /** There are plenty of file operations **/ static struct file_operations fops = { .read = basicRead, .write = NULL, .open = basicOspen, .release = NULL }; static ssize_t basicRead(struct file *filp, char *buffer, size_t length, loff_t *offset) { unsigned long flags; char msg[] = "Hello SJ_read\0"; /** lock it **/ spin_lock_irqsave( &mLock, flags ); printk(KERN_ALERT "The Read operation called\r\n"); spin_unlock_irqrestore( &mLock, flags ); wait_event_interruptible(chrDriverQueue, read_unblock_flag != 0); /** set it back to original for sleep **/ read_unblock_flag = 0; printk(KERN_ALERT "Read out of sleep\r\n"); if( signal_pending( current ) ) { printk(KERN_ALERT "Signal pending error\r\n"); return -1; } spin_lock_irqsave( &mLock, flags ); if ( 0 != copy_to_user( buffer, &scancode, sizeof(scancode) )) { printk(KERN_ALERT "Copy_to_user failed !!\r\n"); } spin_unlock_irqrestore( &mLock, flags ); return sizeof(msg); } static int basicOspen(struct inode *inode, struct file *file) { printk("Kernel.Basic Driver Opened now!!\r\n"); return 0; } static void setup_cdev(struct cdev *dev, int minor, struct file_operations *fops) { int err = -1; /** MKDEV call creates a device number i.e. 
combination of major and minor number **/ int devno = MKDEV(basicMajorNumber, minor); /** Initiliaze character dev with fops **/ cdev_init(dev, fops); /**owner and operations initialized **/ dev->owner = THIS_MODULE; dev->ops = fops; /** add the character device to the system**/ /** Here 1 means only 1 minor number, you can give 2 for 2 minor device, the last param is the count of minor number enrolled **/ err = cdev_add (dev, devno, 1); if (err) { printk (KERN_NOTICE "Couldn't add cdev"); } } /** Un-used Kernel Thread here **/ static int basicCharThread_fn (void *data) { unsigned long j0,j1; int delay = 30*HZ; unsigned long flags; j0 = jiffies; j1 = j0 + delay; /** this loop will run till the jiffies is equal to the (old jiffies + 60Hz delay) , this is a 1 minute delay in total **/ while (time_before(jiffies, j1)) { //printk("The thread is started\r\n"); schedule(); } spin_lock_irqsave( &mLock, flags ); read_unblock_flag = 1; spin_unlock_irqrestore( &mLock, flags ); wake_up_interruptible( &chrDriverQueue ); printk("The process is moved out of wait queue now\r\n"); return 0; } /* Bottom Half Function - Tasklet */ void my_tasklet_function( unsigned long data ) { unsigned long flags; printk( "%s\n", (char *)data ); /** Scan Code 1c Pressed **/ printk(KERN_INFO "Scan Code %x %s.\n", scancode& 0x7F, scancode & 0x80 ? "Released" : "Pressed"); spin_lock_irqsave( &mLock, flags ); read_unblock_flag = 1; spin_unlock_irqrestore( &mLock, flags ); wake_up_interruptible( &chrDriverQueue ); return; } /*** Interrupt Handler ***/ irqreturn_t irq_handler(int irq, void *dev_id, struct pt_regs *regs) { /* * Read keyboard status */ status = inb(0x64); scancode = inb(0x60); /** BH scheduled **/ tasklet_schedule( &my_tasklet ); return IRQ_HANDLED; } /** character Driver load - Main Entry Function - Module_Init() - called on insmod **/ static int chrDriverInit(void) { int result = -1; dev_t dev; /** Initialize the kernel thread **/ char kthreadBasic[]="kthreadBasic"; printk("Welcome!! 
Device Init now.."); /** int alloc_chrdev_region(dev_t *dev, unsigned int firstminor,unsigned int count, char *name); **/ /** dev -> The dev_t variable type,which will get the major number that the kernel allocates. **/ /**The same name will appear in /proc/devices. **/ /** it is registering the character device **/ /** a major number will be dynamically allocated here **/ /** alloc_chrdev_region(&dev_num, FIRST_MINOR, COUNT, DEVICE_NAME); **/ result = alloc_chrdev_region(&dev, 0, NUMBER_OF_MINOR_DEVICE, "pSeudoDrv"); if( result < 0 ) { printk("Error in allocating device"); return -1; } /** From these two if's we are avoiding the manual mknod command to create the /dev/<driver> **/ /** creating class, and then device created removes the dependency of calling mknod **/ /** A good method - the mknod way is depreciated **/ /** mknod way is - mknod /dev/<driver_name> c <majorNumber> <minorNumber> */ /** add the driver to /-sys/class/chardrv */ if ((basicDriverClass = class_create(THIS_MODULE, "chardrv")) == NULL) //$ls /sys/class { unregister_chrdev_region(dev, 1); return -1; } /** add the driver to /dev/pSeudoDrv -- here **/ if (device_create(basicDriverClass, NULL, dev, NULL, "pSeudoDrv") == NULL) //$ls /dev/ { class_destroy(basicDriverClass); unregister_chrdev_region(dev, 1); return -1; } /** let's see what major number was assigned by the Kernel **/ basicMajorNumber = MAJOR(dev); printk("Kernel assigned major number is %d ..\r\n",basicMajorNumber ); /** Now setup the cdev **/ setup_cdev(&basicCdev,NUMBER_OF_MINOR_DEVICE, &fops); /** Linux thread is just another process, and is treated like that only **/ /** string that stores the name - kthreadBasic**/ /** struct task_struct *kthread_create(int (*function)(void *data),void *data const char name[], ...)**/ basicCharThread = kthread_create(basicCharThread_fn,(void *)NULL,kthreadBasic); /** To run the process, we need to call wake_up_process by passing the thread-id, obtained by kthread_create **/ /** To stop the 
thread, call kthread_stop pass the thread-id **/ if( basicCharThread == NULL) { printk( KERN_ALERT "Thread creation failed\r\n"); return -1; } /** inialize spin lock **/ spin_lock_init( &mLock); /** Initialize wait queue **/ init_waitqueue_head(&chrDriverQueue); /** Start executing the thread now **/ /** Disabled thread here- not in use **/ //wake_up_process(basicCharThread); /** Free IRQ - if already registered **/ free_irq(1, NULL); /** register the keyboard IRQ on shared mode,note this is specific to X86 architecture **/ return request_irq(1, /* The number of the keyboard IRQ on PCs */ (irqreturn_t *)irq_handler, /* our handler */ IRQF_SHARED, "test_keyboard_irq_handler", (void *)(irq_handler)); return 0; } /** Driver Exit - Module_Exit() .. called on rmmod **/ static void chrDriverExit(void) { //free_irq(1, NULL); /** stop the running thread **/ tasklet_kill( &my_tasklet ); //kthread_stop(basicCharThread); /** A reverse - destroy mechansim -- the way it was created **/ printk("Releasing Simple Devs -- %s\r\n", __FUNCTION__); /** delete the character driver added **/ cdev_del(&basicCdev); /** destroy the device created **/ device_destroy(basicDriverClass, MKDEV(basicMajorNumber, 0)); /** destroy the class created **/ class_destroy(basicDriverClass); /** unregister the chr dev **/ unregister_chrdev(basicMajorNumber, NUMBER_OF_MINOR_DEVICE); } /** Module Entry and Exit ***/ module_init(chrDriverInit); module_exit(chrDriverExit); Can't you limit the lines of code and zoom in on the problem? The code is static void chrDriverExit() as this will be called on rmmmod. What's the crash message? Perhaps my_tasklet is in the run Q while you terminate it. you can set up some printk debug msg in the exit routine. I disabled tasklet kill,still it crashes. The entire system goes down. As you know that kernel is monolithic. I can't capture the exact logs. But the crash is for the free irq. If I disable that, then also there is a crash. 
You can set up kdb so that once the kernel crashes, the kdb console will be prompted and you can see the details. Keep running cat /proc/kmsg in another window if possible (you need to be root for that), so that when rmmod is called you can see what the last calls were simultaneously.
common-pile/stackexchange_filtered
How can I solve the expected number of frog jumps problem? A frog sits on the real number line at 0. It makes repeated jumps of random distance forward. For every jump, the frog advances by a random amount, drawn (independently) from the uniform distribution $U([0, 1])$. The frog stops once it has reached or surpassed 1. How many jumps does the frog make on average? What’s the standard deviation? Here is my answer: Let $N$ be a random variable with possible values 2, 3, 4, ... which represents the number of jumps the frog makes immediately after it has reached or surpassed 1. We will neglect the possibility of only one jump being required. Let $X_1$, $X_2$, ... $X_n$ be a set of random variates taken from $U([0,1])$. Let $$S_n = \sum_{i=1}^n X_i.$$ For $n\ge2$, the probability that $N=n$ is given by $$ \begin{aligned} P(N=n) &= P[(S_n \ge 1) \cap (S_{n-1}<1)] \\ &= P(S_n \ge 1)P(S_{n-1}<1). \end{aligned} $$ From the CDF of the Irwin-Hall distribution we know that $$P(S_n\le x)=\sum_{k=0}^n\frac{(-1)^k}{k!(n-k)!}(x-k)^n_+.$$ Hence, $$P(S_n\le 1)=\frac{1}{n!}.$$ Similarly, $$P(S_{n-1}\le 1)=\frac{1}{(n-1)!},$$ $$P(S_n > 1)=1 - \frac{1}{n!},$$ $$\implies P(N=n)=\frac{1}{(n-1)!} - \frac{1}{n!(n-1)!}.$$ Hence the expected value of $N$ (i.e. the average number of jumps) is given by (see WolframAlpha) $$\begin{aligned} E(N)&=\sum_{n=2}^\infty nP(N=n) \\ &=\sum_{n=2}^\infty \frac{n}{(n-1)!}\left(1-\frac{1}{n!}\right) \\ &= 2e - I_0(2) \\ &\approx 3.1570. \end{aligned}$$ Let $\mu = E(N)$. Now we need to calculate $$E[(N-\mu)^2] = E(N^2) - \mu^2.$$ The first term is given by (see WolframAlpha) $$\begin{aligned} E(N^2) &= \sum_{n=2}^\infty n^2P(N=n) \\ &=\sum_{n=2}^\infty \frac{n^2}{(n-1)!}\left(1-\frac{1}{n!}\right) \\ &=5e-I_0(2) - I_1(2)\\ &\approx 9.7212. \end{aligned}$$ Hence the standard deviation $\sigma$ is approximately (see WolframAlpha) $$ \begin{aligned} \sigma \approx 0.2185.
\end{aligned} $$ However, when I check these results using the code below, it seems that $$\mu\approx 2.7222 \approx e?,$$ $$\sigma \approx 0.8752.$$ Can you see where I went wrong? import numpy as np num_trials = int(1e5) N = np.zeros(num_trials) for n in range(num_trials): X = 0 while X < 1: N[n] += 1 X += np.random.uniform() print(np.mean(N)) print(np.std(N)) \begin{aligned} P(N=n) &= P[(S_n \ge 1) \cap (S_{n-1}<1)] \ &= P(S_n \ge 1)P(S_{n-1}<1). \end{aligned} What justifies this? is there some reason to think the events are independent? (I don't think they are) That's a good point. I don't have a justification. Do you know how to calculate $$P[(S_n\ge 1) \cap (S_{n-1} < 1)]?$$ We can use $$P[(S_n\ge 1) \cap (S_{n-1} < 1)] = P[S_n\ge 1|S_{n-1} < 1]P(S_{n-1} < 1),$$ but I'm not sure how to proceed from here. Do you agree that $$P(N=n) = P[(S_n\ge 1) \cap (S_{n-1} < 1)]?$$ I do agree. I recommend integrating $$ P(n=N)=\int_0^1 x P(S_{n-1} < x)dx$$ We can calculate that using: $$P(S_n<x)=\frac{x}{n!}$$ for $x\le1.$ Hence, $$\int_0^1xP(S_{n-1}<x)dx=\frac{1/2}{(n-1)!}.$$ But I don't see why $$P(n=N) = \int_0^1xP(S_{n-1}<x)dx,$$ can you explain why that is? My reasoning was that if you get to $x$ in $n-1$, the probability that you will get a total greater than 1 after jump $n$ is just $x$ (the distance must be between $1-x$ and $1$ I realize that to do this correctly, you need to use the pdf instead of the cdf , in this case, the pdf for $n-1$ jumps is.. $$\frac {x^{n-2}}{(n-2)!}$$ As noted in the comments, you have incorrectly assumed the events $S_n\geq 1$ and $S_{n-1}<1$ are independent. Note that the events $N=n$ and $S_n<1$ are mutually exclusive and collectively exhaust the event $S_{n-1}\leq 1$. 
Also note that the Irwin-Hall distribution tells us $P(S_n\leq 1)=1/n!$. Thus, $$\begin{align} P(N=n)&=P(S_{n-1}\leq 1)-P(S_n<1)\\ &={1 \over (n-1)!}-{1 \over n!}\\ &= {n-1 \over n!} \end{align}.$$ You can check that this should give you $E[N]=e,\text{Var}(N)=3e-e^2.$ You can write $$\mathbb{E}N = \sum_{n=0}^{\infty}nP(N=n) = \sum_{n=0}^{\infty}P(N>n)$$ $$=\sum_{n=0}^{\infty}P(S_n<1) $$ $$=\sum_{n=0}^{\infty}\frac{1}{n!} $$ Edit regarding the first line: The intuitive way is to think about how many times it gets counted if $N=3$, say. In the LHS sum this gets counted for $3$ once, at $n=3$. In the RHS it gets counted for $1$ three times, at $n=0,1,2$. Thank you very much. Do you mind explaining how you know that $$\sum_{n=0}^\infty nP(N=n)=\sum_{n=0}^\infty P(N>n)?$$ @Peanutlex The intuitive way is to think about how many times it gets counted if $N=3$, say. In the LHS sum this gets counted for $3$ once, at $n=3$. In the RHS it gets counted for $1$ three times, at $n=0,1,2$. Thank you for the responses. They were very helpful. Here is my answer that I find more intuitive. $$\begin{aligned} P(N\ge n+1) &= P(S_n < 1) \\ &= P(S_n \le 1) \text{ (since the probability $S_n=1$ is effectively 0.)} \\ &= \frac{1}{n!} \end{aligned}$$ Since $P(N\ge n)$ is a decreasing function of $n$, we know that $$\begin{aligned} P(N=n)&=P(N\ge n)-P(N\ge n+1) \\ &= \frac{1}{(n-1)!}-\frac{1}{n!} \\ &= \frac{n-1}{n!}. \end{aligned}$$ Hence, $$\begin{aligned} E(N)&=\sum_{n=2}^\infty \frac{1}{(n-2)!} \\ &=\sum_{n=0}^\infty \frac{1}{n!} \\ &=e. \end{aligned}$$ $$\begin{aligned} E(N^2)&=\sum_{n=2}^\infty \frac{n}{(n-2)!} \\ &=\sum_{n=0}^\infty \frac{n}{n!}+ \sum_{n=0}^\infty\frac{2}{n!}\\ &=\sum_{n=1}^\infty \frac{1}{(n-1)!}+ 2e\\ &=3e. \end{aligned}$$ Finally, $$\mu=e=2.71828...$$ $$\sigma=\sqrt{3e-e^2}=0.87509...\,.$$ Which agrees with Golden_Ratio's answer.
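The corrected pmf $P(N=n) = (n-1)/n!$ can be sanity-checked numerically. The sketch below (an illustration, not from the thread) evaluates the exact series and reruns a seeded version of the question's Monte Carlo loop without numpy:

```python
import math
import random

def frog_pmf(n):
    """P(N = n) = (n - 1) / n!, valid for n >= 2."""
    return (n - 1) / math.factorial(n)

# Exact moments; the series converges factorially fast, so 50 terms is plenty.
mean = sum(n * frog_pmf(n) for n in range(2, 50))
second = sum(n * n * frog_pmf(n) for n in range(2, 50))
variance = second - mean ** 2

# Seeded rerun of the question's Monte Carlo loop.
rng = random.Random(0)

def one_trial():
    total, jumps = 0.0, 0
    while total < 1.0:
        total += rng.random()
        jumps += 1
    return jumps

samples = [one_trial() for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
```

The exact values come out as mean $= e \approx 2.71828$ and variance $= 3e - e^2 \approx 0.7658$ (standard deviation $\approx 0.8751$), matching the simulation in the question.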
Divide dataframe column by a specific cell I want to divide a dataframe column by a specific cell in the same dataframe. I have a dataframe like this: date type score 20201101 experiment1 30 20201101 experiment2 20 20201101 baseline 10 20201102 experiment1 60 20201102 experiment2 50 20201102 baseline 10 I want to compute the score_ratio by dividing the score by the 'baseline' score of that date. date type score score_ratio 20201101 experiment1 30 3 20201101 experiment2 20 2 20201101 baseline 10 1 20201102 experiment1 60 6 20201102 experiment2 50 5 20201102 baseline 10 1 The score_ratio for (date, type) = (20201101, experiment1) should be obtained by dividing its score by the score of (20201101, baseline). In this case, it should be 30 / 10 = 3. Similarly. for (20201101, experiment2), we should divide the score by the same thing, (20201101, baseline). For a different date, say (20201102, experiment1), it should be divided by the baseline of that date, (20201102, baseline). How do I add this column with dataframe operations? So far, I have this but am unsure of what expression I should be dividing by: df['score_ratio'] = df['score'].div(...) 
Edit: I get the error for the last line ValueError: Length of values does not match length of index ID date type room score 0 id1 20201120 baseline 1 450.25 0 id2 20201120 experiment1 1 -3637.24 0 id3 20201121 baseline 1 200.00 1 id4 20201121 experiment1 1 300.00 2 id5 20201120 baseline 2 600.00 3 id6 20201120 experiment1 2 800.00 _df = df.set_index('date', 'room') d = _df.query('type=="baseline"') print(_df['score'].div(d['score']).values) df['score_ratio'] = _df['score'].div(d['score']).values #Mask all whose type is baseline into a new datframe and merge to the main df g=pd.merge(df, df[df.type.eq('baseline')].drop(columns='type'),how='left', on='date', suffixes=('', '_right')) #Calculate the score_ratio and drop the extra column acquired during merge df=g.assign(score_ratio=g.score.div(g.score_right).astype(int)).drop(columns=['score_right']) print(df) date type score score_ratio 0 20201101 experiment1 30 3 1 20201101 experiment2 20 2 2 20201101 baseline 10 1 3 20201102 experiment1 60 6 4 20201102 experiment2 50 5 5 20201102 baseline 10 1 How it works #New dataframe with baselines only df1=df[df.type.eq('baseline')].drop(columns='type') #Modified original dataframe with baselines added g=pd.merge(df, df1,how='left', on='date', suffixes=('', '_right')) #new column called score_ratio g=g.assign(score_ratio=g.score.div(g.score_right).astype(int)) #drop column called score_right which was acquired during merge g=g.drop(columns=['score_right']) Can you explain? 
Looks cool, but I want to understand it too. @adir abargil see my edits on how it works for a step-by-step approach. I'm not the question owner, but of course I upvoted you. You set the date column as the index, then filter out the rows where type is baseline, then use Series.div: _df = df.set_index('date') d = _df.query('type=="baseline"') # same as _df.loc[_df['type'].eq('baseline')] df['score_ratio'] = _df['score'].div(d['score']).values df date type score score_ratio 0 20201101 experiment1 30 3.0 1 20201101 experiment2 20 2.0 2 20201101 baseline 10 1.0 3 20201102 experiment1 60 6.0 4 20201102 experiment2 50 5.0 5 20201102 baseline 10 1.0 Hi, I added an edit to my question showing the error when I tried doing this. The dataframe shown in the edit is more representative of my actual data. Can you take a look and see what I'm doing wrong? df.set_index('date', 'room') change this to df.set_index('date') @1step1leap in just 3 simple lines of code, commented: # set index to ['date', 'type'] df.set_index(['date', 'type'], inplace=True) # helper: values of score at index 'baseline' s = df.xs('baseline', level=1) # divide df by series and reset index df.div(s, level=0).reset_index() date type score 0 20201101 experiment1 3.0 1 20201101 experiment2 2.0 2 20201101 baseline 1.0 3 20201102 experiment1 6.0 4 20201102 experiment2 5.0 5 20201102 baseline 1.0
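A further idiom worth noting (a sketch using the question's sample data, not taken from the answers above): build a date-to-baseline-score Series once and map it onto the date column. Because `map` aligns on values rather than index labels, it sidesteps the length-mismatch error from the edit:

```python
import pandas as pd

df = pd.DataFrame({
    'date':  [20201101, 20201101, 20201101, 20201102, 20201102, 20201102],
    'type':  ['experiment1', 'experiment2', 'baseline'] * 2,
    'score': [30, 20, 10, 60, 50, 10],
})

# One baseline score per date, indexed by the date value.
baseline = df.loc[df['type'] == 'baseline'].set_index('date')['score']

# map() looks each row's date up in `baseline`, so row order and
# index labels in df are irrelevant.
df['score_ratio'] = df['score'] / df['date'].map(baseline)
```

This assumes exactly one baseline row per date; with duplicates, `set_index` would keep both and `map` would raise on the ambiguity.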
How to solve second order ODE from a simplified version of the two-body problem? While thinking about the motion of a particle being attracted to another massive particle and considering that the acceleration of that particle is not constant, I found myself trying to solve this equation $$\frac{d^2f(t)}{dt^2}=\frac{1}{f(t)^2}.$$ This is how I got there: $m_1$ is the mass of particle 1 and $m_2$ is the mass of particle 2 which we will consider to be much larger than $m_1$. Both particles are at rest at t=0. Newton's law gives us: $F=ma$ $F=G\frac{m_1m_2}{r(t)^2}$ Let r be the distance between the two bodies. We assume here that the movement of the particle with mass $m_2$ with respect to a stationary point is negligible. $m_1\frac{d^2r(t)}{dt^2}=G\frac{m_1m_2}{r(t)^2}$ $\frac{d^2r(t)}{dt^2}=G\frac{m_2}{r(t)^2}$ I believe that all of this is a modified version of the two-body problem. Hopefully, the solution will be slightly simpler. This all leads us to the original question of how to solve: $\frac{d^2f(t)}{dt^2}=\frac{1}{f(t)^2}$ Related math SE post Note that your equation of motion should be $\ddot{r} = - G m_2/r^2$ if you want to have an attractive gravitational force (and $r > 0$.) @JasonFunderberker Thanks! @MichaelSeifert Doesn't the sign just depend on our coordinate system? If you want the equation to represent an object accelerating under a gravitational force, and $r$ to represent the distance from the origin (which we normally take to be positive), then you need to have $\ddot{r} < 0$ and so you need the negative sign. There's not a lot of freedom in that coordinate choice; inverting $r \to - r$ means that $r$ becomes the negative of the distance to the origin, which is weird, and you can't translate $r \to r + d$ without also changing the form of the right-hand side of the equation.
The DE has a solution: https://www.wolframalpha.com/input?i=%5By%28x%29%5D%5E2+y%27%27%28x%29%3D1 but it can be rendered explicit in $f(t)$ ($y(x)$). You need the negative sign to make the force attractive. With the negative sign, this is the radial Kepler equation. You can find an explicit solution for time as a function of radial distance.
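As the comments note, the attractive case is $\ddot{r} = -k/r^2$, which conserves $E = \tfrac{1}{2}\dot{r}^2 - k/r$, and that invariant gives a cheap correctness check on any numerical treatment. A hedged sketch (the constant $k$, the starting point, and the step count are illustrative only):

```python
# Radial free fall: x'' = -k / x**2 (attractive sign), conserving E = v**2/2 - k/x.
k = 1.0                  # illustrative constant standing in for G*m_2
x, v = 1.0, 0.0          # released from rest at x0 = 1
dt = 1e-4

def acc(pos):
    return -k / pos**2

E0 = 0.5 * v**2 - k / x

# Velocity Verlet is symplectic, so the energy error stays tiny and bounded.
a = acc(x)
for _ in range(2000):    # integrate to t = 0.2, far from the collision
    x += v * dt + 0.5 * a * dt**2
    a_new = acc(x)
    v += 0.5 * (a + a_new) * dt
    a = a_new

E1 = 0.5 * v**2 - k / x
```

For $k = 1$, $x_0 = 1$ the analytic time to reach the origin is $(\pi/2)\sqrt{x_0^3/(2k)} \approx 1.11$, so stopping at $t = 0.2$ keeps the integrator well away from the singularity.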
Send and Receive File Descriptors through LIBUV Pipe I have to send file descriptors of some shared memory buffers from one process to another. I'm able to transfer the fds directly over UNIX Domain Sockets as below: Send FDs: static void send_fds(int socket, int* fds, int n) // send fd by socket { struct msghdr msg = {0}; struct cmsghdr* cmsg; char buf[CMSG_SPACE(n * sizeof(int))], dup[256]; memset(buf, '\0', sizeof(buf)); struct iovec io = {.iov_base = &dup, .iov_len = sizeof(dup)}; msg.msg_iov = &io; msg.msg_iovlen = 1; msg.msg_control = buf; msg.msg_controllen = sizeof(buf); cmsg = CMSG_FIRSTHDR(&msg); cmsg->cmsg_level = SOL_SOCKET; cmsg->cmsg_type = SCM_RIGHTS; cmsg->cmsg_len = CMSG_LEN(n * sizeof(int)); memcpy((int*)CMSG_DATA(cmsg), fds, n * sizeof(int)); if (sendmsg(socket, &msg, 0) < 0) printf("Failed to send message\n"); } Receive FDs: static int* recv_fds(int socket, int n) { int* fds = (int*)malloc(n * sizeof(int)); struct msghdr msg = {0}; struct cmsghdr* cmsg; char buf[CMSG_SPACE(n * sizeof(int))], dup[256]; memset(buf, '\0', sizeof(buf)); struct iovec io = {.iov_base = &dup, .iov_len = sizeof(dup)}; msg.msg_iov = &io; msg.msg_iovlen = 1; msg.msg_control = buf; msg.msg_controllen = sizeof(buf); if (recvmsg(socket, &msg, 0) < 0) printf("Failed to receive message\n"); cmsg = CMSG_FIRSTHDR(&msg); memcpy(fds, (int*)CMSG_DATA(cmsg), n * sizeof(int)); return fds; } But when i use domain sockets through abstraction libuv_pipe_t provided by libuv, I was not able to transfer the fds. Does libuv provides any way to transfer file descriptors between server pipe and client pipe ? If yes how to send and receive fds exactly ? 
uv_pipe_t server: #include <assert.h> #include <memory.h> #include <stdlib.h> #include <unistd.h> #include <uv.h> #define SOCKET_NAME "socket_name" void alloc_buffer(uv_handle_t* handle, size_t suggested_size, uv_buf_t* buf) { static char buffer[1024]; buf->base = buffer; buf->len = sizeof(buffer); } void read_cb(uv_stream_t* stream, ssize_t nread, const uv_buf_t* buf) { if (nread < 0) { uv_stop(stream->loop); uv_close((uv_handle_t*)stream, NULL); } printf("message received : %s\n", buf->base); } static void connection_cb(uv_stream_t* server, int status) { printf("new connection recevied\n"); int r; uv_pipe_t connection; r = uv_pipe_init(server->loop, &connection, 0); assert(r == 0); r = uv_accept(server, (uv_stream_t*)&connection); assert(r == 0); r = uv_read_start((uv_stream_t*)&connection, alloc_buffer, read_cb); assert(r == 0); } int main() { uv_pipe_t p; int r; r = uv_pipe_init(uv_default_loop(), &p, 0); assert(r == 0); unlink(SOCKET_NAME); r = uv_pipe_bind(&p, SOCKET_NAME); assert(r == 0); r = uv_listen((uv_stream_t*)&p, 128, connection_cb); assert(r == 0); printf("listening...\n"); uv_run(uv_default_loop(), UV_RUN_DEFAULT); uv_loop_close(uv_default_loop()); } uv_pipe_t client: #include <assert.h> #include <memory.h> #include <uv.h> #define SOCKET_NAME "socket_name" static void connect_cb(uv_connect_t* connect_req, int status) { printf("connected!"); // send file descriptor int fds[2]; fds[0] = fileno(fopen("fd_test_0.txt", "w")); fds[1] = fileno(fopen("fd_test_1.txt", "w")); // how to send "fds" array to server ? } int main() { uv_pipe_t p; uv_connect_t conn_req; int r; r = uv_pipe_init(uv_default_loop(), &p, 0); assert(r == 0); uv_pipe_connect(&conn_req, &p, SOCKET_NAME, connect_cb); uv_run(uv_default_loop(), UV_RUN_DEFAULT); uv_loop_close(uv_default_loop()); } this example shows how to transfer pipe handles over pipes but it's doesn't completely solve my problem. Any help is appreciated !!! 
It is explained here in libuv docs: http://docs.libuv.org/en/v1.x/guide/processes.html#sending-file-descriptors-over-pipes You need to: set ipc parameter to 1 in uv_pipe_init on both sides on client use uv_write2 to send file descriptor accept it on server through uv_accept, then take actual transferred fd number from uv_stream_t accept parameter with uv_fileno. But doc says also: libuv only supports sending TCP sockets or other pipes over pipes for now. So it may not work with your memory buffer file descriptor.
OpenBLAS compilation with Visual Studio 2017 I am compiling OpenBLAS with Visual Studio 2017, using this tutorial: https://github.com/xianyi/OpenBLAS/wiki/How-to-use-OpenBLAS-in-Microsoft-Visual-Studio#cmake-and-visual-studio. It works perfectly. However, when I try to link it into a project that uses OpenBLAS, I get errors during linking. error LNK2019: unresolved external symbol spotrf_ error LNK2019: unresolved external symbol dpotrf_ error LNK2019: unresolved external symbol spotri_ error LNK2019: unresolved external symbol dpotri_ error LNK2019: unresolved external symbol sgeqrf_ error LNK2019: unresolved external symbol dgeqrf_ error LNK2019: unresolved external symbol sorgqr_ error LNK2019: unresolved external symbol dorgqr_ error LNK2019: unresolved external symbol ssyevd_ error LNK2019: unresolved external symbol dsyevd_ This project was working perfectly with the downloaded binary version of OpenBLAS. The only notable differences are that I changed the include path to the one generated by CMake, and instead of linking to libopenblas.dll.a, I link to openblas.lib, which should be a more Visual-Studio-friendly way of linking... I noticed that all these functions seem to come from Fortran files. Could that be the reason? And since OpenBLAS seems to be compilable with Visual Studio, how do I fix the problem?
Period of the propagator of quantum harmonic oscillators I found something that I'm confused with when calculating the propagator of harmonic oscillator. Using the energy representation, the propagator of a quantum harmonic oscillator can be expressed as : $$ K(x_f,t_f;x_i,t_i)=\sum_ne^{-i\omega T(n+\frac{1}{2})}\psi_n(x_f)\psi_n^*(x_i)\tag{1} $$ where the $\psi_n$s are the wave functions of energy eigenstates, and $T=t_f-t_i$. Let $T^\prime=T+\frac{2\pi}{\omega}$, then $K(T^\prime)=-K$. After some calculation, the final result of this propagator yields: $$ \begin{aligned} &K\left(x_f, t_f ; x_i, t_i\right) \\ &=\left[\frac{m \omega}{2 \pi \hbar i \sin \omega T}\right]^{1 / 2} e^{\frac{i}{\hbar} \frac{m \omega}{2 \sin \omega T}\left[\left(x_f^2+x_i^2\right) \cos \omega T-2 x_f x_i\right]} \end{aligned}\tag{2} $$ while $K(T^\prime)=K$ this time, rather than $K(T^\prime)=-K$. How did that happen? Good observation. OP's eq. (2) should be amended with a metaplectic correction/Maslov index: There is a caustic at every half-period, which leads to a phase factor $\exp\left(-\frac{i\pi}{2}\right)$ jump, cf. e.g. my Phys.SE answer here.
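Concretely, the standard Maslov-corrected form of eq. (2) (sketched here from the usual statement, not quoted from the thread) reads

```latex
K(x_f,t_f;x_i,t_i)
  =\left[\frac{m\omega}{2\pi\hbar\,\lvert\sin\omega T\rvert}\right]^{1/2}
   \exp\!\left[-i\left(\frac{\pi}{4}
        +\frac{\pi}{2}\left\lfloor\frac{\omega T}{\pi}\right\rfloor\right)\right]
   \exp\!\left\{\frac{i}{\hbar}\,\frac{m\omega}{2\sin\omega T}
        \left[\left(x_f^2+x_i^2\right)\cos\omega T-2x_f x_i\right]\right\}
```

For $0<\omega T<\pi$ the floor vanishes and the phase $e^{-i\pi/4}$ reproduces the principal branch of $[1/(i\sin\omega T)]^{1/2}$ in eq. (2). Under $T \to T + 2\pi/\omega$ the floor jumps by 2, the prefactor gains $e^{-i\pi} = -1$, and the periodicity $K(T') = -K(T)$ implied by eq. (1) is recovered.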
Reverse Modulus Operator with given condition I have an equation: x^2 mod p = z ; p and z are given. x, p and z are positive integers and the MAX value of x is given (say M). p is prime. How can I calculate the (multiple possible) values of x when p and z are known? UPDATE: I found a solution here: https://math.stackexchange.com/questions/848062/reverse-modulus-operator-with-given-condition/848106#848106 The question seems to be off-topic as it is about pure mathematics and not programming-related. Furthermore, if p is not a prime number, the equation might yield more than one solution. For instance, if p = 4 and z = 0, x = 0 and x = 2 are solutions. For primes p, this might be what you are looking for: http://en.wikipedia.org/wiki/Cipolla%27s_algorithm. But I agree that the question is off-topic for this site. http://math.stackexchange.com might be better suited. @Codor P is prime. I am more interested in a programmatic approach rather than a mathematical explanation; that's why I posted it here. If you are asking for code then (according to http://stackoverflow.com/help/on-topic) it is expected that you include your attempted solutions. @MartinR I am not looking for code but a programmatic approach. What I mean is that a mathematical approach may involve some very complex mathematical functions that are very difficult to code; I am looking for an approach that is not so difficult to code. This question appears to be off-topic because it is about maths (try http://math.stackexchange.com). @dream_machine - Kindly check my answer and comment if satisfied or unsatisfied! I have optimized it more using Martin R's valuable comment! Tonelli–Shanks for the win. if x^2 mod p = z then x^2 = n*p + z for some integer n with p and z known, substitute integer values for n to find x I don't know why Santosh was downvoted, but his reasoning is correct! As x^2 mod p = z --->> x^2 = n*p + z // for some integer n.
As you have p and z known in your hands, you can individually check whether x^2 mod p = z as shown in the code below, and then find x (or rather read off the values of x): public static void main(String[] args) { //main-method int x,p,z,xMAX=10; // as per your condition p=13; // assigned p a prime positive integer value z=10; // assigned z a positive integer value for(x=0;x<=xMAX;x++){ int sq=(int)Math.pow(x,2); // square of x for each loop if(sq%p==z){ // comparing square-value modulo the prime with the value of z System.out.println("One value of x possible is "+x); // if it matches, you have one solution } else continue; } } Sorry that the code is in Java, but I have included comments in the code to identify each section. Also, this is a very crude algorithm; one can surely find a better algorithm for it, I guess! But the output is working fine and is correct! Output as per code: One value of x possible is 6 One value of x possible is 7 Your answer is "Try all values of x until you find a solution". This is sensible for "small" values of xMAX, but for "large" values (e.g. in the range 2^60) this would be extremely slow and there are better algorithms (see my link above). - Also, instead of iterating over all possible values of n, you could just test if ((x * x) % p == z). @MartinR - Yeah, I went through your link, but I couldn't carry it through as I couldn't grasp it properly, and the example given there seemed irrelevant in this question's context. And I already stated in my answer that my code is a slow and complicated algorithm; the OP had better search for a fast one. But it also depends on what he was trying to achieve; it worked fine with small data values, but it is surely not optimal!! @MartinR - Also, I doubt the OP has stated that only positive integers are allowed, and I guess 2^60 is too large to fit in any programming language's integer range! @MartinR - And yeah, thanks for pointing out the ((x * x) % p == z) condition; now it's a bit more optimized, THANKS!
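For large p the linear scan above is hopeless; the Tonelli–Shanks algorithm mentioned in the comments finds one square root r of z modulo an odd prime p in polylogarithmic time, after which every solution x <= M has the form x ≡ ±r (mod p). A hedged sketch (function names are mine; it assumes p is an odd prime and z is a quadratic residue):

```python
def tonelli_shanks(z, p):
    """One square root of z modulo an odd prime p (z must be a quadratic residue)."""
    z %= p
    if z == 0:
        return 0
    assert pow(z, (p - 1) // 2, p) == 1, "z is not a quadratic residue mod p"
    if p % 4 == 3:                        # shortcut case
        return pow(z, (p + 1) // 4, p)
    # Write p - 1 = q * 2**s with q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # Find any quadratic non-residue n.
    n = 2
    while pow(n, (p - 1) // 2, p) != p - 1:
        n += 1
    c = pow(n, q, p)
    r = pow(z, (q + 1) // 2, p)
    t = pow(z, q, p)
    m = s
    while t != 1:
        i, t2 = 0, t                      # smallest i with t**(2**i) == 1
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        r = r * b % p
        c = b * b % p
        t = t * c % p
        m = i
    return r

def roots_up_to(z, p, M):
    """All positive x <= M with x*x % p == z."""
    r = tonelli_shanks(z, p)
    found = set()
    for res in {r, (-r) % p}:
        start = res if res > 0 else p
        found.update(range(start, M + 1, p))
    return sorted(found)
```

With the thread's values p = 13, z = 10, M = 10 this returns [6, 7], matching the Java program's output.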
Adding todo inside a caption I am using the todonotes package and I wanted to add a note inside a caption like: \caption{This is the section heading\todo{We should rethink the section title.}} However, this is throwing the “Not in outer par mode” error. I understand why, but I wonder if there is a way to do this. Thanks. Maybe this works also for captions outside figures... http://tex.stackexchange.com/a/256802/124842 A workaround is described in the answer https://tex.stackexchange.com/a/415323/153215 You can add a todo (not only todo[inline], but todo in margins) inside floating environments (like table, figure, etc). You just need to add the following patch: \usepackage{marginnote} \let\marginpar\marginnote Source: the todonotes package manual http://tug.ctan.org/macros/latex/contrib/todonotes/todonotes.pdf section 1.6.9. I have tested your code with this patch and it works correctly. Although 4 years have passed since this question was asked, I post the solution here in case someone needs it in the future, like myself today. This may or may not be useful, but the luatodonotes package works in captions. It requires the use of lualatex for compiling the document though. \documentclass{article} \usepackage{luatodonotes} \begin{document} \begin{figure} \caption{This is the section heading\todo{We should rethink the section title.}} \end{figure} \end{document} Thanks. I can't move to luatex for this project, unfortunately.
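Putting the marginnote workaround together into a minimal compilable document (a sketch assembled from the answer above, not tested against every todonotes version):

```latex
\documentclass{article}
\usepackage{todonotes}
\usepackage{marginnote}
% Patch from the todonotes manual (sec. 1.6.9): lets \todo work inside floats.
\let\marginpar\marginnote

\begin{document}
\begin{figure}
  \centering
  \caption{This is the section heading\todo{We should rethink the section title.}}
\end{figure}
\end{document}
```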
How to explain Gibbs free energy is a pressure-dependent state function? I'm aware that by taking the differential of the Gibbs free energy, we get the formula $\mathrm dG = V\,\mathrm dp - S\,\mathrm dT$, showing that the Gibbs free energy is pressure-dependent. However, while dealing with the definition of Gibbs free energy, $ΔG = ΔH - TΔS$ (and $ΔH$ must be held at constant temperature, am I wrong?), most textbooks state that it is held under constant pressure. I still couldn't understand the relationship between Gibbs free energy and pressure and why $\mathrm dG = V\,\mathrm dp$. $G$ is a function of both pressure and temperature. Only at constant temperature is $dG=Vdp$. At constant pressure $dG=-SdT$. $G$ is defined as $H - TS$, and $H$ is $U + PV$. Since in a closed system you have $\mathrm{d}U = T \mathrm{d}S - P \mathrm{d}V$, it follows that: $\mathrm{d}H = T \mathrm{d}S + V \mathrm{d}P$ $\mathrm{d}G = -S \mathrm{d}T + V \mathrm{d}P$
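Spelling out the last step, using only the definitions already quoted ($G = H - TS$, $H = U + PV$, and $\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$):

```latex
\begin{aligned}
G &= H - TS = U + PV - TS \\
\mathrm{d}G &= \mathrm{d}U + P\,\mathrm{d}V + V\,\mathrm{d}P
              - T\,\mathrm{d}S - S\,\mathrm{d}T \\
            &= (T\,\mathrm{d}S - P\,\mathrm{d}V) + P\,\mathrm{d}V + V\,\mathrm{d}P
              - T\,\mathrm{d}S - S\,\mathrm{d}T \\
            &= V\,\mathrm{d}P - S\,\mathrm{d}T
\end{aligned}
```

So at constant temperature ($\mathrm{d}T = 0$) the pressure dependence $\mathrm{d}G = V\,\mathrm{d}P$ survives, while the familiar $\Delta G = \Delta H - T\Delta S$ describes a process at fixed $T$ and $P$.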
Can we have a place for "definitive" questions There are a number of questions and topics that are now regularly closed as dupes of some generally accepted definitive question. As an example, new licensing questions get closed pretty quickly as dupes of this one. I know we can simply bookmark those questions, set them as favourites, or whatever other method we prefer for the purposes of close voting. However, I believe there would be value in having a place set aside for such questions to which new members can be pointed via the FAQ (at least those very few who ever bother to read the FAQ). Is this viable? Do you even see value in the idea? Edit Both answers so far have completely missed the point of the question. I am not looking for a way to group answers, search by tag or any other such thing. I am asking if we can have a designated place for those questions which are routinely used as the ultimate or definitive question. i.e. Questions such as the one I linked. I would also like the FAQ to have a reference to that area so that people can read those questions, rather than post yet another one to be closed as a dupe. John - if nothing better comes along then perhaps we can have a community wiki here on meta to keep track of them all if that is agreeable to enough people? @Robert, while that wouldn't be the ideal solution (which I suspect will never happen), it would certainly be usable. Perhaps someone who already has a list of those questions can kick it off. Sorting all questions or just those with a specific tag is some help, but I'd also like to see what John's asking for - a list of questions that we've decided are Canonical - an excellent answer to a recurring question, one that we'd refer to when closing very similar questions. Exactly.
I have to admit I'm very tempted to create a Canonical meta-tag just to keep these together; for the time being I keep them in my favorites, but that doesn't help anyone else. Having a tag for them on the main site is a good idea and quite doable, I would say. Additionally see http://blog.stackoverflow.com/2011/01/the-wikipedia-of-long-tail-programming-questions/ and the faq link on any tag mouseover. Are you looking for something like this: https://serverfault.com/tags/licensing/faq You can get here for any of the tags on the site by going to /tags/<tag>/faq or clicking the 'faq' tab when you are looking at a tag's page. This tab is sorted by incoming links to each question.
Markdown install error I have a problem with the Markdown install and I'm using Python 3.5. When using pip install markdown I got the error below: Command ""c:\users\ömer sarı\appdata\local\programs\python\python35-32\python.exe" -u -c "import setuptools, tokenize;file='C:\Windows\Temp\pip-build-zyp_5g0x\markdown\setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record C:\Windows\Temp\pip-ckif2kvg-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Windows\Temp\pip-build-zyp_5g0x\markdown\ Before that, I tried to install Django-markdown but it failed due to the same problem above. How can I solve this issue? Sorry, but this is not enough info. Is there a more specific error message? Are you using some sort of virtual environment? On my local Unix system the markdown install worked flawlessly... I installed a virtual environment. I have a Windows 10 operating system. I shared all the necessary info, but after installing the virtual environment, probably a virtual environment in Python 3.5 must be created. I did not create any virtual environment for this project.
How to change node.js debug port? How to change it from 5858 to 7000, for example? You can use --debug option: node --debug=7000 app.js You can use --inspect option for latest node >= v8 node --inspect=7000 app.js https://nodejs.org/en/docs/inspector/ This is really useful when having Grunt run multiple concurrent tasks and still have them all debug simultaneously. The same syntax applies to nodemon as it passes the param through to node. For nodejs version >= v8 use this: node --inspect=7000 --inspect-brk app.js I believe this was the case at least as far back as node v6. (But not v4.)
R and Export raster extent as esri shapefile I am working with a raster dataset in R and would like to make a polygon of its extent and then export this as an ESRI shapefile. My problem, or at least what I think is the problem, occurs when I try to export the spatial polygon dataframe as I get the following error: Error in writeOGR(p, ".", "xyz_extent", driver="ESRI Shapefile") : obj must be a SpatialPointsDataFrame, SpatialLinesDataFrame or SpatialPolygonsDataFrame My script follows. Please note, I have a beginner skillset when working with spatial data in R so I do ask that answers be well described. Thank you in advance to those that chime in. Script: library(raster) xyz <- raster("xyz.asc") crs(xyz) # CRS arguments: +proj=tmerc +lat_0=0 +lon_0=-115 +k=0.9992 +x_0=500000 +y_0=0 +datum=NAD83 +units=m +no_defs +ellps=GRS80 +towgs84=0,0,0 e <- extent(xyz) p <- as(e, 'SpatialPolygons') crs(p) <- crs(xyz) library(rgdal) writeOGR(p, ".", "xyz_extent", driver="ESRI Shapefile") The error occurs because you have a SpatialPolygons, not a SpatialPolygonsDataFrame object. An easy way around this is to use the shapefile function instead. library(raster) xyz <- raster() e <- extent(xyz) p <- as(e, 'SpatialPolygons') crs(p) <- crs(xyz) shapefile(p, "xyz_extent.shp") And you can read the file again with the same function x <- shapefile("xyz_extent.shp")
common-pile/stackexchange_filtered
$1,2,...,n(n+1)/2$ placed at random in bottom-heavy nxn triang. array. Prob. that largest num in every row is smaller than largest in any row below? From the 1990 Canada National Olympiad: $\dfrac{n(n+1)}{2}$ distinct numbers are arranged at random into $n$ rows. The first row has $1$ number, the second has $2$ numbers, the third has $3$ numbers and so on. Find the probability that the largest number in each row is smaller than the largest number in each row with more numbers. The conclusions I have reached so far: it is almost impossible to start from the top of the array, without knowing something about the distribution of numbers below (it is possible to meet the conditions for the first row even if it contains a number as large as $n$, whereas it is certain if it contains $1$) working from the bottom up, one can see that the last row must contain the number $\dfrac{n(n+1)}{2}$ if the condition is to be met Your first bullet is wrong. It is possible if the top number is $\frac {n(n+1)}2-n+1$ because each row could contain the next number and it is far from certain if the top number is $1$. For $n=3$ you could have the top number be $1$, then $2,6$ in the next row and $3,4,5$ in the bottom. Your second bullet is correct. I meant just the conditions for the first row. Conditions would still need to be met for all other rows relative to those below them. My word wrong is too strong. I was pointing out that you could have a larger number at the top and still satisfy the requirement. Understood. It wasn't wrong, but it was improvable. Let $p_n$ be the probability that this array meets the desired conditions. Then we note that for $p_{n+1}$, we need $\frac{(n + 1)(n+2)}{2}$ to be in the last row, and that the first $n$ rows satisfy the desired conditions. 
Thus, we have $$p_{n+1} = \mathbb{P}\left(\frac{(n + 1)(n+2)}{2}\text{ is in the last row}\right)p_n.$$ Since the last row has $n+1$ elements, and there are a total of $\frac{(n+1)(n+2)}{2}$ elements, we have $$ \mathbb{P}\left(\frac{(n + 1)(n+2)}{2}\text{ is in the last row}\right) = \frac{2}{n+2}.$$ We also note that $p_1 = 1$. Thus, we have \begin{align} p_{n+1} &= \frac{2}{n + 2}p_n \\ &= \frac{2^2}{(n + 2)(n + 1)}p_{n - 1} \\ &= \frac{2^n}{(n+2)(n+1)\cdots 3}\cdot p_1 \\ &= \frac{2^{n+1}}{(n+2)!}. \end{align} Thus, we have $p_n = \frac{2^n}{(n+1)!}$. The combinatorialist in me feels like there should be a quick, direct proof of this fact, but I can't come up with one; in the meantime, I think this recursive method works. Ah, now I think I understand. When looking at the row above the last one (and so on, inductively), it doesn't really matter what numbers have already been allocated, the only requirement is that you place the largest remaining number in this row. There is always exactly one number that you have to place in each row so that it doesn't cause a violation in some row above. Correct, that's precisely it. When I enumerate for a 3 tier configuration, I get 40 favorable which gives a probability of 1/18 instead of 1/3 by the formula in both the answers ! Based on your calculation, it appears that you ordered the rows, since $40 \cdot 18 = 720 = 6!$; the rows should be unordered. There should be a total of $\binom{6}{3,2,1} = 60$ total possibilities for $n = 3$; this is likely the error in your calculation. Let $T_r:=\frac{r(r+1)}{2}$ for every $r=0,1,2,\ldots$. We are counting the number of ways to arrange the numbers on this triangular array so that the largest number on each row is smaller than that of the next. On the $n$-th row of the triangular array, the number $T_n$ must be there, and $n-1$ other numbers are chosen from $T_n-1$ numbers. Hence, there are $n!\cdot \binom{T_n-1}{n-1}$ ways to choose and arrange elements of the $n$-th row. 
For $k=1,2,\ldots,n-1$, we have $T_{n-k}$ elements left, and the largest of them must be on the $(n-k)$-th row, and $n-k-1$ other entries of this row must be picked from $T_{n-k}-1$ remaining numbers. Hence, there are $(n-k)!\cdot\binom{T_{n-k}-1}{n-k-1}$ ways to choose and arrange elements for the $(n-k)$-th row. That is, the number of ways to assign elements into the triangular array according to the rule is $$\begin{align} \prod_{k=0}^{n-1}\,\left((n-k)!\cdot \binom{T_{n-k}-1}{n-k-1}\right) &=\prod_{k=0}^{n-1}\,\left(\frac{n-k}{T_{n-k}}\cdot\frac{T_{n-k}!}{T_{n-k-1}!}\right)=T_n!\,\prod_{k=0}^{n-1}\,\left(\frac{n-k}{T_{n-k}}\right) \\ &=T_n!\,\prod_{k=0}^{n-1}\,\left(\frac{2}{n-k+1}\right)=T_n!\,\left(\frac{2^n}{(n+1)!}\right)\,. \end{align}$$ Thus, the probability of getting this type of arrangements is $\frac{2^n}{(n+1)!}$. When I enumerate for a 3 tier configuration, I get 40 favorable which gives a probability of 1/18 instead of 1/3 by the formula in both the answers !
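For anyone who wants to settle the enumeration dispute in the comments, the closed form $p_n = \frac{2^n}{(n+1)!}$ can be brute-force checked with a short script. The sketch below (Python, not part of the original thread; function name is mine) counts favorable fillings by working from the longest row upward, exactly as both answers argue: the longest remaining row must receive the maximum of the remaining numbers.

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def exact_probability(n):
    """Brute-force p_n: fill rows from the longest upward; the longest
    remaining row must receive the maximum of the remaining numbers."""
    cells = n * (n + 1) // 2

    def count(remaining, row_len):
        if row_len == 0:
            return 1
        rest = remaining - {max(remaining)}  # the max is forced into this row
        return sum(
            count(rest - set(extra), row_len - 1)
            for extra in combinations(rest, row_len - 1)
        )

    favorable = count(frozenset(range(1, cells + 1)), n)
    total = factorial(cells)
    for k in range(1, n + 1):
        total //= factorial(k)  # rows are unordered sets of sizes 1..n
    return Fraction(favorable, total)

print(exact_probability(3))  # → 1/3, matching 2^3/4! and the 20/60 count above
```

For $n = 3$ this yields 20 favorable fillings out of $\binom{6}{3,2,1} = 60$, i.e. $\frac13$; the commenter's 40/720 count ordered the numbers within each row, which inflates both numerator and denominator inconsistently.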
common-pile/stackexchange_filtered
Upload file with apollo-upload-client from nodejs I saw some examples where files are uploaded from the browser. Is it somehow possible to use apollo-upload-client also from nodejs? According to the documentation the Upload!-variable has to be FileList, File, Blob or ReactNativeFile. I wasn't able to imitate one of those objects in nodejs. Any ideas how to do this? I've uploaded file from frontend but following this just to know if it's possible to do that from backend why using apollo client? search for already answered fetch/axios methods ... if apollo ... is it configured and running for any other query/mutation? I wanted to develop a GraphQL-Client that already can do everything out of the box. I want to make it as easy as possible to use the client and the apollo-upload-client is already deeply integrated. You can just pass in the file as a variable and it does the magic. Right now I'm doing the multipart request with form-data, but it's not as easy.
common-pile/stackexchange_filtered
TryGetValue in condition in dictionary I need to check whether any value is present in the dictionary or not using TryGetValue. This is my code. return $"{context.Headers["routingkey"]}_Killswitch_Api"; Here context.Headers is a dictionary. This is what I have tried: string keyvalue = string.Empty; if (context.Headers.TryGetValue("routingkey",out keyvalue)) { return $"{keyvalue}_Killswitch_Api"; } else { return "_Killswitch_Api"; } Please let me know whether what I have tried is the correct approach, and please advise. What's the problem with this code? Are you running into a specific error or problem? seems alright to me Your attempted solution looks fine to me. As a style suggestion, you can just do this because the output value is initialized to null if the value isn't found, which string interpolates to an empty string: string keyvalue; context.Headers.TryGetValue("routingkey", out keyvalue); return $"{keyvalue}_Killswitch_Api"; See the docs for Dictionary.TryGetValue. Thanks a lot Alyssa for your response @Kannan You're welcome! Please note the little checkmark ✓ for answers that satisfy the question :-) Actually it's defaulted to null and when you use a null in string interpolation it results in an empty string. Also you could inline the declaration like this context.Headers.TryGetValue("routingkey", out var keyvalue); @juharr: Good catch. I edited my answer to no longer be incorrect. Good to have extra options re: inline declaration Yes, what you are doing is valid. It is unnecessary to initialize keyvalue with string.Empty if you are not using it further in the program. If you want, you could also do: string keyvalue; if (!context.Headers.TryGetValue("routingkey", out keyvalue)) { keyvalue = string.Empty; } return $"{keyvalue}_Killswitch_Api"; Which would essentially accomplish the same thing. This is what I was looking for
common-pile/stackexchange_filtered
how to remove static methods or static initializers as part of code refactoring in java? private static TimedLruCache<String, List<ZoneCutoffInfoModel>> zoneCutoffCache = new TimedLruCache<String, List<ZoneCutoffInfoModel>>( 200, 60 * 60 * 1000); I have started with code refactoring; the idea is to reduce the static code in the project. What is the best way to remove static code? https://dzone.com/articles/why-static-bad-and-how-avoid Not aware of any automated way of doing that. Search & replace "static" with "" maybe. As long as the cache is a member of a singleton bean it's effectively the same as a static field. A "hands on" approach might be, eventually, the best choice. Change your static to non-static without worrying about the caller. Then fix the compilation issues. Though, note that not all static is inherently bad. Static code in DAOs (such as in the active entities pattern) is bad indeed, though in a mathematical functions library, or in a string manipulation library, it is an acceptable (and widely adopted) choice. Removing static code can be tricky. I can suggest some approaches I have been using: put the methods in singleton beans and 1. use a factory as a DI mechanism to inject instances so you don't need a DI framework, or 2. set up an injection framework like Google Guice to inject your instances. I am working on the Play framework; it has a unique way of request handling that makes everything much easier. Or take a look at Spring IoC for resource injection -inesh
common-pile/stackexchange_filtered
cvc-enumeration-valid: Value '2' is not facet-valid with respect to enumeration '[1]'. It must be a value from the enumeration I am getting an error validating with this XML: XSD <xs:simpleType name="XYZ"> <xs:restriction base="xs:nonNegativeInteger"> <xs:enumeration value="1"> </xs:enumeration> <xs:enumeration value="2"> </xs:enumeration> </xs:restriction> </xs:simpleType> XML value : <XYZ>2</XYZ> Error cvc-enumeration-valid: Value '2' is not facet-valid with respect to enumeration '[1]'. It must be a value from the enumeration. Can anyone please help me to understand the problem? How can I resolve it? I could not reproduce the error. It would be useful to have the entire schema as well as the instance, or any minimal example where the error appears so we can help you. The error message, cvc-enumeration-valid: Value '2' is not facet-valid with respect to enumeration '[1]'. It must be a value from the enumeration. and the simpleType from your question do not agree. The error message implies that only 1 is allowed yet 2 was encountered; your type definition does indeed allow both 1 and 2. To elicit an actual error message pertaining to your xs:simpleType, your XML would have to use a value, say 3, that is not allowed. Then, you would receive an error message like this: cvc-enumeration-valid: Value '3' is not facet-valid with respect to enumeration '[1, 2]'. It must be a value from the enumeration. Therefore, your (first, maybe only?) mistake is in believing that the posted xs:simpleType definition has anything to do with that error message. Hi kjhughes, Thanks for writing. Is it a problem with the simpleType? Why is validation failing when my XML value is 2? Re-read my answer. Your validation would not fail with the posted simpleType for an XML value of 2. The error message you posted simply does not correspond with such an outcome.
If this fact does not sufficiently help you to resolve your problem, add to your question a [mcve] (small XML file and small XSD and exact error message) and we'll help you further. I've got this working, I think it addresses your question, but as KJ indicates without an example we're really just guessing. Here's a sample XML <xml> <XYZ>3</XYZ> </xml> And a sample schema <xs:schema elementFormDefault="qualified" attributeFormDefault="unqualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="xml"> <xs:complexType> <xs:sequence> <xs:element name='XYZ'> <xs:simpleType> <xs:restriction base="xs:nonNegativeInteger"> <xs:enumeration value="1"/> <xs:enumeration value="2"/> </xs:restriction> </xs:simpleType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> With a value of 3 (invalid), I get the following exception. The 'XYZ' element is invalid - The value '3' is invalid according to its datatype 'NonNegativeInteger' - The Enumeration constraint failed. Line: 2 Column: 10
common-pile/stackexchange_filtered
How to get mobile inbox messages from a mobile to a database using PHP? I want to fetch messages from the mobile inbox and store them in a database for further access on the site. For that I did some R&D on the Nokia forum, but it requires the phone to be connected to a PC and accessed through PC Suite. I don't want to access it through PC Suite, because different mobiles have different PC Suites, so the solution should not depend on PC Suite. Is there any way to do this with PHP without PC Suite connectivity? I came to know that it may be possible with AT commands, and I also went through AT command usage on the network, but I don't have much idea about it. So I want to develop it with the help of PHP. Let me know if there is any solution for this, and suggest some ideas or references so that I can learn more and focus on this. Thanks in advance. I am very happy to see this question because I had a similar requirement for one of my projects. I found a solution and it has been working perfectly for the last one and a half years. I designed an architecture diagram for this. This kind of architecture needs a mobile application as well as a web application. You want to develop your mobile application in such a way that, whenever an incoming message event occurs, your application generates an HTTP request to your web application API, which is a PHP page. Your application's generated HTTP request will be something like this: http://www.myownhost.com/API/apipage.php?senderNumber=9876543210&message=receivedMessage&applicationID=1234 Your API code will be something like this: apipage.php (this page acts as your solution API) <?php if ( isset( $_GET['senderNumber'] ) && isset( $_GET['message'] ) && isset( $_GET['applicationID'] ) ) { $senderNumber = $_GET['senderNumber']; $message = $_GET['message']; $appId = $_GET['applicationID']; /* Write your code to insert the values into the database.
*/ } ?> Generating an HTTP request from a mobile device is an easy task because HTTP libraries/classes are already available in Java. I did my mobile application in Android and that application is still working well. Advantages of this type of architecture are: we are able to use the same web API for different mobile devices because we are sending data through HTTP; you will be able to control device requests from the API (block/allow/forward); easy implementation. javad_Shareef yeah nice work friend, I don't know much about Android apps but now I am learning to implement this, thanks Yadav Chetan, you could do the same on any platform, provided you know how to generate an independent HTTP request from your app. Maybe a device like a hardware SMS gateway would be a solution for you. This is a device which you buy, put a SIM card from your mobile operator inside, and connect to your LAN. You can send and receive SMS messages with it through an HTTP API (that's your scenario) or directly via the web interface of the device. Hardware SMS gateways are devices that are like a small computer with a built-in GSM modem. They usually run on Linux and have a built-in database and a web server. Pros: reliability (SMS messages are stored inside the device's database, and there are usually fail-over mechanisms for the GSM modem), scalability, less pain with integration. Cons: money. You have to make a one-time investment in such a device. One example of such a hardware SMS gateway is SMSEagle. See if this suits you. Disclaimer: I work for SMSEagle @Chris S: thanks for pointing that out. I tried to be informative here (giving both advantages and disadvantages) without crossing the thin line between information and promotion. If I crossed this line it was unintentional. Much improved - sorry about the tone and fine line, but spam is quite the issue on a site like this.
common-pile/stackexchange_filtered
MATLAB pdf of filtered Rayleigh distribution Good evening, I am trying to program the following equation in MATLAB, which is a Rayleigh distribution made up of two Gaussian arrays. No matter what I do it does not look close to the normalized histogram or the generic pdf for a Rayleigh fade. So, here is what I have done. x1 = randn(100000, 1); % Create a second array of Gaussian random numbers y1 = randn(100000, 1); % Pass both Gaussian arrays through the low pass filter and save to variables x1_LPF = filter(LPF, 1, x1); y1_LPF = filter(LPF, 1, y1); % New histograms of the Rayleigh distribution for filtered data ray1 = abs(x1_LPF + j*y1_LPF); figure('Name', 'Normalized Histogram of Rayleigh Distribution') [a, b] = hist(ray1, 100) delta_x = b(3) - b(2) % SIGMA left out of the equation because it is equal to 1 in the problem g = b .* exp((-b.^2) / 2) .* delta_x; plot(b , g, 'b') Which gives me this: When instead it should look something like the black line on this: Here are the settings for my filter in fdatool in case someone wants to run it themselves by exporting it as variable LPF: To get the pdf of a filtered Rayleigh distribution, you have to take the original pdf equation and substitute any instance of sigma^2 with the mean square value of the filtered Rayleigh distribution.
So, the equation becomes 2x/MSV * exp(-x^2 / MSV) Something like this: N = 100000; % number of samples x1 = randn(N, 1); y1 = randn(N, 1); x1_LPF = filter(LPF, 1, x1); y1_LPF = filter(LPF, 1, y1); ray1_f = abs(x1_LPF + 1i*y1_LPF); range = [0:0.01:4]; subplot(1, 2, 2); histogram(ray1_f, 'Normalization', 'pdf'); title('Normalized Histogram of Rayleigh Distribution (filtered)') xlabel('Random Variable') ylabel('Probability') mean_square = mean(ray1_f .^ 2); filter_theory = (range) * 2 / mean_square .* exp( - (range.^2) ./ mean_square); hold on plot(range, filter_theory, 'Linewidth', 1.5) xlim([0 .6]) legend('Simulation', 'Theoretical') The following code can generate the desired distribution: N=1000000; % Random uniform variables U=rand(N,1); % Rayleigh random variable using inverse transform sampling sigma=1; x1=sigma*sqrt(-2*log(1-U)); histogram(x1,'Normalization','pdf'); % Theoretical equation x=linspace(0,6,100); pdf=x/sigma^2.*exp((-x.^2)/(2*sigma^2)); % Plot hold on plot(x,pdf,'r') legend('Stochastic','Theoretical') The resulting plot is the following. This is the proper simulation and theoretical printout for an unfiltered Rayleigh distribution. So, I need the filtered theoretical curve and the filtered Rayleigh distribution. So I guess my real question is how I can properly "filter" the theoretical curve to fit my simulated data. Can you send a sample of your data? Here is a link to my GitHub with the example code and the low pass filter that you have to import: https://github.com/TreverWagenhals/TreverWagenhals/tree/master/School/Wireless%20Communications/Rayleigh
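As a cross-check of the accepted idea (measure the mean square value of the filtered envelope and substitute it wherever 2*sigma^2 appears), here is a language-neutral sketch in plain Python, not part of the original thread. The moving-average low-pass filter is an illustrative stand-in for the fdatool-designed FIR, which is not reproduced here, and the helper names are mine:

```python
import math
import random

random.seed(0)
N = 100_000
TAPS = 8  # illustrative moving-average low-pass filter (stand-in for the
          # fdatool FIR from the question, which is not reproduced here)

def lowpass(samples, taps=TAPS):
    """Zero-initial-condition moving average, like filter(ones(1,taps)/taps, 1, x)."""
    out, window, acc = [], [], 0.0
    for s in samples:
        window.append(s)
        acc += s
        if len(window) > taps:
            acc -= window.pop(0)
        out.append(acc / taps)
    return out

x = lowpass([random.gauss(0, 1) for _ in range(N)])
y = lowpass([random.gauss(0, 1) for _ in range(N)])
r = [math.hypot(a, b) for a, b in zip(x, y)]

# The key step from the answer: measure the mean square value of the
# filtered envelope and use it in place of 2*sigma^2.
msv = sum(v * v for v in r) / len(r)

def rayleigh_pdf(v, msv):
    # Filtered-Rayleigh pdf: f(v) = (2v / MSV) * exp(-v^2 / MSV)
    return (2 * v / msv) * math.exp(-v * v / msv)

# The theoretical mean of this pdf is sqrt(pi * msv) / 2; it should land
# close to the sample mean of the filtered envelope.
print(sum(r) / len(r), math.sqrt(math.pi * msv) / 2)
```

Because a linear filter keeps the two Gaussian components Gaussian (only shrinking their variance), the filtered envelope is still Rayleigh, which is why rescaling by the measured MSV is all the "filtering" the theoretical curve needs.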
common-pile/stackexchange_filtered
How can I tell when a webservice does not respond due to internet connection problems As in the title: I have a page that calls a lot of webservices through JavaScript to get several pieces of info on the client side. I have inserted a counter++ in every webservice call and a counter-- in the webservice response methods and in the webserviceFail method. I tested it a lot and everything worked fine. I show a message asking the user to wait until all the info comes from the server, and when the counter is 0 he can submit the form. In the test environment everything was OK and all the tests passed. For a specific client (who does not have a good internet connection), it happens that even if he waits a long time the message remains, which means that some webservice didn't respond or didn't return to the webserviceFail method. Is this possible? Is there a timeout for the webservice, and after the timeout expires does it return to the webserviceFail method? I found out that there is a case where a webservice doesn't respond, so to speak. When you call a webservice, be sure to surround it with try/catch: when the webservice is called and some parameter is NaN while the webservice expects an int, an exception is thrown on the client side because NaN can't be serialized. This way the server receives nothing, so control never reaches the fail function on the client side.
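The failure mode described in the self-answer - a request that dies during client-side serialization and therefore never triggers either the success or the fail callback - can be reproduced with strict JSON encoding. A minimal sketch in Python (helper name is hypothetical; this is an analogy, not the original JavaScript stack):

```python
import json

def call_service(payload):
    """Mimic a client-side serializer that enforces strict JSON.

    With allow_nan=False the encoder rejects NaN/Infinity, so the request
    fails before it is ever sent -- neither a success nor a server-side
    failure callback would fire, matching the symptom in the question.
    """
    try:
        body = json.dumps(payload, allow_nan=False)
    except ValueError:
        return "client-side failure: request never sent"
    return f"sent: {body}"

print(call_service({"count": 5}))             # → sent: {"count": 5}
print(call_service({"count": float("nan")}))  # → client-side failure: request never sent
```

This is why wrapping the call site in try/catch (and decrementing the counter there too) closes the gap: the counter otherwise stays stuck at a nonzero value forever.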
common-pile/stackexchange_filtered
Codeigniter Query Builder Update function I have an update function in my Model which I call from my controller as if($_POST) { $this->User_model->update('user_about',$_POST,$this->session->userdata['id']); } It takes three parameters: the table name, the post data and the user id. The function is defined in the Model as public function update($table,$data,$id) { $row=$this->db->select('*')->from($table)->WHERE('user_id',$id)->row(); if($row) { $this->db->WHERE('user_id',$id)->UPDATE($table,$data); } else { $data['user_id']=$id; $this->db->insert($table,$data); } } What I am doing here is checking whether a record for the particular user exists: if it doesn't, insert it; otherwise, update it. Works like a charm. Question Is there a way to skip the IF condition block? Is there any provision in the query builder which performs the check itself? Only with MySQL can you use INSERT .. ON DUPLICATE KEY UPDATE, but CodeIgniter does not support such a query. Write the query yourself, or extend the query builder class and write such a method @splash58 that's what I'm doing and it's working. Just wanted to ask if someone else came across this issue and had the same thoughts. Here is a custom generic insert-and-update-on-duplicate function that I have always used in my programming. public function updateOnDuplicate($table, $data ) { if (empty($table) || empty($data)) return false; $duplicate_data = array(); foreach($data AS $key => $value) { $duplicate_data[] = sprintf("%s='%s'", $key, $value); } $sql = sprintf("%s ON DUPLICATE KEY UPDATE %s", $this->db->insert_string($table, $data), implode(',', $duplicate_data)); $this->db->query($sql); return $this->db->insert_id(); } You can use the above function in the model and call it from the controller. The function will update the value if a duplicate occurs. Check out the following blog post if you need a detailed explanation: A Generic CodeIgniter Function for both Update and Insert. Excellent code James. I will use it in my project, thanks
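For readers outside CodeIgniter/MySQL, the same single-statement upsert idea exists in other databases. A minimal sketch using Python's built-in sqlite3 module (SQLite >= 3.24 syntax; the table and column names are hypothetical stand-ins for the user_about table in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_about (user_id INTEGER PRIMARY KEY, bio TEXT)")

def upsert(conn, user_id, bio):
    # SQLite >= 3.24 upsert clause; plays the role of MySQL's
    # INSERT .. ON DUPLICATE KEY UPDATE, removing the select-then-branch logic.
    conn.execute(
        "INSERT INTO user_about (user_id, bio) VALUES (?, ?) "
        "ON CONFLICT(user_id) DO UPDATE SET bio = excluded.bio",
        (user_id, bio),
    )

upsert(conn, 1, "first version")
upsert(conn, 1, "second version")  # same key: updates instead of duplicating
print(conn.execute("SELECT bio FROM user_about WHERE user_id = 1").fetchone())
# → ('second version',)
```

The database enforces the key check atomically, which also avoids the race condition the SELECT-then-INSERT pattern has under concurrent requests.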
common-pile/stackexchange_filtered
Loop over an object that is in an array and in another object I have the following structure. I need to get the Internal value and map through it in React. I think I need to get an array of values, for example: ['Bitcoin', 'Ethereum'...] and map through it. How can I implement it? let arr = [ { "CoinInfo": { "Id": "1182", "Name": "BTC", "FullName": "Bitcoin", "Internal": "BTC", "ImageUrl": "/media/19633/btc.png", "Url": "/coins/btc/overview" } }, { "CoinInfo": { "Id": "7605", "Name": "ETH", "FullName": "Ethereum", "Internal": "ETH", "ImageUrl": "/media/20646/eth_logo.png", "Url": "/coins/eth/overview" } } ] Here's how you'd get an array of coin names using Array.prototype.map() const arr = [{ "CoinInfo": { "Id": "1182", "Name": "BTC", "FullName": "Bitcoin", "Internal": "BTC", "ImageUrl": "/media/19633/btc.png", "Url": "/coins/btc/overview" } }, { "CoinInfo": { "Id": "7605", "Name": "ETH", "FullName": "Ethereum", "Internal": "ETH", "ImageUrl": "/media/20646/eth_logo.png", "Url": "/coins/eth/overview" } } ]; const coinNames = arr.map(x => x.CoinInfo.FullName); console.log(coinNames); Sorry @IonutAchim I thought you needed them when using strings. Will edit @JackBashford - You need them when you need to look up a key via a var. const propName = 'Name'; const value = obj[propName]; change let and var to const and this answer is perfect Ah, of course @AmirPopovich That makes sense - if you used dots in that example, it would create a property literally named propName. Thanks for clarifying that. Changed as per @Pavlo's request Object property names are always strings. You only wrap them in quotes when they are the same as reserved words or are not valid JS identifiers (e.g. contain a space or hyphen, or start with a number).
Do it like this import React from 'react' export default class YourComponent extends React.Component { render() { let arr = [ { "CoinInfo": { "Id": "1182", "Name": "BTC", "FullName": "Bitcoin", "Internal": "BTC", "ImageUrl": "/media/19633/btc.png", "Url": "/coins/btc/overview" } }, { "CoinInfo": { "Id": "7605", "Name": "ETH", "FullName": "Ethereum", "Internal": "ETH", "ImageUrl": "/media/20646/eth_logo.png", "Url": "/coins/eth/overview" } } ] let newArr = arr.map((data) => { return data.CoinInfo.FullName }) console.log('new array', newArr); return ( <div> </div> ) } } You’re getting downvotes because this is not how you use map(). map() creates and returns an array. Using it to loop over and push into another array is redundant. See the answers above for examples. @MarkMeyer Thank you so much for the help! I did the mistake and later I have corrected it. @MarkMeyer I could have done it but I thought this is the best lesson I got.
common-pile/stackexchange_filtered
Increment variable by X every second I am stuck at this section of code. I want an int variable to increase by X every second until (variable <= requiredNumber). Please guide me. Edit // I have a variable 'i', and I want its max value to be, say, 280. I want to perform an increment on the variable so that every second the value of 'i' increases by 1 until (i == 280). Use a Timer and add the code in the Tick event. Use a timer or Thread.Sleep. You need to say more about this... nobody can understand you. Post the code or what you want. This is an extremely simple problem with an extremely simple solution. Do your research before asking a question. There are plenty of similar questions here that can help you. I did search but wasn't able to find a suitable solution... and it's still showing an error, so do some research before downvoting a question. Do you want to make it single-threaded? int i = 0; while (i < max) { i++; Thread.Sleep(x); // in milliseconds } or multi-threaded: static int i = 0; // class scope var timer = new Timer { Interval = x }; // in milliseconds timer.Elapsed += (s,e) => { if (++i > max) timer.Stop(); }; timer.Start(); I tried using Thread.Sleep(x); but it's showing "'Thread' does not exist in the current context" I didn't get what you said? @user1371640: Use System.Threading.Thread.Sleep(x) or add using System.Threading; at the top. Or google :) I want to make a linear transformation of an image... and I want to present it as an animation. I am using VS2012 for Windows 8 I tried 'using System.Threading;' but it's still showing the same error @user1371640: Works for me though. You can create an instance of the Timer class with a 1 second interval (passing 1000 in the constructor) and then register the Elapsed event. Do the increment you are trying in the event handler code.
Without any context this code should do its job: for(int i=0; i<280; i++){ Thread.Sleep(1000); } But for UI stuff you should use e.g. a Timer.
common-pile/stackexchange_filtered
android imageview onClick animation I guess this is kind of an odd question, but I have tried setting an OnClickListener on an ImageView and it has worked. The problem is that the user cannot sense the click. I mean, if some of you have worked in other mobile environments (like the Apple iPhone), then when we click on an image it gives an effect on the image so that the user can understand that the image has been clicked. I have tried setting alpha using the setAlpha method but it doesn't work, though the same thing works fine in an onFocusListener implementation. Can someone suggest a different way to modify the image on click? I am new to Android so I haven't learnt the nuances of simple animation either... if there is any simple animation I can use for this then please let me know. Thanks! If you are using an AppCompat theme, check this answer: https://stackoverflow.com/a/34388361/1770409 <?xml version="1.0" encoding="utf-8"?> <set xmlns:android="http://schemas.android.com/apk/res/android"> <alpha android:fromAlpha = "1.0" android:toAlpha = "0.5" android:duration = "300"> </alpha> <scale android:fromXScale = "1" android:toXScale = "0.9" android:fromYScale = "1" android:toYScale = "0.9" android:pivotX="50%" android:pivotY="50%" android:duration = "50"> </scale> </set>
#1st shrink on touch #2nd become normal when you release press.. super effect! Thanks a lot for sharing that code! Awesome! You'll want to use a drawable that contains different images for the different states you want to support. Here's an example: <?xml version="1.0" encoding="utf-8"?> <selector xmlns:android="http://schemas.android.com/apk/res/android"> <item android:state_pressed="true" android:drawable="@drawable/img_pressed" /> <item android:state_focused="true" android:drawable="@drawable/img_focused" /> <item android:drawable="@drawable/img_at_rest" /> </selector> Name this file img.xml or something and put it in your drawable directory, then set your ImageView's image to img.xml. @drawable/img_at_rest is the original image you're trying to use, while @drawable/img_pressed and @drawable/img_focused are the images to use for their respective states. You can also use solid colors instead of images if it's appropriate for your use case. wow... this is something which i can use... but can i just tweek the code so that the image doesnt change but the alpha of the image is changed??? Hmm, I'm not sure. The xml interfaces for the android.graphics.drawable package are not well documented. See if you can find a way to specify an alpha transparency, or perhaps nest an alpha color over your image within an specification. well Mike my problem is that the image is not from resource but from the internet... hence changing images i guess wont help... Thanx 4 such a quick response... anim/anim_item.xml <?xml version="1.0" encoding="utf-8"?> <set xmlns:android="http://schemas.android.com/apk/res/android"> <alpha android:fromAlpha="0.0" android:toAlpha="1." android:duration="1000"> </alpha> </set> And add: myView.startAnimation(AnimationUtils.loadAnimation(context, R.anim.anim_item)); Not sure if it works, but have you tried setSelected() http://developer.android.com/reference/android/widget/ImageView.html#setSelected(boolean)
common-pile/stackexchange_filtered
Google App engine - need to use userService in each page On Google App Engine, do I need to import and initialize the UserService object to log in the user through Gmail on each page (JSP), or is there a better way to achieve this? Using filters, for example? Use sessions: when the user logs in, store the user id and any data you will frequently use in the session. Then check the session before going to the UserService. If you mean Servlet sessions, according to another post I have to add <sessions-enabled>true</sessions-enabled> to appengine-web.xml.
common-pile/stackexchange_filtered
Unity - Reverse transform euler angles, so it counts clockwise? I'm making some sort of egg timer thing... where you can input your minutes by rotating the clock. transform.eulerAngles however goes the wrong way around: But I need it to go this way around: So I can easily get my minutes by dividing the numbers by 6... but I somehow cannot figure out how to flip it. using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.UI; using System.Linq; public class Test3 : MonoBehaviour { [SerializeField] RectTransform rectTransform; int[] snapValues = new int[13]{0,30,60,90,120,150,180,210,240,270,300,330,360}; public void Drag() { var dir = Input.mousePosition - Camera.main.WorldToScreenPoint(rectTransform.anchoredPosition); var angle = Mathf.Atan2(dir.x, dir.y) * Mathf.Rad2Deg; transform.rotation = Quaternion.AngleAxis(angle, -Vector3.forward); var nearest = snapValues.OrderBy(x => Mathf.Abs(x - transform.eulerAngles.z)).First(); print(transform.eulerAngles.z); } } To flip the result (i.e. 360 = 0, 180 = 180, 0 = 360), you can just subtract it from 360: var angle = 360 - ( Mathf.Atan2(dir.x, dir.y) * Mathf.Rad2Deg ); Big facepalm: to get transform.eulerAngles right I simply had to do: print(Mathf.Abs(transform.eulerAngles.z-360)); Thanks!
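The flip-and-snap logic from the thread can be sketched in Python for clarity (not part of the original C#; function names are mine, and this uses modulo so 360 maps to 0, a slight variation on the Mathf.Abs approach):

```python
def clockwise(angle_ccw):
    """Flip a counterclockwise angle reading to clockwise, keeping 0..359."""
    return (360 - angle_ccw) % 360

SNAP_STEP = 30  # clock-face marks every 30 degrees (5-minute steps), as in snapValues

def minutes_from_angle(angle):
    """Snap to the nearest mark, then convert degrees to minutes (6 deg per minute)."""
    snapped = round(angle / SNAP_STEP) * SNAP_STEP % 360
    return snapped // 6

print(clockwise(90))                        # → 270
print(minutes_from_angle(clockwise(272)))   # → 15
```

Snapping after the flip keeps the snap table unchanged; only the direction of the reading is reversed.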
Definitions of Serial TTL UART MCU acronyms I was looking for a comprehensive list of terms and acronyms concerning serial peripheral communication, including their descriptions and, crucially, how they relate to each other. Since this site often comes up in searches for individual terms, I thought it might be useful to collect all these terms in one place. You may contribute to the Community Wiki answer below. This has been collecting some downvotes. Please comment on whether you'd like to see this information deleted or why it is unhelpful. @thebusybee I did; the answer below is my own. I made it a Community Wiki, so you are free to offer constructive contributions. Not sure what to make of all the downvotes. That's a good idea. To make this preparation visible, you could add a note to your question.
serial - The sending of data in sequential bits
peripheral - A device connected to a computer system but not part of its core components
bitbang - A method of implementing peripheral communication through software as opposed to dedicated hardware for generating and processing data signals
DB-9 - Also DE-9, it is a 9-pin version of the early DB-25 style of physical connector, consisting of 9 pins arranged in 2 rows, commonly used for interfacing peripherals over the RS-232 serial data communication interface
COM - Communication port. The name of an addressable serial port interface originated by IBM using the DB-9 connector
RS-232 - Recommended Standard #232. Also known as EIA/TIA-232, this is an interface standard for serial data communication
RS-485 - Recommended Standard #485. Also known as EIA/TIA-485(-A), it is an interface standard for balanced-pair serial data communication
FTDI - Future Technology Devices International Limited. This is a Scottish semiconductor company which produces a very popular USB-to-Serial adapter IC (integrated circuit)
FT232RL - The part number of an IC produced by FTDI commonly found in TTL Serial adapters
TTL - Transistor-Transistor Logic. Denotes signal voltages at the transistor logic level of 3 V to 5 V
MCU - Microcontroller unit. Also MC, UC, or μC, it is a single IC chip with complex, computer-like functionality
UART - Universal Asynchronous Receiver/Transmitter. A simple protocol for exchanging serial data utilizing between 2 and 9 data signals
USB - Universal Serial Bus. A modern serial data communication standard that is extendable, auto-configurable, and highly complex
RX - Receive. The signal pin used for receiving serial data at an implied TTL voltage
TX - Transmit. The signal pin used for sending serial data at an implied TTL voltage
RXD - Receive Data. A signal pin (#2 on DB-9) used for incoming serial data per the RS-232 standard
TXD - Transmit Data. A signal pin (#3 on DB-9) used for outgoing serial data per the RS-232 standard
DTR - Data Terminal Ready. A signal pin (#4 on DB-9) used for an optional flow control mechanism of the RS-232 standard
GND - Ground. A voltage reference pin (#5 on DB-9) used for signal ground
DSR - Data Set Ready. A signal pin (#6 on DB-9) used for an optional flow control mechanism of the RS-232 standard
RTS - Request to Send. A signal pin (#7 on DB-9) used for an optional flow control mechanism of the RS-232 standard
CTS - Clear to Send. A signal pin (#8 on DB-9) used for an optional flow control mechanism of the RS-232 standard
TXL - Transmit LED. A status pin capable of driving an LED to indicate transmit activity on the COM port
RXL - Receive LED. A status pin capable of driving an LED to indicate receive activity on the COM port
VCC - Voltage Common Collector. A power-carrying connection with a higher voltage relative to GND
VSS - Voltage Source Supply. Same as GND
SPI - Serial Peripheral Interface. A synchronous serial communication interface well suited for short distances
I2C - Inter-Integrated Circuit. Also IIC or I²C, it is a serial communications bus well suited for short-distance peripheral-microcontroller communication
TWI - Two-Wire Interface. Alternative name for I2C
VCP - Virtual COM Port. A component of modern USB-to-Serial adapter drivers that allows the operating system to present a legacy COM port to applications that expect or require it for serial communication
The so-called "DB-9" connector should really be called "DE-9", according to Cannon, who introduced the D-subminiature connector family. The second letter (A - E) indicates the shell size. @PeterBennett I've never seen it referred to by those letters, except on Wikipedia, but I suppose that in and of itself is enough reason to make a mention of it. Thanks The normal density D-subminiature connectors are DA-15, DB-25, DC-37, DD-50 and DE-9. (looks like the 9-pin was an afterthought :-) )
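To make the bitbang and UART entries above concrete, here is a small illustrative sketch (in Python; the helper name and the 8N1-style framing defaults are my assumptions, not part of the glossary) of the bit sequence a bit-banged transmitter would drive on the line for one byte:

```python
def uart_frame(byte, data_bits=8, stop_bits=1):
    """Return the line levels for one 8N1-style UART frame.

    A frame is: one start bit (0), the data bits sent LSB-first,
    then one or more stop bits (1). The idle line level is 1.
    """
    if not 0 <= byte < (1 << data_bits):
        raise ValueError("byte out of range for the configured data bits")
    bits = [0]                                           # start bit pulls the line low
    bits += [(byte >> i) & 1 for i in range(data_bits)]  # data bits, LSB first
    bits += [1] * stop_bits                              # stop bit(s) return the line high
    return bits

print(uart_frame(0x55))  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

A bit-banged implementation would simply set a GPIO pin to each of these levels in turn, waiting one bit period (1/baud seconds) between them.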
Cannot set UIPickerView dataSource / delegate programmatically I have a UIPickerView that I've placed on my story board and created a reference to the picker in my controller. In my controller, I would like to set the delegate programmatically. Here is what I've tried: class MyViewController: UIViewController { @IBOutlet weak var text1: UITextField! @IBOutlet weak var text2: UITextField! @IBOutlet weak var picker: UIPickerView! override func viewDidLoad() { super.viewDidLoad() let textPickerDelegate = TextPickerdDelegate(text1: text1, text2: text2, picker: picker) text1.delegate = textPickerDelegate text2.delegate = textPickerDelegate picker.delegate = textPickerDelegate picker.dataSource = textPickerDelegate picker.hidden = true } } The code for the delegate looks like this: class TextPickerDelegate: NSObject, UIPickerViewDelegate, UIPickerViewDataSource, UITextFieldDelegate { static let TEXT_1_DATA: Array<Array<String>> = { let text1Data: Array<Array<String>> = [[], ["a", "b"]] for i in 1...100 { var text1 = text1Data[0] as Array<String> text1.append(String(i)) } return text1Data }() static let TEXT_2_DATA: Array<Array<String>> = { let text2Data: Array<Array<String>> = [[], ["c", "d"]] for i in 1...100 { var text2 = text2Data[0] as Array<String> text2.append(String(i)) } return text2Data }() let text1: UITextField let text2: UITextField let picker: UIPickerView var pickerData: Array<Array<String>> = TextPickerDelegate.TEXT_1_DATA init(text1: UITextField, text2: UITextField, picker: UIPickerView) { self.text1 = text1 self.text2 = text2 self.picker = picker } func numberOfComponentsInPickerView(pickerView: UIPickerView) -> Int { return pickerData.count } func pickerView(pickerView: UIPickerView, numberOfRowsInComponent component: Int) -> Int { return pickerData[component].count } func pickerView(pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String! 
{ return pickerData[component][row] } func pickerView(pickerView: UIPickerView, didSelectRow row: Int, inComponent component: Int) { // Select the row picker.hidden = true; } func textFieldShouldBeginEditing(textField: UITextField) -> Bool { if (textField == text1) { pickerData = TextPickerDelegate.TEXT_1_DATA } else if (textField == text2) { pickerData = TextPickerDelegate.TEXT_2_DATA } picker.hidden = false return false } } As soon as I try to render the view containing the picker, I get the following error: 2015-08-20 20:19:14.840 MyApp[13642:645025] -[_UIAppearanceCustomizableClassInfo numberOfComponentsInPickerView:]: unrecognized selector sent to instance 0x7fe072d64ba0 The textPickerDelegate is local to viewDidLoad. Try moving the declaration one level up, like the IBOutlets. I think you also have a potential problem with your TEXT_?_DATA, see my added comment inside: static let TEXT_1_DATA: Array<Array<String>> = { let text1Data: Array<Array<String>> = [[], ["a", "b"]] for i in 1...100 { var text1 = text1Data[0] as Array<String> // <-- this creates a copy, not a reference text1.append(String(i)) // <-- this appends to the copy, leaving the original intact } return text1Data }() As a quick alternative you can try this: let text1Data = [(1...100).map { String($0) }, ["a", "b"]] Thanks for the tip using the range. When I try to move the delegate out of the method, I get an error that says MyController has no field text1. Right. You need to do something like this: var textPickerDelegate: TextPickerdDelegate! = nil and in the viewDidLoad textPickerDelegate = TextPickerdDelegate(text1: text1, text2: text2, picker: picker) I don't see how this could be the issue. TextPickerDelegate gets allocated on the heap and I should be able to pass references to it around. But somehow it did solve the issue... Gets allocated on the heap and removed from it when it goes out of scope. The delegate is weak - weak var delegate: UITextFieldDelegate?, so it won't keep it alive... 
By changing the scope you made it live longer. You need to add the delegate to your UIViewController like so class ViewController : UIViewController, UIPickerViewDelegate, UIPickerViewDataSource { This is assuming that you want to have a standard UIPickerView I don't want the ViewController to be the delegate though. In my code you can see I try to set the delegate explicitly. My bad. Sorry, I can't help you with that.
How to define the execution order of cucumber Test Cases I want to have two different scenarios in the same feature. The thing is that Scenario 1 needs to be executed before Scenario 2. I have seen that this can be achieved through cucumber Hooks but when digging in the explanations, there's no concrete cucumber implementation in the examples I have found. How can I get Scenario 1 executed before Scenario 2? The feature file is like this: @Events @InsertExhPlan @DelExhPln Feature: Insert an Exh Plan and then delete it @InsertExhPlan Scenario: Add a new ExhPlan Given I login as admin And I go to automated test When I go to ExhPlan section And Insert a new exh plan Then The exh plan is listed @DeleteExhPlan Scenario: Delete an Exh Plan Given I login as admin And Open the automatized tests edition When I go to the exh plan section And The new exh plan is deleted Then The new exhibitor plan is deleted The Hooks file is: package com.barrabes.utilities; import cucumber.api.java.After; import cucumber.api.java.Before; import static com.aura.steps.rest.ParentRestStep.logger; public class Hooks { @Before(order=1) public void beforeScenario(){ logger.info("================This will run before every Scenario================"); } @Before(order=0) public void beforeScenarioStart(){ logger.info("-----------------Start of Scenario-----------------"); } @After(order=0) public void afterScenarioFinish(){ logger.info("-----------------End of Scenario-----------------"); } @After(order=1) public void afterScenario(){ logger.info("================This will run after every Scenario================"); } } The order is now as it should be but I don't see how does the Hooks file control exection order. Pedro, did I answer your question? @hfontanez. Yes, sorry for taking so long. It's good to know that execution order is simpler that I thought. If I answered your question fully and solved your issue, I would appreciate if you click on the checkmark next to my answer. 
;) Yep, already did it... now it marks a +1. ;). Thanks for your answer I appreciate the upvote. Next to the answer, you should see a checkmark. Click on it. That is to indicate that this is the selected "best answer" by the OP (you in this case). You don't use Hooks for that purpose. Hooks are used for code that you need to run before and/or after tests, and/or before and/or after test suites; not to control the order of features and/or scenarios. Cucumber scenarios are executed top to bottom. For the example you showed there, Scenario: Add a new ExhPlan will execute before Scenario: Delete an Exh Plan if you pass the tag @Events in the test runner. Also, you should not have the scenario tags at the feature level. So, you should remove @InsertExhPlan and @DelExhPln at the Feature level. Alternatively, you could pass a comma-separated list of scenario tags to the test runner in the order you want. For example, if you need to run scenario 2 before scenario 1, you would pass the tags for the corresponding scenarios in the order you wish them to be executed. Moreover, you can do this from your CI environment as well. For example, you can have Jenkins jobs that execute the tasks in a specific order by passing the scenario tags in that order. And, if you wish them to run in the default order, you can simply pass the feature tag. As for Hooks, they should be for code that needs to run for all features and scenarios. For specific stuff you need to run for a particular feature, you need to use Background in the Cucumber file. A Background block is run before each scenario in a given feature file.
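A minimal JUnit runner showing where those tags are passed. Treat this as a configuration sketch rather than code from the question: the feature path, glue package, and runner class name are placeholders of mine.

```java
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // hypothetical feature directory
        glue     = "com.example.steps",            // hypothetical step-definition package
        tags     = "@Events"                       // runs the whole feature in file order
)
public class RunCucumberTest { }
```

Passing the feature tag like this executes the scenarios top to bottom as written in the .feature file, which is exactly the insert-then-delete order the question wants.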
replace 32bit with 64bit advantage My first post. Successfully migrated from Win 7 to Ubuntu 12.04.1 on my Samsung netbook; working smoothly. But I had installed the 32-bit version. Is there any advantage of going in for 64-bit? What are the risks? Ram Prasad Depending on your netbook (which?) you may have a 32-bit-only CPU. You won't be able to install and run 64-bit Ubuntu then. Welcome to Ask Ubuntu! Try reading this answer What are the differences between 32-bit and 64-bit, and which should I choose? and this website http://www.howtogeek.com/165144/htg-explains-should-you-use-the-32-bit-or-64-bit-edition-of-ubuntu-linux/ They should answer your question; if not, try googling difference between ubuntu 32 bit and 64 bit. That brought up several good sites.
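Before reinstalling, it's worth checking whether the netbook's CPU can run a 64-bit system at all, as the first comment suggests. A Linux-only sketch (the lm-flag check is x86-specific):

```shell
# Architecture the currently installed system runs as (e.g. x86_64 or i686)
uname -m

# On x86 Linux, the "lm" (long mode) CPU flag means the processor itself
# can run a 64-bit OS, even if a 32-bit one is installed right now.
if grep -qw lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable"
else
    echo "CPU is 32-bit only"
fi
```

If the CPU is 32-bit only, a 64-bit Ubuntu image will not boot, so there is nothing to gain by switching.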
Java chardet that detects iso-8859-2 Is there a Java version of the Python chardet that detects ISO-8859-2? I've tried the Mozilla universalchardet and jchardet and neither worked; they both guessed windows-1252, but the Python chardet that comes with Linux detected it just fine. I have had good experience with IBM's ICU4J for charset detection, with regard to ISO-8859-2 too (http://site.icu-project.org/); it was consistently giving the best (most accurate) results for the files we were using for the tests. I didn't come across a Java version of the Python chardet when doing the research. Thank you, I'll have to take a look at that.
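If pulling in ICU4J isn't an option, one stdlib-only sanity check is the strict decoder in java.nio. This is my own aside rather than something from the thread, and it only *validates* a candidate encoding instead of detecting one — in particular it cannot tell ISO-8859-2 from ISO-8859-1, since both decode every byte — but it quickly rules out encodings like UTF-8:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CodingErrorAction;

public class CharsetCheck {
    // True if the bytes decode without error under the given charset.
    static boolean decodesAs(byte[] data, String charsetName) {
        try {
            Charset.forName(charsetName)
                   .newDecoder()
                   .onMalformedInput(CodingErrorAction.REPORT)
                   .onUnmappableCharacter(CodingErrorAction.REPORT)
                   .decode(ByteBuffer.wrap(data));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        byte[] data = {(byte) 0xB1, (byte) 0xB6};          // "ąś" in ISO-8859-2
        System.out.println(decodesAs(data, "ISO-8859-2")); // true
        System.out.println(decodesAs(data, "UTF-8"));      // false: malformed UTF-8
    }
}
```

A detector such as ICU4J's CharsetDetector then only has to rank the encodings that survive this filter.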
Content Builder Image Tagging Is it possible to tag images in Content Builder? I don't see the option when uploading, and also not after an image has been uploaded. Is that function only available for templates? There is also an Einstein feature that tags your images automatically. https://help.salesforce.com/articleView?id=mc_ceb_einstein_content_tagging.htm&type=5 This is covered in this help doc: Nested Tags IMPORTANT Tagging to raw files like images is unavailable until the January 2020 release. Thank you, just a little longer then. It's weird though, I don't see that line in the doc.
How to remove rows after a particular observation is seen for the first time I have a dataset wherein I have an account number and "days past due" with every observation. For every account number, as soon as the "days past due" column hits a code like "DLQ3", I want to remove the rest of the rows for that account (even if DLQ3 is the first observation for that account). My dataset looks like:
Obs_month Acc_No OS_Bal Days_past_due
201005<PHONE_NUMBER> 3572.68 NORM
201006<PHONE_NUMBER> 4036.78 NORM
200810<PHONE_NUMBER> 39741.97 NORM
200811<PHONE_NUMBER> 38437.54 DLQ3
200812<PHONE_NUMBER> 23923.98 DLQ1
200901<PHONE_NUMBER> 35063.88 NORM
So, for account<PHONE_NUMBER>, I want to remove all the rows after the date 200812, as it is now in default. So in all, I want to see when the account hits DLQ3, and when it does, I want to remove all the rows after the first DLQ3 observation. What I tried was to subset the data with all DLQ3 observations, order the observation month in ascending order, and get a unique list of account numbers which have DLQ3 along with their first month of hitting DLQ3. Post that, I thought I could do some left_join with the original data and use ifelse, but the flow is dicey. Can you indent your dataset? Hard to understand its structure. (highlight data, press command K on Mac) It's indented now.
scan_table <- function(data_frame, due_column, acct_column, due_tag) { for(i in 1:nrow(data_frame)) { if(data_frame[i,c(due_column)] == due_tag) { # remove rows past here, for this account acct_num <- data_frame[i,c(acct_column)] top_frame <- data_frame[1:i,] # cut point sub_frame <- subset(data_frame, Acc_No != acct_num) final_frame <- unique(do.call('rbind', list(top_frame, sub_frame))) return(final_frame) } } } Example: df Usage: scan_table(df, 'Days_past_due', 'Acc_No', 'DLQ3') Let me know if you wanted something different. In the subframe line why is there a change in the name for account number? Have you assumed them to be written differently? The subframe line removes the offending account number from the dataset. Not sure what you mean by changing names. Given your example data <- read.table(text= "Obs_month Acc_No OS_Bal Days_past_due 201005<PHONE_NUMBER> 3572.68 NORM 201006<PHONE_NUMBER> 4036.78 NORM 200810<PHONE_NUMBER> 39741.97 NORM 200811<PHONE_NUMBER> 38437.54 DLQ3 200812<PHONE_NUMBER> 23923.98 DLQ1 200901<PHONE_NUMBER> 35063.88 NORM", stringsAsFactors=F, header=T) I will sort it data <- data[with(data, order(Acc_No, Obs_month)), ] and define a function that allows you to set the code indicating expiry ("DLQ3" or "DLQ1" from your example) sbst <- function(data, pattern){ if( all(data$Days_past_due %in% "NORM") == TRUE){ return(data)} else{ indx <- min(grep(1, match(data$Days_past_due, pattern, nomatch = 0))) data <- data[1:indx,] return(data) } } Finally, apply the function and aggregate the lists of data.frame into final data.frame Reduce(rbind, lapply(split(data, data$Acc_No), sbst, patter="DLQ3")) # Obs_month Acc_No OS_Bal Days_past_due #1<PHONE_NUMBER>000031 3572.68 NORM #2<PHONE_NUMBER>000031 4036.78 NORM #3<PHONE_NUMBER>000049 39741.97 NORM #4<PHONE_NUMBER>000049 38437.54 DLQ3 This way throws an error called : Error in match(data$Days_past_due, pattern, nomatch = 0) : argument "pattern" is missing, with no default Called from: grep(1, 
match(data$Days_past_due, pattern, nomatch = 0)) The pattern declaration was missing; patter="DLQ3", added it, thnx for the notice If I understand your coding right, there must be an error due to my part. There are 4 categories in the days past due, that is: NORM, DLQ1, DLQ2, DLQ3. And the code is taking care of only NORM? I supposed that NORM means OK, indicating non-expiry; hence only the other values like DLQ1, DLQ2, DLQ3 could be an indication of expiry. Is my assumption correct? The point wherein the account number hits DLQ3 for the first time means it is in default. So an account can have all 4 observations, but we need to remove the rows after the one in which it hits DLQ3 for the very first time
Export Sql data to .csv and generate separate .csv files per table based on query Currently, I am using Go to access my database. Ideally, I'd like to generate .csv files based on the table names and export data to those files based on the query. For example, if I ran:
select t1.*,t2.* from table1 t1 inner join table2 t2 on t2.table_1_id = t1.id where t1.linking_id = 22
I'd like a .csv file generated for each of table 1 and table 2, with each table's data exported into the file named after that table. I know in PHP I can use
$fp = fopen(getcwd().'/table1.csv', 'w');
fputcsv($fp, $columns);
to generate the .csv files with the table's row names. But I don't believe Go needs continuous duplication of foreach columns to generate separate .csv files. I would like some guidance on exporting and generating sql data to .csv files. Thank you! My current import set-up is:
import (
    "database/sql"
    _ "github.com/go-sql-driver/mysql"
    "github.com/joho/sqltocsv"
)
I'm able to connect to my database without issue, and query my database when initializing the query via
rows, _ := db.Query(`SELECT * FROM table1 WHERE id = 22`)
I am able to write the query data to a pre-existing .csv file using
err = sqltocsv.WriteFile("results.csv", rows)
It is unclear what you're asking. Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. I'm uncertain how to export data from my database using Go, and then create .csv files based on the specific table data. Iterate the rows with a loop to fetch each row Use package encoding/csv to write a CSV file.
Robustness check in Structural Equation Modelling (SEM) in R I have conducted an SEM analysis in R and used the robust Maximum Likelihood estimator, as my data are categorical and deviate from multivariate normality. When I submitted my manuscript, one reviewer asked for a robustness analysis. I have been searching for such analyses in R but I cannot find anything. I only see resources about robustness checks, for example for endogeneity, unobserved heterogeneity, and non-linear effects in PLS-SEM, mostly in SmartPLS software. I'm confused about whether I need to do a robustness analysis and, if yes, how it can be done in R. I appreciate any input in this regard. I've never heard of this. I'd request more information from the reviewer. Perhaps the reviewer would like to see an analysis of your model with alternative estimation methods for categorical (I assume you mean ordinal = ordered categorical) indicator variables such as, for example, WLSMV/DWLS, to see if such alternative methods would result in similar findings as your robust ML approach.
A subspace of $\mathcal{B}(X;Y)$ isometric to $Y$ Let $X$ and $Y$ be normed spaces. If $\mathcal{B}(X,Y)$ is the space of all continuous linear maps from $X$ to $Y$, I'm trying to prove that $\mathcal{B}(X,Y)$ has a closed subspace isometric to $Y$. Can someone give me just a clue? Hint: On a basis $e_n$ of $X$, $f\in \mathcal B (X,Y)$ is basically (roughly) the same as a specification of $f_n\equiv f(e_n)\in Y$ for all $n$. That is, there is one value in $Y$ for each $n$. You want only one value in $Y$. What set of functions could you consider? Normed spaces do not necessarily have a basis. Well, axiom of choice... But that really isn't the point. This is a motivational hint which leads to working answers. Let $f\in X^{*}$ with $\Vert f\Vert = 1$ (such a functional exists by Hahn–Banach, assuming $X\neq\{0\}$); then the operator $$ T_y: X\to Y,\quad x\mapsto f(x)\,y $$ is linear and $\Vert T_y\Vert=\Vert y\Vert$ for any $y\in Y$.
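Spelling out the computation behind that hint, with $f$ taken as a norm-one functional on $X$ (an assumption justified by Hahn–Banach when $X \neq \{0\}$):

```latex
Fix $f \in X^{*}$ with $\lVert f\rVert = 1$ and set $T_y(x) = f(x)\,y$. Then
\[
  \lVert T_y \rVert
  = \sup_{\lVert x\rVert \le 1} \lVert f(x)\,y \rVert
  = \Bigl(\sup_{\lVert x\rVert \le 1} \lvert f(x)\rvert\Bigr)\,\lVert y\rVert
  = \lVert f\rVert\,\lVert y\rVert
  = \lVert y\rVert ,
\]
so $y \mapsto T_y$ is a linear isometry of $Y$ onto
$M = \{\, T_y : y \in Y \,\} \subseteq \mathcal{B}(X,Y)$.
For closedness, pick $x_0$ with $f(x_0) = 1$ (rescale any $x$ with $f(x) \neq 0$).
If $T_{y_n} \to S$ in operator norm, then
$y_n = T_{y_n}(x_0) \to S(x_0) =: y$ in $Y$, and for every $x$,
$S(x) = \lim_n f(x)\,y_n = f(x)\,y$, i.e.\ $S = T_y \in M$.
```

Note the closedness argument does not require $Y$ to be complete: the limit operator $S$ is pinned down by its value at $x_0$, which already lies in $Y$.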
Can't pop git stash, 'Your local changes to the following files would be overwritten by merge' So I had a load of changes and some untracked files. I needed to tweak something, so I used git stash -u, modified a couple of things, committed those changes, pushed them, and then tried to git stash pop. Because I'd modified a couple of files that I'd stashed, I got the following message: error: Your local changes to the following files would be overwritten by merge: file_1.py file_2.py Please, commit your changes or stash them before you can merge. Aborting This seems odd, I had committed all new changes, my checkout was clean when I ran the command. It seems the git stash pop operation un-stashed half of my changes and the untracked files, but if I try and git stash pop again I get output like: some_file.html already exists, no checkout some_other_file.html already exists, no checkout yet_another_file.html already exists, no checkout Could not restore untracked files from stash git stash show still shows a list of my stashed changes, but I'm at a loss as to what I do now. How can I get myself unstuck? Possible duplicate of Cannot apply stash to working directory Related when there are local changes: git stash -> merge stashed change with current changes I got around this, I think it must have been some kind of bug, as my working directory was clean and up to date. I ran git checkout . and after that git stash apply worked fine, I got everything back no problems at all. I'd be interested to work out what actually caused it to fail though. For the record, doing this still gave the same errors for me even with a clean working directory. This worked for me... emphasis on the . Can someone explain why this works? For the uninformed, the . is an alias for "all non ignored files recursively", not to be confused with *. But it's 5 years ago, so you know this now. @JamieM. I had a clean directory too but I was facing the same error, for me this solution didn't worked. 
I tried a git stash apply --index and it worked; I can't really explain what this arg does, but as per the documentation it will "attempt to recreate the index". For those who do have uncommitted work, and want to pop their stash without losing that work, here is a way (with thanks to @iFreilicht):
1. Temporarily stage any uncommitted changes: git add -u .
2. Now you can apply your stash without git complaining (hopefully): git stash pop
3. Now unstage everything, but leave the files as they are now: git reset
If step 2 couldn't patch cleanly due to conflicting changes, then you will need to resolve the conflicts manually. git diff should help you find them. git mergetool might help by opening your editor with before and current files. Whenever I do this I feel like I'm not using git properly.. Found no alternative though. I managed to find a second way: Instead of committing, run git add <files> for all files that would be overwritten. Then you can pop, and then just unstage again with git reset HEAD. This has to be a bug of some sort. @Joost Why not stash instead of commit, then turn around and drop the top of the stash stack? `git stash save "junk"` `git stash drop` `git stash pop` Very nice @iFreilicht, I have switched to your method instead! LastStar007, we want to keep the current "junk", and pop what is in the stash as well. This is great, thanks. It happens.. too often.. that I stash changes because I need to do urgent work on another branch. Then I go back to the original branch and keep working without popping my stash... It's a mess. This totally solves that problem. @little_birdie Not sure if this helps, but I use a git-aware-prompt and when I check out a branch it will remind me whether there was a stash made on that branch, nudging me to pop again. Ran into a merge conflict...now what The stash that was made with -u needs to have the untracked files cleaned away before being apply-ed (and pop is just apply+drop).
Out of general paranoia I'd mv the untracked files somewhere safe, then git stash apply, check everything carefully, and git stash drop once I'm sure I have it all correct. :-) This doesn't seem to work, I'm starting with a clean working directory with no untracked files, there's nothing to mv out of the way! Hm. Might be a bug in git stash (I've found one before) ... it would be interesting if you can reproduce this, especially with a small example. None of these solutions worked for me. I was using git stash index command to restore a specific stash id. So, I ended up doing a commit of my local changes to local repo. Then git stash index worked for me. And finally I rolled back my commit using git reset (with keep changes). Problem solved.
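For anyone wanting to see the stage-then-pop workaround end to end, here is a self-contained sketch in a throwaway repository (file names and contents are invented for the demo):

```shell
# Throwaway repo walking through the add -u / pop / reset workaround.
dir=$(mktemp -d) && cd "$dir"
git init -q . && git config user.email you@example.com && git config user.name you

printf 'l1\nl2\nl3\nl4\nl5\n' > a.txt
git add a.txt && git commit -qm initial

printf 'l1\nl2\nl3\nl4\nfive\n' > a.txt   # edit we stash away
git stash -q

printf 'ONE\nl2\nl3\nl4\nl5\n' > a.txt    # new uncommitted work in the same file

git stash pop || true   # refused: "Your local changes ... would be overwritten by merge"

git add -u .            # 1. temporarily stage the local changes
git stash pop           # 2. the stash now merges instead of aborting
git reset -q            # 3. unstage everything, keeping both edits

cat a.txt               # both the stashed and the local edits survived
```

The two edits here touch lines far enough apart that the merge is clean; if they overlapped, step 2 would leave conflict markers to resolve with git diff or git mergetool, as described above.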
How to create and drop database, schema PostgreSQL by Java code? I tried:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import com.ibatis.common.jdbc.ScriptRunner;

public static void createDatabase() throws ClassNotFoundException, SQLException {
    Class.forName("org.postgresql.Driver");
    Connection connection = DriverManager.getConnection("jdbc:postgresql://<IP_ADDRESS>:5432/postgres", "postgres", "123456a@");
    Statement stmt = connection.createStatement();
    stmt.executeQuery("CREATE DATABASE IF NOT EXISTS foo");
    stmt.executeQuery("USE foo");
    connection.close();
}
and
public static void dropDatabase() throws ClassNotFoundException, SQLException {
    Class.forName("org.postgresql.Driver");
    Connection connection = DriverManager.getConnection("jdbc:postgresql://<IP_ADDRESS>:5432/", "postgres", "123456a@");
    Statement statement = connection.createStatement();
    statement.executeUpdate("DROP DATABASE foo");
    connection.close();
}
but neither the create nor the drop method succeeds.
Error when call create method: Exception in thread "main" org.postgresql.util.PSQLException: ERROR: syntax error at or near "NOT" Position: 20 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2453) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2153) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:286) at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:432) at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:358) at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:305) at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:291) at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:269) at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:236) at com.nttdata.RunSqlScript.createDatabase(RunSqlScript.java:57) at com.nttdata.RunSqlScript.main(RunSqlScript.java:27) Error when call drop method: Exception in thread "main" org.postgresql.util.PSQLException: ERROR: database "foo" is being accessed by other users Detail: There is 1 other session using the database. 
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2453) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2153) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:286) at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:432) at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:358) at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:305) at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:291) at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:269) at org.postgresql.jdbc.PgStatement.executeUpdate(PgStatement.java:249) at com.nttdata.RunSqlScript.dropDatabase(RunSqlScript.java:71) at com.nttdata.RunSqlScript.main(RunSqlScript.java:28) Are you open to using liquibase or other utilities ? I must manipulating PostgreSQL from Java, I willingness use 3th party tools Where in the Postgres manual did you find the USE command or the IF NOT EXISTS for DROP DATABASE? Firstly, the SQL syntax used while creating a database is incorrect in your question. The stack trace says it all about the incorrect syntax. If you want to check whether the database exists or not, then you might have to do something like this in your Java code: ResultSet rs = stmt.executeQuery("select datname from pg_database where datname like 'foo';"); not by the IF NOT EXISTS approach Accessing this rs object will let you know whether the database exists or not. Then you can fire either your CREATE or DELETE database operations accordingly. String databaseName = ""; if(rs.next()) { databaseName = rs.getString("datname"); } stmt.executeQuery("DROP DATABASE " + databaseName); If a direct DROP DATABASE doesn't work (which I had faced a lot many times), you might consider using the dropdb utility or by one of the following approaches. 
APPROACH-1 Use the following query to prevent future connections to your database(s):
REVOKE CONNECT ON DATABASE foo FROM public;
You can then terminate all connections to this database except your own:
SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'foo' AND pid <> pg_backend_pid();
Since you have revoked the CONNECT rights on that database, external auto-connects will no longer be able to get in. You'll now be able to drop the database without any issues. APPROACH-2: This approach uses a batch job, where you invoke psql from your code:
Process batchProcess = Runtime.getRuntime().exec("C:/Program Files/PostgreSQL/9.5/bin/psql -h \"DB SERVER ADDRESS\" -U postgres -f C:/batch.sql");
batch.sql holds the SQL DROP DATABASE statements, which are executed when the job runs. Hope this helps! Exception in thread "main" org.postgresql.util.PSQLException: ResultSet not positioned properly, perhaps you need to call next. The obvious thing is to check whether the ResultSet contains anything or not using rs.next(); updated the answer. Please check! Using Process is a nice idea. C:/Program Files/PostgreSQL/9.5/bin/psql -h \"DB SERVER ADDRESS\" -U postgres -f C:/batch.sql how to add a password? Aren't you able to DROP the database with a normal DROP DATABASE command? The approach of calling the utility directly goes by a batch job! It will prompt for the password then! Oh, no. It must run automatically. @DoNhuVy, Updated my answer based on your question! Please check! There is no difference in dropping the database through JDBC or through the command line.
In both cases you can only drop it if no connection is open to that database, and you can't drop the database you are connected to. Understood @a_horse_with_no_name, I was clarifying the syntactical error that OP has in his question. Thank you for clarifying this to the rest of the world! You wrote: "If a direct DROP DATABASE doesn't work (which I had faced a lot many times), you might consider dropping using the command prompt, which is simple enough anyway" - I was referring to that. Yeah, with all respect to your comment, there are situations where DROP DATABASE doesn't drop the database due to active connections to it, right. I was suggesting the utility / batch job approach, and OP wanted it to be automated.

An option that you can try is to use a database migration tool like liquibase. There are a couple of options that you can try from liquibase. One is to have an executable run directly from the code: you first create a database changelog file with change sets, and one of the commands in the change sets will be an executable:

    <changeSet author="exec-change-drop" id="drop-foo">
        <executeCommand executable="<bat file with drop for PSQL or dropdb>"/>
    </changeSet>

Another option that you can try is to write a SQL statement and call it:

    <changeSet id="exec-change-drop2" author="drop-foo-2">
        <sql>DROP DATABASE foo;</sql>
    </changeSet>

You can then execute this from your code as follows:

    Class.forName("org.postgresql.Driver");
    Connection connection = DriverManager.getConnection("jdbc:postgresql://<IP_ADDRESS>:5432/postgres", "postgres", "123456a@");
    Database database = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(new JdbcConnection(connection));
    Liquibase liquibase = new liquibase.Liquibase("path/to/changelog.xml", new ClassLoaderResourceAccessor(), database);
    liquibase.update(new Contexts(), new LabelExpression());

Note that your changelog may need to be in a different schema so that it executes seamlessly.
Additionally, liquibase can be added with Maven (this was the way it was supposed to be used) and executed as well. Thank you for your suggestion! I hope to have a chance to try your approach, and will then report back on the answer.
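On the password-prompt problem raised against APPROACH-2: psql can pick up credentials non-interactively from the environment or from a pgpass file, so the batch job can run unattended. This is a sketch using the placeholder host, password, and paths from the thread, not tested values:

```shell
# Option 1: environment variable read by psql (set it just for this run)
export PGPASSWORD='123456a@'
"/c/Program Files/PostgreSQL/9.5/bin/psql" -h DB_SERVER_ADDRESS -U postgres -f /c/batch.sql

# Option 2: a pgpass file (%APPDATA%\postgresql\pgpass.conf on Windows,
# ~/.pgpass elsewhere), one line per server in the documented format:
#   hostname:port:database:username:password
#   DB_SERVER_ADDRESS:5432:*:postgres:123456a@
```

From Java, the same effect can be had by passing an environment array to Runtime.exec, or by writing the pgpass file once.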
common-pile/stackexchange_filtered
Split IEnumerable by predicate

In order to divide an IEnumerable in C# by a predicate, I implemented the following (in a static class):

    public static IEnumerable<IEnumerable<T>> SplitBeforeIf<T>(this IEnumerable<T> input, Func<T, bool> pred)
    {
        //This exists to store the predicate satisfier, which I kept as
        //the first T in the inner IEnumerable (except, maybe the first one).
        T predSatis = default(T);
        while(input.Any())
        {
            var temp = input.TakeWhile(v => !pred(v));
            //I was told that the below is the best test for default-ness.
            //(I assume that pred(default(T)) is always false.)
            if(predSatis == null || predSatis.Equals(default(T)))
                yield return temp;
            else
                yield return (new List<T>{predSatis}).Concat(temp);
            input = input.SkipWhile(v => !pred(v));
            predSatis = input.FirstOrDefault();
            input = input.Skip(1);
        }
        if(predSatis != null && !predSatis.Equals(default(T)))
            yield return new List<T>{predSatis};
    }

This returns as follows:

    new List<int>{1,2,3,4,5}.SplitBeforeIf(v => v % 2 == 0) gives {{1}, {2,3}, {4,5}}
    new List<int>{6,1,2,3,4,5}.SplitBeforeIf(v => v % 2 == 0) gives {{}, {6, 1}, {2,3}, {4, 5}}
    new List<int>{1,2,3,4,5,6}.SplitBeforeIf(v => v % 2 == 0) gives {{1}, {2,3}, {4,5}, {6}}.

I have 3 questions (besides any other comments on the code): I see that, despite my best efforts, this is \$O(n^2)\$. Why? Can this be improved for a generic IEnumerable? This does not support a stateful predicate or a stateful IEnumerable input. Can this be remedied? This looks awfully like Clojure's partition-by. How does Clojure avoid the first issue (it may also have the second, but functional programming dislikes state anyway)? Being a little pedantic about these things, I also would prefer solutions that don't accumulate values into another data structure (just to see if full laziness is possible).
your implementation doesn't give you the results you are expecting. 1, 2, 3, 4, 5 => {1}{2,3}{4,5} ; 0, 1, 2, 3, 4, 5 => {}{1}{2,3}{4,5} ; 1, 2, 3, 4, 5, 6 => {1}{2,3}{4,5}{6}; 2, 3, 4, 5 => {}{2,3}{4,5} If you fear your algorithm is O(N^2) in complexity, just run it with input size doubling and see if the runtime doubles or quadruples. If it doubles it is O(n), if it quadruples it is O(n^2). @Gareth: Yes, you're right. I'll edit that... @Caridorc: I tested it a few times and it does seem to be n^2, I just want to understand why. (I'll edit to reflect this.) @HemanGandhi - run through your algorithm with an input where every element satisfies the predicate -> the while loop will run n times with a bunch of O(n) operations inside, giving you O(n^2). Full laziness may be possible, but it's certainly not going to be maintainable. You would have a situation in which both the "outer" and the latest "inner" enumerators can advance the "source" enumerator, and if the outer one advances it then it needs to cache the items it advances over so that the inner one can return them later. @PeterTaylor Additionally, supporting the case where an inner enumerable is enumerated multiple times would cause problems; you'd need to memoize the inner enumerables, even if you compute them lazily, to make sure it works properly. Of course, if the inner enumerables could be very large, or the items themselves have a large memory footprint, then just materializing the whole group may not be an option, however much it makes the code much easier. Did you get a comparison of clojure? @Gareth, I tried my own clojure thing and I think it lacks the first of the above pitfalls. So I thought I'd try this out with a test enumerable.
public class NumberValues : IEnumerable<int> { public NumberValues(int startValue, int endValue) { _endValue = endValue; _startValue = startValue; counter = 0; } public NumberValues(int endValue) : this(0, 10) { } public int counter { get; set; } public IEnumerator<int> GetEnumerator() { var iterator = _startValue; while (iterator < _endValue) { Trace.Write(iterator + ","); iterator++; counter++; yield return iterator; } } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } } followed closely by [TestCase(0,5)] [TestCase(1,5)] [TestCase(0,6)] [TestCase(1,6)] [TestCase(0,10)] [TestCase(1,10)] public void TestSkipBeforeIf(int start, int end) { var numberValue = new NumberValues(start,end); numberValue.SplitBeforeIf(i => i%2 == 0).ToList(); } which results in the following output. 0,5 > 0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,0,1,2,3,4, - 25 1,5 > 1,1,1,2,1,2,3,1,2,3,4,1,2,3,4,1,2,3,4, - 19 0,6 > 0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5, - 27 1,6 > 1,1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,1,2,3,4,5, - 21 0,10 > 0,0,1,0,1,2,0,1,2,3,0,1,2,3,4,0,1,2,3,4,5,0,1,2,3,4,5,6,0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,8,0,1,2,3,4,5,6,7,8,9,0,1,2,3,4,5,6,7,8,9, - 65 1,10 > 1,1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,1,2,3,4,5,6,1,2,3,4,5,6,7,1,2,3,4,5,6,7,8,1,2,3,,4,5,6,7,8,9,1,2,3,4,5,6,7,8,9, - 55 0,20 > 230 0,40 > 860 0,100 > 5150 But why? while (input.Any()) { var temp = input.TakeWhile(v => !pred(v)); if (predSatis == null || predSatis.Equals(default(T))) yield return temp; else yield return (new List<T> { predSatis }).Concat(temp); input = input.SkipWhile(v => !pred(v)); predSatis = input.FirstOrDefault(); input = input.Skip(1); } if you evaluate your 2 lines input = input.SkipWhile(v => !pred(v)); predSatis = input.FirstOrDefault(); over iterations of your while loop, it is necessary to expand the input variable for each iteration. 
input.SkipWhile(pred).FirstOrDefault input.SkipWhile(pred).SkipWhile(pred).FirstOrDefault input.SkipWhile(pred).SkipWhile(pred).SkipWhile(pred).FirstOrDefault input.SkipWhile(pred).SkipWhile(pred).SkipWhile(pred).SkipWhile(pred).FirstOrDefault having seen this, we now consider the evaluation of input.SkipWhile(!pred) so over 100 iterations pred = x => x % 2 == 0 : 5150 pred = x => x % 3 == 0 : 3600 pred = x => x % 4 == 0 : 2625 pred = x => x % 5 == 0 : 2120 pred = x => x % 6 == 0 : 1849 pred = x => x % 15 == 0 : 837 pred = x => x % 20 == 0 : 605 pred = x => x % 25 == 0 : 504 pred = x => x % 30 == 0 : 564 pred = x => x % 30 == 0 : 413 Your functions behaviour is therefore dependent not just on the size of your data, but on the form of your predicate. For your own analysis, this is what I used for evaluating the predicate [TestCase(1,5000,10)] public void TestSkipBeforeIf(int start, int end, int mod) { var numberValue = new NumberValues(start,end); numberValue.SplitBeforeIf(i => i%mod == 0).ToList(); } I'm not clear what you meant by stateful predicate or IEnumerable, however in terms of the IEnumerable I have maintained a count as state. Similarly I could maintain a count for the pred, or add some bizarre checking behaviour. Func<object, int, bool> predStateful = (y,x) => { if (x == y.Check(x)) { return false; } return x%mod == 0; }; Func<int, bool> pred = x => predStateful(MyCheckingObject,x); numberValue.SplitBeforeIf(pred).ToList(); Thanks for the interesting question. EDIT: Modified Chunk function using a predicate. 
original public static IEnumerable<IEnumerable<TValue>> Chunk<TValue>( this IEnumerable<TValue> values, Func<TValue,bool> pred) { using (var enumerator = values.GetEnumerator()) { while (enumerator.MoveNext()) { yield return GetChunk(enumerator, pred).ToList(); } } } private static IEnumerable<T> GetChunk<T>( IEnumerator<T> enumerator, Func<T,bool> pred) { do { yield return enumerator.Current; } while (!pred(enumerator.Current) && enumerator.MoveNext()); } Thank you very much for the interesting points! I meant a similar thing by "stateful predicate" as you've seen for the "stateful IEnumerable" - if the predicate captures variables (see https://blogs.msdn.microsoft.com/matt/2008/03/01/understanding-variable-capturing-in-c/). here is a function that returns lazily http://stackoverflow.com/questions/12389203/how-do-i-chunk-an-enumerable I like the idea behind that SO thread. Unfortunately, I have to use the predicate. the predicate is easy to add, by just replacing int size with Func<int,bool> pred I think the issue is that the chunk size is fixed and I'd like the predicate on the value type of the IEnumerable. Either way, I think my code turns out to be much like http://stackoverflow.com/a/23652544/5292630 I added an edit to show you what I meant. see above under EDIT: Modified Chunk function using a predicate I can't really answer your questions but I can show you a simpler (easier to read) way to achieve the same result : public static IEnumerable<IEnumerable<T>> SplitBeforeIf<T> ( this IEnumerable<T> source, Func<T, bool> predicate) { var temp = new List<T> (); foreach (var item in source) if (predicate (item)) { if (temp.Any ()) yield return temp; temp = new List<T> { item }; } else temp.Add (item); yield return temp; } Edit If you want something with no intermediate data structure and with O(n) complexity, I think this could do the job. 
Note that it's a bit convoluted ; I tried to make a class with a state machine like the ones used internally by Linq [WhereEnumerableIterator for example] but without success It could be "simpler" (all state contained in one single method) with VB.Net because it has lambda Iterator static class Extensions { private enum EnumerationState { Ended = -1, Started = 0, Enumerating } private static EnumerationState currentState; public static IEnumerable<IEnumerable<T>> SplitBeforeIf<T> ( this IEnumerable<T> source, Func<T, bool> predicate) { using (var enumerator = source.GetEnumerator ()) { currentState = EnumerationState.Started; var counter = 0; while (currentState != EnumerationState.Ended) { ++counter; yield return SplitBeforeIfInternal (enumerator, predicate, counter); } } } private static IEnumerable<T> SplitBeforeIfInternal<T> (IEnumerator<T> enumerator, Func<T, bool> predicate, int amountToSkip) { while (amountToSkip > 0) { --amountToSkip; if (currentState == EnumerationState.Enumerating && amountToSkip == 0) yield return enumerator.Current; bool hasMoved; while ((hasMoved = enumerator.MoveNext ()) && !predicate (enumerator.Current)) if (amountToSkip == 0) yield return enumerator.Current; currentState = hasMoved ? 
EnumerationState.Enumerating : EnumerationState.Ended; } } } I join the VB.Net equivalent just for informatory purpose : Module Extensions Private Enum EnumerationState Ended = -1 Started = 0 Enumerating End Enum <Runtime.CompilerServices.Extension> Iterator Function SplitBeforeIf(Of T)(source As IEnumerable(Of T), predicate As Func(Of T, Boolean)) As IEnumerable(Of IEnumerable(Of T)) Using e = source.GetEnumerator Dim state = EnumerationState.Started Dim amountToSkip = 0 While state <> EnumerationState.Ended amountToSkip += 1 Yield Iterator Function() While amountToSkip > 0 amountToSkip -= 1 If state = EnumerationState.Enumerating AndAlso amountToSkip = 0 Then Yield e.Current Dim hasMoved = e.MoveNext While hasMoved AndAlso Not predicate(e.Current) If amountToSkip = 0 Then Yield e.Current hasMoved = e.MoveNext End While state = If(hasMoved, EnumerationState.Enumerating, EnumerationState.Ended) End Function() End While End Using End Function End Module This was one of my implementations. I just wanted to force myself to use lazy sequences and IEnumerables instead of having a List. (Sorry for not mentioning that.) Technically, I think this fixes question 1 everywhere and 2 for a stateful predicate. @HemanGandhi having a backing array (what List essentially is) does not prevent it from being lazy. yield is what makes it lazy. @AbdulRahmanSibahi My point was that I wanted the inner lists to also be lazy. I really like this idea. Unfortunately, it seems that unless you enumerate the previous inner values, you won't progress through the IEnumerable. (ie. SplitBeforeIf().ElementAt(n)) returns the same thing for any n.) I wonder if this can be altered? @HemanGandhi indeed... something I neglected during tests ; I've updated the code to add a counter incremented in the "outer loop" and decremented in the "inner" one to reach the good point. Not ideal but that do the job (I think) Kudos for source and predicate parameter names. 
I'd make Func<T, bool> a plain Predicate<T> delegate though. @Mat'sMug I chose Func only because it's what the other "Linq" methods do too ;) Your use of brackets in that first code snippet is just evil. @Shelby115 most of the languages I use the most don't use brackets (VB.Net) and/or rely mainly on indentation to identify blocks (python, F#/OCaml), so I tend to use them only when strictly required.

Because it is a public method you should validate the arguments, at least against null, but according to Ben Aaronson's comment You shouldn't do guard checks like this inside an iterator method because they'll only get checked when the code starts iterating, rather than when the method is called. you should then call a separate method where the enumeration will take place. Omitting braces {}, although they might be optional, can lead to serious bugs. I would like to encourage you to always use them to make your code less error prone and more readable. Names like predSatis, which need comments like to store the predicate satisfier, are a sign that the variables are named badly. If you feel you need a comment to explain a variable, you should just name it well; although it will involve more typing, the readability of the code increases a lot. If you had named this predicateSatisfier it would have been clear what it is about. That is true for the pred parameter as well. if(predSatis == null || predSatis.Equals(default(T))) - well, the first check is superfluous because for value types it won't do anything and for reference types default(T) equals null. So better just use if (predSatis.Equals(default(T))). Linq methods are a great way to reduce typing and a lot of devs think it is easier to read. But using 5 different Linq methods to do this job, of which 2 are using the same predicate, is over the top IMO.
Using just a plain foreach with the support of List<T> will run in O(N) public static IEnumerable<IEnumerable<T>> SplitBeforeIf<T>(this IEnumerable<T> input, Func<T, bool> predicate) { if (predicate == null) { throw new ArgumentNullException("predicate"); } return SafeSplitBeforeIf(input, predicate); } private static IEnumerable<IEnumerable<T>> SafeSplitBeforeIf<T>(this IEnumerable<T> input, Func<T, bool> predicate) { var result = new List<T>(); foreach (var item in input) { if (predicate(item)) { yield return result; result.Clear(); } result.Add(item); } if (result.Count > 0) { yield return result; } } I had said that I didn't want a List in order to make the inner lists lazy as well (a part of my pedantry). Also, I compared against null because I was getting some sort of null pointer exception without it. @HemanGandhi you are right about the null check. I have edited my answer to address the lazyness of the inner enumerables as well. @Heslacher You're not actually lazily computing the inner values. You're eagerly generating all of them, you're just creating a sequence that's super inefficient to enumerate with all of the nested concats and by wrapping every single item in an array. Your solution is just strictly worse than if you had used a list. @Servy its worse than that it throws an StackOverflowException. I rolled it back to the version using a List<T>. @Heslacher it's always better then the CodeOverviewException ;-P You shouldn't do guard checks like this inside an iterator method because they'll only get checked when the code starts iterating, rather than when the method is called. MS splits LINQ methods for this reason: http://referencesource.microsoft.com/#System.Core/System/Linq/Enumerable.cs,577032c8811e20d3
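The O(n) answers above all share one idea: advance a single shared iterator exactly once per element instead of re-scanning the source with chained SkipWhile calls, which is also how Clojure's partition-by stays linear. A language-neutral sketch of that idea in Python (the names split_before_if and pred are mine, and like the foreach-based answer above this drops the leading empty group and materializes each inner group as a list):

```python
def split_before_if(iterable, pred):
    """Yield groups of consecutive items, starting a new group before
    each item that satisfies pred. Each element is visited exactly
    once, so the whole pass is O(n)."""
    group = []
    for item in iterable:       # one shared pass; nothing is re-enumerated
        if pred(item) and group:
            yield group         # hand out the finished group
            group = []
        group.append(item)
    if group:
        yield group             # trailing group, if any

print(list(split_before_if([1, 2, 3, 4, 5], lambda v: v % 2 == 0)))
# -> [[1], [2, 3], [4, 5]]
```

The outer generator is lazy, but the inner groups are lists: as the comments above note, keeping the inner sequences lazy too forces the outer and inner enumerators to share (and cache) the same source position, which is where the maintainability cost comes in.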
Hawking radiation for closely orbiting black holes Suppose we have two black holes of radius $R_b$ orbiting at a distance $R_r$. I believe semi-classical approximations describe correctly the case where $R_r$ is much larger than the average black body radiation wavelength due to Hawking radiation. Do we have approximations for Hawking radiation temperature where the distance $R_r$ is of the same order, or in the case where it is much shorter than the radiation average wavelength? In the absence of a concrete analysis for either one, Do we have any physical insight to affirm if Hawking radiation will be either inhibited or increased in the above situations? I guess the radiation would be inhibited, since the black holes absorb radiation from each other, thus lose mass and hawking-radiate more slowly. remember the distance between the BHs is the same order or smaller than the wavelength of the radiation. There might be nontrivial boundary effects that qualitative change the Bogoliubov transformation Just a guess. I'm not bothering to do any serious mathematics so I can't prove anything. What exactly do you mean by at a distance $R_r$? @MBN Read the first sentence of the question. I did, that's why I am asking. imagine both black holes orbiting each other like a binary system, separated by a distance $R_r$. What are you unsure about? What exactly is the geometry of this space-time? Or are thinking classically of two black holes orbiting in three dimensional Euclidean space? far away of the horizons spacetime can be accurately described by Newtonian gravity in flat Minkowski spacetime, so, yes I think that the answer can be made with generally accepted thermodynamics. The small one loses mass to the larger one. Since the proximity lowers the gravitational barrier between them, this will happen faster than if they were sufficiently separated. 
Mass loss of the two combined to space will be decreased because they contribute to the gravity well of each other, making the event horizons larger. NOTE: I should qualify I don't understand the role kinetic energy plays in this. Note that it is virtually impossible to unambiguously define distances in extremely non-stationary GR situations (as this clearly would be). Even the term of the horizon is ill defined and it is really not adequate to specify the horizon distances up to thermal wavelengths. Furthermore, thermodynamical equilibrium requires at least (quasi-)stationarity so any TD hand-waiving is useless here. The last thing that should be noted is that when typical gravitational lengths are comparable with radiation lengths, interference starts to play a role. This problem would be very difficult. Obviously for it to be physically relevant this would have to pertain to two small black holes in a mutual orbit. Large astrophysical black holes are are at temperature $6.1\times 10^{-8}K(M/M_{sol})$ and Hawking radiation is insignificant, General relativity can only solve the orbit of a test mass. The general two body problem is not integrable. Hence the motion of two black holes in a mutual orbit is not solved by exact means, but must be numerically evaluated. Then if you throw Hawking radiation into the picture it becomes complicated. Geroch looked at the issue of the shape of a black hole horizon due to external distributions of matter. This was way back in the 1970s. Two mutually interacting black holes will also distort their horizons. In a sense the more curvature there is to the horizon per unit area the more radiation there will be. Think analogously with a flat radiating surface and one with a rough surface, or why fins and the rest are used in units to dissipate heat. The gravity field near a horizon area that is curved will have a larger gradient to the gravity field. 
The surface gravity $g^2~=~\nabla^a\xi_b\nabla_a\xi^b$ will be larger if the Killing fields have a greater divergence. You would then have two competing processes. The first is that by distorting the horizons of the black holes the Hawking radiation might actually increase in rate. However, the two black holes will radiate some of that radiation at each other. How that detailed balance works out is hard to conjecture on. The interesting form of radiation here is not Hawking radiation, but gravitational wave radiation. For astrophysically-sized black holes, the Hawking radiation is completely negligible relative to other processes. For example, for a solar mass sized Schwarzschild black hole, the black hole radiates like a black body at 60 nano Kelvins (far below the background CMB temperature). Certainly the calculation of the Hawking effect in this more complicated setting will be more difficult, but of course the Hawking radiation won't suddenly become important once you have two black holes that are orbiting each other closely. Hawking radiation is (in part) electromagnetic waves; gravitational radiation is gravity waves. They're different. Why is one more interesting than the other? By interesting I mean physically dominant, in the sense that the physical effects of the Hawking radiation will be swamped by the effects of the gravitational radiation.
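For reference, the nano-Kelvin figures quoted above come from the standard Hawking temperature formula for a Schwarzschild black hole (a textbook result, stated here for convenience):

```latex
T_H \;=\; \frac{\hbar c^{3}}{8\pi G M k_{B}}
\;\approx\; 6.2\times 10^{-8}\,\mathrm{K}\,\frac{M_{\odot}}{M}.
```

Since $T_H \propto 1/M$, the thermal peak wavelength of the radiation scales like $M$, i.e. it is of the order of the Schwarzschild radius up to a numerical factor; this is why the regime in the question, $R_r$ comparable to the radiation wavelength, is essentially the regime $R_r \sim R_b$ of nearly merging holes.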
Show that $H_x$ is a group for all $x.$ $H_{x}$ denotes the class of $x$ for the Green relation $\mathcal{H}$. Let $S$ be a finite semigroup where all elements can be written as a product of idempotents, that is, $x=e_1 e_2\dots e_n,$ for idempotents $e_1,e_2,\dots, e_n \in S$ for any $x \in S.$ Assume also that $H_{ef}$ is a group for all idempotents $e,f \in S$ Show that $H_x$ is a group for all $x \in S .$ whats green relation Green's relations What is the origin of this question? $ \newcommand{\oH}{\mathop{\cal H}\nolimits} \newcommand{\oL}{\mathop{\cal L}\nolimits} \newcommand{\oR}{\mathop{\cal R}\nolimits} $ Since $S$ is finite, there exists a positive integer $\omega$ such that, for all $s \in S$, $s^\omega$ is idempotent. Let us prove by induction on $n$ that if $x$ is a product of $n$ idempotents, then $H_x$ is a group, that is, $x \oH x^\omega$, or, equivalently $x^{\omega+1} = x$. For $n = 1$ et $n = 2$, this follows from the hypothesis (for $n= 1$, it suffices to take $e = f$). Let $n \geqslant 2$ and let $x = e_1e_2 \dotsm e_ne_{n+1}$ be a product of $n+1$ idempotents. By the induction hypothesis, $e_1e_2 \dotsm e_n \oH\, (e_1e_2 \dotsm e_n)^\omega$ and $e_ne_{n+1} \oH\, (e_ne_{n+1})^\omega$. I claim that the following diagram is part of the $\cal D$-class of $x$ (as usual, a star means that the corresponding $\cal H$-class is a group). \begin{array}{|c|c|} \hline x = e_1e_2 \dotsm e_{n-1}e_ne_{n+1} &{}^*\ u = e_1e_2 \dotsm e_{n-1}(e_ne_{n+1})^\omega \\ \hline {}^*\ v = (e_1e_2 \dotsm e_n)^\omega e_{n+1} &{}^*\ w = (e_1e_2 \dotsm e_n)^\omega (e_ne_{n+1})^\omega \\ \hline \end{array} Indeed, by the induction hypothesis, $e_1e_2 \dotsm e_n \oH\ (e_1e_2 \dotsm e_n)^\omega$ and thus $e_1e_2 \dotsm e_ne_{n+1} \oL\ (e_1e_2 \dotsm e_n)^\omega e_{n+1}$. 
Since $n \geqslant 2$, the induction hypothesis also implies that $e_ne_{n+1} \oH\ (e_ne_{n+1})^\omega$, whence $e_1e_2 \dotsm e_{n-1}e_ne_{n+1} \oR e_1e_2 \dotsm e_{n-1}(e_ne_{n+1})^\omega$ and $(e_1e_2 \dotsm e_n)^\omega e_{n+1} = (e_1e_2 \dotsm e_n)^\omega e_ne_{n+1} \oR (e_1e_2 \dotsm e_n)^\omega (e_ne_{n+1})^\omega$. The "stars" occur because the elements $v$ and $w$ are the product of two idempotents and $u$ is the product of $n$ idempotents. Now, since $R(v) \cap L(u)$ contains an idempotent, the product $s = u^\omega v^\omega$ belongs to $H_x$. Thus $H_x = H_s$, but since $s$ is the product of two idempotents, $H_s$ is a group. Thus $H_x$ is a group.
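The equivalence invoked throughout, that $H_x$ is a group exactly when $x^{\omega+1} = x$, is the finite-semigroup form of Green's theorem; for reference:

```latex
H_x \text{ is a group}
\;\Longleftrightarrow\; H_x \text{ contains an idempotent}
\;\Longleftrightarrow\; x \mathrel{\mathcal{H}} x^{\omega}
\;\Longleftrightarrow\; x^{\omega+1} = x .
```

In particular, an $\mathcal{H}$-class containing the product of an $\mathcal{R}$-class and an $\mathcal{L}$-class meeting in an idempotent is a group, which is the fact used in the last step $s = u^\omega v^\omega \in H_x$.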
CORS error while calling the unofficial Airbnb API

I have been trying to use the Airbnb API from this link. I know it is not an official API, but I really need to use it in my website. But I am getting a CORS (Cross-Origin Resource Sharing) error and I cannot call the API. When I tried from hurl.it it was working absolutely fine. But now I am not sure how to correct this. I am using Ruby on Rails on the backend and jQuery Ajax to call the API. My Ajax call is below. I am creating an empty listing as a host in Airbnb from this code.

    $("#submit_apartments").on("click", function() {
      $.ajax({
        type: "POST",
        url: "https://api.airbnb.com/v1/listings/create?client_id=3092nxybyb0otqw18e8nh5nty",
        headers: {
          "Content-Type": "application/json; charset=UTF-8",
          "X-Airbnb-OAuth-Token": "myAuthToken"
        },
        data: {
          room_type_category: "private_room",
          property_type_id: 2,
          bathrooms: 1,
          person_capacity: 1,
          beds: 1,
          bedrooms: 1,
          city: "Sunnyvale, California, US"
        },
        success: function() {
          console.log("Success");
        },
        error: function(xhr, err) {
          console.log("Error!!!");
        }
      });
    });

To use the Airbnb API from client-side JavaScript in a web app, you'll either need to set up your own CORS proxy using code from https://github.com/cyu/rack-cors or similar, or you can send your request through an open CORS proxy such as https://cors-anywhere.herokuapp.com/:

    $.ajax({
      type: "POST",
      url: "https://cors-anywhere.herokuapp.com/https://api.airbnb.com/v1/listings/create?client_id=3092nxybyb0otqw18e8nh5nty",
      …
    }

The CORS proxy will send the request to the Airbnb API endpoint, and when it gets a response from Airbnb, it will add the Access-Control-Allow-Origin response header and all the other needed CORS headers to the response it passes on to the browser.
However, note that you probably really don’t want to use a third-party open proxy to send requests to any logged-in endpoint that requires a X-Airbnb-OAuth-Token access token—because the owner of the proxy would be able to see your X-Airbnb-OAuth-Token value and reuse it. So you really should set up your own proxy instead, using https://github.com/cyu/rack-cors or such. Making the request through a proxy like that is the only way that will work, because the Airbnb API itself doesn’t send the Access-Control-Allow-Origin response header required by the CORS protocol, and also doesn’t seem to support the alternative of allowing you to specify a callback name so that you can get JSONP-formatted responses. So that means there’s no way to call the Airbnb API directly from client-side JavaScript running in a web app, because the browser won’t allow your client-side JS code to access the response at all. To confirm that, try a request to any endpoint URL for its API and look at the response headers:

    curl -i -H "Origin: http://example.com" \
        "https://api.airbnb.com/v2/search_results?client_id=3092nxybyb0otqw18e8nh5nty"

Response headers you’ll get back:

    HTTP/1.1 200 OK
    Server: nginx/1.7.12
    Content-Type: application/json; charset=utf-8
    Status: 200 OK
    Content-Security-Policy: default-src 'self' https:; connect-src 'self' https: ws://localhost.airbnb.com:8888 http:; font-src 'self' data: *.muscache.com fonts.gstatic.com use.typekit.net; frame-src *; img-src 'self' https: http: data:; media-src 'self' https:; object-src 'self' https:; script-src 'self' https: 'unsafe-eval' 'unsafe-inline' http:; style-src 'self' https: 'unsafe-inline' http:; report-uri /tracking/csp?action=index&controller=v2&req_uuid=cd9b2eb5-5014-4587-8f6b-144c800b6d7b&version=b11f4837d2aaab4f25311eaabfd788770abc5557;
    X-Content-Type-Options: nosniff
    X-XSS-Protection: 1; mode=block
    X-Frame-Options: SAMEORIGIN
    Cache-Control: no-store, max-age=0, private, must-revalidate
    ETag: W/"1dc7df77adc42a864c6a7a6806a68a6f"
    X-UA-Compatible: IE=Edge,chrome=1
    Strict-Transport-Security: max-age=10886400; includeSubdomains
    Date: Sun, 26 Mar 2017 03:31:08 GMT
    Transfer-Encoding: chunked
    Connection: keep-alive
    Connection: Transfer-Encoding

Notice there’s no Access-Control-Allow-Origin response header there. https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS has more CORS info.
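Since the question mentions a Ruby on Rails backend, the "set up your own proxy" route is mostly configuration. Below is a sketch of the CORS side using the rack-cors gem; the origin and the /airbnb_proxy route are hypothetical names, and the controller that actually forwards the request to Airbnb (keeping the OAuth token server-side, away from the browser) is left out:

```ruby
# config/initializers/cors.rb -- a sketch, not a drop-in file
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'https://your-site.example'   # your front-end origin (placeholder)
    resource '/airbnb_proxy/*',           # hypothetical proxy route
             headers: :any,
             methods: [:get, :post]
  end
end
```

The front-end then posts to /airbnb_proxy/... on your own domain, and the Rails action relays the request to api.airbnb.com, so the browser only ever sees a same-origin-friendly response.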
Puppeteer freezing w/ multiple URLs, & skip any URL that times out and move to the next in my list

I have a Puppeteer script where it iterates through a list of URLs saved in urls.txt to scrape. I have 2 issues: If one of the URLs in the list times out, it stops the whole process. I would like it to skip any URLs that don't work / time out, and just move on to the next URL. I have tried to put in a catch(err), but I'm not putting it in correctly and it fails. If the list of URLs is more than about 5, it freezes my server and I have to reboot. I think maybe it's waiting to iterate through all the URLs before saving and that's overloading the server? Or is there something else in my code that is causing the problem?

    const puppeteer = require('puppeteer');
    const fs = require('fs');
    const axios = require('axios');

    process.setMaxListeners(Infinity); // <== Important line

    async function scrapePage(url, index) {
      // Launch a new browser
      const browser = await puppeteer.launch({ headless: true, args: ['--no-sandbox'] });
      // Open a new page
      const page = await browser.newPage();
      // Set the user agent
      await page.setUserAgent('Mozilla/5.0 (Macintosh; Intel Mac OS X 13_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/<IP_ADDRESS> Safari/537.36');
      // Navigate to the desired webpage
      await page.goto(url, {
        waitUntil: "domcontentloaded",
      });
      // Wait for selector
      await (async () => {
        await page.waitForSelector("#root > section > section > main > div.py-6.container > div.columns.mt-4 > div.column.is-flex-grow-2 > div:nth-child(3) > div.ant-card-body > div > div > div > canvas", { visible: true });
      })();
      // Get the HTML content of the page
      const html = await page.content();
      // Generate the file names using the index value
      const htmlFileName = `${index.toString().padStart(4, '0')}.html`;
      const screenshotFileName = `${index.toString().padStart(4, '0')}.png`;
      // Check if the HTML file exists
      const filePath = '/root/Dropbox/scrapes/' + htmlFileName;
      if (fs.existsSync(filePath)) {
        // If the file exists, rewrite the content with the new scraped HTML
        fs.writeFileSync(filePath, html);
      } else {
        // If the file doesn't exist, create the file
        fs.closeSync(fs.openSync(filePath, 'w'));
        // Save the scraped content to the newly created file
        fs.writeFileSync(filePath, html);
      }
      // Capture a screenshot of the page
      await page.screenshot({ path: '/root/scrapes/' + screenshotFileName });
      // Close the browser
      await browser.close();
    }

    // Read the lines of the file
    const lines = fs.readFileSync('/root/Dropbox/urls.txt', 'utf-8').split('\n');

    // Iterate through each URL in the file
    for (let i = 0; i < lines.length; i++) {
      // Scrape the page
      scrapePage(lines[i], i + 1);
    }

Since you're in an async closure you might as well use the async versions of the fs methods. URLs opening and closing too quickly without being awaited is probably the cause of the freezing while iterating: changing scrapePage(lines[i], i + 1); to await scrapePage(lines[i], i + 1); should solve that. Also, page.waitForSelector doesn't need to be wrapped in (async () => {...}) the way you have it in your code. To check whether a URL works or not, you need to get the response from page.goto(); if its status is 200 (HTTP status code) it means OK. On try...catch, take a look at the code below.
urls.txt - random urls to test the code - stackover5flow is there to get an error https://stackoverflow.com/questions/75911774/error-during-ssl-handshake-with-remote-server-node-js-with-apache https://stackoverflow.com/questions/75911761/unit-tests-of-private-function-in-javascript https://stackoverflow.com/questions/75911767/is-r-language-more-simplified-to-use-than-sql https://stackoverflow.com/questions/75911766/is-there-an-equivalent-of-term-variable-on-windows https://stackover5flow.com/questions/75911176/puppeteer-timeout-30000ms-when-headless-is-true https://stackoverflow.com/questions/75909981/how-can-i-elevate-the-privileges-of-an-executable-using-setuid-on-mac https://stackoverflow.com/questions/75911746/function-that-returns-an-array-of-4-int-taking-the-values-0-or-1-and-that-rando code : const puppeteer = require('puppeteer'); const fs = require('fs'); const fsp = fs.promises; process.setMaxListeners(Infinity); // <== Important line let browser; (async () => { async function scrapePage(url, index, timeout = false) { // Launch a new browser const browser = await puppeteer.launch({headless: false, args: ['--no-sandbox']}); const page = await browser.newPage(); // Set the user agent await page.setUserAgent('Mozilla/5.0 (Macintosh; Intel Mac OS X 13_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/<IP_ADDRESS> Safari/537.36'); try { let res = await page.goto(url, {waitUntil: "domcontentloaded", timeout: 20000}); res = res.status(); if (res == 200) { // test if url can connect or not let selector = (timeout) ? 
"body[class=potato]" : "div.container"; // just to test timeout await page.waitForSelector(selector, {visible: true, timeout : 5000}); // timeout set to 5 secs, change if required const html = await page.content(); const htmlFileName = `${index.toString().padStart(4, '0')}.html`; const screenshotFileName = `${index.toString().padStart(4, '0')}.png`; const dir = 'test/scrapes'; const filePath = `${dir}/${htmlFileName}`; const screenshotPath = `${dir}/${screenshotFileName}`; await fsp.mkdir(dir, { recursive: true }, (e) => { if (e) console.log(e);}); // chk dir, if doesn't exist create await fsp.writeFile(filePath, html, { flag: 'w+' }, (e) => { if (e) throw e;}); // write/overwrite or throw error will stop script await page.screenshot({ path: screenshotPath}); } else { console.log (res); } } catch (e) { // this will catch any timeout or connection error console.log(e.message); } await browser.close(); } // Read const lines = (await fsp.readFile('urls.txt', 'utf8')).split('\n'); for (let i = 0; i < lines.length; i++) { let timeout = (i == 1) ? true : false; // just to generate a timeout await scrapePage(lines[i], i + 1, timeout); } })().catch(err => console.error(err)). finally(() => browser ?. close()); result : so it will give errors but will run unless that error is while writing/overwriting the files. Thank you so much, this solved both problems.
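The two fixes (catching per-URL errors, and awaiting each scrapePage call so only one page is open at a time) boil down to this pattern. A minimal sketch, with a hypothetical processUrl standing in for the real Puppeteer scraper:

```javascript
// Sequentially process URLs, skipping any that fail instead of aborting.
// processUrl is a placeholder for scrapePage; only the control flow matters here.
async function processAll(urls, processUrl) {
  const results = [];
  for (const url of urls) {
    try {
      // awaiting here means only one browser/page is open at a time,
      // which is what keeps the server from being overloaded
      results.push(await processUrl(url));
    } catch (e) {
      // a timeout or connection error lands here; log it and move on
      console.log(`skipping ${url}: ${e.message}`);
    }
  }
  return results;
}
```

Any rejection from processUrl (for example a navigation timeout) is caught and logged, and the loop simply continues with the next URL.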
common-pile/stackexchange_filtered
How do I create an ArrayList of Y-axis values for a LineGraph in Kotlin using the MPAndroidChart library? I am trying to add values to a dataset on the Y-axis but I don't seem to understand how to do that. I want to add two kinds of values, floats and integers. Below is my code; it worked in Java, but when I translated it to Kotlin it stopped working. I am new to Kotlin and am building my first app using MPAndroidChart. //Create an array list of Y-Axis values private fun setYAxisValues():ArrayList<Entry> { val yVals = ArrayList<Entry>() yVals.add(Entry(60, 0)) yVals.add(Entry(48, 1)) yVals.add(Entry(70.5f, 2)) yVals.add(Entry(100, 3)) yVals.add(Entry(180.9f, 4)) return yVals }
common-pile/stackexchange_filtered
Can wildcards be used in Areas Extension? Areas Extension allows for areas to be defined by postal code. Is it also possible to define areas with postal codes that include wildcards? For example, I might want all the postal codes in the form A1A *** to be included in my area (otherwise, I have to list the thousands of possible postal codes). Has anyone got something working along these lines? None of the obvious wildcard options seems to work. I think you'd either have to hack this line https://lab.civicrm.org/extensions/areas/-/blob/e5579b00c9826cfb27cc1bfbdaca932e4cbc5864/Civi/Areas/DefinitionType/PostalCode.php#L40, or do like this example, and in the getWhereClause() do that logic. Thanks, looks like a LIKE is needed, I'll give that a try.
common-pile/stackexchange_filtered
Printing Stdout In Command Line App Without Overwriting Pending User Input In a basic Unix-shell app, how would you print to stdout without disturbing any pending user input. e.g. Below is a simple Python app that echos user input. A thread running in the background prints a counter every 1 second. import threading, time class MyThread( threading.Thread ): running = False def run(self): self.running = True i = 0 while self.running: i += 1 time.sleep(1) print i t = MyThread() t.daemon = True t.start() try: while 1: inp = raw_input('command> ') print inp finally: t.running = False Note how the thread mangles the displayed user input as they type it (e.g. hell1o wo2rld3). How would you work around that, so that the shell writes a new line while preserving the line the user's currently typing on? You have to port your code to some way of controlling the terminal as slightly better than a teletype -- e.g. with the curses module in Python's standard library, or other ways to move the cursor away before emitting output, then move it back to where the user's busy inputting stuff. You could defer writing output until just after you receive some input. For anything more advanced you'll have to use Alex's answer import threading, time output=[] class MyThread( threading.Thread ): running = False def run(self): self.running = True i = 0 while self.running: i += 1 time.sleep(1) output.append(str(i)) t = MyThread() t.daemon = True t.start() try: while 1: inp = raw_input('command> ') while output: print output.pop(0) finally: t.running = False
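For the curses/escape-sequence route Alex mentions, the core trick is to erase the in-progress input line, print the message, then redraw the prompt. The redraw can be sketched as a pure string. This assumes an ANSI/VT100-compatible terminal, and that the pending keystrokes are tracked separately (e.g. via the readline module); compose_redraw is a hypothetical helper, not a standard function:

```python
def compose_redraw(message, prompt, pending):
    # \r      -- move the cursor back to column 0 of the input line
    # \x1b[K  -- erase from the cursor to the end of the line
    # then print the message on its own line and redraw prompt + pending input
    return "\r\x1b[K" + message + "\n" + prompt + pending

# e.g. the counter thread would emit:
# sys.stdout.write(compose_redraw("5", "command> ", "hel")); sys.stdout.flush()
```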
common-pile/stackexchange_filtered
How to position float items relative to bottom of container DIV? I have a DIV (with non-fixed height) and I want some float images, a menu of my site, to be always positioned at fixed distance from the bottom of the DIV. Code is like this: <div class="header" id="header"> <img src="img/aip.png" width="435" height="18" class="pageName"> <!--margin-top: 72px;--> <!--menu. Last item in line is 1st--> <img src="img/m-about.png" width="75" height="10"class="menu" > <img src="img/m-services.png" width="71" height="10" class="menu"> <img src="img/m-portraits.png" width="79" height="10" class="menu"> <img src="img/m-cosplay.png" width="65" height="10" class="menu" > <img src="img/m-concert.png" width="68" height="10" class="menu" > </div> where class "menu" says "float:right" AND contains margin-top parameter that sets position of menu from TOP of the "header" DIV. I achieve fixed position related to bottom via adjustments by JS on every resize. This does not work well: since height of enclosing DIV is variable (set in % of screen), I have to call javascript to compute pixel values and make adjustments to margin-top. It causes nasty side effects and jumps. So, instead, I need a pure CSS to set position related to bottom of "header" DIV, not top of it. How can I do it? CSS: .menu { float:right; margin-top:0px; /* adjusted by JS later so menu items have certain margin-top*/ padding:8px; padding-top:10px; padding-bottom:10px; cursor:pointer } .header { display: table-row; background-color:#181818; min-height: 114px; height:10%; } You should post the CSS also #header {position:fixed;bottom:20%;left:0;} something like that ? Do you have any other content in the header? It looks like the header needs to be at least 18px tall based on your tallest image? @suspectus: posted css. Header of course has min height. Where do you want the .pageName element? 
Working Prototype The following might be a way of achieving this design: The HTML could be: <div class="header" id="header"> <img src="http://placekitten.com/500/20" width="435" height="30" class="pageName"> <!--margin-top: 72px;--> <!--menu. Last item in line is 1st--> <div class="nav-wrap"> <img src="http://placekitten.com/100/10" width="68" height="10" class="menu" > <img src="http://placekitten.com/100/10" width="68" height="10" class="menu" > <img src="http://placekitten.com/100/10" width="68" height="10" class="menu" > <img src="http://placekitten.com/100/10" width="68" height="10" class="menu" > <img src="http://placekitten.com/100/10" width="68" height="10" class="menu" > </div> </div> I would wrap the menu/images in a wrapper and then absolutely position it with respect to the parent container .header using the CSS: .menu { float:right; margin-top: 0px; /* adjusted by JS later so menu items have certain margin-top*/ padding:8px; padding-top:10px; padding-bottom:10px; cursor:pointer } .header { background-color:#181818; min-height: 114px; height:10%; position: relative; min-width: 960px; } .nav-wrap { overflow: auto; position: absolute; right: 0; bottom: 0; display: inline-block; } .pageName { position: absolute; left: 10px; bottom: 10px; } You can float the images either left or right or use inline-blocks If you want the .nav-wrap to shrink to fit the menu items, use inline-block, else block will also work. For .pageName, I would pin it to the bottom left corner, add some offsets if needed. I would also give .header a min-width value to make sure that the various images don't overlap when you shrink the window size. 
Demo fiddle: http://jsfiddle.net/audetwebdesign/vN3RQ/ Your placekitten site is great XD My suggestion would be to wrap the menu images in a div (this is just as, if not more semantically valid IMO), then position that absolutely: html <div id="header"> <img height="18" width="435"/> <div id="menu" class="cf"> <img height="10" width="75"/> <img height="10" width="71"/> <img height="10" width="79"/> <img height="10" width="65"/> <img height="10" width="68"/> </div> </div> css img { border: 1px solid red; } #header { border: 1px solid blue; min-height: 150px; padding-bottom: 12px; /*note +2 for border, should be 10px really*/ position: relative; } #menu { border: 1px solid green; position: absolute; right:0; bottom:20px; } #menu img { float: right; } .cf:before, .cf:after { content: " "; display: table; } .cf:after { clear: both; } See a js fiddle here micro clear fix courtesy of N. Gallagher
common-pile/stackexchange_filtered
Dictionaries with volatile values in Python unit tests? I need to write a unit test for a function that returns a dictionary. One of the values in this dictionary is datetime.datetime.now() which of course changes with every test run. I want to ignore that key completely in my assert. Right now I have a dictionary comparison function but I really want to use assertEqual like this: def my_func(self): return {'monkey_head_count': 3, 'monkey_creation': datetime.datetime.now()} ... unit tests class MonkeyTester(unittest.TestCase): def test_myfunc(self): self.assertEqual(my_func(), {'monkey_head_count': 3}) # I want to ignore the timestamp! Is there any best practices or elegant solutions for doing this? I am aware of assertAlmostEqual(), but that's only useful for floats iirc. Just delete the timestamp from the dict before doing the comparison: class MonkeyTester(unittest.TestCase): def test_myfunc(self): without_timestamp = my_func() del without_timestamp["monkey_creation"] self.assertEqual(without_timestamp, {'monkey_head_count': 3}) If you find yourself doing a lot of time-related tests that involve datetime.now() then you can monkeypatch the datetime class for your unit tests. Consider this import datetime constant_now = datetime.datetime(2009,8,7,6,5,4) old_datetime_class = datetime.datetime class new_datetime(datetime.datetime): @staticmethod def now(): return constant_now datetime.datetime = new_datetime Now whenever you call datetime.datetime.now() in your unit tests, it'll always return the constant_now timestamp. And if you want/need to switch back to the original datetime.datetime.now() then you can simple say datetime.datetime = old_datetime_class and things will be back to normal. This sort of thing can be useful, though in the simple example you gave, I'd recommend just deleting the timestamp from the dict before comparing. Why didn't I think of that simple solution? 
I will be testing a lot of possible return values of my_func, though, so I think I will monkeypatch datetime as you suggested. Much appreciated, really clever! You should put the code that switches the datetime back in a tearDown method -- that way this unit test won't affect other test classes. Another option is to pass datetime.datetime.now as the default for a named parameter, then pass in your own function in the unit test. This avoids monkeypatching. Chrispy, a unit test shouldn't require modifying the code it's testing.
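The delete-the-timestamp idea can also be written as a small helper that leaves the returned dict untouched. A sketch; without_keys is a hypothetical helper, not a standard-library function:

```python
import datetime
import unittest

def my_func():
    return {'monkey_head_count': 3, 'monkey_creation': datetime.datetime.now()}

def without_keys(d, keys):
    # build a filtered copy instead of mutating the original dict
    return {k: v for k, v in d.items() if k not in keys}

class MonkeyTester(unittest.TestCase):
    def test_myfunc(self):
        self.assertEqual(without_keys(my_func(), {'monkey_creation'}),
                         {'monkey_head_count': 3})
```

Run it with unittest.main() or any test runner; the volatile key is dropped just before the comparison.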
common-pile/stackexchange_filtered
Sigma 70 - 210 autofocus lens Compatibility with Nikon DSLR I'm having a problem deciding on a course of action to replace my Nikon F401X. I only have a Sigma 70-210 1:4-5.6 lens in the mix. My dilemma is this: I am considering buying a Fujifilm FinePix HS30EXR with a 30X zoom, which is considered a bridge camera. My other option is to buy a Nikon D3100/5100 replacement, which would come with a standard 18-55 VR lens, and use the Sigma 70-210 lens which I already have for telephoto. Will the Sigma lens autofocus on these two Nikons? The lens barrel says autofocus. At the business end the lens writing says " SIGMA UC ZOOM 70-210 1:4-5.6 MULTI-Coated Lens MADE in JAPAN 52. The back end of the lens has 5 small pins protruding around the barrel, in one group of 4 pins and a space and then a single pin, 5 pins in total. It has a manual f-stop adjustment at the trailing edge of the lens. There appears to be a serial number which reads 2017941. The lens is quite heavy, if that helps. Now I imagine that the purist will have difficulty understanding why I would even consider a Fujifilm camera, but these have improved greatly over the years and work very well. My interests are in landscapes and scenery with some macro and people pictures. For about the same money, both of these options are attainable. Can you help me weigh the pros and cons of these alternatives? From what I've been able to find, your Sigma is an AF-D lens and as such lacks an autofocus motor, so sadly it will not autofocus on the D5100/D3100 because these bodies lack an in-body motor. You will need a Nikon D90 or D7000 to autofocus with this lens. The lens barrel says autofocus. At the business end the lens writing says " SIGMA UC ZOOM 70-210 1:4-5.6 MULTI-Coated Lens MADE in JAPAN 52. The back end of the lens has 5 small pins protruding around the barrel, in one group of 4 pins and a space and then a single pin, 5 pins in total. It has a manual f-stop adjustment at the trailing edge of the lens.
There appears to be a serial number which reads 2017941. The lens is quite heavy, if that helps. I suspect it has a drive motor incorporated. Can you help me weigh the pros and cons of these alternatives? Image quality The HS30EXR has a 1/2" sensor (6.4 x 4.8 mm). The D5100, D90 and D7000 have an APS-C sized sensor (23.6 x 15.7 mm) - this will make a difference to image quality. Macro The advertised macro capability of superzooms usually specifies the closest focus distance at the widest angle. I find this is almost never a useful scenario. Take a small thing to a shop and try the camera to get a realistic idea of macro ability. Obviously, the DSLRs can take a true macro lens (e.g. at some point in the future). Some really good macro photos are taken using an inexpensive reversing ring on a DSLR lens. Focus speed DSLRs use phase-shift detection for autofocus; this is usually significantly faster than the contrast-detection method used in most mirrorless cameras including the HS30EXR. Bulk & Weight The bridge camera will be much easier to carry around than the DSLRs listed above with an equivalent range of lenses.
common-pile/stackexchange_filtered
What is the physical limit of thermal insulation? If we consider a container at a (very low) temperature $T_0$, surrounded by some passive structure, in turn surrounded by an environment at temperature $T_1>T_0$, there will be an inward heat flow leaking through the structure. What is the physical limit of minimal heat leakage? Or, conversely, what is the limiting thermal insulation structure? Insulation An obvious choice of structure is some insulating material. In this case the flow is $$q \approx kA_0\frac{T_1-T_0}{d}$$ where $k$ is the thermal conductivity, $A_0$ the area of the inner compartment, and $d$ the thickness (this ignores geometric effects, hence the approximation). Under standard conditions silica aerogel and polyurethane foam reach 0.020 W/(m K), but most of that is the thermal conductivity of the air. By reducing pressure it can be lowered: at $10^{-7}$ atmospheres it is down to 0.000012 W/(m K). (The thermal conductivity $k$ of metals is temperature dependent as $k=k_0T$ at low temperature, so the formula above needs to be adjusted a bit. I will ignore this in what follows, but it is worth noting. Also, a pre-cooled insulator can hence get lower conductivity; however, this will not be stable as ambient heat diffuses in and changes both it and the thermal capacity. Again, I will ignore this time-dependent nonlinearity since it does not appear to dominate over radiation.) Radiation Were we instead to have a perfect vacuum between the container and the environment, they would couple by blackbody radiation. The flow would be $$q=\epsilon A_0\sigma (T_1^4-T_0^4)$$ where $\epsilon$ is the emissivity and $\sigma$ the Stefan-Boltzmann constant. There is no dependency on $d$ since the container only "sees" the environment in all directions. The minimum emissivity in standard tables is that of polished silver, 0.02. If we for example assume $A_0=1$ m$^2$, $T_0=10^{-29}$ K, $T_1=3$ K, then using $10^{-7}$ atm air insulation the flow is $q=3.6\cdot 10^{-5}/d$ Watts.
Using vacuum and polished silver the flow is $9.186\cdot 10^{-8}$ W. In theory we could get a smaller flow by having a more than 391 meter thick insulation, but this implicitly presupposes that the entire mass of insulation has been cooled to an appropriate temperature beforehand. Edit: There is multi-layer insulation that can reduce the thermal flow even in the radiative case. If we insert $N$ layers the heat transfer coefficient will be $4\sigma T^3/(N(2/\epsilon - 1)+1)$, which can be made arbitrarily small by increasing $N$. The limit of the number of layers is when the distance become a few wavelengths since near-field effects will start coupling them. From the Wien displacement law we get $\lambda=b/T$, so for 3K this $\lambda \approx 1$ mm, while down at $T=10^{-29}$ K $\lambda\approx 9$ Gpc. For a given $d$ the number of layers that can fit in scales as $d^{3/4}$. Discussion So, refining the question a bit to avoid "cheating" by having infinitely thick pre-cooled insulation: for a fixed $d$, what sets the physical limit of heat flow? For conduction there are clearly many physical effects, but getting rid of the material gets rid of the phonons, only leaving photon modes in blackbody radiation. Other fields are too short-range to matter or couple too weakly (there is presumably some gravitational wave heat transfer, but it is vastly overshadowed by the electromagnetic contribution). Closely related sub-questions: Is there any principle that gives a lower bound on emissivity? Does quantization effects hinder radiative heat transfer at very low temperatures (I am interested in the $\sim 10^{-29}$ K range)? Is there some thermodynamic principle that implies that the heat flow must be nonzero? I have used either this multilayered insulation or a very similar product. The data sheet claims a hard vacuum limit of a few microwatts per meter per kelvin at a mean temperature of 80K. Did you intend to type $10^{-29}$ kelvin? 
The record low temperatures achieved with microscopic masses of Bose-Einstein condensates are many orders of magnitude warmer than this. Macroscopic passive insulation probably stops being important around the microkelvin range; the isolation of the dilute gas that makes the condensate uses a very different method. @rob - Yes, the temperature is intentional. It is around the de Sitter temperature of the universe. In my research, I am interested in extreme low-temperature physics of the far future.
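The figures quoted in the question can be checked directly, taking $\sigma \approx 5.67\cdot10^{-8}$ W m$^{-2}$ K$^{-4}$:

```python
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
A0, T1, T0 = 1.0, 3.0, 1e-29   # area in m^2, temperatures in K

# conduction through air at 1e-7 atm: k = 0.000012 W/(K m);
# the flow is q = k*A0*(T1-T0)/d, so we compute q*d here
k = 1.2e-5
q_cond_times_d = k * A0 * (T1 - T0)   # -> 3.6e-5 / d watts, as stated

# radiation between polished-silver surfaces: emissivity 0.02
eps = 0.02
q_rad = eps * A0 * sigma * (T1**4 - T0**4)   # -> about 9.186e-8 W, as stated

print(q_cond_times_d, q_rad)
```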
common-pile/stackexchange_filtered
Java Implementing A Working FPS System I am coding a "Gravity Simulator" in java, which just includes a ball that has a x and y force. The way I'm doing it at the moment (which is not working) is the following: Rules of the program: 4 pixels count as 1 meter. the gravity is 9.8m/s^2, so earth gravity. in my main class I have this: public int sleepMilli = 17; //the thread will sleep 17 milliseconds after every execution public double g = 9.8; //gravity then, in the main loop I have this: try { Thread.sleep(17); } catch (InterruptedException e) { System.out.println("Thread couldn't sleep."); e.printStackTrace(); } so. how would I go about implementing a working fps system into this "game" so it doesn't affect the movement speed of the ball? If you want any more information provided, simply ask me to do so. Sorry, what's fps? frames per second @Pang.... And what's a fps system? my oral expression of a mechanism in the game that allows the program to run at different speeds without modifying the movement behaviour of an object @andih There is at least one explanation on What are the fundamentals of game development? and there are a lot of solutions like Introduction - The World Of Bouncing Balls. One common solution is make all your simulation methods take the amount of seconds that elapsed since the last frame as an argument. For example public void move(float secondsElapsed). Then, in that main thread, measure the time that elapsed since last frame using System.currentTimeMilis(), don't assume that Thread.sleep(17); will sleep for exactly 17 miliseconds. This has two advantages: 1) super easy change of desired FPS and 2) your objects will move at correct speeds even if the game FPS would drop. so, in the main loop (while(true)) I shouldn't make the thread sleep at all? just execute as fast as possible? @kajacx @PeterLikesCode for max FPS, sure thing. Or you could have let's say max 60, so then you would wait. 
I'm going to see if I can get this to work and then reply/mark this as "the" answer. I'm having some trouble with this. @kajacx So I tried to measure the elapsed time by storing the current time millis, and then subtracting the current time millis from those last millis. However, this always returns 0.0. Make sure you don't use integer division, because 20/1000 == 0! Something like (current-previous)/1000f would do; note the f denoting the float constant. Or, if the computation takes less than a millisecond (can happen), use System.nanoTime() for time in nanoseconds. That fixed it. Thank you. I'm just looking at how to do this now as I'm not really a creative person. I now have the move method taking into account the elapsed millis (not seconds) and I'd like to make it move now. Could you give me an example? Please respond to the above :) @kajacx Well, this page is for single problem solving, not a tutorial on how to make a whole 2D physics engine. Try googling "how to make a simple 2D game with gravity" or something. Then ask questions such as "This object is accelerating twice as much as expected" and add relevant code, not "How do I make a moving and accelerating object", as that is too broad.
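Putting the thread's advice together in one sketch (not the asker's actual game code): position advances by velocity times elapsed seconds, with the elapsed time measured via System.nanoTime() and converted with floating-point division:

```java
// Sketch of a frame-rate-independent update loop: position advances by
// velocity * secondsElapsed, so the ball moves at the same speed whether
// a frame took 17 ms or 50 ms.
public class DeltaTimeDemo {
    static double position = 0.0;        // metres (4 px = 1 m in the game)
    static final double VELOCITY = 9.8;  // metres per second

    static void move(double secondsElapsed) {
        position += VELOCITY * secondsElapsed;
    }

    public static void main(String[] args) {
        long previous = System.nanoTime();
        for (int frame = 0; frame < 3; frame++) {
            long now = System.nanoTime();
            // nanoseconds -> seconds; floating-point division avoids the
            // integer-truncation bug discussed above (20/1000 == 0)
            double secondsElapsed = (now - previous) / 1_000_000_000.0;
            previous = now;
            move(secondsElapsed);
        }
        System.out.println(position);
    }
}
```

Feeding move() a fixed dt makes it deterministic: two updates of 0.5 s each advance the position by exactly one second's worth of velocity.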
common-pile/stackexchange_filtered
AWS CodePipeline for ECS scheduled tasks? I know that AWS CodePipeline supports updating ECS services. What if I want to instead update an ECS scheduled task, which does not contain a service definition? It turns out that CodePipeline doesn't support deploying to scheduled tasks. Instead, you have to specify a static tag (such as latest) in the task definition for the ECS scheduled task, and then make sure it always pulls the latest image by setting the image pull policy accordingly. (The default policy should work, but not guarantee that a cached image will not be run if the pull fails). Since your ECS scheduled task always uses the latest image, all you need to do in the CodePipeline is include CodeCommit and CodeBuild, then skip CodeDeploy. Your CodeBuild should include a buildspec.yml file that builds the latest image. The CodeBuild pushes the latest image to your ECR. So when you git push a commit to the repo, the pipeline triggers the CodeBuild which builds the new image, so next time your ECS scheduled task runs, it uses the new image from ECR.
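A buildspec.yml for that flow might look like the sketch below (the ECR_REPO_URI and AWS_REGION variables are assumptions, placeholders you would define in the CodeBuild project's environment, not values CodePipeline provides):

```yaml
# Hypothetical buildspec.yml: rebuild the image and push it as :latest,
# so the next scheduled-task run picks it up without a deploy stage.
version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$ECR_REPO_URI"
  build:
    commands:
      - docker build -t "$ECR_REPO_URI:latest" .
  post_build:
    commands:
      - docker push "$ECR_REPO_URI:latest"
```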
common-pile/stackexchange_filtered
adding a css file to my plugin i'm new to wordpress and trying to find out how i can add a css file to plugin. I have created a plugin called my-plugin and wrote the following into my-plugin.php: <?php /* Plugin Name: my Plugin Description: some description Version: 1.0 Author: Ginso */ add_action('wp_enqueue_scripts', 'callback_for_setting_up_scripts'); function callback_for_setting_up_scripts() { wp_register_style( 'mein-plugin', 'localhost/wordpress/wp-content/plugins/mein-plugin/style.css' ); wp_enqueue_style( 'mein-plugin' ); } function cssTest() { return '<p class="myClass">some text</p>'; } add_shortcode("test", "cssTest"); ?> and this is currently my style.css: .myClass{ color:red; } now i added [test] to a page and it displays the correct text, but it doesnt apply the style. What am i doing wrong? Have you viewed the source to see whether the CSS file is actually getting called? Have you tried inspecting the element to see if perhaps one of the theme styles is overriding it? Some themes may have more specific styles, i.e. if the theme has a style like main.themeClass p it would override .myClass because it has higher specificity. i am not sure what u mean with your first question. When i inspect the paragraph, my style doesn't appear. if it was overridden, it would still appear there Don't put the full path to the file in wp_register_style() use plugins_url() to get the full url with just the path from the plugins folder. So change: 'localhost/wordpress/wp-content/plugins/mein-plugin/style.css' To: plugins_url( 'style.css', __FILE__ )
common-pile/stackexchange_filtered
Java NumberFormat instance creation Got this doubt while using the NumberFormat and Locale classes. What is the difference between this approach NumberFormat nf = NumberFomat.getCurrencyInstance(); nf.setCurrency(Currency.getInstance(Locale.US)); String us = getCurrency().getDisplayName(); And this approach NumberFormat us = NumberFormat.getCurrencyInstance(Locale.US); Is that third line of code in the first snippet really needed? If not, delete it. Keep your Question as simple and as pointed as possible, with no distractions. The first version has two errors that cause it to not compile. NumberFomat and getCurrency() are both undefined. The difference, therefore, is that the first approach will lead to a compilation error and the second will produce a NumberFormat instance that prints numbers as currency as dictated by the US locale. Java 1.1 versus 1.4 One appears to be legacy code from Java 1.1, the other more modern from Java 1.4. The NumberFormat class and its getCurrencyInstance() method appear to both date back to Java 1.1. The Currency class was not added to Java until Java 1.4. So the NumberFormat#setCurrency method did not arrive until Java 1.4. I glanced quickly through the Javadoc, and did not see any guidance. I do not know the full story here. But I would be inclined to use the newer code. I would explicitly get my Currency instance, and pass it to setCurrency. But perhaps someone else will post a better Answer explaining if there is actually any practical difference. By the way, heed the caution at the bottom of the description of the Currency class: It is recommended to use BigDecimal class while dealing with Currency or monetary values as it provides better handling of floating point numbers and their operations.
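A runnable comparison of the two working variants. The output comment for the German instance is deliberately vague, since the exact symbol placement varies by JDK/CLDR version:

```java
import java.text.NumberFormat;
import java.util.Currency;
import java.util.Locale;

public class CurrencyDemo {
    public static void main(String[] args) {
        // One-step approach: the locale fixes both the currency AND the format rules.
        NumberFormat us = NumberFormat.getCurrencyInstance(Locale.US);
        System.out.println(us.format(1234.56)); // $1,234.56

        // setCurrency (Java 1.4+) swaps only the currency; the grouping and
        // decimal conventions still come from the instance's original locale.
        NumberFormat de = NumberFormat.getCurrencyInstance(Locale.GERMANY);
        de.setCurrency(Currency.getInstance(Locale.US));
        System.out.println(de.format(1234.56)); // German digit grouping, US dollar
    }
}
```

So the practical difference only shows up when the formatting locale and the currency differ; with Locale.US for both, the one-line form is all you need.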
common-pile/stackexchange_filtered
apply excel vba to entire column instead of single cell Hi I would like to apply the below vba to the entire column AK instead of just AK1 Sub Tidy_Cell() Range("AK1") = Replace(Range("AK1"), Chr(13), "") For x = 1 To Len(Range("AK1")) If Mid(Range("AK1"), x, 1) = Chr(10) And Mid(Range("AK1"), x + 1, 1) = Chr(10) Then Range("AK1") = Left(Range("AK1"), x) & Mid(Range("AK1"), x + 2) End If Next With Range("A1") .Value = Mid(.Value, 1) .VerticalAlignment = xlTop End With End Sub Thanks a lot for any help! Try recording a macro where you select the entire column and perform this replacement (but perhaps replace "a" with "b" or something just to test). Then see if you can use this created code to modify yours. I would put all your code into a Loop that checks column AK dim lLastUsed As Long lLastUsed = Cells(1048576, "AK").End(xlUp).Row For i = 1 to lLastused //insert your code here Next i Remember every spot you defined it to be Range("AK1") you need to change it to Range("AK" & i) so it ends up something like this: Sub Tidy_Cell() Dim lLastUsed As Long lLastUsed = Cells(1048576, "AK").End(xlUp).Row For i = 1 to lLastUsed Range("AK" & i) = Replace(Range("AK" & i), Chr(13), "") For x = 1 To Len(Range("AK" & i)) If Mid(Range("AK" & i), x, 1) = Chr(10) And Mid(Range("AK" & i), x + 1, 1) = Chr(10) Then Range("AK" & i) = Left(Range("AK" & i), x) & Mid(Range("AK" & i), x + 2) End If Next x Next i With Range("A1") .Value = Mid(.Value, 1) .VerticalAlignment = xlTop End With End Sub Hope this helps you out Thanks a lot! Works perfectly fine
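For clarity, the per-cell cleanup this macro performs (drop every carriage return, then collapse runs of line feeds into one) is sketched below in Python, since the logic is easier to test there than in VBA:

```python
import re

def tidy_cell(text):
    # Chr(13) removal: strip carriage returns entirely
    text = text.replace("\r", "")
    # the VBA loop removes one LF of every adjacent Chr(10) pair, which in
    # effect collapses any run of line feeds down to a single one
    return re.sub(r"\n+", "\n", text)
```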
common-pile/stackexchange_filtered
JQuery do something after DOM element is created - DOMNodeInserted not working Ok, I have one empty div, which is dynamically filled with some content later: <div class="row no-margin"> <div id="sensorInfoContent"> </div> </div> Later it is filled with some other divs and content. I need to select this child div: <div class="circDash" id="batPerc" accesskey="30" style="height:100px; background-color:#000000;"> And call this function on it: $(*DIV*).circleProgress({ value: 0.75, size: 80, fill: { gradient: ["red", "orange"] } }); I tried to do that on click, and it's working. Like this: $('#sensorInfoContent').on("click", ".circDash", function() { $(this).circleProgress({ value: 0.75, size: 80, fill: { gradient: ["red", "orange"] } }); }); But I need it to be done after element is dynamically added, not on click. Tried DOMNodeInserted, it's not working: $('#sensorInfoContent').on("DOMNodeInserted", ".circDash", function() { $(this).circleProgress({ value: 0.75, size: 80, fill: { gradient: ["red", "orange"] } }); }); Any ideas, alternatives or work-arounds? which is dynamically filled with some content later How are you doing this? Presumably you're using AJAX, if so just use a callback function. How you "dinamically add" the div? Hmm, I'll check it out. Back end is created with mostly Java, Java Scala, don't know if Ajax is used for this. Still hoping if it could be solved in some simpler way. Ok, I solved it on my own, by blind guessing. Script code was initially written on the top of the page, now I put the script below this .circDash div and it's working! :D It was just about script positioning, I guess now script is running after div is created. I am new in JQuery/JS so... This might be a rookie mistake. :D Cheers! EDIT: If someone wants and knows to answer on why function on.("click",... worked and function on.("DOMNodeInserted",... didn't, that would be the best answer on this all, regardless of my solved solution. 
Just trigger the click manually $('#sensorInfoContent').on("click", ".circDash", function() { $(this).circleProgress({ value: 0.75, size: 80, fill: { gradient: ["red", "orange"] } }); }); Like this $('.circDash').trigger('click');
common-pile/stackexchange_filtered
TypeError: Cannot read property 'name' of undefined, at Array.forEach When I press the button to display the task pane, I am seeing the following error message in the console window, which I cannot remove or explain: "Uncaught (in promise) TypeError: Cannot read property 'name' of undefined" at t (word-web-16.00.js:26) at word-web-16.00.js:26 at Office.js:46 at Array.forEach () at d (Office.js:46) at Office.js:46 Steps to Reproduce - Sideload the TypeScript add-in into Word Online and press the button to display the task pane. The task pane is displayed, but why is this error message appearing? This error message might have always been there, not sure. I have tried all the following URLs and the message is consistent. src="appsforoffice.microsoft.com/lib/beta/hosted/Office.js" src="appsforoffice.microsoft.com/lib/1/hosted/Office.js" src="appsforoffice.microsoft.com/lib/1.1/hosted/Office.js" This appears in Word Online. Windows 10 64-bit and Office 64-bit. Browser - Microsoft Edge, Version 88.0.705.68 (Official build) (64-bit). The following repository contains the project (only 5 files): https://github.com/OfficeAddins/undefined This issue was logged on GitHub but no-one has replied yet: https://github.com/OfficeDev/office-js/issues/1644 Can you try using a lowercase o when referencing the office.js URL - src="appsforoffice.microsoft.com/lib/beta/hosted/office.js" This could be due to a case-sensitive string comparison bug, which we will fix. You should be able to work around it by using the lowercase office.js. Thank you so much Jay !!! Yes, the error message has disappeared !!
common-pile/stackexchange_filtered
Upgrading Symfony from 1.3 to 3 I have Symfony version 1.3. I wanted to know: is it possible to upgrade from 1.3 to 3? If yes, how can we do it? Or do we have to upgrade to 2 first and then go to 3? Wrapping it as Anna suggested might work. But basically, S1 and S2/S3 are completely different and there is no upgrade path. I would suggest making a new S3 project, learning how Symfony 3 works, then implementing the functionality from your S1 base. But it will end up being a complete rewrite. No need to go to S2 at all. Ok, thanks, but rewriting everything is not a good solution. Perhaps hiring a contractor? Even though both frameworks start with the word Symfony, they are completely different. Like trying to upgrade an operating system between Windows and Linux. Ohhh, the Windows and Linux example was good. To upgrade from 1 to 2.0, you can use the answer given to this question: You may wrap your legacy project in a brand new sf2 project by using this bundle. This way, you'll be able to migrate your project one piece at a time, and new functionality may be developed with sf2 as soon as you get the wrapper to work. You may be interested in this post about migrating. Upgrading to Symfony 3.0: First, make your application run on Symfony 2.8 without errors. Then, install the PHPUnit Bridge component and fix all the reported deprecation issues. Now you are ready to upgrade to Symfony 3.0. You can also use any of these tools to spot and fix those deprecations: Deprecation Detector (details), Symfony Upgrade Fixer. The bundle you mentioned has its compatibility confirmed only with Symfony 1.0 so far, according to the docs, so be careful when using it. I asked this question of one of the Symfony core devs at one of the Symfony community meetings, and they answered that the best way is to rewrite the code :(.
@Anna Adanchuk, what should I do after step 7 of IngewikkeldWrapperBundle? To upgrade from Symfony 2.8 to 3.4 (at the moment), you might find this tool useful: RectorPHP. It is an AST-based automated refactoring tool for use on the CLI. At the time of writing it contains around 60 changes, still growing every day.
common-pile/stackexchange_filtered
Set-Content Error : The process cannot access the file because it is being used by another process (Powershell) What I'm trying to do here is delete all shared folders on my computer except for the default ones, which I'm removing with a single space before I loop through. It was working for about 10 minutes when all I of the sudden I get an error message saying that my Set-Content command won't go through because "it is being used by another process." Any ideas? Thanks in advance! Side Note: I know that net share %%s /delete is echoed out. I just wanted to see what will be deleted before I actually put it into effect. DeletingSharedFiles.bat: set /p name= "Name of the Admin user | " net share> "C:\Users\%name%\Desktop\SharedFiles.txt" C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command "C:\Users\%name%\Desktop\SharedFiles.ps1" for /f "tokens=1" %%s in (C:\Users\%name%\Desktop\SharedFiles.txt) do( echo net share %%s /delete ) SharedFiles.ps1 (Get-Content "C:\Users\$env:name\Desktop\SharedFiles.txt") | Foreach-Object {$_ -replace "ADMIN\$", ""} | Foreach-Object {$_ -replace "C\$", ""} | Foreach-Object {$_ -replace "IPC\$", ""} | Set-Content "C:\Users\$env:name\Desktop\SharedFiles.txt" Output of DeletingSharedFiles.bat Set-Content : The process cannot access the file 'C:\Users\Sebastian\Desktop\SharedFiles.txt' because it is being used by another process. At C:\Users\Sebastian\Desktop\SharedFiles.ps1:5 char:1 + Set-Content "C:\Users\$env:name\Desktop\SharedFiles.txt" + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo: NotSpecified: (:) [Set-Content], IOException + FullyQualifiedErrorId : System.IO.IOException,Microsoft.PowerShell.Commands.SetContentCommand presumably you have tried to access the file before the previous process had entirely finished using the file. the solution seems likely to be adding a delay between accesses to the file. ///// as an aside ... WHY are you doing this in such a bizarre way? 
All of this can be done in PowerShell with no need for a BAT file.
common-pile/stackexchange_filtered
Convolution integral with limits I'm trying to implement a convolution integral in Python with limits, but I cannot get it to work. I have this convolution integral: Where t0 is the start year and t is the end year. E is a list of values (one value per year) and G is defined as: Where A, alpha and tau are known constants. Now I need to calculate the result from the convolution integral with the t0 to t limits, but I cannot get it to work. Scipy seems to not have the option to enter the limits. I tried to use the Convolve functions of Scipy and Numpy, but both cannot handle self-chosen limits inside the functions. Does someone know how to calculate the convolution integral with Python with the limits t0 and t? You can use sympy module: https://docs.sympy.org/latest/modules/concrete.html#concrete-class-reference @Goat Solutions Do you have an example of this? I think I should use this one: https://docs.sympy.org/latest/modules/discrete.html#convolution , but then how to provide the limits? The integration limits are provided like sympy.integrate(A, (tau, t0, t)). It will integrate the function A from t0 to t (for tau) But this is not a convolution integral right? I need to know use that with limits. It wouldn't work if you replace A (my example) to G*E (your example) ? Nope, that's why numpy and scipy both have their own function (convolve) for this type of integral. Scipy can evaluated arbitrary definite integrals with quad: https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html @GaneshGebhard If E is a list of values rather than a function, then it's not clear what you mean by an "integral". In any case, just redefine the function E (or wrap it in another function) so that it is zero outside of the interval from t0 to t. Once you do so, the integral you want is the full convolution integral between G and the modified E. @BenGrossmann I'm sorry for the confusion. What I mean by list is that there is a value for E for every year. 
So if my t0 is 2023 and my t is 2025, there will be three values for E. @Ganesh Ok, but for the purposes of the integral, what is E(t)? For instance, if E(2023) = 1 and E(2024) = 2, then what is E(2023.5)? The integral requires E(t) to be defined for all real valued inputs @BenGrossmann E(2023.5) will never occur, only integers. @Ganesh Then having E(tau) in the integral formula makes no sense. It sounds like what you actually want is a discrete convolution, which is a sum rather than an integral. @BenGrossmann Okay so what is your suggestion on putting this in Python? @GaneshGebhard My suggestion is that before you put anything into Python, you should figure out what you're actually trying to calculate. Another possibility besides the one I suggested is that you do want a continuous convolution and that you use E to build a stepwise function so that the integral makes sense. Both this calculation and the previous I suggested are common in the context of signal processing.
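Picking up the closing comments (E has one value per year, so the "integral" between t0 and t is really a discrete sum), here is a stdlib-only Python sketch. The formula for G was an image that did not survive extraction, so the kernel is a caller-supplied function; substitute the actual G built from the constants A, alpha and tau:

```python
# Sketch: with one E value per year, the convolution integral reduces to
#   result(t) = sum over year in [t0, t] of E[year] * G(t - year).
# The question's kernel G (with constants A, alpha, tau) is not visible in
# the extract, so G is passed in as a callable stand-in.
def convolve_years(E, t0, t, G):
    """E: one value per year for years t0..t (inclusive).
    G: callable giving the response `age` years after an emission.
    Returns the convolved value at year t."""
    years = range(t0, t + 1)
    return sum(E[i] * G(t - year) for i, year in enumerate(years))

# With the trivial kernel G(age) = 1 this is just the cumulative sum:
E = [1.0, 2.0, 3.0]
total = convolve_years(E, 2023, 2025, lambda age: 1.0)  # 1 + 2 + 3
```

For a decaying kernel you would pass, e.g., `lambda age: A * math.exp(-age / tau)` with your own constants.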
common-pile/stackexchange_filtered
How can I delete local realm in macOS? I want to debug my app from scratch so I need to remove its' Realm file. How do I do that in macOS? (deleting DerivedData, keychain and cleaning did not work) Thank you! I have a button with the following code to remove Realm data in Swift 4 func doDelete() { let realmURL = Realm.Configuration.defaultConfiguration.fileURL! let realmURLs = [ realmURL, realmURL.appendingPathExtension("lock"), realmURL.appendingPathExtension("note"), realmURL.appendingPathExtension("management") ] for URL in realmURLs { do { try FileManager.default.removeItem(at: URL) } catch let error as NSError { print(error.localizedDescription) } } } alternately you can manually remove the database portion using the finder by deleting the folder: ~/Library/Application Support/com.company_name.app_name the bundle identifier (found in the General settings for your app in Xcode) is the last part of the path. The path may vary depending on sandboxing, iOS/macOS and the bundle name. Edit With sandboxing on the location is ~/Library/Containers/com.company_name.app_name I have removed the app from this location (~/Library...) but running from Xcode still shows me the data. It's only stored at this location if the app was properly installed. @gerbil Did you mean that you deleted the file while the application was running? Or did you delete the file after exiting and re-launch again? If it is the former, even if you delete the file while opening, it will not actually be deleted until the file is closed. That is expected behavior. I deleted after exiting. This is not the location for apps that run from Xcode. @gerbil Yes, it is the correct path with sandboxing off. For more info please see Service Not Running Realm data can be removed at this folder: ~/Library/Containers/com.company_name.app_name The location I mentioned in my answer is correct when sandboxing is turned off (at stated). 
With sandboxing on, it's stored in a different location; I updated my answer and provided a link to some additional information.
common-pile/stackexchange_filtered
Algebraic rules concerning the complex power function $z^\alpha$ In Gamelin's Complex Analysis there is a discussion regarding the complex power function. Let $\alpha$ be an arbitrary complex number. Define the power function $z^\alpha$ to be the multivalued function $$z^\alpha = e^{\alpha \log(z)}$$ where $\log(z) = \log |z| + i \operatorname{Arg}(z) + 2\pi d i$ for $d\in \mathbb{Z}$. Letting $z = i$ and $\alpha = i$ we get $$i^i = e^{\frac{-\pi}{2}}e^{-2m\pi}$$ for $m \in \mathbb{Z}$. Similarly, for $z = i$ and $\alpha = -i$ we obtain $$i^{-i} = e^{\frac{\pi}{2}}e^{-2k\pi}$$ for $k \in \mathbb{Z}$. Taking the product of both results we get $$(i^i)(i^{-i}) = e^{2n\pi},$$ and since $e^{2n\pi} = 1$ only when $n = 0$, as multivalued functions $$(i^i)(i^{-i}) \neq i^0 = 1,$$ and Gamelin points out that the usual algebraic rules do not apply to power functions when they are multivalued. What else, except that $(i^i)(i^{-i}) \neq i^0 = 1$, can be concluded about the algebraic rules of multivalued power functions? Of course it means that it is not always the case that $(z^a)(z^{-a}) = 1$ for arbitrary $a \in \mathbb{C}$. But am I missing something? On a side note, I realize I am not completely sure what is meant by 'multivalued'. Is it just that it is multivalued because $z = x + iy$ where $x$ and $y$ can vary on $\mathbb{R}$, or is it because $\alpha$ is arbitrary? (Another example in the book states that when $\alpha$ is an integer, $z^{\alpha}$ is single-valued. Why?) Thanks! You should avoid the "multivalued" concept. $\log$ is an analytic group isomorphism $(\Bbb{C}^*, \times) \to (\Bbb{C}/2i\pi \Bbb{Z},+)$ whose inverse is $\exp : (\Bbb{C}/2i\pi \Bbb{Z},+) \to (\Bbb{C}^*, \times)$. Do you see how the multiplication by $a$ can be defined for $z \in \Bbb{C}/2i\pi \Bbb{Z}$ when $a$ is an integer, rational, real, or complex? Mixing different branches of the logarithm never leads to anything good.
When you define $z^a\equiv e^{\log(z) a}$ then let $\log$ be one particular branch and you will have yourself a consistent definition for which $z^a z^{-a} = 1$ (and by fixing a particular branch I mean that you take $\log(z) = \log|z| + i\arg(z) + 2\pi ik$ for a fixed value of $k$). @Winther, from what you are saying it seems like $z^\alpha$ is multivalued in the sense that it is not injective? And $z^\alpha$ is single-valued when related to a particular branch of its local inverse? Am I talking nonsense? But even when we consider a particular branch of the logarithm, do the usual power rules hold? Thanks!! @reuns, I don't understand what you mean; I have never heard of group isomorphisms. I googled the definition but can't really connect it to our discussion. And also, what do you mean by multiplication by $a$? @iaenstrom You are the one that gets to define $z^\alpha$ as a normal function. I'm arguing that if you need to define it for non-integer $\alpha$ then define it so that you only have one answer (i.e. choose a particular branch). It's like saying the square root of $4$ is $+2$ instead of $-2$. Both are equally valid, but when we define a square-root function we choose the former.
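The branch bookkeeping above can be checked numerically. Python's complex power uses the principal branch, and the product $i^i \cdot i^{-i}$ equals 1 whenever the same branch is used for both factors; the $e^{2n\pi}$ values only appear when the branch indices $m$ and $k$ are mixed, exactly as in the text. A small sketch:

```python
import cmath
import math

# Principal branch (d = 0): log(i) = i*pi/2, so i**i = exp(-pi/2).
principal = 1j ** 1j
assert abs(principal - math.exp(-math.pi / 2)) < 1e-12

# Branch d of the logarithm: log_d(i) = i*(pi/2 + 2*pi*d).
def i_to_the_i(m):
    # exp(i * log_m(i)) = exp(-(pi/2 + 2*pi*m)), a real number
    return cmath.exp(1j * 1j * (math.pi / 2 + 2 * math.pi * m))

def i_to_the_minus_i(k):
    # exp(-i * log_k(i)) = exp(pi/2 + 2*pi*k)
    return cmath.exp(-1j * 1j * (math.pi / 2 + 2 * math.pi * k))

# Same branch for both factors (m = k): the product is 1.
# Mixing branches (m != k): the product is exp(2*pi*(k - m)), as in the text.
```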
common-pile/stackexchange_filtered
How to get a rigid body to rotate? I would like to get a cube to rotate in a scene as seen in this image. Is there any way to do this without animation? Could you use animation nodes? Of course, unless procedural animation is your thing, it might be more difficult than just animating it. It's really very easy to do this; just follow these steps: In addition to this, if you want to add a force along the forward axis, then add a linear velocity or force in that axis. There isn't a way that I know of to do that without animation. But that would be very simple to do. You can simply add your cube, press "I" and choose LocRotScale, then go to another frame, rotate it, and insert a LocRotScale keyframe again.
common-pile/stackexchange_filtered
Can text generated by pseudo-elements be made user-selectable? I am digitizing a 75-page index of job competencies. Users need to be able to link to each competency, competencies are meaningfully grouped, and competencies are often a sentence long. So instead of using each competency's text as an anchor, I'm creating a labeling scheme and creating anchors for each item. The competency "Know your right hand from your left" might be given the label "E.1.A.2.2". This is tedious. I'm trying to save myself from manually adding the label to every competency again (since I've already added it once in the anchor.) To show the label, I can use the pseudo-element :before to generate the label from the anchor, {content: attr(name);}. That works nicely, but the generated text isn't selectable. To create a link to a specific competency, users will have to manually type in "#E.1.A.2.2", which invites more user error than I'd like to think about. Is it possible to make text generated by a pseudo-element selectable? I'm open to other suggestions as well. If creating each label in HTML is the only way to get to the needed result, I'll do that. But ouch. I don't believe it's possible. That text isn't even readable by a screen reader. I believe you'll have to go the JS route for your particular task. I just came across this question which confirms my theory: http://stackoverflow.com/questions/19914349/how-can-i-make-generated-content-selectable This question might help: http://stackoverflow.com/questions/2651739/how-to-access-css-generated-content-with-javascript Thanks, sorry my search-fu isn't up to the task today. @TylerH that question is interesting, though I've given up on incremental counters. I have no JS experience and can sort of follow the conversation there. 
To make it selectable and indexable by search engines, I think the only way is to add the content using JS. For example, with jQuery: $div.prepend("<span>My Text</span>") It looks like our CMS loads jQuery 1.7.2, though I'm totally out of my element with jQuery. I only want to prepend the anchor name to the text of the link. So here's my example code from above: <a name="E.1.A.1.2">Know your right hand from your left</a> I'd like the result to be something like <span class="label">E.1.A.2.2</span><a name="E.1.A.1.2">Know your right hand from your left</a> for each of the hundreds of competencies in the index. Can anyone point me in a direction?
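One direction, sketched below: skip CSS-generated content entirely and prepend a real, selectable span per anchor with a few lines of jQuery. The labelMarkup helper is illustrative, not from the original page; the "label" class matches the markup the asker wants:

```javascript
// Sketch: build the selectable label markup for one anchor name.
function labelMarkup(name) {
  return '<span class="label">' + name + '</span>';
}

// With jQuery loaded on the page, prepend a real label before each named anchor:
// $('a[name]').each(function () {
//   $(labelMarkup(this.name)).insertBefore(this);
// });
```

Because the spans are real DOM nodes, the labels are selectable, copyable, and visible to screen readers, unlike :before content.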
common-pile/stackexchange_filtered
After adding microdata not listed in google In order to improve the search presence of my website, I added microdata to it (http://schema.org/Event, http://schema.org/Offer). But since I did this google can not find my website any more, even though google webmaster says that there are no problems and errors with it. The data is read correctly. before I added the microdata my website was on the third position when I searched for it. Now it is not listed with the same keywords. <div itemscope itemtype="http://schema.org/Event"> <meta itemprop="name" content="IGNITE THE NIGHT - Maturaball des BRG Kepler" /> <meta itemprop="url" content="www.keplerball.at" /> <meta itemprop="location" content="Kammersäle, Graz" /> <meta itemprop="image" content ="http://www.keplerball.at/wp-content/uploads/2014/10/vorschau_cropped_compressed.jpg" /> <b>WER?<br> <table><tr><td> MaturantInnen des BRG Kepler </td></tr></table> <br> WANN?<table><tr> <td>Datum:</td> <td>Freitag, 28. November</td> </tr> <tr> <td>Einlass:</td> <td><meta itemprop="startDate" content="2014-11-28T19:00">19 Uhr</time></td> </tr> <tr> <td>Polonaise:</td> <td>20:30 Uhr</td> </tr> </table> <br> PREISE<table> <tr> <td>VVK</td> <div itemprop="offers" itemscope itemtype="http://schema.org/Offer"> <meta itemprop="price" content="16" /> <meta itemprop="priceCurrency" content="EUR" /> <meta itemprop="category" content="VVK" /> <td>16€</td> </div> </tr> <tr> <td>AK</td> <div itemprop="offers" itemscope itemtype="http://schema.org/Offer"> <meta itemprop="price" content="20" /> <meta itemprop="priceCurrency" content="EUR" /> <meta itemprop="category" content="AK" /> <td>20€</td> </div> </tr> </table> <br> WO? <br> <table><tr><td> Kammersäle </td></tr></table> <br> DANACH?<br> <table><tr><td> Flann O'Brien </td></tr></table> </b> </div> Why not? How to it the right way? @Hacketo Meta tags are one of the suggested ways of adding non-visible microdata. 
See the Non-visible content section in https://support.google.com/webmasters/answer/176035?hl=en Of course yes, but in this case these should not be meta tags. The price is displayed, the date, the description ... Useful stuff for microdata: http://www.google.com/webmasters/tools/richsnippets This question appears to be off-topic because it is about an SEO issue. It might be on-topic on [webmasters.se]. As far as I know, you should add the itemprop values directly to the element: Don't: <td><meta itemprop="startDate" content="2014-11-28T19:00">... Do: <td itemprop="startDate" content="2014-11-28T19:00">... I'm not entirely sure, but I think search engines treat <meta> tags inside the <body> as malformed code. @StefanSchuchlenz: It's valid if used for Microdata (or for RDFa).
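Pulling the comments together, a hedged sketch of what the visible-markup version could look like. It is illustrative only: it reuses the question's own values but puts the properties on the elements users actually see, with a proper time element instead of the stray closing tag in the original snippet:

```html
<div itemscope itemtype="http://schema.org/Event">
  <span itemprop="name">IGNITE THE NIGHT - Maturaball des BRG Kepler</span>
  <time itemprop="startDate" datetime="2014-11-28T19:00">19 Uhr</time>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="price">16</span>€ (<span itemprop="category">VVK</span>)
    <meta itemprop="priceCurrency" content="EUR">
  </div>
</div>
```

Marking up visible content avoids the hidden-content concern from Google's guidelines while keeping the same structured data.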
common-pile/stackexchange_filtered
Execute JSP files in IIS7 and Tomcat6 I want to execute JSP files on a Windows Server that has IIS7, Tomcat6 and Java 1.6 installed. Already installed: Java 1.6, IIS7, Tomcat6, BonCode Apache Tomcat AJP 1.3 Connector. Current scenario: I am able to execute JSP files on the server if they are taken from the Tomcat examples folder, but if I create a custom JSP file it does not execute. i.e. www.somedomain.com/examples/jsp/jsp2/el/basic-arithmetic.jsp [this works] www.somedomain.com/test.jsp [this does not work] Can't figure out what exactly the issue is. Found the solution on my own. I simply chose to convert my Tomcat server into the web server and pointed my IP address at it in the configuration.
common-pile/stackexchange_filtered
Dropdown of available statuses I'm trying to figure out what I'm doing wrong to get the error message. I have my User model that has a status_id field, which is a foreign key to my statuses table with an id and name field. public function scopeAvailable($query, $current = null) { $options = $this->getAvailableOptions($current)->toArray(); return $query->whereIn('name', $options); } public function getAvailableOptions(string $current = null) { $options = collect(['Active', 'Inactive']); switch ($current) { case 'Active': return $options->merge(['Fired', 'Suspended', 'Retired']); case 'Injured': return $options->merge(['Fired', 'Retired']); case 'Suspended': return $options->merge(['Suspended', 'Retired']); } return $options; } public function availableStatuses() { $status = $this->status ? $this->status->name : null; return UserStatus::available($status)->get(); } Type error: Argument 1 passed to App\Models\UserStatus::getAvailableOptions() must be of the type string or null, object given, called in /home/vagrant/projects/app/app/Models/UserStatus.php on line 45 You're passing an object when calling the getAvailableOptions method. Looking at line 45 in UserStatus.php will confirm that. But it should be either a string or null; how is $status an object at that point? You haven't shared that code with us, so we can't answer.
common-pile/stackexchange_filtered
Class not found with composer autoload I have my file structure as / /app/ /app/logs/ /app/templates/ /app/index.php /public_html/ /public_html/.htaccess /public_html/index.php /vendor /vendor/(all vendor here) My vhost is pointing to /public_html in app/Index.php namespace App; class Index {} Composer.json "autoload": { "psr-0": { "App\\": "app/" } } Yet it is still showing as ( ! ) Fatal error: Class 'App\Index' not found in C:\wamp\www\project\public_html\index.php on line 34 Line 34: new \App\Index(); Using Slimframework as well if that matters, can't think what is wrong It seems psr-0 failed, changing it to psr-4 works :) Since you were using the PSR-0 standard, PHP was looking for the file app/App/Index.php, which does not exist. Note that in PSR-0, you define a base directory (app in your case) where the mapped namespace (App) can be found. However, the file structure within that base directory should exactly match the fully qualified class names. So class App\FooBar should be in the file app/App/FooBar.php. Note that app is the base directory, and App is the directory that contains all subdirectories and PHP files for that namespace. Since this is not the case in your application (and also because PSR-0 has been deprecated), you should (as you already did) use PSR-4, the new autoloading standard, instead. In PSR-4, you can directly map a certain namespace to a certain directory. In your case, you have mapped the App namespace to the app directory, so that PHP will open the app/Index.php file if you need to use the App\Index class.
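For reference, the working PSR-4 block the asker switched to might look like this (a sketch matching the layout above; with PSR-4 the App\ prefix maps straight to app/, so App\Index resolves directly to app/Index.php instead of app/App/Index.php):

```json
{
    "autoload": {
        "psr-4": {
            "App\\": "app/"
        }
    }
}
```

After editing composer.json, run composer dump-autoload so Composer regenerates the autoloader files in vendor/.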
common-pile/stackexchange_filtered
auto send sms when gps location is available i'm developing an app to send user location via sms after a fixed time and if the phone restarts the app should starts automatically in the background without any launcher activity. I have quiet accomplished my task I'm only facing problem in getting location, as gps takes some time and app gets crash when it does not find any location. here is my code of main Activity public class MainActivity extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); startLocationTracking(); } private void startLocationTracking() { AlarmManager am = (AlarmManager)getSystemService(ALARM_SERVICE); Intent alarmintent1 = new Intent(MainActivity.this, AlarmReceiver.class); PendingIntent sender1=PendingIntent.getBroadcast(MainActivity.this, 100, alarmintent1, PendingIntent.FLAG_UPDATE_CURRENT | Intent.FILL_IN_DATA); try { am.cancel(sender1); } catch (Exception e) { // TODO Auto-generated catch block e.printStackTrace(); System.out.println("exjfkd"+e); } Calendar cal = Calendar.getInstance(); cal.add(Calendar.MINUTE,10); am.setRepeating(AlarmManager.RTC_WAKEUP, cal.getTimeInMillis(), 1000*600, sender1); System.out.println("set timer"); } } public class AlarmReceiver extends BroadcastReceiver{ long time = 600* 1000; long distance = 10; @SuppressLint("NewApi") @Override public void onReceive(final Context context, Intent intent) { System.out.println("alarm receiver...."); Intent service = new Intent(context, MyService.class); context.startService(service); //Start App On Boot Start Up if ("android.intent.action.BOOT_COMPLETED".equals(intent.getAction())) { Intent App = new Intent(context, MainActivity.class); App.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); context.startActivity(App); } try{ LocationManager locationManager = (LocationManager)context .getSystemService(Context.LOCATION_SERVICE); Criteria criteria = new Criteria(); String provider = 
locationManager.getBestProvider(criteria, true); locationManager.requestLocationUpdates(provider, time, distance, locationListener); Location location = locationManager.getLastKnownLocation(provider); TelephonyManager tm = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE); String device_id = tm.getDeviceId(); // returns IMEI number String phoneNo = "+923362243969"; String Text = "Latitude = " + location.getLatitude() +" Longitude = " + location.getLongitude() + " Device Id: " + device_id; SmsManager smsManager = SmsManager.getDefault(); smsManager.sendTextMessage(phoneNo, null, Text, null, null); Log.i("Send SMS", ""); } catch (Exception e) { e.printStackTrace(); } this.abortBroadcast(); } LocationListener locationListener = new LocationListener() { @Override public void onStatusChanged(String provider, int status, Bundle extras) { } @Override public void onProviderEnabled(String provider) { } @Override public void onProviderDisabled(String provider) { } @Override public void onLocationChanged(Location location) { } }; } i'm little confuse in logic I want my app to send sms when gps coordinates are available. and if gps in disabled on the phone it if network is available on the phone it should get the location through network and send it through the network. You can send SMS with the android.telephony package. You can see how add to your project here: How do I add ITelephony.aidl to eclipse? Something like this permit you send a textSMS: SmsManager sms = SmsManager.getDefault(); sms.sendTextMessage(phoneNumber, scAddress, texto, null, null); Hope it helps you!! I have method for sending msgs. 7AlarmManager am = (AlarmManager)getSystemService(ALARM_SERVICE); Intent alarmintent1 = new Intent(MainActivity.this, AlarmReceiver.class); PendingIntent sender1=PendingIntent.getBroadcast(MainActivity.this, 100, alarmintent1, PendingIntent.FLAG_UPDATE_CURRENT | Intent.FILL_IN_DATA); Thank you for contributing to the Stack Overflow community. 
This may be a correct answer, but it’d be really useful to provide additional explanation of your code so developers can understand your reasoning. This is especially useful for new developers who aren’t as familiar with the syntax or struggling to understand the concepts. Would you kindly [edit] your answer to include additional details for the benefit of the community?
common-pile/stackexchange_filtered
onbackpressed method error I am using authentication to enter into a page, after authenticated only the user enters into the page. i wrote a code for onbackpressed(), but it is not working. Here DatabaseDemo and Login are the two classes. when i press the back button the login class with username and password is displaying. DatabaseDemo.java public class DatabaseDemo extends TabActivity { DatabaseHelper dbHelper; GridView grid; TextView txtTest; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); SetupTabs(); } @Override public boolean onCreateOptionsMenu(Menu menu) { menu.add(1, 1, 1, "Add Employee"); return true; } public boolean onOptionsItemSelected(MenuItem item) { switch (item.getItemId()) { //Add employee case 1: Intent addIntent=new Intent(this,AddEmployee.class); startActivity(addIntent); break; } super.onOptionsItemSelected(item); return false; } void SetupTabs() { TabHost host=getTabHost(); TabHost.TabSpec spec=host.newTabSpec("tag1"); Intent in1=new Intent(this, AddEmployee.class); spec.setIndicator("Add Employee"); spec.setContent(in1); TabHost.TabSpec spec2=host.newTabSpec("tag2"); Intent in2=new Intent(this, GridList.class); spec2.setIndicator("Employees"); spec2.setContent(in2); host.addTab(spec); host.addTab(spec2); } @Override public void onBackPressed() { Intent i = new Intent(DatabaseDemo.this, Login.class); startActivity(i); } } I am having some more classes, it is working fine for the other intents. 
Login.java public class Login extends Activity implements OnClickListener{ Button btn; EditText et1, et2; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.login); btn = (Button)findViewById(R.id.btnlogin); btn.setOnClickListener(this); et1 = (EditText)findViewById(R.id.Leditename); et2 = (EditText)findViewById(R.id.Leditpw); } @Override public void onBackPressed() { super.onBackPressed(); finish(); } @Override public void onClick(View v) { String ename = et1.getText().toString().trim(); System.out.println("ename is..." +ename); String epw = et2.getText().toString().trim(); System.out.println("Password is..." +epw); if(ename.equals("srikanth") && epw.equals("12345")){ Toast.makeText(getApplicationContext(), "valid login..!", Toast.LENGTH_LONG).show(); Intent in = new Intent(getApplicationContext(), DatabaseDemo.class); startActivity(in); } else { Toast.makeText(getApplicationContext(), "Invalid authentication..!", Toast.LENGTH_LONG).show(); Intent intent = new Intent(getApplicationContext(), Login.class); startActivity(intent); } } Logcat is showing no error. 02-14 17:45:33.595: I/ActivityManager(59): Starting activity: Intent { cmp=com.android.databaseex/.DatabaseDemo } 02-14 17:45:34.835: I/ActivityManager(59): Displayed activity com.android.databaseex/.DatabaseDemo: 1141 ms (total 1141 ms) 02-14 17:45:50.145: I/System.out(613): ename is...srikanth 02-14 17:45:50.145: I/System.out(613): Password is...12345 02-14 17:45:50.175: I/ActivityManager(59): Starting activity: Intent { cmp=com.android.databaseex/.DatabaseDemo } 02-14 17:45:51.055: I/ActivityManager(59): Displayed activity com.android.databaseex/.DatabaseDemo: 819 ms (total 819 ms) you need to skip super.onBackPressed(); line: @Override public void onBackPressed() { Intent i = new Intent(DatabaseDemo.this, Login.class); startActivity(i); } i pasted my logcat here.. and am not getting any error in the logcat.. 
Even if I delete the onBackPressed() method, it behaves the same way. If ignoring onBackPressed still causes the same problem, then I believe there must be something else wrong in your code. Can you give me your email ID, so that I can send you my code? http://pastebin.com/d5vvcM70 is the link for my code. I put 6 classes there, and I skipped the 3 classes that do the normal things like set value and get value, and the DialogListener and utilities classes. Please see this and ask me if you want some more. What is TabHost host=getTabHost(); in your DatabaseDemo.java? I guess it should be something like TabHost host=(TabHost)findViewById(R.id.your_tabhost_id_in_layout);. Fix this and I hope it should work. It's working fine when I use a new button instead of onBackPressed. I tried TabHost host=(TabHost)findViewById(R.id.your_tabhost_id_in_layout); as well, but it gives an error. @Override public void onBackPressed() { Intent i = new Intent(DatabaseDemo.this, Login.class); startActivity(i); } This is working perfectly. Please check your AndroidManifest file first: is there an entry for the activity Login? If there is, then there is no issue; otherwise, declare the Login activity in the manifest file. Login is the first activity that comes at startup; do I need to add it another time in the manifest file? Override the method onBackPressed in your activity as below: @Override public void onBackPressed(){ Intent i = new Intent(DatabaseDemo.this, Login.class); startActivity(i); }
Webview Causing Fatal Exception WebViewCoreThread

I am getting this fatal exception while loading an HTML file from the sdcard in a WebView. This happens only with a few HTML files; others load properly. What can the problem be? I can't resolve this issue.

E/SensorManager(2425): Error: Sensor Event is null for Sensor: {Sensor name="MPL Accelerometer", vendor="Invensense", version=1, type=1, maxRange=19.6133, resolution=0.039226603, power=5.5, minDelay=1000}
08-06 23:52:41.170: D/chromium(2425): Unknown chromium error: -6
08-06 23:52:41.250: E/Web Console(2425): Uncaught ReferenceError: OTW is not defined at file:///storage/emulated/0/123/ads/1/index.html:2104

FATAL EXCEPTION: WebViewCoreThread
java.lang.NullPointerException
    at android.webkit.DeviceMotionService$2.run(DeviceMotionService.java:103)
    at android.webkit.DeviceMotionService$2.run(DeviceMotionService.java:103)
    at android.os.Handler.handleCallback(Handler.java:730)
    at android.os.Handler.dispatchMessage(Handler.java:92)
    at android.os.Looper.loop(Looper.java:137)
    at android.webkit.WebViewCore$WebCoreThread.run(WebViewCore.java:814)
    at java.lang.Thread.run(Thread.java:841)

Then give us some code. The error log isn't really enough...

Seems like there should be more to that error; can you post the rest, and the HTML file causing the issue as well?

I have edited my question again with a few more lines which I get along with the exception, for your reference; the HTML file is confidential and cannot be shared.

I suspect this issue is because of handling device motion in the HTML file. Is this not supported in Android? Is it possible to load HTML in a WebView which has a DeviceMotionEvent? Please help. This issue still persists; is there any other way to fix it in Android?
How to have 2 headers (row & column)

I want to create a table where there are 2 axes. It seems I can only create a table where the first row is a header/axis. My table looks something like this:

                1/1/21      2/1/21      3/1/21
10:00:00 AM     Work        Work        Work
11:00:00 AM     Relaxing    Work        Watching TV
12:00:00 AM     Work        Relaxing    Watching TV
13:00:00 AM     Lunch       Lunch       Work

As you can see, the time and the date are axes. This way I can create a column graph and see how many hours I spent doing one thing each day. And I can also create another chart and see what thing I do the most during each hour. How am I able to do this?

Records have one "axis" only: fields. So, combine date and time into one field, have another field for the work type, and then use a crosstab query filtering on a period of dates and grouping on the time of the date/time values.
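The reshaping the answer describes — one field per former "axis", then aggregate — can be sketched outside Access too. Here is the same idea in plain Python (the sample records are invented to mirror the table above, so treat this as an illustration of the structure, not Access syntax):

```python
from collections import Counter

# each record: (date, time, activity) -- one field per former "axis"
records = [
    ("1/1/21", "10:00", "Work"),
    ("1/1/21", "11:00", "Relaxing"),
    ("1/1/21", "12:00", "Work"),
    ("2/1/21", "10:00", "Work"),
    ("2/1/21", "11:00", "Work"),
]

# "column graph" data: hours spent on each activity per day
hours_per_day = Counter((date, activity) for date, _, activity in records)
assert hours_per_day[("1/1/21", "Work")] == 2
assert hours_per_day[("2/1/21", "Work")] == 2

# "per-hour" data: what is done at each time of day, across dates
by_hour = Counter((time, activity) for _, time, activity in records)
assert by_hour[("10:00", "Work")] == 2
```

In Access itself, the equivalent step is a TRANSFORM/PIVOT crosstab query over the combined date/time field.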
Checking if divisible with 90+180*n

I want, in PHP, to check if $var1 == 90+180*n, where n is all natural numbers, i.e. 1, 2, 3, 4... Thanks in advance.

No, I want to check if $var1 is equal to 90, 270, 450... If I used your code, 180 would be true.

Ahhh, I get it. Thank you. Silly me.

Just use:

$var1 % 180 == 90

It's exactly what you need by definition of the modulo operation.

You need to include the parentheses to ensure your calculation comes out correctly:

if ( $var1 == (90+(180*$n)) )

Or just calculate your value before you check it:

$checkme = 90 + (180*$n);
if ( $var1 == $checkme ) {}
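A quick sanity check of the accepted modulo test, sketched in Python for brevity (for non-negative integers the PHP expression `$var1 % 180 == 90` behaves the same; the `v >= 90` guard is added here because the two languages treat negative operands differently):

```python
def is_on_sequence(v):
    # true exactly for v = 90 + 180*n, n = 0, 1, 2, ...
    return v >= 90 and v % 180 == 90

assert all(is_on_sequence(90 + 180 * n) for n in range(100))
assert not any(is_on_sequence(v) for v in (0, 100, 180, 360))
```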
A Lottery Solidity Smart Contract that accepts an ERC20/BEP20 token instead of ether to enter the lottery

function enter() public payable {
    require(msg.value > .01 ether);
    players.push(msg.sender);
}

Change this function to accept an ERC20/BEP20 token to enter the lottery.

You need to establish the address(es) of the tokens to accept and then use the Approve and TransferFrom pattern.

import "IERC20.sol";

contract Lotto {
    address public token;
    address[] public players;

    constructor(address token_) {
        token = token_;
    }

    function enter(uint256 amount) public payable {
        // 0.01 * 10^18 base units, i.e. assuming an 18-decimal token
        require(amount > .01 ether);
        // it is assumed that the UI already prompted the user to approve this transferFrom
        IERC20(token).transferFrom(msg.sender, address(this), amount);
        players.push(msg.sender);
    }
}

Special note: If accepting arbitrary tokens (you can't confirm the contracts are well-behaved beforehand), then wrap the IERC20 interface with SafeERC20 from OpenZeppelin, where you also get the IERC20 interface. Hope it helps.

Is there an example for that? https://gist.github.com/willianmano/eac9b24ff9df04cce88465f485a8fdbc is the contract where I want to change the ether to an ERC20 token as the fee for the lottery. @RobHitchens can you elaborate more on this topic.. I am trying to send the ERC20 token to the address of a contract.

The user interface does a two-step. First, the user sends an "approve" transaction to the token contract to authorize the receiver contract to draw from the user's account a certain amount of tokens. Then, the user transacts with the receiver contract and it pulls funds from the user's address in the token contract. This is contingent on the first step. If there is no pre-approved allowance, it will fail, by design.
Flutter Module compiled with an incompatible version of Kotlin

I know this question has been asked a lot, but I seem to have tried everything, and yet everything fails, and I have no clue why. I am building my Flutter app as an .apk for release, and get several errors stating:

Module was compiled with an incompatible version of Kotlin. The binary version of its metadata is 1.9.0, expected version is 1.6.0.

My build.gradle:

buildscript {
    ext.kotlin_version = '1.9.0'
    repositories {
        google()
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:7.3.0'
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

My gradle-wrapper.properties:

distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.3-all.zip

I tried:

Changing the Android Gradle plugin version to the newest one (8.3.0) in build.gradle: classpath 'com.android.tools.build:gradle:7.3.0'. I get an error:

A problem occurred configuring root project 'android'.
Could not resolve all files for configuration ':classpath'.
Could not find com.android.tools.build:gradle:8.3.0.
Searched in the following locations:
  - https://dl.google.com/dl/android/maven2/com/android/tools/build/gradle/8.3.0/gradle-8.3.0.pom
  - https://repo.maven.apache.org/maven2/com/android/tools/build/gradle/8.3.0/gradle-8.3.0.pom
Required by:
  project :

Changing the Kotlin version in build.gradle to the expected version: ext.kotlin_version = '1.6.0'. I also get an error: Your project requires a newer version of the Kotlin Gradle plugin.

Changing the distribution to an older one in gradle-wrapper.properties: distributionUrl=https\://services.gradle.org/distributions/gradle-8.3-all.zip, but that doesn't seem to do anything.

I am lost, as nothing that I found online helps. I hope I can find an answer here.
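For what it's worth, the root cause of this class of error is a version skew: the Android Gradle plugin in use (7.3.0 here) was built against an older Kotlin compiler than the kotlin-gradle-plugin on the classpath (1.9.0), so it cannot read the newer metadata. One way out is to bring the two in step by raising the AGP rather than lowering Kotlin. The sketch below is a plausible, generally compatible combination for the Gradle 8.3 wrapper already configured above — the exact AGP patch version is an assumption, not verified against this project:

```groovy
buildscript {
    // Kotlin 1.9.0 needs a recent AGP; AGP 8.x requires Gradle 8.0+,
    // which matches the gradle-8.3 wrapper configured above.
    ext.kotlin_version = '1.9.0'
    repositories {
        google()
        mavenCentral()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:8.1.1'  // assumed available on google(); any 8.0/8.1 patch should do
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}
```

The other usual route is to keep AGP 7.3.0 and downgrade Kotlin only as far as the toolchain can read (e.g. 1.7.x rather than the refused 1.6.0); check the AGP release notes' compatibility table for the pair you settle on.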
SqlCommand - conversion from varchar to int

Hello, I have got this code:

SqlCommand sc2 = new SqlCommand("SELECT ... WHERE akce=" + zakce.Text, spojeni);
spojeni.Open();
object vysledek2 = sc2.ExecuteScalar(); // This is the exception line

I'm receiving the following exception:

System.Data.SqlClient.SqlException (0x80131904): Conversion failed when converting the varchar value '137000-01' to data type int.

On the exception line, when I set the breakpoint, vysledek2 is null and then the exception occurs.

Consider changing your variable names to English - this is quite difficult to read :) Also you have a typo between sumpayments and FROM.

BTW - please see "Bobby Tables" - via Google, Bing, etc.

Yes, this is a really good way to code if you want to become another SQL injection statistic. Never. Ever. Concatenate. Input.

SqlCommand sc2 = new SqlCommand("SELECT SUM(ISNULL(payments,0)) AS sumpayments FROM clientpayments WHERE akce=@akce", spojeni);
sc2.Parameters.AddWithValue("akce", zakce.Text);

Also - commands, connections, etc. are all IDisposable - you should use using around each of them.

Hello Marc, thanks for responding. I tried what you advised but still got the same exception; don't you have an idea where I made a mistake?

@Marek:- As a piece of advice, why do you want to correct your mistake with a bad coding practice??

@Marek that cannot throw the same exception - there is no string/int conversion here. Check you have built etc., and check the exact message.

@RahulTripathi I guess the OP may not know what's wrong with his code; this of course solves his problem, but he doesn't have a chance to know why his code didn't work (of course, unless he took a look at some other answers).

const string sqlSelect = @"SELECT ...
                           WHERE akce=@akce";

using (spojeni = new SqlConnection(connectionString))
using (var command = new SqlCommand(sqlSelect, spojeni))
{
    command.Parameters.AddWithValue("@akce", zakce.Text);
    command.Connection.Open();
    object vysledek2 = command.ExecuteScalar();
}

Firstly, try changing

SqlCommand sc2 = new SqlCommand("SELECT SUM(ISNULL(payments,0)) AS sumpayments FROM clientpayments WHERE akce=" + zakce.Text, spojeni);

to something like

SqlCommand sc2 = new SqlCommand("SELECT SUM(ISNULL(payments,0)) AS sumpayments FROM clientpayments WHERE akce='" + zakce.Text + "'", spojeni);

Secondly, have a look at what SQL Injection is and how to use parameterized queries.
On the observability of nonlinear systems I am looking for a reference on the observability of nonlinear systems of the form $$\begin{aligned} \dot{x} &= f(x,u) \\ y &= h(x) \end{aligned}$$ As far as I remember, it should be something dual to the controllability/feedback linearization via the Lie derivatives. Unfortunately, years have passed since I touched the subject last time and, surprisingly, I cannot find anything related in my notes. I am looking, e.g., for a tutorial or lecture notes that can be given to (good) students. There is a review with a bibliography of 176 items: https://link.springer.com/article/10.1007/s10513-005-0147-5 I think every engineering book on control theory has a chapter on Observability/Controllability. E.g. here http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page @don'tstopthebit I see only linear systems there. Not the same Have you checked the book by Sontag or the one by Sastry? I assume there should be something on observability on Isidori's Nonlinear Control Systems. If not, perhaps on Jurdjevic's Geometric Control Theory?
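For quick reference while hunting sources: the construction dual to the controllability test is the observability rank condition, built from repeated Lie derivatives of the output map along the dynamics. This is standard material (Isidori's Nonlinear Control Systems covers it), but it is stated here from memory, so verify it against whichever text ends up in the students' hands:

```latex
\mathcal{O}(x) \;=\; \operatorname{span}\bigl\{\mathrm{d}h(x),\; \mathrm{d}L_f h(x),\; \mathrm{d}L_f^{2} h(x),\;\dots\bigr\},
\qquad
L_f h \;=\; \frac{\partial h}{\partial x}\, f,
\qquad
L_f^{k+1} h \;=\; L_f\bigl(L_f^{k} h\bigr),
```

and the system is locally weakly observable at $x_0$ if $\dim \mathcal{O}(x_0) = n$. For systems with inputs, the codistribution is generated by Lie derivatives along the vector fields $f(\cdot,u)$ for the admissible (e.g. constant) inputs as well.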
Hayley adjusts the laser pointer aimed at the milk-water mixture.

The scattered light pattern keeps changing when I add more milk drops. Look how the intensity jumps around instead of scaling smoothly.

That's because you're hitting the transition zone where particle size matters most. The fat globules in milk are roughly the same size as your red laser wavelength.

But the scattering should just get stronger with more particles, right? More stuff to bounce light off.

Not linearly though. The signal goes with the square of particle diameter, so doubling the size gives you four times the signal. Plus there's this angular dependency thing.

I'm seeing that. The forward scattering is way brighter than the side scattering, especially with the bigger drops.

Exactly. When particle size approaches wavelength, you get these resonance effects. Some sizes scatter incredibly strongly, others barely at all.

So this is why my green laser pointer looks completely different through the same mixture?

Right, because now you're changing the ratio between particle size and wavelength. Green light sees those same fat globules as relatively smaller targets.

That explains why the atmospheric guys use different wavelengths for different particle measurements. They're tuning into specific size ranges.

And why simple Rayleigh theory fails here. Those equations assume particles much smaller than wavelength, but milk fat globules are borderline cases.

The Maxwell equations must get ugly for this intermediate regime.

Infinitely ugly. You need spherical harmonics and multipole expansions. No simple approximations work when size and wavelength are comparable.
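A quick back-of-envelope check of the regime the speakers describe. The 1 µm globule diameter and the laser wavelengths below are assumed typical values, not measured from this experiment:

```python
import math

def size_parameter(d_m, wavelength_m):
    # Mie size parameter x = pi * d / lambda (dimensionless)
    return math.pi * d_m / wavelength_m

x_red = size_parameter(1.0e-6, 650e-9)    # red pointer, ~4.8
x_green = size_parameter(1.0e-6, 532e-9)  # green pointer, ~5.9

# x in roughly 1-10 is the Mie regime: resonances, strong forward scattering
assert 1 < x_red < 10 and 1 < x_green < 10
# shorter wavelength "sees" the same globule as relatively larger
assert x_green > x_red
```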
How to Change color while in a repeat in turtle

I have everything set up with the process of my initials being repeated working properly; however, I need the color to change after each repeat. (I know this question has been asked before, I just did not understand the answers.)

import turtle

s = turtle.Turtle()
s.color("purple")

def Square(turtle):
    for i in range(4):
        s.left(90)
        s.forward(150)
        s.right(20)

for i in range(16):
    Square(s.right(20))
    Square(s.right(10))
    Square(s.right(20))

import turtle

s = turtle.Turtle()

# Create a list of the colors you want
colors = ["red", "green", "blue", "orange", "pink", "yellow"]

# Count the squares drawn so far, so the color advances once per repeat
square_count = 0

def Square(t):
    global square_count
    # Choose the color from the list, cycling from first to last on each repeat
    s.color(colors[square_count % len(colors)])
    square_count += 1
    for i in range(4):
        s.left(90)
        s.forward(150)
        s.right(20)

# Note: s.right(...) returns None; Square ignores its argument and uses the
# global s, so these calls work, with the turn happening before each square
for i in range(16):
    Square(s.right(20))
    Square(s.right(10))
    Square(s.right(20))

Hi MatoX_svk, thanks for contributing your answer to Stack Overflow. Please try to post a description instead of directly giving the code solution, i.e. description + code.
Arraylist of object overriding all indexes in AsyncTask callback

I read a couple of questions here on Stack Overflow but couldn't find a working solution. My scenario:

In the onActivityResult method, I receive a list which is sent to an AsyncTask for some background work. After performing some work, it sends back a list, say output. After receiving output back in the AsyncTask callback, I logged the output list and it works as expected. Now from here, I want to put it in an ArrayList, say addProductModelListTemp, so I can add further objects. But on adding other objects in the background as illustrated in the above scenario, addProductModelListTemp gets all of its indexes overridden with the last output list received in the AsyncTask callback.

Below is the code showing how I am updating addProductModelListTemp. currentRowIdToInsertImages is the index at which to insert the object in addProductModelListTemp, which is working fine (not included here for simplicity).

new CompressFileAsyncTask(new AsyncResponse() {
    @Override
    public void processFinish(List<Uri> output) {
        for (int i = 0; i < output.size(); i++) {
            Timber.e("Received from Async Class contains => " + output.get(i));
        }
        AddProductModel tempProductModel = new AddProductModel("", "", "", dummyList); // dummyList is an empty list, you can assume ... tried setting it directly as well with the output
        addProductModelListTemp.add(tempProductModel);
        addProductModelListTemp.get(currentRowIdToInsertImages).setImagePathList(output);
        for (int i = 0; i < addProductModelListTemp.size(); i++) {
            Timber.e("addProductModelListTemp index " + i);
            for (int k = 0; k < addProductModelListTemp.get(i).getImagePathList().size(); k++) {
                Timber.e("addProductModelListTemp.get(i).getImagePathList() index " + k + " contains " + addProductModelListTemp.get(i).getImagePathList().get(k));
            }
        }
    }
}).execute(imageURIList); // imageURIList is the list received in the onActivityResult method

Below is the model class through which I am creating & updating objects.
public AddProductModel(String size, String color, String quantity, List<Uri> imagePath) {
    this.size = size;
    this.color = color;
    this.quantity = quantity;
    this.imagePath = imagePath;
}

public void setImagePathList(List<Uri> imagePath) {
    if (this.imagePath.isEmpty()) {
        Timber.e("Image Path List not exists, adding");
        this.imagePath = imagePath;
    } else {
        Timber.e("Image Path List exists, appending");
        this.imagePath.addAll(this.imagePath.size(), imagePath);
    }
}
// Getter setter not attached for clarity
}

The AsyncTask class is:

class CompressFileAsyncTask extends AsyncTask<List<Uri>, Void, List<Uri>> {
    public AsyncResponse asyncResponse = null;

    public CompressFileAsyncTask(AsyncResponse asyncResponse) {
        this.asyncResponse = asyncResponse;
    }

    @Override
    protected List<Uri> doInBackground(List<Uri>... imageUriList) {
        List<Uri> fileList = imageUriList[0];
        for (int i = 0; i < fileList.size(); i++) {
            filesAdded.add(Uri.parse(compressImage(String.valueOf(fileList.get(i)))));
        }
        return filesAdded;
    }

    @Override
    protected void onPostExecute(List<Uri> uriArrayList) {
        asyncResponse.processFinish(uriArrayList);
    }
}

And the interface itself:

public interface AsyncResponse {
    public void processFinish(List<Uri> output);
}

Can you please share your expected input and output to the async task. And clarify what you mean by "addProductModelListTemp gets overridden all of its indexes".

addProductModelListTemp is the ArrayList of AddProductModel. imageUriList is the intent data I am receiving in onActivityResult. output is the list I am receiving after the AsyncTask finishes and triggers the callback.

This is the imageURIList I am receiving in the onActivityResult method when I selected three images:

ImageURIList index 0 content://media/external/images/media/140871
ImageURIList index 1 content://media/external/images/media/139181
ImageURIList index 2 content://media/external/images/media/139180

I just send them to the AsyncTask & get three compressed images back.
I created an object of AddProductModel & saved these images in the imagePath list, & then I saved this object in the addProductModelListTemp list so I can use it later. Now I have three images at index 0 of addProductModelListTemp, right? Now I called the gallery again & picked 5 images, which I tried to save at the second index of addProductModelListTemp by the same method used above. Now if I log my addProductModelListTemp array, it contains two indexes, but at both indexes the imagePath list contains the same 5 images.

I am expecting that I would retrieve two objects from addProductModelListTemp now. And in the first object, the imagePath list will contain three indexes, & for the second object, the imagePath will be of 5 indexes.

Can you please share the processFinish() method call. I have a feeling that you are updating the same image list in the AsyncTask over and over.

@RishabhJain, please check the edit. Updated the AsyncTask class & interface.

@Umair where is the field filesAdded initialised? Do you clear the list before each AsyncTask execution?

Yes, I am clearing it as soon as onActivityResult gets called. filesAdded is also a list declared in global scope.

Timber.e("Received from Async Class contains => " + output.get(i)); — this gets printed as expected upon adding newer items... Just adding it to the addProductModelListTemp makes things go wrong. Not getting an idea why it is behaving like this.

The issue is that you are modifying the same list object and storing it in each index of addProductModelListTemp. You need to make filesAdded a local variable.

@Override
protected List<Uri> doInBackground(List<Uri>... imageUriList) {
    List<Uri> fileList = imageUriList[0];
    List<Uri> filesAdded = new ArrayList<>();
    for (int i = 0; i < fileList.size(); i++) {
        filesAdded.add(Uri.parse(compressImage(String.valueOf(fileList.get(i)))));
    }
    return filesAdded;
}

Well, you are right. Declaring it as local works. Still thinking why filesAdded.clear() doesn't work here.
When you clear the list, you remove all the items from that list object. All the indexes of addProductModelListTemp store the reference to the same filesAdded object. So when you make any changes to filesAdded, that is reflected all over.
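The aliasing described above can be reproduced in a few lines as a standalone demo (names are illustrative, not from the question's code):

```java
import java.util.ArrayList;
import java.util.List;

public class AliasDemo {
    public static void main(String[] args) {
        List<Integer> shared = new ArrayList<>();
        List<List<Integer>> outer = new ArrayList<>();

        shared.add(1);
        outer.add(shared);   // stores a reference, not a copy
        shared.clear();      // "clearing" empties it everywhere it is referenced
        shared.add(2);
        outer.add(shared);   // second index: the very same object again

        // both indexes now show the same content, mirroring the question's symptom
        if (outer.get(0) != outer.get(1) || outer.get(0).get(0) != 2)
            throw new AssertionError("both indexes should alias the same object");
        System.out.println("both indexes see: " + outer.get(0));
    }
}
```

Creating a fresh list per task (the local `filesAdded` above) breaks the aliasing, because each stored reference then points at its own object.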
Peano's theory of arithmetic and Gödel's 1st Incompleteness Theorem

Let $\mathcal{N}$ be Peano's 1st order theory of arithmetic and $\mathscr{A}$ its standard model (which we assume exists). Infer from Gödel's 1st Incompleteness Theorem that there exists a closed well founded formula, say $B$, of $\mathcal{N}$ and a model $\mathscr{W}$ of $\mathcal{N}$, such that $B$ is true in $\mathscr{A}$ and $B$ is false in $\mathscr{W}$. Any help is really needed and appreciated. Thanks in advance.

The theory is usually denoted by $\sf PA$ and the standard model by $\Bbb N$. Also, "wff" means well-formed formula, not well-founded formula.

Don't let the words "Peano" and "Gödel" impress you. Being an incomplete theory exactly means to have at least two non elementarily equivalent models.

Take a canonical Gödel sentence G for PA, i.e. a wff that "says" of itself 'I am unprovable in PA'. Ask yourself: Is G provable from the axioms of PA? Is G true in every model of the axioms of PA? Is G true at least in the standard model of PA? What can you deduce from your last two answers? Hint: for one of these answers you appeal to Gödel's completeness theorem for first-order logic, and the fact that PA is a first-order theory.

Yes, more specific than is needed, but the obvious choice ...

Thanks, Peter. This is my reasoning then; I would appreciate it if you could let me know if it is correct. From the 1st Incompleteness Theorem we know we have a sentence $G$ and that $G$ is true in $\Bbb N$ but not provable. So since $G$ is not provable, $PA$ & $\neg G$ is consistent. Therefore by Gödel's completeness theorem, there is a model $\mathscr{W}$ in which $G$ is false. Thanks again; I really appreciate a great don such as yourself answering my question. We used your formal Logic book in my 1st and 2nd years.

Yes indeed. Exactly! (And always good to hear that the Logic book is being used out there!)
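The reasoning sketched in the last comments, compressed into one chain (standard; $G$ is the Gödel sentence):

```latex
\left.
\begin{aligned}
&\Bbb N \models G\\
&\mathsf{PA} \nvdash G
\end{aligned}
\right\}
\;\Longrightarrow\;
\mathsf{PA}+\neg G \text{ is consistent}
\;\overset{\text{completeness}}{\Longrightarrow}\;
\exists\,\mathscr{W}\ \bigl(\mathscr{W}\models \mathsf{PA}+\neg G\bigr),
```

so $G$ is true in the standard model and false in $\mathscr{W}$; taking $B := G$ answers the exercise.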
Lockscreen widgets are unresponsive for several seconds

I often use the Siri suggested apps widget to start an app. After upgrading to iOS 10, I have noticed that I have to press several times on the app icon before it responds. If I wait a couple of seconds before pressing it the first time, it will respond. It feels like all the other widgets have to load their content before I can start the app. This lag kind of defeats the main purpose of quick access to an app, as I have used it in the past. Is this a known issue? Is it possible to do something to fix it?

I'm not sure this is actually an 'issue' that needs 'fixing'. These widgets, most likely, have to make some kind of internet query to get their data and refresh the interface. That can take several seconds depending on the speed of your internet connection. It's best to wait a second or two for the widgets to get their data.

The specific issue I feel is a bug is that it displays a set of app icons, then it "recalculates" and might change to a different set of app icons. Before the upgrade to iOS 10 it also changed after some seconds, but it was possible to click on it before it changed; now it does not respond.
Unable to read the stream produced by the pipeline - Parameter name: format

I have a BizTalk 2016 FP3 solution. Using the wizard, I've created a flat file schema for my send port, to assemble from XML to a fixed-position text file. When I run it through BizTalk, I get a suspended instance with the following Error Information:

Unable to read the stream produced by the pipeline. Details: Value cannot be null. Parameter name: format

So, I opened the message tab for the suspended message and copied the XML to a test file. I then ran this through the ffasm.exe tool, passing the path to my flat-file schema as the -bs parameter - the output from this tool was perfect, the exact flat file content I would expect.

On the send port (which contains only the Microsoft Flat file assembler components), I have populated the "DocumentSpecName" property with the required schematypename,assemblystrongname. This is not really required since BizTalk is able to determine the schema from the promoted namespace#rootnode, so I've tried without the property being set, but still get the same result.

I'm afraid this is one that was fixed without me knowing how. The problem had been driving me nuts for hours, so I went back to basics and created a new solution on a different dev VM - it worked! So, I returned to my main dev VM and the problem was no more. Now I don't believe in magic, so I'm sure I must have changed something, but I've since tried to recreate the error by meddling with the input file and the flat-file schema and I haven't been able to.

I have learned that using VS to "Generate Instance" of a flat-file for a given XML file is useless - it will produce a file but uses the XML element names as data. Better to use the FFAsm.exe that can be found in D:\Program Files (x86)\Microsoft BizTalk Server 2016\SDK\Utilities\PipelineTools.

Off to wrap a unit test around this now in case the bug strikes again.
There appear to be multiple situations that can result in this error.

1. The schema is not actually a flat file schema (but you've checked for that already); see MSDN: Unable to read the stream produced by the pipeline, Flat Send Pipeline.
2. It is a fixed-length schema and one of the required fields is missing (see "Error details: Unable to read the stream produced by the pipeline. Details: Cannot find definition for the input: {Record, Element, or Attribute}" and also "Flat file assembler"). Resolved by forcing the creation of optional elements from the source schema.

Thanks for your answer Colin, I really appreciate it. The problem was actually solved just before I read your answer, but I returned to it to see if point 2 could have been the cause of the problem. I removed some of the data used for the output flat-file, but this didn't cause any error.

Just had the same problem yesterday; try replacing the pipeline with another one, apply, and then switch back to the correct pipeline. Test. Hope this helps you too.

Interesting that you faced the same problem and solved it without knowing the precise cause. I had the same experience - the worst kind of bug!

I have also experienced such behavior. I use Deployment Framework for BizTalk as soon as I can to avoid manual handling of any sort (also, its "GAC project output manually" feature is useful sometimes!). However, working with pipeline components, the toolbar (Visual Studio) requires a restart after compiling and GACing an updated version. Otherwise there can be such problems...

I had this issue today. I had three similar schemas, and one was failing. (I was just archiving the file received by the orchestration by converting it back to a CSV.) After some study, it turns out that one of the date/time columns should have been a string, because the date was coming from CSV and was in the format "06/07/2022 16:44", for example. My map fixes the date before mapping to SQL for inserting into the database.
Middle point of an arc I have coordinates of two points, i.e., $X\equiv(x_1,x_2,x_3)$, $Y\equiv(y_1,y_2,y_3)$, connected with an arc located on a desired surface. Given only this information, how can I compute the middle point of this arc? Thanks! PS: It would be nice to have an answer which can later be implemented efficiently on MATLAB/Mathematica since I need this to generate a surface through blockMesh. Also, a solution for 2D would work, too, i.e., for $x_3=0=y_3$. By "middle point of an arc located on a desired surface", do you mean the midpoint of the geodesic on the surface between points X and Y? If you do, then it all depends on how you define the surface. If planar, it is trivial, and matches the midpoint of the line segment if the endpoints are both on the plane. If spherical, it is again trivial if the center of the spherical surface is at origin. If a single biquadratic or bicubic patch, there might be an analytical answer. In all cases, the surface definition is key. If your arc is arc-length parametrized, say $\gamma : [a,b] \to S$ where $S$ is your surface, I guess the midpoint of the arc refers to $\gamma(\frac{a+b}{2})$. @Didier Thanks. Right, but I don't have an explicit expression for $\gamma(\cdot)$ in my case. @Glärbo Thanks, yes, of the geodesic. However, in my case, these two points are on a semi-ellipsoid that I am trying to generate. @kk17: What do you know about the surface, then? If you are trying to generate it, you surely have something controlling it, because if you didn't, you'd just go "well, we have nothing to work with, so I'll go with the physicists' first approximation: all cows are spherical." Please describe what you have to work with; don't leave out constraints or details you know but "think" are not relevant. Only then can we help you. Numerical values are not important, but having the list of datums/features you expect to have at hand at runtime, is the crucial missing key here. 
For example, if the surface was indeed spherical, knowing the radius $r$ and the two points $\vec{x}$ and $\vec{y}$ on the surface would NOT be enough to determine where the middle point of the geodesic between those two are; we could find the circle it is on, but that's it. On the other hand, if we had $\vec{o}$ describing the center of the sphere, we'd have a solution. The case were $\lVert\vec{x}-\vec{o}\rVert \ne \lVert \vec{y}-\vec{o}\rVert$ would need checking and more work (because it'd no longer be on the surface of the sphere, and involved "elevation"). [...] We need to know what we know of the surface when we need to produce the middle point of the geodesic. We don't need those as numerical values, but we need to know if say the surface is defined as a surface of rotation, or say as some sort of elliptical approximation of the isosurface of a complex metaball. Otherwise, the best anyone can do is, "It's the midpoint of the geodesic on that surface, duh", because without information, there is no possibility of meaningful answers either. Math isn't magic. It seems that the arc is not well defined - why not having many arcs on the surface?
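For the spherical special case raised in the comments, the geodesic midpoint does have a closed form — normalize the sum of the two radial vectors — which is easy to port to MATLAB/Mathematica. A minimal Python sketch (assumes both points lie on the sphere and are not antipodal; the general surface still needs the geodesic definition the commenters keep asking about):

```python
import math

def sphere_midpoint(x, y, o, r):
    """Geodesic midpoint of x and y on the sphere |p - o| = r.

    Assumes x and y lie on the sphere and are not antipodal
    (u + v would be zero there, and the midpoint is not unique).
    """
    u = [xi - oi for xi, oi in zip(x, o)]
    v = [yi - oi for yi, oi in zip(y, o)]
    s = [ui + vi for ui, vi in zip(u, v)]
    n = math.sqrt(sum(si * si for si in s))
    return [oi + r * si / n for oi, si in zip(o, s)]

m = sphere_midpoint((1, 0, 0), (0, 1, 0), (0, 0, 0), 1.0)
# midpoint of a quarter great circle: (1/sqrt(2), 1/sqrt(2), 0)
assert abs(m[0] - 1 / math.sqrt(2)) < 1e-12
assert abs(m[1] - 1 / math.sqrt(2)) < 1e-12 and abs(m[2]) < 1e-12
```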
How can I repair a mistakenly formatted EFI partition?

I had Windows 10 dual booting with Ubuntu. Then I realized I could not log in to Ubuntu, so I tried to fix it with a Live CD. In the Live CD I mistakenly formatted /dev/sda2 with the command

mkfs.vfat -F 32 -n "name" /dev/sda2

which is the EFI partition. Here are the listed partitions: (partitions screenshot)

Windows 10 is installed on sda4. Because I couldn't boot to any of them, I reinstalled Ubuntu on sda5. Now I can boot into Ubuntu, but I cannot boot into Windows because sda2 is the EFI partition I mistakenly formatted above.

I tried to fix it using many Linux distro live CDs with parted, but I can't repair /EFI. Other websites show that I should use Windows 10 install media and use diskpart. I have a Windows 10 USB install media, but it won't boot (I'm guessing because it is UEFI-boot based). In my BIOS (ASRock), when I try to "Launch EFI Shell from filesystem", it now shows the error "file not found". I used to be able to get the EFI shell.

I tried adding a Windows menu entry to GRUB, but it still won't boot, I guess because GRUB currently doesn't boot via EFI:

menuentry "Windows 10" {
    search --fs-uuid --no-floppy --set=root 8A3C60A93C60924D
    chainloader (${root})/efi/Microsoft/Boot/bootmgfw.efi
}

How can I fix this situation? My priority would be to be able to log in to Windows while keeping Ubuntu as dual boot.

So when you reinstalled Ubuntu, where did you put the /boot? It sounds like you could fix this by reinstalling Ubuntu again and specifying sda2 as EFI during that installation.

@Fiximan when I reinstalled Lubuntu, I put /boot on sda. How can I specify sda2 as EFI during installation?

sda2: __________________________________________________________________________
File system: vfat
Boot sector type: FAT32
Boot sector info: No errors found in the Boot Parameter Block.
Operating System:
Boot files: /EFI/BOOT/bootx64.efi /EFI/BOOT/drivers_x64/ext4_x64.efi

Isn't this already EFI?
Type is EFI, but Ubuntu stores its EFI files in a different location and just needs to put the loader in the EFI partition. How to do it: during installation, use manual partitioning (like you did) and then select sda2 and set its type or "use as" (not sure about the exact wording) to EFI. Like sdb2 in this example.

@Fiximan but it is already EFI type. https://paste.ubuntu.com/p/Pd9XFRCwk8/

/dev/sda2 923648 1126399 202752 99M EFI System

You deleted the Microsoft bootloaders, and you're not going to replace them from any Ubuntu disk. They're just files, on a FAT filesystem. You can copy them from another W10 computer.

I see. Does update-grub list Windows? Do you have a Windows boot disk available? It looks like you will need to reset the Windows boot loader from the MS repair boot options.

@Fiximan No, it doesn't list Windows, not even the one I added to the GRUB menu manually. Is there a way to restore the EFI files? It seems like nothing works because there is no /EFI/* anywhere. That's the problem: because there is no EFI from the BIOS, my USB Windows 10 install media doesn't want to boot even when I put it first in the boot order.

@ubfan1 Copy them how? I have Windows 10 on VMs inside Ubuntu.

Sure, Microsoft offers the Windows 10 iso file as a download. Make an install medium with a USB drive from that and boot it. Use the repair options there. Then you will only be able to boot into Windows - because that is what MS does. Then go to UEFI via Windows, make the Ubuntu boot loader the first entry, and rerun update-grub from Ubuntu. Link to W10.iso

Sorry - overlooked the "W10 USB not booting" part. How did you create it? Did it boot earlier?

For both Windows & Ubuntu, the mode in which you boot the install/repair media (UEFI or BIOS) is the mode in which it installs or repairs. When you reinstalled Ubuntu, you reinstalled in BIOS boot mode (and BIOS mode needs a bios_grub partition, as your Boot-Repair report suggests). You now have grub in gpt's protective MBR and no mount of /EFI/ubuntu in fstab.
You can convert the install to UEFI by booting the live installer in UEFI mode, purging grub-pc, and installing grub-efi-amd64. It is often easier to do a full reinstall of GRUB using Boot-Repair. https://help.ubuntu.com/community/Boot-Repair If you keep Ubuntu in BIOS mode you will not be able to boot Windows, as UEFI & BIOS are not compatible. Once you start booting in one mode you cannot switch: once in BIOS mode, GRUB can only boot other BIOS systems, and Windows only boots in UEFI mode from GPT-partitioned drives. To fix Windows you will need a Windows repair flash drive, DVD, or installer booted in UEFI mode to make UEFI repairs using the Windows repair console. https://superuser.com/questions/460762/how-can-i-repair-the-windows-8-efi-bootloader After Windows is repaired and has an /EFI/Microsoft folder in the ESP, you can run a grub update, which runs os-prober to add a Windows entry to the grub menu. sudo update-grub
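The BIOS-to-UEFI conversion described above can be sketched as follows, assuming the ESP is /dev/sda2 and you are running inside the installed Ubuntu (or a chroot into it from a UEFI-booted live session). Boot-Repair automates essentially the same steps:

```shell
# Swap the BIOS GRUB packages for the UEFI ones
sudo apt purge grub-pc
sudo apt install grub-efi-amd64

# Mount the ESP where grub-efi expects it and register the loader
sudo mkdir -p /boot/efi
sudo mount /dev/sda2 /boot/efi
sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu

# Regenerate the menu; os-prober adds Windows once \EFI\Microsoft exists
sudo update-grub
```

Remember to add a /boot/efi entry to /etc/fstab afterwards so the ESP is mounted on every boot.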
Issues using X-Forwarded-For Log Filter for Windows Servers I've encountered some problems trying to use the X-Forwarded-For Log Filter for Windows Servers. I downloaded the binaries (x86 version) and followed the installation manual from http://devcentral.f5.com/weblogs/Joe/archive/2009/08/19/x_forwarded_for_log_filter_for_windows_servers.aspx, but when I try to open a web page on my site I get an error: HTTP Error 500.0 - Internal Server Error Calling GetProcAddress on ISAPI filter "C:\ISAPI Filters\F5XFFHttpModule\F5XFFHttpModule.dll" failed Module IIS Web Core Notification Unknown Handler StaticFile Error Code 0x8007007f System info: OS - Windows Server 2008 Datacenter, 32-bit IIS - 7.0 .NET Framework version - 4.0 ISAPI Extensions & ISAPI Filters are installed OK. The filter is added to ISAPI and CGI Restrictions and to the ISAPI filters for the web application too. The IIS user (IUSR) has read and execute permissions for F5XFFHttpModule.dll. The web application's application pool runs on .NET Framework 4 in Integrated mode, Process Model Identity - NetworkService (changing the Process Model to ApplicationPool doesn't help). The debug version doesn't create any log file :( What I see in the Windows event log: The HTTP Filter DLL C:\ISAPI Filters\F5XFFHttpModule\F5XFFHttpModule.dll failed to load. The data is the error. Could not load all ISAPI filters for site '%sitename%'. Therefore site startup aborted. However, the filter works fine on Windows 7 x64 + IIS 7.5. An error described here is fixed by setting "Enable 32-Bit Applications" to true in the web application's application pool settings. Be so kind as to help me puzzle out this trouble, please. Sorry for my English :) OK, I have figured it out. I used the downloaded HTTP module as an ISAPI filter, i.e. without properly installing it in IIS - that was my error. Now I ran the install.ps1 script from the HTTP module distribution (http://devcentral.f5.com/weblogs/Joe/archive/2009/12/23/x-forwarded-for-http-module-for-iis7-source-included.aspx), and everything works fine! Thanks to Joe Pruitt for help! 
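For reference, the install.ps1 script registers the DLL as a native IIS module rather than an ISAPI filter, which is why GetProcAddress failed - the DLL does not export the ISAPI entry points. A rough manual equivalent with appcmd would be (module name and path copied from the question; run from an elevated prompt):

```shell
:: Register the DLL as a native IIS 7 global module instead of an ISAPI filter
%windir%\system32\inetsrv\appcmd.exe install module /name:F5XFFHttpModule /image:"C:\ISAPI Filters\F5XFFHttpModule\F5XFFHttpModule.dll"
```

The wrong ISAPI-filter registration should also be removed from the site's ISAPI Filters list, or IIS will keep trying (and failing) to load it there.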
Thanks for taking the time to let us know the solution! When you can, please mark this as the accepted answer by clicking the tick on the left hand side - This will let others know an answer has been found without looking at the page in detail.