Connecting to the Parse Server on a VPS using https (self-signed cert for SSL)
For various reasons, Parse users must migrate their Parse environment to a VPS (this is my case) or to Heroku, AWS (I don't need these platforms), etc. There is a new Parse SDK for Android (1.13.0) which allows initializing the connection using the new Parse interface, as follows:
Parse.initialize(new Parse.Configuration.Builder(this)
.applicationId("myAppId")
.clientKey(null)
.addNetworkInterceptor(new ParseLogInterceptor())
.server("https://VPS_STATIC_IP_ADDRESS/parse/").build());
This kind of request uses port 443. The appropriate Node.js connector file has already been edited so that port 443 is forwarded locally to port 1337 (the port the server listens on), and it works when accessing the Parse Server in a browser (remotely, of course, from outside the VPS), where it is possible to accept the self-signed certificate and go further. But when an Android app (launcher) tries to connect, it cannot, because of the self-signed certificate. Is there any way, from within the Parse SDK, to accept a self-signed certificate?
P.S. Is it true that there is a bug concerning this issue, and that this is the reason why Parse version 1.13.1 was released? If so, where can the jar library of this version be obtained?
Thank you!
You should probably fix the typo in the title...
I just solved this one. The Parse SDK for Android does not come with out-of-the-box support for self-signed certificates; you need to modify the SDK code yourself.
First step: the relevant piece of code is in ParseHttpClient:
public static ParseHttpClient createClient(int socketOperationTimeout,
        SSLSessionCache sslSessionCache) {
    String httpClientLibraryName;
    ParseHttpClient httpClient;
    if (hasOkHttpOnClasspath()) {
        httpClientLibraryName = OKHTTP_NAME;
        httpClient = new ParseOkHttpClient(socketOperationTimeout, sslSessionCache);
    } else if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
        httpClientLibraryName = URLCONNECTION_NAME;
        httpClient = new ParseURLConnectionHttpClient(socketOperationTimeout, sslSessionCache);
    } else {
        httpClientLibraryName = APACHE_HTTPCLIENT_NAME;
        httpClient = new ParseApacheHttpClient(socketOperationTimeout, sslSessionCache);
    }
    PLog.i(TAG, "Using " + httpClientLibraryName + " library for networking communication.");
    return httpClient;
}
If your minimum supported version is KITKAT or newer, then you need to add the following in the ParseURLConnectionHttpClient constructor:
HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
    @Override
    public boolean verify(String hostname, SSLSession session) {
        // Only accept the certificate for our own server.
        return hostname.equals("YOUR TARGET SERVER");
    }
});
In other cases (older versions) the code falls back to the Apache client, which I was not able to get working, so I did the following: I added the OkHttp library to my app (use version 2.4, the same one Parse indicates in its build; the most recent release has a different package name), so the code steps into the first condition, since it finds OkHttp on the classpath.
You should probably swap the order of the if conditions so that this happens only on versions older than KITKAT.
In ParseOkHttpClient, add the following self-signed certificate code:
public void initCert() {
    try {
        Log.i("PARSE", "initCert");
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        String yairCert = "-----BEGIN CERTIFICATE-----\n" +
                /* YOUR CERTIFICATE (base64 body) HERE */
                "-----END CERTIFICATE-----\n";
        InputStream caInput = new ByteArrayInputStream(yairCert.getBytes());
        Certificate ca = null;
        try {
            ca = cf.generateCertificate(caInput);
            Log.i("PARSE", "ca=" + ((X509Certificate) ca).getSubjectDN());
        } catch (CertificateException e) {
            Log.e("PARSE_BUG", "Failure on cert install", e);
        } finally {
            try {
                caInput.close();
            } catch (IOException e) {
                Log.e("PARSE_BUG", "Failure on cert install", e);
            }
        }
        // Create a KeyStore containing our trusted CAs
        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null);
        keyStore.setCertificateEntry("ca", ca);
        // Create a TrustManager that trusts the CAs in our KeyStore
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(keyStore);
        // Create an SSLContext that uses our TrustManager
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, tmf.getTrustManagers(), null);
        Log.i("PARSE", "Installing self-signed cert");
        okHttpClient.setSslSocketFactory(context.getSocketFactory());
    } catch (IOException | CertificateException | NoSuchAlgorithmException
            | KeyStoreException | KeyManagementException e) {
        Log.e("PARSE_BUG", "Failure on cert install", e);
    }
}
The final part is calling this method and verifying the hostname; both should also happen in the constructor:
initCert();
okHttpClient.setHostnameVerifier(new HostnameVerifier() {
    @Override
    public boolean verify(String s, SSLSession sslSession) {
        // Only accept the certificate for our own server.
        return s.equals("YOUR TARGET SERVER");
    }
});
That's it. Build Parse locally, deploy it with your app, and it will work like a charm.
Enjoy!
two way data binding with watch
Why is $scope.$watch not working with 'model'?
http://jsfiddle.net/lesouthern/8PUtP/5/
.directive('testDirective',function() {
return {
restrict : 'A',
scope : {
model : '=ngModel'
},
link : function($scope,$element,$attrs) {
$scope.$watch('model',function(x) {
console.log('this is model: ' + x);
});
}
}
});
In this case the "=" should be "@" fiddle
This Answer sums it up.
Please see this question. Here is your corrected code:
return {
restrict : 'A',
link : function($scope, $element, $attrs) {
$scope.$watch($attrs.ngModel, function(x) {
console.log('this is model: ' + x);
});
}
}
Thank you for your answer. The issue with this solution is that I need to restrict the scope in this directive. How do I watch the value of this input and have two-way data binding with this scope variable?
@koolunix IMHO, you should accept this answer because you used the exact same solution in your answer.
The complete question had the requirement of limiting the scope of the directive; what is shown above does not answer that.
Notice that your solution doesn't restrict the scope either. This answer (and the related comments) will give you more details about what to do.
@Blackhole you are right. Thanks for your input. Here is an updated answer based on that link: http://jsfiddle.net/lesouthern/WN6aC/2/
When to use hyphens in compound words
I've been puzzled a lot about when to use a hyphen in compound words such as cross-section, time-of-flight, state-of-the-art, etc. I am writing scientific documents and I haven't found a definite rule on the use of the hyphen.
However, I came across a general rule according to which such words have to be hyphenated when they come before a noun; otherwise the hyphen is not necessary (or perhaps it's even wrong?). A few examples follow.
Example 1
Cross-section measurements of nuclear reactions.
vs
Measuring nuclear reaction cross sections.
Example 2
The time-of-flight technique
vs
The particle has a small time-of-flight.
words have to be hyphenated when they come before a noun
Is there a rule that can be applied in such case or the use of a hyphen is interchangeable?
There are 1,992 questions on this site that contain the word "hyphen". Yes, 1,992! It is a frequently asked-about subject. If you type "hyphen" in the search bar above, I feel sure you will find much that is of interest.
@WS2 : Thank you very much for your comment, help and great sense of humour! Kindness is the language which the deaf can hear and the blind can see! It's the only way to really make an impact!
As a nuclear physicist who has used and measured many cross sections, the examples are correct. Cross-section measurements (an adjective) versus a measured cross section (a noun).
The most important thing in writing is that you are understood. People need to know that a compound word is in fact a compound word if they are to understand it correctly. For compound words which are not frequently used, or which could easily be mistaken for something else, you should use hyphens.
When there are three or more words connected together, I'd use hyphens. In the case of cross-sections vs. cross sections, many people are familiar with the term, so you could take out the hyphen, or you could leave it in to be clearer. Just be consistent in your decision to keep or drop it.
WinPhone 8 authorization to SkyDrive
I'm writing an app that can upload files to SkyDrive using http://developer.nokia.com/community/wiki/SkyDrive_-_How_to_upload_content_on_Windows_Phone.
On my own phone authorization works fine, but on another device it displays an error:
What am I doing wrong?
Are you using the same credentials on both devices?
If you mean MS accounts, they are different. If not, please explain what you mean.
Yeah, I meant the MS accounts. When you register your app for the Live SDK, I guess you can only use those credentials to log in. Until you submit the app to the store you won't be able to check with other credentials. Have a look at this: http://msdn.microsoft.com/en-us/library/dn631823.aspx
Thank you! I uploaded the app to the store and downloaded it to the devices. On my own device it works, but on another it doesn't.
Try signing in with your data connection, without wifi.
Yes, I tried signing in with both wifi and 3G on the phone. The internet is working and I can sign in in other apps.
Thank you! I found the solution.
I solved this by adding the capability ID_CAP_NETWORKING in the WMAppManifest.xml file, and it worked.
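For reference, a sketch of what that capability entry looks like in WMAppManifest.xml (the surrounding element is illustrative; your manifest will list other capabilities too):

```xml
<!-- Inside WMAppManifest.xml; ID_CAP_NETWORKING grants the app network access -->
<Capabilities>
  <Capability Name="ID_CAP_NETWORKING" />
</Capabilities>
```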
I don't know what you mean by "another device". Do you mean a Hyper-V machine or a different physical device? In Hyper-V machines you cannot use an MS account.
Do you use single sign-on in your code, and do you have your MS account on the second device?
A little more information from you would be good.
UPDATE: I just checked your link, and yes, you use the scope wl.signin, which means single sign-on. So the question is: are you logged in with your MS account on the second device?
Thank you for the answer! The other device is a different physical device with another MS account.
Yes, I am logged in on the other device. Can you say more about single and multiple sign-on?
When you use your app and execute the login code, a page appears where you have to accept that the app has the rights you defined in your scopes (wl.xxx). The scope wl.signin, for example, stands for single sign-on, which means (I tried the same yesterday) that you log in just once, and after this, every time you debug (or later run) your app, your credentials will not be requested.
Thank you! If I understand correctly, I need to remove the scope "wl.signin" and the app will work?
No, that is not what I meant. I'd like to comment on the dialog above but I'm not allowed to do that. Maybe it is as Kulasangar said, and you can only use the credentials you used when you registered your app for OneDrive. A simple check: on the second device, connect the device with your MS account and try the debugging again.
I don't need to debug it. I uploaded the app to the store and downloaded it to the devices. On my own device it works, but on another it doesn't.
After the last update, where I removed "wl.signin", it doesn't work either.
Just a thought: in the Nokia example they have added just the wl.skydrive_update scope. In my app I have both the wl.skydrive_update and wl.skydrive scopes, and it works on other devices...
No, look here: http://msdn.microsoft.com/en-us/library/hh243646.aspx — the wl.skydrive_update scope already contains all of the wl.skydrive permissions.
Thank you! I found the solution.
how to prevent document.write from overwriting my page
I have this code in my Shopify theme which is overwriting my document when it is triggered with a setTimeout function:
var _0x1310=["\x3C\x73\x63\x72\x69\x70\x74\x20\x73\x72\x63\x3D\x22","\x74\x69\x6D\x62\x65\x72\x6A\x73\x70\x61\x74\x68","\x70\x61\x74\x68","\x73\x68\x6F\x70\x74\x69\x6D\x69\x7A\x65\x64\x64\x65\x6D\x6F\x2E\x6D\x79\x73\x68\x6F\x70\x69\x66\x79\x2E\x63\x6F\x6D\x7C\x73\x68\x6F\x70\x74\x69\x6D\x69\x7A\x65\x64\x35\x2D\x30\x2E\x6D\x79\x73\x68\x6F\x70\x69\x66\x79\x2E\x63\x6F\x6D\x7C\x6F\x75\x74\x64\x6F\x6F\x72\x73\x61\x64\x76\x65\x6E\x74\x75\x72\x65\x72\x2E\x6D\x79\x73\x68\x6F\x70\x69\x66\x79\x2E\x63\x6F\x6D","\x2F\x63\x6F\x6C\x6C\x65\x63\x74\x69\x6F\x6E\x73\x2F","\x69\x6E\x64\x65\x78\x4F\x66","\x68\x72\x65\x66","\x6C\x6F\x63\x61\x74\x69\x6F\x6E","\x2F\x70\x72\x6F\x64\x75\x63\x74\x73\x2F","\x61\x62\x6F\x72\x74","\x6F\x66\x66","\x66\x6F\x72\x6D\x5B\x61\x63\x74\x69\x6F\x6E\x3D\x22\x2F\x63\x61\x72\x74\x2F\x61\x64\x64\x22\x5D","\x24","\x73\x75\x62\x6D\x69\x74","\x50\x6C\x65\x61\x73\x65\x20\x65\x6E\x74\x65\x72\x20\x79\x6F\x75\x72\x20\x76\x65\x72\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x70\x75\x72\x63\x68\x61\x73\x65\x20\x63\x6F\x64\x65\x20\x66\x6F\x72\x20\x66\x75\x6C\x6C\x20\x74\x68\x65\x6D\x65\x20\x66\x75\x6E\x63\x74\x69\x6F\x6E\x61\x6C\x69\x74\x79","\x61\x6C\x65\x72\x74","\x70\x72\x65\x76\x65\x6E\x74\x44\x65\x66\x61\x75\x6C\x74","\x73\x74\x6F\x70\x50\x72\x6F\x70\x61\x67\x61\x74\x69\x6F\x6E","\x6F\x6E","\x62\x6F\x64\x79","\x22\x3E\x3C\x2F\x73\x63\x72","\x70\x72\x6F\x64\x75\x63\x74\x5F\x6B\x65\x79","","\x62\x6C\x61\x6E\x6B","\x64\x6F\x6D\x61\x69\x6E","\x56\x65\x72\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x73\x74\x61\x72\x74\x20\x69\x73\x20\x62\x72\x6F\x6B\x65\x6E","\x64\x65\x62\x75\x67","\x63\x6F\x6E\x73\x6F\x6C\x65","\x5F\x73\x68\x6F\x70\x69\x66\x79\x5F\x70\x72","\x63\x6F\x6F\x6B\x69\x65","\x31\x38\x37\x34\x63\x33\x61\x65\x65\x34\x38\x64\x33\x34\x62\x65\x65\x36\x36\x31\x65\x38\x32\x30\x35\x38\x31\x32\x35\x32\x34\x32","\x74\x79\x70\x65","\x47\x45\x54","\x75\x72\x6C","\x68\x74\x74\x70\x73\x3A\x2F\x2F\x6D\x65\x6D\x62\x65\x72\x73\x2E\x73\x68\x6F\x70\x74\x69\x6D\x69\x7A\x65\x64\x2E\x6E
\x65\x74\x2F\x61\x70\x69\x2F\x76\x61\x6C\x69\x64\x61\x74\x65\x2F","\x2E\x6A\x73\x6F\x6E","\x64\x61\x74\x61","\x76\x65\x72","\x35\x2E\x31\x2E\x30","\x64\x61\x74\x61\x54\x79\x70\x65","\x6A\x73\x6F\x6E","\x73\x75\x63\x63\x65\x73\x73","\x68\x61\x73\x4F\x77\x6E\x50\x72\x6F\x70\x65\x72\x74\x79","\x76\x61\x6C\x69\x64","\x44\x61\x74\x65","\x67\x65\x74\x54\x69\x6D\x65","\x73\x65\x74\x54\x69\x6D\x65","\x65\x78\x70\x69\x72\x65\x73","\x2F","\x3C\x64\x69\x76\x20\x73\x74\x79\x6C\x65\x3D\x22\x64\x69\x73\x70\x6C\x61\x79\x3A\x62\x6C\x6F\x63\x6B\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x70\x6F\x73\x69\x74\x69\x6F\x6E\x3A\x66\x69\x78\x65\x64\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x7A\x2D\x69\x6E\x64\x65\x78\x3A\x39\x39\x39\x39\x39\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x62\x6F\x74\x74\x6F\x6D\x3A\x30\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x6C\x65\x66\x74\x3A\x30\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x77\x69\x64\x74\x68\x3A\x31\x30\x30\x25\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x68\x65\x69\x67\x68\x74\x3A\x31\x30\x30\x70\x78\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x6C\x69\x6E\x65\x2D\x68\x65\x69\x67\x68\x74\x3A\x31\x30\x30\x70\x78\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x74\x65\x78\x74\x2D\x61\x6C\x69\x67\x6E\x3A\x63\x65\x6E\x74\x65\x72\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x62\x61\x63\x6B\x67\x72\x6F\x75\x6E\x64\x3A\x23\x66\x66\x30\x30\x30\x30\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x63\x6F\x6C\x6F\x72\x3A\x23\x66\x66\x66\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x6F\x70\x61\x63\x69\x74\x79\x3A\x31\x20\x21\x69\x6D\x70\x6F\x72\x74\x61\x6E\x74\x3B\x70\x6F\x69\x6E\x74\x65\x72\x2D\x65\x76\x65\x6E\x74\x73\x3A\x6E\x6F\x6E\x65\x3B\x22\x3E","\x6D\x65\x73\x73\x61\x67\x65","\x3C\x2F\x64\x69\x76\x3E","\x61\x70\x70\x65\x6E\x64","\x61\x6A\x61\x78","\x73\x65\x74\x54\x69\x6D\x65\x6F\x75\x74","\x69\x70\x74\x3E","\x77\x72\x69\x74\x65","\x64\x6F\x63\x75\x6D\x65\x6E\x74"];(function(){var _0x72cbx1=_0x1310[0]+ 
window[_0x1310[2]][_0x1310[1]],_0x72cbx2=_0x1310[3],_0x72cbx3;if(window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[4])=== -1&& window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[8])=== -1){_0x72cbx3= _0x1310[9]};function _0x72cbx4(){window[_0x1310[12]](_0x1310[11])[_0x1310[10]]();window[_0x1310[12]](_0x1310[19])[_0x1310[18]](_0x1310[13],_0x1310[11],function(_0x72cbx5){window[_0x1310[15]](_0x1310[14]);_0x72cbx5[_0x1310[16]]();_0x72cbx5[_0x1310[17]]();return false})}_0x72cbx1+= _0x1310[20];window[_0x1310[12]](function(){var _0x72cbx6=window[_0x1310[21]]&& window[_0x1310[21]]!== _0x1310[22]?window[_0x1310[21]]:_0x1310[23],_0x72cbx7=window[_0x1310[24]];if(_0x72cbx7!== _0x1310[22]&& _0x72cbx6=== _0x1310[23]&& _0x72cbx2[_0x1310[5]](_0x72cbx7)!= -1){if(_0x72cbx3!= _0x1310[9]){window[_0x1310[27]][_0x1310[26]](_0x1310[25])};return}else {if($[_0x1310[29]](_0x1310[28])=== _0x1310[30]){return}};window[_0x1310[54]](function(){var _0x72cbx8={};_0x72cbx8[_0x1310[31]]= _0x1310[32];_0x72cbx8[_0x1310[33]]= _0x1310[34]+ _0x72cbx6+ _0x1310[35];_0x72cbx8[_0x1310[36]]= {};_0x72cbx8[_0x1310[36]][_0x1310[24]]= _0x72cbx7;_0x72cbx8[_0x1310[36]][_0x1310[37]]= _0x1310[38];_0x72cbx8[_0x1310[39]]= _0x1310[40];_0x72cbx8[_0x1310[41]]= function(_0x72cbx9){var _0x72cbxa={},_0x72cbxb;if(_0x72cbx9[_0x1310[42]](_0x1310[41])&& _0x72cbx9[_0x1310[42]](_0x1310[43])){if(_0x72cbx9[_0x1310[43]]){_0x72cbxb= new window[_0x1310[44]]();_0x72cbxb[_0x1310[46]](_0x72cbxb[_0x1310[45]]()+ 86400000);_0x72cbxa[_0x1310[47]]= _0x72cbxb;_0x72cbxa[_0x1310[2]]= _0x1310[48];$[_0x1310[29]](_0x1310[28],_0x1310[30],_0x72cbxa)}else {window[_0x1310[12]](_0x1310[19])[_0x1310[52]](_0x1310[49]+ _0x72cbx9[_0x1310[50]]+ _0x1310[51]);_0x72cbx4()}}};if(_0x72cbx3!== _0x1310[9]|| (window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[4])!== -1|| window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[8])!== -1)){window[_0x1310[12]][_0x1310[53]](_0x72cbx8)}},3000)});window[_0x1310[57]][_0x1310[56]](_0x72cbx1+ _0x1310[55])})()
Decoded using a hex decoder:
'var _0x1310=["<script src="","timberjspath","path","shoptimizeddemo.myshopify.com|shoptimized5-0.myshopify.com|outdoorsadventurer.myshopify.com","/collections/","indexOf","href","location","/products/","abort","off","form[action="/cart/add"]","$","submit","Please enter your verification purchase code for full theme functionality","alert","preventDefault","stopPropagation","on","body",""></scr","product_key","","blank","domain","Verification start is broken","debug","console","_shopify_pr","cookie","1874c3aee48d34bee661e82058125242","type","GET","url","https://members.shoptimized.net/api/validate/",".json","data","ver","5.1.0","dataType","json","success","hasOwnProperty","valid","Date","getTime","setTime","expires","/","<div style="display:block !important;position:fixed !important;z-index:99999 !important;bottom:0 !important;left:0 !important;width:100% !important;height:100px !important;line-height:100px !important;text-align:center !important;background:#ff0000 !important;color:#fff !important;opacity:1 !important;pointer-events:none;">","message","</div>","append","ajax","setTimeout","ipt>","write","document"];(function(){var _0x72cbx1=_0x1310[0]+ window[_0x1310[2]][_0x1310[1]],_0x72cbx2=_0x1310[3],_0x72cbx3;if(window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[4])=== -1&& window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[8])=== -1){_0x72cbx3= _0x1310[9]};function _0x72cbx4(){window[_0x1310[12]](_0x1310[11])[_0x1310[10]]();window[_0x1310[12]](_0x1310[19])[_0x1310[18]](_0x1310[13],_0x1310[11],function(_0x72cbx5){window[_0x1310[15]](_0x1310[14]);_0x72cbx5[_0x1310[16]]();_0x72cbx5[_0x1310[17]]();return false})}_0x72cbx1+= _0x1310[20];window[_0x1310[12]](function(){var _0x72cbx6=window[_0x1310[21]]&& window[_0x1310[21]]!== _0x1310[22]?window[_0x1310[21]]:_0x1310[23],_0x72cbx7=window[_0x1310[24]];if(_0x72cbx7!== _0x1310[22]&& _0x72cbx6=== _0x1310[23]&& _0x72cbx2[_0x1310[5]](_0x72cbx7)!= -1){if(_0x72cbx3!= 
_0x1310[9]){window[_0x1310[27]][_0x1310[26]](_0x1310[25])};return}else {if($[_0x1310[29]](_0x1310[28])=== _0x1310[30]){return}};window[_0x1310[54]](function(){var _0x72cbx8={};_0x72cbx8[_0x1310[31]]= _0x1310[32];_0x72cbx8[_0x1310[33]]= _0x1310[34]+ _0x72cbx6+ _0x1310[35];_0x72cbx8[_0x1310[36]]= {};_0x72cbx8[_0x1310[36]][_0x1310[24]]= _0x72cbx7;_0x72cbx8[_0x1310[36]][_0x1310[37]]= _0x1310[38];_0x72cbx8[_0x1310[39]]= _0x1310[40];_0x72cbx8[_0x1310[41]]= function(_0x72cbx9){var _0x72cbxa={},_0x72cbxb;if(_0x72cbx9[_0x1310[42]](_0x1310[41])&& _0x72cbx9[_0x1310[42]](_0x1310[43])){if(_0x72cbx9[_0x1310[43]]){_0x72cbxb= new window[_0x1310[44]]();_0x72cbxb[_0x1310[46]](_0x72cbxb[_0x1310[45]]()+ 86400000);_0x72cbxa[_0x1310[47]]= _0x72cbxb;_0x72cbxa[_0x1310[2]]= _0x1310[48];$[_0x1310[29]](_0x1310[28],_0x1310[30],_0x72cbxa)}else {window[_0x1310[12]](_0x1310[19])[_0x1310[52]](_0x1310[49]+ _0x72cbx9[_0x1310[50]]+ _0x1310[51]);_0x72cbx4()}}};if(_0x72cbx3!== _0x1310[9]|| (window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[4])!== -1|| window[_0x1310[7]][_0x1310[6]][_0x1310[5]](_0x1310[8])!== -1)){window[_0x1310[12]][_0x1310[53]](_0x72cbx8)}},3000)});window[_0x1310[57]][_0x1310[56]](_0x72cbx1+ _0x1310[55])})()\n'
It uses document.write, but I don't know how to modify it. I tried some online tools to decode it, but they didn't help me decode it completely; from the partially decoded output I saw document.write, which I believe is overwriting my entire page. Please excuse my English.
Thank you.
To disable document.write, add this to your JS code: document.write = function() {}
document.write(...) called after the page has finished loading will always overwrite your document. Instead, you can use:
document.body.append(...);
which will do the same thing without overwriting. However, most of the time it makes more sense to use something like document.getElementById("idOfElement").append(...)
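If you cannot remove the offending script, another option is to patch document.write before the script runs, so that late calls append markup instead of wiping the page. A minimal sketch (the helper name is mine, not code from the theme):

```javascript
// Sketch: replace document.write so that a call made after page load appends
// markup instead of overwriting the page. "doc" is the page's document object;
// run this before the offending script's setTimeout fires.
function patchDocumentWrite(doc) {
  doc.write = function (markup) {
    var container = doc.createElement("div");
    container.innerHTML = markup;        // parse the markup into the container
    doc.body.appendChild(container);     // append instead of overwriting
  };
}
```

Call `patchDocumentWrite(document)` early in the page (e.g. in the `<head>`) so the override is in place before the obfuscated script runs.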
Website response slows down after ESTABLISHED connections increase
I am a newbie in the networking field. I have deployed a website built in Java. My site's response slows down as the number of TCP/IP connections increases.
I have used the command below to check the connections:
netstat -an|awk '/tcp/ {print $6}'|sort -nr| uniq -c
And below is the output:
12804 TIME_WAIT
8 LISTEN
75 ESTABLISHED
571 CLOSE_WAIT
Would you please tell me why these ESTABLISHED connections are increasing, and how I can handle them? Do I need to handle these connections at the application level or the server level?
My Java application uses a Postgres DB and runs on Linux.
Thanks in advance,
You wrote that you deployed a website, but more details are needed to answer your question. What server (servlet container) do you use? Each servlet container can be configured with the number of threads that service your requests. If you add more threads, your application can become more responsive, but memory consumption will increase, so you should find a balance.
In Tomcat (which is very popular) you can configure threads in the ${tomcat-folder}/conf/server.xml file, in the <Connector/> tag.
Example of an https config:
<Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
maxThreads="1000" SSLEnabled="true" scheme="https" secure="true" clientAuth="false"
SSLProtocol="TLSv1.2" SSLCertificateFile="/tomcat/tomcat.crt" SSLCertificateKeyFile="/tomcat/tomcat.key"/>
Also, you have to distinguish between the maxThreads and maxConnections properties, as described in Tomcat - maxThreads vs maxConnections.
Yes, I am using a servlet container with the configuration below, with 4 GB of RAM.
I edited my answer with an example.
I solved this problem by updating the kernel on the Ubuntu server. Before the update, my Ubuntu 14.04 server was using kernel version 3.13.0-43. My issue was fixed after updating to kernel version 4.4.0-116 (linux-headers-4.4.0-116).
Now connections are stable on the server.
Thanks
Base SDK of 6.0.1 is not an option in Xcode?
I am trying to set the base SDK of my project in Xcode to 6.0.1; however, it tells me that 6.0 is the latest version. I have the latest version of Xcode (4.5.2), so am I missing something? Can anybody tell me if their Xcode has 6.0.1 as a base SDK option? The reason I ask is that I received a crash report after submitting an app to Apple, and they had tested it on devices running iOS 6.0.1. If anyone could help that would be great. Thanks in advance!
Make sure to get in the habit of only doing layout and flow testing using the simulator. All serious testing (especially for bugs and memory usage, etc) should always be done on a device.
I am not sure that the simulator is updated for minor versions. You will need to run the app on a device that is running 6.0.1. Xcode 4.5.2 is the latest production release and comes with the 6.0 SDK; Xcode 4.6 Beta 4 is the latest developer release and comes with the 6.1 SDK.
Bottom line is that you can't test against 6.0.1 in the simulator, you need to test on a device with 6.0.1 installed.
To test this, just NSLog this:
NSLog(@"iOSVersion = %@", [[UIDevice currentDevice] systemVersion]);
Try installing the Xcode 4.6 Developer Preview. It is available at https://developer.apple.com/xcode/
Displaying completely a long string in a Coding4Fun PopUp control?
When I display too long a string in the PopUp control of the Coding4Fun Windows Phone toolkit, part of the string goes beyond the border and cannot be seen.
I did this as a MessagePrompt, and it seemed to work OK with automatic wrapping of the text. Can you share some code showing how you're calling this?
There is no PopUp control in the Coding4Fun Toolkit for Windows Phone. PopUp<T, TResult> is an abstract class and you should only use it if you plan on implementing custom popups based on it.
Other than that, you have the choice of the following prompts:
AboutPrompt
InputPrompt
MessagePrompt
PasswordInputPrompt
ToastPrompt
How to verify email addresses in GnuPG UIDs?
I have been reading up on key signing policies and best practices. Something I have been unable to figure out is how to verify the email address attached to a UID of a GnuPG key.
I can verify the name using an ID, but how do I verify the email address? I am not just trying to determine that the address is valid, but also that is belongs to the person who controls the private key.
Related: What are you saying when you sign a PGP key?
The most common case I've seen so far is to sign the key and then send the signed key (well, just the single signature) via e-mail to the purported e-mail address. If there's more than one e-mail address on a key, to each address you only send the signature for this address, encrypted with their public key. Do not upload the signature to a key server or make it available otherwise. Then either the person receives the signature and is able to decrypt it and everything is fine, or they don't and you don't care.
This of course only solves the problem in the key-signing party case (or similar), i.e. you certifying to other people that the ID consisting of a name (which you checked with an ID document) and an e-mail address (checked with the above protocol) is correct. To be sure yourself, you would need to delete the signature locally (or not import it into your keyring in the first place) and then wait until the recipient has uploaded it to a keyserver.
There's a tool that implements this protocol: caff.
Trust Sources
Unlike the X.509 certificate model with hierarchical trust starting from root certificates (which are usually chosen by your software vendor/distributor), there is no "default" trust entity to verify user IDs in OpenPGP. Issuing trust is completely left up to you, and a tool named "web of trust" is put in your hands to do so.
It is very important to realize that OpenPGP key servers usually do not do any validations, especially not beyond sending verification mails (so they do not verify the name portion in a user ID).
Relevant Vocabulary
Some wording needs to be clarified before continuing with the answer: valid keys are those that meet your trust requirements. There are two different kinds of trust, trust in identity (an OpenPGP user certifies another key after claiming to have verified the other's identity, signatures in OpenPGP), and trust in other user's certifications (also called vouching, simply called "trust" in OpenPGP). The terms certification and signature are often used equivalently.
The web of trust is the set of all keys and certifications issued in-between them.
Certifications should only be issued after properly verifying the signee's identity. Usually, this is performed by the signee handing over his key (or fingerprint, to fetch the key from the key servers) and some documents to the signer, for example a passport or identity card. Often, they're performed bidirectional (both parties exchange keys and documents vice-versa), but it could easily be one-directional, too.
OpenPGP is the standard implemented by GnuPG (a free software), but also PGP (where OpenPGP's root came from, but nowadays is commercial software) and a few other products.
Immediate Certifications
If you checked the other party's identity on your own, you can be pretty sure about their identity, thus keys certified by your own key are considered valid. For reaching this, your own key is equipped with "ultimate" trust. All keys equipped with ultimate trust could be compared with root certificate authorities in X.509.
Web of Trust and Trust Models
Now it's hard meeting everybody before starting communication, some might live far away and traveling for identity validation might prove hard to do.
This is where the web of trust comes into play. Obviously, you cannot consider keys valid just by looking at their incoming certifications: creating keys and certifications is easy and cheap, you could even fake hundreds or thousands of keys and certifications. To validate another key, you need to construct a trust path between your key (an ultimately trusted key) and the other user, with all keys in-between also being valid.
For example, Alice certified Bob, and Bob certified Carol. Alice can trivially verify Bob's key (she signed it on her own). If Alice also trusts Bob to properly validate other user's identities (ie. Bob is able to realize phishy documents, and has no intent of issuing fake certifications), Alice issues trust to Bob. Now she's also able to verify Carol's key through Bob.
In OpenPGP, there are two levels of trust: marginal and full trust. How to use them is up to you, but GnuPG proposes the following trust model: one certification by a fully trusted key is fine (so you really need to know the other's intents well; this might be for friends and family), while three certifications by marginally trusted keys are required (these could, depending on your personal requirements, also be people you know less well, for example through open source participation, publications, science, ...). Requiring multiple certification paths implements a four-eyes strategy (well, in this case rather six eyes).
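As a toy illustration of that proposed rule (the function and names here are mine, not GnuPG's code, and path-depth limits are ignored), a key would count as valid when it carries at least one certification from a fully trusted key, or at least three from marginally trusted keys:

```python
# Toy sketch of GnuPG's proposed validity rule (simplified; ignores path depth).
def key_is_valid(certifiers, trust):
    """certifiers: key IDs that signed the key under consideration.
    trust: dict mapping key ID -> 'full', 'marginal', or 'none'."""
    full = sum(1 for k in certifiers if trust.get(k) == "full")
    marginal = sum(1 for k in certifiers if trust.get(k) == "marginal")
    # One fully trusted certifier suffices; otherwise three marginal ones.
    return full >= 1 or marginal >= 3
```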
Bootstrapping Your Web of Trust
The canonical source of certifications as a foundation of the web of trust are so-called key signing parties, where a bunch of people meets for mutually verifying their identity and signing keys.
There are also some institutions acting as certificate authorities in the web of trust, namely CAcert, which also issues OpenPGP certifications, and the German Heise Verlag, which runs various German technology magazines.
Thank you for your very nice and detailed response. It will definitely be useful for many people. However, it does not actually answer my question about verifying email addresses.
What is the mechanism by which $\lim_{x\rightarrow\infty}e^\frac{\ln{x}}{x}=e^{\lim_{x\rightarrow\infty}\frac{\ln{x}}{x}}$ is allowed?
I've encountered a number of problems in my coursework that require me to find $\lim_{x\rightarrow\infty}x^{\frac{1}{x}}$, the solutions to which apply the following identity:
$$\lim_{x\rightarrow\infty}e^\frac{\ln{x}}{x}=e^{\lim_{x\rightarrow\infty}\frac{\ln{x}}{x}}$$
What principle allows us to move the limit operator into the exponent? Are there any conditions (aside from the limit not existing) under which this is not allowed?
C++ .exe in C# Application for live camera stream
I want to open multiple C++ .exe applications inside my C# WPF application GUI to show multiple live video streams coming from multiple IP cameras.
I've tried looking into Interops but can't find an example that actually works for me.
My goal is to load the .exe into the C# WPF application GUI without window borders, close buttons, etc... and have it fixed in one place.
So you don't want to run an exe, you want to embed it in a window in your application?
Yes, if that makes more sense.
Well it's a totally different thing, and one that you're not going to achieve. You can't run an application inside your own application. If you have access to components or assemblies that the application uses then maybe you could do something, but it's a painful and obscure task. You need to rethink your solution - it's not going to happen.
I think you'd be better off trying to understand how to render the IP camera data in your C# WPF app directly, instead of trying to mash applications together like this.
It is possible, but this is so much more complex than one would think. If you continue to chase this solution, you are likely going to spend many, many hours on it with no success guaranteed. I recommend: give up now and take a different approach, as suggested in the comments above.
I've tried that, but at a certain number of cameras the application starts showing a lot of latency. The IP cameras are from Mobotix and they have a player.exe written in C++ with their own MxPeg compression. That is the reason why I wanted to do that.
@Laurens What do you mean by certain amount of cameras?
Then this is an XY_Problem. You're actually trying to reduce camera latency am I right? You should probably post a question about your real problem then.
Here's an example of getting the feed and displaying it yourself... https://stackoverflow.com/questions/15408404/ip-cam-live-feed-from-c-sharp . As mentioned already, if lag is your issue then that's a different issue, but deal with each issue separately.
Technically, Windows GUI works fine across processes.
Change the C++ app so it receives a command-line argument like ownerHwnd=0FE321, and sets up its new video window as a child of the window handle passed in that argument.
Change the WPF app so it creates an HwndHost. One per camera is probably the simplest approach; or you can use a single one per app and position multiple video streams on that window. Pass the HwndHost's window handle to the C++ app when starting streams. Write code to handle resize. Don't forget to listen for the Exited event and react somehow.
The tricky part is IPC between these processes. You can't call functions, and there are also threading issues, like input focus (if your video renderer doesn't need user input, you should be fine). One good IPC strategy for this case is sending/posting/responding to custom window messages. Changing the C++ project into a DLL with a single exported function helps with that, but at the cost of stability.
Setting HTML of window.open() producing strange styling issues in Internet Explorer
So I have some javascript that opens a new window with no target URL. I then want to set the HTML contents of that window to be the contents of a variable, a long string of valid HTML.
var myHtmlHead = "<link href='/foo/my.css'...";
var myHtmlBody = "<div id='foo'>...'";
var myWindow= window.open("", "MyWindow","height=500,width=500");
myWindow.document.head.innerHTML = myHtmlHead;
myWindow.document.body.innerHTML = myHtmlBody;
I've cut out the content for readability. The issue is that while this works perfectly in Chrome and Firefox, IE throws up a number of strange styling issues; the CSS loads, but all of the divs are centered and overlapping, and as soon as I resize the window even a pixel, it corrects itself. I think this is down to the fact that I'm loading the CSS after the window has been loaded and it's not applying it correctly until the window needs to be redrawn.
Is there a way to get around this issue, or to trigger the redrawing programmatically from the parent page's JS?
Edit: IE version is 11.0.9600, though it seems to work fine with Chrome 49 for example.
You should clearly delineate the version(s) of the browser with regard to such a question.
Why does your code that you are assigning to the innerHTML of the head and body element contain the tags for head and body again?
@CBroe Yeah, that was a mistake when writing up the example, for clarity. The head and body don't include a second set of tags.
Found the answer with continued experimenting. In my own case, I was able to get much more consistent results by setting the <head> HTML with the CSS links AFTER setting the <body> content. IE seems to handle the application of the CSS much better this way round, though I can't say why.
Previously it seemed like styling (colours, fonts, backgrounds, etc.) was being applied, but not positional things such as sizes, locations, display layouts, etc. Setting the <head> after setting the <body> content seems to address this.
I don't really understand the point of doing this. The best practice would be to create another page and send variables via the URL to generate content dynamically.
In any case, Chrome and Firefox are "smarter" than IE. You must have some HTML validation problems. I can see that in your example you're missing the <!DOCTYPE>.
I recommend to print your myWindow variable and check if the HTML is valid, this can help:
https://validator.w3.org/
Thanks, but as I stated above it is valid HTML.
This is somewhat of a guess given the small amount of information (no full minimal HTML test set, no details on the browser version, etc.). Once the page renders, pick an element with layout (a div, for instance) and force a reflow of the page. Here is a function that SHOULD do that, by querying the property (this is untested):
function forceReflow(myelement){
    var myforceheight = myelement.offsetHeight; // reading offsetHeight forces a reflow
}
Some background information here: https://gist.github.com/paulirish/5d52fb081b3570c81e3a
length of table read from a data file in r
I have a simple table with the following entries.
1
2
3
4
5
The file name is "test.txt". I have used the following command to read in the file.
mydata<-read.table("test.txt")
But when I enter
length(mydata)
it shows 1 instead of 5. Why does it show 1 and not 5?
It gives the number of columns, i.e. ncol(mydata) and length(mydata) will be the same. If you need 5, then use length(mydata[,1]) or nrow(mydata)
Thanks akrun! How do I convert this data into a vector of the data elements whose length would be 5?
Just do mydata[,1] as you have one column or unlist(mydata)
Thanks once again! It works! Can you briefly explain the background as to how it worked?
Here, we are converting the data.frame which has 2 dimensions (row, column) to a vector with just one dimension (length)
Thanks again! What if I needed to convert a vector into a 2-dimensional data frame?
If v1 is a vector, then dat <- data.frame(v1) should be a data.frame
Thanks arun for helping me with the basics!
You might get a better grasp by reading the introductory materials on R.
I believe
nrow(mydata)
should return the number of rows (5)
Yes icanbenchurat, it does!
The length of the data frame will give you the number of columns present in the data frame. In this case it is 1.
mydata<- data.frame(c(1:5))
The above code creates a dataframe
X1.5
1 1
2 2
3 3
4 4
5 5
Let's see some commands
length(mydata)
[1] 1
To know the number of rows
case 1
nrow(mydata)
[1] 5
case 2: To know the number of elements in first column of a dataframe
length(mydata$X1.5)
[1] 5
length(mydata[[1]])
[1] 5
length() is mostly used for vectors; for a data frame it is better to use the nrow command.
Regards,
Ganesh
Using Array class in parent and child windows (javascript question)
I'm actually coding a case where a child popup window transfer data to parent window as:
var childArrayData = new Array();
childArrayData[0] = 'Data text1';
childArrayData[1] = 'Data text2';
childArrayData[2] = 'Data text3';
window.opener.parentVariable = childArrayData;
I got an error which was solved like:
var childArrayData = new window.opener.Array(); <-----
childArrayData[0] = 'Data text1';
childArrayData[1] = 'Data text2';
childArrayData[2] = 'Data text3';
window.opener.parentVariable = childArrayData;
Why is the Array class different between two different windows? Does it relate to namespacing? Can you refer me to any article about the answer?
Thanks in advance.
Best,
Esteve
Are you loading something that could alter the Array object, such as prototype.js?
What error? Any time you're asking for help, if you find yourself typing "an error", go back and actually say what the error is.
That's true T.J. I did not write the error because it was not related to the array itself. Thanks ;)
@Esteve: It was entirely relevant to your question.
It's a known issue. Read this post on comp.lang.javascript written by Douglas Crockford.
When you say Array, you are talking about window.Array. window is the browser's context object, and you get one per page (or frame). All of the arrays created within a context will have their constructor property set to window.Array.
An array created in a different context has a different window.Array, so your test
myArray instanceof Array
fails. The ECMAScript standard does not discuss multiple contexts, even though virtually all implementations support them. The ECMAScript standard also fails to provide a reliable technique for testing the type of arrays. The obvious thing would have been
why are you even using the Array constructor?
try using the [ ] notation instead
which browser are you using btw? are the two frames on the same domain?
Hi Ken,
in fact, the problem is not only with the Array class but with any class "shared" between parent and child windows.
All tests were done on Firefox.
Sorry to ask, but what do you mean by the domain?
Thanks Ken for your answer ;)
Esteve
ParameterSet doesn't work in custom PowerShell cmdlet in C#
I have the following code, where I try to create a custom Cmdlet for PowerShell using C#.
What I want to do with my custom cmdlet is that the user should call it with two parameters, of which the first should be -Text, -File, or -Dir, and the second should be a value: a string which specifies the value for the text, file, or directory.
The point is that if I try to call my cmdlet with a parameter -Test, for example, which is of course a wrong parameter, I don't get the value of the default statement, which says "Bad parameter name".
I guess my code just doesn't get to the default part of the switch.
By the way, SHA256Text, SHA256File, and SHA256Directory, are just custom helper methods that I have written, so don't worry about them.
using System;
using System.IO;
using System.Text;
using System.Security.Cryptography;
using System.Management.Automation;
namespace PSSL
{
[Cmdlet(VerbsCommon.Get, "SHA256")]
public class GetSHA256 : PSCmdlet
{
#region Members
private bool text;
private bool file;
private bool directory;
private string argument;
#endregion
#region Parameters
[Parameter(Mandatory = true, Position = 0, ParameterSetName = "Text")]
public SwitchParameter Text
{
get { return text; }
set { text = value; }
}
[Parameter(Mandatory = true, Position = 0, ParameterSetName = "File")]
public SwitchParameter File
{
get { return file; }
set { file = value; }
}
[Parameter(Mandatory = true, Position = 0, ParameterSetName = "Directory")]
public SwitchParameter Dir
{
get { return directory; }
set { directory = value; }
}
[Parameter(Mandatory = true, Position = 1)]
[ValidateNotNullOrEmpty]
public string Argument
{
get { return argument; }
set { argument = value; }
}
#endregion
#region Override Methods
protected override void ProcessRecord()
{
switch(ParameterSetName)
{
case "Text":
SHA256Text(argument);
break;
case "File":
SHA256File(argument);
break;
case "Directory":
SHA256Directory(argument);
break;
default:
throw new ArgumentException("Error: Bad parameter name.");
}
}
#endregion
}
}
Does it reach ProcessRecord at all? If it does, are you sure none of the cases from the switch get executed? Because you make it sound like the C# compiler is broken. And I very much doubt that is the problem.
PS: How about making the use of your CmdLet easier by defining a default parameterset? [Cmdlet(VerbsCommon.Get, "SHA256", DefaultParameterSetName = "Text")]
Since PowerShell does not have enough information to 'bind' to the parameters, ProcessRecord is not called at all, and instead gives an error. So you never see your switch getting executed.
Parameter set cannot be resolved using the specified named parameters.
+ CategoryInfo : InvalidArgument: (:) [], ParameterBindingException
+ FullyQualifiedErrorId : AmbiguousParameterSet,
You should not even want to. Let PowerShell handle the parameters. You can fix your CmdLet by doing the following:
Remove the Position = 0 for the SwitchParameters. It's not logical
to have positional SwitchParameters. PowerShell can detect SwitchParameters with ease in any position
Do not make the SwitchParameters mandatory. It is of no use. The switch is either there or not, true or false. Forcing it to be there does not make sense. It's like forcing it to be true.
Give your CmdLet a DefaultParameterSetName. Logical would be to choose one of the defined parameter sets, like 'Text'
Make your Argument parameter Position = 0. Since the SwitchParameters don't need a position this is the first.
Now you can call it like this:
Get-SHA256 'SomeText'
Get-SHA256 'SomeText' -Text
Get-SHA256 -Text 'SomeText'
Get-SHA256 'C:\Some.File' -File
Get-SHA256 -File 'C:\Some.File'
etc
Also consider checking the SwitchParameters' values instead of switching on the ParameterSetName. SwitchParameters can be called like -File:$false. Fall back to your default.
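Putting those suggestions together, the parameter declarations would look roughly like this (a sketch only; the method bodies and helper methods from the question stay as they are):

```csharp
[Cmdlet(VerbsCommon.Get, "SHA256", DefaultParameterSetName = "Text")]
public class GetSHA256 : PSCmdlet
{
    // Switch parameters: not positional, not mandatory.
    [Parameter(ParameterSetName = "Text")]
    public SwitchParameter Text { get; set; }

    [Parameter(ParameterSetName = "File")]
    public SwitchParameter File { get; set; }

    [Parameter(ParameterSetName = "Directory")]
    public SwitchParameter Dir { get; set; }

    // The value becomes the first (and only) positional parameter.
    [Parameter(Mandatory = true, Position = 0)]
    [ValidateNotNullOrEmpty]
    public string Argument { get; set; }
}
```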
Lars is correct. The parameter binder is a separate procedure external to your cmdlet. If the syntax is not correct, the cmdlet is not even executed.
The point is that if I call my function like Get-SHA256 -Text "Some text here.", it works just fine; using the -File or -Dir options also yields good results. If I don't write the first or second argument, or write something else like -Test, it doesn't work, of course. But I cannot seem to find the error message Bad parameter name displayed anywhere.
@erkant - you won't see that error message. You're not understanding at all what we're saying. Your cmdlet is NOT the thing that verifies parameters. Powershell verifies them before running your cmdlet. It reads the metadata from the command. The command does not ever run if parameters are not valid, hence you will NEVER see your message.
How to return all whole positive values for a+b = c+d while a,b,c, and d are not equal to each other?
Is there a way I can find all the values between certain parameters, such as (1,10), that satisfy a+b = c+d while a, b, c, and d are not equal to each other?
var a;
var b;
var c;
var d;
function findValues (lowerbound, upperbound){
if ((a + b) === (c + d) && (a != b != c != d)) {
//some code
return(values)
}
}
findValues(1,10);
So if I insert 1 as the lower bound and 10 as the upper bound, it will return all values that meet the conditions, in a sorted fashion.
I mean, I'm not sure what you are trying to accomplish in your code block. You aren't doing anything to assign a, b, c or d. What is supposed to be a; would it be lowerbound? What would b be, lowerbound + 1? How about c and d? Additionally, you are doing assignments and missing elements. Is this a real attempt, or just dumping blank code to see who picks it up for you?
It looks like a subset sum problem, where one set of numbers should reach a certain sum and another set should reach the same. Please add some examples and the wanted result.
Your code is using the assignment operator (=) in the if statement when you are trying to test for equality.
Replace it with the strict equality comparison (===).
if ((a + b) === (c + d) && (a != b != c != d))
Put your function in a loop where you modify the variables, and return true/false based on the logic above.
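A minimal sketch of that loop (note that the chained a != b != c != d from the question does not actually test pairwise distinctness, since a != b evaluates to a boolean first; a Set is used here instead):

```javascript
function findValues(lowerbound, upperbound) {
  const results = [];
  for (let a = lowerbound; a <= upperbound; a++) {
    for (let b = lowerbound; b <= upperbound; b++) {
      for (let c = lowerbound; c <= upperbound; c++) {
        for (let d = lowerbound; d <= upperbound; d++) {
          // all four values pairwise different <=> the Set has 4 elements
          if (a + b === c + d && new Set([a, b, c, d]).size === 4) {
            results.push([a, b, c, d]);
          }
        }
      }
    }
  }
  return results;
}

console.log(findValues(1, 4));
// e.g. [1, 4, 2, 3] is included because 1 + 4 === 2 + 3
```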
Statistics of a Gaussian random variable with the floor function transformation
Suppose $X$ is a Gaussian random variable, i.e., $X\sim N(\mu, \sigma)$. Let $Y$ be defined as $Y=\lfloor X \rfloor$, where $\lfloor \cdot \rfloor$ denotes the floor function (the greatest integer less than or equal to $X$). It can be seen from here that $$P(Y=y)=P(y\le X< y+1)=P(X<y+1)-P(X\le y)\\ =\Phi\left(\frac{y+1-\mu}{\sigma}\right)-\Phi\left(\frac{y-\mu}{\sigma}\right) \\ =\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{y+1-\mu}{\sigma\sqrt{2}}\right)\right]-\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{y-\mu}{\sigma\sqrt{2}}\right)\right]$$ where $\Phi(\cdot)$ is the CDF of the standard Gaussian and $\operatorname{erf}(\cdot)$ is the error function.
I have two questions:
(1) Is $Y$ a regular discrete random variable? Or it does not have a standard expression (i.e., does it belong to any families of distributions)?
(2) More importantly, what is the mean and variance of random variable $Y$?
Please help and thanks in advance!
What do you mean by "regular"? It is a discrete random variable taking values in the integers, with a rather unpleasant probability mass function.
@RobertIsrael What I want to say is if $Y$ can be reduced to a pleasant form or belongs to any families of distributions. But more importantly, I'm interested in its mean and variance.
Same question asked recently at https://mathematica.stackexchange.com/questions/278040/finding-the-mean-and-variance-of-a-distribution/278164#278164.
Note that you can write $X = \lfloor X \rfloor + \{X\}$, where $\{X\}$ is the fractional part of $X$. Using the Poisson summation formula, I get (if I'm not mistaken)
$$ \mathbb E\lfloor X \rfloor = \mu - \mathbb E \{X\} = \mu - \frac{1}{2} + \frac{1}{\pi} \sum_{j=1}^\infty e^{-2\pi^2\sigma^2 j^2} \frac{\sin(2\pi j \mu)}{j} $$
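A numerical sanity check of this identity is straightforward: compare the direct sum $\sum_y y\,P(Y=y)$, computed from the pmf in the question, against the series above (the parameter values below are arbitrary):

```python
import math

mu, sigma = 0.3, 0.2  # arbitrary test values

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Direct computation: E[floor(X)] = sum_y y * P(Y = y)
direct = sum(y * (Phi((y + 1 - mu) / sigma) - Phi((y - mu) / sigma))
             for y in range(-60, 61))

# Series from the Poisson summation formula above
series = mu - 0.5 + (1.0 / math.pi) * sum(
    math.exp(-2.0 * math.pi**2 * sigma**2 * j**2)
    * math.sin(2.0 * math.pi * j * mu) / j
    for j in range(1, 200))

print(direct, series)  # the two values agree to high precision
```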
+1. Note that if $\mu$ is an integer or half an integer then this is exactly $\mu -\frac12$ for any $\sigma^2$, and that unless $\sigma^2$ is small, $\mu -\frac12$ is a good approximation for any $\mu$
Thanks for your answer! Is there a way to calculate or approximate the variance?
I tried the calculation myself to get the result you provided, but so far no success. Could you please show more steps on how to get it?
Selenium: How to send keys to a JQuery input using Javascript
I have been unsuccessful in sending keys to a jQuery input. I'm trying to automate login on this site, and their login is a PrettyPhoto jQuery lightbox popup. Nothing I've tried works.
This is my code:
driver.findElement(By.xpath("//*[@id=\"pp_full_res\"]/div/form/p[1]/input")).sendKeys("username");
driver.findElement(By.xpath("//*[@id=\"pp_full_res\"]/div/form/p[2]/input")).sendKeys("password");
driver.findElement(By.xpath("//*[@id=\"prettyPhotoLogin\"]")).click();
Something else I tried but still did not work is:
JavascriptExecutor executor = ((JavascriptExecutor) driver);
executor.executeScript("document.getElementsByName('username').value='username'");
The page is http://www.besthorrormovielist.com/ and the login link is located on the footer.
EDIT:
I'm able to click on the login link in the footer and open the jQuery login popup, but Selenium does not see the username and password input fields within the popup.
Any help is greatly appreciated.
Please read [ask], especially the part about [mcve] (MCVE), and How much research effort is expected? This will help you debug your own programs and solve problems for yourself. If you do this and are still stuck you can come back and post your MCVE, what you tried, and the execution result including any error messages so we can better help you. Also provide a link to the page and/or the relevant HTML.
The Login link is located on the footer which is not within the Viewport when Selenium opens the page. We need to bring the intended WebElement within the Viewport to interact with it, click the element to open the Login form and then try to invoke sendKeys("username") method as follows:
WebElement myelement = driver.findElement(By.xpath("//div[@id='footerbottom']//a[@href='#loginform']"));
((JavascriptExecutor)driver).executeScript("arguments[0].scrollIntoView()", myelement);
myelement.click();
Hello DebajanB, thanks for your response, but the above code did not work. I'm able to open the jQuery login popup but I cannot send keys to the username and password inputs within it. When I check whether Selenium sees the name and password input fields, I get Element not visible.
What is the best way to draw this diagram?
What is the best way to draw this diagram?
See a MWE:
\documentclass[border=2cm]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[x=2.5cm]
\draw[black,->,thick,>=latex]
(0,0) -- ++(5.5,0) node[above right] {$t$};
\draw[black,thick]
(0,0) -- ++(0,5pt) node[above] {4000\,BC};
\draw[black,thick]
(1,0) -- ++(0,5pt) node[above] {1800};
\draw[black,thick]
(2,0) -- ++(0,5pt) node[above] {1960};
\draw[black,thick]
(3,0) -- ++(0,5pt) node[above] {1980};
\draw[black,thick]
(4,0) -- ++(0,5pt) node[above] {2000};
%
\node[above right] at (-0.05,-0.8cm) {Manual Processing -- paper and pencil};
\node[above right] at (0.9,-1.3cm) {Mechanical -- punched card};
\node[above right] at (1.8,-1.8cm) {Stored Program -- sequential record processing};
\node[above right] at (2.5,-2.3cm) {Online -- navigational set processing};
\node[above right] at (3.1,-2.8cm) {Non-Procedural -- relational};
\node[above right] at (3.6,-3.3cm) {Multi-Media -- internetwork};
\end{tikzpicture}
\end{document}
To add outline and color use, for example, \node[draw=black,fill=green!25,above right] at (-0.05,-0.8cm) {Manual Processing -- paper and pancil};
I'm not sure how precisely you want to follow the image. (For example, you've added an arrow, tick marks and t, which are not in the image you posted....)
This is something of a compromise. It retains some aspects of the close spacing (lines very close to the tops of the lettering in the nodes) but keeps the tick marks and the arrowed timeline t, since I assume you wouldn't have added them if you didn't want them.
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{backgrounds,positioning}
\begin{document}
\begin{tikzpicture}
[
x=2cm,
epoch line/.append style={draw, ultra thick, black},
font=\bfseries,
year/.append style={font=\sffamily\bfseries},
]
\draw[black,->,thick,>=latex]
(0,0) -- ++(5.5,0) node[above right] {$t$};
\draw[black,thick]
(0,0) -- ++(0,5pt) node (4000bce) [above, year] {4000\,BCE};
\draw[black,thick]
(1,0) -- ++(0,5pt) node (1800) [above, year] {1800};
\draw[black,thick]
(2,0) -- ++(0,5pt) node (1960) [above, year] {1960};
\draw[black,thick]
(3,0) -- ++(0,5pt) node (1980) [above, year] {1980};
\draw[black,thick]
(4,0) -- ++(0,5pt) node (2000) [above, year] {2000};
%
\node (pap) [above right] at (-0.05,-0.8cm) {Manual Processing -- paper and pencil};
\node (pc) [above right, inner xsep=.1em] at (0.9,-1.3cm) {Mechanical -- punched card};
\node (srp) [above right, inner xsep=.1em] at (1.8,-1.8cm) {Stored Program -- sequential record processing};
\node (nsp) [above right, inner xsep=.1em] at (2.5,-2.3cm) {Online -- navigational set processing};
\node (reln) [above right, inner xsep=.1em] at (3.1,-2.8cm) {Non-Procedural -- relational};
\node (inet) [above right, inner xsep=.1em] at (3.6,-3.3cm) {Multi-Media -- internetwork};
\path [epoch line] (pap.north -| 4000bce) -- (5,0 |- pap.north);
\path [epoch line] (pap.south -| 4000bce) -- (5,0 |- pap.south);
\path [epoch line] (pc.south west) -- (5,0 |- pc.south);
\path [epoch line] (srp.south west) -- (5,0 |- srp.south);
\path [epoch line] (nsp.south west) -- (5,0 |- nsp.south);
\path [epoch line] (reln.south west) -- (5,0 |- reln.south);
\path [epoch line] (inet.south west) -- (5,0 |- inet.south);
\path [epoch line] (pc.south west |- pap.south) -- (pc.south west);
\path [epoch line] (srp.south west |- pc.south) -- (srp.south west);
\path [epoch line] (nsp.south west |- srp.south) -- (nsp.south west);
\path [epoch line] (reln.south west |- nsp.south) -- (reln.south west);
\path [epoch line] (inet.south west |- reln.south) -- (inet.south west);
\begin{scope}[on background layer]
\fill [left color=red!60!white, right color=white] (inet.north west) -- (5,0 |- inet.north) -- (5,0 |- inet.south) -- (inet.south west) -- cycle;
\fill [left color=orange!60!white, right color=white] (reln.north west) -- (5,0 |- reln.north) -- (5,0 |- reln.south) -- (reln.south west) -- cycle;
\fill [left color=magenta!60!white, right color=white] (nsp.north west) -- (5,0 |- nsp.north) -- (5,0 |- nsp.south) -- (nsp.south west) -- cycle;
\fill [left color=blue!60!white, right color=white] (srp.north west) -- (5,0 |- srp.north) -- (5,0 |- srp.south) -- (srp.south west) -- cycle;
\fill [left color=blue!60!green!60!white, right color=white] (pc.north west) -- (1980 |- pc.north) -- (1980 |- pc.south) -- (pc.south west) -- cycle;
\fill [fill=white] (1980 |- pc.north) -- (5,0 |- pc.north) -- (5,0 |- pc.south) -- (1980 |- pc.south) -- cycle;
\fill [left color=white, right color=green!70!blue!60!white] (pap.north -| 4000bce) -- (5,0 |- pap.north) -- (5,0 |- pap.south) -- (pap.south -| 4000bce) -- cycle;
\end{scope}
\end{tikzpicture}
\end{document}
Doubtless there are better methods...
@Youra_P The customary way to say thank you on this site is to accept the answer which helps you the most and/or to upvote however many answers you find useful. But you should wait a couple of days before accepting anything, I believe.
Split number into an integer and decimal
Input 23.893 would become INTEGER - 23, DECIMAL - 0.893
Here's the key snippet from my code
double realNumber = scan.nextDouble(); // store the keyboard input in variable realNumber
double integerPart = Math.floor(realNumber); // round down, e.g. 23.893 --> 23.0
double fractionPart = realNumber - integerPart; // find the remaining decimal
When I try this with long numbers, the decimal part differs slightly from the actual value.
Input -<PHONE_NUMBER>41234134 becomes INTEGER - 234.0,
DECIMAL - 0.3241123412341267
Use a regular expression to split it based on the decimal point. You may need to convert to a string to perform substring operations.
As stated by @dasblinkenlight, you can use methods from BigDecimal and combine them with a splitter. I wouldn't count the decimal points though; it's easier to stick with BigDecimal.
For example:
public static void main(String[] args) {
String realNumber = "234.324823783782";
String[] mySplit = realNumber.split("\\.");
BigDecimal decimal = new BigDecimal(mySplit[0]);
BigDecimal real = new BigDecimal(realNumber);
BigDecimal fraction = real.subtract(decimal);
System.out.println(String.format("Decimal : %s\nFraction: %s", decimal.toString(),fraction.toString()));
}
This outputs the following:
Decimal : 234
Fraction: 0.324823783782
Your program's logic is correct. The problem is the representation that you have selected for your input.
double is inherently imprecise, so the lower-order digits are often not represented correctly. To fix this problem, you could use BigDecimal or even a String data type.
If you use BigDecimal, simply rewrite your algorithm using the methods of BigDecimal. If you use String, split the user input at the decimal point, and add the required zeros and decimal points after the whole part and before the fractional part.
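A minimal sketch of the String route just described (positive inputs containing a decimal point only; the input value is the one from the earlier answer):

```java
public class SplitNumber {
    public static void main(String[] args) {
        String input = "234.324823783782";
        int dot = input.indexOf('.');
        // Everything before the point is the whole part;
        // prefixing "0" to the remainder rebuilds the fraction exactly.
        String wholePart = input.substring(0, dot);
        String fractionPart = "0" + input.substring(dot);
        System.out.println(wholePart);    // 234
        System.out.println(fractionPart); // 0.324823783782
    }
}
```

Because the digits never pass through a double, no precision is lost.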
Why when I add my animation with Core Animation does it immediately snap back to what it was at the end?
I'm trying to build a simple pie-chart-style progress indicator. It animates well, but after the animation concludes, it snaps right back to the value it had before the animation.
My code is as follows:
- (void)drawRect:(CGRect)rect
{
[super drawRect:rect];
// Create a white ring that fills the entire frame and is 2 points wide.
// Its frame is inset 1 point to fit for the 2 point stroke width
CGFloat radius = CGRectGetWidth(self.frame) / 2;
CGFloat inset = 1;
CAShapeLayer *ring = [CAShapeLayer layer];
ring.path = [UIBezierPath bezierPathWithRoundedRect:CGRectInset(self.bounds, inset, inset)
cornerRadius:radius-inset].CGPath;
ring.fillColor = [UIColor clearColor].CGColor;
ring.strokeColor = [UIColor whiteColor].CGColor;
ring.lineWidth = 2;
// Create a white pie-chart-like shape inside the white ring (above).
// The outside of the shape should be inside the ring, therefore the
// frame needs to be inset radius/2 (for its outside to be on
// the outside of the ring) + 2 (to be 2 points in).
self.innerPie = [CAShapeLayer layer];
inset = radius/2; // The inset is updated here
self.innerPie.path = [UIBezierPath bezierPathWithRoundedRect:CGRectInset(self.bounds, inset, inset)
cornerRadius:radius-inset].CGPath;
self.innerPie.fillColor = [UIColor clearColor].CGColor;
self.innerPie.strokeColor = [UIColor whiteColor].CGColor;
self.innerPie.lineWidth = (radius-inset)*2;
[self.layer addSublayer:ring];
[self.layer addSublayer:self.innerPie];
self.progress = 0.0;
self.innerPie.hidden = YES;
}
- (void)setProgress:(CGFloat)progress animated:(BOOL)animated {
self.innerPie.hidden = NO;
CABasicAnimation *pathAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
pathAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
pathAnimation.duration = 3.0;
pathAnimation.fromValue = [NSNumber numberWithFloat:_progress];
pathAnimation.toValue = [NSNumber numberWithFloat:progress];
[self.innerPie addAnimation:pathAnimation forKey:@"strokeEndAnimation"];
if (_progress != progress) {
_progress = progress;
}
}
I simply set it up in drawRect, then have a method to set the progress with an animation. What am I doing wrong here?
That's how Core Animation works. What you want to do is set the property to its final value just before adding the animation to the layer. Then the animation hides the fact that you changed the property, and when the animation is removed after it's finished, the property is at its end value.
What property would I be setting in this case?
In this case it would be the strokeEnd property - See this question - http://stackoverflow.com/questions/16326623/how-do-i-make-a-layer-maintain-the-final-value-of-a-cabasicanimation
As @Paulw11 says, for your code above, it's the layer's strokeEnd property. In more general terms, it's whatever property you're modifying with the animationWithKeyPath: key path that you specify when you create the animation.
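Applied to the code in the question, the fix sketched in the comments would look roughly like this (untested sketch):

```objc
- (void)setProgress:(CGFloat)progress animated:(BOOL)animated {
    self.innerPie.hidden = NO;

    CABasicAnimation *pathAnimation = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
    pathAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
    pathAnimation.duration = 3.0;
    pathAnimation.fromValue = [NSNumber numberWithFloat:_progress];
    pathAnimation.toValue = [NSNumber numberWithFloat:progress];

    // Update the model layer first so the layer keeps the final value
    // once the animation is removed...
    self.innerPie.strokeEnd = progress;
    // ...then add the animation, which covers up the jump.
    [self.innerPie addAnimation:pathAnimation forKey:@"strokeEndAnimation"];

    _progress = progress;
}
```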
How to make XStream map unmapped tags to an HashMap when parsing XML?
I have an XML document that I want to convert to a Java bean. I want to map the missing fields in my bean to a HashMap because these field names keep varying. Is there a way to do this?
For example, my XML looks like
<employee>
<firstname>stack</firstname>
<lastname>alpha</lastname>
<phone1>999-999-9999</phone1>
</employee>
My Java bean looks like
@XStreamAlias("employee")
public class Employee {
private String firstname;
private String lastname;
private Map<String, String> unknownFields;
}
When I load the XML into my Java bean, it should look like
firstname="stack", lastname="alpha", unknownFields=[{"phone1","999-999-9999"}]
I know this is a bad design, but I wanted to check whether this could be implemented using XStream.
Implement your own Converter. Take a look at JavaBeanConverter for a reference implementation, because you want to stay as close to that as possible.
The only place you would need to handle things differently is this (within the unmarshal method)
if (propertyExistsInClass) {
Class<?> type = determineType(reader, result, propertyName);
Object value = context.convertAnother(result, type);
beanProvider.writeProperty(result, propertyName, value);
seenProperties.add(new FastField(resultType, propertyName));
} else {
throw new MissingFieldException(resultType.getName(), propertyName);
}
Where the else block throws the MissingFieldException, you should populate your Map instead. The XML element name is easily available here, but you need to tweak the code to read the value. The clue for getting the value is in the if block just above:
Object value = context.convertAnother(result, type);
The only problem is you don't have a Java type to convert the value into. String, maybe?
I would inherit from JavaBeanConverter and just override the unmarshal method to suit your needs.
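To make the routing idea concrete without depending on XStream internals, here is a stdlib-only sketch of my own (not the actual converter code; the Employee class and field names mirror the hypothetical bean from the question). It models the decision the unmarshal loop has to make: write to a matching bean field if one exists, otherwise put the element into the map.

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class UnknownFieldRouting {
    // Hypothetical bean mirroring the Employee example from the question.
    static class Employee {
        String firstname;
        String lastname;
        Map<String, String> unknownFields = new HashMap<>();
    }

    // Route each parsed (elementName, text) pair either to a matching bean
    // field (via reflection) or, when no such field exists, into the map --
    // the same branch where the real converter throws MissingFieldException.
    static Employee populate(Map<String, String> parsed) {
        Employee e = new Employee();
        for (Map.Entry<String, String> entry : parsed.entrySet()) {
            try {
                Field f = Employee.class.getDeclaredField(entry.getKey());
                f.setAccessible(true);
                f.set(e, entry.getValue());
            } catch (NoSuchFieldException missing) {
                // Equivalent of the MissingFieldException branch.
                e.unknownFields.put(entry.getKey(), entry.getValue());
            } catch (IllegalAccessException ex) {
                throw new RuntimeException(ex);
            }
        }
        return e;
    }

    public static void main(String[] args) {
        Map<String, String> parsed = new LinkedHashMap<>();
        parsed.put("firstname", "stack");
        parsed.put("lastname", "alpha");
        parsed.put("phone1", "999-999-9999");
        Employee e = populate(parsed);
        // prints: stack alpha {phone1=999-999-9999}
        System.out.println(e.firstname + " " + e.lastname + " " + e.unknownFields);
    }
}
```

In the real converter you would do the same `put` inside the overridden unmarshal, with the element name from the reader and the value converted as a String.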
Thanks Soumik. Understood your hint. Will give it a try.
How to visualise a Rust ndarray in VSCode?
I am using VSCode to develop my Rust application, which contains lots of NDArrays. When I set a breakpoint and watch an NDArray variable, it only shows pointers, and I don't know how to view the contents. I tried both cppvsdbg and lldb (vscode-codelldb), and both have the same issue. Is there any command I can type in the debug console to expand the variable?
There's also an issue raised about this on NDarray's Github page: https://github.com/rust-ndarray/ndarray/issues/827
Yes, it was me who raised the GitHub issue, and thanks for reminding me. I will add my findings as an answer here.
I am now able to visualise it using the natvis file below:
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
<Type Name="ndarray::ArrayBase<*,*>">
<DisplayString>{{Modified by Selva}}</DisplayString>
<Expand HideRawView="false">
<ArrayItems>
<Direction>Forward</Direction>
<Rank>sizeof(dim.index)/sizeof(*dim.index)</Rank>
<Size>(int)dim.index[$i]</Size>
<!-- <Size>$i==0?(int)dim.index[0]:(int)dim.index[1]</Size> -->
<ValuePointer>data.ptr.pointer</ValuePointer>
</ArrayItems>
</Expand>
</Type>
</AutoVisualizer>
It would be helpful if you explained how to use this or provided a link to where this information can be found.
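In reply to the comment above: with the VS Code C/C++ extension's cppvsdbg debugger, a .natvis file can be wired in through the launch configuration's visualizerFile field. This is a sketch of such a launch.json entry; the program path and natvis file name here are assumptions for illustration:

```json
{
    "name": "Debug (Windows)",
    "type": "cppvsdbg",
    "request": "launch",
    "program": "${workspaceFolder}/target/debug/myapp.exe",
    "cwd": "${workspaceFolder}",
    "visualizerFile": "${workspaceFolder}/ndarray.natvis"
}
```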
Raphael.draggable no longer works with Raphael
I have been using raphael.draggable with a project that I am currently building. I have upgraded to raphael 2.0 and now the plugin seems to have stopped working and I can't work out why.
The error message that I get is:
paper.draggable is undefined
I've also posted an issue on github but as of yet no response.
Does anyone have any ideas why it no longer works with Raphael? I originally thought it was how the Raphael paper is returned, but this seems to be the same as in the older version.
I'd appreciate any advice/guidance.
Thanks.
There is Raphael.FreeTransform:
https://github.com/ElbertF/Raphael.FreeTransform
You can disable rotating and scaling if you just need dragging functionality. Demo here:
http://alias.io/raphael/free_transform/
Thanks :) I've been following the thread on Google Groups for the past week or so...
What have we [deleted]?
The deleted tag has a total of 117 posts (a new post every ~6 days); of those, 59% have a score of 0 or less, 9% are closed, there are only 3 followers, and the tag's excerpt and wiki are both empty.
To me, deleted seems like a past tense version of delete. What does deleted have to offer us that delete can't?
Would some sort of clean up, synonym or even a burnination be in order here (for either tag)?
As jordanhill123 mentioned, whatever we end up doing, the deletion tag could also join deleted.
20 minutes and no activity. Looks like Meta is in shock !! ;-)
What about deletion? Should it be in the same category and in consideration for cleanup too?
It sounds to me like you're suggesting [tag:deleted] be made into a synonym for [tag:delete]. The question is whether [tag:delete] would be considered valid, although it looks like you think [tag:delete] should not be deleted. ;) (I would tend to agree with the latter point.)
I don't see how it would be possible to remove it, since the tag is already [tag:deleted]!
delete is a C++ keyword which can probably refer to quite a wide range of questions about deleted methods, their semantics etc. So I believe it would be a valid tag. However in this case the wiki summary should be edited to include this meaning.
This post needs to be tagged [tag:recursion]
@Bakuriu delete is a keyword in PHP, too, and several other languages. However, can anyone be an expert in delete? If you removed all the other tags, would it make sense on its own? I don't think your metric for tag validity is correct.
These tags can be burninated, as the post has been open for a long time and no one has made any opposition. Enjoy your weekend deleting "deleted" :)
Just for context: it's usually included along with [tag:undo], [tag:recover], [tag:restore], or something of that nature.
Is [tag:soft-delete] still relevant? I would think so.
@metacubed Yes it is.
This should be a baleet request, not a burninate request.
Burnination in progress... Update: burnination completed
deleted completed at 11:30 on 10th August 2014(thanks to Unihedron, FunctionR & metacubed).
delete completed
deletion completed at 20:00 on 29th August 2015.
deleting completed at 23:00 on 29th August 2015.
If you would like to help participate, please remember to fix all outstanding issues with the question, such as,
Re-tagging with appropriate tags (in this case, delete-operator, memory-management, sql-delete, etc.),
fix typos/grammar/format,
remove noise (such as, "thanks", "I'm a newbie", etc.).
Could we add [[tag:deletion]] to the burniate queue?
@Chris Sure, I don't see any reason why not.
As the remaining three tags are still here after a year (and this post is tagged [status-completed]), I've posted a separate burninate request specifically for them.
Neither delete nor deleted makes any sense; they should both get removed since they are too vague and too broad.
There are so many operations, language features, OS commands and other unrelated things that could sort under those tags. As we can tell by taking a brief look at some posts tagged with these tags.
Is it a FERPA violation for instructor to display grades in public computer file system?
My instructor placed grade sheets for our first two projects into a "Dropbox" file system which has absolutely no access control. So not only every student could see every other student's grades and written assessments, but anyone walking into the college library could sit down at a computer and look at our grades without having to present any kind of credentials whatsoever.
Two things: 1) If a student's name is attached to a grade, then yes. If it was by student ID, perhaps not. 2) I suggest you do not talk smack about other students, saying that they are "considered too dumb to work with credentials and access controls."
Is this a college-wide approach to grading? Or just your instructor's approach?
I don't understand -- your instructor put the grade sheets in a public dropbox folder, that anyone with the link could access? I can see that this is not secure, but I don't understand how any Tom, Dick or Harry would find out where to look for that file, when sitting down at a computer in the college library. Did the instructor include a link to the files from his or her home page, or course webpage? Also, please clarify how the students are identified in the grade sheets.
FERPA prohibits public posting of grades together with students' names or social security numbers (including last 4 digits of SSN). (This prohibition includes, but is not limited to, posting grades where other students in the class can see one another's grades.)
(FERPA's treatment of student IDs depends on whether they are treated as personally identifiable information - in which case student IDs can not be disclosed in a way that allows student IDs to be matched to individual students - or whether they are treated as electronic identifiers, in which case they can be disclosed as "directory information" but may not be used to post grades. See this document for further details.)
In general, FERPA allows schools to put educational records in the "cloud", but places certain restrictions on how these records must be protected. Some "enterprise" or "education" versions of cloud services are FERPA-compliant (or can be made FERPA-compliant through special arrangement with the provider), and can be used to store educational records. A few "consumer"-grade cloud services are also FERPA compliant. Personal Dropbox accounts (not including Dropbox Business) are not, and should not be used to store FERPA-protected records, even as private Dropbox files that require authentication to view. See e.g. guidelines at UC Davis, University of Michigan, University of Oregon, University of Utah.
Since the OP's case is not final grades, just grades for 2 projects (out of many, presumably), is this necessarily a FERPA violation? The students' final grades can't necessarily be inferred from the posted grades of the first 2 projects. (Regardless, I agree that this was a bad move on the professor's part.)
@rturnbull FERPA prohibits the disclosure of "personally identifiable information from education records". All grades, test scores, project scores, etc. that are recorded by the instructor are considered education records, not only final grades.
Ah, thanks for the clarification. I'm a little rusty on the law (I live in Europe).
How to prevent rapid double click on a button
I have looked at the answers here - Android Preventing Double Click On A Button
and implemented qezt's solution, and I've also tried setEnabled(false), like so -
doneButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// mis-clicking prevention, using threshold of 1 second
if (SystemClock.elapsedRealtime() - doneButtonClickTime < 1000){
return;
}
//store time of button click
doneButtonClickTime = SystemClock.elapsedRealtime();
doneButton.setEnabled(false);
//do actual work
}
});
Neither of these work against super fast double clicks.
Note - I'm not setting doneButton.setEnabled(true) after my processing is done. I simply finish() the activity so there is no issue of the button getting enabled too soon.
why wasn't setEnabled(false) working ?
Where do you apply setEnabled(false)?
you can give it in onclick event
What are you clicking? It's a button, right? What is the name of the button?
not sure but maybe it works if you put everything that is in onClick in a synchronized block?
have you tried the second answer in the post
@Blackbelt this comment seems to answer why.
check if (!v.isEnabled()) { return; } as first thing in your onClick
@Blackbelt it won't work. Please see the comment I linked to in my first comment.
for kotlin handle click look at this link https://stackoverflow.com/a/56880661/7176189
Possible duplicate of Avoid button multiple rapid clicks
I am doing it like this and it works very well.
public abstract class OnOneOffClickListener implements View.OnClickListener {
private static final long MIN_CLICK_INTERVAL=600;
private long mLastClickTime;
public static boolean isViewClicked = false;
public abstract void onSingleClick(View v);
@Override
public final void onClick(View v) {
long currentClickTime=SystemClock.uptimeMillis();
long elapsedTime=currentClickTime-mLastClickTime;
mLastClickTime=currentClickTime;
if(elapsedTime<=MIN_CLICK_INTERVAL)
return;
if(!isViewClicked){
isViewClicked = true;
startTimer();
} else {
return;
}
onSingleClick(v);
}
/**
* This method delays simultaneous touch events of multiple views.
*/
private void startTimer() {
Handler handler = new Handler();
handler.postDelayed(new Runnable() {
@Override
public void run() {
isViewClicked = false;
}
}, 600);
}
}
I presume it needs to be implemented as a single class instance and then set as the click listener on all the desired views.
I use a function like this in the listener of a button:
public static long lastClickTime = 0;
public static final long DOUBLE_CLICK_TIME_DELTA = 500;
public static boolean isDoubleClick(){
long clickTime = System.currentTimeMillis();
if(clickTime - lastClickTime < DOUBLE_CLICK_TIME_DELTA){
lastClickTime = clickTime;
return true;
}
lastClickTime = clickTime;
return false;
}
This worked for me after reversing the return booleans. I have edited the answer.
Probably not the most efficient, but minimally invasive:
...
onClick(View v) {
MultiClickPreventer.preventMultiClick(v);
//your op here
}
...
public class MultiClickPreventer {
private static final long DELAY_IN_MS = 500;
public static void preventMultiClick(final View view) {
if (!view.isClickable()) {
return;
}
view.setClickable(false);
view.postDelayed(new Runnable() {
@Override
public void run() {
view.setClickable(true);
}
}, DELAY_IN_MS);
}
}
You can use this method. By using a post delay you can take care of double-click events.
void debounceEffectForClick(View view) {
view.setClickable(false);
view.postDelayed(new Runnable() {
@Override
public void run() {
view.setClickable(true);
}
}, 500);
}
Suru's answer worked well for me, thanks!
I'd like to add the following answer for anybody who's using https://github.com/balysv/material-ripple/blob/master/library/src/main/java/com/balysv/materialripple/MaterialRippleLayout.java and is facing this problem -
app:mrl_rippleDelayClick is true by default.
This means onClick will be executed only after the ripple has finished showing. Hence, setEnabled(false) inside onClick will be executed after a delay, because the ripple isn't done executing, and in that period you may double-click the view.
Set app:mrl_rippleDelayClick="false" to fix this. This means the call to onClick will happen as soon as the view is clicked, instead of waiting for the ripple to finish showing.
It's more of a comment than an answer
It was a better decision to post your answer than a comment, because I had the same problem with MaterialRippleLayout and found a separate solution for it. Thanks @Mallika.
private static final long MIN_CLICK_INTERVAL = 1000;
private boolean isViewClicked = false;
private View.OnClickListener onClickListener = new View.OnClickListener() {
@Override
final public void onClick(View view) {
if (isViewClicked) {
return;
}
// place your onClick logic here
isViewClicked = true;
view.postDelayed(new Runnable() {
@Override
public void run() {
isViewClicked = false;
}
}, MIN_CLICK_INTERVAL);
}
};
try to set yourbutton.setClickable(false)
like this :
onclick(){
yourbutton.setClickable(false) ;
ButtonLogic();
}
ButtonLogic(){
//your button logic
yourbutton.setClickable(true);
}
Hey, I've tried this but the button is still clickable if I press it in quick succession. I guess disabling the button or setting unclickable is not enough if you are doing computationally intensive work in onClick() since click events can get queued up before the button can be disabled. Any ideas?
I usually use a progress dialog to prevent the user from interacting with the UI if necessary. You may want to try it if the process you are doing takes longer than you wish.
Qezt's solution is already fine.
But if you want to prevent super-fast double-clicks, then simply reduce the threshold:
// half a second
if (SystemClock.elapsedRealtime() - doneButtonClickTime < 500) {
return;
}
// or 100ms (1/10 of second)
if (SystemClock.elapsedRealtime() - doneButtonClickTime < 100) {
return;
}
public static void disablefor1sec(final View v) {
try {
v.setEnabled(false);
v.setAlpha((float) 0.5);
v.postDelayed(new Runnable() {
@Override
public void run() {
try {
v.setEnabled(true);
v.setAlpha((float) 1.0);
} catch (Exception e) {
Log.d("disablefor1sec", " Exception while un hiding the view : " + e.getMessage());
}
}
}, 1000);
} catch (Exception e) {
Log.d("disablefor1sec", " Exception while hiding the view : " + e.getMessage());
}
}
I kept the above method in a static helper class, and I call this method for all button clicks like this:
button_or_textview.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
StaticFile.disablefor1sec(button_or_textview);
// Do Somthing.
}
});
Kotlin short version:
private var lastClickTime: Long = 0
//in click listener
if (SystemClock.elapsedRealtime() - lastClickTime < 1000) {
return@setOnClickListener
}
lastClickTime = SystemClock.elapsedRealtime()
Set the launchMode of the activity in manifest as singleTop
<activity
android:name="com.example.MainActivity"
android:screenOrientation="portrait"
android:launchMode="singleTop"/>
The question has nothing to do with launching activities
Joomla. How to get a model if I have got it before
For example, if I get some model in a controller, the model saves some of its properties. But if I get the same model again in a view or something else, the changes disappear.
$model1 = $this->getModel();
$model1->value = 123;
$model2 = $this->getModel();
echo $model2->value; //Empty
I tried to use $model2 =& $this->getModel(); but it is useless.
Is it possible to get the same model with its changed properties in every place?
Thanks.
The question is a little confusing. Are you saying you want to instantiate a model in a view? Or are you asking how does Joomla get the data from the model to the view? If you are in a controller or view $this is going to be a controller or view class, not a model object so it is not going to have a getModel() method.
For example, I have a model "Settings". It fetches data from database. But I need to get the data in various parts of my extension. I think it is not a good idea to ask the model to ask the database every time. I supposed that there is a mechanism should be to store (cache) the fetched data. Now I use setUserState() to store the data , but I think this method is not for such cases.
Have you looked at how the Joomla MVC works? Yes, setUserState is not a bad idea for some kinds of data (like filters), and caching is not a bad idea either. Many Joomla components use the internal cache, which is why you see the setStoreId() methods all over the place. I suggest that you look closely at the structure of the core components.
Yes, now I am on position "Developing a MVC Component/Adding backend actions". Which core component should be the first for learning?
People always suggest weblinks, in some ways it is the simplest.
Since you mention Settings as a model name, maybe a helper would be easier to work with; it will be static, so you'll always get the same instance.
Can you call a specific params.value from ag-grid into your HTML
I'm new to ag-grid and web development in general, so I have this column definition where I have my image URL and using cellRenderer to display them as image.
Now I wanted to use these images in my main HTML file and was wondering if there is a way I can pull specific parameter values (URL image links, in order) from ag-grid into my HTML. I imagine calling them by index or something.
Hope my question makes sense, Thanks,
can you elaborate a little on this?
In the example code below (from the ag-grid tutorial), is there a way for me to return the row data into my HTML file? Like, is it possible to extract the values "Toyota", "Ford" or "Celica" back into an HTML file?
const columnDefs = [
{ field: "make" },
{ field: "model" },
{ field: "price" }
];
// specify the data
const rowData = [
{ make: "Toyota", model: "Celica", price: 35000 },
{ make: "Ford", model: "Mondeo", price: 32000 },
{ make: "Porsche", model: "Boxter", price: 72000 }
];
Thanks
Assuming that you are working with the vanilla JavaScript implementation and have access to gridOptions, you can use gridOptions.api.forEachNode(callback) to iterate over each row node in the grid and access the data on each node.
Flex items not filling the height of its parent
I have a container with children items:
HTML
<div id="container">
<div className="item">1</div>
<div className="item">2</div>
<div className="item">3</div>
<div className="item">4</div>
</div>
CSS
#container {
display: flex;
flex-direction: row;
gap: 10px;
}
.item {
flex: 1;
text-align: center;
padding: 0px;
font-size: 30px;
background: #34ace0;
}
This flexbox container sits inside a grid layout, and the cell to the left of the one here has contents which cause the height of the flexbox shown here to be higher than the contents, as shown here:
I need the squares with the numbers inside to stretch/fill the height of its container, like this
...but with the text centered vertically as well.
I tried setting the height of the .item to 100% but it doesn't fill. Is there something like the free-remaining-space used in grid for flexbox?
Make sure the grid layout container has a height of 100vh and that the container you've shown also has a height of 100%.
To center the text inside of each item, you can make each of them display: flex.
.grid-container {
height: 100vh;
}
#container {
height: 100%;
display: flex;
flex-direction: row;
gap: 10px;
}
.item {
height: 100%;
display: flex;
align-items: center;
justify-content: center;
flex: 1;
padding: 0px;
font-size: 30px;
background: #34ace0;
}
Works perfectly. Thank you!
SOAP web service in JAX-RS REST service
I'm considering moving a SOAP web service from using a JAX-WS implementation to a JAX-RS implementation, but keeping the request/response the same (XMLs that meet the SOAP protocol). This would be to improve performance, have access to the streams, and use the NIO capabilities of Jersey 2/Servlet 3.1 to be able to do additional work after a response is sent back. Is there any reason I should not do this? Moving off of the SOAP protocol is not an option. I am using WebLogic 12c and plan to use Jersey 2 with the prefer-application-packages setting.
The service itself acts as a SOAP client to another SOAP service. I wanted to move that off of Spring WS to a REST client, because I didn't have any luck retrieving the response HTTP body content whenever it came as an HTML error page. I want to have access to that data to be able to send that back to my client. So same question as for the SOAP service itself, any issue with moving to a REST client or HTTP client?
You can do this if you are up to the task of implementing your own XML deserializers and serializers that meet the specs of the SOAP message exchange format; otherwise, it will be a great deal of work.
Prove 2 cups contain the same number of marbles without revealing quantity
I have seen proofs of this particular problem in a few articles/papers (which lead to a ZKP that two nuclear warheads are similar), but I find a problem in the proof with the marbles - surely the proof doesn't work because the experiment cannot be repeated.
Proof outline:
The prover claims a pair of cups each contain $n$ marbles and for some $N$, $1 \leq n \leq N$.
The prover prepares a pair of buckets and claims each contain $N-n$ marbles.
The prover pours each cup of marbles into a different bucket, but the verifier chooses which cup goes into which bucket.
The verifier confirms each bucket now contains $N$ marbles.
This can be repeated an arbitrary number of times until the verifier is satisfied.
Is there a way this proof can be rectified? (I'm assuming the marbles aren't necessarily identical so weighing won't work)
http://cvt.engin.umich.edu/wp-content/uploads/sites/173/2014/10/Glaser-Nature-Article.pdf
https://www.latimes.com/science/sciencenow/la-sci-sn-verification-nuclear-disarmament-20140625-story.html
Add $m - 2$ more pairs of cups. The prover claims that the first $m$ cups (including the two original ones) contain $n$ marbles, and the second $m$ cups contain $N-n$ marbles. You choose a matching between the two $m$-tuples, the prover pours the cups, and you check that they all contain $N$ marbles.
If the two original cups contained the same amount of marbles, then the prover will always win. Otherwise, suppose that the first $m$ cups contain $k$ different amounts of marbles, with $c_1,\ldots,c_k$ copies each (so $c_1 + \cdots + c_k = m$). For the prover to have any chance of winning, we need the second $m$ cups to be partitioned similarly. Out of the $m!$ many partitions that the verifier chooses, the number of ones for which the proof goes through is $c_1! \cdots c_k!$, hence the success probability is
$$
\frac{c_1! \cdots c_k!}{m!}.
$$
Let us now notice that if $a \geq b \geq 2$ then
$$
(a+1)! (b-1)! = \frac{a+1}{b} a! b! > a! b!.
$$
Therefore the quantity $c_1! \cdots c_k!$ is maximized, subject to $c_1,\ldots,c_k \geq 1$ and $c_1 + \cdots + c_k = m$, when $c_1 = m-k+1$ and $c_2 = \cdots = c_k = 1$, that is,
$$
\frac{c_1! \cdots c_k!}{m!} \leq \frac{(m-k+1)!}{m!} \leq \frac{(m-1)!}{m!} = \frac{1}{m},
$$
since $k \geq 2$ (recall the prover is trying to cheat now!).
While this is less efficient than standard zero-knowledge proofs (in which the error probability decreases exponentially in the amount of work), this does show that you can get the error as small as you want.
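As a concrete sanity check of this bound (my own numeric example, not from the original argument): take $m = 4$ and a cheating prover whose first four cups split into two amounts with $c_1 = c_2 = 2$. Then the success probability is
$$
\frac{c_1! \, c_2!}{m!} = \frac{2! \cdot 2!}{4!} = \frac{4}{24} = \frac{1}{6} \leq \frac{1}{4} = \frac{1}{m},
$$
consistent with the general estimate.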
Is it reasonable or common to use multiple unique return values in PHP?
I have a function:
public static function loginUser($username, $password)
{
...
//if no record was found where the username was matched
//then we fail the login request
if(!isset($record)) return Login::FAILURE_INCORRECT_USERNAME_OR_PASSWORD;
...
//create a new user token object
$userToken = new UserToken();
...
//give the token back to the caller
return $userToken;
}
There are two distinct return values; one is an error code, and the other is an object. I normally contest this type of programming; typically I would encapsulate the result code and context into another safely typed class... I may yet do this, but I am curious to know if this is reasonable or common in PHP.
Here is how I handle the call:
public static function handleLoginRequest($request)
{
$result = new LoginResult();
$token = Login::loginUser($request->Username, $request->Password);
if($token === Login::FAILURE_INCORRECT_USERNAME_OR_PASSWORD)
{
$result->FailureReason = $token;
$result->Successful = False;
return $result;
}
//give the token back in the result
$result->UserToken = $token;
$result->Successful = True;
//return the result
return $result;
}
I also wasn't sure if this was more appropriate for StackOverflow or Programmers...
It's quite okay, but in your case it would be more common to throw an exception.
The first function should return a LoginResult object, as you've indicated in the second method. You shouldn't check for magic values; it should be along the lines of:
LoginResult
Boolean:isSuccessful
String:failureReason
UserToken:token
I'd say be consistent: return false in case of failure and one type of data if everything went well (e.g. Array). For me it's quite important to keep consistency in the entire application. So, by default I expect whatever data type in case of success (usually mentioned in the inline annotation to what to expect) and false in case of failure.
P.S. It's somehow the same as using getter and setter methods. Of course you don't have to use it, but it will make your application more solid in the long run. It's just a good practice.
I'm stuck on this Java code using strings and characters. I can't get past the first input
QUESTION: Write a program to validate Canadian Postal Codes. A postal code must follow the pattern of L9L9L9 where:
L is a letter
9 is a digit
Your program should continue accepting postal codes until the user enters the word “exit”.
Sample run (user input is shown in bold underline):
Enter a postal code: T2T-3X7
Postal code wrong length, format L9L9L9
Enter a postal code: T2T3AA
Postal code has letters where there are supposed to be digits
Enter a postal code: T2T358
Postal code has digits where there are supposed to be letters
Enter a postal code: T2T3A8
Postal code is valid!
Enter a postal code: exit
MY CODE:
// Standard import for the Scanner class
import java.util.*;
public class postalCode {
public static void main (String [] args) {
// Create a Scanner object attached to the keyboard
Scanner input = new Scanner (System.in);
System.out.print("Enter a postal code: ");
String postalCode = input.nextLine();
while (!postalCode.contains("exit")) {
if (postalCode.length() > 6){
System.out.println("Postal code wrong length, format L9L9L9");
}
if (postalCode.length() < 6){
System.out.println("Postal code wrong length, format L9L9L9");
}
if (postalCode.length() == 6) {
for (int i = 0; i < postalCode.length(); i++) {
if (Character.isDigit(postalCode.charAt(0))){
System.out.println("Postal code has digits where there are supposed to be letters");
break;
}
if (Character.isDigit(postalCode.charAt(2))){
System.out.println("Postal code has digits where there are supposed to be letters");
break;
}
if (Character.isDigit(postalCode.charAt(4))){
System.out.println("Postal code has digits where there are supposed to be letters");
break;
}
if (Character.isLetter(postalCode.charAt(1))){
System.out.println("Postal code has letters where there are supposed to be digits");
break;
}
if (Character.isLetter(postalCode.charAt(3))){
System.out.println("Postal code has letters where there are supposed to be digits");
break;
}
if (Character.isLetter(postalCode.charAt(5))){
System.out.println("Postal code has letters where there are supposed to be digits");
break;
}
else {
System.out.println("Postal code is Valid!");
break;
}
}
}
}
}
}
You're never putting anything new into postalCode inside the loop. You're just either exiting the loop or checking the same value of postalCode again.
After your third if { } block you need to add postalCode = input.nextLine(); so that you can take the next input from the user. Because of this, you are not able to proceed after the first input.
while (!postalCode.contains("exit")) {
if (postalCode.length() > 6){
System.out.println("Postal code wrong length, format L9L9L9");
}
if (postalCode.length() < 6){
System.out.println("Postal code wrong length, format L9L9L9");
}
if (postalCode.length() == 6) {
for (int i = 0; i < postalCode.length(); i++) {
if (Character.isDigit(postalCode.charAt(0))){
System.out.println("Postal code has digits where there are supposed to be letters");
break;
}
if (Character.isDigit(postalCode.charAt(2))){
System.out.println("Postal code has digits where there are supposed to be letters");
break;
}
if (Character.isDigit(postalCode.charAt(4))){
System.out.println("Postal code has digits where there are supposed to be letters");
break;
}
if (Character.isLetter(postalCode.charAt(1))){
System.out.println("Postal code has letters where there are supposed to be digits");
break;
}
if (Character.isLetter(postalCode.charAt(3))){
System.out.println("Postal code has letters where there are supposed to be digits");
break;
}
if (Character.isLetter(postalCode.charAt(5))){
System.out.println("Postal code has letters where there are supposed to be digits");
break;
}
else {
System.out.println("Postal code is Valid!");
break;
}
}
postalCode = input.nextLine();
}
}
You have to update postalCode inside the loop.
It is also better to extract the different checks into separate methods instead of printing the same message multiple times.
public class Foo {
public static void main(String... args) {
Scanner scan = new Scanner(System.in);
while (true) {
System.out.print("Enter a postal code: ");
String postalCode = scan.nextLine();
if ("exit".equalsIgnoreCase(postalCode))
break;
if (postalCode.length() != 6)
System.err.println("Postal code wrong length, format L9L9L9");
else if (!hasDigits(postalCode))
System.err.println("Postal code has digits where there are supposed to be letters");
else if (!hasLetters(postalCode))
System.err.println("Postal code has letters where there are supposed to be digits");
else
System.out.println("Postal code is Valid!");
}
}
private static boolean hasDigits(String postalCode) {
for (int i = 1; i < postalCode.length(); i += 2)
if (!Character.isDigit(postalCode.charAt(i)))
return false;
return true;
}
private static boolean hasLetters(String postalCode) {
for (int i = 0; i < postalCode.length(); i += 2)
if (!Character.isLetter(postalCode.charAt(i)))
return false;
return true;
}
}
Can you explain what you did here?
I have added comments.
Swift - UITabBarController Initial View - when to force launch login screen?
I have programmatically set up a custom UITabBarController with three tabs, each of which is a UIViewController embedded in a UINavigationController.
I am not using storyboards. I set the custom tab bar controller as the root in AppDelegate:
window = UIWindow(frame: UIScreen.main.bounds)
window?.makeKeyAndVisible()
window?.rootViewController = CustomTabBarController()
The app runs fine and I get the three tabs and can move between them.
Sample of how the tabs are populated (in viewDidLoad of the custom tab bar controller):
let ordersVC = OrdersViewController() // where Orders is a UIViewController
ordersVC.title = "Orders"
ordersVC.view.backgroundColor = UIColor.white
let ordersVCNavi = UINavigationController(rootViewController: ordersVC)
ordersVCNavi.navigationBar.tintColor = UIColor.black
...
viewControllers = [homeVCNavi, inventoryVCNavi, ordersVCNavi]
Now I first need to check whether the user is logged in (using Firebase). I can easily check for already logged in (Firebase cached) or not logged in.
I do this logged-in check in AppDelegate.
My problem is when I need to force a login (jump to the login view controller). I cannot find a place that works.
- tried placing the call in the custom UITabBarController's viewDidLoad, and the code is ignored
- tried placing the call in viewDidLoad and viewWillAppear of the initial tab's view controller; also ignored
I can place a button on the initial tab and that button will indeed launch the login controller. So I can get to the login controller from a button press.
upon pressing a button I can execute this code and the login controller will show
let vc = LoginViewController()
self.navigationController?.pushViewController(vc, animated: false)
But if I know I need to force login and I try that same code snippet above in viewDidLoad() or viewWillAppear() of the initial tab's controller, or in the custom UITabBarController, then the push is ignored. Nothing happens.
What is best practice for forcing the login screen when the initial view is a UITabBarController?
Where should one put the navigation to the login controller to force login when not already logged in? I want to go to login so that the user cannot use the app while not logged in.
in didFinishLaunchingWithOptions
if loggedIn {
window?.rootViewController = CustomTabBarController()
}
else {
window?.rootViewController = LoginViewController()
}
after your login is successful
UIApplication.shared.keyWindow?.rootViewController = CustomTabBarController()
Works great. Then when the user specifically requests to log out ... I do UIApplication.shared.keyWindow?.rootViewController = LoginViewController()
Merge all data of a parent-child relationship from the same table in MySQL
I have a table like this
I expect to create a table like this
* The yellow rows are headers holding the sum of value and tire.
I tried UNION and JOIN but the result is still not valid.
Any suggestions for this problem? Thanks.
Perhaps GROUP BY ... WITH ROLLUP is what you want here:
SELECT id, SUM(value), SUM(tire)
FROM yourTable
GROUP BY id, value WITH ROLLUP
ORDER BY -id DESC, SUM(value) DESC;
Demo
I tried this and I get an error: Incorrect usage of CUBE/ROLLUP and ORDER BY
Run all open tests in Visual Studio
Is there an easy way to run all tests in open test classes in Visual Studio? I find that this is what I most often need to do.
Mostly I use ReSharper's or TestDriven.NET's test runners, and I can't find any easy way to do that in either. In ReSharper it would be nice to have an "Add all open tests to session" feature, but after much googling I can't find one, or any other easy way to do this.
Is there a tool, plugin, or what-not to do this?
If you suggest this in their bugtracker (http://youtrack.jetbrains.net/), I'll vote for it. This would be pretty useful.
Good idea, I'll see what I can do.
AFAIK, there isn't such a feature.
Alternatively you could
run all tests within a class (by setting the cursor to the class, outside of a method, and pressing Ctrl+R T)
run all tests within a namespace (by setting the cursor into the namespace, outside of a class, and pressing Ctrl+R T)
After you managed to run the interesting tests (however), you could
run these tests again by pressing Ctrl+R D
run failed tests again by pressing Ctrl+R F
Then you can create test lists. I don't do this; it's too time-consuming to keep them up to date.
You can also click the three-circles icon next to the test class, and "Append to session", and only run them once you've added all the tests you want. That way you're not waiting for all the tests to run every time you add another test class to the session. But agreed, I don't know of any way to automatically build a session from all tests in currently-open files.
Yes, basically that's what I do today. At some point you have so many test classes that it becomes tedious. Also, if I close my solution for the day and reopen it the next day, all my files are reopened, test classes and all, but the test session is not preserved.
Have you considered automating your tests to be run on a build server under something like CruiseControl?
Whether you have a continuous integration server or not, you still need to run the tests frequently as you code. CI is just a safety net -- it doesn't remove the need to run tests yourself.
Yes, we run CruiseControl.NET. But of course I still need to run the tests as often as possible while I work.
Transit Visa in London
I will be travelling from The Netherlands to India soon and have a flight change in London. However I also need to change airports (From London City Airport to Heathrow Airport). I hold an Indian Passport. Will I need a transit Visa ?
Yes.
You should apply for a Visitor in Transit visa if you arrive on a
flight and will pass through immigration control before you leave the
UK.
https://www.gov.uk/check-uk-visa/y/india/transit/yes
If the OP says he is traveling from the Netherlands, it is plausible that he has a residence permit there -- in which case he's eligible for Transit Without Visa in the UK.
Intuitive explanation for parallel axis theorem
The parallel axis theorem states that you can relate the moments of inertia defined with the center of mass as the origin to the moments of inertia defined with respect to some other origin. It is summarized in this equation:
$I=I_c+Mh^2$
For a cone, the center of mass is $1/4$ of the height from the base so we can define $I_{base}$ using the parallel axis theorem. However, since the parallel axis theorem does not care for the sign of $h$, the moments of inertia defined at $h/4$ above the center of mass are equivalent to the base.
The moment of inertia can be thought of as the opposition that a body exhibits to having its speed of rotation about an axis altered by the application of a torque. This means that turning the cone about $I_1$ at the base will require the same torque as it takes to rotate it about $I_1$ at the center of the cone. Intuitively, I would say that it is harder to rotate the cone about its base than it is to spin it around its center, so the fact that they have the same moments of inertia is confusing to me. Can someone please point out the flaw in my logic? Also, as a bonus question, is the parallel axis theorem still valid when you move the origin outside of the shape?
Consider the following conceptual steps
The MMOI of a point particle of mass $m$ that is orbiting at a distance $d$ is $$I = m\, d^2$$
The motion of a rigid body is decomposed into the motion of the center of mass, and the motion about the center of mass.
The MMOI of a rigid body about the center of mass $I_c$ is derived from the angular momentum of the body whilst rotating about the center of mass only.
The MMOI of a rigid body about any other point $I$ is the superposition of the MMOI due to rotation $I_c$ and due to the motion of the center of mass as if the body was a concentrated point mass $m d^2$ $$ I = I_c + m d^2$$
The difficulty you are having is that you are looking at the problem from the wrong perspective.
I cannot draw a three-dimensional picture on this screen so here is a section of the cone through its centre of mass, $CoM$, and its apex, $C$ with axes $X$ and $Y$ perpendicular to the plane.
From your diagram it looks as though your orange axis $I_1$, which I have labelled $Y$, has a larger moment of inertia than your green axis $I_1 (Y)$.
The coloured circles in my diagram show the motion of particles at positions $A, \, B,\, C$ and $Y$ about my two axes.
Note that the asymmetry between the motion of the particles which make up the cone about those two axes is much less pronounced than you visualised when looking at your diagram, and according to the parallel axis theorem the sum of all the contributions of the particles which make up the cone is the same irrespective of whether you use axis $X$ or axis $Y$.
I have not proved that they are equal but tried to show you, by hand waving, that they might be equal.
The axes can be outside the object as explained in the Wikipedia article Parallel axis theorem.
I mean is it acceptable to just say, “your intuition is wrong”...?
Trying to foster intuition
It gets harder to rotate something the further the masses in it are distributed from the axis of rotation. The axes closest to the mass-in-aggregate all pass through the center of mass, roughly speaking that is why we calculated a center of mass in the first place. The effect is linear in mass but quadratic in distance—what you see as a minimum at the center of mass is a delicate balance between these two, a motion towards more mass (moving the axis to the base of the cone) unfortunately shifts the minority of mass (the tip) further away from the rotation axis and the quadratic distance makes up for the linear mass improvement. On the other hand moving the other way, the mass of the base is greater and overwhelms bringing the tip closer. That's why it's the minimum.
We can formalize this idea that there has to be a minimal axis inside the figure, without the exact derivation that it goes through the CoM, simply by looking at the extreme case: an axis on the boundary, any boundary, such that the entire figure is to one side of a plane through the axis. The question is: if we move the axis normal to the plane, can we prove that it is always easier to move the axis into the figure? If so, there must be an easiest axis somewhere in the figure. But the rest is just basic calculus; the kinetic energy expression must be differentiable with respect to this axis motion, since it just involves some additions and subtractions and multiplications and not much else. And we know that if we move the axis away from the figure it gets harder to rotate, so moving it into the figure must make it easier.
The actual derivation is super easy
Every physicist should just know this derivation, it is a very quick argument about vectors.
That it's the center of mass, well, that requires the full derivation. Approximate with a set of point masses, WLOG the axis is the $\hat z$-axis, use an angular velocity $\omega$ and cylindrical coordinates... the velocity is $v_i=\sqrt{x_i^2 + y_i^2} ~\omega$... Total kinetic energy $$K=\frac12 \sum_i m_i \omega^2 (x_i^2 +y_i^2),$$ and the question is, if we add some infinitesimal value $(\mathrm dx, \mathrm d y)$ to all of these masses uniformly how does it impact the kinetic energy, which is just
$$K+\mathrm d K= \frac12 \omega^2 \sum_i m_i \big((x_i+\mathrm d x)^2 +(y_i+\mathrm d y)^2),\\
\mathrm d K/\omega^2=\mathrm d x~\sum_i m_i x_i +\mathrm d y~\sum_i m_i y_i.$$
The minimum kinetic energy is therefore found when both $\sum_i m_i x_i = \sum_i m_i y_i = 0$ which is precisely the equation for saying that the axis intersects the center of mass. So you can kind of just “see it” from the kinetic energy expression.
A slight elaboration of this proves the parallel axis theorem, assume $x_i, y_i, z_i$ are relative to a center of mass at a macroscopic position $\delta x, \delta y, \delta z$... Now you cannot neglect the $\delta x^2$ term but of course you don't want to—you have chosen $x_i, y_i$ to eliminate the cross term above and so you just have,
$$
K +\delta K = \frac12 \sum_i m_i \omega^2 (x_i^2 +y_i^2) + \frac12 M \omega^2(\delta x^2+\delta y^2),$$
dividing by $\frac12 \omega^2$ we find easily $\delta I = M(\delta x^2+\delta y^2).$
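The derivation above is easy to check numerically. Below is a quick sketch in plain JavaScript (the particle masses and positions are arbitrary, made-up values): it computes the moment of inertia of a few point masses about an axis through the center of mass and about a shifted axis, and confirms $I = I_c + M d^2$.

```javascript
// Numerical check of the parallel axis theorem with point masses.
// The masses/positions are arbitrary; any rigid arrangement works.
const particles = [
  { m: 1.0, x: 0.0, y: 0.0 },
  { m: 2.0, x: 1.0, y: 0.5 },
  { m: 0.5, x: -0.5, y: 2.0 },
];

const M = particles.reduce((s, p) => s + p.m, 0);
const cx = particles.reduce((s, p) => s + p.m * p.x, 0) / M;
const cy = particles.reduce((s, p) => s + p.m * p.y, 0) / M;

// Moment of inertia about a z-axis through (ax, ay).
function inertiaAbout(ax, ay) {
  return particles.reduce(
    (s, p) => s + p.m * ((p.x - ax) ** 2 + (p.y - ay) ** 2), 0);
}

const Ic = inertiaAbout(cx, cy);          // axis through the CoM
const dx = 0.7, dy = -1.3;                // arbitrary shift
const I = inertiaAbout(cx + dx, cy + dy); // shifted axis
const predicted = Ic + M * (dx * dx + dy * dy);

console.log(Math.abs(I - predicted) < 1e-9); // true: I = Ic + M d^2
```

The identity holds for any shift, including one that places the axis entirely outside the body, which matches the bonus answer above: the parallel axis theorem is valid for axes outside the shape.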
jqGrid Enter key before save handler
I am using jqGrid 3.8.1 with inline edit mode. Users are currently allowed to hit the Enter key in order to save a row. I would like to let them continue to do this, but I need to do some data validation (including a call to the server) before they can be allowed to save the row. Is this possible? I don't see anything like a "beforeSaveRow" function that gets called in this case.
Server-side validation is part of saving the changes. The server gets the modified data and can send back an HTTP response with an error HTTP code (some value greater than or equal to 400). One can include the description of the error in the body of the HTTP response. jqGrid will display the error message and the user can continue editing. One can additionally use the errorfunc callback of inline editing to decode the server response holding the error and convert it into another HTML fragment.
Hey Oleg, so basically you're saying that the validation must be done on the server side? jqGrid does not provide any "on before row save" event?
@user1334007: You asked about server side validation. The client side validation can be implemented only on the column basis (validation of separate new value for the column) by editrules inclusive of custom validation.
Sorry, I just meant that part of the client-side validation process may involve making a call to the server to get some extra data needed for the validation. So you'd make a call to the server to do the validation, then if that passed you'd make another call to the server to actually save the row.
Yeah, I'm not interested in column validation. So if you can't do row validation, I guess I might have to do all the validation on the server side.
@user1334007: Sorry, but exactly because of the formulation I wrote my answer. You should consider exactly what you suggest. You suggest that your client-side code make a separate additional Ajax request and send the data which will be sent again immediately afterwards in the save request. I see no sense in this behavior, because there could still be validation errors (SQL constraints like a UNIQUE key, a trigger, a wrong character, a wrong date and so on) during the save itself. I see no sense in two calls which both do practically the same thing, where the user will see the same error message.
Yeah making it all happen on the save call is an option. Given the current architecture of the system (which I realize you don't have any knowledge of) it's actually a bit simpler to make two calls but since that's not really an option, I'll do all the validation on the server end. Thanks for the help.
@user1334007: You are welcome! If you think about the architecture of the system, it's important to remark that two asynchronous cascading calls to the server require an entirely different structure of callbacks. The usage of synchronous calls (at least the usage of the async: false option in the first validation call) makes the user interface slow. Because of that, no additional server-side validations are implemented in jqGrid.
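For reference, the two-call flow debated above would look like this in outline. The helpers validateOnServer and saveRow are hypothetical stand-ins for the actual Ajax calls (not jqGrid APIs), and modern async syntax is used for brevity; as noted above, the save can still fail server-side, so its error handling must stay regardless.

```javascript
// Hypothetical sketch: validate on the server first, save only if it passes.
// validateOnServer/saveRow stand in for real Ajax calls (e.g. $.ajax wrappers).
async function validateThenSave(rowData, validateOnServer, saveRow) {
  const result = await validateOnServer(rowData);
  if (!result.ok) {
    // Surface the server's message and keep the row in edit mode.
    throw new Error(result.message);
  }
  // The save itself can still be rejected (constraints, triggers, ...).
  return saveRow(rowData);
}
```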
FMX: TCanvas.DrawLine issue on Android
Lines with thickness > 1 appear to be drawn differently on Windows and Android. I'm using Delphi 11.0. Create a blank multi-platform application and add the following in the FormPaint event.
procedure TMainForm.FormPaint(Sender: TObject; Canvas: TCanvas;
const ARect: TRectF);
begin
Canvas.Stroke.Thickness := 20;
Canvas.Stroke.Cap := TStrokeCap.Round;
Canvas.Stroke.Color := TAlphaColorRec.Red;
Canvas.Stroke.Kind := TBrushKind.Solid;
Canvas.DrawLine(PointF(20,20), PointF(70,70), 1);
Canvas.DrawLine(PointF(70,70), PointF(130,70), 1);
end;
This results in the following. The same happens when drawing on a TImage.
It seems that on Windows the endpoints of the line are at the centers of the line caps, whereas on Android they're at the extremes of the line caps. I'm testing on a Huawei P10 running Android 8.0.0. I'm not currently able to test on a more recent Android version. A Google search doesn't seem to give any results on this issue. I'd appreciate it if anyone has any info on this issue and what could be done about it. If someone could test this on a more recent Android version, that would also be appreciated. I could of course add special code for Android to extend the line endpoints by half the line thickness, but I'd like to avoid that if possible.
The Android documentation seems to imply that it shouldn't behave like this.
https://developer.android.com/reference/android/graphics/Paint.Cap
The main difference between Windows and Android is that they use different implementations of TCanvas. Windows uses TCanvasD2D, whereas Android uses TCanvasGpu. Looking into the Delphi code, I wonder if the following code in FMX.StrokeBuilder.pas is causing the issue. This code gets run from FMX.Canvas.GPU.pas, even with TStrokeDash.Solid. I can't work out why it would offset the ends like that.
procedure TStrokeBuilder.InsertDash(SrcPos, DestPos: TPointF; const DashDirVec, ThickPerp: TPointF);
var
InitIndex, DivIndex, Divisions: Integer;
SinValue, CosValue: Single;
RoundShift: TPointF;
begin
if FBrush.Cap = TStrokeCap.Round then
begin
RoundShift := DashDirVec * FHalfThickness;
SrcPos := SrcPos + RoundShift;
DestPos := DestPos - RoundShift;
end;
I can confirm that TCanvasGpu is the issue by setting FMX.Types.GlobalUseGPUCanvas to True before Application.Initialize. Then TCanvasGpu is used even on Windows instead of TCanvasD2D and I get the same issue that I see on Android.
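The workaround mentioned above (extending the line endpoints by half the thickness when TCanvasGpu is in use) is simple vector math. It is sketched here in plain JavaScript since the geometry is language-independent; a Delphi version would do the same with TPointF values.

```javascript
// Extend a segment's endpoints outward by half the stroke thickness,
// compensating for TCanvasGpu pulling the round caps inside the endpoints.
function extendForRoundCaps(p1, p2, thickness) {
  const dx = p2.x - p1.x, dy = p2.y - p1.y;
  const len = Math.hypot(dx, dy);
  const h = thickness / 2 / len; // half-thickness as a fraction of length
  return [
    { x: p1.x - dx * h, y: p1.y - dy * h },
    { x: p2.x + dx * h, y: p2.y + dy * h },
  ];
}

// For the horizontal segment (70,70)-(130,70) with thickness 20:
const [a, b] = extendForRoundCaps({ x: 70, y: 70 }, { x: 130, y: 70 }, 20);
// a ≈ { x: 60, y: 70 }, b ≈ { x: 140, y: 70 }
```

Applying this only when the GPU canvas is active keeps Windows rendering untouched, at the cost of a platform check.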
Did you try using SKIA4Delphi?
We use FMXNativeDraw on Android because there are too many problems with the draw quality in the default FMX solution.
Thanks. I hadn't heard of SKIA4Delphi. It looks like that may require me to rewrite my graphics code though. I have had a look at FMX.Graphics.Native already, so I may try that.
@XylemFlow Actually no, SKIA4Delphi doesn't require rewriting your graphics code. Skia provides a new, common canvas for all supported platforms. It also has many more benefits - it's really worth giving it a try!
How to draw a polygon in Leaflet and add it to a GeoJSON file
I am working on a small project with Leaflet. The goal of the project is to show a GeoJSON file and update it dynamically from the website. I have imported the GeoJSON file into the map. Here is the code I used:
var geojson;
geojson = L.geoJson(lotData, {
style: style,
onEachFeature: onEachFeature
});
Now I want to update the GeoJSON file: I want to add a polygon to the file dynamically and to delete another one from the file. The problem is that I have no idea how to do it.
Is this being served from a server? And you want to update the geojson file on the server from user interaction at the browser? You're going to have to start caring about security and concurrency and stuff. This isn't a small project any more... You might want to look at Django and its spatial GeoDjango extensions on the server side.
This is a very broad question and difficult to answer as the solution will necessarily be complex. In some ways you are describing browser-based GIS.
To get started, perhaps check out the following components:
Leaflet Draw. A great set of easy to use and extendable tools for drawing and editing polygons on screen through a browser.
PostGIS. Using Leaflet Draw callbacks like draw:created, you can post polygons to a server running PostGIS; add polygons, intersect, delete, etc., and return geojson.
Turf.js. It may also be possible to do the adding/deleting/saving of polygons in-browser with a library like turf.js. Performance will depend on how large your polygons and geojson files are.
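Whichever server-side stack you pick, the in-memory edit itself is plain object manipulation on the GeoJSON FeatureCollection. A minimal sketch (the property names follow the GeoJSON structure, but the id scheme and helper names here are made up for illustration):

```javascript
// Add a polygon feature to a FeatureCollection and delete another one by id.
const collection = {
  type: 'FeatureCollection',
  features: [
    { type: 'Feature', properties: { id: 'lot-1' },
      geometry: { type: 'Polygon', coordinates: [[[0, 0], [0, 1], [1, 1], [0, 0]]] } },
  ],
};

function addPolygon(fc, id, coordinates) {
  fc.features.push({
    type: 'Feature',
    properties: { id },
    geometry: { type: 'Polygon', coordinates },
  });
}

function removePolygon(fc, id) {
  fc.features = fc.features.filter((f) => f.properties.id !== id);
}

addPolygon(collection, 'lot-2', [[[2, 2], [2, 3], [3, 3], [2, 2]]]);
removePolygon(collection, 'lot-1');
// collection.features now holds only the 'lot-2' polygon
```

After editing the object, refresh the map layer (Leaflet's L.GeoJSON exposes clearLayers() and addData() for this) and send the updated collection to your server to overwrite the file.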
Add TextSwitcher Animation in RecyclerView using MVVM
I am building a very simple Shopping List app using Room and MVVM.
Objective: Animate the quantity TextView in the RecyclerView according to the action: if the quantity increased, slide down; if it decreased, slide up.
Problem: I do not know how to handle the click event of a button in the RecyclerView item. Right now the button calls the interface's method, which is handled by the Activity, which calls the ViewModel's function, which increases the value in the database. Based on that, the RecyclerView gets updated because it is listening to the changes.
Possible solution I can think of: Add a Channel of events in the ViewModel, and when the add or remove method is called, fire a related event. Observe these events in MainActivity, and from there call the appropriate adapter function which properly handles the addition or subtraction of the quantity. But don't you think this will generate a ton of boilerplate code?
It's more of an architectural problem than a coding problem.
Please help me out with this. Here is the code of my app: https://github.com/waqas-334/MVVM-TODO-Android-app/tree/bugs
Thank you in advance.
Handle the animation inside the adapter and pass the event to your activity or fragment to handle business logic after event is over
Thank you, but after I do the animation in the adapter and then update the values in the database, the list will be updated again because it's listening to changes from the database. That would cause two animations.
Create a static MutableLiveData, shouldUpdate.
Observe this boolean.
You can trigger it from anywhere in your application, using setValue or postValue.
How do I extract quoted portions from a text in perl?
As an example,from a text like this:
By 1984, Dylan was distancing himself from the "born again" label. He told Kurt Loder of Rolling Stone magazine: "I've never said I'm born again. That's just a media term. I don't think I've been an agnostic. I've always thought there's a superior power, that this is not the real world and that there's a world to come."
I want to extract:
"born again"
"I've never said I'm born again. That's just a media term. I don't think I've been an agnostic. I've always thought there's a superior power, that this is not the real world and that there's a world to come."
There is obviously no fixed amount of quotes that will be in the text itself, so the solution needs to extract all quoted portions.
I was trying with Text::Balanced like this:
extract_delimited($text, "\"");
inside a loop, but I can't get it to even extract "born again" - which would be a good start.
Is Text::Balanced the right tool? what am I getting wrong?
The documentation clearly states Extract the initial substring (emphasis mine).
I would suggest Text::Balanced, yes. But you need to read the docs closely, it's not the simplest tool to use correctly. A few examples that I can readily find: this and this and this. There's far more...
If you don't need to deal with quotes within quotes and stuff like that, Text::Balanced may be overkill.
Assuming that a " character either at the start of the string or preceded by a space will open a quote, and that the next " either at the end of the string or followed by a non-word character will end the quote, then /(?:\s|\A)(\".+?\")(?:\W|\z)/sm should capture a quoted string, including the quotes.
Add in the /g modifier to capture all the quotes, and you get:
use strict;
use warnings;
use Data::Dumper;
my $data = <<'DATA';
By 1984, Dylan was distancing himself from the "born again" label. He told
Kurt Loder of Rolling Stone magazine: "I've never said I'm born again.
That's just a media term. I don't think I've been an agnostic. I've always
thought there's a superior power, that this is not the real world and that
there's a world to come."
DATA
my @quoted_parts = ( $data =~ /(?:\s|\A)(\".+?\")(?:\W|\z)/gsm );
print Dumper \@quoted_parts;
Text::Balanced is useful when you need to deal with, for example, different brackets which may be nested like "( [ ( ) ] )" and you need to make sure that the correct ending bracket gets matched with the correct starting bracket. It's useful when you want your quotes to be able to contain escaped quote characters. That sort of thing. It's really for dealing with parsing formal languages along the lines of XML, JSON, programming languages, config files, etc. Not intended for parsing natural language.
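The same idea ports to other regex engines. Here is a quick check of an equivalent pattern in JavaScript, with the anchors adapted: without the m flag, ^ and $ play the roles of \A and \z, and the s flag stands in for Perl's /s (the sample text is shortened for illustration).

```javascript
// Equivalent of Perl's /(?:\s|\A)(\".+?\")(?:\W|\z)/gs in JavaScript.
const text = 'from the "born again" label. He said: ' +
  '"That is just a media term. There is a world to come."';

const pattern = /(?:\s|^)(".+?")(?:\W|$)/gs;
const quotes = [...text.matchAll(pattern)].map((m) => m[1]);
// quotes[0] === '"born again"'
// quotes[1] === '"That is just a media term. There is a world to come."'
```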
How to trigger a hyperlink using a keystroke in a Swing JEditorPane
I'm trying to fire hyperlinks in a JEditorPane using the "Enter" keystroke. So that the hyperlink (if any) under the caret will fire rather than having to click with the mouse.
Any help would be appreciated.
I haven't really even got close on this one. I have scoured the Swing docs to see if I can get a list of links from the editor, so that maybe I could go through each link and see if it is under the caret, but I can't find such a list.
First of all, the HyperlinkEvent is only fired on a non-editable JEditorPane, so it will be difficult for users to know when the caret is over a link.
But if you do want to do this, then you should be using Key Bindings (not a KeyListener) to bind an Action to the ENTER KeyStroke.
One way to do this is to simulate a mouse click by dispatching a MouseEvent to the editor pane when the Enter key is pressed. Something like this:
import java.awt.Rectangle;
import java.awt.event.ActionEvent;
import java.awt.event.InputEvent;
import java.awt.event.MouseEvent;
import javax.swing.text.AttributeSet;
import javax.swing.text.BadLocationException;
import javax.swing.text.Element;
import javax.swing.text.JTextComponent;
import javax.swing.text.TextAction;
import javax.swing.text.html.HTML;
import javax.swing.text.html.HTMLDocument;
class HyperlinkAction extends TextAction
{
public HyperlinkAction()
{
super("Hyperlink");
}
public void actionPerformed(ActionEvent ae)
{
JTextComponent component = getFocusedComponent();
HTMLDocument doc = (HTMLDocument)component.getDocument();
int position = component.getCaretPosition();
Element e = doc.getCharacterElement( position );
AttributeSet as = e.getAttributes();
AttributeSet anchor = (AttributeSet)as.getAttribute(HTML.Tag.A);
if (anchor != null)
{
try
{
Rectangle r = component.modelToView(position);
MouseEvent me = new MouseEvent(
component,
MouseEvent.MOUSE_CLICKED,
System.currentTimeMillis(),
InputEvent.BUTTON1_MASK,
r.x,
r.y,
1,
false);
component.dispatchEvent(me);
}
catch(BadLocationException ble) {}
}
}
}
React hooks: useState not setting correctly
I created a setter function in a parent component and pass it down to its child:
Portion in question:
export default function Day({ dayInfo, props }) {
var [timeOfDay, setTimeOfDay] = useState('');
function TimeOfDaySetter(index) {
console.log('index ', index);
if (index === 0) {
setTimeOfDay((timeOfDay) => (timeOfDay = 'AM'));
return <Header as="h1">{timeOfDay}</Header>;
} else if (index === 12) {
setTimeOfDay((timeOfDay) => (timeOfDay = 'PM'));
return <Header as="h1">{timeOfDay}</Header>;
}
}
That function is nested in a map function in the child:
{Array.from(Array(amountOfRows)).map((row, index) => {
return (
<React.Fragment key={index}>
<Table.Row>
<Table.Cell rowSpan="2" style={tableStyle}>
{TimeOfDaySetter(index)}
</Table.Cell>
But this is skipping the first condition.
Could anyone explain why this is happening?
Complete Parent and component:
import { Header, Table, TextArea } from 'semantic-ui-react';
import React, { useState, useEffect } from 'react';
export default function Day({ dayInfo, props }) {
var [dayInfoInChild, setDayInfoInChild] = useState([]);
var [timeOfDay, setTimeOfDay] = useState('');
function setExactHourHelper(index) {
return index === 0 ? 12 : '' || index > 12 ? index - 12 : index;
}
function TimeOfDaySetter(index) {
console.log('index ', index);
if (index === 0) {
setTimeOfDay((timeOfDay) => (timeOfDay = 'AM'));
return <Header as="h1">{timeOfDay}</Header>;
} else if (index === 12) {
setTimeOfDay((timeOfDay) => (timeOfDay = 'PM'));
return <Header as="h1">{timeOfDay}</Header>;
}
}
useEffect(() => {
if (dayInfo !== null) {
var modifiedDayInfo = dayInfo
.split(' ')
.map((item) => {
if (item.indexOf(',')) return item.replace(/,/g, '');
})
.join('-');
if (localStorage.getItem(modifiedDayInfo)) {
// setDayInfoInChild(function (dayInfoInChild) {
// return [...setDayInfoInChild, modifiedDayInfo];
// });
console.log(modifiedDayInfo);
} else {
localStorage.setItem(modifiedDayInfo, JSON.stringify({}));
}
}
}, [dayInfo, timeOfDay, timeOfDay]);
function TableLayout({ TimeOfDaySetter }) {
var [amountOfRows, setAmountOfRows] = useState(24);
var [textValue, setTextValue] = useState('');
function handleChange(event) {
setDayInfoInChild(event.target.value);
}
const tableStyle = {
borderLeft: 0,
borderRight: 0,
};
const colorOveride = {
color: '#C1BDBD',
};
return (
<>
<h1>{dayInfo}</h1>
<Table celled structured>
<Table.Body>
{Array.from(Array(amountOfRows)).map((row, index) => {
return (
<React.Fragment key={index}>
<Table.Row>
<Table.Cell rowSpan="2" style={tableStyle}>
{TimeOfDaySetter(index)}
</Table.Cell>
<Table.Cell style={tableStyle}>
{
<strong>
{setExactHourHelper(index)}
:00
</strong>
}
<TextArea
rows={2}
name="textarea"
value={textValue}
onChange={handleChange}
placeholder="Tell us more"
/>
</Table.Cell>
</Table.Row>
<Table.Row>
<Table.Cell style={(tableStyle, colorOveride)}>
{
<strong>
{setExactHourHelper(index)}
:30
</strong>
}
<TextArea rows={2} placeholder="Tell us more" />
</Table.Cell>
</Table.Row>
</React.Fragment>
);
})}
</Table.Body>
</Table>
</>
);
}
{
if (dayInfo === null) {
return <p>Loading...</p>;
}
}
return (
<React.Fragment>
<TableLayout
dayInfo={dayInfo}
timeOfDay={timeOfDay}
TimeOfDaySetter={TimeOfDaySetter}
/>
</React.Fragment>
);
}
The useState setter isn't supposed to be given an arrow function here; just call setTimeOfDay('AM')
There are many minor mistakes in this code, but the biggest one is that you assume that setState is synchronous (in TimeOfDaySetter), which is not true.
@DennisVash Care to expound on them? I'd love the feedback!
It should simply be setTimeOfDay("AM"), and for the second conditional block setTimeOfDay("PM"). You are making it overly complicated by passing in a function that returns a string. While you could pass in a function that returns a string for setTimeOfDay to use, your function has a fundamental problem in that you are manually changing the state as well.
Notice in your function you have "timeOfDay = 'AM'". This completely defeats the purpose of setState and useState, since you are manipulating the state directly.
The traditional setState does have a use case where it can accept a function (and the useState setter accepts a functional updater as well):
// Correct
this.setState(function(state, props) {
return {
counter: state.counter + props.increment
};
});
And this function would not be useful for you right now anyway.
Also, in your useEffect: even though you have it tied to [dayInfo, timeOfDay, timeOfDay], it is never actually modified, at least in the code that you have shown. So your Day component is only getting the change of dayInfo during the mount and not on a re-render.
Also, unless I am missing something, because your state is managed in your Day component and not in each individual React fragment, when you are calling
{Array.from(Array(amountOfRows)).map((row, index) => {
return (
<React.Fragment key={index}>
<Table.Row>
<Table.Cell rowSpan="2" style={tableStyle}>
{TimeOfDaySetter(index)}
</Table.Cell>
You are simply changing the state of your Day component with each iteration of your map, which will cause your React fragment to re-render. And so only the last condition would be honored, since each fragment would re-render. It would be better to pass down props to a new child component instead of setting state on each iteration, to make your state complex enough to manage each fragment, or to manage your state locally within each component.
Thanks Chris, originally TimeOfDaySetter was just a function that reacted to the map function's index input: function TimeOfDay(index) { if (index === 0) { return <Header as="h1">AM</Header>; } else if (index === 12) { // setTimeOfDay('PM'); return <Header as="h1">PM</Header>; } }
Anyway, now I see that I should just use a simple function.
Happy to help. I am not sure if you will run into any other issues, but I think removing that setting of state with each fragment produced will help.
And the dependency array in the useEffect should only be useEffect(() => {.....}, [dayInfo]); Your help is much appreciated!
As I see you pass arrow function as setTimeOfDay parameter:
setTimeOfDay((timeOfDay) => (timeOfDay = 'AM'));
Instead you should invoke setTimeOfDay as a usual setter, i.e.:
setTimeOfDay('AM');
Thanks for the feedback, I appreciate it but that didn't work.
Although this code is uncommon, (timeOfDay = 'AM') returns 'AM', which is what the setter expects; also, it doesn't solve the OP's problem.
Hello Dennis, that is true that it returns a string, but notice that the OP manually manipulates state by calling timeOfDay = "AM". However, I do believe the real source of the problem is in the map method and the calling of setState each time on the Day component to render smaller fragments. It will cause a re-render each time for each fragment, and the last fragment will be the one that is honored.
Error when connecting to XML file office add-in c#
I am working on a project with a Microsoft Office Excel add-in in Visual Studio 2015 with C#.
The Excel add-in is supposed to have a button that, when clicked, will bring in all the data from a remote XML file on an internal server.
However, I am getting this error while I am connecting:
server response contain error
I used this code :
private const string utilityUrl = "http://bitreporting/ReportServer/Pages/ReportViewer.aspx?%2fCockpits%2fOrgaCockpit%2fCockpit&rs%3aCommand=Render&rs:format=xml";
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.Load(utilityUrl);
MessageBox.Show(xmlDoc.DocumentElement.ChildNodes[1] + "");
When I download the XML manually and access it from my computer then everything is okay.
Can anybody tell me what is the problem with this code?
This is just a guess, but there's a good chance you've got some characters in that string that aren't escaped, causing the URL to be wrong. Try adding @ in front of your string: @"http://...xml";
Well, actually I already tried that. The thing is that the URL is correct, and when I access it using the browser it responds with the XML file. It is kind of strange, but I have no idea what the problem is, and it also does not give me many details.
Doesn't it require ".com" ? Can be wrong just a wonder.
One more detail: it gave me error 401, which means unauthorized. But I can access this link using all the browsers from my laptop.
It is an internal server, only for our company.
var doc = XDocument.Load("path"); try this. @SamySammour
I found the problem: it was in the authorization.
Because the server is internal and prevents external requests, I have to use WebClient.
WebClient wc = new WebClient();
wc.Proxy = null;
wc.UseDefaultCredentials = true;
string xml = wc.DownloadString(url);
XDocument doc = XDocument.Parse(xml);
MessageBox.Show(doc.FirstNode + "");
Is media-7.x-1.x vulnerable to SA-CONTRIB-2018-020
The Drupal Media module version 7.x-2.x contains a remote code execution vulnerability which was patched with:
https://cgit.drupalcode.org/media/commit/?id=1cd77ffa9c2cf96d80b76d47318179a8a82f0d46
1.x and 2.x seem to be very different beasts; I can't find any code corresponding to media_ajax_upload() in 1.x, but I'm still rather unsure whether this version is vulnerable or not.
Someone has asked the question in the module's issue queue: https://www.drupal.org/project/media/issues/2966112
But version 7.x-1.x seems to be no longer supported, so it could be possible, but I think the maintainers will not patch that version now.
According to Joseph Olstad, 1.x is not vulnerable to the problem found in the 2.x branch. Source:
https://www.drupal.org/project/media/issues/2966112#comment-12592407
There is a previously unreleased fix for another security issue though. This was integrated in version 1.7 of this module.
Anti-commutator of angular momentum operators for arbitrary spin
I know the commutator of angular momentum operators are
$$
[J_i,J_j]=\mathrm i\hbar \varepsilon_{ijk}J_k.
$$
For spin-1/2 particles, $J_i=\frac\hbar2\sigma_i$ where $\sigma_i$ are Pauli matrices, and I can compute $\{J_i,J_j\}$ from the algebra of $\sigma_i$'s,
$$
[\sigma_i,\sigma_j]= 2\mathrm i\varepsilon_{ijk}\sigma_{k},
\quad
\{\sigma_i,\sigma_j\}=2\delta_{ij}1 \!\!1_2.
$$
When it comes to higher dimensions, these relations break down.
I want to know what the corresponding relations are in higher dimensions.
This answer shows the case for a fundamental representation of $SU(N)$,
$$
\{t^{A},t^{B}\} = \frac{2N}{d}\delta^{AB}\cdot 1_{d} + d_{ABC}t^{C},\tag{1}
$$
where
$$
\mathrm{Tr}[t^{A}t^{B}] = N\delta_{AB}\quad
d_{ABC} = \frac{1}{N}\mathrm{Tr}[\{t^{A},t^{B}\}t^{C}].\tag{2}
$$
If I substitute Eq. (1) into Eq. (2), I get
$$
d_{ABC}=\frac1N \frac{2N}{d}\frac1N\mathrm{Tr}[t^At^B] \cdot \mathrm{Tr}[t^C]
+\frac1N d_{ABD}\mathrm{Tr}[t^Dt^C]
$$
where $\mathrm{Tr}[t^C]=0$. The above equation doesn't tell me how to compute $d_{ABC}$,
since $\mathrm {Tr}[t^D t^C]=N\delta_{DC}$.
What I'm dealing with is actually $SU(2)$, not for arbitrary $N$.
The d coefficient of su(2) always vanishes. Your right-hand side is not in the Lie algebra but, instead, in the universal enveloping algebra, addressed in answers to the linked questions. There is no good formal theory for it.
For the adjoint (3d irrep) of su(2), it is straightforward to compute all anticommutators explicitly,
$$
\{J^a,J^b \}_{mk}= -\epsilon_{amn} \epsilon_{bnk} -\epsilon_{bmn}\epsilon_{ank}= 2\delta_{ab}\delta_{mk} -(\delta_{am}\delta_{bk}+\delta_{bm}\delta_{ak}),
$$
in a variant normalization, $(J^a)_{mn}=i\epsilon_{amn}$, which is the least of your problems.
You see by inspection that, for $a\neq b$, the right-hand side is a symmetric traceless matrix, so not a linear combination of the three generators! Your equation (1) fails.
Keep reading other answers in that question. su(2) irreps are real/pseudoreal, so anomaly coefficients vanish, as your QFT text must have emphasized.
But I am unaware of a generic shortcut expression for M.
Experiment with higher spin representations, all explicit. Can the r.h.s. of $\{S_x,S_z\}$, traceless, be represented by a linear combination of the three generators? (No! You may readily compute it for arbitrary spin s, given item 4 of this link, and confirm it is symmetric with vanishing diagonal elements, but it is not $\propto S_x$, the only generator with these properties!)
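The suggested experiment is easy to run numerically. Here is a plain-Python sketch (no libraries; hbar set to 1; the standard spin-1 matrices; the matrix helpers are mine) checking that $\{S_x,S_z\}$ for spin 1 is symmetric with a vanishing diagonal, yet not proportional to $S_x$:

```python
import math

s = 1 / math.sqrt(2)

# Standard spin-1 matrices in the |m> basis (hbar = 1)
Jx = [[0, s, 0], [s, 0, s], [0, s, 0]]
Jy = [[0, -1j * s, 0], [1j * s, 0, -1j * s], [0, 1j * s, 0]]
Jz = [[1, 0, 0], [0, 0, 0], [0, 0, -1]]

def mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

# Sanity check of the algebra: [Jx, Jy] = i Jz
C = sub(mul(Jx, Jy), mul(Jy, Jx))
assert all(abs(C[i][j] - 1j * Jz[i][j]) < 1e-12 for i in range(3) for j in range(3))

# The anticommutator {Sx, Sz}
M = add(mul(Jx, Jz), mul(Jz, Jx))

# Symmetric with vanishing diagonal elements, as claimed ...
assert all(abs(M[i][j] - M[j][i]) < 1e-12 for i in range(3) for j in range(3))
assert all(abs(M[i][i]) < 1e-12 for i in range(3))

# ... but NOT proportional to Sx: the off-diagonal sign patterns differ.
assert M[0][1].real * M[1][2].real < 0 and Jx[0][1] * Jx[1][2] > 0
```

So the anticommutator lands outside the span of the generators, in agreement with the answer above.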
Linked.
Spring Batch JdbcPagingItemReader seems not EXECUTING ALL THE ITEMS
I'm working on an app that extract records from an Oracle database and then are exported as one single tabulated file.
However, when I attempt to read from the DB using JdbcPagingItemReader and write to a file, I only get the number of records specified in pageSize. So if the pageSize is 10, then I get a file with 10 lines and the rest of the records seem to be ignored. So far, I haven't been able to find out what is really going on, and any help would be most welcome.
Here is the JdbcPagingItemReader config:
<bean id="databaseItemReader"
class="org.springframework.batch.item.database.JdbcPagingItemReader" >
<property name="dataSource" ref="dataSourceTest" />
<property name="queryProvider">
<bean
class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSourceTest" />
<property name="selectClause" value="SELECT *" />
<property name="fromClause" value="FROM *****" />
<property name="whereClause" value="where snapshot_id=:name" />
<property name="sortKey" value="snapshot_id" />
</bean>
</property>
<property name="parameterValues">
<map>
<entry key="name" value="18596" />
</map>
</property>
<property name="pageSize" value="100000" />
<property name="rowMapper">
<bean class="com.mkyong.ViewRowMapper" />
</property>
</bean>
<bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
<!-- write to this csv file -->
<property name="resource" value="file:cvs/report.csv" />
<property name="shouldDeleteIfExists" value="true" />
<property name="lineAggregator">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
<property name="delimiter" value=";" />
<property name="fieldExtractor">
<bean class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor">
<property name="names" value="ID" />
</bean>
</property>
</bean>
</property>
</bean>
<job id="testJob" xmlns="http://www.springframework.org/schema/batch">
<step id="step1">
<tasklet>
<chunk reader="databaseItemReader" writer="itemWriter" commit-interval="1" />
</tasklet>
</step>
</job>
thanks
Possible duplicate of http://stackoverflow.com/questions/33345157/spring-batch-jdbcpagingitemreader-seems-not-paginating?rq=1
No, it's not the same; there they changed the use of the reader and finally used JdbcCursorItemReader. In my case I absolutely want to use the JdbcPagingItemReader for performance reasons.
Try adding the following property to your datasource bean: ""
And if possible, keep both pageSize and commit-interval the same. If nothing works, then I would suggest using JdbcCursorItemReader, which reads one record at a time.
It was the scope="step" that was missing; it should be:
<bean id="databaseItemReader"
class="org.springframework.batch.item.database.JdbcPagingItemReader" scope="step">
See my comment below for java config
Your settings seem incorrect: the whereClause column and the sort key cannot be the same, because pageSize works hand in hand with your sorting column.
Check how your data (in the corresponding table) looks.
In Spring Batch, as per your configuration, Spring will create and execute queries as given below.
The first query executed, with pagesize = 10, is like the following:
SELECT TOP 10 * FROM tableName WHERE snapshot_id=18596
The second and remaining queries executed depend on your sort key:
SELECT * FROM tableName WHERE snapshot_id=18596 AND snapshot_id > 10
SELECT * FROM tableName WHERE snapshot_id=18596 AND snapshot_id > 20
and so on. Try running these queries in the database; doesn't that look weird? :-)
If you don't need where clause, remove it.
And if possible keep pageSize and commit-interval the same, because that's how you decide to process and persist. But of course that depends on your design, so you decide.
Adding @StepScope made my item reader take off with paging capability.
@Bean
@StepScope
ItemReader<Account> ItemReader(@Value("#{jobParameters[id]}") String id) {
JdbcPagingItemReader<Account> databaseReader = new JdbcPagingItemReader<>();
databaseReader.setDataSource(dataSource);
databaseReader.setPageSize(100);
databaseReader.setFetchSize(100);
PagingQueryProvider queryProvider = createQueryProvider(id);
databaseReader.setQueryProvider(queryProvider);
databaseReader.setRowMapper(new BeanPropertyRowMapper<>(Account.class));
return databaseReader;
}
TomEE: ignore docBase from being scanned
I have a docBase (alternative docroot) that holds a lot of static content that I don't include in a war file. However, the bootstrap code of TomEE scans the docBase directories for jar files and other JavaEE components. Since the docBase includes a large number of files it takes a very long time to deploy. I was wondering if there is a flag or parameter I can use to ignore the processing (scanning) of these directories.
I tried to tweak the server.xml file of TomEE but nothing seems to be working.
<Context docBase="/large/directory" path="/foo/bar">
<JarScanner scanClassPath="false" scanAllFiles="false" scanAllDirectories="false" />
<Parameter name= "org.apache.tomee.catalina.TomcatWebAppBuilder.IGNORE" value="true" />
<Environment name="org.apache.tomee.catalina.TomcatWebAppBuilder.IGNORE" value="true" type="java.lang.String" />
</Context>
However, with these changes, the docBase is still processed and scanned.
Digging into the source code [1], I found system properties that seem to properly skip the docBase from being scanned:
private static boolean isExcludedBySystemProperty(final StandardContext standardContext) {
String name = standardContext.getName();
if (name == null) {
name = standardContext.getPath();
if (name == null) { // possible ?
name = "";
}
}
if (name.startsWith("/")) {
name = name.substring(1);
}
final SystemInstance systemInstance = SystemInstance.get();
return "true".equalsIgnoreCase(systemInstance.getProperty(name + ".tomcat-only", systemInstance.getProperty("tomcat-only", "false")));
}
Passing in the system property -Dfoo/bar.tomcat-only=true seems to do the trick. Open source for the win!
https://github.com/apache/tomee/blob/master/tomee/tomee-catalina/src/main/java/org/apache/tomee/catalina/TomcatWebAppBuilder.java
Thanks
Query from ejabberd module with LIKE %%
I have these lines of code in an ejabberd module, and it works fine:
case catch ejabberd_odbc:sql_query(Server,["select COUNT(*) as total from spool where username='",IdUsername,"' AND xml LIKE '%message from%' AND xml LIKE '%chat%';"]) of
{selected, [<<"total">>], [[Totale]]} ->
Count = binary_to_list(Totale);
_ -> Count = "0"
end,
If I convert this:
LIKE '%chat%';
with this:
LIKE '%type=\'chat\'%';
I obtain an error. Any ideas? Or is there another way to get only the chat messages?
No visible error; I always obtain 0, but from phpMyAdmin the same query gives me three records! I think the problem is the ' character.
Since you're typing this in an Erlang string, the Erlang escape sequences apply. In particular, \' is an escape sequence for just a single quote, '. (That's more useful inside atoms, which are delimited by single quotes.)
You can try it in an Erlang shell, and see that "\'" and "'" are equivalent:
1> "\'".
"'"
2> "\'" =:= "'".
true
To include an actual backslash in the string, escape it with another backslash:
"\\'"
In your case, that would be:
LIKE '%type=\\'chat\\'%';
Combinatorial proof of ${2n \choose 3} = 2 \times {n \choose 3} + 2 \times {n \choose 1}{n\choose 2}$
I need to prove ${2n \choose 3} = 2 \times {n \choose 3} + 2 \times {n \choose 1}{n\choose 2}$ forming my
counting problem as such:
Suppose that we are trying to count the number of ways
to form a sports team of (insert the correct number here) students picked from (complete the rest of
this sentence, being very precise/specific).
I am entirely new to binomials and any help for where to begin or general guidance would be appreciated.
The left-hand side is the number of ways to choose a team of 3 people from a group of 2n people. Split that group into two halves with n people in each. See if you can interpret the RHS in light of this.
Suppose we had $n$ boys and $n$ girls. What might $\binom{2n}{3}$ count in this scenario? What might $\binom{n}{3}$ count in this scenario? Recall that $\binom{a}{b}$ is used for counting things like the number of subsets of size $b$ from a set of size $a$.
Use this combinatorial proof with $m=n,\ r=3$. It helps to write your RHS as $$\binom{n}{0}\binom{n}{3}+\binom{n}{1}\binom{n}{2}+\binom{n}{2}\binom{n}{1}+\binom{n}{3}\binom{n}{0}.$$
The punchline at the end of the day for problems like these is that if we have a scenario we wish to count, and there are multiple different approaches we could take to count it, then so long as each of those approaches is correct, even if the answers appear different, we know they must be equal regardless: the so-called Principle of Double Counting.
This is the punchline @JMoravitz mentioned.
Say you have a total of $2n$ players, of which $n$ are experienced and $n$ are new.
You are making a team of $3$ players by choosing $3$ out of $2n$ players, which gives $2n \choose 3$.
The selection of $3$ will contain either i) all $3$ experienced, ii) all $3$ new, iii) $2$ new and $1$ experienced, or iv) $1$ new and $2$ experienced.
If we add the selections for all of these cases, the total should equal $2n \choose 3$.
So ${2n \choose 3} = {n \choose 3} + {n \choose 3} + {n \choose 1} {n \choose 2} + {n \choose 2} {n \choose 1} = 2 {n \choose 3} + 2 {n \choose 1} {n \choose 2}$
Say we have a red line and a blue line, each containing $n$ points. How many triples can we choose?
The first answer is ${2n\choose 3}$.
Second, we can choose 3 points from one line, which gives ${n\choose 3}$ triples, times 2; or we can choose 2 points from one line and 1 point from the other, which gives ${n\choose 2}{n\choose 1}$, times 2.
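Before (or after) writing out the combinatorial argument, the identity is easy to sanity-check numerically; a quick Python check using math.comb:

```python
from math import comb

# Check C(2n, 3) = 2*C(n, 3) + 2*C(n, 1)*C(n, 2) for a range of n
for n in range(1, 100):
    lhs = comb(2 * n, 3)
    rhs = 2 * comb(n, 3) + 2 * comb(n, 1) * comb(n, 2)
    assert lhs == rhs, (n, lhs, rhs)

print("identity verified for n = 1..99")
```

For example, at $n=5$ both sides equal $\binom{10}{3}=120$.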
Microdata Rich Snippets Showing On Home Page?
I added the schema.org rich snippets code inside the <head>, and the text is showing up on my home page. What am I doing wrong?
DTD
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0 Transitional//EN" "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">
...
Snippet Code
<div itemscope itemtype="http://schema.org/EntertainmentBusiness">
<span itemprop="name">MySite.com</span>
<div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
<span itemprop="ratingValue">4.5</span> stars -
based on <span itemprop="reviewCount">233</span> Reviews
</div>
</div>
Well, where is your head element?
@unor I haven't posted the whole section... just the important bits relating to the issue
div is not allowed in head.
Parsers think that the body has started when they encounter the div, so this content is displayed.
See "Content model" for the allowed content of the head element.
The code is from schema.org and it uses divs in the head of the document; any solutions to get it working correctly?
@ubique: Could you link to such an example?
@ubique: These are not in the head element.
The correct DTD to use for XHTML and MicroData is:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional with HTML5 microdata//EN" "xhtml1-transitional-with-html5-microdata.dtd">
I found a code example from schema.org that uses the meta tag, although it's not recommended from an SEO point of view, according to the articles I have read:
<div itemscope itemtype="http://schema.org/Offer">
<span itemprop="name">Blend-O-Matic</span>
<span itemprop="price">$19.95</span>
<div itemprop="reviews" itemscope itemtype="http://schema.org/AggregateRating">
<img src="four-stars.jpg" />
<meta itemprop="ratingValue" content="4" />
<meta itemprop="bestRating" content="5" />
Based on <span itemprop="ratingCount">25</span> user ratings
</div>
</div>
How to display my location button on Here Maps for Android
I am using the HERE Maps API to show the map in an Android fragment. I am successful in showing the HERE map in the fragment, but I am not able to show a "My Location" button in the left corner of the map. (I am using the premium plan.)
So, How do I show my location button on a Here Map?
For the Google Maps API, we can add a "My current location" button as explained in (How to display my location on Google Maps for Android API v2). I am looking for the same functionality using the HERE API.
It looks like they don't have an equivalent to the "My Location" button. However, they have the PositionIndicator class, take a look here for an example of how to use it: https://stackoverflow.com/questions/43325005/here-api-offset-map-center
There is no built-in location button UI for the HERE SDK as there is for the Google SDK. You will have to implement the button UI yourself and have it call Map#setCenter(GeoCoordinate point, Animation animation) [1] when pressed. You can get the current location to hand to setCenter using the PositioningManager#getPosition() API [2].
[1] Map setCenter API Reference
[2] Positioning User Guide and PositioningManager API Reference
P.S. You can also have a look at https://github.com/heremaps/here-android-sdk-examples for more examples of using the HERE SDK for Android, but I don't believe there is explicitly an example of implementing a "my location" button.
In spark, how to tackle function with multiple arguments, the data of arguments come from rdd generated from local files
For example, we have two files, a.txt and b.txt, and we want to add the data of the two files together. Maybe my case is not exactly an addition; I only want to test how to handle a function with multiple arguments in Spark, where the arguments' data comes from RDDs generated from local files.
We can add a number to the data of one file with code such as:
a_data = sc.textFile("a.txt")
a_data.map(lambda x: x + 5)
How can I add the data of two files together with Spark RDDs?
Do you want to add the values of two RDD[Int]/RDD[Double] together into a single RDD?
Thanks for your reply. Maybe my description was not clear; I have revised the description of the problem.
I am not sure I understand what you want. If you want to add each element of RDD a to the corresponding element of RDD b, you can do it with zipWithIndex, which will associate an ordered index to each element of each RDD (see the scaladoc):
val a = sc.textFile("a.txt").zipWithIndex().map(_.swap)
val b = sc.textFile("b.txt").zipWithIndex().map(_.swap)
a.fullOuterJoin(b)
.map{ case(k, (v_a, v_b)) => v_a.getOrElse(0) + v_b.getOrElse(0) }
This code does not assume the two files have the same length (zeros are added at the end in case they do not). If you assume that they have the same length, you can simply write:
a.join(b).map(_._2).map(_+_)
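The pairing-and-adding logic above can be sketched without Spark at all; a plain-Python illustration (in-memory lists standing in for the parsed numeric file contents, with zip_longest playing the role of the full outer join):

```python
from itertools import zip_longest

# Stand-ins for the numeric contents parsed from a.txt and b.txt
a = [1, 2, 3, 4]
b = [10, 20, 30]  # shorter "file": missing entries are treated as 0

summed = [x + y for x, y in zip_longest(a, b, fillvalue=0)]
print(summed)  # [11, 22, 33, 4]
```

In Spark terms, zip_longest with fillvalue=0 corresponds to fullOuterJoin plus getOrElse(0), while a plain zip would correspond to the equal-length join variant.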
Facing issue while running aws boto3 ec2 inventory script NameError: name 'Instance_name' is not defined
I am trying to get EC2 inventory information from a Boto3 script. This script runs fine in a Jupyter notebook, but when I run it in my project's Linux environment I get the issue below.
NameError: name 'Instance_name' is not defined
Below is my script:
import boto3
import csv
import pprint as pprint
aws_mang_con=boto3.session.Session(profile_name='root')
ec2_cli=aws_mang_con.client('ec2')
cnt=1
csv_ob=open("demo3.csv","w",newline='')
csv_w=csv.writer(csv_ob)
csv_w.writerow(["S_NO","Enviroment",'Application','Componenets','Instance_id','IP','Instance_Name','Instance_Size','Status'])
response=ec2_cli.describe_instances()['Reservations']
for each_item in response:
for instances in each_item['Instances']:
for tags in instances['Tags']:
if tags['Key'] in 'Name':
name=tags['Value']
print(name)
elif tags['Key'] in 'Application':
application=tags['Value']
print(application)
elif tags['Key'] in 'Components':
components=tags['Value']
print(components)
csv_w.writerow([cnt,'dev',application,components,instances['PrivateIpadress'],name,instances['InstanceType'],instances['State']['Name']])
cnt+=1
csv_ob.close()
What else needs to be added here, since it runs fine in my local Jupyter notebook but shows the above error on my project's Linux machines? Any help will be much appreciated.
Please post the full traceback error.
Traceback (most recent call last):
File "ec2.py", line 19, in
csv_w.writerow([cnt,'dev',application,components,instances['InstanceId'],instances['PrivateIpAddress'],Instance_name,instances['InstanceType'],instances['State']['Name']])
NameError: name 'Instance_name' is not defined
That traceback doesn't correspond to the code you posted. In the code you have name, but in the traceback you have Instance_name.
Yes, I was just changing the name and testing; whether we write name or Instance_name, it gives the same issue.
Please post the actual code you're having trouble with along with the error. I doubt you have the same error when you changed Instance_name to name
I have formated the code again @ewong
@KHYATIAMAN can you check if the formatting is correct.
@ewong it's not allowing me to post code in a comment; the code is the same, and the error is: Traceback (most recent call last):
File "ec2.py", line 19, in
csv_w.writerow([cnt,each_item['OwnerId'],application,components,instances['InstanceId'],instances['PrivateIpAddress'],Instance_name,instances['InstanceType'],instances['State']['Name']])
@ArunK where do I need to format?
I have formatted the code in the question. Is your csv_w.writerow line aligned with for tags in instances['Tags']:?
@KHYATIAMAN Instance_name isn't defined in your code. thus your error.
Thanks a lot @ArunK, it's working fine now. The issue got resolved.
what was the issue, indentation ?
Yes @ArunK, indentation was the issue.
I think the following line is incorrectly indented.
csv_w.writerow([cnt,'dev',application,components,instances['PrivateIpadress'],name,instances['InstanceType'],instances['State']['Name']])
The indentation should be the same as the line that says for tags in instances['Tags']:
it should be like this:
for instances in each_item['Instances']:
for tags in instances['Tags']:
# assign the value to variables
#
csv_w.writerow([cnt,'dev',application,components,instances['PrivateIpadress'],name,instances['InstanceType'],instances['State']['Name']])
cnt+=1
csv_ob.close()
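As a minimal illustration of why the mis-indented line raised NameError (hypothetical data, no boto3 involved): a variable bound only inside the tag loop's if branch simply does not exist when the line that uses it runs before any matching tag has been seen.

```python
# One instance whose tags contain no "Name" key
instances = [{"Tags": [{"Key": "Environment", "Value": "dev"}]}]

for inst in instances:
    for tag in inst["Tags"]:
        if tag["Key"] == "Name":
            name = tag["Value"]
    # This line runs whether or not the branch above ever executed:
    try:
        print(name)
    except NameError as exc:
        print("caught:", exc)  # caught: name 'name' is not defined
```

Indenting the usage so it only runs in a context where the variable is guaranteed to be bound (or assigning a default like name = "" before the loop) avoids the error.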
Thanks for your response, but it has 9 parameters in both the header and the CSV.
What's the path name of appStoreReceiptURL in App Review environment?
Is Bundle.main.appStoreReceiptURL?.lastPathComponent "receipt" or "sandboxReceipt" in App Review environment?
I want to know the result of this TestFlight check code in App Review environment.
Bundle.main.appStoreReceiptURL?.lastPathComponent is "receipt" in App Review environment.
It's just black or It's just dark
Nothing visible through the windshield. It's just black.
I take my car into a forest, and the car battery dies. Outside it's very dark. I wrote the above sentence to describe what I see through the windshield. Now my confusion is whether "It's just black" is a natural way for a native speaker to say it, or if they'd just say "It's just dark".
A native speaker would probably say:
I can't see a thing! It's pitch black outside.
Perhaps so, although I don't find anything wrong with the original. It could work in a story. The first segment is a fragment (There's nothing visible through the windshield would fix that), but, in a larger context, that might be fine; writers use fragments all the time.
You could also say pitch dark
@J.R. Yes, I like to use the fragment; the complete sentence doesn't look as good for a story.
ChromeSessionParser syntax issue
I am running into an issue with parts of my code; I have added my errors at the bottom. The issue arises around the sqlite3.OperationalError part. I attempted removing it, but when I do, another error occurs for line 68, 'def getpath():'. I can't see why the errors are showing up; any and all help is appreciated, as always. Thanks. My code generally takes login data out of my database and writes it to a CSV file.
import os
import sys
import sqlite3
try:
import win32crypt
except:
pass
import argparse
def args_parser():
parser = argparse.ArgumentParser(description="Retrieve Google Chrome Passwords")
parser.add_argument("--output", help="Output to csv file", action="store_true")
args = parser.parse_args()
if args.output:
csv(main())
else:
for data in main():
print (data)
def main():
info_list = []
path = getpath()
try:
connection = sqlite3.connect(path + "Login Data")
with connection:
cursor = connection.cursor()
v = cursor.execute('SELECT action_url, username_value, password_value FROM logins')
value = v.fetchall
for information in value:
if os.name == 'nt':
password = win32crypt.CryptUnprotectData(information[2], None, None, None, 0)[1]
if password:
info_list.append({
'origin_url': information[0],
'username': information[1],
'password': str(password)
})
except sqlite3.OperationalError as e:
e = str(e)
if (e == 'database is locked'):
print('[!] Make sure Google Chrome is not running in the background')
sys.exit(0)
elif (e == 'no such table: logins'):
print('[!] Something wrong with the database name')
sys.exit(0)
elif (e == 'unable to open database file'):
print('[!] Something wrong with the database path')
sys.exit(0)
else:
print (e)
sys.exit(0)
return info_list
def getpath():
if os.name == "nt":
# This is the Windows Path
PathName = os.getenv('localappdata') + '\\Google\\Chrome\\User Data\\Default\\'
if (os.path.isdir(PathName) == False):
print('[!] Chrome Doesn\'t exists')
sys.exit(0)
return PathName
def csv (info):
with open ('chromepass.csv', 'wb') as csv_file:
csv_file.write('origin_url,username,password \n'.encode('utf-8'))
for data in info:
csv_file.write(('%s, %s, %s \n' % (data['origin_url'], data['username'], data['password'])).encode('utf-8'))
print ("Data written to Chromepass.csv")
if __name__ == '__main__':
args_parser()
Errors
Traceback (most recent call last):
File "C:/Users/Lewis Collins/Python Project/ChromeDB's/ChromeSessionParser.py", line 90, in <module>
args_parser()
File "C:/Users/Lewis Collins/Python Project/ChromeDB's/ChromeSessionParser.py", line 19, in args_parser
for data in main():
File "C:/Users/Lewis Collins/Python Project/ChromeDB's/ChromeSessionParser.py", line 35, in main
for information in value:
TypeError: 'builtin_function_or_method' object is not iterable
The right way is:
except sqlite3.OperationalError as e:
And your main() should look like this:
def main():
info_list = []
path = getpath()
try:
connection = sqlite3.connect(path + "Login Data")
with connection:
cursor = connection.cursor()
v = cursor.execute('SELECT action_url, username_value, password_value FROM logins')
value = v.fetchall
for information in value:
if os.name == 'nt':
password = win32crypt.CryptUnprotectData(information[2], None, None, None, 0)[1]
if password:
info_list.append({
'origin_url': information[0],
'username': information[1],
'password': str(password)
})
except sqlite3.OperationalError as e:
e = str(e)
if (e == 'database is locked'):
print('[!] Make sure Google Chrome is not running in the background')
sys.exit(0)
elif (e == 'no such table: logins'):
print('[!] Something wrong with the database name')
sys.exit(0)
elif (e == 'unable to open database file'):
print('[!] Something wrong with the database path')
sys.exit(0)
else:
print(e)
sys.exit(0)
return info_list
Thanks for the help. I knew it would be something small that I was getting wrong.
I've fixed that issue and am now receiving new errors: Traceback (most recent call last):
File "C:/Users/Lewis Collins/Python Project/ChromeDB's/ChromeSessionParser.py", line 90, in
args_parser()
File "C:/Users/Lewis Collins/Python Project/ChromeDB's/ChromeSessionParser.py", line 19, in args_parser
for data in main():
File "C:/Users/Lewis Collins/Python Project/ChromeDB's/ChromeSessionParser.py", line 35, in main
for information in value:
TypeError: 'builtin_function_or_method' object is not iterable
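For future readers: the TypeError in the last traceback comes from value = v.fetchall, which binds the method object itself instead of calling it. A minimal sqlite3 reproduction (in-memory database with a made-up row, not the Chrome one) and the fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE logins (username_value TEXT)")
cur.execute("INSERT INTO logins VALUES ('alice')")

v = cur.execute("SELECT username_value FROM logins")
value = v.fetchall  # the bug: a bound method object, not the result rows
try:
    for row in value:
        pass
except TypeError as exc:
    print(exc)  # 'builtin_function_or_method' object is not iterable

rows = v.fetchall()  # the fix: actually call the method
print(rows)  # [('alice',)]
```

So changing value = v.fetchall to value = v.fetchall() in main() should resolve the remaining error.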
| common-pile/stackexchange_filtered |
About families of distributions and moment generating functions
Suppose we have $X_1,...,X_n$ i.i.d rv's each with mgf $M_{X_i}(t)$.
Let $Z = \sum X_i$, we know that $M_Z(t) = M^{n}_{X_i}(t)$
In some cases, we can identify the distribution of $Z$ by recognizing the mgf of $Z$. In other cases it is not possible to identify the mgf, and the Jacobian method or some other alternative method is required.
Does anyone know under which family of distributions, or under what conditions, one obtains a "known" mgf for $Z$?
You may take a look at something like stable distribution / infinite divisibility. Those are some special classes of distribution and in mathematical terms, they are "closed under addition". So in such case you should be able to figure out the mgf / characteristic function of the sum.
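As a concrete instance of a family that is closed under addition (a standard example): if each $X_i \sim \mathrm{Poisson}(\lambda)$, then

```latex
M_{X_i}(t) = e^{\lambda(e^t - 1)}
\quad\Longrightarrow\quad
M_Z(t) = \left[e^{\lambda(e^t - 1)}\right]^n = e^{n\lambda(e^t - 1)},
```

which is recognized as the mgf of $\mathrm{Poisson}(n\lambda)$. The same pattern works for the normal, gamma, and binomial (fixed $p$) families.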
@BGM take a look at my other question here
I think here we have an algebraic structure. For example, we can define a subclass of distributions for which the sum can be evaluated using moment generating functions, namely the distributions "closed under addition". Can we do the same for the product $Z = XY$? In general, can we find the families of distributions closed under addition, product, or convolution?
| common-pile/stackexchange_filtered |
Why does Kickstarter apply "utf8=[Unicode character]" to the query string?
I noticed that (for example) when searching on Kickstarter it applies utf8=[Unicode character] to the query string, like the following:
Is it some fancy way of detecting whether the client supports UTF-8? Never came across this trick before.
possible duplicate of What is the _snowman param in Ruby on Rails 3 forms for?
That parameter could be added to force older browsers (IE 6/7) to encode URL parameters as Unicode.
| common-pile/stackexchange_filtered |
Which model performance metric is used for SVR model by sklearn?
I noticed the math for SVR states that SVR uses an L1 penalty, or epsilon-insensitive loss function. But the sklearn SVR model documentation mentions an L2 penalty. I don't have much experience with SVR; I thought those in the community who have experience could shed some light on this.
Here is the snippet from the documentation:
C: float, default=1.0
Regularization parameter. The strength of the
regularization is inversely proportional to C. Must be strictly
positive. The penalty is a squared l2 penalty.
Check out this link: https://scikit-learn.org/stable/modules/svm.html#svm-regression. Quote: "Here, we are penalizing samples whose prediction is at least ε away from their true target."
So should I use MSE or MAE to check my SVR model's performance?
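Either metric can be reported; the key point is that neither is literally the training objective. ε-SVR minimizes a squared L2 penalty on the weights plus the ε-insensitive loss on the residuals, which behaves like an L1/MAE-style loss outside the ε-tube. A pure-Python sketch (the numbers are made up) contrasting the three quantities:

```python
# epsilon-insensitive loss (what epsilon-SVR optimizes during training,
# together with the L2 penalty on the weights) versus the usual
# *evaluation* metrics MAE and MSE.
def eps_insensitive(y_true, y_pred, eps=0.1):
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.05, 2.5, 3.0]   # the first error (0.05) falls inside the eps-tube
```

Note how the 0.05 error is invisible to the ε-insensitive loss but counted by both MAE and MSE; that is why the choice of evaluation metric is a separate decision from the model's loss.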
| common-pile/stackexchange_filtered |
Solving Black scholes PDE using Laplace transform
I'm trying to obtain the Laplace transform of the call option price with respect to time to maturity under the CEV process.
The well known Black scholes PDE is given by
$$
\frac{1}{2}\sigma(x)^2x^2\frac{\partial^2}{\partial x^2}C(x,\tau)+\mu x\frac{\partial}{\partial x}C(x,\tau)-rC(x,\tau)-\frac{\partial}{\partial \tau}C(x,\tau)=0.
$$
where the initial condition $C(x,0)=max(x-K,0)$ and $\sigma(x)=\delta x^\beta$.
Taking a Laplace transform with respect to $\tau$, we obtain the following ODE:
$$
\frac{1}{2}\delta x^{2\beta+2}\frac{\partial^2}{\partial x^2}\hat{C}(x,\lambda)+\mu x\frac{\partial}{\partial x}\hat{C}(x,\lambda)-(\lambda+r)\hat{C}(x,\lambda)=-max(x-K,0).
$$
where $\hat{C}(x,\lambda)=\int_0^\infty e^{-\lambda \tau}C(x,\tau)d\tau$.
and the initial condition is transformed to
$$
\hat{C}(x,\lambda)=\int_0^\infty e^{-\lambda \tau}C(x,0) d\tau=max(x-K,0)/\lambda
$$(is this right??? it seems wrong..)
. Then, $\hat{C}(x,\lambda)$ can be formulated analytically for the cases $x>K$ and $x\leq K$.
How do I get an explicit formula for $\hat{C}(x,\lambda)$? I can't proceed from this stage.
Firstly, note that by taking Laplace transforms time has been eliminated, and as such there are no initial conditions for your ODE anymore. There are only boundary conditions, which in this case read $C(\pm \infty, \tau) = 0$. By the way, it is customary to denote the price of the underlying as $S$, whereas you denote it as $x$.
If you look at the ODE you see that it is very similar to the Euler equation. Thus it is useful to substitute for $y := \log(x)$. Having done that we end up with :
\begin{equation}
(i) \frac{\delta}{2} \frac{d^2}{d y^2} \tilde{C}(y,\lambda) + \left(\mu - \frac{\delta}{2} e^{2 \beta y}\right) \frac{d}{d y} \tilde{C}(y,\lambda) - (\lambda + r) \tilde{C}(y,\lambda) = - max(e^y - K,0)
\end{equation}
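For reference, the chain rule identities behind the substitution $y := \log(x)$ are (a standard computation, included as it is the step glossed over above):

```latex
\frac{\partial C}{\partial x} = \frac{1}{x}\frac{\partial C}{\partial y},
\qquad
\frac{\partial^2 C}{\partial x^2} = \frac{1}{x^2}\left(\frac{\partial^2 C}{\partial y^2} - \frac{\partial C}{\partial y}\right),
```

so each explicit factor of $x$ in the PDE coefficients is absorbed, leaving only the exponentials $e^{2\beta y}$ coming from $x^{2\beta}$.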
This is already a much simpler equation, yet since the coefficient of the first derivative is not constant we still have to transform it. We first solve the homogeneous equation. Here let us try the following ansatz:
\begin{equation}
(ii) \tilde{C}(y,\lambda) := \exp(2 A \beta y) \tilde{\tilde{C}}(\exp(2 \beta y))
\end{equation}
Inserting this ansatz into the above equation we obtain another second-order equation for the function $\tilde{\tilde{C}}$. The latter equation will be greatly simplified if we require the constant term in the coefficient of the zeroth derivative to be zero. This gives:
$-2 A^2 \beta^2 \delta - 2 \mu A \beta + r + \lambda =0$ which gives $A = (-\mu + \sqrt{\mu^2 + 2 \delta (\lambda+r)})/(2 \beta \delta)$. Taking all this into account we get the following ODE:
\begin{equation}
(iii) A \tilde{\tilde{C}}(u,\lambda) + (B_0 - B_1 u )d_u \tilde{\tilde{C}}(u,\lambda) + C u d^2_{u^2} \tilde{\tilde{C}}(u,\lambda) = 0
\end{equation}
where $A = (\mu - \sqrt{\mu^2+2\delta(\lambda+r)})$ and $B_0 = 2 \beta (2\beta \delta + 2 \sqrt{\mu^2 + 2 \delta(\lambda+r)})$, $B_1 = 2\beta\delta$ and $C = (2 \beta)^2 \delta$ and $u := \exp(2 \beta y)$. We substitute for $B_1/C u$ and we recognize the generalized Laguerre differential equation, an equation that is easily solved through a power-series method. Now, combining (iii) with (ii) we obtain the fundamental solutions of the homogeneous ODE (i). Having done this we solve the inhomogeneous ODE (i) by means of the Green's function method, for example.
| common-pile/stackexchange_filtered |
Why does installing packages without updates fail? Are original packages removed from repositories after an update is available?
After a fresh 14.04.1 install I removed a few packages, disabled updates (because updating causes suspend/sleep not to work), ran apt-get update and apt-get upgrade (which of course printed 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.), and finally tried installing packages with apt-get, but I get this error message:
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
clang-3.3 : Breaks: clang-3.4 but 1:3.4-1ubuntu3 is to be installed
clang-3.4 : Breaks: clang-3.3 but 1:3.3-16ubuntu1 is to be installed
clang-3.5 : Breaks: clang but 1:3.4-0ubuntu1 is to be installed
Breaks: clang-3.3 but 1:3.3-16ubuntu1 is to be installed
Breaks: clang-3.4 but 1:3.4-1ubuntu3 is to be installed
clang-format-3.3 : Breaks: clang-format-3.4 but 1:3.4-1ubuntu3 is to be installed
clang-format-3.4 : Breaks: clang-format-3.3 but 1:3.3-16ubuntu1 is to be installed
clang-format-3.5 : Breaks: clang-format-3.3 but 1:3.3-16ubuntu1 is to be installed
epiphany-browser : Depends: libwebkit2gtk-3.0-25 (>= 2.1.4) but it is not going to be installed
libreoffice : Depends: libreoffice-base but it is not going to be installed
Depends: libreoffice-core (= 1:4.2.3~rc3-0ubuntu2) but 1:4.2.4-0ubuntu2 is to be installed
Depends: libreoffice-report-builder-bin but it is not going to be installed
octave : Depends: default-jre-headless but it is not going to be installed
openshot : Depends: gtk2-engines-pixbuf but it is not going to be installed
python-clang-3.3 : Breaks: python-clang-3.4 but 1:3.4-1ubuntu3 is to be installed
python-clang-3.4 : Breaks: python-clang-3.3 but 1:3.3-16ubuntu1 is to be installed
Breaks: python-clang-3.5 but 1:3.5~svn201651-1ubuntu1 is to be installed
python-clang-3.5 : Breaks: python-clang-3.3 but 1:3.3-16ubuntu1 is to be installed
Breaks: python-clang-3.4 but 1:3.4-1ubuntu3 is to be installed
E: Unable to correct problems, you have held broken packages.
Both:
dpkg --get-selections | grep hold
And:
apt-mark showhold
Return nothing.
I suspect this is caused by the disabled update sources. If yes, then why cannot apt-get just install the versions from release time? Why does it want newer versions?
Are the original packages removed from the repositories in case of security updates? Are only updated versions of dependencies available?
$ cat /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse
deb-src http://archive.ubuntu.com/ubuntu trusty main restricted universe multiverse
deb cdrom:[Ubuntu 14.04.1 LTS _Trusty Tahr_ - Release i386 (20140722.2)]/ trusty main restricted
so my real question is: Are the original packages removed from the repo in case of security updates? Are only the updated versions available?
Installations do work after updating, and now suspend seems to work too (maybe I had the unsupported updates enabled last time), but I am still curious why I cannot install certain packages without updating
@Toroidal: the updates are repositories, see /etc/apt/sources.list
I don't know exactly what you did, but I'd guess that you're correct that the disabled update sources broke everything. (And your software sources should have nothing to do with suspend/sleep... even updating has to be manually initiated, so again once it's finished it shouldn't affect suspend/sleep.)
apt relies upon the software sources to know where & what to update/download/install, without proper reliable software sources it's crippled. You could try a sudo apt-get check for some info, but regardless you may need to restore them:
If you destroyed your software sources somehow, you can go to this site http://repogen.simplylinux.ch/ to create a new sources.list file. And/or follow the directions here for more info on backing up the current sources & having them automatically re-created or getting one from the above site.
If you don't want newer versions of packages, then you can just run the currently installed ones, though there could be security holes waiting to be patched, or improved updates waiting too.
You could try and run sudo apt-get dist-upgrade, which updates more packages than the normal "upgrade" command does.
Also a call to sudo apt-get check and sudo apt-get -f install can't hurt.
$ sudo apt-get check
Reading package lists... Done
Building dependency tree
Reading state information... Done
| common-pile/stackexchange_filtered |
XmlSerializer extraTypes memory leak
I'm developing an application which calls the XmlSerializer constructor with the extraTypes parameter a lot. I've found out that each call increases application memory by about 100KB and 2 descriptors (sometimes more).
Code example:
this code increases application memory by 100KB and 2 handles per call
while (true)
{
    Console.ReadLine();
    new XmlSerializer(typeof (object), new Type[] {});
}
this code increases application memory by 43024KB and 2004 handles
for (var i = 0; i < 1000; i++)
{
    new XmlSerializer(typeof (object), new Type[] {});
}
so just the simplest example of a console application:
internal class Program
{
    private static void Main(string[] args)
    {
        //this code increases application memory by 43024KB and 2004 handles
        for (var i = 0; i < 1000; i++)
        {
            new XmlSerializer(typeof (object), new Type[] {});
        }
        Console.WriteLine("Finished. Press any key to continue...");
        Console.ReadLine();
    }
}
Is it a bug in XmlSerializer or am I doing something wrong?
P.S. The same happens with code optimization on and a Release build.
Duplicate of Memory Leak using StreamReader and XmlSerializer
Ok, there is already an answer on msdn https://blogs.msdn.microsoft.com/tess/2006/02/15/net-memory-leak-xmlserializing-your-way-to-a-memory-leak/
Short answer is: no, it is not a bug, it is a feature ;)
XmlSerializer creates a TempAssembly per call of the constructor with the extraTypes parameter. And "an assembly is not an object on the GC Heap, the GC is really unaware of assemblies, so it won't get garbage collected".
Solution is to cache XmlSerializers in a dictionary and use only one object per type, instead of creating a new XmlSerializer each time you need it.
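A minimal sketch of such a cache (the class and member names here are illustrative, not from any library):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Xml.Serialization;

static class SerializerCache
{
    // One XmlSerializer per (type, extraTypes) combination, created once and
    // reused, so the dynamically generated temp assembly is emitted only once.
    private static readonly ConcurrentDictionary<string, XmlSerializer> Cache =
        new ConcurrentDictionary<string, XmlSerializer>();

    public static XmlSerializer Get(Type type, Type[] extraTypes)
    {
        // Build a stable key from the full type names involved.
        string key = type.FullName + "|" +
                     string.Join(",", extraTypes.Select(t => t.FullName));
        return Cache.GetOrAdd(key, _ => new XmlSerializer(type, extraTypes));
    }
}
```

With this in place, call SerializerCache.Get(typeof(MyType), extraTypes) wherever a serializer is needed; repeated calls with the same arguments no longer grow the assembly count.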
Yep, another approach for you is to process your code in a new AppDomain and unload the AppDomain after processing. Thus you'll get rid of these temp assemblies.
Just debugged XmlSerializer and saw exactly what you've said in your answer, nice question
| common-pile/stackexchange_filtered |
Massive Gauge Bosons without Higgs fields
In a possible theory like our Standard model but without a Higgs i.e.:
$$ \mathcal{L}=i\bar{\Psi}_f\gamma_\mu D^\mu\Psi_f-\text{Tr}[G^b_{\mu\nu}G^{b\,\mu\nu}] $$
where $b,f$ run over the typical species we have in the standard model (SM), and all fields are in the same representation as in the SM.
In this context it is sometimes stated that, although there is no Higgs, there would be a mass generation mechanism for the gauge bosons of $SU(2)$ because of QCD. This happens via the chiral quark condensate $\langle q_L q_R\rangle\neq 0$. (Or statements like "the gauge bosons eat up the pion")
My question is now, how can I see that this generates a mass for the $SU(2)$-gauge bosons? Usually using methods of spontaneous symmetry breaking, I would put a vacuum expectation value for some field and see that it results in a term that behaves like a mass term. But this won't work here because there is no term involving quarks and bilinear in gauge bosons.
I could write this as an answer but it might not be enough to satisfy you. The quark-quark condensate happens because of SU(3) confinement and cannot be studied perturbatively. The best way of studying it at a Lagrangian level is to use chiral perturbation theory, where you have explicit "pions" (quark-quark bound states) as the fields. Any good book on QFT and the SM that discusses QCD has a discussion on chiral perturbation theory; I advise you to give it a look!
Which books are there for example? Do you know a good one?
I also read about chiral perturbation theory in Schwartz, but many aspects concerning the theory of pions seem to appear all of a sudden. How are the two descriptions related?
Unfortunately I am not an expert in chiral PT. I would advise you to read something like http://arxiv.org/abs/hep-ph/0210398. My SM course had some introductory material and I can only tell you some general notes. But the idea is very similar to the SM Higgs mechanism: if $\langle q_L q_R \rangle \neq 0$ you are aligning the vev in a preferred $SU(2)$ direction (just like the SM Higgs), as the condensate $q_L q_R$ is not an $SU(2)$ singlet. Therefore, a mass will be generated. The details should be clear in a good chiral PT text... sorry for the superficial answer
The standard text worth reading is Georgi's Weak Interactions text, which outlines the effective σ models resulting out of chiral symmetry breaking in QCD. This process is quite subtle and elusive and Schwartz's book is no more schematic than standard texts. But, once you've bought the conceit of quark bilinears supplanted/summarized by mesons (and this is pure low energy QCD, nothing to do with the weak interactions; it is a potential separate question I would not be keen to answer!), which you complain about in your comment, it is not hard to see how the pions of the σ-model, this effective low energy theory, also overlap with generators of the EW symmetry, which are thus also SSBroken, in turn, but only by a little.
This, in fact, is the building principle of all technicolor theories: using states resulting out of a technistrong theory chiral symmetry breaking to substitute for the Higgs field in the Higgs mechanism and break EW. So, let me flesh out the point that @romanovzky made schematically.
Take the SM, keep only the lightest quark generation, for simplicity, so the u,d quarks only, and discard the Higgs field. Hence, u and d are now massless. QCD, in the χSB elided here (cf. WP link provided) generates a σ model that summarizes this symmetry breaking into PCAC; actually, here, CAC, conserved axial current, as the pion is massless, that is, the 3 conserved axial currents
$$
\vec{A}_\mu= f_\pi \partial_\mu \vec{\pi} +... ,
$$
where $f_\pi$ is the pion decay constant, of the order of 0.1 GeV --this is low energy QCD, after all, and we are interested in features of the peculiar asymmetric vacuum.
Enter the Weak interactions. You couple these axial currents to the axial half of the W, to get schematically and cavalierly,
$$
...+g \vec{A}_\mu \cdot \vec{W}^\mu +...
$$
etc... so you are gauging the σ model, in our case, for convenience, the nonlinear one. The EW $SU(2)_L$ overlaps the 3 broken chiral charges of the Axials, so it's broken. (Recall the V-A action on pions, $\delta _{\vec{\theta}_L} \vec{\pi} \sim \vec{\theta}_L\times \vec{\pi} - f_\pi \vec{\theta}_L +...$. You can see the axial variation of the above current-W term may only be cancelled by the variation of the W bilinear in the next paragraph.)
Now the seagull term in the gauge-covariant kinetic term for the pion is, quite schematically, in the leading, pionless term, something like
$$
g^2 f_\pi^2 \vec{W}_\mu \cdot \vec{W}^\mu ~,
$$
that is, $f_\pi$ has supplanted the standard Higgs EW v.e.v. $v\sim 246$ GeV of the real world; that is, the new notional Ws now have a mass
$$
\frac{f_\pi}{246~GeV} M_W\sim 4\cdot 10^{-4} M_W \sim 32 ~MeV,
$$
real light... lighter than the strong χSB scale.
You can see how this sort of thing (which happens at some level in real life) is a negligible piece of noise in the big picture of the SM. We have omitted the "second job of the Higgs", namely giving the fermions (e.g. leptons, in case we didn't wish to descend into current quark mass complications) mass in a gauge invariant way, but it can be arranged.
Sophisticated versions of this mechanism undergird Technicolor models, where the interaction is some QCD-inspired strongly coupled theory, e.g., Susskind 1979. Its eqn (15) provides the W mass "directly" through the vacuum polarization contribution of the quarks, assuming χSB, but it might come across as more cryptic than the effective Lagrangian sketch outlined here.
| common-pile/stackexchange_filtered |
remove missing values from dataframe and reshape the dataframe with the non-missing ones
I have a dataframe that has missing values at each column, but at different rows. For simplicity, let's see the following dataframe (real dataframe is much more complex):
first_column <- c(1, 2, NA,NA)
second_column <- c(NA, NA, 4,9)
df <- data.frame(first_column, second_column)
and we get:
first_column second_column
1 1 NA
2 2 NA
3 NA 4
4 NA 9
Now, I want to reshape the dataframe, after removing these missing values. I want the following:
first_column second_column
1 1 4
2 2 9
Is there an automatic way to do it (real dataframe has dimensions 1800 x 33)?
We may have to reorder the column values so that the NAs are at the end and then use drop_na
library(dplyr)
library(tidyr)
df %>%
summarise(across(everything(), ~ .x[order(is.na(.x))])) %>%
drop_na()
-output
first_column second_column
1 1 4
2 2 9
If there is an unequal distribution of NAs in each column and you want to remove a row only if all the columns have NA at that row after ordering, use if_any/if_all in filter
df %>%
mutate(across(everything(), ~ .x[order(is.na(.x))])) %>%
filter(if_any(everything(), complete.cases))
-output
first_column second_column
1 1 4
2 2 9
One possible solution:
df_new = as.data.frame(lapply(df, function(x) x[!is.na(x)]))
first_column second_column
1 1 4
2 2 9
| common-pile/stackexchange_filtered |
Trick to give images a standard group size
I'm not sure where to start with this:
I'm looking for a way to custom-edit widths/heights on one line of photos, pretty much the same as here: http://www.flickr.com/search/?q=flowers. On Flickr each line of photos has the same width and each photo in that line the same height, so that it lines up nicely.
On my pages I use PHP to randomly select 2 or 3 images to display in a line. I have it set so the width is a fixed size (2 images at 50% width or 3 at 33%, etc.), but that means the heights differ between the images. The tidy solution is, I guess, a script that looks at the images' heights and widths and works out their sizes so that the widths and heights adjust to match up and keep a tidy line.
Does anyone know where to start with this?
Thanks
Maybe you could start by looking at this blog post:
http://www.crispymtn.com/stories/the-algorithm-for-a-perfectly-balanced-photo-gallery
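The core of such an algorithm is simple: if every image in a row is scaled to the same height h, the row's total width is h times the sum of the images' aspect ratios, so you can solve for h directly. A sketch (assuming aspect ratio = width / height; function name is illustrative):

```javascript
// Given the aspect ratios of the images assigned to one row and the
// container width in pixels, return the common height that makes the
// row exactly fill the container.
function rowHeight(aspectRatios, containerWidth) {
  const totalAspect = aspectRatios.reduce((sum, a) => sum + a, 0);
  return containerWidth / totalAspect;
}
```

Each image in the row is then rendered at height h and width aspect * h, which is exactly the layout Flickr-style galleries produce.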
Generally I will place the images within an element, i.e.
<div class='item_image'><img src='image.jpg'></div>
You could then set css as follows:
div.item_image { width: 100px; height: 75px; overflow-y: hidden; }
div.item_image img { width: 100%; }
You could also use php to resize your images appropriately and/or create thumbnail sizes which are appropriate to what you need.. Lots of options really.
You could set the images as background-image and then use the background-size: cover property to fill the containing elements. Just set a specific width & height of each image container, and then background-size: cover will fill the container with the image.
More on this here: http://css-tricks.com/perfect-full-page-background-image/
Markup
<div class="imageOne" style="background-image: url('x');"> </div>
CSS
.imageOne {
background-size: cover;
}
| common-pile/stackexchange_filtered |
SVG generated with D3 doesn't scale like seemingly identical static SVG content
I'm generating an svg visualization with d3 that I'd like to have scale with it's container in a responsive way. I can directly create an SVG element that behaves this way, but when I write a d3 script that generates the same SVG markup, it doesn't scale.
Here's the d3 code that generates the SVG element:
var arc = d3.svg.arc()
.startAngle(0)
.innerRadius(36)
.outerRadius(48);
var svg = d3.select('#d3').append('svg')
.attr('width', '100%')
.attr('height', '100%')
.attr('viewbox', '0, 0, 100, 100')
.attr('preserveAspectRatio', 'xMidYMid')
.append('g')
.attr('transform', 'translate(50, 50)');
var loop = svg.append('g')
.attr('class', 'percent-wheel');
loop.append('path')
.attr('class', 'background')
.attr('d', arc.endAngle(2 * Math.PI));
And here's the resulting SVG markup, which scales correctly when inserted into the DOM statically:
<div id="d3" class="container">
<svg preserveAspectRatio="xMidYMid" viewbox="0, 0, 100, 100" height="100%" width="100%">
<g transform="translate(50, 50)">
<g class="percent-wheel">
<path d="M0,48A48,48 0 1,1 0,-48A48,48 0 1,1 0,48M0,36A36,36 0 1,0 0,-36A36,36 0 1,0 0,36Z" class="background"></path>
</g>
</g>
</svg>
</div>
You can see a live example here:
http://jsfiddle.net/HMQGq/
Can anyone explain what's different about creating the svg via d3 that causes this to happen?
I'm not sure if I understand what you mean. Neither of the circles scales when I resize the window.
Sorry if that wasn't clear. One circle has scaled to the size of its containing element (the .container div), the other hasn't.
There's a typo in your attribute definition -- it should be viewBox instead of viewbox.
That's interesting, and solves my problem. Thanks! Out of curiosity: any idea why the typo causes problems in d3, but not in the static markup?
My guess is that the browser "normalizes" the attribute names when parsing the page and that this doesn't happen for content that is generated dynamically, but I'm really just guessing here.
It's interesting that this occurs in Chrome, IE & Firefox (i.e. all the ones I quickly checked).
Yes, the fact that it was consistent across browsers really threw me, and made me think it was something that was happening by design, as opposed to the sometimes-occurring typo auto-correction it seems to be
Please mark this as answered.
The problem is a typo in the attribute definition -- it should be viewBox instead of viewbox.
| common-pile/stackexchange_filtered |
Getting net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK) when running Vue project
I have the following problem: my Vue project used to run properly and now, out of the blue, it is giving me the error
net::ERR_CONTENT_LENGTH_MISMATCH 200 (OK) for the chunk-vendors.js file in my Chrome console. Only the index page is visible and I am unable to access any routes.
Here is the vue.config.js file
const webpack = require('webpack');
// UglifyJsPlugin is used below, so it has to be required here
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  lintOnSave: false,
  configureWebpack: {
    module: {
      rules: [
        {
          test: /\.html$/,
          loader: 'vue-template-loader',
          exclude: /index\.html/,
        },
      ],
    },
    optimization: {
      minimizer: [new UglifyJsPlugin()],
      splitChunks: {
        chunks: 'all',
      },
    },
    plugins: [
      // Ignore all locale files of moment.js
      new webpack.IgnorePlugin(/^\.\/locale$/, /moment$/),
    ],
  },
};
These are the dependencies and devDependencies that I am using in this project.
"dependencies": {
"@rxweb/reactive-forms": "^1.0.0",
"bootstrap": "^4.6.0",
"bootstrap-vue": "^2.21.2",
"core-js": "^3.16.0",
"luxon": "^1.28.0",
"moment": "^2.29.1",
"uglifyjs-webpack-plugin": "^1.2.7",
"vue": "^2.6.14",
"vue-class-component": "^7.2.3",
"vue-datetime": "^1.0.0-beta.14",
"vue-property-decorator": "^8.4.2",
"vue-router": "^3.5.2",
"vue-template-loader": "^1.0.0",
"weekstart": "^1.1.0"
},
"devDependencies": {
"@typescript-eslint/eslint-plugin": "^2.33.0",
"@typescript-eslint/parser": "^2.33.0",
"@vue/cli-plugin-babel": "^4.5.13",
"@vue/cli-plugin-eslint": "^4.5.13",
"@vue/cli-plugin-router": "^4.5.13",
"@vue/cli-plugin-typescript": "^4.5.13",
"@vue/cli-service": "^4.5.13",
"@vue/eslint-config-typescript": "^5.0.2",
"eslint": "^6.7.2",
"eslint-plugin-vue": "^6.2.2",
"lint-staged": "^9.5.0",
"node-sass": "^5.0.0",
"sass-loader": "^10.1.0",
"typescript": "^3.9.10",
"vue-template-compiler": "^2.6.14"
},
I have tried adding proxy to the dev server as described in this post Vue npm run serve Failed to load resource: net::ERR_CONTENT_LENGTH_MISMATCH but still the problem persists.
I checked the network tab and the chunk-vendors.js file was over 11.4 MB.
I am using Vue 2.6.14
| common-pile/stackexchange_filtered |
automatically build a org chart based on self-referencing sql employee table?
So I have a standard self-referencing employee table in SQL 2005, and I thought it would be cool to have an automated way to build an org chart like this
(source: freeorgchart.com)
I want something that can be scheduled to run daily because my company has a large team and things change frequently. (As an aside, I am refreshing the employee table's 1000 records daily from Active Directory.)
Anyone know a simple way to do this?
Visio/VBA?
Excel/VBA?
WinForms/C#?
Visio allows you to generate an org chart from a SQL database, but I'm not sure you can automate it (may have to have Visio installed on the server if you can) and it's a fair amount of work to set up the template file to lay everything out the way you like. So the short answer is "probably possible in Visio, but definitely a pain for non-Visio gurus"
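Whichever front end is chosen, the data side usually starts with flattening the hierarchy. A sketch using a recursive CTE (supported in SQL Server 2005; the table and column names EmployeeID, ManagerID, Name are assumptions about your schema):

```sql
-- Walk the self-referencing Employee table top-down, tracking depth and path,
-- so a rendering tool can indent or position each node.
WITH OrgChart AS (
    SELECT EmployeeID, ManagerID, Name,
           0 AS Depth,
           CAST(Name AS VARCHAR(MAX)) AS Path
    FROM Employee
    WHERE ManagerID IS NULL            -- the root(s): employees with no manager

    UNION ALL

    SELECT e.EmployeeID, e.ManagerID, e.Name,
           oc.Depth + 1,
           oc.Path + ' > ' + e.Name
    FROM Employee e
    JOIN OrgChart oc ON e.ManagerID = oc.EmployeeID
)
SELECT * FROM OrgChart ORDER BY Path;
```

The Depth and Path columns give a layout tool (Visio, Excel, or a WinForms tree control) everything it needs to draw the chart, and the query can run on whatever daily schedule refreshes the table.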
| common-pile/stackexchange_filtered |
Class initialization in Python - when to use parentheses and default values
class Example:
def __init__(self, sides, name):
self.sides = sides
self.name = name
Why are the additional attributes below (interior_angles and angle) not defined in the parentheses and then written out in a function?
class Example:
def __init__(self, sides, name):
self.sides = sides
self.name = name
self.interior_angles = (self.sides-2)*180
self.angle = self.interior_angles/self.sides
I have also seen some attributes be defined within the parentheses themselves. In what cases should you do that? For example:
class Dyna_Q:
def __init__(self, environment, alpha = alpha, epsilon = epsilon, gamma = gamma):
Thank you.
Why should interior_angles and angle be defined as parameters? They can be calculated from the other values.
It seems that those should be properties instead.
Regular parentheses () in programming are usually used very similarly to how they are used in math: they group stuff together and change the order of operations.
4 - 2 == ( 4 - 2 )
will be true, however
(4 - 2) / 2 == 4 - 2 / 2
won't be. Additionally they are used, as you correctly observed, to basically tell the interpreter where a function name ends and the arguments start, so in the case of a function definition they come after the name and before the :. The assignments from your last example aren't really assignments but default values; I assume you have global variables alpha and so on in your scope, which is why this works.
So when to use parentheses? For example:
When you want to change the order of operations.
When you want to call or define a function.
When you want to define inheritance of a class class Myclass(MyBaseClass): ...
Sometimes tuples, though return a, b and return (a,b) are the same thing.
As in math, extra parentheses can be used to make the order of operations explicit, e.g. (4-2) == (4-2) to show the calculations are done before the comparison.
In Python's constructor function, 'self' points to the current object being created.
Your question:
Why are the additional attributes below (interior_angles and angle) not defined in the parentheses and then written out in a function?
The fact is that if you have a value ready to be set inside the newly created object, you will pass that value to the constructor inside the parentheses.
But if you don't have that value yet, and it needs to be calculated, you won't pass it inside the parentheses.
The core thing is understanding the 'self' keyword.
If you do self.<property_name> = ..., you are creating a new property (field) inside your object.
The __init__ method is the object constructor. It means that its arguments are the basic parameters that you need in order to initialize each object instance.
On the other side, "interior_angles" and "angle" are calculated attributes based on sides.
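A sketch tying both ideas together (the alpha default here is purely illustrative): values the caller must or may supply go in the parameter list; derived values are computed in the body:

```python
class Polygon:
    def __init__(self, sides, name, alpha=0.5):
        # Required inputs and overridable defaults go in the parameter list.
        self.sides = sides
        self.name = name
        self.alpha = alpha
        # Derived attributes are computed in the body, not passed in.
        self.interior_angles = (sides - 2) * 180
        self.angle = self.interior_angles / sides

p = Polygon(5, "pentagon")             # alpha falls back to its default
q = Polygon(3, "triangle", alpha=0.9)  # alpha explicitly overridden
```

Asking the caller to pass interior_angles would invite inconsistent objects (e.g. a pentagon with the wrong angle sum), which is exactly why derived values stay out of the parameter list.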
Thank you that makes sense. How about when attributes are defined within parentheses? Like in the DynaQ example?
| common-pile/stackexchange_filtered |
in `+': no implicit conversion of nil into String (TypeError)
I am installing VVV and got stuck while executing the "vagrant up" command. It is showing me the following error:
C:/Users/Admin/.vagrant.d/gems/2.6.6/gems/vagrant-hostsupdater-1.2.0/lib/vagrant-hostsupdater/HostsUpdater.rb:152:in `+': no implicit conversion of nil into String (TypeError)
When I opened the ruby file,
elsif Vagrant::Util::Platform.windows?
require 'tmpdir'
uuid = @machine.id || @machine.config.hostsupdater.id
tmpPath = File.join(Dir.tmpdir, 'hosts-' + uuid + '.cmd')
File.open(tmpPath, "w") do |tmpFile|
entries.each { |line| tmpFile.puts(">>\"#{@@hosts_path}\" echo #{line}") }
end
sudo(tmpPath)
File.delete(tmpPath)
"tmpPath = File.join(Dir.tmpdir, 'hosts-' + uuid + '.cmd')"
is the line which is giving me error.
I am trying to install VVV by following a YouTube video. I followed the exact steps given in the video, so how come the video's author isn't facing this error and I am?
Apparently, uuid is nil, which means that neither @machine.id nor @machine.config.hostsupdater.id is set.
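The failure can be reproduced without Vagrant: Ruby's String#+ refuses nil, while string interpolation silently converts it to an empty string (a sketch):

```ruby
uuid = nil  # what @machine.id resolves to in the failing case

begin
  path = 'hosts-' + uuid + '.cmd'   # raises TypeError, as in the plugin
rescue TypeError => e
  error_message = e.message
end

# Interpolation would not raise, but silently produces "hosts-.cmd",
# so the real fix is ensuring the machine id is set before this line runs.
interpolated = "hosts-#{uuid}.cmd"
```

This is why the plugin crashes on that exact line: the concatenation, not the file write, is what fails.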
This file gets added while doing "vagrant plugin install vagrant-hostsupdater". I do not know anything about Ruby. How should I fix this problem? @Stefan
Seems this issue has already been recognized Issue 190 and was resolved in master 6 days ago (https://github.com/agiledivider/vagrant-hostsupdater/commit/38eee8150cb659e69abebf4cc55eb2ad8e1bd4a5)
| common-pile/stackexchange_filtered |
$\mathbb{Z}[i]/(a+bi) \cong \mathbb{Z}_{a^2+b^2}$ if $(a,b)=1$
I hope to show that
$$\mathbb{Z}[i]/(a+bi) \cong \mathbb{Z}_{a^2+b^2}$$
for $(a,b)=1$.
I made an effort to find a homomorphism but I failed.
Can you give a hint?
Hint: $(a,b)=1$ if and only if there exist integers $m,n$ such that $an+bm=-1$.
Incidentally, the result should be that $$\Bbb Z[i]/(a+bi)\cong\Bbb Z_{a^2+b^2}.$$
Could you explain in more detail? I can't construct the homomorphism from $an+bm=-1$.
I don't have time to get into the details, I'm afraid, but specific instances of this sort of question can be found here, here, and here. Hopefully that's enough to give you the idea of what such a homomorphism should look like.
@Pearl: Let me know what you're able to do with those, and if you're still stuck, I'll try to explain it further when I have the time.
Thank you for the links. It is easy to solve by starting from a map $\mathbb{Z} \to \mathbb{Z}[i]/(a+bi)$. I had first tried a map $\mathbb{Z}[i] \to \mathbb{Z}_{a^2+b^2}$ ^0^
You're very welcome!
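For completeness, here is a sketch of the argument the hint points at (following the standard approach in the linked answers; worth double-checking the details):

```latex
% Define
\varphi : \mathbb{Z} \longrightarrow \mathbb{Z}[i]/(a+bi), \qquad n \longmapsto n + (a+bi).
% In the quotient a+bi \equiv 0, so bi \equiv -a; multiplying by i gives ai \equiv b.
% Surjectivity: choose m,n with am+bn=1 (possible since (a,b)=1); then
i \;=\; i(am+bn) \;=\; m(ai) + n(bi) \;\equiv\; mb - na \pmod{a+bi},
% so i, and hence every element of \mathbb{Z}[i], lies in the image of \varphi.
% Kernel: squaring bi \equiv -a gives -b^2 \equiv a^2, i.e. a^2+b^2 \equiv 0; since the
% quotient has exactly N(a+bi) = a^2+b^2 elements,
\ker\varphi = (a^2+b^2)\,\mathbb{Z}, \qquad \mathbb{Z}_{a^2+b^2} \cong \mathbb{Z}[i]/(a+bi).
```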
Spring and Thymeleaf: How to move javascript to a separate .js file
Example:
This works
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:th="http://www.thymeleaf.org">
<head lang="en">
<meta charset="UTF-8"/>
<title></title>
</head>
<body>
<button th:onclick="'javascript:sayHello(\'hello\')'">Say Hello</button>
</body>
<script>
function sayHello(text) {
alert(text);
}
</script>
</html>
But if I move the JS to the file hello.js in the same folder, the script does not work.
I tried embedding it like this:
<script type="text/javascript"<EMAIL_ADDRESS>
And like this:
<script type="text/javascript" src="hello.js"></script>
What am I doing wrong?
Could you check which error you get using the developer tools in your browser? Maybe the hello.js file is not being loaded; change the path to /hello.js if you have it in your root context
Patrick, slash doesn't help. Message says "ReferenceError: sayHello is not defined"
Assuming you are using the default configuration from Spring Boot, you should have your Thymeleaf file in src/main/resources/templates/, so you should put the JavaScript file in:
/src/main/resources/static
Explanation: your build tool (Maven or Gradle) will copy all the content from /src/main/resources/ in the application classpath and, as written in Spring Boot's docs, all the content from a directory called /static in the classpath will be served as static content.
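A sketch of what the working reference could look like (assuming Spring Boot defaults; the file name hello.js is from the question, and the Thymeleaf @{...} link syntax is standard, though a plain absolute src="/hello.js" also works once the file is under /static):

```html
<!-- hello.js placed in src/main/resources/static/ is served from the context root -->
<script type="text/javascript" th:src="@{/hello.js}"></script>
```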
This directory could also work, but it is discouraged:
/src/main/webapp/
Do not use the src/main/webapp directory if your application will be
packaged as a jar. Although this directory is a common standard, it
will only work with war packaging and it will be silently ignored by
most build tools if you generate a jar.
Indeed! Thanks a lot Andrea.
According to the Spring Boot documentation, static resources are served from /static, /public or /resources on the classpath.
http://docs.spring.io/spring-boot/docs/1.2.0.RELEASE/reference/htmlsingle/#boot-features-spring-mvc-static-content
There is also a brief mention of how to reload your static content and template files.
http://docs.spring.io/spring-boot/docs/1.2.0.RELEASE/reference/htmlsingle/#howto-reload-static-content
How can I prevent connman immediately reconfigures the network when I change the .config file?
I use connman for the configuration of the network.
I noticed that as soon as I change the entry IPv4= in /var/lib/connman/my.config, Linux immediately reconfigures the network to the new IP address. But I don't want that. My desired behaviour is that it should only reconfigure on boot-up of my embedded device. How do I do that?
Or is there a magic setting for connman, something like: DoNotImmediatelyReconfigure=yes?
Meanwhile I found out that connman does not have such a DoNotImmediatelyReconfigure setting. So I solved it by editing a copy of the config file (~/my.config) which I copy over to /var/lib/connman/my.config during Linux startup, before networking comes up. For that I created a systemd service which calls my script to replace the connman config file, and that service is ordered before the networking service.
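A minimal sketch of such a unit (all names and paths here are assumptions for illustration, not taken from the answer):

```ini
# /etc/systemd/system/restore-connman-config.service (hypothetical name)
[Unit]
Description=Restore connman config before networking comes up
Before=connman.service

[Service]
Type=oneshot
# Copy the edited config over the live one exactly once, at boot
ExecStart=/bin/cp /home/root/my.config /var/lib/connman/my.config

[Install]
WantedBy=multi-user.target
```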
On the image of a polynomial map modulo two distinct primes
Let $Q_0, Q_1, Q_2 \in \mathbb{Z}[x_0, x_1, x_2]$ be three non-singular ternary quadratic forms with integer coefficients. Let $T$ be a large real number, and let $p, q$ be two distinct primes having size $T^\alpha/\log T < p, q < T^\alpha$ for some $\alpha \geq 1$. Let $I(T) = [1, T^{1/2}) \cap \mathbb{Z}$, and let $I_p(T), I_q(T)$ respectively denote the reduction of $I(T)$ mod $p,q$.
Consider the polynomial map $\mathcal{F}(\mathbf{x}) = (Q_0(\mathbf{x}), Q_1(\mathbf{x}), Q_2(\mathbf{x}))$. Define the set
$\displaystyle W_{\mathcal{F},p,q}(T) = \{\mathbf{x} \in [1,T]^3 \cap \mathbb{Z}^3 : \mathcal{F}(\mathbf{x}) \pmod{p} \in I_p(T)^3, \mathcal{F}(\mathbf{x}) \pmod{q} \in I_q(T)^3 \}.$
In other words, $W_{\mathcal{F},p,q}(T)$ is the set of integer points in the box $[1,T]^3$ such that the triple $(Q_0(\mathbf{x}), Q_1(\mathbf{x}), Q_2(\mathbf{x}))$ satisfies $Q_i(\mathbf{x}) \pmod{p} \in I_p(T)$ and $Q_i(\mathbf{x}) \pmod{q} \in I_q(T)$ for $i = 0,1,2$.
Is there a way to control the size of $W_{\mathcal{F}, p, q}(T)$, especially for $\alpha > 1$?
The idea is that typically one would expect at least one of $Q_i(\mathbf{x})$ to be "large" mod $p,q$ (i.e., does not land in $I_p(T), I_q(T)$) when $\mathbf{x} \in [1,T]^3$, and for all three quadratic forms to be "small" for both moduli seems a very difficult condition to satisfy. Of course, this notion of "large/small" is virtually meaningless in the finite field setting.
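For what it's worth, a back-of-the-envelope heuristic (purely an independence assumption, not a claim about the actual distribution): if each of the six reductions behaved like a uniform residue, each condition would hold with probability about $|I(T)|/p \asymp T^{1/2-\alpha}$, giving

```latex
\#W_{\mathcal{F},p,q}(T) \;\approx\; T^{3}\,\Bigl(\tfrac{|I(T)|}{p}\Bigr)^{3}\Bigl(\tfrac{|I(T)|}{q}\Bigr)^{3}
\;\asymp\; T^{3}\cdot T^{6\left(\frac{1}{2}-\alpha\right)} \;=\; T^{6-6\alpha},
```

which is $o(1)$ once $\alpha>1$. This matches the stated intuition that the condition should rarely hold; the whole difficulty, of course, is justifying an equidistribution statement strong enough to replace the independence assumption.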
Can't boot, ACT LED always on
I have two cards: an old 4 GB and a new 32 GB.
New card:
I can successfully boot from this old card:
And can't from this card:
Both cards were written with Etcher. What could be wrong here?
Sometimes cards have defects that prevent them from booting, while appearing fine in your OS.
A common flaw in IO can be diagnosed by simply trying to image the new card: you might get an I/O error.
If not, compare the image of the SD card that won't boot with the one that boots. It helps to keep the partitions the same size; in such a case, comparing the first 4 GB would be a good test that your writes were successful.
But looking at the images, I can see that your new card has a partition of 2 GB and then 32 GB of free space, which is not consistent with a 32 GB card (it should add up to 32 GB, not 34 GB). So possibly it is a corrupted drive.
We have come across a number of fake drives reporting a wrong size, i.e. a 256 GB card being only 8 GB, but reporting 256 GB to the OS.
You can easily diagnose it testing the images, I suggest using some low-level utils such as dd or cat /dev/mmcblk0 so you can easily replicate the tests on different SD cards.
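The prefix-comparison idea can also be sketched in plain Python instead of dd (paths are placeholders):

```python
def prefixes_match(path_a, path_b, nbytes, chunk=1 << 20):
    """Return True if the first `nbytes` of both files are identical."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        remaining = nbytes
        while remaining > 0:
            n = min(chunk, remaining)
            a = fa.read(n)
            b = fb.read(n)
            if a != b:           # mismatch, or one file ended early
                return False
            if not a:            # both files hit EOF at the same point
                break
            remaining -= len(a)
    return True

# Usage sketch (device/image paths are hypothetical):
# prefixes_match("/dev/mmcblk0", "good-card.img", 4 * 1024**3)
```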
Addendum: You might want to ensure it's on the "good cards" list
Well, I can fill the card with 1 MB files up to 32 GB, and I can make an image of this card (32 GB) without errors. But it still can't boot. The seller promised to send me another SD card. As far as I can see, I can use this one only for storage. :(
You might want to ensure it's on the "good cards" list: https://elinux.org/RPi_SD_cards
You could be dealing with a corrupted flash of the image. First, erase the SD card using the program provided here, then attempt to flash the card again using Etcher.
If this doesn't work it's likely that it is dead and you will need a new one :(
Hope this helps :)
No, this doesn't help me. I formatted the card, wrote the image, validated the result, and it still can't boot.
How to write code to speed up a manual process
Can someone help me with some tips on writing code for my problem? Step by step I am capable of solving some problems, but I have no idea how to start writing loops and my own functions to speed up my work. Below you can see my latest problem, which is quite easy for me to finish because the dataset is small; the problem is when I have a much bigger dataset (when I need to create 30 or more "diff" columns).
Thank you for your time :)
id=c(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0)
distance=c(43.59304, 152.66197, 208.00968, 272.92662, 380.79429,
469.62269, 556.72725,734.29125, 796.42570, 873.09448, 1040.64550)
c=data.frame(id,distance)
c
c[paste("diff",1:3, sep="")]=NA
c$diff1=abs(c[c$id>0,][1,2]-c$distance)
c$diff2=abs(c[c$id>0,][2,2]-c$distance)
c$diff3=abs(c[c$id>0,][3,2]-c$distance)
c
c$min=apply(c[,c(3:5)], 1,FUN=min)
edit:
Basically, I want to calculate the shortest distance from each point with ID=0 to the nearest point with ID=1.
The "distance" column is the distance along the line for each point.
Picture for better understanding
You should explain your specific problem, not expect us to decode it, and never ever name a variable c ;)
To add to @Moody_Mudskipper's comment: it's more likely that we will be able to help you if you provide a complete minimal reproducible example to go along with your question, something we can work from and use to show you how it might be possible to answer your question. I would also recommend taking a look at how to ask a good question.
That part in the middle could be done by using outer, which applies a function (your function, i.e. the absolute difference) to every pair of elements from two input vectors.
I therefore switch notation, because @Moody_Mudskipper made a good point: never name a variable c.
It is, by the way, often better to refer to columns explicitly by their names instead of their column index, just in case your columns change positions.
df <- structure(list(id = c(0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0),
distance = c(43.59304, 152.66197, 208.00968,
272.92662, 380.79429, 469.62269,
556.72725, 734.29125, 796.4257,
873.09448, 1040.6455)),
.Names = c("id", "distance"),
row.names = c(NA, -11L),
class = "data.frame")
differences <-
outer(Y = df[df[["id"]] > 0, "distance"], X = df[, "distance"], FUN = function(x, y){
abs(x - y)
})
differences <- as.data.frame(differences)
names(differences) <- paste0("diff", seq_len(ncol(differences)))
differences[["min"]] <- apply(differences, 1, min)
cbind(df, differences)
As you can see, this also works for more data, since it is flexible. If your needs are completely different, be more specific and share more information.
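For reference, the same pairwise-differences-then-row-minimum idea, sketched in plain Python (data copied from the question; the name refs is mine):

```python
ids = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
distance = [43.59304, 152.66197, 208.00968, 272.92662, 380.79429,
            469.62269, 556.72725, 734.29125, 796.42570, 873.09448, 1040.64550]

# Reference points: those flagged with id == 1.
refs = [d for i, d in zip(ids, distance) if i == 1]

# The "outer" step: |d - r| for every (point, reference) pair...
diffs = [[abs(d - r) for r in refs] for d in distance]
# ...then the row-wise minimum, i.e. each point's distance to the nearest id==1 point.
mins = [min(row) for row in diffs]

print(mins[0])  # ~164.41664, first point's nearest id==1 neighbour
```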
Collection View reordering cells upon return to view
I currently have a collection view set up to display a dynamic number of objects. Each cell displays an image from the corresponding object. When a cell is tapped, it triggers a segue to the next view in the hierarchy, and the cell's corresponding object is passed to that view. However, I am noticing that when I return to the view with the collection, the ordering of cells has changed, and now, when I tap a cell to go to the next view, the properties it shows come from other cells' objects.
Below are my methods of the UICollectionView:
func numberOfSectionsInCollectionView(collectionView: UICollectionView) -> Int {
return 1
}
func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return objectsCount
}
func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCellWithReuseIdentifier("Chow Object Reuse ID", forIndexPath: indexPath) as UICollectionViewCell
cell.backgroundColor = UIColor.whiteColor()
var imageView:UIImageView = UIImageView()
imageView.frame = CGRect(x: 0, y: 0, width: 90, height: 90)
//Since loading of images is a time-intensive task, all the thumbnails
//may have not been fetched yet.
if (imageThumbsArray.count == objectsCount) { //Eventually, a more elegant fix will be needed.
imageView.image = imageThumbsArray[indexPath.row]
}
cell.addSubview(imageView)
return cell
}
func collectionView(collectionView: UICollectionView, didSelectItemAtIndexPath indexPath: NSIndexPath) {
self.objectToBePassed = self.chowObjectsArray[indexPath.row] as PFObject
self.performSegueWithIdentifier("Show Chow Details", sender: self)
}
I also call self.PastObjectsCollection.reloadData()
Why is the reordering and mixing up of cells happening?
Thanks,
Siddharth
I don't know about the reordering, but you have another problem. Each time cellForItem is called you will be adding new image views to the cells. You either need to remove them first, before you add them, check if they exist and add them only if they don't, or, what I think is the best solution, add the image views in the init method for a custom cell (or make the cell in IB).
How exactly would I go about doing that? I'm very new to iOS programming.
Also, the number of objects is dynamic and I can't predict it, so I don't think I will be able to do that in IB.
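The reuse pitfall described in the answer can be simulated outside UIKit (a hedged Python analog, illustration only): a dequeued cell keeps its old subviews, so appending an image view on every configuration pass piles them up, while clearing first keeps exactly one:

```python
class Cell:
    def __init__(self):
        self.subviews = []

pool = [Cell()]  # the reuse pool hands back previously used cells

def dequeue_and_configure_buggy(image):
    cell = pool[0]               # reused cell, old subviews still attached
    cell.subviews.append(image)  # adds ANOTHER image view on every pass
    return cell

def dequeue_and_configure_fixed(image):
    cell = pool[0]
    cell.subviews.clear()        # reset the cell before configuring it
    cell.subviews.append(image)
    return cell

for img in ("a", "b", "c"):
    dequeue_and_configure_buggy(img)
print(len(pool[0].subviews))  # 3, stacked-up image views

for img in ("a", "b", "c"):
    dequeue_and_configure_fixed(img)
print(len(pool[0].subviews))  # 1
```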
ImportError: DLL load failed while importing _ctypes : The specified module could not be found
Error:
ImportError: DLL load failed while importing _ctypes : The specified module could not be found
Need: How to resolve this error? And launch jupyter notebook and use pip from the specific environment?
It works in another environment.
The error occurs while trying to launch Jupyter Notebook or use pip in a virtual environment.
I couldn't launch Jupyter in the environment "Myenvproject".
It launches in the base environment.
I tried to
conda uninstall pyzmq
In the environment.
And reinstalled Jupyter in the "Myenvproject" environment, but it still doesn't launch.
Jupyter doesn't launch
It turns out that when I check
pip --version
It then also shows the same error
Can't use pip
Windows 10
Python 3.10.0 (Anaconda, virtual environment)
Please post the error as text, not an image.
Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
@toyotaSupra I posted the error as text
@Community How to resolve the ImportError: DLL failed while importing _ctypes
What does dir "C:\Users\shing\anaconda3\DLLs\_ctypes.pyd" output (in a console)? Also, is the other environment also Python 3.10.0?
I just created a fresh environment and it works.
@CristiFati The console output is in the first image, "Jupyter doesn't launch". The other environment is Python 3.9. Jupyter was working fine, but the day before this issue I updated a few modules and installed Selenium, and from the next day Jupyter wouldn't launch.
What's the command list you're running and where from?
1. Intro
I noticed this question (or very similar / duplicate ones) several times lately, but none of them provides a clear path to consistently run into the issue (environment creation, commands run in order to reach this state). Also, considering how frequently this question is viewed, one can only conclude this is a fairly common problem.
Posting some guidelines to prevent users from running into it, and also some investigation techniques meant to let users know how to get out of it once hit.
Notes:
I wasn't able to run into this - well unless purposely messing up the environment
Error is about _ctypes.pyd, but the principle applies to any other .pyd (.dll)
Although I'm conducting my investigation on Win, concepts also apply to Nix
I am using (at answer date, upgraded constantly from older versions) Anaconda:
Navigator 2.4.2
Conda 23.5.2
Python 3.9.17 (base environment)
2. CTypes considerations
[Python.Docs]: ctypes - A foreign function library for Python is a library (initially started by Thomas Heller, then adopted by Python's standard library a long long time ago), designed to interact with (mainly, call functions from) .dlls (.sos) written in other (lower level) languages.
There are tons of examples on the internet (SO included), I'm going to exemplify a few (of mine):
[SO]: Python ctypes, char** and DLL (@CristiFati's answer)
[SO]: Access OpenSSL FIPS APIs from python ctypes not working (@CristiFati's answer)
[SO]: How to run a Fortran script with ctypes? (@CristiFati's answer)
[SO]: C function called from Python via ctypes returns incorrect value (@CristiFati's answer) - a common pitfall especially for the less experienced
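A minimal inline example in the same spirit (assuming a Unix-like system where the C math library can be located; on Windows you would load a different DLL, e.g. from the CRT):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; the "libm.so.6" fallback is a
# Linux-specific assumption for systems where find_library needs a compiler.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature explicitly; without restype, the float result
# would be (mis)interpreted as the default int return type.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(9.0))  # 3.0
```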
CTypes uses LibFFI ([GitHub]: libffi/libffi - A portable foreign-function interface library - written mostly in C and ASM) under the hood.
It consists of 2 parts:
The core: written in C (which uses LibFFI). Located at [GitHub]: python/cpython - (main) cpython/Modules/_ctypes. This does all the magic, and it's an extension module ([Python.Docs]: Extending Python with C or C++). It's a .dll (.so) file named _ctypes.pyd on Win (this is how I'm going to reference it), or _ctypes*.so on Nix (check [Python.PEPs]: PEP 3149 - ABI version tagged .so files for more naming convention details)
The Python wrappers. Located at [GitHub]: python/cpython - (main) cpython/Lib/ctypes. Provides a Python friendly access to the core
The way _ctypes.pyd depends on LibFFI varies very much on OS and Python distribution / version.
2.1. CPython
On some OSes (Win included) LibFFI code (at least the needed part) was duplicated in the Python codebase and built into _ctypes.pyd. Although it was easier at the beginning, it's not a good practice as it becomes very hard to maintain if one needs to keep the referred software up to date, and also licensing might become a problem. So, since Python 3.8 they stopped doing that ([GitHub]: python/cpython - bpo-45022: Pin current libffi build to fixed version in preparation for upcoming update, [GitHub]: python/cpython - bpo-45022: Update libffi to 3.4.2 in Windows build). Differences (check for libffi_msvc):
[GitHub]: python/cpython - (3.7) cpython/Modules/_ctypes
[GitHub]: python/cpython - (3.8) cpython/Modules/_ctypes
This change has some implications:
Build: LibFFI code (API) is required on the build machine and Python code needs to know where it is (similar to Linux world where libffi-dev (or equivalent) is required)
LibFFI is built in a stand alone .dll (libffi*.dll) that _ctypes.pyd depends on (and must be present at runtime (when the module is loaded)).
On Win, that .dll is shipped by the Python installer
Illustrating the differences on various OSes:
Win (check [SO]: How to build a DLL version of libjpeg 9b? (@CristiFati's answer) (at the end) for .dll dependency command details) - PS console:
[cfati@CFATI-5510-0:E:\Work\Dev\StackExchange\StackOverflow\q073458524]> sopr.bat
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[prompt]>
[prompt]> $_CTss = Get-ChildItem 'c:\Install\pc064\Python\Python' -Filter '_ctypes.pyd' -Recurse
[prompt]> foreach ($_CTs in $_CTss) {
>> echo $_CTs.FullName
>> Start-Process -Wait -FilePath 'c:\Install\pc064\Depends\DependencyWalkerPolitistTexan\Version\depends.exe' -ArgumentList '-c', '-ot:out.txt', $_CTs.FullName
>> Get-Content 'out.txt' | Select-String 'FFI.*\.DLL$'
>> }
C:\Install\pc064\Python\Python\02.07.18\DLLs\_ctypes.pyd
C:\Install\pc064\Python\Python\03.04.04\DLLs\_ctypes.pyd
C:\Install\pc064\Python\Python\03.05.04\DLLs\_ctypes.pyd
C:\Install\pc064\Python\Python\03.06.08\DLLs\_ctypes.pyd
C:\Install\pc064\Python\Python\03.07.09\DLLs\_ctypes.pyd
C:\Install\pc064\Python\Python\03.08\DLLs\_ctypes.pyd
[ 6] c:\install\pc064\python\python\03.08\dlls\LIBFFI-7.DLL
C:\Install\pc064\Python\Python\03.09\DLLs\_ctypes.pyd
[ 6] c:\install\pc064\python\python\03.09\dlls\LIBFFI-7.DLL
C:\Install\pc064\Python\Python\03.10\DLLs\_ctypes.pyd
[ 6] c:\install\pc064\python\python\03.10\dlls\LIBFFI-7.DLL
C:\Install\pc064\Python\Python\03.11\DLLs\_ctypes.pyd
[ 6] c:\install\pc064\python\python\03.11\dlls\LIBFFI-8.DLL
OSX:
[cristian.fati@cfati-16i2019-0:~/Work/Dev/StackExchange/StackOverflow/q073458524]> ~/sopr.sh
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
Running OSX
[064bit prompt]>
[064bit prompt]> uname -a
Darwin cfati-16i2019-0 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
[064bit prompt]>
[064bit prompt]> for g in 7 8 9 10 11; do _CTS=$(python3.${g} -c "import _ctypes;print(_ctypes.__file__)"); echo ${_CTS}; otool -L ${_CTS} | grep ffi; done
<EMAIL_ADDRESS><EMAIL_ADDRESS> /usr/lib/libffi.dylib (compatibility version 1.0.0, current version 30.0.0)
<EMAIL_ADDRESS> /usr/lib/libffi.dylib (compatibility version 1.0.0, current version 30.0.0)
<EMAIL_ADDRESS> /usr/lib/libffi.dylib (compatibility version 1.0.0, current version 30.0.0)
<EMAIL_ADDRESS> /usr/lib/libffi.dylib (compatibility version 1.0.0, current version 30.0.0)
Linux - Ubuntu:
(qaic-env) [cfati@cfati-5510-0:/mnt/e/Work/Dev/StackExchange/StackOverflow/q073458524]> ~/sopr.sh
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[064bit prompt]>
[064bit prompt]> uname -a
Linux cfati-5510-0 6.2.0-37-generic #38~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Nov 2 18:01:13 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
[064bit prompt]>
[064bit prompt]> for g in 5 6 7 8 9 10 11 12; do _CTS=$(python3.${g} -c "import _ctypes;print(_ctypes.__file__)"); echo ${_CTS}; ldd ${_CTS} | grep ffi; done
/usr/lib/python3.5/lib-dynload/_ctypes.cpython-35m-x86_64-linux-gnu.so
libffi.so.7 => /lib/x86_64-linux-gnu/libffi.so.7 (0x00007ff1fb7d1000)
/usr/lib/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so
libffi.so.7 => /lib/x86_64-linux-gnu/libffi.so.7 (0x00007f0bf4ec5000)
/usr/lib/python3.7/lib-dynload/_ctypes.cpython-37m-x86_64-linux-gnu.so
libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007fc2a3ae6000)
/usr/lib/python3.8/lib-dynload/_ctypes.cpython-38-x86_64-linux-gnu.so
libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007f32b6176000)
/usr/lib/python3.9/lib-dynload/_ctypes.cpython-39-x86_64-linux-gnu.so
libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007f40e590a000)
/usr/lib/python3.10/lib-dynload/_ctypes.cpython-310-x86_64-linux-gnu.so
libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007f20bcbc7000)
/usr/lib/python3.11/lib-dynload/_ctypes.cpython-311-x86_64-linux-gnu.so
libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007f01a5555000)
/usr/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so
libffi.so.8 => /lib/x86_64-linux-gnu/libffi.so.8 (0x00007f4a95e2d000)
Linux - CentOS (Docker):
[root@cfati-5510-0:/work/q073458524]> ~/sopr.sh
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[064bit prompt]>
[064bit prompt]> cat /etc/os-release | grep PRODUCT
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
[064bit prompt]>
[064bit prompt]> for g in 2.7 3.6; do _CTS=$(python${g} -c "import _ctypes;print(_ctypes.__file__)"); echo ${_CTS}; ldd ${_CTS} | grep ffi; done
/usr/lib64/python2.7/lib-dynload/_ctypes.so
libffi.so.6 => /lib64/libffi.so.6 (0x00007f6529d71000)
/usr/lib64/python3.6/lib-dynload/_ctypes.cpython-36m-x86_64-linux-gnu.so
libffi.so.6 => /lib64/libffi.so.6 (0x00007f1085acc000)
Couple of words about Nix (Linux) where a similar error occurs.
Ubuntu:
Running the command ls /usr/lib/x86_64-linux-gnu/libffi*.so* | xargs dpkg -S will output 3 packages owning files of interest: LibFFI 8, LibFFI 7, LibFFI-Dev.
Depending on Python version, the appropriate package should be installed (3rd one isn't required, but adding it just in case):
apt install libffi8 libffi7 libffi-dev
CentOS:
Following the same steps (different tools) yields:
yum install libffi libffi-devel
More details can be found at [SO]: No module named '_ctypes'.
2.2. Anaconda
Naturally, Anaconda's distribution of Python follows CPython, but took things a bit further.
Note: In the following (Win) snippets (command outputs), due to space reasons, the Anaconda's installation base path (f:\Install\pc064\Anaconda\Anaconda\Version) will be replaced by a placeholder: ${ANACONDA_INSTALL_PATH} (Nix style env var).
(base) [cfati@CFATI-5510-0:e:\Work\Dev\StackExchange\StackOverflow\q073458524]> :: ------- Anaconda Prompt -------
(base) [cfati@CFATI-5510-0:e:\Work\Dev\StackExchange\StackOverflow\q073458524]> cd /d f:\Install\pc064\Anaconda\Anaconda\Version
(base) [cfati@CFATI-5510-0:f:\Install\pc064\Anaconda\Anaconda\Version]> sopr.bat
### Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ###
[prompt]> :: Reactivate base to have active environment in prompt
[prompt]> conda deactivate & conda activate base
(base) [prompt]> :: ------- Anaconda Prompt (still) -------
(base) [prompt]> :: ------- ${ANACONDA_INSTALL_PATH} is a placeholder for f:\Install\pc064\Anaconda\Anaconda\Version -------
(base) [prompt]> conda env list
# conda environments:
#
F:\Install\pc032\Intel\OneAPI\Version\intelpython\python3.7
F:\Install\pc032\Intel\OneAPI\Version\intelpython\python3.7\envs\2021.1.1
base * ${ANACONDA_INSTALL_PATH}
py_pc032_030602_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc032_030602_00
py_pc064_030610_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030610_00
py_pc064_030704_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030704_00
py_pc064_030716_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030716_00
py_pc064_030800_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030800_00
py_pc064_030808_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030808_00
py_pc064_030817_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030817_00
py_pc064_030900_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030900_00
py_pc064_030917_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030917_00
py_pc064_030917_01 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_030917_01
py_pc064_031000_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_031000_00
py_pc064_031006_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_031006_00
py_pc064_031012_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_00
py_pc064_031012_01 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_01
py_pc064_031104_00 ${ANACONDA_INSTALL_PATH}\envs\py_pc064_031104_00
(base) [prompt]>
(base) [prompt]> :: Search environments for _ctypes.pyd
(base) [prompt]> dir /B /S "envs\*_ctypes.pyd"
${ANACONDA_INSTALL_PATH}\envs\py_pc032_030602_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030610_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030610_00\DLLs\instrumented\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030704_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030704_00\DLLs\instrumented\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030716_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030800_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030800_00\DLLs\instrumented\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030808_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030817_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030900_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030917_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030917_01\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031000_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031006_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_00\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_01\DLLs\_ctypes.pyd
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031104_00\DLLs\_ctypes.pyd
(base) [prompt]>
(base) [prompt]> :: Search environments for the FFI dll
(base) [prompt]> dir /B /S "envs\*ffi*.dll"
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030800_00\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030808_00\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030817_00\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030817_00\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030817_00\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030817_00\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030900_00\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030917_00\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_030917_01\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031000_00\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031000_00\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031000_00\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031006_00\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031006_00\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031006_00\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_00\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_00\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_00\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_01\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_01\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031012_01\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031104_00\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031104_00\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\envs\py_pc064_031104_00\Library\bin\ffi.dll
As seen, there are 2 types of .dlls:
libffi*.dll (located in the same directory as _ctypes.pyd)
ffi*.dll (located in ..\Library\bin relative to _ctypes.pyd)
I discovered empirically what I'm stating here (I don't have an official source - at least not yet).
#1. comes from CPython (also, older versions don't have the libffi*.dll), and are distributed by the Python package.
#2. comes from LibFFI package (was split out of Python - it now offers a greater granularity, separate upgrades, ...). There are 3 different .dlls (ffi.dll, ffi-7.dll, ffi-8.dll), but they are just copies (I don't know why they weren't SymLinked). This seems to be the newer approach, preferred by newer Python versions (I think starting with v3.10). Worth mentioning v3.8.17, which seems to contain both forms.
Scanning inside packages, the paths match (once the leading environment part is dropped):
(base) [prompt]> :: ------- Anaconda Prompt (still) -------
(base) [prompt]> :: ------- ${ANACONDA_INSTALL_PATH} is a placeholder for f:\Install\pc064\Anaconda\Anaconda\Version -------
(base) [prompt]> dir /B /S "pkgs\*ffi*.dll"
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.2-hd77b12b_4\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.2-hd77b12b_4\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.2-hd77b12b_4\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.2-hd77b12b_6\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.2-hd77b12b_6\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.2-hd77b12b_6\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.4-hd77b12b_0\Library\bin\ffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.4-hd77b12b_0\Library\bin\ffi-8.dll
${ANACONDA_INSTALL_PATH}\pkgs\libffi-3.4.4-hd77b12b_0\Library\bin\ffi.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.8.0-hff0d562_2\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.8.12-h6244533_0\DLLs\libffi-8.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.8.13-h6244533_0\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.8.17-h1aa4202_0\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.8.5-h5fd99cc_1\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.8.8-hdbf39b2_5\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.9.0-h6244533_2\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.9.12-h6244533_0\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.9.16-h6244533_2\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.9.17-h1aa4202_0\DLLs\libffi-7.dll
${ANACONDA_INSTALL_PATH}\pkgs\python-3.9.17-h6244533_0\DLLs\libffi-7.dll
3. The error
Although the error mentions _ctypes.pyd, it's not necessarily that .dll that is problematic (and often it isn't), but one of its dependencies (or its dependencies' dependencies, ...). Even if it's about a different error, [SO]: Python Ctypes - loading dll throws OSError: [WinError 193] %1 is not a valid Win32 application (@CristiFati's answer) is a thorough investigation, and it's the same principle (this error is mentioned somewhere at the end).
While we're here, it probably also worth reading:
[SO]: Can't import dll module in Python (@CristiFati's answer)
[SO]: DLL load failed error with Python while trying to convert .dem files to .grid files (@CristiFati's answer)
[SO]: Load a DLL with dependencies in Python (@CristiFati's answer)
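A quick first sanity check along these lines is to ask the active interpreter which _ctypes binary it resolves, and whether it imports at all (a sketch):

```python
import importlib.util

# Resolve the module without importing it, to see which file would be loaded.
spec = importlib.util.find_spec("_ctypes")
print(spec.origin)  # full path of _ctypes.pyd / _ctypes*.so in the ACTIVE environment

try:
    import _ctypes  # noqa: F401  -- raises ImportError if a dependency is missing
    print("OK: _ctypes imports cleanly")
except ImportError as exc:
    print("Broken:", exc)
```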
3.1. Avoiding it
There are a bunch of generic (common sense) rules that should keep anyone following them out of trouble. Needless to say that if there's a bug in one of the packages (out of user's control), the error might still pop up:
Be aware of the Python instance being used (active): [SO]: How to install a package for a specific Python version on Windows 10? (@CristiFati's answer)
Work from Conda prompt
Only use Conda managed environments: [Conda.Docs]: Managing environments. Technically things could work if using other types (VEnv, plain CPython, ...), but there's pretty high chance of running into trouble (and if you're reading this, you probably shouldn't)
Activate an environment and only use that one. Avoid mixing them ([SO]: PyCharm doesn't recognize installed module (@CristiFati's answer)). Activation sets some env vars specific to that environment:
(base) [prompt]> :: ------- Anaconda Prompt (still) -------
(base) [prompt]> :: --- PATH in active (base) environment ---
(base) [prompt]> echo off & (for %g in ("%PATH:;=" "%") do (echo %~g)) & echo on
f:\Install\pc064\Anaconda\Anaconda\Version
f:\Install\pc064\Anaconda\Anaconda\Version\Library\mingw-w64\bin
f:\Install\pc064\Anaconda\Anaconda\Version\Library\usr\bin
f:\Install\pc064\Anaconda\Anaconda\Version\Library\bin
f:\Install\pc064\Anaconda\Anaconda\Version\Scripts
f:\Install\pc064\Anaconda\Anaconda\Version\bin
f:\Install\pc064\Anaconda\Anaconda\Version\condabin
C:\WINDOWS\System32\WindowsPowerShell\v1.0
C:\WINDOWS\System32
C:\WINDOWS
C:\WINDOWS\System32\Wbem
C:\Install\pc064\Docker\Docker\Version\Docker\resources\bin
C:\Program Files\dotnet
e:\Work\Dev\Utils\current\Win
e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts
C:\Users\cfati\.dotnet\tools
.
(base) [prompt]>
(base) [prompt]> :: --- Activate Python 3.10.12 environment ---
(base) [prompt]> conda activate py_pc064_031012_00
(py_pc064_031012_00) [prompt]> :: --- PATH in active (py_pc064_031012_00) environment ---
(py_pc064_031012_00) [prompt]> echo off & (for %g in ("%PATH:;=" "%") do (echo %~g)) & echo on
f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_031012_00
f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_031012_00\Library\mingw-w64\bin
f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_031012_00\Library\usr\bin
f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_031012_00\Library\bin
f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_031012_00\Scripts
f:\Install\pc064\Anaconda\Anaconda\Version\envs\py_pc064_031012_00\bin
f:\Install\pc064\Anaconda\Anaconda\Version\condabin
C:\WINDOWS\System32\WindowsPowerShell\v1.0
C:\WINDOWS\System32
C:\WINDOWS
C:\WINDOWS\System32\Wbem
C:\Install\pc064\Docker\Docker\Version\Docker\resources\bin
C:\Program Files\dotnet
e:\Work\Dev\Utils\current\Win
e:\Work\Dev\VEnvs\py_pc064_03.10_test0\Scripts
C:\Users\cfati\.dotnet\tools
.
It's easy to see that having variables set for one environment, but using another, is a recipe for disaster
Prefer Conda packages (and pay attention to the channels they come from) over PIP. Although the latter will most likely work, they are mainly tested for CPython, and might miss some Anaconda specifics
3.2. Troubleshooting
There are ways of getting out of the error. [SO]: Discover missing module using command-line ("DLL load failed" error) (@CristiFati's answer) contains a number of tools able to help pinpointing a missing / bad dependency. In our case:
(py_pc064_031012_00) [prompt]> :: ------- Anaconda Prompt (still) -------
(py_pc064_031012_00) [prompt]> "f:\Install\pc064\LucasG\DependencyWalkerPolitistTexan\Version\DependenciesGui.exe" "envs\%CONDA_DEFAULT_ENV%\DLLs\_ctypes.pyd"
(py_pc064_031012_00) [prompt]> conda deactivate & conda activate py_pc064_030817_00
(py_pc064_030817_00) [prompt]> "f:\Install\pc064\LucasG\DependencyWalkerPolitistTexan\Version\DependenciesGui.exe" "envs\%CONDA_DEFAULT_ENV%\DLLs\_ctypes.pyd"
Python 3.10.12 (containing and using LibFFI .dlls from LibFFI package (newer)):
Python 3.8.17 (1) (containing LibFFI .dlls from both packages, but using the ones from Python (older)):
As seen, the dependency tree (the custom dependencies at least) is simple. In this case there's a 99% chance that ffi.dll is missing. That can be fixed by executing (in the environment):
conda install libffi
Manually taking care of things (copying the files) is also an option, but it might come back to bite you at a later time.
Worth paying attention to the PATH env var, as it might contain directories with the same .dll names (before the "good" ones). Check [SO]: Can't get FontForge to import as a module in a custom Python script (@CristiFati's answer) (Update #0 section) for a similar problem.
In some cases the error might be triggered by some other factors (disk failure, manual deletion, faulty packages, ...), and in those cases recreating the environment could solve it.
Some other times, upgrading the environment or Conda ([Anaconda.Docs]: Updating from older versions) might help.
If the error is stubborn and just won't go away, reinstalling Anaconda altogether might be required, but only use that after lots of considerations and as a last resort.
4. Summary
If reaching this situation, here's what to do in order to get out of it:
Search _ctypes.pyd dependencies and if missing, install (inside the environment) the package that owns them (typically):
conda install libffi
If _ctypes.pyd is missing, reinstall its owning package:
conda install python
One could also copy (unpack) files from the right package, into the right (environment) paths, but that's for experienced users only
Check PATH env var, as it might contain paths with files (.dlls) interfering with the "right" ones
Upgrade / recreate the environment
Upgrade Conda
Reinstall Anaconda (!!! will lose all environments !!!)
If it still doesn't work, there's a deeper problem (most likely with Anaconda installation), and one could try:
Uninstall
Manual cleanup (files on disk, registry (Win), ...)
Install
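As a quick first check, before reaching for one of the dependency-walker tools mentioned above, a small Python snippet can show which compiled _ctypes binary the interpreter actually resolves, and which PATH directories carry ffi libraries. This is only a sketch: the "ffi" substring match is a heuristic, and on Windows the relevant files are named like ffi-7.dll / ffi-8.dll / libffi-8.dll.

```python
import importlib.util
import os
import sys

# Which compiled _ctypes extension does this interpreter resolve?
spec = importlib.util.find_spec("_ctypes")
print("interpreter:", sys.executable)
print("_ctypes at :", spec.origin if spec else "NOT FOUND")

# Which PATH entries contain an ffi library? On Windows, dependent .dlls
# are searched via PATH, so a stale copy in an earlier directory can
# shadow the correct one.
for entry in os.environ.get("PATH", "").split(os.pathsep):
    try:
        hits = [f for f in os.listdir(entry) if "ffi" in f.lower()]
    except OSError:
        continue  # dead or inaccessible PATH entry
    if hits:
        print(entry, "->", hits)
```

Run it from the activated environment; a stale ffi copy in a directory listed before the environment's own Library\bin is a prime suspect.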
I had similar issue too.
Copying libffi-7.dll from \Python310\DLLs to my env folder fixed it for me.
I just uninstalled the environment and reinstalled Anaconda, now it works properly.
I had this misleading error aka "import" issue also.
I guess it came from installing (slightly diff. versions of packages due to) using several diff. conda channels. so it is best to stick with 1 channel only.
try:
conda activate the env, even if cmd prm says it is activated
conda update conda (base)
conda install / update package (specific env)
When a package is installed via pip in a conda env and then updated via conda, it results in the same error message shown. The fix: pip install package_name --upgrade
So I had this problem in vcpkg after installing Anaconda:
G:\vcpkg_common\downloads\tools\python>G:\vcpkg_common\downloads\tools\python\python-3.11.5-x64\python.exe
Python 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _ctypes
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: DLL load failed while importing _ctypes: Das angegebene Modul wurde nicht gefunden. (The specified module could not be found.)
I deliberately changed CWD since if I am in the same directory as the python I want to use everything works as expected.
Supplying the -E option to python makes it work:
G:\vcpkg_common\downloads\tools\python>G:\vcpkg_common\downloads\tools\python\python-3.11.5-x64\python.exe -E
Python 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import _ctypes
>>> exit()
However the -E option shouldn't do anything since there is no relevant environment variable being set:
G:\vcpkg_common\downloads\tools\python>set | findstr PY
G:\vcpkg_common\downloads\tools\python>
Looking at sys.path however reveals:
>>> sys.path
['', 'G:\\vcpkg_common\\downloads\\tools\\python\\python-3.11.5-x64\\python311.zip', 'C:\\Python\\anaconda3\\Lib', 'C:\\Python\\anaconda3\\DLLs', 'G:\\vcpkg_common\\downloads\\tools\\python\\python-3.11.5-x64', 'G:\\vcpkg_common\\downloads\\tools\\python\\python-3.11.5-x64\\Lib', 'C:\\Users\\neumann\\AppData\\Roaming\\Python\\Python311\\site-packages']
in the error case and
>>> sys.path
['', 'G:\\vcpkg_common\\downloads\\tools\\python\\python-3.11.5-x64\\python311.zip', 'G:\\vcpkg_common\\downloads\\tools\\python\\python-3.11.5-x64', 'G:\\vcpkg_common\\downloads\\tools\\python\\python-3.11.5-x64\\Lib', 'C:\\Users\\neumann\\AppData\\Roaming\\Python\\Python311\\site-packages']
in the working case. So the error case tries to load the DLLs from the Anaconda location, since its paths come earlier than the built-in paths, leading to the observed error.
After searching through the registry I found:
HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\3.11\PythonPath
which seems to be responsible for breaking parallel installs.
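A quick heuristic for spotting this kind of cross-installation leakage, whatever its source (PYTHONPATH, a registry PythonPath key, .pth files), is to list the sys.path entries that live outside the running interpreter's own prefix. A sketch; note that user site-packages directories will legitimately show up too:

```python
import os
import sys

# sys.path entries outside the running interpreter's own prefix are the
# candidates injected from elsewhere (PYTHONPATH, registry keys, .pth files).
own_prefix = os.path.normcase(sys.prefix)
suspects = [p for p in sys.path
            if p and not os.path.normcase(os.path.abspath(p)).startswith(own_prefix)]

print("interpreter prefix:", own_prefix)
for p in suspects:
    print("outside prefix   :", p)
```

In the broken case above, this list would contain the C:\Python\anaconda3\Lib and C:\Python\anaconda3\DLLs entries.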
Hello everyone. In my case, this helped to solve the problem:
Activate your env
conda install libffi
After that, PyInstaller built my exe file without error.
You may also need to downgrade the PyInstaller version:
pip uninstall pyinstaller
pip install pyinstaller==5.11
or use this link:
pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip
My problem is a bit off-topic because I got the issue in eclipse. As I did not find a discussion on this topic for eclipse here in SO, I came across this discussion and want to add my findings. Maybe it could help others too.
The answer from @Artin helped me, somehow. Python was updated and Eclipse/PyDev collected new path information. After the collecting process, Eclipse could not access libffi-8.dll anymore (I got the same error as the OP) because the old path to the DLLs did not contain LIBFFI but came first in the library paths of the PyDev configuration. So re-arranging the paths (newer first) solved the problem. If you don't need the stuff in the older paths, they can be deleted completely from the PyDev configuration.
Is there a way to trace objects to get this old fashion 3D effect?
Just like the question says.
Here's the video. It's from a recent Progressive commercial.
Is there a way to do this, preferably in Illustrator, without drawing the outline by hand?
I think of this style as retro, not old fashioned. Wikipedia describes Retro style as “a style that is consciously derivative or imitative of trends, modes, fashions, or attitudes of the recent past." Old fashioned has a negative connotation. Or is it my age talking?
I guess retro fits the shoe as well :)
Here's how I'd go about this:
Create a text layer with your text, convert it to outline. Set it at position (0,0) (x: 0px, y: 0px).
Copy-paste the outlined text and set it to position (1,1).
Paste again and set to (2,2)
Select pasted item one and two, copy paste again, set to (3,3).
Select pasted items one through four, copy paste, set to (5,5).
Repeat until you've got about 15 copies.
Unite all copies (NOT the original) via Pathfinder. Set the outline to your color, set the fill to blank.
Optional: Delete any inner paths you don't want.
It took me about two minutes to make. You can play around with the position shift if you're not totally happy with the result (eg shift half a pixel right and a full pixel down). Some fonts are better for this than others.
Disclaimer: As stated in the comments, there are more efficient ways to go about this. This is the most simple way though, so if you're a beginner, try to understand what is happening in this answer. Then try to understand how Vincent's, Cai's and jooja's answers make it more efficient. If you're a pro you can skip immediately to using transforms or scripting.
Why the -1? I hate when people downvote and don't explain. If you've got a better way, make it into an answer.
It saves a few clicks and keyboard strokes to do the transformation with Object > Transform > Move... with Copy and then repeat that transformation (Ctrl/Cmd + D) as many times as needed.
That is true. It'll save you about thirty seconds...
Or you could use Object -> Blend. Then you can change the blend steps etc, without having to re-copy the objects.
The blend makes stair-stepped edges. You can use the extrude tool or something like http://63mutants.com/subc/products/try.php, but personally I just use a script for this.
All very true, but I guess my answer is the easiest to understand for an Illustrator noob. And since the OP never stated her/his level of expertise, I just assumed (s)he is a beginner.
Very pragmatic answer, but I hate the stairsteps! You have to go through and delete them all, which can take ages, and is a real pain...
you can simplify the path later on to get rid of those stairsteps, I believe it's under Object > Path > Simplify
possible in Illustrator, but as it's essentially an effect emulating raster imagery (video tape) it'd be a lot easier and more practical to do this with Photoshop...especially if you use a filter designed to do exactly this.
Google the following terms along with 'filter' or 'photoshop tutorial' to find plenty of resources:
vhs
channel offset
video tape
The simplest thing I can think of that you may want to try in Photoshop would be:
open your image
open each RGB channel and nudge them a few pixels in different directions (to create the color offset effect)
save out as a highly compressed JPG a few times (to get the well-worn VHS look)
It's not completely clear from the question but I think he is talking about the outlined faux 3D extrude on the type.
@CaiMorris which is caused by the RGB channel information 'bleeding' and becoming offset on well worn video tape.
Oh, wait a minute! I see what you are referring to! Yes, you may be right! If so, completely disregard this answer. :)
I think Cai is right, but this is nice knowledge nonetheless. Would you mind making that into a question and self-answering it with this?
What is "cliptab" in OpenCV's SGBM algorithm?
I am currently trying to understand how OpenCV's SGBM disparity algorithm works, I know the pixel cost calculations follows Birchfield and Tomasi algorithm.
http://robotics.stanford.edu/~birch/publications/dissimilarity_pami1998.pdf
I cannot seem to figure out what is clipTab[TAB_SIZE] and why is it filled this way.
int ftzero = std::max(params.preFilterCap, 15) | 1;
PixType clipTab[TAB_SIZE];
for( k = 0; k < TAB_SIZE; k++ )
clipTab[k] = (PixType)(std::min(std::max(k - TAB_OFS, -ftzero), ftzero) + ftzero);
The full code can be found using this link:
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/stereosgbm.cpp
It is clip tab for a clip operator after sobel filter.
It will be used during calcPixelCostBT. In my opinion, the only useful values of the tab are at indices from (TAB_OFS - ftzero) to (TAB_OFS + ftzero), as you may have noticed from the "tab += tabOfs;" in calcPixelCostBT. The value of the tab over this range is in [0, 2 * ftzero], which you can get from the clip rule.
the clip rule:
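To see the rule concretely, the table can be rebuilt in Python. The TAB_OFS and TAB_SIZE values below are assumed to mirror the constants in stereosgbm.cpp (TAB_OFS = 256*4, TAB_SIZE = 256 + TAB_OFS*2); check the exact version of the source you are reading.

```python
# Rebuild OpenCV's clip table in Python to see its shape.
# TAB_OFS / TAB_SIZE are assumed to match the constants in stereosgbm.cpp.
TAB_OFS = 256 * 4
TAB_SIZE = 256 + TAB_OFS * 2

pre_filter_cap = 63                    # a typical params.preFilterCap value
ftzero = max(pre_filter_cap, 15) | 1   # "| 1" forces an odd value

clip_tab = [min(max(k - TAB_OFS, -ftzero), ftzero) + ftzero
            for k in range(TAB_SIZE)]

# Every entry is clamped into [0, 2*ftzero]; values are saturated outside
# the index window [TAB_OFS - ftzero, TAB_OFS + ftzero].
assert min(clip_tab) == 0 and max(clip_tab) == 2 * ftzero
assert clip_tab[TAB_OFS] == ftzero     # a zero Sobel response maps to ftzero
```

So after "tab += tabOfs;", the table maps a signed Sobel response x to min(max(x, -ftzero), ftzero) + ftzero: it clips to [-ftzero, ftzero] and shifts into [0, 2*ftzero] so the result fits an unsigned pixel type.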
No valid coverage data available in Rails stats report
No valid coverage data available

Tracking and trending code coverage works best when like is compared with like. In this regard it is best to only track builds when all unit tests are passing.

This plugin will not report code coverage until there is at least one stable build.
What should I do to get a Rails stats report?
"works best when like is compared with like." I am not getting your question exactly can you please elaborate do you want code coverage then use gem metric_fu. If you want to track your status of rails application use gem resque
The thing is, I have configured my build with the Rails stats report. When I open the Rails stats report, it displays what I mentioned above; it displays the report for only one build, and the rest of the builds display the message above. Please let me know what I am supposed to do, and what you mean by "works best when like is compared with like".
"Tracking and trending code coverage works best when like is compared with like. In this regard it is best to only track builds when all unit tests are passing." from your post
can you please let me know more clearly
You configured your build through travis ?
no i didn't use travis, actually i am using jenkins for deploying
let us continue this discussion in chat
can you please use travis it is free I can help you and it is very efficient for any kind of projects even rails main repository use it
I don't know what problem you have exactly but this may help? http://timvoet.com/2011/05/19/jenkins-rails-code-coverage-a-gotcha/
@krs - From what you've said in your comments, you are correct.
rails stats only provides code coverage metrics for tests at that point in time. rails stats does not intend to keep data between builds and report test results vs. code converage.
You can connect your CI to an external service to see test coverage vs. test results. (EG, https://coveralls.io).
Please add coverage.xml file outside the project.It works fine for me.
Unblock an automated blocked user
I used the Login Security module to block users after some login failures. It works fine; however, I need to unblock such users after a certain time, like after 1 hour/day. How would I do that? The module has a configuration option to delete attempts from the database, but the user isn't unblocked and has to be activated manually.
Please see below attached screen shot of module configuration.
A small module named Auto Unblock Users can unblock users.
You can use the following modules for managing blocked users. They help you unblock users via the admin UI. The modules are flood_control and flood_unblock
Url's to download the modules are https://www.drupal.org/project/flood_control https://www.drupal.org/project/flood_unblock
Via the admin UI (it's in core) we can unblock users manually. As I asked in my question for automatic unblocking after some time, your suggestion didn't work for me.
Directly showing $\lim_{n\to\infty}n\sum_{k=0}^n \binom{n}{k}(-1)^k\zeta (k+2) =1$
In this question it is shown, up to a small detail or two, that
$$\lim_{n\to\infty}n\cdot \sum_{m=1}^{\infty}\Big(1-\frac{1}{m}\Big)^n\cdot \frac{1}{m^2}=1$$
The proof essentially involves Taylor series and reimagining the sum as a Riemann integral to show a lower bound and an upper bound each converge to $1$.
However, I believe the problem can be solved a different way. If we use the binomial theorem to expand $(1-1/m)^n$ and swap the resulting double sum, we have
$$
\lim_{n\to\infty}n\sum_{k=0}^n \binom{n}{k}(-1)^k\zeta (k+2)
$$Intuitively, this limit should converge because $\zeta(n+2)\to 1$ as $n\to\infty$, and then the series looks like $\sum_{0\le k\le n}\binom{n}{k}(-1)^k=0$, but the asymptotics evade me. Previously I was interested in so-called 'alternating binomial zeta series' (my own clunky descriptor of this object) and found that convergence was quite delicate and subtle. I tried using Stolz-Cesaro, the discrete version of L'Hopital's Rule, but was unsuccessful, though maybe it's possible through this or other means.
The problem is that the $\binom{n}{k}$ terms get bigger and bigger as $n\to\infty,$ so this seems harder, not easier, than the original series. You are essentially hiding the cancellation. But you could write it as $$n\sum_{k=0}^{n}\binom nk(-1)^{k}(\zeta(k+2)-1)$$ We have that $\sum_{k=0}^{\infty}(\zeta(k+2)-1)=1,$ so $\zeta(k+2)-1$ seems more useful here than $\zeta(k+2).$ It is a probability measure on $\mathbb N.$
Those are excellent points, Thomas! I will try to think about them and report back.
A proof of the asymptotics for
$$ H_{\nu}(x) := \sum_{k>\text{Re}\,\nu+1} (-1)^k\binom{x}{k} \zeta(k-\nu) $$
appears in the publication 'On Mellin-Barnes Type of Integrals and Sums Associated with the Riemann Zeta-Function,' by M. Katsurada, in Publications de l'Institut Mathématique, Nouvelle série, tome 62 (76), 1997, 13-25.
Theorem 4.1 states that, for $\nu$ not in the set {-1, 0, 1, 2, ...}
$$H_{\nu}(x) = \frac{\Gamma(1+x)\,\Gamma(-\nu-1)}{\Gamma(x-\nu)} - \sum_{k=0}^{[\text{Re}\,\nu+1]} (-1)^k\binom{x}{k} \zeta(k-\nu) + \cal{R}_\nu(x)$$
The proposer's question is for $\nu = -2.$ The empty sum contributes nothing,
and
$$H_{-2}(x)= \frac{1}{1+x} + \cal{R}_{-2}(x) $$
Corollary 4.1 bounds the error as exponentially small
$$\cal{R}_{-2}(x) = \cal{O}\big(x^{1/2} \exp{(-D\,x^{1/3})}\big) \, , D=4.5201... $$
The proof requires some sophistication in complex analysis and some knowledge of special function theory. I tried to solve this problem on my own, even using special function theory, and got to a place where I had to regularize some integrals. The first term (1/x) worked without regularization, and the rest of my terms all went to zero. That occurrence told me that it was likely that the error was exponentially small (that is, the asymptotic series won't contain $x^{-3/2}, x^{-2}, ...$), and Katsurada's paper proves it.
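For what it's worth, the $1/(1+x)$ leading term is easy to check numerically from the original (pre-expansion) series, which converges without cancellation. A rough sketch:

```python
def scaled_sum(n, terms=500_000):
    """n * sum_{m>=1} (1 - 1/m)**n / m**2, i.e. the series before the
    binomial expansion; the truncation error is bounded by about n/terms."""
    return n * sum((1 - 1 / m) ** n / m ** 2 for m in range(1, terms))

n = 100
val = scaled_sum(n)
print(val, n / (n + 1))  # the two should agree to about 1e-3
assert abs(val - n / (n + 1)) < 1e-3
```

For n = 100 the exponentially small remainder is far below the truncation error, so the agreement with n/(n+1) is already visible at this modest precision.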
I will check out the paper you suggested, thanks! All things being equal, I'd be fine awarding the bounty to this answer.
How would you find an exact expression for the error term? Showing that's exponential small is ofcourse a good start, but less satisfying. Could you please share the integral you found (and how) for the error term?
@Gerben The integral I found does not give an explicit exponentially small term, but only suggests it. It does show the 1/x asymptotic term. Trying to go further, and all the terms for $x^{-3/2}$, $x^{-2}$ equate to zero. That kind of analysis just doesn't quantify the exponentially small term. I also think it's somewhat interesting, and maybe it should be asked as another question. The paper I give a reference to does answer the OP's question, and with an error bound.
Not an answer, but an interesting formula.
Let $p_j=\zeta(j)-1$ for $j>1.$
Let: $$A_{n,i}=\sum_{k=0}^{n}(-1)^k\binom nk p_{k+i},$$ for $i>1,$ then we get $$\begin{align}A_{n+1,i}&=\sum_{k=0}^{n+1}(-1)^k\left(\binom{n}{k}+\binom{n}{k-1}
\right)p_{k+i}\\&=A_{n,i} -A_{n,i+1}\tag{1}\end{align}$$
So if $B_{n,i}=(n+i)A_{n,i},$ then $$B_{n+1,i}=\left(B_{n,i}-B_{n,i+1}\right) + A_{n,i}$$
Now you are trying to prove $B_{n,2}\to 1.$ (Technically, you want $\frac{n}{n+2}B_{n,2}\to 1,$ but that is the same thing.)
Substituting again, we get:
$$\begin{align}B_{n+2,i}&=\left(\left(B_{n,i}-B_{n,i+1}\right)+A_{n,i}\right)-\left((B_{n,i+1}-B_{n,i+2})+A_{n,i+1}\right)+A_{n+1,i}\\
&=\left(B_{n,i}-2B_{n,i+1}+B_{n,i+2}\right)+A_{n,i}+A_{n+1,i}-A_{n,i+1}\\
&=\left(B_{n,i}-2B_{n,i+1}+B_{n,i+2}\right)+2A_{n+1,i}
\end{align}$$ The last step by (1).
Inductively you'll get:
$$B_{n+m,i}=\sum_{j=0}^m(-1)^j \binom{m}{j}B_{n,i+j}+mA_{n+m-1,i}
$$
Using $B_{1,i}=A_{1,i}=p_{i}-p_{i+1},$ we get:
$$B_{n,2}=\sum_{j=0}^{n-1}(-1)^{j}\binom{n-1}{j}(p_{2+j}-p_{3+j}) +B_{n-1,2}$$
So: $$\begin{align}B_{n,2}&=\sum_{k=0}^{n-1} \sum_{j=0}^{k}(-1)^{j}\binom{k}{j}(p_{2+j}-p_{3+j})\\&=\sum_{j=0}^{n-1}(-1)^j(p_{2+j}-p_{3+j})\sum_{k=j}^{n-1}\binom{k}{j}\\&=\sum_{j=0}^{n-1}(-1)^j\binom{n}{j+1}(p_{2+j}-p_{3+j})\\
&=\sum_{j=0}^{n-1}(-1)^{j}\binom n{j+1}p_{2+j}+\sum_{j=1}^{n}(-1)^{j}\binom n{j}p_{2+j}\\
&=np_{2}+(-1)^np_{2+n}+\sum_{j=1}^{n-1}(-1)^{j}\left(\binom n{j+1}+\binom nj\right)p_{2+j}\\
&=np_{2}+(-1)^np_{2+n}+\sum_{j=1}^{n-1}(-1)^j\binom {n+1}{j+1}p_{2+j}\\
&=-p_{2}+\sum_{k=1}^{n+1}(-1)^{k-1}\binom{n+1}{k}p_{1+k}
\end{align}$$
(This section is broken.)
We can try generating functions:
If $$P_i(z)=\sum_{n=0}^{\infty}\frac{p_{n+i}}{n!}z^n$$
Writing:
$$\begin{align}P_i(z)&=\sum_{k=2}^{\infty}\sum_{n=0}^{\infty}\frac{1}{n!}\frac{z^n}{k^{n+i}}\\&=\sum_{k=2}^{\infty}\frac{e^{z/k}}{k^i}\end{align}$$
We also get: $$A_i(z)=\sum_{n=0}^{\infty} A_{n,i}z^n/n!=e^zP_i(-z).$$
From this we get $A_i'(z)=A_i(z)-A_{i+1}(z).$ That is equivalent to (1) above.
If $c_{n,i}=nA_{n,i}$ then
$$C_i(z)=\sum_{n=1}^{\infty}c_{n,i}z^{n}/n!=zA_i'(z)=ze^z(P_i(-z)-P_{i+1}(-z)).$$
Is there any way to merge CEF runtime resource files together?
I have used CefSharp for my application for a while. I noticed there are some commercial solutions that package all CEF runtime resources into a few dll files; some C++-based solutions can also do it.
So, I just want to know whether CefSharp can merge/package/combine all CEF runtime resources together. That way, developing an application would only require referencing the CefSharp dlls to run, which would make the work simpler and redistribution easier.
Find out how those other projects do it and use the same approach; CefSharp has no specific requirements of its own
C# Selenium Webdriver- How to hide openfiledialog?
I have a small application written in C#, with Selenium webdriver (Chrome). I use this app to fill some textboxes and select some pictures to upload on a webpage.
The website wasn't designed by me, so I don't have any possibility to redesign it.
There is a button on this website which needs to be clicked to select some pictures to upload. (in openfiledialog window)
In the program I find this button via XPath; the picture paths are generated in another part of the program. When the picture is selected, the program simulates hitting the Enter key so the upload begins.
The program works fine until it reaches the "picture upload" part. Until that point the app works in the background, out of focus, so I can do other things on the PC. But when the openfiledialog window appears, it steals the focus and makes a mess of my other projects.
Is there any option to hide (not show at all) the openfiledialog window?
This is the current "file select and hit enter" part of my code:
{
IWebElement btn_picture = driver.FindElement(By.XPath("//*[@class='class_name']"));
Thread.Sleep(rand3);
btn_picture.Click();
int numpic1 = Convert.ToInt32(goods2.numericUpDownpic1.Value + 1);
Random random2 = new Random();
string elsokep = random2.Next(1, numpic1).ToString();
string pth1 = goods2.textBox1pic.Text;
string path1 = (pth1 + elsokep + ".png");
int kepek = 1;
Thread.Sleep(rand3);
SendKeys.SendWait(path1);
Thread.Sleep(4000);
SendKeys.SendWait("{ENTER}");
}
If the form has an input tag of type "file" you won't have to click the browse button... instead use sendKeys to send the file path to the tag and then submit the form. (It'll set the value)
Thanks it worked!
IWebElement elem = driver.FindElement(By.XPath("//input[@type='file']"));
elem.SendKeys(path1);
iPhone SDK: Open default message tones list
In my application, I want to offer the default system message tones in the settings for incoming messages. How can I open the device's default alert-tones list?
I have tried the following code, but it's not returning any sounds.
NSFileManager *fileManager = [[NSFileManager alloc] init];
NSURL *directoryURL = [NSURL URLWithString:@"/System/Library/Audio/UISounds"];
NSArray *keys = [NSArray arrayWithObject:NSURLIsDirectoryKey];
NSDirectoryEnumerator *enumerator = [fileManager
enumeratorAtURL:directoryURL
includingPropertiesForKeys:keys
options:0
errorHandler:^(NSURL *url, NSError *error) {
// Handle the error.
// Return YES if the enumeration should continue after the error.
return YES;
}];
for (NSURL *url in enumerator) {
NSError *error;
NSNumber *isDirectory = nil;
if (! [url getResourceValue:&isDirectory forKey:NSURLIsDirectoryKey error:&error]) {
// handle error
}
else if (! [isDirectory boolValue]) {
[audioFileList addObject:url];
}
}
Please Help.
Check this link: https://github.com/TUNER88/iOSSystemSoundsLibrary. I think you are using this reference code, and it works; I have tested it on an iPhone. I think you were testing in the iPhone Simulator, and it doesn't work in the simulator. So test on a device; it works fine there.
How to map values to a factor in R?
I have a dataset in R with countries. I need to make a factor that maps to 1 when US appears, and 0 for the rest of the countries. How can I do that? I have tried mapvalues() but I can't find a way to put all the countries in the argument.
You can try something like below
Data$Category[Data$Country == 'US'] = 1
Data$Category[Data$Country != 'US' ] = 0
And then convert the Category column to factor, something like this
Data$Category = factor(Data$Category,
levels=c(0,1))
I hope this works
@sindri_baldur sure thanks. I will keep that in mind.
Outline edition tool in CorelDraw?
Is there any tool in CorelDraw X4/X5 that allows you to change the outline like you can in Illustrator?
Yes
Outline Pen Tool in the Toolbox, where you can change anything regarding outline.
In Chrome, <select> element doesn't have background color when disabled
Chrome has default background colors set for disabled inputs, but not for the <select> element when it is disabled. What is the reason for that? It seems like a bug to me.
Open the jsFiddle link in IE, Firefox and Chrome and notice the difference. In IE and Firefox the <input> and <select> both have the same styling. But in Chrome, the elements have different styling. I would expect the <select> element to have the same background color as the <input>.
See jsFiddle
<input disabled value="text" />
<select disabled>
<option>option</option>
</select>
Chrome
Firefox
You could ask here: http://www.chromium.org/developers/discussion-groups
Because that's how the browser's default styles look.
It is doing it correctly as per those.
If you'd like to fix it to match, you can do so with CSS:
select:disabled {
background-color: rgb(235, 235, 228);
color: rgb(84,84,84);
}
+1 to this, but I think you should also add input:disabled for your sample code to him
Browsers do that for a lot of reasons: it may be a result of the browser authors' UX research, or for the sake of accessibility, or as a visual cue so that you can immediately see which elements are disabled.
But if you still want to apply your own style. You could do it like these:
http://jsfiddle.net/fedmich/PmhJc/1/
select:disabled {
background-color: rgb(235, 235, 228) !important;
color: rgb(84,84,84);
}
input:disabled {
background-color: rgb(235, 235, 228) !important;
color: rgb(84,84,84);
}
Note that sometimes, you'll also need to change the "mouse pointers" to a proper cursor or i-beam.
You should not care much about browser default styles, unless you are a browser developer.
If you want to make it the same color, then add styles for Chrome:
select:disabled {
background-color: #ebebe4;
}
What was learned from study of Surveyor-3's components, retrieved and returned to Earth by Apollo-12?
A comment by @Hobbes on the question How did Apollo-12 manage to land next to Surveyor-3? First “Space-Tourists”? points out that the "souvenirs" removed and retrieved from the Surveyor-3 lander by the astronauts and returned to Earth were studied for "the effects of long-term exposure to space" on moon exploration equipment.
This was important not only for basic science, but for planning for equipment to be left on the moon by subsequent Apollo missions and further space exploration as well. The components had been exposed to the lunar space environment and solar radiation for 31 and 16 months, respectively.
Is there any discussion or reports of "What was learned from study of Surveyor-3's components, retrieved and returned to Earth by Apollo-12?"
It's often fruitful to web-search for "Apollo xyz mission report" to get answers to these sorts of questions.
The Apollo 12 Preliminary Science Report contains a section on Surveyor 3.
Among other findings:
no evidence of "cold welding" of the parts.
dust kicked up during the landing of the Apollo 12 LM pitted and "sandblasted" one side of Surveyor
micrometeoroid pitting was light and confirmed the estimates used in designing the Apollo spacecraft to withstand same
There's also a more detailed Analysis of Surveyor 3 material and photographs, which among other things:
Mentions that a streptococcus bacterium was isolated from a piece of foam in the interior of the camera, presumably a stowaway, but later analysis suggests the possibility that it was contamination occurring after the return from Apollo 12.
No living microbes appeared to have survived in a bundle of wiring, but that's inconclusive because they have no way of knowing if there were any living microbes in there when Surveyor was launched.
From the report's preface:
Engineering studies of the television camera show that the complex electromechanical components, optics, and solid state electronics were remarkably resistant to the severe lunar surface environment over 32 lunar day/night cycles with their extremes of temperature and long exposure to solar and cosmic radiation. These results indicate that the state of technology, even as it existed some years ago, is capable of producing reliable hardware that makes feasible long-life lunar and planetary installations.
I've taken the liberty to add a quote from the report you've found as an example of "What was learned..." does this look OK?
Are all classification models appropriate for estimation of probabilities?
I am trying to estimate the probability that a tennis player will win based on several predictors (such as skill, form, surface, weather, etc.).
Can I use every classification method to estimate a probability such as these methods:
Logistic Regression
Linear Discriminant Analysis
SVM
Neural Networks
KNN
Bagged Trees
Random Forest
Or are some methods more suitable to identify a specific class and less suitable for estimation of a precise probability?
Many classifiers give you uncalibrated probability estimates: http://scikit-learn.org/stable/modules/calibration.html
This question. Right off the bat one can comment that SVMs do not return probabilities natively and have to rely on techniques like Platt scaling and isotonic regression to produce any probability estimates. Similarly, a kNN does not really output probabilities. Yes, we could count the fractions of labels, but that is really an approximation that is highly dependent on the choice of $k$.
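To illustrate what Platt scaling actually does, here is a minimal sketch in plain Python. Note the assumptions: this uses simple gradient descent on the log loss, not the Newton-style solver used in LIBSVM, and in practice you would reach it through scikit-learn's CalibratedClassifierCV with method='sigmoid' rather than writing it yourself.

```python
import math

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit sigmoid(A*s + B) to raw decision scores by minimizing log loss.

    Sketch only: plain gradient descent; real implementations use a
    regularized Newton method and out-of-fold scores to avoid overfitting.
    """
    A, B = 1.0, 0.0
    n = len(scores)
    for _ in range(steps):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(A * s + B)))
            gA += (p - y) * s   # gradient of log loss w.r.t. A
            gB += (p - y)       # gradient of log loss w.r.t. B
        A -= lr * gA / n
        B -= lr * gB / n
    return lambda s: 1.0 / (1.0 + math.exp(-(A * s + B)))

# Map uncalibrated SVM-style scores to probabilities:
calibrator = platt_scale([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
```

The returned function maps any raw score to a probability in (0, 1); higher scores map to higher probabilities once the sigmoid is fit.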
@Tim That is an interesting blog. I think calibrating the outcomes is a bridge too far currently for me. I conclude that Logistic regression is the model providing the best probabilities. Do you know any other classification methods which provide accurate probabilities in general?
@usεr11852 Thanks. Do you know which classification methods would generate the most accurate probabilities without the use of additional techniques?
Given that we would not use additional calibration steps, logistic regression, where we would probably use splines for continuous predictors, is our best bet. That said, producing a calibration plot should not be horribly hard. You could try an LR and then, say, an RF, and then compare their calibration plots.
Classification is not probability estimation. Have a look at https://stats.stackexchange.com/questions/127042/why-isnt-logistic-regression-called-logistic-classification
IValueResolver.Resolve not called on mapping AutoMapper.QueryableExtensions
I am trying to map IQueryable data in AutoMapper.
namespace Soure
{
public class Employee
{
public int Id { get; set; }
public string Name { get; set; }
public Department dept { get; set; }
public Employee()
{
this.Id = 1;
this.Name = "Test Name";
this.dept = new Department();
}
}
public class Department
{
public int DeptId { get; set; }
public string DeptName { get; set; }
public Department()
{
DeptId = 2;
DeptName = "Test Dept";
}
}
}
namespace destination
{
public class Employee
{
public int Id { get; set; }
public string Name { get; set; }
public Department dept { get; set; }
}
public class Department
{
public int DeptId { get; set; }
public string DeptName { get; set; }
}
}
var empList = new List<Soure.Employee>() { new Soure.Employee(), new Soure.Employee() }.AsQueryable();
Mapper.CreateMap<Soure.Employee, destination.Employee>();
Mapper.CreateMap<Soure.Department, destination.Department>().ForMember(d => d.DeptId, s => s.MapFrom(sou => 7));
Mapper.AssertConfigurationIsValid();
var mappedEmp = empList.Project().To<destination.Employee>();
I want the ResolutionResult IValueResolver.Resolve(ResolutionResult source) method to be called on mapping, but it doesn't get called.
When I use Map(sourceObject), the Mapper calls ResolutionResult IValueResolver.Resolve(ResolutionResult source). But since I have an IQueryable to map, I can't use Map(sourceObject).
Or is there any alternative method to ResolutionResult IValueResolver.Resolve(ResolutionResult source) that works for AutoMapper.QueryableExtensions?
Value Resolvers aren't supported for queryable extensions. LINQ providers can't interpret any random bit of code, it has to be interpretable by EF/NHibernate etc. AutoMapper merely passes an expression tree to a LINQ provider.
Your example uses MapFrom instead of ResolveUsing, so I can't tell if what you're trying to do is actually possible. Also, your example uses AsQueryable(), which is deceptive - lots of things are possible with the in-memory queryables that aren't possible with DB-based query providers.
I tried with ResolveUsing, as it needs an IValueResolver, which isn't supported for queryables; that's why I am trying to achieve the same thing with MapFrom. How can I achieve the same functionality with MapFrom, or is there any other way to achieve it?
Achieve what exactly? The code you show in your example works just fine in LINQ.
Yes, the above code works fine, but I want to pass a func instead of 7 and use the result of that func.
I have one more query, but since I just asked another question I am blocked from asking for the next 1.5 hours, and I need it immediately. It is related to LINQ. Can you help me?
Can you update your code to show the actual problem you're having?
Upgrading the twitter bootstrap gem from 2.x to 3.x or 4.x is breaking the UI styles
I am trying to upgrade the twitter-bootstrap-rails gem version from 2.2.8 version to 3.2.2 version since 2.2.8 has vulnerabilities.
But while upgrading the gem version it is breaking the UI styles everywhere in the application nav-bar, search, and modal box, etc.
I followed the steps given in the official documentation
by running this command rails generate bootstrap:install less after doing the bundle install.
Is there any other configurations that we need to change other than this? I thought it will be a simple gem upgrade but not sure where things are going wrong.
Any help would be appreciated. Thanks.
You should have a look at the GetBootstrap documentation (the successor of "Twitter Bootstrap") and should install the
gem 'bootstrap', '~> 4.5.0'
The migration guide is here : https://getbootstrap.com/docs/4.5/migration/
I added the gem as suggested, but it's throwing the following error ExecJS::RubyRacerRuntime is not supported. Please replace therubyracer with mini_racer in your Gemfile or use Node.js as ExecJS runtime. I need the rubyracer gem so even If I try to replace it with mini_racer, it's throwing an error activesupport-<IP_ADDRESS>/lib/active_support/dependencies.rb:324:in require: cannot load such file -- v8 (LoadError)
And also what you are suggesting is changing from twitter-bootstrap-rails gem to bootstrap. This involves moving from less to sass. And we try to keep them in less itself. So I believe this is different from what I am expecting.
You're right, I thought going to bootstrap might be the best choice.
Yeah, that would be right. But I would like to stick with twitter-bootstrap-rails for now. Thanks for your suggestion though :)
Omnibus Gitlab version 7.4.3 at a custom relative_url_root (http://mydomain/gitlab)
I have successfully installed older versions of gitlab and hosted them at a location like this:
mydomain/gitlab
With the new version of GitLab doing all of the config through gitlab-ctl and via editing /etc/gitlab/gitlab.rb, I'm not sure how to achieve this setup.
I find lots of documentation on stackexchange for older versions of gitlab that did not use gitlab-ctl for configuration but not for the new version. Presently I have gitlab installed and running fine at:
mydomain
I want to move it to:
mydomain/gitlab
Anybody know how to do this for version 7.4.3?
Thanks :)
I recommend you learn precise URL terminology: domain != path ;) It will really help you find answers.
Serving from a relative URL root seems to be simply not implemented on Omnibus GitLab: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/ed51ec97401bba955c93e61f8ef860520f745837/files/gitlab-cookbooks/gitlab/templates/default/gitlab.yml.erb#L24 (since no template variable is inserted there)
You could work around that by modifying all the required configuration files manually as explained in the comment on gitlab.yml, but that would really be a lot of manual work and those configs would get overwritten if you reconfigure, so I recommend you request the feature at: http://feedback.gitlab.com/forums/176466-general and send a pull request implementing that.
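For reference, here is a sketch of the key the manual workaround centers on: relative_url_root in the generated gitlab.yml. The path and surrounding keys below are assumptions based on the usual omnibus layout, and note again that gitlab-ctl reconfigure regenerates this file, so any manual edit is overwritten.

```yaml
# /var/opt/gitlab/gitlab-rails/etc/gitlab.yml (generated file; sketch only)
production:
  gitlab:
    host: mydomain
    # Serve GitLab from mydomain/gitlab instead of the domain root:
    relative_url_root: /gitlab
```

The bundled nginx and unicorn configs would need matching edits for the same path prefix, which is exactly why a reconfigure-safe upstream feature is the better long-term fix.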
This problem was also raised at: Omnibus GitLab on IP without a domain-name and with custom relative_url_root
I ran across this same issue this week and set out to find a solution. I'm not all that familiar with RoR so I ended up creating a Bash script to automate the process instead.
Now it'd be a lot nicer if we could just automate all this through /etc/gitlab/gitlab.rb and hopefully somebody sets that up at some point (maybe someone has that I just can't find?), but in the meantime feel free to use this script as needed. Run as root on a system with a GitLab Omnibus-package installation. Tested on GitLab CE 7.9.0 with Bash 4.2.
GitLab Relative URL Setter
How to cause open event of jQuery UI tooltip to obey the show.delay property setting
The open event of the jQuery UI tooltip fires not when the popup opens visibly but as soon as the mouse enters the element. It does not obey the show.delay property setting. This is documented behavior so I suppose it is not a bug.
So if I have tooltip on adjacent cells of a table, and the user drags the mouse across these cells, the actions in my open and close handlers are taken multiple times -- three, four, five times -- as many times as the number of cells the mouse entered.
What's a good way to exit the open event if the show.delay has not yet transpired?
EDIT: Not knowing how much time has elapsed on the show.delay, I've had to choose an arbitrary duration for the setTimeout, and track whether a class-switch is in progress using a flag:
<snip> ...
show: {
delay: 666
},
open: function (event, ui) {
if (me.changingClass) return;
me.changingClass = true;
$("td.baz").switchClass("foo", "bar");
},
close: function (event, ui, dupids) {
$("td.baz").switchClass("bar", "foo");
setTimeout(function () { me.changingClass = false; }, 200);
}
Can you show the code you have so far? I would guess a timeout would do it, but I can't say for sure without seeing what you're working with.
Can you add enough code to replicate your issue?
All you have to do to replicate it is to attach the tooltip to the TDs of a table. open and close fire immediately on mouse enter and mouse leave, not after the show.delay threshold has been reached. So if the user drags the mouse (quickly) across several adjacent cells of a table, entering the cell fires the open event and leaving the cell fires the close event, so if the mouse touched three cells while being moved, those events would each fire three times.
Can you throw it in a fiddle? I'm not sure how you're achieving the show delay.
See edit; it is a property of the tooltip configuration object.
I think this may do the trick, if I understand what you're after:
Working Example
var timer;
$('td').tooltip({
show: {
delay: 2000 //number of milliseconds to wait
},
open: function (event, ui) {
var xthis = this;
timer = setTimeout(function () {
$(xthis).siblings().switchClass("bar", "foo");
}, 2000); // number of milliseconds to wait
},
close: function (event, ui, dupids) {
clearTimeout(timer);
$(this).siblings().switchClass("foo", "bar");
}
});
How do I find ID to download ImageNet Subset?
I am new to ImageNet and would like to download full sized images of one of the subsets/synsets however I have found it incredibly difficult to actually find what subsets are available and where to find the ID code so I can download this.
All previous answers (from only 7 months ago) contain links which are now all invalid. Some seem to imply there is some sort of algorithm for making up an ID, as it is linked to WordNet?
Essentially I would like a dataset of plastic or plastic waste or ideally marine debris. Any help on how to get the relevant ImageNet ID or suggestions on other datasets would be much much appreciated!!
Welcome. Please take the [tour] and review [ask]. You are asking for recommendations, which is off-topic ([help/on-topic]).
I used this repo to achieve what you're looking for. Follow the following steps:
Create an account on Imagenet website
Once you get the permission, download the list of WordNet IDs for your task
Once you have the .txt file containing the WordNet IDs, you are all set to run main.py
As per your need, you can adjust the number of images per class
By default ImageNet images are automatically resized into 224x224. To remove that resizing, or implement other types of preprocessing, simply modify the code in line #40
Source: Refer this medium article for more details.
You can find all the 1000 classes of ImageNet here.
EDIT:
Above method doesn't work post March 2021. As per this update:
The new website is simpler; we removed tangential or outdated functions to focus on the core use case—enabling users to download the data, including the full ImageNet dataset and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
So with this, to parse and search imagenet now you may have to use nltk.
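As a starting point for the nltk route: an ImageNet wnid is simply the letter "n" followed by the WordNet 3.0 noun offset, zero-padded to 8 digits. A minimal sketch (the nltk lookup in the comments assumes nltk and its wordnet corpus are installed, and only synsets actually present in ImageNet will have images):

```python
def offset_to_wnid(offset):
    """Format a WordNet 3.0 noun offset as an ImageNet wnid,
    e.g. 2084071 -> 'n02084071' (the 'dog' synset)."""
    return "n{:08d}".format(offset)

# With nltk available, you can go from a word to candidate wnids:
#   from nltk.corpus import wordnet as wn   # requires the wordnet corpus
#   for synset in wn.synsets("dog", pos=wn.NOUN):
#       print(offset_to_wnid(synset.offset()), synset.definition())

print(offset_to_wnid(2084071))  # n02084071
```

You can then check each candidate wnid against the class lists linked above to see whether ImageNet actually has images for it.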
More recently, the organizers hosted a Kaggle challenge based on the original dataset with additional labels for object detection. To download the dataset you need to register a Kaggle account and join this challenge. Please note that by doing so, you agree to abide by the competition rules.
Please be aware that this file is very large (168 GB) and the download will take anywhere from minutes to days depending on your network connection.
Install the Kaggle CLI and set up credentials as per this guideline.
pip install kaggle
Then run these:
kaggle competitions download -c imagenet-object-localization-challenge
unzip imagenet-object-localization-challenge.zip -d <YOUR_FOLDER>
Additionally to understand ImageNet hierarchy refer this.
Hi, I have already gained permission on the website. The issue comes with step 2: it is not as simple as downloading the list of IDs, and this is the part I need help with. The medium article said "For instance if the synset needed is pictures of ships it can be found by searching for ship on the imagenet website and the result will be the following page which has the wnid: n04194289". However, the website appears to have been updated and no longer has such a search function.
@HHEEMM My bad! You are right: until Feb 2021 the methods I mentioned worked for me. I have edited the answer to address this issue further.