How do I create my C# object to match my RestSharp XML response?
I just started working with RestSharp and, using the Netflix API, I have been able to successfully create my OAuth tokens and request data from Netflix. But for some reason I cannot seem to create my objects to match the returned XML correctly.
My code:
var client2 = new RestClient("http://api-public.netflix.com")
{
Authenticator = OAuth1Authenticator.ForProtectedResource(MyOauth.ConsumerKey, MyOauth.ConsumerSecret, MyOauth.OauthToken, MyOauth.OauthTokenSecret)
};
var request = new RestRequest("/catalog/titles/autocomplete");
request.AddParameter("term", "star wars");
var searchResults = client2.Execute<CatalogList>(request);
My attempted object creation:
[XmlRoot("autocomplete")]
public class CatalogList
{
public List<AutoCompleteItem> Titles { get; set; }
}
[XmlRoot("autocomplete_item")]
public class AutoCompleteItem
{
[XmlElement("title short")]
public string Title { get; set; }
[XmlElement("short")]
public string ShortName { get; set; }
}
searchResults returns 20 titles and my List has 20 entries; however, the AutoCompleteItem is always empty. I've changed up the element names, attributes, etc., but never get the right combination.
The XML that is returned looks like:
<?xml version="1.0" standalone="yes" ?>
<autocomplete>
<url_template>http://api-public.netflix.com/catalog/titles/autocomplete?{-join|&|term} </url_template>
<autocomplete_item>
<title short="Star Wars: Episode II: Attack of the Clones" />
</autocomplete_item>
What am I missing here?
Thanks,
Jason
[XmlRoot("autocomplete")]
public class AutocompleteList
{
[XmlElement("url_template")]
public string UrlTemplate { get; set; }
[XmlElement("autocomplete_item")]
public List<AutocompleteItem> Items { get; set; }
}
public class AutocompleteItem
{
[XmlElement("title")]
public Title ItemTitle { get; set; }
}
public class Title
{
[XmlAttribute("short")]
public string Short { get; set; }
}
The inner Title type is necessary to capture the short= XML attribute.
string xml = @"<?xml version=""1.0"" standalone=""yes"" ?>
<autocomplete>
<url_template>http://api-public.netflix.com/catalog/titles/autocomplete?{-join|&|term}</url_template>
<autocomplete_item>
<title short=""Star Wars: Episode II: Attack of the Clones"" />
</autocomplete_item>
</autocomplete>";
var reader = new StringReader(xml);
var ser = new XmlSerializer(typeof(AutocompleteList));
var result = (AutocompleteList) ser.Deserialize(reader);
This produces the same result as
var result = new AutocompleteList
{
UrlTemplate = "http://api-public.netflix.com/catalog/titles/autocomplete?{-join|&|term}",
Items = new List<AutocompleteItem>
{
new AutocompleteItem
{
ItemTitle = new Title
{
Short = "Star Wars: Episode II: Attack of the Clones",
}
},
},
};
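The attribute-versus-element distinction can be double-checked with any XML parser. A minimal Python sketch of the same document (illustrative only, not part of the original C# answer):

```python
import xml.etree.ElementTree as ET

xml = """<autocomplete>
  <url_template>http://api-public.netflix.com/catalog/titles/autocomplete</url_template>
  <autocomplete_item>
    <title short="Star Wars: Episode II: Attack of the Clones" />
  </autocomplete_item>
</autocomplete>"""

root = ET.fromstring(xml)
title = root.find("autocomplete_item/title")
print(title.text)          # → None: the <title/> element carries no text content
print(title.get("short"))  # → the value lives in the 'short' attribute
```

Because the value is an attribute rather than element text, any mapping that binds `title` to a plain string will come back empty.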
I now get the URL template, but ItemTitle (all 20 of them) shows up as null. Is it possible the RestSharp deserializer isn't functioning correctly and I need to do it on my own? Or does the object need to be tweaked? Thanks for the help.
Your code works fine when deserializing the XML manually, so I guess I'll just go that route. No idea why RestSharp isn't working with it; probably user error, I'm sure. But thanks again.
| common-pile/stackexchange_filtered |
Multiple resolutions custom UI strategy
I made my last iOS apps for iOS 4, when I didn't have to worry too much about different scales and resolutions.
I built custom table view cells by filling in multiple CGRects programmatically, without using Interface Builder. But this doesn't feel like a smart approach with five or more different screen sizes (iPhone 4, 5, 6 Plus, iPad, iPad mini, ...) and several different resolutions.
Is Interface Builder worth using, and will it help with this problem?
What is your strategy to sanely design custom views in iOS8?
You should use Autolayout. Either programmatically or using interface builder.
Check out the Autolayout videos from WWDC 12 and 13.
That does look promising. Would it be a hassle to edit an existing layout with this or should I start over?
You will probably want to rewrite the existing layout for the most part, but Autolayout can also coexist with frame-based layout. It does take some time to wrap your head around how to use it. But once you start, you'll never look back :)
How do I parse an individual entry from XML using PHP?
I am trying to parse an individual element from an XML string using PHP. The issue is that this individual element occurs before the entries start. The XML is below:
<?xml version="1.0" encoding="UTF-8"?>
<feed gd:kind="shopping#products" gd:etag=""lm_25heFT8yiumci9EH1kItJBpg/Sj5O9aXZ82PKpx3N2C3uQYMhNYE"" xmlns="http://www.w3.org/2005/Atom" xmlns:gd="http://schemas.google.com/g/2005" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:s="http://www.google.com/shopping/api/schemas/2010">
<openSearch:totalResults>64</openSearch:totalResults>
<openSearch:startIndex>1</openSearch:startIndex>
<openSearch:itemsPerPage>25</openSearch:itemsPerPage>
<entry >...</entry>
<entry >...</entry>
</feed>
I am trying to parse out the "64" in the openSearch:totalResults tag. How do I do this and assign it to a variable in PHP? I tried:
$url = 'url of xml feed';
$xml = simplexml_load_file($url);
$entries =$xml->entry[0]->openSearch:totalResults;
// also tried $entries =$xml->openSearch:totalResults;
echo $entries;
but it's not working. Any advice?
possible duplicate of simplexml_load_string: how to get openSearch:itemsPerPage in youtube's video feed?
See also: http://stackoverflow.com/search?q=%5Bsimplexml%5D+opensearch
You need to register the namespace in order to access these nodes:
$xml = simplexml_load_file('file.xml');
$xml->registerXPathNamespace('os', 'http://a9.com/-/spec/opensearchrss/1.0/');
$nodes = $xml->xpath('os:totalResults');
$totalResults = (string)$nodes[0];
You can also use https://www.php.net/manual/en/simplexmlelement.children.php (with the $ns parameter), which is less resource-intensive.
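The same idea, sketched in Python for illustration: it is the namespace URI (not the openSearch prefix) that identifies the node, so you look it up through a prefix-to-URI map.

```python
import xml.etree.ElementTree as ET

# Trimmed version of the feed from the question (XML declaration omitted,
# since ET.fromstring rejects encoding declarations inside a str)
xml = """<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">
  <openSearch:totalResults>64</openSearch:totalResults>
</feed>"""

# Map an arbitrary local prefix to the namespace URI declared in the feed
ns = {"os": "http://a9.com/-/spec/opensearchrss/1.0/"}
total = int(ET.fromstring(xml).find("os:totalResults", ns).text)
print(total)  # → 64
```

The local prefix (`os` here) is arbitrary; only the URI has to match the document's declaration.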
Get last h1 tag before clicked element
When a user clicks an element on a page, I need to find the closest h1 tag above the clicked element and get its HTML content.
Given the below html:
<div>
<h1>This is header 1</h1>
<div>
<span> some text</span><br>
<span> some text</span><br>
<span> some text</span><br>
<span style="color:red;" class="myClass"> CLICK ME</span><br>
<span> some text</span><br>
<span> some text</span><br>
</div>
<h1>This is header 2</h1>
<div>
<span> some text</span><br>
<span> some text</span><br>
<span> some text</span><br>
<span> some text</span><br>
<span> some text</span><br>
<span> some text</span><br>
</div>
</div>
I tried:
$("body").on('click', '.myClass', function() {
var text=$(this).closest( "h1" ).html();
alert(text);
});
But text ends up undefined, as it doesn't find the h1 tag.
Expected result: function should alert "This is header 1"
jsfiddle
closest() only traverses ancestors, and the h1 is not an ancestor of the clicked element. Use closest() to get the containing div, then take its previous h1 sibling:
var text = $(this).closest('div').prev('h1').text();
DEMO: http://jsfiddle.net/son6v8g3/3/
You can also use parents
var heading = $(this).parents('div:first').prev('h1').text()
parents() is obviously overkill here, though.
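The "last h1 before the element" search can also be expressed outside the DOM. As an illustration (Python rather than jQuery, and the class name LastHeadingFinder is my own), a parser can scan the markup in document order and remember the most recent h1 seen before the target:

```python
from html.parser import HTMLParser

class LastHeadingFinder(HTMLParser):
    """Remember the text of the most recent <h1> seen before the target element."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.result = None
        self._in_h1 = False
        self._last_h1 = ""

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self._in_h1 = True
            self._last_h1 = ""          # start collecting a new heading
        elif self.target_class in dict(attrs).get("class", "").split():
            self.result = self._last_h1  # heading closest above the target

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1:
            self._last_h1 += data

finder = LastHeadingFinder("myClass")
finder.feed("<div><h1>This is header 1</h1>"
            '<div><span class="myClass">CLICK ME</span></div>'
            "<h1>This is header 2</h1><div></div></div>")
print(finder.result)  # → This is header 1
```

This mirrors what the jQuery answer does: the heading is found by document order, not by ancestry.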
Can I create a Druid lookup using time as a dimension?
I am able to build lookups like
{
"type":"lookup",
"dimension":"type",
"outputName":"type_name",
"outputType": "STRING",
"retainMissingValue":true,
"lookup":{"type": "map", "map":
{"0": "Unknown",
"1": "Mobile(Other)",
"2": "Desktop/Notebook",
"3": "Connected/Smart TV",
"4": "Mobile Phone"},
"isOneToOne":true}
}
However, I would like to create one using the time of day as the input variable. Is there any way to do so without having to add the hour to the datasource as a dimension? For example, the hours 5am-9am should map to morning, etc. I am running Druid 12.
This works! You can also add a timezone, which is important if your data is in UTC like ours.
{
"type": "extraction",
"dimension": "__time",
"outputName": "hourOfDay",
"extractionFn": {
"type": "cascade",
"extractionFns": [
{
"type": "timeFormat",
"format": "H",
"locale": "en"
},
{
"type": "lookup",
"lookup": {
"type": "map",
"map": {
"0": "early_morning",
"1": "early_morning",
"2": "early_morning",
"3": "early_morning",
"4": "early_morning",
"5": "early_morning",
"6": "morning",
"7": "morning",
"8": "morning",
"9": "morning",
"10": "morning",
"11": "morning",
"12": "afternoon",
"13": "afternoon",
"14": "afternoon",
"15": "afternoon",
"16": "afternoon",
"17": "evening",
"18": "evening",
"19": "evening",
"20": "evening",
"21": "night",
"22": "night",
"23": "night"
}
},
"retainMissingValue": true,
"injective": true
}
]
  }
}
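For reference, the hour-to-daypart mapping encoded in the lookup above, restated as a small Python function (illustrative only; Druid evaluates the JSON config itself, and the function name day_part is my own):

```python
def day_part(hour: int) -> str:
    """Map an hour of day (0-23) to the daypart names used in the lookup above."""
    if 0 <= hour <= 5:
        return "early_morning"
    if 6 <= hour <= 11:
        return "morning"
    if 12 <= hour <= 16:
        return "afternoon"
    if 17 <= hour <= 20:
        return "evening"
    if 21 <= hour <= 23:
        return "night"
    raise ValueError(f"hour out of range: {hour}")

print(day_part(8))   # → morning
print(day_part(22))  # → night
```

Writing the mapping this way makes it easy to sanity-check the 24 JSON entries before deploying the lookup.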
WebSphere App Server <IP_ADDRESS> - IBM Java 1.6, Apache Wink, REST Client TLSv1.2
I recently upgraded WebSphere App Server to TLSv1.2. Previously, with TLSv1, I was able to interface with a REST client using a combination of javax.ws.rs, org.glassfish.jersey, and javax.net.ssl. Once the app server upgraded TLS, I wasn't able to get it working. IBM instructed me to apply the Web 2.0 Feature Pack and interface with the REST client using Apache Wink, but I haven't been able to get this solution to work. Previously, the keystore and truststore were set on the SSLContext.
private static String getClient(String servicePath){
try {
javax.ws.rs.client.Client client = null;
org.glassfish.jersey.client.ClientConfig clientConfig = new org.glassfish.jersey.client.ClientConfig();
int readTimeOut = Integer.parseInt(commonProp.getProperty(READ_TIMEOUT));
int connTimeout = Integer.parseInt(commonProp.getProperty(CONNECTION_TIMEOUT));
clientConfig.property(ClientProperties.CONNECT_TIMEOUT, connTimeout);
clientConfig.property(ClientProperties.READ_TIMEOUT, readTimeOut);
org.glassfish.jersey.SslConfigurator sslConfig;
String trustStoreName = commonProp.getProperty(TRUSTSTORE_NAME);
File file1 = new File("\\"+PROJECT_FILE_DIR+ "/ISB"+trustStoreName.trim());
String keyStoreName = commonProp.getProperty(KEYSTORE_NAME);
File file2 = new File("\\"+PROJECT_FILE_DIR+ "/ISB"+keyStoreName.trim());
FileInputStream fis1 = new FileInputStream(file1);
FileInputStream fis2 = new FileInputStream(file2);
String trustStorePassword = commonProp.getProperty(TRUSTSTORE_PASSWORD);
String keyStorePassword = commonProp.getProperty(KETSTORE_PASSWORD);
sslConfig = org.glassfish.jersey.SslConfigurator.newInstance().trustStoreBytes(ByteStreams.toByteArray(fis1))
.trustStorePassword(trustStorePassword).keyStoreBytes(ByteStreams.toByteArray(fis2))
.keyPassword(keyStorePassword);
javax.net.ssl.SSLContext sslContext = sslConfig.createSSLContext();
client = javax.ws.rs.client.ClientBuilder.newBuilder().sslContext(sslContext).withConfig(clientConfig).build();
String url = commonProp.getProperty(ENDPOINT_URL)+servicePath;
Response response =
client.target(url)
.request(MediaType.APPLICATION_JSON)
.get();
String responseAsString = "";
if(response != null){
responseAsString = response.readEntity(String.class);
}
return responseAsString;
} catch (Throwable e) {
logger.severe(e.getMessage() + e.getLocalizedMessage());
e.printStackTrace();
throw new RuntimeException(e);
}
}
With the feature pack installed and using JSSEHelper, I retrieved the SSL alias information from WAS. I seem to make the client connection now, but I still have an authentication issue:
response statusCode: 200
called close()
called closeInternal(true)
WebContainer : 0, SEND TLSv1.2 ALERT: warning, description = close_notify
An attempt to authenticate with a client certificate failed. A valid client
certificate is required to make this connection.
I believe the issue may be that previously the keystore and truststore contents were read from file, whereas JSSEHelper is simply supplying the file names.
com.ibm.ssl.clientAuthenticationSupported = false
com.ibm.ssl.keyStoreClientAlias = isbgatewaytst
com.ibm.ssl.contextProvider = IBMJSSE2
com.ibm.ssl.trustStoreProvider = IBMJCE
com.ibm.ssl.protocol = TLSv1.2
com.ibm.ssl.keyStoreReadOnly = false
com.ibm.ssl.alias = ISBGatewaySSL
com.ibm.ssl.keyStoreCreateCMSStash = false
com.ibm.ssl.securityLevel = CUSTOM
com.ibm.ssl.trustStoreName = ISBGatewayTrust
com.ibm.ssl.configURLLoadedFrom = security.xml
com.ibm.ssl.trustStorePassword = ********
com.ibm.ssl.keyStoreUseForAcceleration = false
com.ibm.ssl.trustManager = PKIX
com.ibm.ssl.validationEnabled = false
com.ibm.ssl.trustStoreInitializeAtStartup = false
com.ibm.ssl.keyManager = IbmX509
com.ibm.ssl.keyStoreFileBased = true
com.ibm.ssl.keyStoreType = JKS
com.ibm.ssl.trustStoreFileBased = true
com.ibm.ssl.trustStoreCreateCMSStash = false
com.ibm.ssl.trustStoreScope = (cell):ESB_DEV
com.ibm.ssl.trustStore = E:/IBM/content/resources/dev_projectfiles_dir/ISB/isb-truststore.jks
com.ibm.ssl.keyStoreProvider = IBMJCE
com.ibm.ssl.enabledCipherSuites = SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256 SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA256 SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA
com.ibm.ssl.daysBeforeExpireWarning = 60
com.ibm.ssl.keyStoreServerAlias = isbgatewaytst
com.ibm.ssl.clientAuthentication = false
com.ibm.ssl.keyStore = E:/IBM/content/resources/dev_projectfiles_dir/ISB/isb-keystore-tst.jks
com.ibm.ssl.trustStoreUseForAcceleration = false
com.ibm.ssl.trustStoreReadOnly = false
com.ibm.ssl.keyStoreScope = (cell):ESB_DEV
com.ibm.ssl.tokenEnabled = false
com.ibm.ssl.keyStoreName = ISBGatewayPrivatekey
com.ibm.ssl.keyStorePassword = ********
com.ibm.ssl.keyStoreInitializeAtStartup = false
com.ibm.ssl.trustStoreType = JKS
My issue is how to put the keystore and truststore information on the org.apache.wink.client.ClientConfig (or elsewhere), similar to previously setting the javax.net.ssl.SSLContext. I did set the WAS SSL alias information on the ClientConfig properties, but I believe that just adds the stores' file locations, which isn't of use to the REST client.
public static String getPPLUResponseString(String servicePath) {
String responseAsString = "";
com.ibm.websphere.ssl.JSSEHelper jsseHelper = com.ibm.websphere.ssl.JSSEHelper.getInstance();
try {
Properties sslProps = null;
String alias = "ISBGatewaySSL";
sslProps = jsseHelper.getProperties(alias, getConnectionInfo(), null);
org.apache.wink.client.ClientConfig clientConfig = new org.apache.wink.client.ClientConfig();
clientConfig.readTimeout(Integer.parseInt(commonProp.getProperty(READ_TIMEOUT)));
clientConfig.connectTimeout(Integer.parseInt(commonProp.getProperty(CONNECTION_TIMEOUT)));
clientConfig.setProperties(sslProps);
Enumeration keys = clientConfig.getProperties().keys();
while (keys.hasMoreElements()) {
String key = (String) keys.nextElement();
String value = (String) clientConfig.getProperties().get(key);
System.out.println(" clientConfig.getProperties(): " + key + ": " + value);
}
org.apache.wink.client.RestClient restClient = new org.apache.wink.client.RestClient(clientConfig);
String url = commonProp.getProperty(ENDPOINT_URL) + servicePath;
System.out.println(" url: " + url);
org.apache.wink.client.Resource restResource = restClient.resource(url);
System.out.println(" before client response");
org.apache.wink.client.ClientResponse clientResponse = restResource.accept(MediaType.APPLICATION_JSON_TYPE).get();
int statusCode = clientResponse.getStatusCode();
System.out.println(" response statusCode: " + statusCode);
String responseEntity = clientResponse.getEntity(String.class);
System.out.println(" responseEntity start: " + responseEntity);
System.out.println(" responseEntity end: ");
if (responseEntity != null) {
responseAsString = responseEntity;
}
} catch (com.ibm.websphere.ssl.SSLException e) {
System.out.println(" com.ibm.websphere.ssl.SSLException");
e.printStackTrace();
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return responseAsString;
}
Does your client cert use a reasonable signature algorithm? TLS 1.2 would give the server an opportunity to restrict that.
This was info provided by the client. It shows TLSv1.2 and proper ciphers. TLSv1.2: server selection: enforce server preferences 3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA384 3f- (key: RSA) ECDHE_RSA_WITH_AES_256_CBC_SHA 3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA256 3f- (key: RSA) ECDHE_RSA_WITH_AES_128_CBC_SHA Tomcat Server Level: ciphers="TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
Those just look like ciphers. That doesn't really say anything about the signature algorithms used in the client certificates chain.
Same info was sent with TLSv1. How would signature algorithms come into play with TLSv1.2?
Why does it return 0?
The following code is a recursion example which is supposed to return how many of the digits between 0 and 9 (including 9) occur in the given String argument:
public int sayiAdediBul(String str){
int holder = 0;
if (sayi == 9) { //base case
for (int a = 0; a < str.length(); a++) {
if(Integer.parseInt(str.substring(a,a+1)) == sayi) { // im increasing sayi int variable everytime calling the method
holder++;
}
}
System.out.print(sayi + " :");
return holder; // here, holder returns 0 , isnt it supposed to return how many 9 there are
}
}
There is no recursion in this code. And sayi isn't declared; is it 9 or isn't it?
Your function seems to miss an else block, is this intentional?
Please create a proper [mcve], the sayi is missing and the method is not completely shown.
please rectify your piece of code.
Your question is really unclear, but I'll help a little:
if(Integer.parseInt(str.substring(a,a+1)) == sayi) ...
in your case, this is false. The variable holder is initialized to 0 and is never incremented, because execution never enters the if statement; so holder is still 0 when it is returned.
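For comparison, here is a working (non-recursive) version of the digit count the question seems to be after; Python is used for illustration, and the explicit digit parameter stands in for the undeclared sayi variable:

```python
def count_digit(s: str, digit: int) -> int:
    """Count how many times the given digit (0-9) occurs in s."""
    if not 0 <= digit <= 9:
        raise ValueError("digit must be between 0 and 9")
    # Compare each character against the digit's string form,
    # which avoids parsing non-digit characters at all
    return sum(1 for ch in s if ch == str(digit))

print(count_digit("a9b99c", 9))  # → 3
print(count_digit("123", 9))     # → 0
```

Note that the original code would also throw a NumberFormatException on any non-digit character, since it calls Integer.parseInt on every one-character substring.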
Custom keyboard event on the Windows side
CTRL + C means Copy
CTRL + V means Paste
I want to write my own custom keyboard shortcut like the ones above.
For instance,
I have the e-mail address
<EMAIL_ADDRESS>
If I press CTRL + W, I want my e-mail address to be written automatically wherever I paste it.
How can I do this on the Windows side?
Or must I write code in C# or something else?
Thanks
There are many programs that handle global keyboard events in Windows, like this one, which is a scripting tool that can do what you want.
Sharon
Centos 7 stopped detecting all RAM on system?
After months of working fine, the BIOS for my CentOS 7 installation suddenly maps the wrong amount of memory to the kernel. I have 8 GB of RAM installed and the OS only maps 3.2 GB of RAM. I can't understand why.
This is all the info that I believe to be of concern, but if you need something more please let me know.
About the hardware:
CPU: AMD Phenom(tm) 9550 Quad-Core Processor
Motherboard: GA-MA780G-UD3H
Rare Hardware: MegaRAID SAS 2008 [Falcon]
dmesg | grep -i mem
[ 0.000000] BIOS-e801: [mem 0x0000000000000000-0x000000000009efff] usable
[ 0.000000] BIOS-e801: [mem 0x0000000000100000-0x00000000cdecffff] usable
[ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[ 0.000000] e820: update [mem 0xcff00000-0xffffffff] usable ==> reserved
[ 0.000000] found SMP MP-table at [mem 0x000f57c0-0x000f57cf] mapped at [fffffffffd2007c0]
[ 0.000000] Base memory trampoline at [ffff880000098000] 98000 size 24576
[ 0.000000] RAMDISK: [mem 0x3529f000-0x36947fff]
[ 0.000000] Faking a node at [mem 0x0000000000000000-0x00000000cdecffff]
[ 0.000000] NODE_DATA(0) allocated [mem 0xcdeae000-0xcdecffff]
[ 0.000000] crashkernel: memory value expected
[ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff]
[ 0.000000] DMA32 [mem 0x0000000001000000-0x00000000cdecffff]
[ 0.000000] Early memory node ranges
[ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009efff]
[ 0.000000] node 0: [mem 0x0000000000100000-0x00000000cdecffff]
[ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x00000000cdecffff]
[ 0.000000] DMA zone: 64 pages used for memmap
[ 0.000000] DMA32 zone: 13116 pages used for memmap
[ 0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
[ 0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x000fffff]
[ 0.000000] e820: [mem 0xcded0000-0xffffffff] available for PCI devices
[ 0.007556] Freeing SMP alternatives memory: 32K
[ 0.028268] Fam 10h mmconf [mem 0xe0000000-0xe00fffff]
[ 0.028268] bus: 00 [mem 0x000a0000-0x000bffff]
[ 0.028268] bus: 00 [mem 0xd0000000-0xdfffffff]
[ 0.028268] bus: 00 [mem 0xe0600000-0xffffffff]
[ 0.028268] bus: 00 [mem 0xe0100000-0xe05fffff]
[ 0.028268] bus: 00 [mem 0x230000000-0xfcffffffff]
[ 0.028268] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[ 0.038526] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
[ 0.039286] PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in ACPI motherboard resources
[ 0.046039] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[ 0.046040] pci_bus 0000:00: root bus resource [mem 0x000c0000-0x000dffff window]
[ 0.046042] pci_bus 0000:00: root bus resource [mem 0xd0000000-0xfebfffff window]
[ 0.046769] pci 0000:00:11.0: reg 0x24: [mem 0xfe02f000-0xfe02f3ff]
[ 0.046916] pci 0000:00:12.0: reg 0x10: [mem 0xfe02e000-0xfe02efff]
[ 0.047077] pci 0000:00:12.1: reg 0x10: [mem 0xfe02d000-0xfe02dfff]
[ 0.047240] pci 0000:00:12.2: reg 0x10: [mem 0xfe02c000-0xfe02c0ff]
[ 0.047432] pci 0000:00:13.0: reg 0x10: [mem 0xfe02b000-0xfe02bfff]
[ 0.047588] pci 0000:00:13.1: reg 0x10: [mem 0xfe02a000-0xfe02afff]
[ 0.047749] pci 0000:00:13.2: reg 0x10: [mem 0xfe029000-0xfe0290ff]
[ 0.048567] pci 0000:00:14.5: reg 0x10: [mem 0xfe028000-0xfe028fff]
[ 0.049170] pci 0000:01:00.0: reg 0x10: [mem 0xd0000000-0xdfffffff 64bit pref]
[ 0.049179] pci 0000:01:00.0: reg 0x18: [mem 0xfd7e0000-0xfd7effff 64bit]
[ 0.049197] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]
[ 0.049293] pci 0000:01:00.1: reg 0x10: [mem 0xfd7fc000-0xfd7fffff 64bit]
[ 0.052017] pci 0000:00:02.0: bridge window [mem 0xfd700000-0xfd7fffff]
[ 0.052020] pci 0000:00:02.0: bridge window [mem 0xd0000000-0xdfffffff 64bit pref]
[ 0.052087] pci 0000:02:00.0: reg 0x14: [mem 0xfdffc000-0xfdffffff 64bit]
[ 0.052098] pci 0000:02:00.0: reg 0x1c: [mem 0xfdf80000-0xfdfbffff 64bit]
[ 0.052112] pci 0000:02:00.0: reg 0x30: [mem 0x00000000-0x0001ffff pref]
[ 0.055016] pci 0000:00:04.0: bridge window [mem 0xfdf00000-0xfdffffff]
[ 0.055019] pci 0000:00:04.0: bridge window [mem 0xfde00000-0xfdefffff 64bit pref]
[ 0.055100] pci 0000:03:00.0: reg 0x18: [mem 0xfddff000-0xfddfffff 64bit]
[ 0.055112] pci 0000:03:00.0: reg 0x20: [mem 0xfdcfc000-0xfdcfffff 64bit pref]
[ 0.058021] pci 0000:00:09.0: bridge window [mem 0xfdd00000-0xfddfffff]
[ 0.058024] pci 0000:00:09.0: bridge window [mem 0xfdc00000-0xfdcfffff 64bit pref]
[ 0.058104] pci 0000:04:00.0: reg 0x18: [mem 0xfdaff000-0xfdafffff 64bit pref]
[ 0.058116] pci 0000:04:00.0: reg 0x20: [mem 0xfdae0000-0xfdaeffff 64bit pref]
[ 0.058125] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
[ 0.061019] pci 0000:00:0a.0: bridge window [mem 0xfdb00000-0xfdbfffff]
[ 0.061022] pci 0000:00:0a.0: bridge window [mem 0xfda00000-0xfdafffff 64bit pref]
[ 0.061100] pci 0000:00:14.4: bridge window [mem 0xfd900000-0xfd9fffff]
[ 0.061104] pci 0000:00:14.4: bridge window [mem 0xfd800000-0xfd8fffff pref]
[ 0.061109] pci 0000:00:14.4: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
[ 0.061111] pci 0000:00:14.4: bridge window [mem 0x000c0000-0x000dffff window] (subtractive decode)
[ 0.061112] pci 0000:00:14.4: bridge window [mem 0xd0000000-0xfebfffff window] (subtractive decode)
[ 0.062580] pci 0000:01:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[ 0.071632] e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
[ 0.071634] e820: reserve RAM buffer [mem 0xcded0000-0xcfffffff]
[ 0.089630] pnp 00:01: disabling [mem 0x00000000-0x00000fff window] because it overlaps 0000:01:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
[ 0.089635] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:02:00.0 BAR 6 [mem 0x00000000-0x0001ffff pref]
[ 0.089640] pnp 00:01: disabling [mem 0x00000000-0x00000fff window disabled] because it overlaps 0000:04:00.0 BAR 6 [mem 0x00000000-0x0000ffff pref]
[ 0.089699] system 00:01: [mem 0xfee00400-0xfee00fff window] has been reserved
[ 0.090378] system 00:03: [mem 0xe0000000-0xefffffff] has been reserved
[ 0.090592] system 00:04: [mem 0x000d7c00-0x000d7fff] has been reserved
[ 0.090595] system 00:04: [mem 0x000f0000-0x000f7fff] could not be reserved
[ 0.090597] system 00:04: [mem 0x000f8000-0x000fbfff] could not be reserved
[ 0.090598] system 00:04: [mem 0x000fc000-0x000fffff] could not be reserved
[ 0.090600] system 00:04: [mem 0xcfee0000-0xcfefffff] could not be reserved
[ 0.090602] system 00:04: [mem 0xffff0000-0xffffffff] has been reserved
[ 0.090603] system 00:04: [mem 0x00000000-0x0009ffff] could not be reserved
[ 0.090605] system 00:04: [mem 0x00100000-0xcfedffff] could not be reserved
[ 0.090607] system 00:04: [mem 0xfec00000-0xfec00fff] could not be reserved
[ 0.090609] system 00:04: [mem 0xfee00000-0xfee00fff] could not be reserved
[ 0.090610] system 00:04: [mem 0xfff80000-0xfffeffff] has been reserved
[ 0.097979] pci 0000:01:00.0: BAR 6: assigned [mem 0xfd700000-0xfd71ffff pref]
[ 0.097988] pci 0000:00:02.0: bridge window [mem 0xfd700000-0xfd7fffff]
[ 0.097990] pci 0000:00:02.0: bridge window [mem 0xd0000000-0xdfffffff 64bit pref]
[ 0.097994] pci 0000:02:00.0: BAR 6: assigned [mem 0xfdf00000-0xfdf1ffff pref]
[ 0.098000] pci 0000:00:04.0: bridge window [mem 0xfdf00000-0xfdffffff]
[ 0.098009] pci 0000:00:04.0: bridge window [mem 0xfde00000-0xfdefffff 64bit pref]
[ 0.098017] pci 0000:00:09.0: bridge window [mem 0xfdd00000-0xfddfffff]
[ 0.098019] pci 0000:00:09.0: bridge window [mem 0xfdc00000-0xfdcfffff 64bit pref]
[ 0.098023] pci 0000:04:00.0: BAR 6: assigned [mem 0xfdb00000-0xfdb0ffff pref]
[ 0.098028] pci 0000:00:0a.0: bridge window [mem 0xfdb00000-0xfdbfffff]
[ 0.098030] pci 0000:00:0a.0: bridge window [mem 0xfda00000-0xfdafffff 64bit pref]
[ 0.098041] pci 0000:00:14.4: bridge window [mem 0xfd900000-0xfd9fffff]
[ 0.098044] pci 0000:00:14.4: bridge window [mem 0xfd800000-0xfd8fffff pref]
[ 0.098056] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
[ 0.098057] pci_bus 0000:00: resource 7 [mem 0x000c0000-0x000dffff window]
[ 0.098059] pci_bus 0000:00: resource 8 [mem 0xd0000000-0xfebfffff window]
[ 0.098062] pci_bus 0000:01: resource 1 [mem 0xfd700000-0xfd7fffff]
[ 0.098063] pci_bus 0000:01: resource 2 [mem 0xd0000000-0xdfffffff 64bit pref]
[ 0.098067] pci_bus 0000:02: resource 1 [mem 0xfdf00000-0xfdffffff]
[ 0.098068] pci_bus 0000:02: resource 2 [mem 0xfde00000-0xfdefffff 64bit pref]
[ 0.098071] pci_bus 0000:03: resource 1 [mem 0xfdd00000-0xfddfffff]
[ 0.098073] pci_bus 0000:03: resource 2 [mem 0xfdc00000-0xfdcfffff 64bit pref]
[ 0.098075] pci_bus 0000:04: resource 1 [mem 0xfdb00000-0xfdbfffff]
[ 0.098077] pci_bus 0000:04: resource 2 [mem 0xfda00000-0xfdafffff 64bit pref]
[ 0.098080] pci_bus 0000:05: resource 1 [mem 0xfd900000-0xfd9fffff]
[ 0.098082] pci_bus 0000:05: resource 2 [mem 0xfd800000-0xfd8fffff pref]
[ 0.098086] pci_bus 0000:05: resource 6 [mem 0x000a0000-0x000bffff window]
[ 0.098087] pci_bus 0000:05: resource 7 [mem 0x000c0000-0x000dffff window]
[ 0.098089] pci_bus 0000:05: resource 8 [mem 0xd0000000-0xfebfffff window]
[ 0.440156] pci 0000:01:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[ 0.889247] Freeing initrd memory: 23204K
[ 1.415893] Non-volatile memory driver v1.3
[ 1.416846] ehci-pci 0000:00:12.2: irq 17, io mem 0xfe02c000
[ 1.423652] ehci-pci 0000:00:13.2: irq 19, io mem 0xfe029000
[ 1.430547] ohci-pci 0000:00:12.0: irq 16, io mem 0xfe02e000
[ 1.486520] ohci-pci 0000:00:12.1: irq 16, io mem 0xfe02d000
[ 1.542513] ohci-pci 0000:00:13.0: irq 18, io mem 0xfe02b000
[ 1.598498] ohci-pci 0000:00:13.1: irq 18, io mem 0xfe02a000
[ 1.654502] ohci-pci 0000:00:14.5: irq 18, io mem 0xfe028000
[ 1.724405] Freeing unused kernel memory: 2232K
[ 1.724676] Freeing unused kernel memory: 132K
[ 1.726622] Freeing unused kernel memory: 544K
[ 2.019715] resource sanity check: requesting [mem 0x000c0000-0x000dffff], which spans more than pnp 00:04 [mem 0x000d7c00-0x000d7fff]
[ 2.020483] [TTM] Zone kernel: Available graphics memory: 1651458 kiB
[ 2.020515] [drm] radeon: 256M of VRAM memory ready
[ 2.020516] [drm] radeon: 512M of GTT memory ready.
uname -a
Linux <host> 4.14.0-1.el7.elrepo.x86_64 #1 SMP Sun Nov 12 20:21:04 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
lshw -short | grep 'System Memory'
/0/29 memory 8GiB System Memory
free -m
total used free shared buff/cache available
Mem: 3225 155 95 8 2974 2838
Swap: 8191 0 8191
lshw -c memory
*-firmware
description: BIOS
vendor: Award Software International, Inc.
physical id: 0
version: F5
date: 10/08/2009
size: 128KiB
capacity: 960KiB
capabilities: isa pci pnp apm upgrade shadowing cdboot bootselect socketedrom edd int13floppy360 int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb agp ls120boot zipboot biosbootspecification
*-cache:0
description: L1 cache
physical id: a
slot: Internal Cache
size: 128KiB
capacity: 128KiB
capabilities: synchronous internal write-back
configuration: level=1
*-cache:1
description: L2 cache
physical id: c
slot: External Cache
size: 512KiB
capacity: 512KiB
capabilities: synchronous internal write-back
configuration: level=2
*-cache
description: L1 cache
physical id: b
slot: Internal Cache
size: 128KiB
capacity: 128KiB
capabilities: synchronous internal write-back
configuration: level=1
*-memory
description: System Memory
physical id: 29
slot: System board or motherboard
size: 8GiB
*-bank:0
description: DIMM 800 MHz (1.2 ns)
product: None
vendor: None
physical id: 0
serial: None
slot: A0
size: 2GiB
width: 64 bits
clock: 800MHz (1.2ns)
*-bank:1
description: DIMM 800 MHz (1.2 ns)
product: None
vendor: None
physical id: 1
serial: None
slot: A1
size: 2GiB
width: 64 bits
clock: 800MHz (1.2ns)
*-bank:2
description: DIMM 800 MHz (1.2 ns)
product: None
vendor: None
physical id: 2
serial: None
slot: A2
size: 2GiB
width: 64 bits
clock: 800MHz (1.2ns)
*-bank:3
description: DIMM 800 MHz (1.2 ns)
product: None
vendor: None
physical id: 3
serial: None
slot: A3
size: 2GiB
width: 64 bits
clock: 800MHz (1.2ns)
cat /proc/meminfo | grep Direct
DirectMap4k: 95040 kB
DirectMap2M: 2230272 kB
DirectMap1G: 2097152 kB
Please add to the question: cat /proc/meminfo | grep Direct
Errr... did you change any BIOS settings? Maybe memory remapping, memory hole, etc. ?
@RuiFRibeiro added ;)
4GB indeed...strange.
@derobert no, and I tried to find the options and without any success. I also tried to remove one module, start - stop, add it again without any change.
do a memtest....
@RuiFRibeiro I did it too but only detects the same 3.2GB so the final test end successfully.
I agree with @RuiFRibeiro, but different versions of memtest support different amounts of memory; for 8 GB you need version 5. Before the memory test, you need to remove all modules, wipe the contacts with an eraser, and degrease them with alcohol.
@Alex_Krug I use the version 5, but I'll remove as you said with wipe included and retest everything. Thanks :)
@Alex_Krug I did what you said, I cleaned them and did the Memtest v5.01 and again, test without errors but only 3294MB of RAM.
@RuiFRibeiro as far as I can see, I think the problem is that my BIOS is not reporting memory with the e820 function, only with e801, and e801 reports exactly 3294 MB. I tried to install a new version of the BIOS, but the result is still the same.
The BIOS is not to blame. What is the memory configuration? How many slots are occupied? Which modules? I strongly suspect the slots on the motherboard. Take out the modules and test them separately in different slots; in short, try every combination to identify the faulty slot or module. And as a reminder, no OS should be loaded when testing the memory.
@Alex_Krug I tested it with memtest, so no OS was loaded. I tried one module of memory in each slot, but I didn't try the 4 slots with the four modules (the 16 combinations); I'll try that. About the memory configuration, the important thing is that it isn't new (and it worked with the full 8GB). It has been unchanged for a year: 4 slots with 2GB in each one, bought in a single package for maximum compatibility.
Upload and play a song
I'm trying to upload and play a song with HTML5. I'm using JavaScript to upload the file to the server and jPlayer to play the song, but I'm having issues with this plug-in.
This is my code:
$(document).ready(function() {
$(this)
.bind("dragenter", function(event) {
return false;
})
.bind("dragover", function(event) {
return false;
})
.bind("drop", function(event) {
var file = event.originalEvent.dataTransfer.files[0];
event.preventDefault();
$("#state").html("Loading...");
$.ajax({
url: 'upload.php',
async: true,
type: 'POST',
contentType: 'multipart/form-data',
processData: false,
data: file,
success: function(data) {
$("#state").html("Ready!");
$("#player").jPlayer( {
ready: function() {
$(this).jPlayer("setMedia", {
oga: file.name
}).jPlayer("play");
},
supplied: "oga"
});
},
beforeSend: function(xhr) {
xhr.setRequestHeader("X-File-Name", file.name);
xhr.setRequestHeader("Cache-Control", "no-cache");
}
});
});
});
The file is uploaded to the server, but jPlayer doesn't play it and I can't figure out why...
@vigrond: Yes i can! ;)
<html id = "homepage">
<head>
<title>Echo</title>
<script type = "text/javascript" src = "http://ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js"></script>
<script type = "text/javascript" src = "jquery.jplayer.min.js"></script>
<script type = "text/javascript" src = "upload.js"></script>
</head>
<body bgcolor = "black">
<div style = "margin: 0 auto; text-align: center">
<h1 style = "margin-top: 100px; color: white">Drag and drop a song...</h1>
<h2 id = "state" style = "color: white"></h2>
</div>
<div id = "player"></div>
</body>
</html>
Can you supply your HTML as well? There are a few steps necessary to get jPlayer to run correctly, and the correct HTML elements are one of them. You may also want to try setting errorAlerts: true and warningAlerts: true. This will often give you specific information about what is going on through alert dialogs.
The main issue to understand is that all browsers act differently when it comes to HTML5 audio support. (see here: http://www.w3schools.com/html5/html5_audio.asp)
That is why jPlayer has the flash backup solution.
By default, jPlayer tries the html5 solution first, and falls back to flash with this default setting:
solution: "html,flash" //Set by default, no declaration necessary
In order for the flash support to function, you must set swfPath to the containing directory of the Jplayer.swf file that jPlayer comes with.
swfPath: "/js"
Additionally, jPlayer recommends at least 2 different versions of the same file to maximize support for HTML5. For example, .ogg and .mp3.
$("#player").jPlayer({
ready: function () {
$(this).jPlayer("setMedia", {
oga: "http://www.vigrond.com/jplayerTest/beer.ogg",
mp3: "http://www.vigrond.com/jplayerTest/beer.mp3"
}).jPlayer("play");
},
supplied: "oga, mp3",
swfPath: "/js",
solution: "html,flash"
});
For an example, I set up a test page here of an invisible jPlayer player, with the code and directory structure: http://vigrond.com/blog/2011/12/01/invisible-html5-flash-audio-player-using-jplayer/
Let me know if this helps!
I knew that; the point is I don't want to use Flash on my website. I'm thinking about converting audio files on the fly in order to maximize cross-browser compatibility.
Today my code is working without any modification, though. I guess that yesterday I was testing an older version of the page cached by the browser.
Anyway, I have another issue. The uploading script works fine with Firefox, Chrome and Internet Explorer, but it doesn't work with Safari. When I drag and drop a file, the page uploads only 15B to the server and I can't figure out why. Any suggestions?
exponentially weighted average c++ nested for loops
I'm looking to implement an exponentially weighted moving average with a sliding window using data that I am pulling from a large data set.
The code works but the results are definitely not what they should be and I can't seem to figure out why. Here is my code and please give me good detail as to what I am exactly doing wrong:
for(unsigned int i = window; i< close_price.size(); i++)
{
double tmp3;
double tmp4;
for(int j = 0; j < window; j++)
{
tmp3 += pow(lambda,j) * pow(close_price[i-j], 2);
tmp4 += pow(close_price[i-j], 2);
if(j == window-1)
{
double temp = (1-lambda) * (pow(close_price[window], 2) + tmp3);
ewma.push_back( sqrt(temp) );
sma.push_back( tmp4/window );
}
tmp3 = 0;
tmp4 = 0;
}
}
That's some wonky indentation right there.
@Borgleader There's more than one indent style and as long as one is consistent, there's really no right or wrong here. No reason for religious arguments on SO.
Nope, it was not aligned, still a brace is misplaced, I did not manage to correct all of them as I just noticed. :)
@Barnabas I didn't notice that someone already edited the post, the original version certainly qualified for wonky :-)
I knew that was the case, my "fault", that is why I wrote instead of Borgleader :)
Basically the problem is that tmp3 and tmp4 are not initialized, so tmp3+=blah has undefined result.
The way I see your code now, it should look something like this:
for(size_t i = window; i< close_price.size(); i++)
{
double tmp3 = 0.0;
double tmp4 = 0.0;
for(size_t j = 0; j < window; j++)
{
tmp3 += pow(lambda,j) * pow(close_price[i-j], 2);
tmp4 += pow(close_price[i-j], 2);
}
double temp = (1-lambda) * (pow(close_price[window], 2) + tmp3);
ewma.push_back( sqrt(temp) );
sma.push_back( tmp4/window );
}
Explanation: there is no need for the extra if within the for loop, as j's last value will be window-1 anyway, and tmp3 and tmp4 are to be initialized at the start of every i-loop iteration. Also, the return type of size() is size_t, not unsigned int.
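As an aside (not part of the fix above, and the seeding choice is an assumption): the same exponentially weighted statistic over squared values can be computed in one pass with the EWMA recurrence, avoiding the inner window loop entirely:

```cpp
#include <cmath>
#include <vector>

// EWMA of squared values via the recurrence s = lambda*s + (1-lambda)*v^2,
// emitting sqrt(s) at each step. One pass over the data, no inner loop.
std::vector<double> ewma_sqrt(const std::vector<double>& x, double lambda) {
    std::vector<double> out;
    if (x.empty()) return out;
    double s = x[0] * x[0];  // seed with the first squared value (a choice)
    for (double v : x) {
        s = lambda * s + (1.0 - lambda) * v * v;
        out.push_back(std::sqrt(s));
    }
    return out;
}
```

For a constant series the output settles at the constant itself, which is a quick sanity check for either implementation.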
Thank you I just could not see what was going wrong.
When should you validate a JWT token in a React App?
I recently implemented JWT tokens in my React + ASP.NET Core 6 Web application.
When a user signs in, the request is sent via HTTP request to the server to issue a JWT token
back to the client. The client then sends another request to validate the JWT token received, in which the server sends a "success" or "rejected" response back to the client.
Now, this is done once when a user signs in, and the JWT token is stored in a Cookie. The Cookie expires 5 days after it is issued, so if a user closes the tab or browser, if they reopen the application, they will automatically be logged in due to the Cookie stored. Note: The JWT token from the Cookie is validated again once a user comes back.
Here is the tricky part...
Since this is a SPA, the JWT token validation happens in the useEffect() hook in the AuthContext that handles user auth.
When a user clicks into a new page, only the child components are rendered, and the AuthContext / Navbar are not, since they are Higher Order Components acting as wrappers. Because of this, the JWT token is not revalidated each time a user visits a new page.
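For context, one middle ground is to re-validate only when the last server-side check is older than some threshold, triggered on route changes. This is a sketch with assumed names (the endpoint, state setter, and threshold are not from the app):

```javascript
// Hypothetical helper: should the JWT be re-validated against the server?
// The staleness threshold is an assumption, not part of the original app.
function needsRevalidation(lastValidatedAt, now, maxAgeMs) {
  return lastValidatedAt === null || now - lastValidatedAt > maxAgeMs;
}

// Sketch of use inside the AuthContext (names assumed):
// useEffect(() => {
//   if (needsRevalidation(lastValidatedAt, Date.now(), 5 * 60 * 1000)) {
//     fetch("/api/auth/validate", { credentials: "include" })
//       .then((res) => setAuthenticated(res.ok))
//       .catch(() => setAuthenticated(false));
//   }
// }, [location.pathname]); // re-check when the route changes
```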
Is this secure? Should a revalidation trigger every single time a user visits a new "page"? Are there any security concerns?
Chrome redownloading the same Excel file despite it being replaced on the server?
We uploaded an updated version of an Excel file to our client portal. Tested it on our test account and it downloads the most up-to-date file.
The client that needs to retrieve it is using Chrome and when they download it they say they keep getting the old file without the updates. They switched to IE and it downloaded the file fine.
Sounds like a caching issue with Chrome, but I did have them press CTRL + F5 and they said it didn't clear up the issue.
Anyone experienced this issue before? If it isn't user error, what is the solution?
You can usually force the browser to get a new version of a file by putting a query string after it, something like ?v=2.0 should be enough.
Would that force the browser to get the most current version, whether the most current version is the first version, the 2nd, 3rd... 25th?
If I'm not wrong, it merely makes the file appear as if it's a different one. If you want to force the file not to be cached, you should either change caching rules in your webserver or maybe append a timestamp string after ?v= using Javascript or your backend language of choice.
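A sketch of that query-string trick (the element id and file path are illustrative assumptions; as noted, the server's caching headers are the durable fix):

```javascript
// Append a changing query parameter so the browser treats the URL as new
// and bypasses its cached copy of the file.
function cacheBustedUrl(baseUrl) {
  var sep = baseUrl.indexOf("?") === -1 ? "?" : "&";
  return baseUrl + sep + "t=" + Date.now();
}

// e.g. point the download link at the busted URL (names assumed):
// document.getElementById("download-link").href =
//   cacheBustedUrl("/files/report.xlsx");
```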
adding timestamp on each download does not work. Like this + '?t=' + Date.now()
Making \xrightarrow longer but without using spaces
I have this table, and as you can see I make the last arrow longer with loads of spaces. Is there a better way to make it as long as the other \plusarrows but at the same time with a centered text (only the equal sign)?
\documentclass[a4paper]{article}
\usepackage{amsmath}
\usepackage{ulem}
\usepackage{booktabs}
\usepackage{float} % provides the [H] specifier used below
\newcommand{\plusarrow}{$\xrightarrow{+\Delta V \%}$}
\newcommand{\bmin}{\textbf{--}}
\begin{document}
\begin{table}[H]
\centering
\begin{tabular}{c l r c r}
& \textbf{Ricavi} & 2000 & \plusarrow{} &
\textbf{2140.00}\\
\bmin{} & Costi operativi \dashuline{variabili} [660] & &
\plusarrow{} & \textbf{706.20}\\
& Acquisti materie prime & 150\\
& Provvigioni passive & 180\\
& Lavoro straordinario & 20\\
& Lavorazioni esterne & 310\\
\midrule
= & \textbf{Margine di contribuzione} & 1340 & \plusarrow{} &
\textbf{1433.80}\\
\bmin{} & Costi operativi \dashuline{fissi} [860] & &
$\xrightarrow{\ ~~=~~\ }$ & \textbf{860.00}\\ %% HERE
& Lavoro ordinario & 420\\
& Ammortamenti & 210\\
& Affitti passivi & 90\\
& Canoni di leasing & 140\\
\midrule
= & \textbf{Risultato operativo lordo} & 480 & & \textbf{573.80}
\end{tabular}
\end{table}
\end{document}
This is the current result:
While it is perfectly fine, I don't like the hack that I used to make it work.
A simple solution using phantoms and \xrightarrow's optional (below-arrow) argument:
\newcommand{\foo}{\phantom{+\Delta V \%}}
\newcommand{\equalarrow}{%
$\vphantom{\xrightarrow{=}}%
\smash{\xrightarrow[\foo]{=}}%
$}
Note that \equalarrow already contains $ $ in its definition. Also, I added the @Heiko's suggestion to adjust the vertical space created by the invisible content.
It works perfectly! But why can't I use my \plusarrow{} command inside of phantom?
@rubik, I edited.
Because of the invisible contents below the arrow, the vertical spacing is increased accordingly. \hphantom is better than \phantom here. The additional invisible depth can be entirely removed by \smash: \newcommand{\equalarrow}{$\vphantom{\xrightarrow{=}}\smash{\xrightarrow[\foo]{=}}$}
task based async pattern for wcf
I am implementing the task-based async pattern for WCF. The method executes a stored procedure and does lots of processing on the data it gets; it can also throw exceptions.
The question is how to implement that.
option 1.
await command.ExecuteScalarAsync();
// run 10000 lines of processing including exception handling
option 2
await command.ExecuteScalarAsync();
Task.Factory.StartNew(() => /* run 10000 lines of processing including exception handling */);
Maybe there are other options?
What are the pros and cons of each of them?
Also, if I have already implemented the sync version of that method, should I use it?
I don't understand your logic of throwing the exception. Why?
For example, it tried to run some stored procedure and failed to open a connection. I have fixed the post; what I meant is that the processing includes exception handling as well.
What's the issue with await and an exception?
the question was which option 1/2 is better pros/cons.
Using await, it looks like only part of my code will be asynchronous: command.ExecuteScalarAsync();
Option 1 is the better option. However, be sure you document that the function has both an asynchronous component and a long-running synchronous component.
If the caller of your function decides that the synchronous component is taking too long and blocking their UI or similar the caller can decide to wrap the call in a separate thread. Forcing the code to be in a separate thread like you did in option 2 does not scale well. In situations where you will experience high loads, like web servers, you can greatly hurt performance by generating those extra unnecessary threads.
Rich domain model vs strategy pattern in DDD
I've recently watched a few Pluralsight courses on DDD by Vladimir Khorikov. He encourages creating rich rather than anemic domain models. It all looked very nice in a small test project, however I still have no idea how to put extensive business logic inside a rich domain model.
In a rich domain model we are supposed to put domain logic into entities. So let's say we want to model an Employer who pays the salary to their Employees. I see Employer as the aggregate root here.
So we add an Employer.PayTo(someEmployeeIdentificator) method. Business rules for calculating the salary could be very elaborate and depend on things like:
Employer and Employees countries
Employment form
Employee taxation form
How much did an Employee work last month
and so much more, you get the point
Potentially, there could be a few dozens of algorithms. Some might even require communication with external services. A perfect case for strategy pattern, but:
The logic was supposed to be implemented inside the entities
Employees are hidden inside the Employer aggregate root
I cannot use IOC to inject stuff into my entities (they are usually created by some ORM). And providing the dependency trees is no fun.
Injecting some implementations of SalaryCalculator into the Employer entity might be a bad idea, as the calculators might not be 'pure'. They might have references to some external resources (e.g. an issue tracker)
How would you model it?
The logic was supposed to be implemented inside the entities
Yes, although it's usually more specific than that - the logic that computes the new state of an entity is supposed to be "inside" the entity.
Employees are hidden inside the Employer aggregate root
That may not be an effective model to use for this kind of problem; it might make more sense to focus on the transactions (exchanges of money), or the ledgers, rather than on the people.
I find it helpful to remember that aggregates are digital things; they are (parts of) documents that describe something, not the thing itself.
I cannot use IOC to inject stuff into my entities
Remember, inversion of control "is really just a pretentious way of saying taking an argument." We don't usually inject stuff into entities, that's true. Instead, we pass stuff to the entities as arguments (domain services).
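A minimal sketch of that idea (all names here are assumptions, not from the question): the salary computation is a domain service passed to the entity as an argument, so nothing has to be injected into the entity's state:

```java
// A domain service: computes pay under some policy (country, employment
// form, taxation form, ...). Implementations may talk to external services.
interface SalaryPolicy {
    long salaryFor(String employeeId);
}

class Employer {
    // The aggregate root receives the service as a method argument; the
    // entity stays ORM-friendly because no dependency lives in its fields.
    long payTo(String employeeId, SalaryPolicy policy) {
        long amount = policy.salaryFor(employeeId);
        // ... record the payment in the aggregate's state ...
        return amount;
    }
}
```

The caller (an application service) picks the concrete policy, so strategy-style variation stays possible without a container touching the entity.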
Injecting some implementations of SalaryCalculator into the Employer entity might be a bad idea, as the calculators might not be 'pure'. They might have references to some external resources (e.g. an issue tracker)
The early discussions of domain model (Evans in the blue book, Fowler in Patterns of Enterprise Application Architecture) don't call for it to be pure. It's objects calling other objects, with the expectation of side effects.
As best I can tell, you really have two options - you can make the orchestration among objects part of the domain model, or you can treat the model itself as just a single collaborator, and manage the orchestration elsewhere.
How would you model it?
"It depends". I'd probably begin by separating out different processes
Calculating how much compensation has accrued during some pay period
Calculating the dispersal of some amount of compensation
Tracking and confirming the asset transfers
Additionally, pay close attention to correctly modeling subtle distinctions; if salaried compensation, hourly compensation, and commissioned sales exist in your domain, then they should be separately identifiable in your model. "Almost the same" is just a wishy-washy way of spelling different.
I see Employer as the aggregate root here.
When you do OO modeling, usually a business function is defined on the subject of the action, not the initiator of the action.
It depends on the requirements of course, but I would perhaps model this as Employee.receiveSalary().
Another indicator that this may be "better", is that in your case the method needs the employee id (basically a reference to another object) to do its work. It indicates that the method wants to be in another object.
Injecting some implementations of SalaryCalculator...
I would probably not create an object named SalaryCalculator. The problem with that is, it is not part of the "domain". Even if you want to create different "strategies", you are better off finding a way to incorporate it into the domain itself.
This might be as easy as just naming the thing Salary instead of SalaryCalculator. There can be HourlySalary, MonthlySalary, whatever. That makes sense in the domain instead of tacking it on.
The logic was supposed to be implemented inside the entities
Yes. If you can not find a good place to do things, you are probably missing a good abstraction. (See example with SalaryCalculator). If you do it right, you will not need any services or calculators or other things that are not part of the domain.
Employees are hidden inside the Employer aggregate root
If that is a problem, then just don't do that. Proper modeling comes first, technical things come second! Maybe the Employee should be the aggregate root, or there is a third abstraction that can act as a root for this specific case.
I cannot use IOC to inject stuff into my entities...
Again, proper modeling first, technology second. If the technology does not support a correct model, you can choose another technology (there are plenty).
Objects collaborate with each other; saying that a set of objects can't define collaborators seems unreasonable to me.
In rich domain model we are supposed to put domain logic into entities
The problem with this sentence is, people interpret it wrong as
"we are supposed to put all domain logic into entities".
A better interpretation would be:
"we are supposed to put some domain logic into entities"
where "some" means the domain logic which makes most sense. There is no "hard & fast" rule to make a decision where to draw the line, but a good start is to put any logic which can be expressed mostly through the properties of the object itself into the object.
As you already demonstrated, a large operation like PayTo does not fall into that category, so I would recommend to implement it not directly in the Employer class. That does not mean you will end up with an anemic domain model, surely when your program evolves, you will find lots of small operations which fit perfectly into an Employer class.
You don't need your business entities to be the same as your persistence entities. You can use any tool to make the mapping between them easy (in C#, we have AutoMapper).
This way you can inject any dependency you need into your business entities. Besides, it can help you hide properties/attributes which you might have made public just because your ORM needed them.
Weird JSONP response in Facebook
I'm wondering what is the reason for using following JSONP response syntax:
Under URL: https://ect.channel.facebook.com/probe?mode=stream&format=json
There is:
for (;;); {"t":"heartbeat"}
{"t":"heartbeat"}
{"t":"continue","seq":0}
My question is, what exactly does for(;;); do in this JSONP response? How is it parsed?
I see there is no function call, but the technique seems to be similar
No; the technique is exactly the opposite.
This isn't JSONP; it's JSON which is deliberately modified to fail if used as JSONP.
If you include that URL in a <script> tag, it will freeze the browser in an infinite for loop.
This prevents attackers from including it in an external site and calling Object.defineProperty to create a setter function and bypass the SOP.
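For completeness, a sketch of how first-party code (which fetches the response over XHR on the same origin, where reading the body is allowed) typically strips such a guard before parsing; the prefix string is taken from the response above:

```javascript
// Strip the anti-hijacking prefix, then parse the remaining JSON.
function parseGuardedJson(body) {
  var prefix = "for (;;);";
  if (body.indexOf(prefix) === 0) {
    body = body.slice(prefix.length);
  }
  return JSON.parse(body); // JSON.parse tolerates leading whitespace
}
```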
That is there for security reasons. JSONP is not actually JSON, it's a JavaScript file that's executed.
The for(;;); is there, so that if people (outside of Facebook) try to access that file, they can't.
NOTE: This isn't JSONP, but your browser doesn't know that. It'll try to run it, then get stuck in an infinite loop.
Getting the tab URL back to the event script
I'm trying to make an extension that simply gets the URL of the current tab, parses it to get the numerical ID, and opens a new tab with a URL based on that ID.
I'm using an event.js script that executes inject.js, and the latter should get and return the URL. Based on some other sample code I'm able to return other stuff, but I can't figure out the best way to simply return the URL of the tab. Here's the inject.js code:
var injected = injected || (function(){
var methods = {};
methods.gettheurl = function(){
    // ??? how do I get the tab URL here?
};
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
var data = {};
if (methods.hasOwnProperty(request.method))
data = methods[request.method]();
sendResponse({ data: data });
return true;
});
return true;
})();
Any suggestions as to the best method for getting the URL?
I don't understand what you are asking exaclty. How to get the URL? Something like location.href?
Yes, I think what I would like (speaking as a JavaScript novice) is to retrieve the tab URL and then receive the same URL in the event.js script of the extension.
No, unfortunately I did not. Will try again and see where it fails.
The recommended way of getting the URL of the current tab is to use chrome.tabs.query(). Don't forget to have the 'tabs' permission in your manifest file.
Sample code snippet for Tabs permission:
"permissions": [
"tabs"
]
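Putting it together (a sketch; the URL format and target page are assumptions): the ID parsing is plain string work, and chrome.tabs.query supplies the active tab's URL in the event page:

```javascript
// Pure helper: pull the first run of digits out of a tab URL.
function extractId(url) {
  var m = /(\d+)/.exec(url);
  return m ? m[1] : null;
}

// In the event page (requires the "tabs" permission):
// chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
//   var id = extractId(tabs[0].url);
//   if (id) chrome.tabs.create({ url: "https://example.com/item/" + id });
// });
```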
The best way for a WAR file to receive data from an external file
I have a WAR file that requires additional data from an external config file residing in the same folder as the WAR file.
But once I deploy them to Tomcat, the WAR file and the config file will be residing in different places right?
Do I need to insert a special file path to my project before building the WAR file to make sure that the WAR file will still find the config file after deployment?
Thanks.
You can:
include the config file inside the war and read it from this predefined location. This isn't good if you're going to change it after you deploy since every time you deploy a new war, your changes will be overwritten
put the config file outside the war (and maybe even outside of tomcat) and read it from there. Doing this, your changes will survive redeploys of the war.
Thanks. But in this case, if I put the config outside the war, is there a permanent filepath that the war can always find after deployment?
Sure, you can use any path you want; just make sure that path (and file) exists when you deploy it.
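A sketch of the external-file option (the path and property names are assumptions): read a fixed path, overridable with a system property so the same WAR works across environments:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class AppConfig {
    // Default path is an assumption; override with -Dapp.config=/path/to/file
    // (e.g. in Tomcat's setenv.sh) so the file survives redeploys of the WAR.
    static Properties load() throws IOException {
        String path = System.getProperty("app.config", "/etc/myapp/app.properties");
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}
```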
xargs overrides dialog's exit status
I have a dialog created with the following code:
exec 3>&1
selection=$(echo " --backtitle \"Patches\"
--extra-button --extra-label \"See File List\"
--title \"Patches\" --clear --ok-label \"Create\" --cancel-label \"Exit\"
--menu \"Please select:\" $HEIGHT $WIDTH 25 $gridData" |
xargs dialog 2>&1 1>&3)
exit_status=$?
exec 3>&-
The dialog has an extra button in addition to Ok/Cancel pair (though I've renamed them). It works great unless the extra button is clicked, in which case $exit_status has the same value (123) as if the cancel button was clicked. Is there a way I can get dialog's status without xargs interfering with it?
You would only use xargs if you were trying to create multiple dialog boxes at once. There is no need for it here.
According to the man page of xargs:
xargs exits with the following status:
0 if it succeeds
123 if any invocation of the command exited with status 1-125
124 if the command exited with status 255
125 if the command is killed by a signal
126 if the command cannot be run
127 if the command is not found
1 if some other error occurred.
What are you trying to accomplish here? I don't see why you would need xargs in this case. You should instead call dialog directly, like so:
dialog --backtitle Patches \
--extra-button --extra-label "See File List" \
--title Patches --clear --ok-label Create --cancel-label Exit \
--menu "Please select:" $HEIGHT $WIDTH 25 "$gridData"
This will work even if $gridData contains special characters (such as " or spaces).
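A related sketch (the menu items here are made up): if $gridData holds several tag/item pairs, a bash array keeps each element a separate argument, which quoting a single string variable cannot do:

```shell
# Each array element stays its own argument, even with embedded spaces.
gridItems=("1" "First patch" "2" "Second patch")

count_args() { echo "$#"; }
count_args "${gridItems[@]}"   # two tag/item pairs -> 4 arguments

# dialog --title Patches --clear --ok-label Create --cancel-label Exit \
#        --menu "Please select:" "$HEIGHT" "$WIDTH" 25 "${gridItems[@]}"
```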
dialog's exit status is important, because this is the only way to determine which button was clicked
@TichomirMitkov sorry, I want's clear. I meant, don't need xargs here, just call dialog directly.
I need it because of white space. $gridData contains fields, which are surrounded in double quotes.
I don't understand the problem in my first Redux counter
I have an error in my code:
Failed to compile
./src/components/counterTemp.js
Attempted import error: 'decrement' is not exported from '../action/indexAction'.
Here is my code: https://github.com/ennouri0maak/my-error
Here is the exact error:
Well, you have to be careful, because the similar names may have confused you.
What you're importing into the CounterTemp.js file are the names you defined for each action type, not the actual names of your action creators, which are incrementAction, decrementAction and resetAction. Therefore the import would be:
import {
incrementAction,
decrementAction,
resetAction,
} from "../action/indexAction";
Also, in your mapDispatchToProps method you can no longer directly assign increment to increment as before; you have to assign the particular action creator, in this case incrementAction, to increment. Otherwise you will get an error saying that increment, decrement and reset are undefined. So it becomes:
function mapDispatchToProps(dispatch) {
return bindActionCreators(
{
increment: incrementAction,
decrement: decrementAction,
reset: resetAction,
},
dispatch
);
}
The full CounterTemp.js code would then look like this:
import React, { Component } from "react";
import { bindActionCreators } from "redux";
import { connect } from "react-redux";
import {
incrementAction,
decrementAction,
resetAction,
} from "../action/indexAction";
class Counter extends Component {
render() {
const { count, increment, decrement, reset } = this.props;
return (
<div>
<h2>Counter</h2>
<button onClick={decrement}>-</button>
<span> {count} </span>
<button onClick={increment}>+</button>
<br />
<br />
<button onClick={reset}>Reset</button>
</div>
);
}
}
function mapStateToProps(state) {
return {
count: state.count,
};
}
function mapDispatchToProps(dispatch) {
return bindActionCreators(
{
increment: incrementAction,
decrement: decrementAction,
reset: resetAction,
},
dispatch
);
}
Counter = connect(mapStateToProps, mapDispatchToProps)(Counter);
export default Counter;
PS: for barrel files (index files), if they are already nested in a dedicated folder, just name them index.js; the folder they live in conveys their type, not the index file's name. Renaming indexAction.js to index.js lets you import your action creators from "../action" instead of "../action/indexAction".
Did you do an npm install of the project before you started it?
and then just entering those files should do you good, they are hosted in node_modules
guess what
https://i.imgur.com/NB20NKM.png
Thank you, awesome developer <3.
You are trying to import names from a file where they are not defined. If you really want those names, import them from the indexTypes file, although I believe you want to import the actual actions.
import { increment, decrement, reset } from "../action/indexAction";
Although you have defined your actions as incrementAction, decrementAction, resetAction
So you need to do:
import {
incrementAction,
decrementAction,
resetAction
} from '../action/indexActions'
The indexAction.js file does not export anything named decrement, increment, or reset.
So, either you need to refactor the names in indexAction.js, or import them in your counterTemp.js like the following:
import {
incrementAction as increment,
decrementAction as decrement,
resetAction as reset
} from "../action/indexAction";
instead of
import { increment, decrement, reset } from "../action/indexAction";
Transformation from Android's tilt and orientation of stylus to tiltX and tiltY
I'm calling the following methods in Android:
var tilt = MotionEvent.getAxisValue(MotionEvent.AXIS_TILT);
var orientation = MotionEvent.getAxisValue(MotionEvent.AXIS_ORIENTATION);
Now I want to translate above values to tiltX and tiltY defined in W3 spec:
https://w3c.github.io/pointerevents/#dom-pointerevent-tiltx
https://w3c.github.io/pointerevents/#dom-pointerevent-tilty
Is there a way to do such a transformation?
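For what it's worth, here is a sketch of one common conversion (it projects the pen's unit vector onto the X-Z and Y-Z planes, under the assumption that AXIS_TILT is measured from the screen normal and AXIS_ORIENTATION is the clockwise azimuth, both in radians; verify the sign conventions on your device):

```java
class StylusTilt {
    // Take the signed angles of the pen axis from the screen normal in the
    // X-Z plane (tiltX) and Y-Z plane (tiltY), in degrees, per the W3C spec.
    static double[] toTiltXY(double tiltRad, double orientationRad) {
        double r = Math.sin(tiltRad);   // in-plane component of the pen axis
        double z = Math.cos(tiltRad);   // component along the screen normal
        double tiltX = Math.toDegrees(Math.atan2(Math.sin(-orientationRad) * r, z));
        double tiltY = Math.toDegrees(Math.atan2(Math.cos(-orientationRad) * r, z));
        return new double[] { tiltX, tiltY };
    }
}
```

A quick check: an upright pen (tilt 0) yields tiltX = tiltY = 0 regardless of orientation, and a 45-degree tilt at orientation 0 puts all of the tilt on one axis.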
Can the owner of an attached file be set to a queue?
My app's users need to be able to view and edit each other's files that they have attached to records. So my idea was to put my users into a Queue and then set the owner to that Queue on upload.
I was able to set the owner to another user by setting the OwnerId on ContentVersion to the id of the user. But am I able to do this with a Queue or Public Group?
What you are describing is that your users need to access the attachments related to some records. The way to achieve this is by giving access to the users you want to be able to view and edit the attached file: they should be able to see and edit the record itself.
If you can't give access to the record, for security reasons, consider using profiles and page layouts to remove field access from the interface. Another solution would be to create a custom object, related to the actual record, and give access to those instead of the original record. It is another "jump", but it should work if you can't do it with profiles and sharing rules.
Return something that's allocated on the stack
I have the following simplified code, where a struct A contains a certain attribute. I'd like to create new instances of A from an existing version of that attribute, but how do I make the lifetime of the attribute's new value last past the function call?
pub struct A<'a> {
some_attr: &'a str,
}
impl<'a> A<'a> {
fn combine(orig: &'a str) -> A<'a> {
let attr = &*(orig.to_string() + "suffix");
A { some_attr: attr }
}
}
fn main() {
println!("{}", A::combine("blah").some_attr);
}
The above code produces
error[E0597]: borrowed value does not live long enough
--> src/main.rs:7:22
|
7 | let attr = &*(orig.to_string() + "suffix");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ does not live long enough
8 | A { some_attr: attr }
9 | }
| - temporary value only lives until here
|
note: borrowed value must be valid for the lifetime 'a as defined on the impl at 5:1...
--> src/main.rs:5:1
|
5 | / impl<'a> A<'a> {
6 | | fn combine(orig: &'a str) -> A<'a> {
7 | | let attr = &*(orig.to_string() + "suffix");
8 | | A { some_attr: attr }
9 | | }
10| | }
| |_^
This question most certainly was answered before, but I'm not closing it as a duplicate because the code here is somewhat different and I think it is important.
Note how you defined your function:
fn combine(orig: &'a str) -> A<'a>
It says that it will return a value of type A whose insides live exactly as long as the provided string. However, the body of the function violates this declaration:
let attr = &*(orig.to_string() + "suffix");
A {
some_attr: attr
}
Here you construct a new String obtained from orig, take a slice of it and try to return it inside A. However, the lifetime of the implicit variable created for orig.to_string() + "suffix" is strictly smaller than the lifetime of the input parameter. Therefore, your program is rejected.
Another, more practical way to look at this is consider that the string created by to_string() and concatenation has to live somewhere. However, you only return a borrowed slice of it. Thus when the function exits, the string is destroyed, and the returned slice becomes invalid. This is exactly the situation which Rust prevents.
To overcome this you can either store a String inside A:
pub struct A {
some_attr: String
}
or you can use std::borrow::Cow to store either a slice or an owned string:
pub struct A<'a> {
some_attr: Cow<'a, str>
}
In the last case your function could look like this:
fn combine(orig: &str) -> A<'static> {
let attr = orig.to_owned() + "suffix";
A {
some_attr: attr.into()
}
}
Note that because you construct the string inside the function, it is represented as an owned variant of Cow and so you can use 'static lifetime parameter for the resulting value. Tying it to orig is also possible but there is no reason to do so.
With Cow it is also possible to create values of A directly out of slices without allocations:
fn new(orig: &str) -> A {
A { some_attr: orig.into() }
}
Here the lifetime parameter of A will be tied (through lifetime elision) to the lifetime of the input string slice. In this case the borrowed variant of Cow is used, and no allocation is done.
Also note that it is better to use to_owned() or into() to convert string slices to Strings because these methods do not require formatting code to run and so they are more efficient.
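Putting the pieces together, here is a runnable sketch combining the two Cow-based constructors described above (the `println!` at the end is just to show it runs):

```rust
use std::borrow::Cow;

pub struct A<'a> {
    some_attr: Cow<'a, str>,
}

impl<'a> A<'a> {
    // No allocation: borrows `orig`, so the result's lifetime is tied to it.
    fn new(orig: &str) -> A<'_> {
        A { some_attr: orig.into() }
    }

    // Allocates: the owned String lives inside A, so 'static is fine.
    fn combine(orig: &str) -> A<'static> {
        let attr = orig.to_owned() + "suffix";
        A { some_attr: attr.into() }
    }
}

fn main() {
    assert_eq!(A::new("blah").some_attr, "blah");
    assert_eq!(A::combine("blah").some_attr, "blahsuffix");
    println!("ok");
}
```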
how can you return an A of lifetime 'static when you're creating it on the fly? Not sure what "owned variant of Cow" means and why that makes 'static possible.
Here is the definition of Cow:
pub enum Cow<'a, B> where B: 'a + ToOwned + ?Sized {
Borrowed(&'a B),
Owned(B::Owned),
}
It looks complex but it is in fact simple. An instance of Cow may either contain a reference to some type B or an owned value which could be derived from B via the ToOwned trait. Because str implements ToOwned with the Owned associated type equal to String (written as ToOwned<Owned = String>), when this enum is specialized for str, it looks like this:
pub enum Cow<'a, str> {
Borrowed(&'a str),
Owned(String)
}
Therefore, Cow<str> may represent either a string slice or an owned string - and while Cow does indeed provide methods for clone-on-write functionality, it is just as often used to hold a value which can be either borrowed or owned in order to avoid extra allocations. Because Cow<'a, B> implements Deref<Target = B>, you can get &B from Cow<'a, B> with simple reborrowing: if x is Cow<str>, then &*x is &str, regardless of what is contained inside of x - naturally, you can get a slice out of both variants of Cow.
You can see that the Cow::Owned variant does not contain any references inside it, only String. Therefore, when a value of Cow is created using Owned variant, you can choose any lifetime you want (remember, lifetime parameters are much like generic type parameters; in particular, it is the caller who gets to choose them) - there are no restrictions on it. So it makes sense to choose 'static as the greatest lifetime possible.
Does orig.to_owned remove ownership from whoever's calling this function? That sounds like it would be inconvenient.
The to_owned() method belongs to ToOwned trait:
pub trait ToOwned {
type Owned: Borrow<Self>;
fn to_owned(&self) -> Self::Owned;
}
This trait is implemented by str with Owned equal to String. to_owned() method returns an owned variant of whatever value it is called on. In this particular case, it creates a String out of &str, effectively copying contents of the string slice into a new allocation. Therefore no, to_owned() does not imply ownership transfer, it's more like it implies a "smart" clone.
As far as I can tell String implements Into<Vec<u8>> but not str, so how can we call into() in the 2nd example?
The Into trait is very versatile and it is implemented for lots of types in the standard library. Into is usually implemented through the From trait: if T: From<U>, then U: Into<T>. There are two important implementations of From in the standard library:
impl<'a> From<&'a str> for Cow<'a, str>
impl<'a> From<String> for Cow<'a, str>
These implementations are very simple - they just return Cow::Borrowed(value) if value is &str and Cow::Owned(value) if value is String.
This means that &'a str and String implement Into<Cow<'a, str>>, and so they can be converted to Cow with into() method. That's exactly what happens in my example - I'm using into() to convert String or &str to Cow<str>. Without this explicit conversion you will get an error about mismatched types.
Thanks, it works! But there's a lot I don't understand here, how can you return an A of lifetime static when you're creating it on the fly? Not sure what "owned variant of Cow" means and why that makes 'static possible. We're not ever modifying some_attr at any point -- isn't that usually the point of "copy on write", to allow for writes? Does orig.to_owned remove ownership from whoever's calling this function? That sounds like it would be inconvenient. As far as I can tell String implements Into<Vec<u8>> but not str, so how can we call into() in the 2nd example?
Whoa, that's a lot of questions! I've updated my answer, hopefully it would be helpful :)
How do I search for a certain word in a website?
Is there any way to search within a website without manually checking out every link?
Suppose there is some website www.somewebsite.com and I want to search for a word in all of the pages www.somewebsite.com/1, www.somewebsite.com/2, and so on. Is there any way to do it without actually browsing through each and every page?
Questions about using search engines like Google are off topic for Super User. If you intended to do this in a browser only, please clarify your question—it would be on topic then.
Go to Google and type this: site:somewebsite.com keyword. It will search through the pages they have indexed to find your keyword. This is particularly useful for sites that don't have a search bar.
Have a look at more Google Tricks
Also, your question isn't very clear: are you trying to do this via a shell in Solaris?
No, nothing in particular... I wanted to do it in any possible way. Your answer suffices, thanks.
Most browsers have a Find feature. In IE it's on the Edit menu as Find on this page or you can press Ctrl + F. In Firefox it's the same, but just called Find on the menu. Not sure about Chrome.
Whatever browser, it's pretty easy to use to find words.
But using this I would have to browse every page and then press Ctrl + F... that would be really tough.
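Not something either answer above relies on, but the same search can be scripted outside the browser, assuming a Unix shell with grep (and something like wget to fetch the pages first). In this sketch, two locally saved files stand in for the download step:

```shell
# In practice you would fetch the pages first, e.g.:
#   wget --quiet -O site/1.html "http://www.somewebsite.com/1"
# Here two saved pages stand in for that step.
mkdir -p site
printf '<html>star wars</html>' > site/1.html
printf '<html>other text</html>' > site/2.html

# -r recurse into the directory, -i ignore case, -l list matching files only
grep -ril "star wars" site/
```

This lists only the files containing the word, so you never have to open each page by hand.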
How to save multiple cookies?
I am creating a website where user will come and do some task. Now what I want is, if user 'X' comes, a cookie should be saved with name 'X' in the browser. And if user 'Y' comes then cookie for 'Y' user should also be saved. But cookie of 'X' should not be deleted or overwrite.
Now suppose next time again 'X' visits the website the data in 'X' cookie should be updated, same should be done for 'Y'.
If it is possible then please tell me the exact way to do it.
Thanks.
It sounds like you have a serious misunderstanding of how cookies work. They are stored in the browser and are usually used to identify which user is logged in. Most people expect a roughly 1-to-1 correspondence between user and browser. Are you really planning to store user-specific data for multiple users in the same browser? What is your plan to distinguish between users?
actually I have created a game. Suppose 'X' played the game and won 10 time, this info should be saved as cookie, Now suppose 'Y' came and won 5 times. Cookie for 'y' should also be saved. Now next time when 'X' come to play he should be able to see his last victories.
That doesn't address any of the points I made in my previous comment.
Do you have solution or not?
No. I don't have a solution. You can tell this because I haven't posted an answer. This is because your real problem is unclear. I could give you a literal answer (like dearsina already has, which you don't seem to understand either) but I suspect it won't solve the real problem. That is why I made a comment asking you to clarify your problem.
You can create whatever cookies you want. Think of cookies as $_SESSION values that are stored on the user's computer instead of your server. Cookies can be complex arrays, so you could store an array for each user.
Keep in mind that any user of the computer can see what's inside any cookie, so don't ever save anything secret, like passwords or API keys.
Arrays as cookies
You can store arrays as cookies using json_encode(), like so:
setcookie('your_cookie_name', json_encode($array), time()+3600);
Further reading
Have a look at setcookie() for further documentation on how to use it in PHP.
Am I doing the right thing?
The most common use for cookies is to identify one user, and to save them having to identify themselves. It doesn't make much sense to store lots of information about or for lots of users on their own machine (instead of your own server), because cookies can be cleared out easily.
You're much better off storing everything on your own server, and only use cookies to identify the current user.
Now I want to save only one info but for many user. How to do it with cookies?
@raj, just change the cookie name to be some sort of user identifier. Then you can store as much data as you want for each user. It's a bad idea though.
Map File Editor
I'm creating an offline map for Android, but I only need a certain part of the map. At the moment I can view my whole country using Mapsforge.
The map is provided by Mapsforge. What I want is to get only a certain part of my country. I don't want to use MOBAC as the files are too large.
How to integrate Ember.JS and Keycloak SSO system
My question is quite simple : does anybody know how to integrate Ember.JS and Keycloak (the SSO system) ? We currently face a problem using the Keycloak JS Bower library (https://github.com/keycloak/keycloak-js-bower) to redirect users to Keycloak own login page, and to generally integrate Ember.JS with Keycloak. Our problems are :
Double page reload on page reloading,
401 unauthorized HTTP code on login to the Ember App.
Thanks for your (precious) support.
You can use the Keycloak JS adapter script.
Check that this script exists in your Keycloak installation: https://yourdomain.com/auth/js/keycloak.js
If you have, add this code to your login page:
<head>
<script src="https://yourdomain.com/auth/js/keycloak.js"></script>
<script>
var keycloak = Keycloak({
"realm": "relam_name",
"url": "https://yourdomain.com/auth",
"ssl-required": "external",
"resource": "client_id",
"verify-token-audience": true,
"credentials": {
"secret": "secret_code"
},
"use-resource-role-mappings": true,
"confidential-port": 0,
"policy-enforcer": {},
"onLoad":'login-required',
"clientId": "client_id"
});
keycloak.init().success(function(authenticated) {
console.log(keycloak);
alert(authenticated ? 'authenticated' : 'not authenticated');
}).error(function() {
alert('failed to initialize');
});
</script>
</head>
In the console.log(keycloak) output, Keycloak returns the token and other information about the user.
More info:
Keycloak JavaScript API to get current logged in user
JavaScript Adapter
MicroProfile JWT Authentication with Keycloak and React
There are a couple of Ember CLI addons that deal with Keycloak and Ember integration:
https://www.npmjs.com/package/ember-cli-keycloak
https://www.npmjs.com/package/ember-keycloak-auth
impose order (not sorting) using comparator on a wrapper class of a list in Java
I have a class, person. it has Age, Name, Height, etc.
I am creating a class called PersonCollection which is a wrapper of a list (an ArrayList).
I will like to be able to compare Person objects using the PersonCollection class, Which means, I don't want to make the Person class implement the Comparable interface, I would like the PersonCollection to implement the Comparator interface.
I am having trouble doing that. I have implemented the compare method, but when I compare Person objects it still doesn't work.
For example, this code gives me an error (people is an ArrayList<Person>):
public void insert (Person p){
for(int i = 0; i < people.size(); i++){
if (people.get(i) > p){
//Do something
}
}
}
I know how to use Comparator for sorting, this is different.
I am fully aware of other possible and maybe better solutions (any priority queue class or some sort of sortedset classes)
I wish to do that for ArrayList for a specific reason and I kindly ask you to base your solutions on this instead of suggest other Data structures.
Since Java doesn't have operator overloading (and your Person class doesn't even implement Comparable), it's no surprise that you can't use > to compare objects.
So shouldn't Comparator solve this? PersonCollection implements a Comparator according to the two Persons' ages, for example.
I don't see why a collection should be a comparator, but if you want to abuse design, sure thing, write a compare() method and implement it correctly.
You can write a custom Comparator and use the compare(a, b) method.
https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html#compare-T-T-
Your code would look like
if (myComparator.compare(people.get(i), p) > 0 ) {
Ok, so this doesn't really use the compare function by default, but I guess that there isn't any other option, so this is good enough for me.
@NotSure why can't you implement it in a clean and proper way? School project I guess?
Job interview task that asks for an implementations of a PersonCollection (saying not to actual implement a Person class, just use it as interface)
Sometimes you can't modify the class in order to make it Comparable, or other times you might not want to, or you might need more than one way to compare or sort the data - therefore Comparator is a good alternative to Comparable.
According to your description you have a Wrapper class like this?
public class People implements List<Person>, Comparator<Person>{
/* methods */
}
so if you want to really use comparator interface, then you would have to do it like this:
public void insert (Person p){
for(int i = 0; i < people.size(); i++){
if (people.compare(people.get(i), p) > 0){ // people implements Comparator; compare returns an int
//Do something
}
}
}
which should (not too sure though) work.
But I would highly recommend not to use this and to think about something better, as a class should not be both a comparator and a list (the two interfaces serve completely different purposes).
A better approach would be to make Person implement Comparable, and then sort according to that
What I ended up doing is having a PersonCollection that has an ArrayList property called people and also a Comparator property that I get from a factory (the factory gives me a comparator according to my choice of comparing by age, name, id, etc.), and I ended up using @vikingsteve's approach. I know that a Comparable on Person is the most obvious solution, but I was tasked not to implement it that way.
Below is a piece of code where you can see a custom comparator is making an age comparison on Person object's age attribute.
public class TestCompare {
public static void main(String[] args) {
Person person1 = new Person(45, "Tom");
Person person2 = new Person(12, "Sarah");
Person person3 = new Person(34, "Michael");
Person person4 = new Person(33, "Donald");
Person person5 = new Person(65, "timothy");
List<Person> people = new ArrayList<Person>();
people.add(person1);
people.add(person2);
people.add(person3);
people.add(person4);
people.add(person5);
CustomComparator comparator=new CustomComparator();
for (Person p : people) {
System.out.println(comparator.compare(p, new Person(55, "James")));
}
}
}
class CustomComparator implements Comparator<Person> {
@Override
public int compare(Person o1, Person o2) {
// TODO Auto-generated method stub
return o1.getAge().compareTo(o2.getAge());
}
}
class Person implements Comparable<Person> {
public Person(int age, String name) {
this.age = age;
this.name = name;
}
private Integer age;
private String name;
public Integer getAge() {
return age;
}
public void setAge(Integer age) {
this.age = age;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
@Override
public int compareTo(Person o) {
// TODO Auto-generated method stub
return this.getAge().compareTo(o.getAge());
}
}
Can the real line be partitioned into two sets, each with positive Lebesgue measure and no interior points?
Is there such example that $A\subset \mathbb{R}$, such that both $A$ and $\mathbb{R}\backslash A$ are sets without interior points and with positive Lebesgue measure?
Isn't the union of two nowhere dense sets nowhere dense? The existence of such a set would make $\mathbb{R}$ nowhere dense in itself, wouldn't it?
@Bungo Sorry for that, I'll do it later. Do you mean $A = {q+x:x\in E,q\in\mathbb{Q}}$ and $E$ is the fat Cantor set?
@Bungo Thank you.
For the new question (I agree that questions with accepted answers shouldn't be changed): take $A=\big(\Bbb Q\cap(0,\infty)\big)\cup\big((-\infty,0)\setminus\Bbb Q\big)$.
There is a lot of confusion in the various revisions (I did one) as to whether you want the two sets to have no interior points or you want the two sets to be nowhere dense. The answer is YES either way, and examples are in Greg Martin's comment and Henno Brandsma's answer. For the difference between having no interior points and being nowhere dense, note that "$E$ is nowhere dense" is equivalent to "the closure of $E$ has no interior points". Also, see my answer to Is saying a set is nowhere dense the same as saying a set has no interior?
Oops, for "both sets are nowhere dense" the answer is NO; for "both sets have no interior points" the answer is YES.
You cannot write $\Bbb R$ as a union of two nowhere dense sets, because the union of nowhere dense sets is still nowhere dense.
And without interior points (after the question change): take $A$ = (negative rationals plus positive irrationals), and the other set its complement.
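Spelling that example out:

```latex
A=\bigl(\mathbb{Q}\cap(-\infty,0)\bigr)\cup\bigl((0,\infty)\setminus\mathbb{Q}\bigr),
\qquad
\mathbb{R}\setminus A=\bigl((-\infty,0)\setminus\mathbb{Q}\bigr)\cup\{0\}\cup\bigl(\mathbb{Q}\cap(0,\infty)\bigr).
```

Both sets have empty interior, since every interval contains both rationals and irrationals; and each contains the irrationals of a half-line, a set of full measure there, so both have infinite (in particular positive) Lebesgue measure.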
Sorry for the mistakes in the question: can $\mathbb{R}$ be separated into two sets without interior points and with positive Lebesgue measure?
@Sqr Yes: split the irrationals and split the rationals and combine, e.g.
What's the advantage of using FitNesse versus automated integration tests?
What's the advantage of using FitNesse versus automated integration tests? I'm struggling to see exactly where FitNesse fits in when aiming to deliver a fully tested solution. Surely, if a developer has unit and integration tested their code then this should be sufficient. Why would a team need to duplicate integration testing efforts?
Test cases in Agile environments mainly come in four main types:
1) Automated unit tests (e.g., using J-unit);
2) Automated feature verification tests (e.g., using Fitnesse);
3) Automated functional/regression tests (e.g., using Selenium or QuickTestPro);
4) Manual testing.
For types 1-3, of course, there are specified automated test cases. For type 4, the test cases tend to be logical (or high-level) test cases, which requires a higher level of skill and domain knowledge in the testers. Also, a significant amount of experience-based testing, such as exploratory testing, defect taxonomy testing, etc., tends to occur.
See the RBCS blog here:
The main reason to use FitNesse is if you are going to have non-technical people writing tests. So for example, suppose we have to support a ton of different ways of paying commissions. Non-tech people could make spreadsheets that show a dude earning a certain amount of dough, asking the system how much they should get in commission, and then asserting that the calc is right.
Personally, I have found FIT more trouble than it's worth. I think it might be a really compelling tool if the makers got serious and made some tools to set it up and configure it.
But the main thing is only use it if you are sure you are going to have a lot of business rules type things to verify that BAs or even Customers could participate directly in. This is not for asserting that an orbital constant is being computed properly.
FitNesse is supposed to make it easier for business analysts to own and run tests. Developers create fixtures; business analysts feed data and confirm that tests pass.
In my experience, business analysts have neither the background nor the interest to do such a thing.
FitNesse tests are more like integration tests. They can involve several components. Unit tests should be done by developers on single components. Hence the name "unit".
I prefer unit tests.
The question implies a false dichotomy; FitNesse is an automated integration testing solution. It's just that tests are (intended to be) created as markup in wiki pages.
I'm currently using it as my integration testing solution; I run all of the integration tests using the command line. Tests can also be run via a JUnit or a REST API (which would require running the FitNesse server).
As Rob mentions in their answer, it's not (very) easy to set up and configure, though I didn't find it too difficult. And I dispute Rob's claim that "This is not for asserting that an orbital constant is being computed properly."; in fact, it's perfectly usable for exactly that.
I came across this question because I was searching for evaluations from people using, or people that had used, Fit or FitNesse for unit testing. The reason why that idea occurred to me is that, as a developer, I find it much easier to understand a set of tests in the form of a FitNesse wiki page than a code file.
Below is an example of a test page from one of my projects. These tests are integration tests, but I can't think of any reason why this wouldn't work well for unit tests too. There's nothing special about the code for the tests that would prevent units from being tested.
How to pass a mutated state to a different page?
In my App.js, I'm creating a Route as such:
<Route
exact path='/tools/ndalink'
render={(props) => (
<React.Fragment>
<Header />
<LinkPage {...props} brokerID={this.state.brokerID}></LinkPage>
</React.Fragment>
)}
/>
state.brokerID is initially "", but is changed shortly after; by the time that happens, LinkPage has already received this.state.brokerID as "".
How can I pass the changed state.brokerID (without using Redux)?
You need to use a lifecycle method called componentDidUpdate so the component can react once the prop arrives.
That being said, you only have to use this if you plan to mutate the brokerID.
Since the process is async you'll have to wait for the props to be passed down. Until then you can show a loading text or a progress bar.
class LinkPage extends React.Component {
  state = {
    brokerID: ''
  };
  componentDidUpdate(prevProps) {
    if (this.props.brokerID !== prevProps.brokerID) {
      this.setState({ brokerID: this.props.brokerID });
    }
  }
  render() {
    return (
      <div className="App">
        <h1>{ this.state.brokerID ? this.state.brokerID : 'Loading' }</h1>
      </div>
    );
  }
}
Or, a simple method would be to not use the lifecycle method at all, Change the following line in render and it should work:
<h1>{ this.props.brokerID ? this.props.brokerID : 'Loading' }</h1>
If you need to use this brokerId for an api call or something, you can use the setState callback. This would go something like this in the componentDidUpdate lifecycle method.
componentDidUpdate(prevProps) {
  if (this.props.brokerID !== prevProps.brokerID) {
    this.setState({ brokerID: this.props.brokerID }, () => {
      // use this.state.brokerID here
    });
  }
}
Nature of JS bound functions and function invocation operator
var obj = {};
var r1 = (obj['toString'])();
var m1 = obj['toString'];
var r2 = m1();
var r3 = (obj.toString)();
var m2 = obj.toString;
var r4 = m2();
r1 and r3 expectedly contain the correct result, "[object Object]", while r2 and r4 contain "[object Undefined]", showing that m1 and m2 are not bound to the object.
I can't fully comprehend how obj['toString']() is executed. I always looked at it this way: (obj['toString'])() -> (function object)(). It turns out that the function invocation operator looks back at what the context is. I would expect the operator not to know where its operands come from.
Can anyone properly explain this behavior?
r2 and r4 were generated with the function having a this equal to doing ({}).toString.call(undefined);
@PaulS. yeah I tried to point this out, but question is why.
Turns out that function invocation operator looks back on what is the context. I would expect operator to not know where operands come from.
In fact, it does know.
In the ECMAScript specification this is described by having the property accessor operators (and a few similar operations) return a "Reference" that holds exactly this information: the context on which the property was accessed. Any "normal" operation will usually just get the reference's value, including the assignment operators, which do dissolve the reference in your case.
The call operator uses this to make method calls special:
Let ref be the result of evaluating MemberExpression. which may return a Reference
Let func be GetValue(ref). which fetches the actual function object - you see this operation a lot in the spec
… fetch arguments, do some type assertions
If Type(ref) is Reference, then
If IsPropertyReference(ref) is true, then
Let thisValue be GetBase(ref). <- here the method context is fetched
Else, the base of ref is an Environment Record
… which basically describes variables inside a with statement
Else, Type(ref) is not Reference.
Let thisValue be undefined.
So this Reference object is implicitly converted to just function object if not invoked in place?
Yes, this happens in the assignment operator. It doesn't happen in the grouping operator (parenthesis) for example.
This is actually a "special" behavior of the grouping operator (...):
1. Return the result of evaluating Expression. This may be of type Reference.
NOTE This algorithm does not apply GetValue to the result of evaluating Expression. The principal motivation for this is so that operators such as delete and typeof may be applied to parenthesised expressions.
So, this operator specifically does not call GetValue and thus does not return the function object itself but rather the whole reference, so that operations which expect a reference still work.
A Reference is basically an encapsulation of a value with an optional "base value" which is the object in case of a property access.
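The three call shapes can be checked directly. This sketch is runnable in Node; there the unbound call returns undefined because the global object has no x:

```javascript
var obj = {
  x: 7,
  wibble: function () { return this && this.x; }
};

// Called as a method: `this` is obj.
console.log(obj.wibble());      // 7

// The grouping operator keeps the Reference, so `this` is still obj.
console.log((obj.wibble)());    // 7

// Assignment calls GetValue and drops the base object.
var f = obj.wibble;
console.log(f());               // undefined in Node (global object has no x)
```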
The meaning of "this" always seems to cause confusion in JavaScript, especially in scenarios such as callbacks.
Modifying your example slightly, as you are using toString() and we can't see its implementation, I've introduced a function that can be seen to depend on the context, that is the meaning of this. The coder is expecting to access the object attribute x, but he gets surprised when the function is unbound.
window.x = "some global value";
var obj = {
wibble : function() { console.log("wibble " + this.x); },
x : 7
};
obj.wibble();
var f = obj.wibble;
f();
This results in:
wibble 7
wibble some global value
That is when we explictly specify a context in the form obj.wibble() or indeed obj["wibble"], this behaves as most developers expect.
However when we call the "naked" function, we are effectively getting a default context, and in my example I happen to have a value for x in that context and so get that value, which happens to be a string.
I don't think this question is specifically about this, but rather why/when the "connection" between the object and the function is lost.
Magnetic Sensor to GPS heading in Gazebo?
I have a magnetometer sensor in Gazebo running, but want to convert it to a GPS heading. I additionally have the navsatfix plugin working (Would be nice if there was a heading in that somehow as well). Does anyone know how to get the GPS heading, or if there is a plugin that generates NMEA sentences?
can you have a GPS heading when you are not moving?
Good point. I would think the mag sensor can adjust to the earth's field though? A compass can...
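Gazebo won't compute this for you, but a compass-style heading can be derived from the horizontal magnetometer components. A minimal sketch; the axis convention (x forward, y left), the function name, and the declination handling are all assumptions, so check your sensor's frame:

```python
import math

def heading_from_mag(mx, my, declination_deg=0.0):
    """Heading in degrees clockwise from north, from the horizontal
    magnetometer components (mx pointing forward, my pointing left)."""
    # atan2 gives the field angle; negating my makes the result grow
    # clockwise, matching the compass convention.
    heading = math.degrees(math.atan2(-my, mx)) + declination_deg
    return heading % 360.0
```

For example, heading_from_mag(1.0, 0.0) gives 0.0 (field straight ahead) and heading_from_mag(0.0, -1.0) gives 90.0. A tilted sensor would first need its readings projected into the horizontal plane using roll and pitch from an IMU.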
Filter SQL column based on value of another column
Here is a SQL table "OptionValue" :
t_date strike
1/1/2011 89
1/1/2011 105
1/2/2011 70
1/2/2011 82
and another table "PriceValue":
t_date price
1/1/2011 100
1/2/2011 93
What I'm wanting to do is select every row where the strike is less than price * 0.9 for that specific t_date, e.g.
t_date price strike limit
1/1/2011 100 89 (100*.9 = 90) , 89 < 100
1/2/2011 93 70 (93*.9 =83.7) , 70 < 83.7
1/2/2011 93 82 (93*.9 =83.7) , 82 < 83.7
Note: every t_date has a different price (and thus a different limit); is it possible to apply a specific filter for each trade date?
should be able to get the result you want: WHERE strike < (price * 0.9)
@JohnWoo in the database, there is one price per t_date , and I need to apply 1 limit for each t_date
http://sqlfiddle.com/#!9/3f5f0f/5
Which DBMS product are you using? Postgres? Oracle? "SQL" is just a query language, not the name of a specific database product.
select * from option_value ;
t_date | price | strike
----------+-------+--------
1/1/2011 | 100 | 89
1/1/2011 | | 105
1/2/2011 | 93 | 95
1/2/2011 | | 82
(4 rows)
So you need to compare each row to rows with same t_date:
select * from option_value v1 join option_value v2 on v1.t_date=v2.t_date where v2.strike < (v1.price * 0.9);
t_date | price | strike | t_date | price | strike
----------+-------+--------+----------+-------+--------
1/1/2011 | 100 | 89 | 1/1/2011 | 100 | 89
1/2/2011 | 93 | 95 | 1/2/2011 | | 82
(2 rows)
Sorry, but I'm confused about the first two lines: select * from option_value;
peter=# select * from option_value ?
I just ran a select on the example table to show you the columns and values. The 'peter=#' is the prompt shown by my command-line client; you may ignore it.
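For the two-table layout in the original question, the per-date filter is just a join between the option and price tables. One quick way to sanity-check it is SQLite from Python; the table and column names below follow the question, and the data is the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE OptionValue (t_date TEXT, strike REAL)")
cur.execute("CREATE TABLE PriceValue (t_date TEXT, price REAL)")
cur.executemany("INSERT INTO OptionValue VALUES (?, ?)",
                [("1/1/2011", 89), ("1/1/2011", 105),
                 ("1/2/2011", 70), ("1/2/2011", 82)])
cur.executemany("INSERT INTO PriceValue VALUES (?, ?)",
                [("1/1/2011", 100), ("1/2/2011", 93)])

# One price per t_date, so joining on t_date applies each date's own limit.
rows = cur.execute("""
    SELECT o.t_date, p.price, o.strike
    FROM OptionValue o
    JOIN PriceValue p ON o.t_date = p.t_date
    WHERE o.strike < p.price * 0.9
    ORDER BY o.t_date, o.strike
""").fetchall()
print(rows)
```

This yields exactly the three rows listed in the question: (1/1/2011, 100, 89), (1/2/2011, 93, 70), and (1/2/2011, 93, 82).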
Generating random sample from csv file, whilst skipping rows with certain values
I'm trying to generate a random subsample of 5000 rows from a .csv file containing tens of thousands of rows. The df contains two columns: 'JPG' and 'NAME'.
I have generated a random subsample with the following code:
import pandas as pd
file = pd.read_csv(r'C:\filepath\data.csv', usecols = [7, 8])
sample = file.sample(n=5000)
print(sample)
However, now I wish to do the same, but including a for-loop that can do so whilst skipping any rows with the string 't3' in the 'NAME' column.
Here's where I'm at, but I'm struggling to make it work:
import pandas as pd
file = pd.read_csv(r'C:\filepath\data.csv', usecols = [7, 8])
sample = file.sample(n=5000)
for num in sample:
if sample.loc[sample['NAME'] == 't3']:
continue
print(sample)
Any help on this would be greatly appreciated.
Thanks,
R
why don't you filter out all the rows with the NAME t3 beforehand? Like:
import pandas as pd
file = pd.read_csv(r'C:\filepath\data.csv', usecols = [7, 8])
file_without_t3 = file[file['NAME'] != 't3']
sample = file_without_t3.sample(n=5000)
print(sample)
Very good point! I overcomplicated that, and got confused in the process. Thanks for the help Zhao! :)
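One caveat worth adding: != 't3' only removes exact matches. If 't3' can appear as a substring of NAME, str.contains is the safer filter, and capping the sample size avoids an error when fewer than 5000 rows remain. A small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "JPG": ["a.jpg", "b.jpg", "c.jpg", "d.jpg"],
    "NAME": ["t1", "t3", "t3_extra", "t2"],
})

# Exact match drops only rows whose NAME is exactly "t3"...
exact = df[df["NAME"] != "t3"]

# ...while substring matching also drops "t3_extra".
no_t3 = df[~df["NAME"].str.contains("t3")]

# Guard against asking for more rows than remain.
sample = no_t3.sample(n=min(5000, len(no_t3)))
print(len(exact), len(no_t3), len(sample))
```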
GCC & binutils build - C compiler cannot create executables
I'm trying to build gcc-5.3 and binutils-2.26. I've done it like this:
mkdir gcc; cd gcc
wget http://path/to/gcc-5.3.0.tar.bz2
wget http://path/to/binutils-2.26.tar.bz2
tar xf gcc-5.3.0.tar.bz2
tar xf binutils-2.26.tar.bz2
cd gcc-5.3.0
contrib/download_prerequisites
for file in ../binutils-2.26/*; do ln -s "${file}"; done
cd ..
mkdir build
mkdir dist
cd build
../gcc-5.3.0/configure --prefix=/home/teamcity/gcc/dist --disable-multilib --with-system-zlib --enable-languages=c,c++,fortran --program-suffix=-mine
make
This appears to build the first stage executables okay; prev-gas, prev-gcc, prev-ld are all present with plausible-looking executables in them. But the next stage fails:
Configuring stage 2 in ./intl
configure: loading cache ./config.cache
checking whether make sets $(MAKE)... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking whether NLS is requested... yes
checking for msgfmt... /usr/bin/msgfmt
checking for gmsgfmt... /usr/bin/msgfmt
checking for xgettext... /usr/bin/xgettext
checking for msgmerge... /usr/bin/msgmerge
checking for x86_64-unknown-linux-gnu-gcc... /home/teamcity/gcc/build/./prev-gcc/xgcc -B/home/teamcity/gcc/build/./prev-gcc/ -B/home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/bin/ -B/home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/bin/ -B/home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/lib/ -isystem /home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/include -isystem /home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/sys-include -L/home/teamcity/gcc/build/./ld
checking for C compiler default output file name...
configure: error: in `/home/teamcity/gcc/build/intl':
configure: error: C compiler cannot create executables
See `config.log' for more details.
The relevant bit of config.log appears to be this:
configure:2978: checking for C compiler default output file name
configure:3000: /home/teamcity/gcc/build/./prev-gcc/xgcc -B/home/teamcity/gcc/build/./prev-gcc/ -B/home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/bin/ -B/home/teamcity/gcc/dist/x86_64-unkn
own-linux-gnu/bin/ -B/home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/lib/ -isystem /home/teamcity/gcc/dist/x86_64-unknown-linux-gnu/include -isystem /home/teamcity/gcc/dist/x86_64-unknown-l
inux-gnu/sys-include -L/home/teamcity/gcc/build/./ld -g -O2 -gtoggle -static-libstdc++ -static-libgcc conftest.c >&5
/home/teamcity/gcc/build/./prev-gcc/as: 106: exec: /home/teamcity/gcc/build/./gas/as-new: not found
This looks like prev-gcc's as is expecting to find gas/as-new, when actually it's prev-gas/as-new.
Is there some workaround for this? Is it safe to just ln -s prev-gas gas? I'd sort of expect that to cause problems later on. Is it not possible to build these two versions of gcc and binutils together?
Typo:
wget http://path/to/binutils-5.3.0.tar.bz2 should probably have 2.26 rather than 5.3.0 in the path?
Quite right. I'll fix it.
Should have also mentioned the build platform is Ubuntu 14.04, using GCC 4.8 installed with the system.
This is my edited answer after trying it on my own on an Ubuntu 14.04 Azure VM.
The following approach worked for me:
Build and install binutils 2.26; for the purposes of this discussion,
let's say it's installed in /opt/gcc-5.3.0.
Configure gcc-5.3.0 in a build directory using /root/objdir/../gcc-5.3.0/configure --prefix=/opt/gcc-5.3.0
--enable-languages=c,c++ --disable-multilib --with-ld=/opt/gcc-5.3.0/bin/ld --with-as=/opt/gcc-5.3.0/bin/as assuming gcc-5.3.0 and the build directory, objdir, are at the same
level.
Do make followed by make install in the objdir build directory.
To verify that the ld used by the newly-built gcc is the one from the new binutils:
/opt/gcc-5.3.0/bin/gcc -print-prog-name=ld
The output should be, in this example:
/opt/gcc-5.3.0/bin/ld
Another test: rename the system ld, in my case /usr/bin/ld; the newly-built gcc should still work.
Of course, this applies to both ld and as.
Setting AS and LD environment variables to point to the newly-built binaries from the binutils package did not work: the -print-prog-name=... still showed default tools, and removing/renaming the default tools caused gcc to fail.
Thus the best way of accomplishing this is to build binutils first and then use the --with-ld and --with-as options to configure. If you also want to ensure that the newly-built binutils are used to build GCC, you may want to put them in the PATH before the system-provided tools, or you can even rename the system-provided tools to keep them out of the picture.
Thank you for checking with another mailing list to verify that building GCC and binutils together doesn't work unless they are pulled from the source control, I guess that option is not applicable when using downloaded tarballs. Overall this was an interesting exercise.
I've tried this. But then the built GCC ends up using the system ld, not the one from binutils 2.26.
I see now. I've found some instructions that are similar to what you described, but those are 6-7 years old, things may have changed. I'll play with it on my Ubuntu 14.04 when I find time, looks like an interesting question.
I've got it to work in the end by building binutils, installing, then configuring gcc with AS=/path/to/as-mine and LD=/path/to/ld-mine. I read that you can also use --with-as=... and --with-ld=... as options to configure, but haven't tried them.
Looks like a great idea, but you might want to re-build binutils with the more recent gcc. I've updated my answer, but still need to play with this.
I tried a build similar to what you described in the original post and ran into a similar error. It appears to be an inconsistency between scripts and directory names, basically a bug in the build process; will investigate more...
The response on the binutils mailing list was that building the two directory trees together is only supported when fetching from source control; if you're trying to build released versions of both then it won't (necessarily) work, as shared files / directories may have changed between the two releases. If you want to update this to mention the AS=... LD=... and --with-as=... --with-ld=... options, I'll mark it as correct.
| common-pile/stackexchange_filtered |
Are the highlights in this photo a byproduct of the vintage lens or were water particles introduced during shooting?
The effect seen in this image is popular among garden photographers, and most of the time vintage lenses are used. In this case the M42 Helios-44M 58mm f/2, a lens which I ordered and am anxiously waiting to try out.
I've tried a few times to shoot garden shots against the light source and while sprinkling water, etc with both the Canon f/1.4 50mm and Canon f/2.8 100mm L macro lenses but was never able to achieve such intense, large and perfectly circular highlights.
Can anyone provide the basic recipe on how this is achieved?
Photo source and credit: Frozen 2 by Hana Balasova
Not sure what technique is used, but it does look to me as though the water drops are quite some distance behind the flower. In my mind though, the big out-of-focus drops implies that they should be much nearer to the camera, which then makes me wonder whether it is actually a composite image?
I don't know what you mean by vintage lenses, or what the relation with garden photography is. I don't see the connection.
I've done similar through rain spotted windows before, so basically the way I would tackle this is with a sheet of clear glass, perhaps from a picture frame, treated with something like Rain-X so that water will bead and then spray it with some water. You could also probably use something like glycerin. Then, basically, shoot through the glass.
Thanks John. Will give it a go with a piece of glass. I will try it then post some results. I will also try some shots with the same lens (once it arrives) with and without glass and compare.
@Jakub - I'll be curious to see your results, you may need to play around with it a bit.
Making this photo was not as complicated as you write here. Sprayer, reflector plate, backlight in the morning, f/2. Drops of water flew in front of the flower, but above all behind the flower. That is all. Everything is the work of a genius lens :-) More photos taken with old lenses can be found on my site. Hana
It's always great when the actual photographer shows up to answer questions about their work!
@mattdm Indeed. If there's ever a case of "this should be the accepted answer", it's this one. =)
A few thoughts.
the depth of focus is quite small: this is shot with a low f-number (probably < 4)
different drops have different degrees of "out of focus" (there are some drops that look like they are almost in focus)
many of the drops are not perfect circles
Based on these three observations, I am concluding that not all the drops were the same distance from the lens; however, the non-circular drops were almost certainly stuck to some piece of glass - probably not the lens, as with that low f number they would almost certainly be too far out of focus.
I'm going to guess that this was shot while there was some water in the air, and through a piece of glass. This could happen naturally if you were taking a picture of a flower in the garden (or more likely, the window box) while you are inside, and it is raining. Of course it might have been "staged" - but those are the elements I see in this picture.
The effect you are seeing is called Bokeh. It is how a lens renders out of focus highlights. While glass aberrations and coatings play a part, the blurred highlights tend to take on the shape of the aperture. They are, after all, basically the inverse of a shadow cast by the aperture into the film or sensor.
Since the effect is heightened by shallow depth of field and since depth of field gets shallower with wider apertures, bokeh tends toward circular shapes. However, a small reduction of a modern bladed aperture will reveal a more polygonal structure. Shooting wide open and using a faster shutter speed will give the circular shape.
The size of the highlights is a combination of the size of the light source from which they originate and the brightness of that light source. A fine mist with reflected flash is capable of producing both large and small bokeh. The key is how distant the drops are from the center of focus, not the distance of the drop from the lens.
You should be able to get this effect with any lens, although with today's lenses the effect is more dependent on the aperture than lens aberration, because the quality of the glass and the coatings has improved over the years.
This is a bit outside the scope of your question, but since the bokeh tends to take on the shape of the aperture, it is possible to use cut-outs in front of the lens while shooting wide open to manufacture specific shapes for your bokeh. I once cut a cross shape into a filter mask to obtain cross-shaped bokeh and it worked quite well. In the image below, I tried to get the feel of lotus flowers by using a filter mask into which I'd cut the main shape but left a small spot in the center. Note that the bokeh highlights on the figure's cap also reflect the same shape, but are quite small due to their proximity to the field of focus.
| common-pile/stackexchange_filtered |
Filter stream distinct
I have a stream over a simple Java data class like:
class Developer{
private Long id;
private String name;
private Integer codePost;
private Integer codeLevel;
}
I would like to apply this filter to my stream :
If 2 devs have the same codePost with different codeLevel, keep the dev with codeLevel = 5.
Keep all devs if they have the same codePost with the same codeLevel.
Example
| ID | name | codePost | codeLevel |
| --- | --- | --- | --- |
| 1 | Alan stonly | 30 | 4 |
| 2 | Peter Zola | 20 | 4 |
| 3 | Camilia Frim | 30 | 5 |
| 4 | Antonio Alcant | 40 | 4 |
or in java
Developer dev1 = new Developer(1L, "Alan stonly", 30, 4);
Developer dev2 = new Developer(2L, "Peter Zola", 20, 4);
Developer dev3 = new Developer(3L, "Camilia Frim ", 30, 5);
Developer dev4 = new Developer(4L, "Antonio Alcant", 40, 4);
Stream<Developer> developers = Stream.of(dev1, dev2, dev3, dev4);
Streams are not very well-suited to problems that need to take the state of previous entries into account. You could collect your entries with the 3 args version of Collectors.toMap(), with codePost as key and 'merging' the values to keep the one you want.
developers.collect(Collectors.toMap(Developer::getCodePost, Function.identity(),BinaryOperator.maxBy(Comparator.comparing(Developer::getCodeLevel)))).values();
@Hulk yes that seems good, but how I could filter by the codeExperience in my case?
Well, forget it - the approach shown in the linked answer (stateful Predicate in Stream.filter()) is not applicable to your case, because you cannot take back what you already emitted after the fact. I've retracted my duplicate-close vote. I'd just use a Map - either something along the lines of the comment by Hadi, or simply with a loop instead of a Stream.
keep all devs if Developers has the same codePost with the same codeLevel if there are 4 devs, two have codeLevel = 2, and the other two have codeLevel = 3, which devs should be kept: the ones with the lowest codeLevel, those with the max codeLevel, all of them, or none of them?
As mentioned in the comments, Collectors.toMap should be used here with the merge function (and optionally a map supplier, e.g. LinkedHashMap::new to keep insertion order):
Stream.of(dev1, dev2, dev3, dev4)
.collect(Collectors.toMap(
Developer::getCodePost,
dev -> dev,
(d1, d2) -> Stream.of(d1, d2)
.filter(d -> d.getCodeLevel() == 5)
.findFirst()
.orElse(d1),
LinkedHashMap::new // keep insertion order
))
.values()
.forEach(System.out::println);
The merge function may be implemented with ternary operator too:
(d1, d2) -> d1.getCodeLevel() == 5 ? d1 : d2.getCodeLevel() == 5 ? d2 : d1
Output:
Developer(id=3, name=Camilia Frim , codePost=30, codeLevel=5)
Developer(id=2, name=Peter Zola, codePost=20, codeLevel=4)
Developer(id=4, name=Antonio Alcant, codePost=40, codeLevel=4)
If the output needs to be sorted in another order, values() should be sorted as values().stream().sorted(DeveloperComparator) with a custom developer comparator, e.g. Comparator.comparingLong(Developer::getId) or Comparator.comparing(Developer::getName) etc.
Update
As the devs sharing the same codeLevel should NOT be filtered out, the following (a bit clumsy) solution is possible on the basis of Collectors.collectingAndThen and Collectors.groupingBy:
input list is grouped into a map of codePost to the list of developers
then the List<Developer> values in the map are filtered to keep the devs with max codeLevel
// added two more devs
Developer dev5 = new Developer (5L,"Donkey Hot",40,3);
Developer dev6 = new Developer (6L,"Miguel Servantes",40,4);
Stream.of(dev1, dev2, dev3, dev4, dev5, dev6)
.collect(Collectors.collectingAndThen(Collectors.groupingBy(
Developer::getCodePost
), map -> {
map.values()
.stream()
.filter(devs -> devs.size() > 1)
.forEach(devs -> {
int maxLevel = devs.stream()
.mapToInt(Developer::getCodeLevel)
.max().orElse(5);
devs.removeIf(x -> x.getCodeLevel() != maxLevel);
});
return map;
}))
.values()
.stream()
.flatMap(List::stream)
.sorted(Comparator.comparingLong(Developer::getId))
.forEach(System.out::println);
Output:
Developer(id=2, name=Peter Zola, codePost=20, codeLevel=4)
Developer(id=3, name=Camilia Frim , codePost=30, codeLevel=5)
Developer(id=4, name=Antonio Alcant, codePost=40, codeLevel=4)
Developer(id=6, name=Miguel Servantes, codePost=40, codeLevel=4)
Thanks for your reply, but I need to keep all developers if they have the same codePost with the same codeLevel.
Why didn't you post this requirement in the original post? Are there other cases, like two or more devs having the same codePost but different codeLevel where nobody has codeLevel == 5?
I thought this condition should not be added to the filter, but after testing I realized it should be. Sorry, I have added that to the original post.
| common-pile/stackexchange_filtered |
Derivative of a Wiener integral using Ito formula
Given the process $X_t=\int_0^t g_s\,dW_s$, compute $dX_t$ using the Ito formula.
The Ito formula says that, given:
$$f(t,X_t)=f(0,X_0)+\int_0^t\frac{\partial f}{\partial s}(s,X_s)\,ds+\int_0^t\frac{\partial f}{\partial x}(s,X_s)\,dX_s+\frac{1}{2}\int_0^t\frac{\partial^2 f}{\partial x^2}(s,X_s)\,d\langle X \rangle_s$$
it holds:
$$df(t,X_t)=\frac{\partial f}{\partial t}(t,X_t)\,dt+\frac{\partial f}{\partial x}(t,X_t)\,dX_t+\frac{1}{2}\frac{\partial^2 f}{\partial x^2}(t,X_t)\,d\langle X \rangle_t$$
I already know the result, that is $dX_t=g_t\,dW_t$, but I would like to understand how the Ito formula works. I guess that the first and third terms in the formula for $df$ are zero, but I don't get why. Why is $\frac{\partial f}{\partial t}(t,X_t)=0$ ? Since $X_t$ is a definite integral, I thought that it should be equal to $G(t)-G(0)$, where $G$ is the primitive of $g$, and so, since $G(t)$ is a function of time, its derivative is not zero.
So the question is how to compute these terms:
$$\frac{\partial f}{\partial t}(t,X_t),\quad\frac{\partial f}{\partial x}(t,X_t),\quad \frac{\partial^2 f}{\partial x^2}(t,X_t)\quad ?$$
You should put $f(t,x) = x$. It follows immediately that
$$
\frac{\partial f}{\partial t}(t,x) = 0, \;\frac{\partial f}{\partial x}(t,x) = 1, \; \frac{\partial^2 f}{\partial x^2}(t,x) = 0.
$$ By Ito's formula, we have
$$
X_t = \int_0^t dX_s = \int_0^t g_sdW_s,
$$ which is an obvious result.
Thank you for the answer. Could you please explain why you put $f(t,x) = x$ ? From other exercises I learned to substitute the process with $x$, for example: $$\begin{align*}X_t=e^{\alpha W_t} \quad &\to f(t,x) = e^{\alpha x}\\ Z_t=X_t^2, \text{ with } dX_t=\alpha X_t\,dt+\sigma X_t\,dW_t \quad &\to f(t,x) = x^2\end{align*}$$ So in this exercise I did put $f(t,x)=\int_0^t g_s\,dx$, but I was not able to compute the three partial derivatives. Could you help? Where am I wrong?
Perhaps using $x$ as a variable is confusing you. To see those examples clearly, I will write them as $f(t,w) = e^{\alpha w} \rightarrow X_t = e^{\alpha W_t}=f(t,w)|_{w=W_t}$ and $f(t,x) = x^2 \rightarrow Z_t = X_t^2 = f(t,x)|_{x=X_t}$. In this case, $f(t,x) = x \rightarrow X_t = X_t = f(t,x)|_{x=X_t}$.
Thank you for the clarification. But I still don't understand how it is possible to make that choice. If we have $X_t=e^{\alpha W_t}$, and we put $f(t,x) = x$ (i.e. $X_t = f(t,x)|_{x=X_t}$), then we have $$\frac{\partial f}{\partial t}(t,x) = 0, \;\frac{\partial f}{\partial x}(t,x) = 1, \; \frac{\partial^2 f}{\partial x^2}(t,x) = 0,$$ but it is not correct, because in that case $dX_t=\alpha X_t\,dW_t+\frac{1}{2}\alpha^2 X_t\,dt$. I am missing something I guess.
I'll elaborate on what you are missing. If we put $f(t,x) = x$, then what follows from $X_t = f(t,x)|_{x=X_t}$ is $dX_t = 0\,dt + 1\,dX_t + 0\, d\langle X\rangle_t = dX_t$, not $dX_t = 0\,dt + 1\,dW_t + 0\,d\langle W\rangle_t$. (so the result is correct.) If we let $f(t,w) = e^{\alpha w}$ and $X_t = f(t,w)|_{w=W_t}$, then what follows is $dX_t = 0\,dt + \alpha e^{\alpha W_t}\,dW_t + \frac{1}{2}\alpha^2 e^{\alpha W_t}\,dt.$ You may see that both results are not contradictory.
Thank you, ok maybe now I'm understanding. But letting $f(t,x)=x$ gives us $dX_t=dX_t$, so we don't have the explicit result; while letting $f(t,w)=e^{\alpha w}$ we are able to compute $dX_t$. Is this right?
Yes, it's right. $f(t,x) =x$ can only give us a trivial result. Ito's lemma is useful when $f(t,\cdot)$ describes a non-trivial relation between two (usually different) stochastic processes. It allows us a stochastic integral representation.
Thank you. So to be more clear, shouldn't you put $s$ instead of $t$ in the first two rows of your answer? Because then you plug the partial derivatives (with $s$ instead of $t$) in the formula of $f(t,X_t)$ written in my answer, and finally we obtain the final result. Isn't it?
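To consolidate the non-trivial example worked out in this thread: taking $f(t,w)=e^{\alpha w}$ and applying the Ito formula to $W_t$ (for which $d\langle W\rangle_t = dt$) gives

$$dX_t = \underbrace{0}_{\partial f/\partial t}\,dt + \underbrace{\alpha e^{\alpha W_t}}_{\partial f/\partial w}\,dW_t + \frac{1}{2}\,\underbrace{\alpha^2 e^{\alpha W_t}}_{\partial^2 f/\partial w^2}\,dt = \alpha X_t\,dW_t + \frac{1}{2}\alpha^2 X_t\,dt,$$

whereas the trivial choice $f(t,x)=x$ applied to $X_t$ only returns the tautology $dX_t = dX_t$.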
| common-pile/stackexchange_filtered |
Localization not working in .NET6 Razor Pages project
I was trying to get localization to work in my 'real' project, but was not able to do so. So I created a new stock .NET6 Razor Pages project to test localization in a fresh environment. However I am not quite able to do that there as well. At this point I have read about 6 articles, watched 2 tutorials and read the official documentation which says that it is up to date with .NET6, but I'm not sure about that (partly because the examples use old syntax, i.e. not things that came with .NET6). Every single one of those resources more or less did/said the same things, which is what I have right now, but it just doesn't work.
This is my Program.cs file:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddLocalization(options => options.ResourcesPath = "Resources");
//builder.Services.AddRazorPages(); builder.Services.AddMvc().AddDataAnnotationsLocalization();
builder.Services.AddRazorPages().AddDataAnnotationsLocalization();
var app = builder.Build();
if (!app.Environment.IsDevelopment())
{
app.UseExceptionHandler("/Error");
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
var supportedCultures = new[] { "en" };
var localizationOptions = new RequestLocalizationOptions().SetDefaultCulture(supportedCultures[0])
.AddSupportedCultures(supportedCultures)
.AddSupportedUICultures(supportedCultures);
app.UseRequestLocalization(localizationOptions);
app.UseAuthorization();
app.MapRazorPages();
app.Run();
I have tested the "things I'll mention below" with ...AddRazorPages().AddData... and (AddMvc().AddData... with ...AddRazorPages();) as well. That's why one of those lines is commented.
The way I am testing the localization is just with an OnGet call on the Index page, and restarting the (local) server after every change just to not miss some dumb thing. On the OnGet, I just log the value to the console. But I'm always getting the key. Here is the IndexModel class:
public class IndexModel : PageModel
{
private readonly IStringLocalizer<IndexModel> _localizer;
public IndexModel(IStringLocalizer<IndexModel> localizer)
{
_localizer = localizer;
}
public void OnGet()
{
Console.WriteLine(_localizer["Test"].Value);
}
}
This is what my (relevant) folder structure looks like in the project's directory:
And finally the resx file:
Things I have tested are the .resx file names and the 2 possible locations. I am sure that both are correct, because when both files were active (neither had the .Dup (for duplicate) extension) at the end, I got an error from the compiler that multiple resources point to the same location or something like that.
The names I have tested are the following:
IndexModel.en.resx
Index.en.resx
IndexModel.cshtml.cs.en.resx
Index.cshtml.cs.en.resx
IndexModel.cs.en.resx
Index.cs.en.resx
So you mean you cannot get This is a test! in OnGet?If you try to add ?culture=en in url,can you get the Value?Or you can try to test Using one resource string for multiple classes.
That shouldn't be necessary because I have this line in program.cs: DefaultRequestCulture = new RequestCulture("en"). I have tested your theory anyway, but unfortunately that didn't work, as expected. I have also tested it by using the one resource, but that didn't work either. I guess that my issue has something to do with .NET6 with Razor Pages. I don't have time to properly test other versions or MVC (on 6) though. So I have just created my own localization tool. It ain't pretty but it gets the job done, for now at least. @YiyiYou
@TomekŚmigielski have you figured it out?
@Rico Like I said in the last part of my previous comment.
I have created my completely own localization (only translations actually) tool from scratch. It wasn't the best by far, but it got the job done.
@Rico
I remember I used a Dictionary<string, (enum, string)>
The dictionary key was just the language id (i.e. en, sp, ge etc.), the first value in the tuple was an enum which stored the string ids, and the second was the translation.
The dictionaries were in a standalone class, and I had a method which checked for the language, string id, and returned the translation. The language id was just a global value defined by a user.
Very simple, and suboptimal, but like I said, it got the job done. Since then I didn't need localization in any other of my projects. Good luck!
| common-pile/stackexchange_filtered |
Saving entity with Spring JPA Repository resets enum type value
I have a TimelineEntity entity that uses a HoTimelineType enum with a custom integer value. That custom integer value is stored in the database, implemented as described in Using @PostLoad and @PrePersist Annotations.
Spring JPA Repository is used to save and get entities.
Here is the issue:
@Entity
@Table(name = TABLE_NAME)
@IdClass(TimelineKey.class)
public class TimelineEntity {
public interface Persistence {
String TABLE_NAME = "timelines";
}
@Id
@Column(name = "node_id")
private Long nodeId;
@Id
@Column(name = "timeline_id")
private Long timelineId;
@Column(name = "ho_timeline_type")
private Integer hoTimelineTypeValue;
@Transient
private HoTimelineType hoTimelineType;
public Long getNodeId() {
return nodeId;
}
public void setNodeId(Long nodeId) {
this.nodeId = nodeId;
}
public Long getTimelineId() {
return timelineId;
}
public void setTimelineId(Long timelineId) {
this.timelineId = timelineId;
}
public HoTimelineType getHoTimelineType() {
return hoTimelineType;
}
public void setHoTimelineType(HoTimelineType hoTimelineType) {
this.hoTimelineType = hoTimelineType;
}
public Integer getHoTimelineTypeValue() {
return hoTimelineTypeValue;
}
public void setHoTimelineTypeValue(Integer hoTimelineTypeValue) {
this.hoTimelineTypeValue = hoTimelineTypeValue;
}
@PostLoad
private void postLoad() {
this.hoTimelineType = HoTimelineType.of(hoTimelineTypeValue);
}
@PrePersist
private void prePersist() {
this.hoTimelineTypeValue = hoTimelineType.getValue();
}
}
@Eager
public interface TimelineEntityRepository extends JpaRepository<TimelineEntity, TimelineKey> {
List<TimelineEntity> findByNodeId(Long nodeId);
}
@Autowired
private TimelineEntityRepository timelineEntityRepository;
...
TimelineEntity newTE = new TimelineEntity();
newTE.setNodeId(10L);
newTE.setTimelineId(22L);
newTE.setHoTimelineType(HoTimelineType.TYPE_1);
newTE = timelineEntityRepository.save(newTE);
When the newTE entity is saved, prePersist is invoked, and inside this method hoTimelineType is null, so I get an NPE. nodeId and timelineId are not null. If I stop with a debugger on the last line, outside of prePersist, I see that hoTimelineType has the value I set before.
When I load entities, inserted with test data, everything works fine and both hoTimelineType and hoTimelineTypeValue have not nullable values.
I skipped the code of TimelineKey and HoTimelineType to simplify the example. Can add it, if needed.
What could reset hoTimelineType? What do I miss?
It seems there is no way to control the saving behaviour of the Spring JPA repository proxy.
Possible solutions for the issue:
Via javax.persistence.Converter. It is pretty clear, and the structure of the entity stays simple. I can confirm it works fine with Spring JPA repository generation.
Explicitly set hoTimelineTypeValue before you save an entity. An error-prone solution: every time you save an entity you must think about the difference between hoTimelineTypeValue and hoTimelineType.
You could enrich the setters and getters of the entity class to explicitly control the consistency between the fields. It makes the implementation of entity classes less obvious. You get a more complicated solution for nothing. As a result, an error-prone solution; I do not recommend it either.
Because of the disadvantages of #2 and #3, I do not provide examples for them. It makes no sense.
Example of the solution #1 can be found here: Using JPA 2.1 @Converter Annotation
| common-pile/stackexchange_filtered |
MessageBox.Show in UI Thread
this.Dispatcher.InvokeAsync(
() => {
MessageBoxResult result = MessageBox.Show("111111111111111111111111", "Child process failed", MessageBoxButton.OK);
}
);
this.Dispatcher.InvokeAsync(
() => {
MessageBoxResult result = MessageBox.Show("22222222222222222222222", "Child process failed", MessageBoxButton.OK);
}
);
Why does it show the 22222 message box first and then the 11111 one?
If I change this.Dispatcher.InvokeAsync to this.Dispatcher.Invoke, the order will be right. Does anyone know why?
You are sending MessageBox.Show to the dispatcher. It will execute 11111 first but won't wait until you click OK; it will show the 22222 on top of that. So the order seems right.
When the first MessageBox ("22") appears, try to move it to the side. The other MessageBox should be behind it.
If you care about the order then you need to change your technique, Maybe add messages to a queue and process them from there. If this mimics a real world requirement then messagebox is unlikely to be your best choice to show multiple messages.
I want to know why it won't wait until I click OK when it executes 11111 first.
***When the first MessageBox ("22") appears, try to move it to the side. The other MessageBox should be behind it ***
No, I must click OK and then the 1111 shows.
var op = this.Dispatcher.InvokeAsync(...); op.Wait();
Dispatcher.InvokeAsync is essentially putting a message on the message queue, asking for the method to be run when the UI thread is free.
While the messagebox is showing some messages will still be processed, and this message is probably one of them. So the first messagebox will be displayed first, and a moment later the second one, on top of the first. When you click away the second messagebox it looks like the first one appears.
this.Dispatcher.Invoke is also putting a message on the message queue, but in this case the caller is also waiting for the UI thread to process the message before continuing, so the messageboxes are shown one after the other and not on top.
Thanks for your comment!
I have debugged it. The breakpoints always run into 11111 (but at that point the message box does not show) and then run into 2222, and then the message boxes always show 222 first and then 1111.
@yunate I revised the answer. The messageboxes are probably showing on top of each other.
| common-pile/stackexchange_filtered |
Cannot import declaration file
I'm trying to set up some test for my RESTAPI server. I use typescript.
I tried to import chai and chai-http using the ES6 syntax, but apparently, it's not supported (Github issue here ). I get an error, so I had to require the files directly.
var chai = require('chai')
, chaiHttp = require('chai-http');
import * from '@types/chai-http';
/*
* Both librairies have limited support of ES6 import.
* See https://github.com/DefinitelyTyped/DefinitelyTyped/issues/19480
*/
chai.use(chaiHttp);
import { expect } from 'chai' ;
import 'mocha';
Now the thing is typescript doesn't import @types/chai-http for some reason, so when I try to use expect(res).to.have.status(201);, I get an error, the compiler complain
Error TS2339: Property status does not exist on type Assertion
How do I import the type declaration directly?
Thanks
| common-pile/stackexchange_filtered |
Why does the "m" key disable my text editor?
Sometimes, it's the "o" key, but lately it's been the "m" key.
When I hit the "m" key, either by typing a message letter by letter or using Swype, my editing stops. I cannot recover without closing my editor and getting back in. Sometimes, I have to completely reboot my phone, and that seems to help. When I get back into the message to try again, I can edit until I hit the "m" key again.
Which app is this?
Some phones allow the user to assign single key presses as accelerators to start applications, dial regular contacts, or the like. Is it possible you set something like this? Check your keyboard settings for a keyboard type you may not use regularly. I think I remember this was a possible setting on the Galaxy Stratosphere, or similar. UPDATE: The Stratosphere has this setting under Applications\QuickLaunch. It can accept a combo key press to start any application at the user's discretion. Maybe you have 'm' or 'o' set somehow similar.
| common-pile/stackexchange_filtered |
How to load library scripts in admin from plugins in noConflict wrapper?
For my plugin I need to make a meta box for some custom fields in my CPT (xyz), and to organize the fields, I need to implement tabs. And I was taught to load jQuery/JavaScript using wp_enqueue_script(), so from my plugin, I entered the following (with Chip Bennett's suggestion in mind, I excluded wp_register_script(), used only wp_enqueue_script(), and avoided deregistering built-in scripts):
function my_scripts() {
wp_enqueue_script( 'jquery-lib-scripts', plugins_url( '/js/jquery-1.10.2.min.js', __FILE__ ) );
wp_enqueue_script( 'jquery-ui-for-tabs', plugins_url( '/js/jquery-ui.js', __FILE__ ) );
//wp_enqueue_script( 'plugin-scripts', plugins_url( '/js/plugin-scripts.js', __FILE__ ) );
wp_enqueue_style( 'plugin-style', plugins_url( 'plugin-style.css', __FILE__ ) );
}
add_action('admin_enqueue_scripts', 'my_scripts');
So the files are loading globally in admin panel.
Everything's fine, though I'm fully aware of the built-in WP admin scripts jquery, jquery-ui-core, and jquery-ui-tabs. But somehow my scripts are not conflicting and are working with my tabs in my CPT's post-new.php (Add New) page; but they break the tabs in post.php (the Edit screen of CPT xyz).
I am working in my dev environment on localhost using WAMP, with WP 3.9.1, and I tried deactivating all other plugins and switching to the default Twenty Fourteen theme.
Just found that with my code the tabs are working on all pages, but if I activate any other plugin, even Akismet, it's CONFLICTING. I've learned about noConflict wrappers, but assumed they are only for custom scripts. How can I enqueue library scripts for my plugin in a noConflict wrapper?
I'm not sure that'd work: noConflict is to avoid defining $ because some other popular libraries also use $ (e.g. mootools), it's not AFAIK intended to support entirely different copies of jQuery running in parallel. Do you really need that? Can you really not just use the provided jquery and jquery-ui-tabs? They're very recent, and jQuery is 1.11.1 so newer than you're loading.
If you really did need these versions your best bet would be to paste both scripts into a new file and wrap them in code that saves the old values of $ and jQuery, loads your scripts (jQuery, UI and plugin) then restores the old versions. I can't promise that will completely not conflict (e.g. interfere with page-level events set up by the other copy) but I think it's your best hope.
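The save-and-restore wrapper described above can be sketched in plain JavaScript. The objects below are dummies standing in for the two jQuery copies, so all names and version strings here are illustrative only:

```javascript
// "win" stands in for the browser's window object.
var win = {};
win.jQuery = win.$ = { version: '1.11.1', from: 'wordpress' };

// --- begin wrapper (this would surround the pasted jQuery + UI + plugin code) ---
var savedJQuery = win.jQuery;   // save the copies already on the page
var saved$ = win.$;

win.jQuery = win.$ = { version: '1.10.2', from: 'plugin' }; // "load" our own copy
var pluginJQuery = win.jQuery;  // private reference the plugin keeps using

win.jQuery = savedJQuery;       // restore the originals for everyone else
win.$ = saved$;
// --- end wrapper ---

console.log(win.jQuery.from);     // wordpress
console.log(pluginJQuery.from);   // plugin
```

In the real wrapper the bundled jQuery and UI sources would be pasted where the "load" line sits; as noted above, this still cannot guarantee freedom from conflicts with page-level events set up by the other copy.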
I don't know WHY ON EARTH, but I COULDN'T EVER make it work by doing wp_enqueue_script('jquery'). :(
I'm NOT actually answering the question; I'm putting here how I got my solution without going down that route:
First of all, I got the initial solution to the enqueue problem from a WPSE answer. It's working like a charm. But the problem of conflicts re-occurred when I tried embedding the Media Uploader into a field with wp_enqueue_media(); it wasn't working, even though on that page the Featured Image jQuery uploader was working.
I tried deactivating all my plugins, and switched back to Twenty Fourteen with define('SCRIPT_DEBUG', true). And at the suggestion of @G.M. I tried the following code provided by him:
function header_sent() { headers_sent() && die('Sent!'); }
add_action( 'init', 'header_sent' );
to check whether any header was sent already, but I got the green signal. Then after multiple tries, and @Rup's comment, I enqueued all the latest jQuery things (jQ 1.11.1 and jQUI 1.10.4). But the problem still persisted. With this broad effort I at least found that my plugin is not conflicting with Akismet.
I revamped the wp_enqueue_media() with this SO answer. But when I activated WordPress SEO, though my plugin's jQuery kept working, WP SEO's broke. So it's a clear conflict with WordPress SEO.
Then I switched the meta box's priority from 'high' to 'low', but nothing happened. Then something clicked in my mind, and I enqueued my scripts in_footer with:
wp_enqueue_script( 'jquery-lib-scripts', plugins_url( '/js/jquery-1.11.1.min.js', __FILE__ ), '', '', true );
wp_enqueue_script( 'jquery-ui-for-tabs', plugins_url( '/js/jquery-ui-1.10.4.min.js', __FILE__ ), '', '', true );
That solved my problem here.
(But I actually don't know what I'd do if the scripts conflict with some other plugin in the future :( )
DeclarationError: Identifier not found or not unique. --> tests/4_Ballot_test.sol:66:9: | 66 | Request storage request = requests[index]; | ^^^^^^^
I can't find any typos in my code, but it keeps returning this error message - DeclarationError: Identifier not found or not unique. --> tests/4_Ballot_test.sol:66:9: | 66 | Request storage request = requests[index]; | ^^^^^^^
The function finalizeRequest has the same code and it gets no error. Can anyone help?
Thanks
// SPDX-License-Identifier: AFL-3.0
pragma solidity ^0.8.7;
contract CampaignFactory {
address[] public deployedCampaigns;
function createCampaign(uint minimum) public {
address newCampaign = address (new Campaign(minimum, msg.sender));
deployedCampaigns.push(newCampaign);
}
function getDeployedCampaigns() public view returns (address[] memory) {
return deployedCampaigns;
}
}
contract Campaign {
struct Request {
string description;
uint value;
address recipient;
bool complete;
uint approvalCount;
mapping(address => bool) approvals;
}
uint numRequests;
mapping (uint => Request) requests;
address public manager;
uint public minimumContribution;
mapping(address => bool) public approvers;
uint public approversCount;
modifier restricted() {
require(msg.sender == manager);
_;
}
constructor (uint minimum, address creator) {
manager = creator;
minimumContribution = minimum;
}
function contribute() public payable {
require(msg.value > minimumContribution);
approvers[msg.sender] = true;
approversCount++;
}
function createRequest (string memory description, uint value,
address recipient) public{
Request storage r = requests[numRequests++];
r.description = description;
r.value = value;
r.recipient = recipient;
r.complete = false;
r.approvalsCount = 0;
}
}
function approveRequest(uint index) {
Request storage request = requests[index];
require(approvers[msg.sender]);
require(!request.approvals[msg.sender]);
request.approvals[msg.sender] = true;
request.approvalCount++;
}
function finalizeRequest(uint index) restricted {
Request storage request = requests[index];
require(request.approvalCount > (approversCount / 2));
require(!request.complete);
request.recipient.transfer(request.value);
request.complete = true;
}
// SPDX-License-Identifier: AFL-3.0
pragma solidity ^0.8.7;
contract CampaignFactory {
address[] public deployedCampaigns;
function createCampaign(uint minimum) public {
address newCampaign = address(new Campaign(minimum, msg.sender));
deployedCampaigns.push(newCampaign);
}
function getDeployedCampaigns() public view returns (address[] memory) {
return deployedCampaigns;
}
}
contract Campaign {
struct Request {
string description;
uint value;
address payable recipient;
bool complete;
uint approvalCount;
mapping(address => bool) approvals;
}
uint numRequests;
mapping (uint => Request) public requests;
address public manager;
uint public minimumContribution;
mapping(address => bool) public approvers;
uint public approversCount;
modifier restricted() {
require(msg.sender == manager);
_;
}
constructor (uint minimum, address creator) {
manager = creator;
minimumContribution = minimum;
}
function contribute() public payable {
require(msg.value > minimumContribution);
approvers[msg.sender] = true;
approversCount++;
}
function createRequest (string memory description, uint value,
address payable recipient) public{
Request storage r = requests[numRequests++];
r.description = description;
r.value = value;
r.recipient = recipient;
r.complete = false;
r.approvalCount = 0;
}
function approveRequest(uint index) public {
Request storage request = requests[index];
require(approvers[msg.sender]);
require(!request.approvals[msg.sender]);
request.approvals[msg.sender] = true;
request.approvalCount++;
}
function finalizeRequest(uint index) public restricted {
Request storage request = requests[index];
require(request.approvalCount > (approversCount / 2));
require(!request.complete);
request.recipient.transfer(request.value);
request.complete = true;
}
}
I didn't check the logic, but I made the compilation pass.
I guess the biggest problem is that you wrote one extra "}" in the function createRequest.
I had r.approvalCount = 0 wrong, as identified by MERN too, plus the curly braces. It's working perfectly now, thank you.
I fixed minor problems, so, you can use below code.
// SPDX-License-Identifier: AFL-3.0
pragma solidity ^0.8.7;
contract CampaignFactory {
address[] public deployedCampaigns;
function createCampaign(uint256 minimum) public {
address newCampaign = address(new Campaign(minimum, msg.sender));
deployedCampaigns.push(newCampaign);
}
function getDeployedCampaigns() public view returns (address[] memory) {
return deployedCampaigns;
}
}
contract Campaign {
struct Request {
string description;
uint256 value;
address payable recipient;
bool complete;
uint256 approvalCount;
mapping(address => bool) approvals;
}
uint256 numRequests;
mapping(uint256 => Request) requests;
address public manager;
uint256 public minimumContribution;
mapping(address => bool) public approvers;
uint256 public approversCount;
modifier restricted() {
require(msg.sender == manager);
_;
}
constructor(uint256 minimum, address creator) {
manager = creator;
minimumContribution = minimum;
}
function contribute() public payable {
require(msg.value > minimumContribution);
approvers[msg.sender] = true;
approversCount++;
}
function createRequest(
string memory description,
uint256 value,
address payable recipient
) public {
Request storage r = requests[numRequests++];
r.description = description;
r.value = value;
r.recipient = recipient;
r.complete = false;
r.approvalCount = 0;
}
function approveRequest(uint256 index) public {
Request storage request = requests[index];
require(approvers[msg.sender]);
require(!request.approvals[msg.sender]);
request.approvals[msg.sender] = true;
request.approvalCount++;
}
function finalizeRequest(uint256 index) restricted public {
Request storage request = requests[index];
require(request.approvalCount > (approversCount / 2));
require(!request.complete);
request.recipient.transfer(request.value);
request.complete = true;
}
}
r.approvalsCount = 0; is wrong, r.approvalCount = 0; is correct
Yes, plus the curly braces I misplaced. It's working now, thank you!
With the https://marketplace.visualstudio.com/items?itemName=JuanBlanco.solidity plugin for Visual Studio Code, you can solve these problems easily.
I had it disabled earlier and forgot to enable it again. Thanks.
How can I export Apache Solr settings?
I selected entity types and bundles in admin/config/search/apachesolr. Is there any way to featurize or export these settings? If it can be done with the Strongarm module, which variable should I use?
Why are allyl halides reactive towards both SN1 and SN2 reactions?
I know that allyl halides are reactive towards SN1 reactions due to the stabilization of carbocation due to resonance but why are they reactive towards SN2 too?
I actually read that allyl halides are more reactive towards SN2 than primary alkyl halides, whom I thought were most reactive.
Can you add what you just said above to your question?
In a reaction, SN2 as well as SN1 pathways take place. It only matters which reaction pathway gives the major product. The answer to your question depends on the type of base used, the substrate and the overall stability of the product. Allyl halides are also reactive towards SN2' reactions, i.e. allylic nucleophilic substitution reactions. There are a lot of possibilities.
Keep in mind that the electronegativity order of hybridized carbons is $\ce{sp > sp2 > sp3}$. So in an allylic halide, such as $\ce{CH2=CH-CH2Br}$, the alpha carbon is much more partially positive than the alpha carbon in $\ce{CH3-CH2-CH2Br}$. Therefore, the attacking nucleophile will be attracted to the alpha carbon more in the allylic halide compound than in the primary halide. Hence, allylic halides tend to follow SN2 pathways more than primary halides.
Can you explain why are they more reactive than primary halides in SN2 reactions?
@KirtiAgrawal I have edited my answer into the question. I hope it helped you! But next time please give at least one example and describe your problem instead of making your question so broad. This will help us answer you much better!
How to send email with attachment of file without extension?
I created a C# console app and I can't send an email with a file attachment named "testfile" that has no extension.
It successfully sends, but with only the first attachment ("test.png")
How can I send this file?
Here is my code:
internal static void SendTest()
{
MailMessage mail = new MailMessage("<EMAIL_ADDRESS>", "<EMAIL_ADDRESS>", "Success", "Attachment email");
SmtpClient SmtpServer = new SmtpClient("smtp.gmail.com", 587);
SmtpServer.Credentials = new NetworkCredential("<EMAIL_ADDRESS>", "BrudoTass");
SmtpServer.EnableSsl = true;
string test1 = @"C:\Users\Admin\Desktop\test.png"; //Picture
string test2 = @"C:\Users\Admin\Desktop\testfile"; //File without extension
var attachment1 = new Attachment(test1);
var attachment2 = new Attachment(test2);
mail.Attachments.Add(attachment1);
mail.Attachments.Add(attachment2);
SmtpServer.Send(mail);
}
I think it relates to Gmail's policy about files. You cannot send file(s) without an extension.
I have just tried this with .NET 2.0 and .NET 4.5 and it works with both (using my own network credentials of course). Your code works without an issue for me: I see both attachments, the one with an extension and the one without. Which .NET framework version are you using, and what exactly is in that file (I tried it both with an empty extension-less file and one with some text in it)?
This is probably not a thing but I think the default extension is a .file
Try This way
foreach (MessageAttachment ma in msg.Attachments)
mail.Attachments.Add(BuildAttachment(ma));
private Attachment BuildAttachment(MessageAttachment ma)
{
Attachment att = null;
if (ma == null || ma.FileContent == null)
return att;
att = new Attachment(new MemoryStream(ma.FileContent), ma.FileName + ma.FileType, ma.FileType.GetMimeType());
att.ContentDisposition.CreationDate = ma.CreationDate;
att.ContentDisposition.ModificationDate = ma.ModificationDate;
att.ContentDisposition.ReadDate = ma.ReadDate;
att.ContentDisposition.FileName = ma.FileName;
att.ContentDisposition.Size = ma.FileSize;
return att;
}
Replacing the colon operator with an equivalent vectorizable solution
My current code does something like this:
for offset = 0:0.9:max_offset
x = offset:step_size:max_value;
[...]
end
I would like to vectorize and remove the for loop to make it faster, but if I try making offset a vector, the colon operator on the second line is equivalent to doing
x = offset(1):step_size:max_value;
What is the most efficient way to achieve the desired result, i.e. get
x = [ 0:step_size:max_value;
0.9:step_size:max_value;
1.8:step_size:max_value; ... ]
assuming I don't know max_offset, and therefore the number of rows I want in x?
Since the length of x would vary from iteration to iteration, you can't store all x's in matrix form, and that would mostly mean using cell arrays, which won't really help with performance, unless you actually want x as cells. So, rather, you must look to make the rest of the code inside the loop efficient.
Since each vector will be a different size, they won't fit in a matrix easily. You will have to put them in a cell array, like so:
offset=0:.9:max_offset;
x=arrayfun(@(k) k:step_size:max_value,offset,'UniformOutput',false)
and rather than referring to rows of x by x(i,:) for a matrix, you would do x{i} to get the right vector out.
Is this faster than just keeping the for loop?
I'm not sure. For smaller sized problems it could be, but the performance isn't likely to be much different. Actually, I've just thought of another way: do x=0:step_size:max_value, then instead of getting the i-th row of the x matrix, x(i,:), you can just do x(i:end) to get the values that you want.
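For illustration only (outside MATLAB), the ragged cell-array idea above can be sketched in Python; the step and offset values below are made up:

```python
# Each offset produces a row of a different length, so the rows go into a
# ragged list -- the Python analogue of the cell array built by arrayfun.
step_size = 0.3
max_value = 2.0
offsets = [0.0, 0.9, 1.8]          # like 0:0.9:max_offset

def colon(start, step, stop):
    """Inclusive float range, like MATLAB's start:step:stop."""
    out = []
    v = start
    while v <= stop + 1e-9:        # epsilon guards against float round-off
        out.append(round(v, 10))
        v += step
    return out

x = [colon(o, step_size, max_value) for o in offsets]
print([len(row) for row in x])     # rows shrink as the offset grows
```

As in the MATLAB case, the varying row lengths are exactly why a plain rectangular matrix can't hold the result.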
How to get component attached resources programmatically?
JSF: how can I get a component's attached resources programmatically? For example,
<h:componentA>
<h:outputStylesheet library="foo" name="bar.css" target="head"/>
</h:componentA>
How do I get the resources attached to componentA inside the component renderer?
public void encodeBegin(FacesContext context, UIComponent componentA) {
...
}
It seems that the facelet handler moves all such resources to UIViewRoot.
But from UIViewRoot.getComponentResources(context, "head"), we cannot tell which
resources are attached to which components.
why do you need this?
For example, a custom <h:head> renderer needs to render resources attached to <h:head>, but not other components' resources.
UIComponent should have a method like getAttachedResources(). It is better not to lose information present in facelet pages.
How to count number of words found in a string in c
I'm trying to write a program which counts the number of words in a string; however, the program I wrote counts the number of spaces. How do I construct this program, as efficiently as possible, to count the number of words? What do I do if, say, the string contains 4 words and 8 spaces, or if it contains no words and only spaces? What am I missing/doing wrong?
#include <stdio.h>
#include <cs50.h>
#include <ctype.h>
#include <string.h>
int main(void)
{
string text = get_string("Text: ");
int count_words = 0;
for (int i = 0, n = strlen(text); i < n; i++)
{
if (isspace (text[i]))
{
count_words++;
}
}
printf("%i\n", count_words + 1);
}
Detect and count the beginning of words, the space to non-space transition:
int count_words = 0;
bool begin = true;
for (int i = 0, n = strlen(text); i < n; i++) {
if (isspace (text[i])) {
begin = true;
} else {
if (begin) count_words++;
begin = false;
}
}
can you please explain to me what you did?
Try this condition to count words:
if(text[i]!=' ' && text[i+1]==' ')
Count=count+1;
If you don't mind explaining what you did? It works, but I don't see why?
Bro, I have placed the condition for checking "not a space" and "a space after our word" so that we can slice out the word.
The condition is: check if there is "not a space" at i. Then check if there is a space after i; if that's true, it means we have sliced out our word. If you still don't get it, let me know.
Don't forget to mark the answer correct, but only if you find it helpful.
If the input does not contain any spaces like "Hello", this approach will report 0 rather than 1.
Setting a data/spend limit for Looker Studio user with @DS_USER_EMAIL parameter
We estimate Looker Studio reports spend by user independently.
In order to limit excessive querying through dashboards users, I added below to the original BigQuery Custom SQL source.
with check_user as (
select
case when count(*) > 0 then true else false end as continue_query
from ((
select
looker_user,
allowed --flag that is true if user allowed to query
from `user_spend`
where looker_user = @DS_USER_EMAIL
)
union distinct (
select @DS_USER_EMAIL, true --always-true for new users that haven't appeared in our tracking
))
)
select * from table1 --original custom query
cross join check_user
where check_user.continue_query = true
Is there a better way to achieve the same effect, i.e. discourage dashboard users from querying more?
How could I also prevent BigQuery billing by the disallowed Looker Studio users? Currently, they receive empty results, but query is still executed against BigQuery and charged.
Update items in a listview in real time with Xamarin forms pcl - c #
My goal is to update a ListView of records in real time.
in the load page i set:
List<Myobjects> myitems = new List<Myobjects>();
...
...
listView.IsVisible = false;
listView.RowHeight = 55;
listView.IsPullToRefreshEnabled = true;
listView.Refreshing += ListView_Refreshing;
listView.ItemTapped += ListView_ItemTapped;
listView.SeparatorVisibility = SeparatorVisibility.None;
listView.SeparatorColor = Color.Black;
listView.ItemsSource = myitems;
every 10 seconds (with API) they will update the data, only a few records, randomly.
My goal is to update the items without refreshing the ListView ... and without setting ItemsSource to null and then reassigning the data.
public async void setSuperficie()
{
//here i receive the result of API (every 10 seconds) and update record listview.
}
I tried iterating the ItemsSource with a for loop and updating the data, but it doesn't work.
Is it possible to do:
listView.ItemsSource[index].Property = Value;
What is "myitems"?
You have to use a ObservableCollection
ObservableCollection<MyModel> myitems = new ObservableCollection<MyModel>();
then
public async void setSuperficie()
{
//here i receive the result of API (every 10 seconds) and update record listview.
// here you have to modify the myitems collection, adding or removing items
myitems.Add(mynewitem);
}
you should also implement INotifyPropertyChanged. For this I suggest using Fody.
myitems is a List. Maybe it is better to use an ObservableCollection? But how do I update a record in the list? With your method I add a record to the list, but it doesn't update. P.S.: are you Italian?
Brianzolo. Yes, if you have to automatically update a UI with a ListView, you don't have to use List but ObservableCollection. It solves your UI's update problems for Add and Delete. To update an item (modify a property in your model) you have to use INotifyPropertyChanged: when a property's value changes, the UI is automatically updated https://developer.xamarin.com/guides/xamarin-forms/xaml/xaml-basics/data_binding_basics/
SQL/Navision DateTime Issue
I don't know if anyone else has experienced this issue before.
I have DateTime data in an MSSQL table, but when displaying the same data in Navision 2013, it displays the time wrongly.
2018-12-14 08:20:22.000 is being displayed as 14-12-18 11:20:22 AM
Any suggestions ?
Thanks in Advance
Looks like some kind of time zone conversion.
Check current time zone difference with SELECT DATEDIFF(MINUTE, GETDATE(), GETUTCDATE());
The time stored in the SQL database will be stored in UTC. However, since you're based in Kenya (I assume from your profile), your time zone is UTC+3. Navision (Dynamics NAV) automatically converts UTC time into your local time zone.
So if you're writing to the database directly without going through Navision you should use UTC time to save the time.
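The 3-hour shift can be reproduced with a short sketch (Python here just for illustration; the UTC+3 offset is assumed from the question):

```python
from datetime import datetime, timedelta, timezone

# Value as stored in SQL (assumed to be UTC)
stored_utc = datetime(2018, 12, 14, 8, 20, 22, tzinfo=timezone.utc)

# NAV renders it in the client's local time zone, here UTC+3 (East Africa Time)
eat = timezone(timedelta(hours=3))
displayed = stored_utc.astimezone(eat)

print(displayed.strftime("%d-%m-%y %I:%M:%S %p"))  # 14-12-18 11:20:22 AM
```

This matches the displayed value from the question, which supports the time-zone-conversion explanation.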
How to keep 2 digits after decimal in a float through javascript
I've a number as 0.233333333333
I want it to show 0.23 instead.
I know the function named toFixed() for that purpose.
I can't find the exact HTML element to add an event handler to.
I want to manipulate the DOM with jQuery/JavaScript so that the toFixed function processes the 0.23333333333 string and returns it as 0.23.
Thanks.
I hope I made myself clear.
Can you show the code? That helps us to help you.
"I hope I made myself clear." - No, not really. Where does the number come from? Where do you want to "return it"? Please show some of your code...
Yap. Sure. It's an Wp Woocommerce site. Here's the Url to show you what I want. http://devhj.linkinguplocal.com/product/action-aire-compact-adjustable-folding-fan/ . I want to manipulate the element 0.2323232323 to show as 0.23 .
OK, from the website you linked to it seems the value is in a particular element as follows:
<del>$0.23333333333333</del>
You can fix that formatting in several ways, e.g.:
$(document).ready(function() {
$("del").text(function(i, t) {
return "$" + (+t.slice(1)).toFixed(2);
});
});
That is, skip the dollar sign by taking the current text from the second character onwards with .slice(1), then use the unary plus operator to convert that to a number so that you can use .toFixed(2), then concatenate that back after a dollar sign.
Demo: http://jsfiddle.net/u5Qmv/
Or use a regex .replace() to just keep the part you want:
return t.replace(/(\$\d+\.\d\d).*/, "$1");
Demo: http://jsfiddle.net/u5Qmv/1/
Or .slice() from the start of the string through to two characters after the index of the . (if there's a possibility there will be no . you need to test for it and only .slice() if there is one - I'll leave that to you):
return t.slice(0, t.indexOf('.') + 3);
Demo: http://jsfiddle.net/u5Qmv/2/
Having said all of that, you're better off doing this server-side.
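The core string handling above can be checked outside jQuery and the page; a minimal sketch:

```javascript
// Strip the "$", coerce the remainder to a number, round to 2 decimals,
// and put the "$" back -- the same steps the .text() callback performs.
function formatPrice(text) {
  return "$" + (+text.slice(1)).toFixed(2);
}

console.log(formatPrice("$0.23333333333333")); // $0.23
```

Note that toFixed rounds rather than truncates, which is usually what you want for prices.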
Thanks a lot. I know the server side would be the best place, but my client doesn't have the time I think it'd take me to figure out the fixes. OK. I want to display everything, including the dollar sign, as well. I believe it'll help me.
parseFloat(parseFloat("0.233333333333").toPrecision(2))
You can use the .html() function in jQuery to set or get values of a DOM element - http://api.jquery.com/html/
$(selector).html(parseFloat(parseFloat("0.233333333333").toPrecision(2)))
Why the double parseFloat? I think OP only wants to display it, not use it for calculation.
Question is not very clear, if he wants to only display then I will edit it
I think so, not sure.
I know the toFixed function does the job, but the primary concern is how to select the HTML from outside the DOM. jQuery has selectors but doesn't have a method like toFixed or toPrecision.
Thanks.
Better, do it on the server side if you're serving HTML files. If your site is a single-page application, you can do it during template rendering.
Worst case, do it in jQuery's ready handler:
$(function() {
// search for elements which contain the value, iterate over them and update each element
});
Howto migrate from networking to systemd-networkd with dynamic failover
Systemd's systemd-networkd can be used to replace the existing networking system on Raspbian.
How does it work with Raspbian on a Raspberry Pi with two interfaces for ethernet and wlan? Can I also realize dynamic failover for them?
Tested on a Raspberry Pi 4B with
Raspbian Buster Lite 2020-02-13 updated on 2020-04-11.
Updates done with sudo apt update && sudo apt full-upgrade && sudo reboot.
It will not work with Raspbian Stretch!
Here you will find the last tested revision for Raspbian Stretch Lite.
♦ Abstract
Using `systemd-networkd` instead of default `dhcpcd` is of course possible. But it is not meaningful in all cases.
networkd is a small and lean service to configure network interfaces,
designed mostly for server use cases in a world with hotplugged and
virtualized networking. Its configuration is similar in spirit and
abstraction level to ifupdown, but you don't need any extra packages
to configure bridges, bonds, vlan etc. It is not very suitable for
managing WLANs yet; NetworkManager is still much more appropriate for
such Desktop use cases. [1]
But for a raspi lying near a TV or amplifier and doing its work 24/7 for streaming audio or video, or for a camera etc., systemd-networkd is a good choice. But you have to do a complete switch. There is no way to mix it with networking and/or dhcpcd. Please also note that NetworkManager isn't supported by Raspbian out of the box.
♦ Step 1: Preparation
For reference I use a fresh flashed SD Card from **Raspbian Buster Lite**.
I will pay attention to a headless installation only, with ssh. If you are doing this, double-check for typos and the like; otherwise you may be lost with a broken connection. If you want a headless installation, then look at SSH (Secure Shell) and follow section 3. Enable SSH on a headless Raspberry Pi (add file to SD card on another machine).
Disable the old stuff. Don't stop any service, only disable them! So it will take effect only on the next boot. To do it, just follow
Use systemd-networkd for general networking, but only section ♦ Quick step, and come back here.
♦ Step 2: Setup the wired ethernet interface (eth0)
Create this file with your settings. You can just copy and paste this in one block to your command line, beginning with `cat` and including both EOFs (the delimiter EOF will not become part of the file):
pi@raspberrypi: ~$ sudo -Es # if not already done
root@raspberrypi: ~# cat >/etc/systemd/network/04-eth.network <<EOF
[Match]
Name=e*
[Network]
# to use static IP (with your settings) toggle commenting the next 8 lines.
#Address=<IP_ADDRESS>/24
#DNS=<IP_ADDRESS> <IP_ADDRESS>
#[Route]
#Gateway=<IP_ADDRESS>
#Metric=10
DHCP=yes
[DHCP]
RouteMetric=10
EOF
♦ Step 3: Setup wlan interface (wlan0)
Create this file with your settings:
root@raspberrypi:~ # cat >/etc/systemd/network/08-wifi.network <<EOF
[Match]
Name=wl*
[Network]
# to use static IP (with your settings) toggle commenting the next 8 lines.
#Address=<IP_ADDRESS>/24
#DNS=<IP_ADDRESS> <IP_ADDRESS>
#[Route]
#Gateway=<IP_ADDRESS>
#Metric=20
DHCP=yes
[DHCP]
RouteMetric=20
EOF
Setup wpa_supplicant with this file and your settings for country=, ssid= and psk= and enable it:
root@raspberrypi:~ # cat >/etc/wpa_supplicant/wpa_supplicant-wlan0.conf <<EOF
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=DE
network={
ssid="TestNet"
psk="realyNotMyPassword"
key_mgmt=WPA-PSK
proto=RSN WPA
}
EOF
root@raspberrypi:~ # chmod 600 /etc/wpa_supplicant/wpa_supplicant-wlan0.conf
root@raspberrypi:~ # systemctl disable wpa_supplicant.service
root@raspberrypi:~ # systemctl enable wpa_supplicant@wlan0.service
root@raspberrypi:~ # rfkill unblock 0
root@raspberrypi:~ # exit
root@raspberrypi:~ $
Reboot and good luck ;-)
It is possible that the RasPi gets a new IP address, so you may have to look it up for the next connection with ssh.
♦ Step 4: Bonding wired and wifi interface for failover
You should have both interfaces set up and running as described above. It is no problem when both interfaces are up. The kernel will use the interface with the lowest **metric** first. Here the ethernet interface will be used first. But this has a great disadvantage. As you can see with `ip -4 -brief addr`, each interface has its own IP address. If the kernel switches interfaces because one has gone down, it also uses the new source IP address. This will break any established stateful TCP communication, e.g. ssh, streaming, login sessions and so on. You can open a new connection from the changed source IP address, but the old connections are stuck. That isn't really what we want.
The solution of this problem is bonding. We create an interim interface bond0 that does not change its settings. The wired and wifi interface will switch to bond0.
First disable the single ethernet and wifi network files:
pi@raspberrypi:~ $ sudo -Es
root@raspberrypi:~ # cd /etc/systemd/network/
root@raspberrypi:~ # mv 04-eth.network 04-eth.network~
root@raspberrypi:~ # mv 08-wifi.network 08-wifi.network~
Then setup bonding with these four files:
root@raspberrypi:~ # cat >/etc/systemd/network/02-bond0.netdev <<EOF
[NetDev]
# status: cat /proc/net/bonding/bond0
Name=bond0
Kind=bond
[Bond]
Mode=active-backup
# primary slave is defined in *eth.network
MIIMonitorSec=500ms
MinLinks=1
EOF
root@raspberrypi:~ # cat >/etc/systemd/network/12-bond0-add-eth.network <<EOF
[Match]
Name=e*
[Network]
Bond=bond0
PrimarySlave=yes
EOF
root@raspberrypi:~ # cat >/etc/systemd/network/16-bond0-add-wifi.network <<EOF
[Match]
Name=wl*
[Network]
Bond=bond0
EOF
root@raspberrypi:~ # cat >/etc/systemd/network/20-bond0-up.network <<EOF
[Match]
Name=bond0
[Network]
# to use static IP (with your settings) toggle commenting the next 4 lines.
DHCP=yes
#Address=<IP_ADDRESS>/24
#Gateway=<IP_ADDRESS>
#DNS=<IP_ADDRESS> <IP_ADDRESS>
EOF
But this is not the whole story. systemd-networkd checks that all interfaces are up before proceeding with startup of depending services. With bonding we have slave interfaces (eth0, wlan0) that never signal that they are up. It's only the bond interface that comes up if at least one of its slaves is connected. So the check will fail, with errors and a long wait on bootup. To manage this you have to modify the systemd-networkd-wait-online.service. For how to do it, please follow the instructions at
A start job is running for wait for network to be configured
Once that is finished, it's time to reboot.
It is possible that the RasPi gets a new IP address, so you may have to look it up for the next ssh connection.
Then you can check the bonding status:
pi@raspberrypi:~ $ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0 (primary_reselect always)
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: dc:a6:32:4c:08:1b
Slave queue ID: 0
Slave Interface: wlan0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: dc:a6:32:4c:08:1c
Slave queue ID: 0
Test bonding: watch the bonding status above while downing an interface; you will see that Currently Active Slave: changes and the MII Status: of the downed slave shows down.
If you are headless, don't down both interfaces together ;-)
pi@raspberrypi:~ $ ip addr
pi@raspberrypi:~ $ sudo ip link set eth0 down
pi@raspberrypi:~ $ sudo ip link set eth0 up
pi@raspberrypi:~ $ sudo ip link set wlan0 down
pi@raspberrypi:~ $ sudo ip link set wlan0 up
Be patient after setting wlan0 up. It may take some time to reconnect to the router and manage bonding. During this time ssh will not respond.
For a more in-depth review of bonding you may have a look at Dynamic network failover prioritize wifi over ethernet.
references:
[1] /usr/share/doc/systemd/README.Debian.gz
[2] man systemd.netdev
[3] man systemd.network
[4] https://wiki.debian.org/Bonding
[5] https://www.kernel.org/doc/Documentation/networking/bonding.txt
Note with regard to the first quote up top: Raspbian never used NetworkManager. It is more an artifact of Fedora and derived systems (which was the first place that systemd, a Redhat supported project, was deployed).
Also applicable to other Debian based systems not necessarily running on ARM :) Thank you for the concise explanation.
This worked for me on stretch, but on buster I encountered the problem that my device can't resolve any domain. Any ideas what could be the cause?
@user5950 Maybe something has changed with buster? I will look at it. Just a moment please.
@Ingo Thank you for the fast reply. I could resolve the problem by adding the line DNS=<IP_ADDRESS> to /etc/systemd/network/04-eth.network. (I am using a setup with static IP)…
To elaborate the answer from @Ingo: please consider using the link
ln -s /run/systemd/resolve/stub-resolv.conf resolv.conf
instead of the link to /run/systemd/resolve/resolv.conf. This enables the "integrated" DNS stub and enables things like per-interface DNS servers, which could be important if you use VPNs that provide their own DNS server with non-public entries.
Very interesting, thank you for feedback. Is there any documentation and/or sources of this? If so please edit your answer (using the link below it) and add it there.
Running foreach without returning any value in R
I have a function doSomething() which runs in a foreach loop and as a result saves some calculations as .csv files. Hence I have no need for a return value of foreach, in fact I don't want a return value because it clutters my memory to the point where I cannot run as many iterations as I would want to.
How can I force foreach to not have a return value, or delete the return values of the iterations?
Here is a minimal example that illustrates my problem:
cl <- parallel::makePSOCKcluster(1)
doParallel::registerDoParallel(cl)
"%dopar%" <- foreach::"%dopar%"
doSomething <- function () {
a <- as.numeric(1L)
}
foreach::foreach (i = 1:4) %dopar% {
doSomething()
}
The output is:
[[1]]
[1] 1
[[2]]
[1] 1
[[3]]
[1] 1
[[4]]
[1] 1
What about doSomething(); NULL?
This would return a list of NULLs
I think your issue is not the return value; it is the memory which causes you trouble, right?
@Freakazoid yes, you are right. I ran some code over night on 31 cores and it used up nearly all of my 65GB of memory
Parallel computing in R works (as far as I have experienced) such that memory is allocated for each cluster node. That means if you have a big data set which each node needs for calculation, this data will be allocated multiple times. This leads to high RAM consumption. Since you want to write the output in each loop run and throw away the result afterwards, you can try the rm function and call the garbage collection in each function call. I am not sure if this helps, but at least you can try.
Thank you for your suggestion, I will try this. However, I see that the used memory increases somewhat linearly over time, which leads me to believe that the gigantic list created by foreach as a return value is the problem.
@Freakazoid Indeed using rm() and gc() in every worker yielded the desired result! Thank you for your help, if you want to add your own answer, I would accept it.
Parallel computing in R works (as far as I have experienced) such that memory is allocated for each cluster node.
That means if you have a big data set which each node needs for calculation, this data will be allocated multiple times. This leads to high RAM consumption. Since you want to write the output in each loop and throw away the result afterwards, you can try the rm function and call the garbage collection (for example with gc) in each function call.
This worked for E L M as mentioned above. Thanks for testing!
From ?foreach:
The foreach and %do%/%dopar% operators provide a looping construct
that can be viewed as a hybrid of the standard for loop and lapply
function. It looks similar to the for loop, and it evaluates an
expression, rather than a function (as in lapply), but it's purpose is
to return a value (a list, by default), rather than to cause
side-effects.
The line
but it's purpose is to return a value (a list, by default)
Says that this is the intended behaviour of foreach. Not sure how you want to proceed from that...
Maybe there is a way to discard the return values of the iterations and have foreach return an empty list in the end? Or could you think of an alternative in my situation, maybe using a different parallelization tool?
As noted by dario; foreach returns a list. Therefore, what you want to do is to use for loop instead. You can use write.csv function inside the loop to write the results of each iteration inside the csv file.
For parallel computing, try using parSapply function from parallel package:
library(parallel)
cl <- parallel::makePSOCKcluster(1)
doParallel::registerDoParallel(cl)
parSapply(cl, 1:4, function(i) a <- as.numeric(1L))
Edit;
Combining this with Freakozoid's suggestion (set the argument of the rm function to a):
library(parallel)
cl <- parallel::makePSOCKcluster(1)
doParallel::registerDoParallel(cl)
parSapply(cl, 1:4, function(i) {a <- as.numeric(1L); write.csv(a, "output.csv"); rm(a)})
will give you the resulting output as csv file, as well as a list of NAs. Since the list consists of only NAs, it may not take lots of space.
Please let me know the result.
As others mentioned, if you are only interested in the side-effects of the function, returning NULL at the end avoids keeping any output, saving RAM.
If on top of that, you want to reduce the visual clutter (avoid having a list of 100 NULL), you could use the .final argument, setting it to something like .final = function(x) NULL.
library(foreach)
doSomething <- function () as.numeric(1L)
foreach::foreach(i = 1:4, .final = function(x) NULL) %do% {
doSomething()
}
#> NULL
Created on 2022-05-24 by the reprex package (v2.0.1)
How do I delete a column that contains only zeros in Pandas?
I currently have a dataframe consisting of columns with 1's and 0's as values, I would like to iterate through the columns and delete the ones that are made up of only 0's. Here's what I have tried so far:
ones = []
zeros = []
for year in years:
for i in range(0,599):
if year[str(i)].values.any() == 1:
ones.append(i)
if year[str(i)].values.all() == 0:
zeros.append(i)
for j in ones:
if j in zeros:
zeros.remove(j)
for q in zeros:
del year[str(q)]
Here, years is a list of dataframes for the various years I am analyzing; ones consists of columns with a one in them and zeros is a list of columns containing all zeros. Is there a better way to delete a column based on a condition? For some reason I have to check whether the ones columns are in the zeros list as well and remove them from the zeros list to obtain a list of all the zero columns.
Possible duplicate of Deleting DataFrame row in Pandas based on column value
I disagree. That question is to remove rows based on values in one column. Here multiple columns are to be removed based on their own values.
df.loc[:, (df != 0).any(axis=0)]
Here is a break-down of how it works:
In [74]: import pandas as pd
In [75]: df = pd.DataFrame([[1,0,0,0], [0,0,1,0]])
In [76]: df
Out[76]:
0 1 2 3
0 1 0 0 0
1 0 0 1 0
[2 rows x 4 columns]
df != 0 creates a boolean DataFrame which is True where df is nonzero:
In [77]: df != 0
Out[77]:
0 1 2 3
0 True False False False
1 False False True False
[2 rows x 4 columns]
(df != 0).any(axis=0) returns a boolean Series indicating which columns have nonzero entries. (The any operation aggregates values along the 0-axis -- i.e. along the rows -- into a single boolean value. Hence the result is one boolean value for each column.)
In [78]: (df != 0).any(axis=0)
Out[78]:
0 True
1 False
2 True
3 False
dtype: bool
And df.loc can be used to select those columns:
In [79]: df.loc[:, (df != 0).any(axis=0)]
Out[79]:
0 2
0 1 0
1 0 1
[2 rows x 2 columns]
To "delete" the zero-columns, reassign df:
df = df.loc[:, (df != 0).any(axis=0)]
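Putting the break-down together, here is a minimal self-contained sketch using the same toy frame as above:

```python
import pandas as pd

# Toy frame from the break-down above: columns 1 and 3 contain only zeros
df = pd.DataFrame([[1, 0, 0, 0], [0, 0, 1, 0]])

# Keep only the columns with at least one nonzero entry
df = df.loc[:, (df != 0).any(axis=0)]

print(list(df.columns))  # -> [0, 2]
```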
I am trying this to drop a column if it has either 0 or 1 in it and it gives an error: df = df.loc[:, (df != 0 & df != 1).any(axis=0)]
this worked (in case it helps someone): df[df.columns[(~df.isin([0,1])).any(axis=0)]]
df.loc[:, (~df.isin([0,1])).any(axis=0)] would also work.
why does df != 0 & df != 1 not work? I also tried doing it step by step like a = df[df!=0] b = df[df!=1] c = a & b (elementwise and) and it complains
(df != 0) & (df != 1) is a DataFrame of boolean values. To select columns, you need to reduce that DataFrame to a 1D Series or array of boolean values, such as ((df != 0) & (df != 1)).any(axis=0). Then you can select columns using df.loc[:, ((df != 0) & (df != 1)).any(axis=0)].
the code (df != 0) & (df != 1) does not work in python. try it yourself
df = pd.DataFrame([[1,0,0,0], [0,0,1,0]]); (df != 0) & (df != 1) returns a boolean DataFrame. If there is a problem, it has something to do with the particular DataFrame you are using -- but I'm having trouble guessing what that problem might be.
Please post a new question, as long conversations in the comments not conducive to stackoverflow's goal of building a database of good question/answer pairs.
Why not just df = df.loc[:, df.any(axis=0)]?
@IgorFobia: Lot's of things are False-ish without being 0. For instance, empty strings or None or NaN. To demonstrate the difference, if df = pd.DataFrame([[np.nan]*10]), then df.loc[:, df.any(axis=0)] returns an empty DataFrame, while df.loc[:, (df != 0).any(axis=0)] returns a DataFrame with 10 columns.
I believe it is easier to understand if we check for a condition being true, instead of checking that the condition's negation is never satisfied. I think (df == 0).all(axis=0) is more straightforward.
Thanks for the breakdown. It made things very clear.
@RyszardCetnarski df = df.loc[:, (df != 0).any(axis=0)] is not equivalent to df = df.loc[:, (df == 0).all(axis=0)]. Your proposal deletes the non-zero columns, which is the opposite of what we want. What you're looking for is df = df.loc[:, ~(df == 0).all(axis=0)].
Here is an alternative way to do this:
df.replace(0,np.nan).dropna(axis=1,how="all")
Compared with the solution of unutbu, this way is obviously slower:
%timeit df.loc[:, (df != 0).any(axis=0)]
652 µs ± 5.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit df.replace(0,np.nan).dropna(axis=1,how="all")
1.75 ms ± 9.49 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In case there are some NaN values in your columns, you may want to use this approach if you want to remove columns that have both 0 and NaN :
df.loc[:, (df**2).sum() != 0]
What if the sum happens to be 0? Maybe use mean square.
You are absolutely right, I did not think about these edge cases. I'm updating the answer.
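To illustrate the edge case discussed above, here is a small sketch (the column names are made up): a column of 1 and -1 sums to zero, so a plain sum would wrongly drop it, while the squared sum keeps it and still drops a column holding only 0 and NaN.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, -1], "b": [0, np.nan], "c": [2, 3]})

# Plain sum: column "a" sums to 0 and would be dropped by mistake.
# Squared sum: "a" gives 2, "b" (0 and NaN) gives 0, "c" gives 13.
kept = df.loc[:, (df**2).sum() != 0]

print(list(kept.columns))  # -> ['a', 'c']
```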
In case you'd like a more expressive way of getting the zero-column names so you can print / log them, and drop them, in-place, by their names:
zero_cols = [ col for col, is_zero in ((df == 0).sum() == df.shape[0]).items() if is_zero ]
df.drop(zero_cols, axis=1, inplace=True)
Some break down:
# a pandas Series with {col: is_zero} items
# is_zero is True when the number of zero items in that column == num_all_rows
(df == 0).sum() == df.shape[0]
# a list comprehension of zero_col_names is built from the_series
[ col for col, is_zero in the_series.items() if is_zero ]
This should do the work:
zero_cols = df.columns[(df == 0).all()]
df.drop(labels=zero_cols, axis=1, inplace=True)
If you want to check if there is at least one column full of zeros in your dataframe, you could use :
(df==0).all().any() # Returns True if a column of the dataframe is made up entirely of zeros.
Note : This answer does not cover the "deleting" part of this particular question, because here you don't retrieve an identifier for the "all zero" column.
But some, like me, might be redirected here from a "closed as duplicate" question :
Check if pandas column contains all zeros which seems a bit different as it asks if a column contains only zeros. This answer covers an extension of that particular linked question : check if a dataframe contains at least one column with only zeros.
Access database informix using Node.JS
I see this question 2 years ago but any kind of solution How to install informix on node.js?.
I'm trying to access Informix using Node.js in a Windows environment. I tried some npm packages but didn't get good results, and others only work on Linux.
Thanks for any suggestion. BTW, it needs to be on Windows because of the server.
IBM has something like their SDK for Node.js, based on the latest community version of Node.js.
Here is the link
The author of ifx_db suggest a new repository: https://github.com/OpenInformix/IfxNode
Here is the Informix Native Node.JS driver URL
https://www.npmjs.com/package/ifx_db
"as" keyword in c# returns null data
How do I get the base class data into the child class object when using keyword as. I tried the below code but it returns null data.
class BaseC
{
public int BaseId { get; set; }
public string BaseName { get; set; }
}
class DerivedC: BaseC
{
public int DerivedId { get; set; }
public string DerivedName { get; set; }
}
class Program
{
static void Main(string[] args)
{
BaseC baseC = new BaseC();
baseC.BaseId = 1;
baseC.BaseName = "base class name ";
var derivedC = baseC as DerivedC;
}
}
Because the pure BaseC you created is NOT a DerivedC, so as will return null since it can't be cast. Downcasting an object that isn't actually of the derived type is not allowed in C#. What are you trying to accomplish?
A BaseC object is not always a DerivedC object (the other way around it is). You created just a BaseC object and not a DerivedC object. You try to do a narrowing cast (BaseC to DerivedC), while only a widening cast would work (DerivedC to BaseC).
DerivedC inherits from BaseC and you want to treat BaseC as DerivedC... See the problem? Also, why use var? var is an undefined type; use it in a LINQ query, for example.
Have a look at Is it really downcasting not possible? It is working fine for me
@user28470 var does not mean "undefined" - it means "let the compiler infer the type from the declaration". It's perfectly fine to use outside of Linq. In this case the compiler would use DerivedC as the inferred type.
Yes, so var is an undefined type; it gets the type of whatever it will be used for
@user28470 No, "undefined" means "I can use it for anything", which is not what var means - so you can't say var x = 1; x = "one"; because x is an int, not a string or "variant".
That's the correct as behaviour:
Your code (simplified):
BaseC baseC = new BaseC();
// null: result is a BaseC instance and not DerivedC one
DerivedC result = baseC as DerivedC;
Reversed code (probably, what you expected to see):
BaseC baseC = new DerivedC(); // <- Now, DerivedC instance created
// not null: result is in fact a DerivedC instance: new DerivedC()
DerivedC result = baseC as DerivedC;
// not null: DerivedC is inherited from BaseC
// and that's why any DerivedC instance is also a BaseC instance
BaseC result2 = baseC as BaseC;
This won't work. Replace BaseC with Animal and DerivedC with Cow and you'll see why.
The runtime cannot create a Cow from an instance of Animal, as there's information missing. This will work:
BaseC baseC = new DerivedC();
Because the instance actually is a DerivedC.
Wait, are you saying not every animal is a cow? :O
baseC is not an instance of DerivedC, so the as operator will always return null.
It would work however, if you changed the first line of Main to this:
BaseC baseC = new DerivedC();
As for a practical solution (perhaps you were already doing this before you tried out the as keyword, but I figured I'd throw it out there):
If all you have available is a base instance (and you can't change that for some reason), and you want to populate the derived instance, you might consider adding a constructor that accepts the base class as a parameter:
class DerivedC : BaseC
{
public DerivedC() {} // req'd so you can still create an instance without a BaseC
public DerivedC(BaseC baseC)
{
BaseId = baseC.BaseId;
BaseName = baseC.BaseName;
}
public int DerivedId { get; set; }
public string DerivedName { get; set; }
}
Then call it like this:
var derivedC = new DerivedC(baseC);
At least that cuts down on code duplication, so you're not manually assigning values in multiple places.
how to hide a text entry in the print preview c#
There is no error showing. My selection is not working; I need to hide a specific field. If the field is empty, the title of that field should be hidden.
I am using a printpreviewdialog to preview the details.
my code:
private void DVPrintDocument_PrintPage(object sender, System.Drawing.Printing.PrintPageEventArgs e)
{
//e.Graphics.DrawString("NAME : " + dgvItem.CurrentRow.Cells[4].Value, new Font("Arial", 12, FontStyle.Regular), Brushes.Black, new Point(25, 400));
e.Graphics.DrawString("WEIGHT : " + txtGemWeight.Text + " Cts", new Font("Arial", 12, FontStyle.Regular), Brushes.Black, new Point(25, 440));
//string txtSpecification = false;
if (txtSpecification.Text == null)
{
txtSpecification.Visible= false;
}
else
{
e.Graphics.DrawString("SPECIFICATION : " + txtSpecification.Text, new Font("Arial", 12, FontStyle.Regular), Brushes.Black, new Point(25, 480));
}
}
}
if(txtSpecification.Text == null)
change it to
if(txtSpecification.Text == "")
because the Text property's default value is "", not null.
Can you check out, what is the value of default(string)?
@Slava The default of TextBox.Text is just "", which is not null (default(string) itself is null, though).
merge two files on basis of common data
I have two files. First file contains the userid and Name. Second file consists of userid and a number value for which that user id has access on. My requirement is to use the contents of both files and copy the output in third file in this format.
File#1 contents :
jaina39 Aayush Jain
pawarm02 Mukesh Pawar
dubeyd01 Devasya Dubey
sharmar01 Ram Sharma
File#2 contents:
jaina39 01
jaina39 02
jaina39 11
jaina39 12
jaina39 31
jaina39 35
jaina39 39
jaina39 41
jaina39 54
pawarm02 01
pawarm02 02
pawarm02 11
pawarm02 21
pawarm02 33
pawarm02 44
dubeyd01 31
dubeyd01 41
dubeyd01 51
dubeyd01 2047
dubeyd01 2049
sharmar01 100
sharmar01 101
sharmar01 111
sharmar01 2000
sharmar01 2011
Desired Output File:
Aayush Jain
01,02,11,12,31,35,39,41,54
Mukesh Pawar
01,02,11,21,33,44
Devasya Dubey
31,41,51,2047,2049
Ram Sharma
100,101,111,2000,20111
Can you describe it a little bit more, with the expected output?
Basically, the output I am looking for should pick up the Name from the first file and add it on top of the second file's entries on the basis of username.
I have ten usernames in the first file, and these 10 users have access to multiple objects, so there are multiple lines for the same user in the second file. For better readability, we want the Names merged into the output file on top of the access lines.
Let me write it for you... just check the answer when I post it; if required we can modify it, ok?
Probably possible, but "shell scripting" is very much the wrong tool. Can you use awk, perl, ...?
If awk can be used, that'd be great!
please write more lines of your files
@AayushJain Maybe this link will help you get the desired output; try the solution provided by Ed Morton: awk 'NR==FNR{a[$5]=$2" "$3" "$6;next} $2 in a{print $0, a[$2]}' file1 file2. If it works, all good; otherwise make little changes in the awk command. This type of question has already been answered several times, link: https://stackoverflow.com/questions/17342954/awk-joining-two-files-on-a-specific-column
@codeholic24 I tried the above solution but somehow it didn't work the expected way; probably I didn't understand the syntax correctly. Any help on this would be very helpful.
try:
awk 'NR==FNR{ Ids[$1]= Ids[$1]? Ids[$1] "," $2: $2; next; };
{ print $0; print Ids[$1]; }' file2 file1
Read all IDs into an awk array from file2, then print the entire line from file1 and the matched IDs for that user.
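If awk is not an option, the same join can be sketched in plain Python. This is only a sketch (the merge function name and the line-list interface are mine, not from the original answer), and it reproduces the desired output above: the name from file #1, then that user's comma-joined numbers.

```python
def merge(file1_lines, file2_lines):
    # Collect each user's numbers from file #2 in order of appearance
    ids = {}
    for line in file2_lines:
        user, num = line.split()
        ids.setdefault(user, []).append(num)
    # Emit the name from file #1 followed by the comma-joined numbers
    out = []
    for line in file1_lines:
        parts = line.split()
        user, name = parts[0], " ".join(parts[1:])
        out.append(name)
        out.append(",".join(ids.get(user, [])))
    return out
```

Feed it the line lists of the two files (e.g. from open(...).read().splitlines()) and write the returned lines to the third file.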
Calculating distance between two geographic coordinates in MongoDB aggregation pipeline
I have a MongoDB collection containing documents with geographic coordinates stored as GeoJSON objects. Each document represents an item with two locations (location_1 and location_2). Here's an example document structure:
[
{
"_id": ObjectId("62c43a4128c8b27fda436536"),
"item": "item1",
"location_1": {
"type": "Point",
"coordinates": [76.22191167727965, 10.929396489878417]
},
"location_2": {
"type": "Point",
"coordinates": [76.2249520851109, 10.97594496161423]
}
},
{
"_id": ObjectId("62c43a4128c8b27fda436537"),
"item": "item2",
"location_1": {
"type": "Point",
"coordinates": [50.22191167727965, 10.929396489878417]
},
"location_2": {
"type": "Point",
"coordinates": [10.2249520851109, 10.97594496161423]
}
},
// Additional documents...
]
I want to calculate the distance in kilometers between location_1 and location_2 for each item using the MongoDB aggregation pipeline.
I know that MongoDB has a $geoNear aggregation stage for geospatial queries, but I'm not sure how to use it to calculate distances between two points within the same document.
Can someone provide an example of how to achieve this using MongoDB aggregation pipeline?
Thank you!
I don't know what stage your project is in but if this sort of thing is a central aspect of your functionality and you're open to changing DBs, using Postgres with the PostGIS extension is incredibly powerful for geospatial data. It also has a json field type, which offers many of the same benefits as MongoDB
Thank you @Tim. But it is not possible to change the database now.
Does this answer your question?
I would suggest to use the Haversine formula.
db.collection.aggregate([
{
$set: {
distance: {
$let: {
vars: {
dlon: { $degreesToRadians: { $subtract: [{ $first: "$location_1.coordinates" }, { $first: "$location_2.coordinates" }] } },
dlat: { $degreesToRadians: { $subtract: [{ $last: "$location_1.coordinates" }, { $last: "$location_2.coordinates" }] } },
lat1: { $degreesToRadians: { $last: "$location_1.coordinates" } },
lat2: { $degreesToRadians: { $last: "$location_2.coordinates" } }
},
in: {
// Haversine formula: sin²(dLat / 2) + sin²(dLon / 2) * cos(lat1) * cos(lat2);
$add: [
{ $pow: [{ $sin: { $divide: ["$$dlat", 2] } }, 2] },
{ $multiply: [{ $pow: [{ $sin: { $divide: ["$$dlon", 2] } }, 2] }, { $cos: "$$lat1" }, { $cos: "$$lat2" }] }
]
}
}
}
}
},
{
$set: {
distance: {
// Distance in Meters given by "6378.1 * 1000"
$multiply: [6378.1, 1000, 2, { $asin: { $sqrt: "$distance" } }]
}
}
}
])
Mongo Playground
According to this sample $geoNear uses the same formula with an earth radius of 6378.1 km
db.collection.insertOne(
{
location: {
type: "Point",
coordinates: [-0.123507, 51.5083228]
}
}
)
const coordinate = [-0.0649729793707321, 51.50160291888072]
db.collection.aggregate([
{
$geoNear: {
near: { type: "Point", coordinates: coordinate },
distanceField: "distance_geoNear",
includeLocs: "location",
spherical: true
}
},
{
$set: {
distance: {
$let: {
vars: {
dlon: { $degreesToRadians: { $subtract: [{ $arrayElemAt: ["$location.coordinates", 0] }, coordinate[0]] } },
dlat: { $degreesToRadians: { $subtract: [{ $arrayElemAt: ["$location.coordinates", 1] }, coordinate[1]] } },
lat1: { $degreesToRadians: { $arrayElemAt: ["$location.coordinates", 1] } },
lat2: { $degreesToRadians: coordinate[1] }
},
in: {
// Haversine formula: sin²(dLat / 2) + sin²(dLon / 2) * cos(lat1) * cos(lat2);
$add: [
{ $pow: [{ $sin: { $divide: ["$$dlat", 2] } }, 2] },
{ $multiply: [{ $pow: [{ $sin: { $divide: ["$$dlon", 2] } }, 2] }, { $cos: "$$lat1" }, { $cos: "$$lat2" }] }
]
}
}
}
}
},
{
$set: {
distance_haversine: {
// Distance in Meters given by "6378.1 * 1000"
$multiply: [6378.1, 1000, 2, { $asin: { $sqrt: "$distance" } }]
}
}
}
])
{
distance_geoNear: 4124.233475368821,
distance_haversine: 4124.233475368682
}
How long does it take for a black hole to form for an external observer?
The well-known fable of an astronaut sending signals out to an external observer while falling toward an event horizon states that the time lapse between such signals becomes greater even if in the astronaut sends them out periodically (as judged in his inertial frame). When viewed from earth and weighing time-dilation due to the gravitational field of the collapsing star, how is it possible that a black hole can form in finite time (for any external observer) if it takes an infinite amount of time to "see" events occurring at the event horizon?
For an observer on the surface of the collapsing star, the black hole forms rather quickly.
Right... but how does that permit black-holes forming for any external observer in a finite amount of time?
This is straying into metaphysics I'm afraid (If a tree falls in a forest...). Either the horizon is in spacetime or it isn't. If it is, some worldlines intersect the horizon and end on the singularity regardless of any external observer, i.e., it objectively exists.
It is not metaphysics at all... it's clearly physics as astronomers claim the existence (in the here and now) of black-holes. Furthermore the event horizon is a valid boundary according to GR.. So physics predicts a certain object- we should be able to observe it- quite unlike your tree
Or is physics now free of backing up theory with experimental data?
http://physics.stackexchange.com/q/21319/ http://physics.stackexchange.com/q/102202/
The accepted answer is unacceptable... particularly "Such collapsars possibly can become BHs for a short time due to quantum fluctuations and thus emit hawking radiation." but quantum fluctuations happen in an observably finite time.. it's the same problem again... (the other link is to my question for some reason)
"it's clearly physics as astronomers claim the existence (in the here and now)" the "here and now" is just an event in spacetime, not all of spacetime.
What's with the editing Nick Stauner?
Sorry Alfred- something odd just happened to my post- the "here and now" indicates observable events... if an event is unobservable then how can it be included in physical law?
It takes 6 to form.
"punctuation, non-redundancy[, grammar, brevity]"
No more rollbacks, please. If there are any more I will be locking the question.
I'd like to roll back to an era before you were born... but alas, that would invoke causality violations...
Similar questions have cropped up on this site many times, and the debate surrounding them is usually fractious because people misunderstand each other's use of words like exist.
One of the lessons of General Relativity is that any observer has to choose a locally convenient coordinate system that may not be globally convenient. We on Earth (quite sensibly) choose time as measured on our clocks and distance as measured by our rulers, and these coordinates are known as the Schwarzschild coordinates (strictly speaking they are shell coordinates, but the difference at the orbital distance of the Earth from the Sun is negligible). Locally our coordinates work very well, but when the central body is a black hole the coordinates become increasingly curved as you approach the event horizon, and at the event horizon they fail completely, resulting in a coordinate singularity.
I've made this sound like a mathematical nicety, but it's quite real. Remember that by the time coordinate I mean the time we measure on our clocks, and that means there is a singularity in our measurements of time at the event horizon. This is why it takes an infinite time for anything to reach the event horizon, let alone cross it.
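To make that singularity concrete, here is the standard textbook relation (not part of the original answer): for a static observer at Schwarzschild radial coordinate $r$,

```latex
\frac{d\tau}{dt} = \sqrt{1 - \frac{r_s}{r}}, \qquad r_s = \frac{2GM}{c^2}
```

so the proper-time rate $d\tau/dt$ goes to zero as $r \to r_s$, and the coordinate time $t$ that a distant observer assigns to events at the horizon diverges.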
The question is whether it is therefore correct to say that the event horizon never forms. It is quite true that you and I and everyone outside the black hole will never measure the time the event horizon forms, because it would take an infinite time. However there are lots of coordinate systems that have no singularity at the horizon, such as Gullstrand-Painlevé, Eddington-Finkelstein and Kruskal-Szekeres coordinates. The trouble is that these coordinates are somewhat abstract and do not coincide with the experience of any human observer. However since such coordinates exist, physicists tend to be quite comfortable stating that black holes form even if human experimenters could never observe it.
Doesn't this remove the role of experimental observation when it comes to validating theory? In particular if a theory predicts a certain thing then how isn't the validity of that theory based upon data (taken by us humans, as we have no data from other intelligent creatures that pertains to GR) that evidences that very thing?
@jaskey13: GR predicts thing we cannot possibly observe. However it also predicts things that we can observe, and we observe that GR makes the correct predictions. If we always find GR to be correct about the things we can observe then is it safe to assume it's also correct about the things we can't observe? That's a tricky question, and even amongst physicists view differ. However I suspect most us believe that GR is reliable even when we can't confirm what it predicts.
That is not in the true spirit of the scientific method- nor Occams Razor..... In particular a theory that makes predictions that cannot ever be proven by experiment is not the simplest theory... As a) if predicts things unprovable by a method of experimentation and b) it includes within it extraneous things that are outside of experimental data
@jaskey13: the universe didn't consult us before it decided to exist
ah ha... does that mean there is an intrinsic unfathomabily to the universe?
and how does a universe "decide" to exist? Is there a conscious principle I'm not aware of?
@jaskey13: No, we have theories that describe the universe (the bits we can see) very well. But those theories predict some parts of the universe are inherently unobservable. That doesn't mean we've stopped trying. For example some of the firewall theories predict spacetime simply ends at an event horizon.
So back to my question... how long does it take for the horizon form? Can it form in finite time for an external observer?
@jaskey13: No. For an external observer the event horizon never forms. The best we see is an apparent horizon.
But an apparent horizon is based on the idea that there is not enough time (referenced by an external observer) for signals to propagate.... If the universe does not "crunch" then there will always be enough time... Right?
@jaskey13: No (I seem to starting all my answers with "No" :-). The whole point of infinity is you can never reach it. When we say the event horizon takes an infinite time to form we mean there will never be enough time to observe it.
Can you explain to me how the idea of an apparent horizon does not assume that there is a constraint on the time coordinate for an external observer- in particular, given enough time in any frame the apparent horizon shrinks, correct? So we must still see a collapsing object yet no barrier?
The time coordinate in Gullstrand-Painlevé coordinates is the observer's proper time, so it's easy to interpret why it is regular at the horizon.
It's a matter of what you mean by "see". Even for a distant observer, it will take a small amount of time for the gravitational redshift effect to become essentially infinite. If your collapsing gas star redshifts to the point where it won't emit a single photon in the age of the universe, it may not have yet technically "redshifted to zero", but it has functionally redshifted to zero as far as experiment is concerned.
The actual picture seen by an external observer is consistent with the general picture described by a naïve interpretation -- a black object that is not emitting anything other than (what is, for macroscopic holes, a vanishingly small amount of) Hawking radiation, that will absorb objects that enters it. I should also add that the apparent horizon of a collapsing star will move outward at a speed faster than the speed of light, which will solve the infinite redshift issue, as well.
I strongly disagree. I did the calculations several times, and I'm positive it takes 6 to form.
@dfg: I think I saw you at the coffeeshop last week: http://bitsocialmedia.com/wp-content/uploads/2013/07/Internet-Troll.jpg
Haha, well played :)
Doesn't the superluminal speed of formation of an apparent event horizon imply that gravitational influence has propagated superluminally as well?
And also, a collapsing star will not all of a sudden collapse to a point where it cannot emit radiation within the known time span of the universe... In fact, the "collapsing" will simply appear to decelerate to an outside observer... In any given time span signals will emerge; those time spans just become further apart... Unless you propose an end date to the universe?
@jaskey13: you can't transmit information from point A to point B by having an apparent horizon expand, so there is no causality or influence violation. And the spacetime is the thing doing the radiation emission, and that happens in the neighborhood of the horizon, not at the horizon, so there is no transmission of information out of the interior. Anything that falls into the hole will redshift to within an arbitrary tolerance of zero intensity in microseconds.
Then how can we say even an apparent horizon forms if no information about its forming is available to any observer in contact with us (the ones who make theoretical laws)?
And you say "fall into the hole" again as if it were an event that can be referenced from experimental data and not just theory.
But spacetime itself is still theoretically subject to Lorentz invariance, that being that it does not somehow change superluminally, else the graviton cannot be a fluctuation of the spacetime "field", right?
And while I suspect the reactions can be much shorter than micro-seconds- they still must be dilated from our reference frame... so even nanosecond reactions and femtosecond reactions contribute to our observation of this collapsing object... of course such interactions are witnessed in a dilated time in our frame...
@jaskey13: I can walk you through the calculations, but I suspect that you don't have the technical capacity to really get much from them. Suffice it to say that the real-world experience of a distant observer would be to see an object disappear due to redshifting. Even if they don't see a "crossing" event, they WILL see the falling object disappear relative to any possible observation, and this will happen in finite time relative to the shell observer.
Please show me the calculation that shows a falling object disappear in finite time for an external observer...And watch any coordinate that "goes to infinity"
@JerrySchirmer I'm still waiting- I didn't "redshift to infinity" :)
The short answer is infinity.
The elastic body model, resulting from the work of Milo Wolff, Gabriel LaFreniere and Jeff Yee, says that an elementary particle is a pulsating soliton in a medium that is non-linear. The non-linear medium has an absolute density (analogous to the curvature of space-time) instead of a purely abstract amplitude, and Hooke's law applies in it: it reacts to compression with an opposing force. This is something that is missing in Einstein's GTR, and this is his biggest mistake in this matter; it matches what Einstein himself said about GTR, namely that GTR is not complete.
Well, the biggest absurdity arising from this theory - black holes - is just a consequence of not taking into account the internal pressure of space, which is an obvious component of every classic elastic body.
The collapsing star first overcomes the pressure inside the atoms, then inside the neutrons, until it disappears into a singularity. However, most thinkers do not take into account the fact that the same pressure that prevents atoms and neutrons from collapsing will also stop the entire star from collapsing. This is because both the individual material particles and the gravitational field around the star consist of the density of the same substance: space-time for Einstein, flexible space for Wolff, Aether for LaFreniere and Jeff Yee.
Also, we must remember that the atomic and electromagnetic forces are much more powerful than gravitational forces.
The observation, by Haramein and then by Gabriel LaFreniere (probably also by others), that the energy contained in the volume of the proton/electron is enough to make it a black hole points to an important issue: according to the model of the elastic medium, each material particle is a kind of black hole, considering how much energy it has. It arises where, within a given area, the density of the Aether exceeds a certain critical value. However, this does not lead to the collapse of this area, but to its transformation into a soliton, a creature that can sustain its own existence. What feeds it? The waves around it, called by scientists quantum fluctuations, vacuum energy, relict radiation, etc.
The vacuum has its own energy; some have called it background relict radiation, others the lowest energy state. The vacuum thickens to form a temporary unstable soliton, which others have called quantum fluctuations.
That is why the equation of the black hole needed to include the vacuum energy it contained. For the same reason, Jeff Yee had to put a constant density of Aether into the equations for photon and electron energy.
This is remarkably coincident with Tesla's belief that matter 'absorbs' the Ether to exist.
So the same mechanism that causes matter to exist also excludes black holes. Aether density will always take energy, bend the paths of other densities and waves, but it will never collapse because it will have internal pressure. This is a direct consequence of the characteristics of a classical elastic medium.
The same mechanism contained in the elastic body model is responsible for the process of absorption and emission of an electromagnetic wave by an atom (a soliton made from other solitons). The soliton aims to return to its ground state, a stable state.
Einstein did not suspect that his confusing space-time could be replaced by something so simple. However, physicists who perceive the wave nature of matter have it easier, because a physical wave, despite the trumpeting, requires a medium.
The unequivocal experimental proof of the wave (frequency) nature of matter is the electron image recorded in 2007. This image was recorded by a team of Swedish scientists from the University of Lund. The image turned out to be a soliton image in line with the proposals of Milo Wolff and Gabriel LaFreniere published in the years 1998 - 2002:
https://youtu.be/ofp-OHIq6Wo
Sorry for My English.
Regards.
| common-pile/stackexchange_filtered |
Why dynamically resizing a string causes a crash?
Consider code:
char *word = NULL; // Pointer at buffered string.
int size = 0; // Size of buffered string.
int index = 0; // Write index.
char c; // Next character read from file.
FILE *file = fopen(fileDir, "r");
if (file)
{
while ((c = getc(file)) != EOF)
{
printf("Current index: %d, size: %d, Word: %s\n", index, size, word);
if (isValidChar(c))
{
appendChar(c, &word, &size, &index);
}
else if (word) // Any non-valid char is end of word. If (pointer) word is not null, we can process word.
{
// Processing parsed word.
size = 0; // Reset buffer size.
index = 0; // Reset buffer index.
free(word); // Free memory.
word = NULL; // Nullify word.
// Next word will be read
}
}
}
fclose(file);
/* Appends c to string, resizes string, increments index. */
void appendChar(char c, char **string, int *size, int *index)
{
printf("CALL\n");
if (*size <= *index) // Resize buffer.
{
*size += 1; // Words are mostly 1-3 chars, that's why I use +1.
char *newString = realloc(*string, *size); // Reallocate memory.
printf("REALLOC\n");
if (!newString) // Out of memory?
{
printf("[ERROR] Failed to append character to buffered string.");
return;
}
*string = newString;
printf("ASSIGN\n");
}
*string[*index] = c;
printf("SET\n");
(*index)++;
printf("RET\n");
}
For Input:
BLOODY
Output:
Current index: 0, size: 0, Word: <null>
CALL
REALLOC
ASSIGN
SET
RET
Current index: 1, size: 1, Word: B** // Where * means "some random char" since I am NOT saving additional '\0'. I don't need to, I have my size/index.
CALL
REALLOC
ASSIGN
CRASH!!!
So basically, *string[*index] = 'B' works when index is the first letter; it crashes at the second one. Why? I probably messed up allocation or pointers, I don't really know (novice) :C
Thank you!
EDIT
I also meant to ask - is there anything else wrong with my code?
This expression is incorrect:
*string[*index] = c;
Since []'s precedence is higher than *'s, the code tries to interpret the double pointer string as an array of pointers. When *index is zero, you get the right address, so the first iteration works by pure coincidence.
You can fix this by forcing the right order of operations with parentheses:
(*string)[*index] = c;
Procedure doesn't receive char data in MySQL
This procedure helps us make it a little easier to add records to our database.
I have 4 variables declared outside the procedure (id_usuario, des, monto, id_tipo).
id_usuario, monto and id_tipo are integers and des is char.
The main problem is that if I print the des variable in the cmd it shows me a value, but if I print it in the procedure it says it is null.
Any ideas?
delimiter //
CREATE OR REPLACE PROCEDURE reg_egresos(IN id_usuario INT,IN descripcion CHAR(50),IN monto INT, IN id_tipo INT)
BEGIN
insert into Egresos values(null, @descripcion, @monto, CURDATE(), 1,
@id_tipo);
END; //
delimiter ;
call reg_egresos(@id_usuario, @des, @monto, @id_tipo);
please do a search before asking a question, it will save you time 90% of the time
I read the other post but it is completely different
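In case it helps later readers: inside the procedure body, `@descripcion`, `@monto` and `@id_tipo` are user-defined *session* variables, which are entirely separate from the `IN` parameters `descripcion`, `monto` and `id_tipo`. The session variables were never assigned inside the procedure, hence the NULLs. A sketch of the corrected procedure, keeping the original column order (`CREATE OR REPLACE PROCEDURE` is MariaDB syntax, as in the question):

```sql
DELIMITER //
CREATE OR REPLACE PROCEDURE reg_egresos(
    IN id_usuario INT, IN descripcion CHAR(50), IN monto INT, IN id_tipo INT)
BEGIN
    -- Use the parameters directly; @descripcion etc. are unrelated session vars
    INSERT INTO Egresos VALUES (NULL, descripcion, monto, CURDATE(), 1, id_tipo);
END; //
DELIMITER ;

CALL reg_egresos(@id_usuario, @des, @monto, @id_tipo);
```

Passing session variables *into* the call, as the last line does, is fine; the mistake is referencing them again inside the body instead of the parameters.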
cut image out of div to reveal background
Is it possible to cut an image out of a div, revealing the background image?
I have a vertically striped background with a div that covers most of the background. The div is filled solid black. I would like an image(which is really just an outline) somewhere inside that div to "cutout" the div, revealing the background image. Is this possible?
I have thought about using the same background pattern for the image, but the vertical lines do not align correctly.
Thank you in advance.
Can you provide an example of what you are looking to do? It sounds like you could make your image a png (or other type with transparencies) and just have the outline that way.
Yes, you can do it with CSS Masks. Here is a very detailed tutorial on a variety of masks. It covers clipping masks (coordinates on the image) and alpha masks (black and white images).
http://www.html5rocks.com/en/tutorials/masking/adobe/
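As a rough sketch of the alpha-mask approach from that tutorial: the div is painted only where the mask image is opaque, so making the outline shape transparent in the mask "cuts" it out of the solid div and reveals the page background underneath. The file names and class name here are placeholders, and the `-webkit-` prefix reflects the patchy browser support for CSS masks:

```css
body {
  background: url("stripes.png");          /* vertically striped background */
}
.cover {
  background: #000;                        /* the solid black div */
  /* transparent areas of the mask hide the div, showing the stripes */
  -webkit-mask-image: url("outline-mask.png");
  mask-image: url("outline-mask.png");
}
```

Because the mask hides the div itself rather than overlaying a copy of the stripes, the background always lines up — no alignment problem.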
How to display figures instead of letters in position analysis notation in ChessBase?
I just installed Deep Fritz 14 with ChessBase. By default the notation in deep position analysis looks like this (only letters, no chess figures):
I would like to make it look like this:
How do I do this?
Nevermind, figured it out.
File > options > Clocks+Notations > Figurines radio button
django : unique name for object within foreign-key set
I'm trying to upload files for an article model. Since an object can have multiple images, I'm using a foreign key from my file model to my article model. However, I want all the files to have unique titles. Here's the code snippet.
class Article(models.Model):
name = models.CharField(max_length=64)
class Files(models.Model):
title = models.CharField(max_length=64)
file = models.FileField(upload_to="files/%Y/%m/%d/")
article = models.ForeignKey(Article)
Now when I upload the files, I want the file titles to be unique within the "foreign_key" set of Article, and NOT necessarily among all the objects of Files. Is there a way I can automatically set the title of Files? Preferably to some combination of related Article and incremental integers!! I intend to upload the files only from the admin interface, and Files are set Inline in Article admin form.
It's hard to advise without knowing which Django version.
It's the latest release, 1.2.3.
def add_file(request, article_id):
if request.method == 'POST':
form = FileForm(request.POST, request.FILES)
if form.is_valid():
file = form.save(commit=False)
article = Article.objects.get(id=article_id)
file.article = article
file.save()
file.title = article.name + ' ' + str(file.id)
file.save()
redirect_to = 'redirect to url'
return HttpResponseRedirect(redirect_to)
Nice approach. But I was trying to set the "title" of the file, rather than filename itself. Is there a way to make the file title dependent on the foreignkey article's name?
@Neo - you'd have to be careful, since multiple Files could belong to the same Article.
Exactly, that's why the title should be based on Article-Name + Some Integer. I realize it will take some tweaking in Django, since inline objects are created first, and then their foreign key is set after the main object is created. Is there a work-around?
What is the difference between "file" field and "title" field? Does title represents the name of the file?
while displaying an Article object, I wanted to show the file contents in a textbox, and use the title for the textbox's title. Ofcourse, if a user provides his/her own title, then the "automatically generated" title should be overridden.
Can you can display the title as a combination formed from the id of the File object and the Article's name field? This is a unique combination.
Yea, that will work. But how can I automatically set the title upon saving File Object?
Hmm, obviously I need to tweak it a bit more. But yes, it will work. Thanks.
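Following up on the thread's conclusion, one way to generate the title automatically is to override the model's `save()` rather than doing it in the view. This is a hedged sketch against the models above (not the only approach, and untested against Django 1.2); the two-step save is needed because `id` does not exist until after the first insert:

```python
class Files(models.Model):
    title = models.CharField(max_length=64, blank=True)
    file = models.FileField(upload_to="files/%Y/%m/%d/")
    article = models.ForeignKey(Article)

    def save(self, *args, **kwargs):
        super(Files, self).save(*args, **kwargs)  # first save assigns self.id
        if not self.title:
            # unique within the article's set: "<article name> <file id>";
            # a user-provided title is left untouched
            self.title = '%s %s' % (self.article.name, self.id)
            super(Files, self).save(*args, **kwargs)
```

Since the admin creates the inline Files after the Article exists, `self.article.name` is available by the time `save()` runs.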
R: calculate value based on the previous row
I would like to calculate how many trees I have left every year, given a source tree population in year n and how many trees have been removed every year. I can't figure out how to properly update my 'remained_trees' value.
My aproach:
trees = 10
year = c(1,2,3,4,5)
removed = c(2,1,0,1,3)
dd <- data.frame(source_trees = trees,
year = year,
removed = removed,
remained_trees = trees-removed)
Output:
> dd
source_trees year removed remained_trees
1 10 1 2 8 # correct
2 10 2 1 9 # 8-1 -> should be 7
3 10 3 0 10 # 7-0 -> 7
4 10 4 1 9 # 7-1 -> 6
5 10 5 3 7 # 6-3 -> 3
I am looking for a solution using dplyr instead of an iterative for loop.
Read about lead/lag
Try cumsum
> transform(dd, remained_trees = source_trees - cumsum(removed))
source_trees year removed remained_trees
1 10 1 2 8
2 10 2 1 7
3 10 3 0 7
4 10 4 1 6
5 10 5 3 3
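Since the question explicitly asks for dplyr rather than a loop, the same cumulative-sum idea drops straight into `mutate()`:

```r
library(dplyr)

dd <- dd %>%
  mutate(remained_trees = source_trees - cumsum(removed))
```

This yields the same 8, 7, 7, 6, 3 sequence as the `transform()` version.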
If you want to try using accumulate from purrr in tidyverse, this is another option.
The accumulate function will take as arguments: the removed column, a function to subtract the removed value (.y) from the result of previous row (.x), and the initial value, which in this case would be the first value of source_trees.
The [-1] indexing ignores the first result, which otherwise would be the initial value of source_trees of 10.
library(tidyverse)
dd |>
mutate(remained_trees = accumulate(
removed,
~ .x - .y,
.init = first(source_trees)
)[-1])
Output
source_trees year removed remained_trees
1 10 1 2 8
2 10 2 1 7
3 10 3 0 7
4 10 4 1 6
5 10 5 3 3
What do I do when my joke non-answer is accepted?
It appears my joke non-answer was accepted by the questioner for some reason. Obviously, this does a disservice to anyone who might run across the question later on. Obviously, I voted up the other, better answers, but my answer will still be nailed to the top of the answer stack.
My guess is that the responsible thing is to delete my dumb answer. Maybe turn it into a comment. But not everyone will be willing to do that, especially if they have received a lot of upvotes on the post. Actually, I suppose it helped my reputation to have it accepted, so the system provides an incentive to stay quiet.
What do you think I should do?
If your answer, even in jest, was the most helpful then it was the most helpful.
Don't sweat it. It's up to the questioner to decide the 'best' answer, but the next best is always just one down if people are voting well. Besides, we can use a little humor on the site and it's good to see it being defined safely.
Agreed. It assumes the asker can't learn.
If you don't have a serious answer, don't bother posting.
Jokes should be in comments, not in answers, as this site is built around question/answers. If we start letting the quality go down, the answers themselves will start to look like Digg comments (which I love, but for entertainment and not for learning and helping people learn). I love a joke as much as the next guy, but I also think this site has great potential.
But my jokes in comments on SO get deleted. Like the guy who asked about the NSMuttableArray, and I asked in a comment if that was an array you could put dogs into. It was deleted within a day. :(
You could edit the answer, reflecting at the end the nature of the answer (just to be clear) and recommending one of the others. That way no one loses.
I agree, the responsible thing would be to delete it.
Of course, you could also just edit it into a compilation answer of all the correct answers.
Leave a comment on the question letting the person asking know that it was a joke (preferably before they set about doing what you've suggested). If they don't unaccept the answer, delete it yourself.
What's wrong with your answer? Personally, reading through all the other responses, I think you probably gave the best answer. I don't think anybody does Basic in this day and age, but if you wanted to, QBasic would be a good place to start. It probably still runs on a modern Windows machine. I think Basic is a good place to start for programmers. It gives them a good starting point, at least for the first few weeks. So they don't have to worry about object oriented problems, or pointers, or all that other non-basic stuff.
As you can't delete an accepted answer, I suggest flagging it to a moderator's attention for conversion to a comment.
However it would have been better not to post a joke answer in the first place. This just detracts from the quality of the Q&A for anyone who doesn't get the joke, especially when that joke answer is accepted.
Obviously your answer was picked for some reason. It's possible the question may have been a joke as well.
Can you edit it to include <sarcasm> tags or otherwise make it more obvious to the casual reader that it shouldn't be taking seriously? I think there's a place for humour on here and I'd hate for people to feel obliged to delete such content.
Swift how to know currently focused object by accessibility VoiceOver engine
I need to know which element is currently focused by the accessibility focus engine. In a simple case, let's say I have some cell with a couple of labels. I need to know (i.e. print to console, or do other stuff) which object is currently focused. The same if I had a view with a lot of subviews of different types. I think I should use elementFocusedNotification, and as told in the docs, use the key UIAccessibilityElementFocusedKeyElement. I think I should pass the object in the method, but how?
Among others, I looked here, here and here for info, but cannot find solutions to know which element is currently focused.
in my cellForRowAt indexPath:
in the cell:
override func accessibilityElementDidBecomeFocused() {
NotificationCenter.default.post(
name: NSNotification.Name(rawValue: UIAccessibility.elementFocusedNotification.rawValue),
object: self.subviews.first(where: {$0.isFocused}),
userInfo: ["UIAccessibilityElementFocusedKeyElement": "hello there"]
)
}
in the controller's viewDidLoad:
NotificationCenter.default.addObserver(
self,
selector: #selector(self.doSomething(_:)),
name: NSNotification.Name(rawValue: UIAccessibility.elementFocusedNotification.rawValue),
object: nil
)
in the same controller
@objc func catchNotification(_ notification: Notification) {
//print("subscribed, notification: \(notification)")
if let myNotification1 = notification.userInfo?["UIAccessibilityElementFocusedKeyElement"] {
print("+++", myNotification1)
}
}
What's in your 'in my cellForRowAt indexPath:' ? You forgot to write that down
I think you can observe the changes of focused items. Try observing UIAccessibilityVoiceOverStatusDidChangeNotification and then see what's inside its userInfo
First of all, the cell will never be focused.
VoiceOver focuses a single accessibility element and does not inform its superviews.
So if you want to use accessibilityElementDidBecomeFocused() you will have to override UILabel.
UIAccessibility.elementFocusedNotification is sent by the system; you probably should never post it yourself.
Going back to your problem.
I suppose that you have following situation:
ViewController
TableView
Cell 1
Label 1_1
Label 1_2
Label 1_3
View 1_1
Cell 2
Label 2_1
Label 2_2
Label 2_3
View 2_1
Cell 3
...
and whenever VO focus Label_i_* or View_i_* you want to inform yours controller that i-th element was focused.
My approach will be as follow:
every accessibility element should know which element it is displaying
view controller catches every notification, checks if it was send by TableView's subview and if so then it get element from it
The code will be as follow:
class MyViewController ... {
private var VOFocusChanged: NSObjectProtocol?
override func viewWillAppear(_ animated: Bool) {
super.viewWillAppear(animated)
VOFocusChanged = NotificationCenter.default.addObserver(
forName: UIAccessibility.elementFocusedNotification,
object: nil,
queue: OperationQueue.main
) {
[weak self] (notification: Notification) in
self?.handleVONotification(notification)
}
}
override func viewDidDisappear(_ animated: Bool) {
super.viewDidDisappear(animated)
// there is no reason to observe changes
// if we know that view is no longer visible
if let VOFocusChanged: NSObjectProtocol = VOFocusChanged {
NotificationCenter.default.removeObserver(VOFocusChanged)
}
}
private func handleVONotification(_ notification: Notification) {
if let focusedView: UIView = notification.userInfo?[
UIAccessibility.focusedElementUserInfoKey
] as? UIView {
if focusedView.isGrandson(of: view) {
if let fView: ViewWithElement = focusedView as? ViewWithElement {
elementsWasFocused(fView.element)
}
}
}
}
private func elementsWasFocused(_ element: T) {
// add your logic here
}
}
extension UIView {
func isGrandson(of view: UIView) -> Bool {
if superview === view {
return true
}
return superview?.isGrandson(of: view) ?? false
}
}
protocol ViewWithElement {
    associatedtype T
    var element: T { get }
}
// Use this instead of label inside UITableViewCell
class MyLabel<T>: UILabel, ViewWithElement {
func display(element: T) {
_element = element
text = ...
accessibilityLabel = ...
}
// MARK: ViewWithElement
private var _element: T!
var element: T { return _element }
}
This isn't a direct answer to your question, nor am I certain it all works as I say, because Apple hides a lot of the details of how the accessibility engine works; but it may still help solve your problem from a completely different angle, in the sense that you don't need to know what the current focus is.
Take a look at a few of Apple's own apps, e.g. the Settings app. Only the entire cell is accessible; the labels are not.
The solution is to either:
based on your labels override the accessibilityLabel of the cell and then set isAccessibilityElement to false for your labels. For more on that see Making Your iOS App Accessible. Scroll down to 'Enhance the Accessibility of Table Views'
just manually set the accessibilityLabel when necessary and then set isAccessibilityElement to false for your labels. Apparently this solution would require you to reset it if the label gets updated.
if you set a view or text for your accessoryView then I believe UITableViewCell will take its value into consideration as well. For more on that see here
Note: A tableviewcell's accessibilityLabel will default to its labels's text.
How can I send a contact from one Android to another?
While I can choose Share on a contact in my Droid's contact list, the Droid on the other end seems unable to open, or really do anything with, the resulting attachment if I receive it via Gmail or the normal mail client. Am I missing something? How do I import what I've shared?
There is an app called Hoccer which allows transfer of arbitrary data (contacts, pictures, links, ...) by making gestures with your phone. It also works with the iPhone but of course requires both devices to have the app installed.
It's a known issue that Android (at least pre-2.2) does not support import of .vcf files via email or sms, see http://code.google.com/p/android/issues/detail?id=3537
However, it does appear to work in Froyo on my Nexus One, when importing from email. As far as I'm aware it's still not catered for over SMS, and the thread above suggests it doesn't work for downloads in the browser either.
Amazingly, still true, I think, for Android 6.0. Issue 3537 was marked as a duplicate of 2412, which is "Obsolete" with no fix that I noticed in a quick perusal. What's up? https://code.google.com/p/android/issues/detail?id=2412
If both droids are in the same room the following will apply - You can both install Barcode Scanner and then on your device, open the contact, tap share, then tap Barcode Scanner. This will generate a QR Code. On the other phone, open the barcode scanner app and scan the QR Code. The second device will be able to add the contact via the scanned QR Code via the Add as Contact button after the scan successfully completes.
There's also Bump.
Bump™ makes sharing photos, contacts,
and apps with people as simple as
bumping your phones together.
HOW TO USE BUMP™: 1) Open Bump™ on
both phones 2) While holding the
phones, gently bump your hands
together 3) Confirm the exchange
Compatible with iPhone too!
I've got a Sony Ericsson Xperia X10, and while viewing a contact I just press the menu button, Send Business Card, and choose between Bluetooth, Gmail, SMS or Moxier Mail (an Exchange client, I think).
I've sent contacts back and forth between my Android phone and my Palm phone. I don't have two Android phones, but I don't see why it wouldn't work.
Previously I was using 1.6 which was fully compatible with all my contacts on my old Palm phone, but since upgrading to 2.1 this does not work as well as I had hoped.
So, no software to install, works right out of the box for me.
Kotlin: Why are unary plus/minus not able to infer generic type from assignment?
Why are unary plus/minus not able to infer generic type from assignment?
Using invoke, for example: inline operator fun <reified T> invoke(): T
You can call the method and are able to use it... for example val foo: Long = this()
But, using unaryPlus or unaryMinus in the same fashion doesn't work.
The method signature: inline operator fun <reified T> unaryPlus(): T
The non-working call: val foo: Long = +this
Your example appears incomplete; you don't show whether this inline operator is declared inside a class or not. It cannot be top-level because it must be a class method, or an extension on a class. So please provide the enclosing class.
@JaysonMinard It is declared within a class like so: https://youtrack.jetbrains.com/issue/KT-10453
The operator must be declared as an extension function or a member of some class:
inline operator fun <reified T> T.unaryPlus(): T = this
Then you can use it on any T:
fun main(args: Array<String>) {
data class Type(val value: Int)
val foo = +Type(42)
}
This is not related to the problem. My problem is clearly a bug: https://youtrack.jetbrains.com/issue/KT-10453
Adding text to Settings Saved in Wordpress
I am building a custom plugin in wordpress with a settings page.
Right now everything works as expected however I want to add more functionality.
In this case, when you save the settings the page reloads with a message at the top which says "Settings saved". I would like to add a line of text to that message box.
Is there a way to do that?
Thanks
Look into the admin_notices action hook, as well as settings_errors, depending on what information you're trying to output. You'll need to set the errors using add_settings_error, which will let you specify a success or error.
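To make that concrete, here is a minimal sketch. The hook and function names (`admin_notices`, `add_settings_error`, `settings_errors`) are real WordPress APIs; the setting slug, error code and message text are placeholders for whatever your plugin uses:

```php
<?php
// Append an extra line to the notice shown after the settings page saves.
add_action( 'admin_notices', function () {
    if ( isset( $_GET['settings-updated'] ) && $_GET['settings-updated'] ) {
        add_settings_error(
            'my_plugin_settings',        // setting slug (placeholder)
            'my_plugin_settings_saved',  // message code (placeholder)
            'Settings saved. Remember to clear your cache.',
            'updated'                    // renders as a green success notice
        );
        settings_errors( 'my_plugin_settings' );
    }
} );
```

The `'updated'` type produces the same styling as the stock "Settings saved." box, so the extra line blends in with the existing notice.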
Is this operation meaningful or it is a mistake in the book?
I've been reading Nakahara's "Geometry, Topology and Physics" and found something quite strange in section 10.3.3, which discusses the geometrical meaning of the curvature of a connection. It is possible to find there the following text:
We first show that $\Omega(X,Y)$ yields the vertical component of the Lie bracket $[X,Y]$ of horizontal vectors $X,Y\in H_u P$. It follows from $\omega(X)=\omega(Y)=0$ that
$$d_P\omega(X,Y)=X\omega(Y)-Y\omega(X)-\omega([X,Y])=-\omega([X,Y]).$$
My problem here is the following. As I've previously studied in other books, the Lie bracket can only be computed for vector fields. That is, given a smooth manifold $M$, a point $a\in M$ and two vectors $v,w\in T_a M$ it is totally meaningless to talk about $[v,w]$. In that case, $[\cdot,\cdot]$ is only meaningful for vector fields.
On the text, Nakahara is talking about picking two vectors $X,Y\in H_u P$ the horizontal subspace of the tangent space $T_u P$ at $u\in P$ and then he is talking about the Lie bracket $[X,Y]$.
And this is not the only place where he does that. Some paragraphs later we can see the same thing being done again, so it is certainly not a typo.
Is that really a mistake in the book? Or is there something I'm missing? Perhaps there is some natural extension of the vectors to vector fields when dealing with connections on principal bundles and I'm not aware of that.
Am I missing something, or does the book have a mistake in it?
You are right, but the paragraph you quoted shows that the vertical component of $[\tilde{X}, \tilde{Y}]$, for any horizontal "extensions" $\tilde{X}$, $\tilde{Y}$ of $X$, $Y$ respectively, is the same, being equal to $-\Omega(X,Y)$, which only depends on the values of $X$ and $Y$ at the point considered (and is thus independent of the choices of horizontal extensions $\tilde{X}$, $\tilde{Y}$).
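To sketch the standard argument behind this independence: since $\omega$ vanishes on horizontal vector fields, for any horizontal extensions $\tilde{X},\tilde{Y}$ of $X,Y\in H_uP$ the formula quoted from Nakahara gives, at the point $u$,

$$\Omega(X,Y) \;=\; d_P\omega(\tilde{X},\tilde{Y})\big|_u \;=\; -\,\omega\big([\tilde{X},\tilde{Y}]\big)\big|_u,$$

using that the curvature two-form agrees with $d_P\omega$ on horizontal arguments. Because $d_P\omega$ is a differential form, hence tensorial, the left-hand side depends only on the values $\tilde{X}_u=X$ and $\tilde{Y}_u=Y$. The vertical component of $[\tilde{X},\tilde{Y}]_u$ is therefore the same for every choice of extensions, which is exactly why Nakahara can write $[X,Y]$ for vectors without ambiguity.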
Solving a system of equations using Macaulay2
As an algebraic curve, the Klein quartic can be viewed as a projective algebraic curve over the complex numbers $\mathbb{C}$, defined by the following quartic equation in homogeneous coordinates $[x:y:z]$ on $\mathbb{P}^2_{\mathbb{C}}$:
$$x^3 y + y^3 z + z^3 x = 0.$$
Now we want to find the eigenvectors of this curve; therefore, consider the following matrix:
$$
\begin{bmatrix}
3x^2y+z^3 & x^3 + 3y^2z & y^3+3z^2x \\
x & y & z \\
\end{bmatrix},
$$
after computing the $2$-minors we got the following system:
\begin{equation*}
\left\{
\begin{alignedat}{3}
% R & L & R & L & R & L
3x^2y^2+z^3y & -{} & (3y^2zx + x^4) & = 0 \\
3z^2x^2+y^3x & -{} & (3x^2yz + z^4)& = 0 \\
3y^2z^2+x^3z & -{} & (3z^2xy + y^4)& = 0
\end{alignedat} \ .
\right.
\end{equation*}
I am new to Macaulay2 and I am wondering if Macaulay2 can help me solve the above system numerically by loading the NumericalAlgebraicGeometry package. (I know that, because there are infinitely many solutions, we need to normalize the system on the unit sphere.)
I divided all the equations by $z^4$ and set $$\frac{x}{z}=X;\;\frac{y}{z}=Y$$
so I got
$$
\begin{cases}
-X^4+3 X^2 Y^2-3 X Y^2+Y=0\\
-3 X^2 Y+3 X^2+X Y^3-1=0\\
X^3-3 X Y-Y^4+3 Y^2=0\\
\end{cases}
$$
Solving with Mathematica, I got these solutions:
$$
\begin{array}{rr}
X & Y\\
\hline
1 & 1 \\
0.307979 & 1.55496 \\
0.643104 & 0.198062 \\
5.04892 & 3.24698 \\
-1.80194 & 0.801938 \\
-0.445042 & -0.554958 \\
1.24698 & -2.24698 \\
-0.610856 & 0.106042 \\
-0.173596 & -1.63705 \\
0.667631 & 1.26654 \\
0.789553 & 0.52713 \\
1.89706 & 1.49783 \\
9.4302 & -5.76459 \\
\end{array}
$$
Hi, may I ask which code you used? I have a similar question. Thanks a lot!
I can confirm I got the same number of solutions. I was using the code below (you can try it online at Macaulay2Web):
"
R=QQ[x,y,z]
f=x^3* y + y^3* z + z^3* x
x1=basis(1,R)
I=minors(2,diff(x1,f)||x1)
pI=primaryDecomposition I
loadPackage "NumericalAlgebraicGeometry"
q=x^2+y^2+z^2
I=minors(2,diff(x1,f)||x1)+ideal(q-1)
sol=solveSystem(first entries gens I)
length sol"
and you will get 26 solutions, i.e. 13 pairs on the sphere. Thanks for answering; I should learn Mathematica too. I will check it precisely.
WooCommerce Nodejs Response Error
I am trying to get product details from a WordPress WooCommerce store. For that I am using the Node.js woo-commerce module.
But when I make a request I get the following error from the server:
{"status":"FAIL","message":"Invalid Signature - provided signature does not match"}
and 401 Unauthorized.
But when I tried the same thing with the PHP WooCommerce-REST-API-Client-Library, which uses curl, I got the desired result.
I have searched different things, and even compared the headers and URL parameters etc. for both the PHP and Node.js requests.
I have also tried this with the Node.js curlrequest module.
Can anybody please guide me in the right direction?
I am passing the following options to the node module:
{
url:'my-woocommerce.com',
port:'81',
consumerKey:'ck_402b945d4b8a5017bb507df68295e833',
secret:'cs_5ac59207f8cb8c444ca4c4336ccc84e1'
}
I would double check the API URL in both cases (even going as far as running fiddler or similar tool and comparing the requests).
I have verified the api url again, and it looks fine.
I hope you also checked http vs https and www in front of the domain name vs the domain by itself. If they are the same in both API requests, check if the URL you are using is the same as the URL defined in your WordPress general settings. Please also check if the consumer key and secret are the same and if the encodings of both requests are the same.
Sample example here http://my-woocommerce.dev:81/wc-api/v2/products/70?oauth_consumer_key=ck_402b945d4b8a5017bb507df68295e833&oauth_timestamp=1435655668&oauth_nonce=6b44a4896777c4264a84bdf17be1f5a174e9b559&oauth_signature_method=HMAC-SHA256&oauth_signature=qte6vA9%2BZfgvIWd%2FflEq7ntvTcECrAoWrXyO6kGJptk%3D
Is this a request that works? If so, can you also send the options object that you passed to WooCommerce?
I'm assuming that the .com in the url is there by mistake right?
No, the actual URL is http://woo2.piggybaq.com:81; I gave a sample URL in the code.
Usage of backslash in base64 string padding "\=\="
I have a file whose name appears to be a base64 encoded string padded with "==" when viewed in the macOS Finder app. However, when I drag the file into a shell (bash 4.4) to use it with the "file" command, the shell adds two backslashes to the file's name, showing it as "\=\=" instead of "==" for some reason.
Are these escape characters that serve some purpose or is there any further explanation for this?
In a shell, = can have the special meaning of assigning a value to a variable (as in NAME=value). In some special cases this could cause the shell to interpret a string containing = in an unexpected way, which is why drag-and-drop escapes it defensively; the backslashes are harmless, and the escaped name is equivalent to the unescaped one.
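A quick bash experiment shows that the escaping is only cosmetic (the file name here is made up for illustration):

```shell
# Terminal's drag-and-drop escapes '=' defensively; once bash parses the
# word, \= and = are the same character.
printf '%s\n' aGVsbG8\=\=                        # prints: aGVsbG8==
[ aGVsbG8\=\= = aGVsbG8== ] && echo identical    # prints: identical
```

So running `file` on the dragged path behaves exactly as if you had typed the name with plain `==`.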
ApexCharts only shows 5 labels (type datetime)
I have a chart with one value per day. Somehow it only labels 5 days on the x-axis.
How can I make it label all the days between the set min and max date (as far as I know, tickAmount does not work with type datetime)?
And how can I remove the offset on the x-axis so that the point for every day is directly above its label (See offset here)?
var options = {
series: [{
name: 'Series1',
data: [
5, 6, 5.5, 7, 1, 3, 4
]
}],
chart: {
height: 250,
type: 'area'
},
dataLabels: {
enabled: false
},
stroke: {
curve: 'smooth'
},
colors:['#000000'],
xaxis: {
type: 'datetime',
categories: [
'2022-07-20 00:00:00', '2022-07-21 00:00:00', '2022-07-22 00:00:00', '2022-07-23 00:00:00', '2022-07-24 00:00:00', '2022-07-25 00:00:00', '2022-07-26 00:00:00'
],
min: new Date("2022-07-20 00:00:00").getTime(),
max: new Date("2022-07-26 00:00:00").getTime(),
labels: {
formatter: function(val) {
return moment(new Date(val)).format("ddd");
},
style: {
colors: '#000000',
},
datetimeUTC: false, // Do not convert to UTC
},
},
yaxis: {
labels: {
style: {
colors: '#000000',
},
formatter: function (val) {
return val.toFixed(0)
},
},
tickAmount: 10,
min: 0,
max: 10
},
};
document.getElementById("chart").innerHTML = "";
var chart = new ApexCharts(document.querySelector("#chart"), options);
chart.render();
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<div id="chart"></div>
</body>
</html>
<script src="moment.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/apexcharts"></script>
Remove that min and max attributes in xaxis and see what happens.
@CuriousMind that does not change anything... Thanks anyway!
Found a solution for the problems you mention; we removed these settings:
type inside xaxis
min and max
We added the tickAmount option with value 6 (7 ticks, 0-6).
Another problem we detected was the value of the labels: 5.5 was rounded to 6. To prevent that, we add a formatter inside the tooltip:
y: {
formatter: function (val) {
return val.toFixed(1);
}
}
I leave the solution in react below.
CodeSandbox solution
Result of the solution in photo
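Putting the steps above together, the adjusted xaxis might look like this (a sketch of the config object only, assuming the same seven categories as in the question; `xaxisFixed` is a made-up name, and this is not a full ApexCharts setup):

```javascript
// Category axis instead of datetime: one tick per day, so every point
// sits directly above its label.
var xaxisFixed = {
  categories: ['2022-07-20', '2022-07-21', '2022-07-22', '2022-07-23',
               '2022-07-24', '2022-07-25', '2022-07-26'],
  tickAmount: 6, // 7 ticks (0 through 6), one per category
  labels: { style: { colors: '#000000' } }
};
```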
Is there a description of sheaf cohomology in algebraic-topological terms?
Is there a description of sheaf cohomology for the sheaf of sections of a continuous map in terms of common constructions in Algebraic Topology?
In more detail: Any sheaf on a space X can be described as the sheaf of sections of some continuous map from the étale space Y to X. In fact, the category of sheaves (of sets) on X is equivalent to the category of maps to X which are local homeomorphisms. A sheaf of Abelian groups is the same as an Abelian group object in the category of sheaves of sets, so instead of talking about cohomology of sheaves, we could talk about cohomology of an Abelian group object in the category of local homeomorphisms to X, that is, a local homeomorphism from some space Y to X such that, roughly, every fibre has an Abelian group structure where all the multiplications (of the fibres) put together form a continuous map from Y ×_X Y to Y.
It seems like there should be a simple description of the cohomology of X with coefficients in a sheaf of Abelian groups in terms of the corresponding map Y → X that uses only usual constructions in Algebraic Topology and the (fibrewise) group structure of Y. Is there one?
Computing derived functors is one of the common, usual constructions in Algebraic Topology!
As is computing the Cech Complex.
@Mariano & Charles Siegel, both: I wasn't very explicit about what kind of constructions I wanted in the description, but I meant things like homotopy classes of maps between appropriately defined spaces (for example, ordinary cohomology), and whatever you can get from them by kernels and other such operations from homological algebra. (You may mean something like this without me realizing it, in which case I would be very grateful if you enlightened me!)
@Charles Siegel: I guess I regard the simplicial Cech nerve as a construction in Algebraic Topology and you can get the usual Cech complex by Dold-Kan, but I wanted something more along the lines of "take this sort of cohomology on Y, and the map induced to cohomology on X and do this" or "build these spaces out of X and Y and take this kind of cohomology of them", etc. (I can't say precisely what description I want but I'll know it when I see it.)
@Mariano: Could you explain what you mean? Do you mean that derived functors are just the functors induced on homotopy categories by some functors on categories with weak equivalences (or more specifically, model categories)? If so (even though I didn't say), I wanted something more concrete (or "elementary") than "well, the injective resolutions you must take to compute sheaf cohomology are just cofibrant replacements in an approriate model structure".
In the case that $Y = A \times X$ is an untwisted sheaf, then there is an easy description (for reasonable spaces X and top. abelian groups A (say Hausdorff, compactly generated, locally contractible)) which is proven in G. Segal "Cohomology of Topological Groups" Sym. Math. Vol IV 1970 pg. 377. From the results of that paper it follows that for $i \geq 1$,
$$H^i(X, \mathcal{O}_A) \cong [X, B^i A]$$
where this is sheaf cohomology and $[-, -]$ denotes homotopy classes of maps, and $B^iA$ is the $i^{\text{th}}$ iterated classifying space. (Note that when A is abelian, BA is again an abelian topological group).
For twisted coefficients (i.e. arbitrary Y), there is a similar description, but you must work in the over category of spaces over X.
Right, I knew something like this must be true for constant coefficients (but not a precise set of hypothesis and a reference). Could you explain to me explicitly the case with arbitrary Y? Given an Abelian group object Y in {spaces over X}, to define BY as a space over X do you just form the (simplicial space over X valued) nerve of the Abelian group object $Y\to X$, and then take its realization (defined by the usual nerve and realization adjunction for the functor $\Delta \to$ {spaces over X} that sends the n-simplex to the projection $|\Delta^n|\times X \to X$)?
That sounds like it should work if all the spaces are reasonable, but there are technical pitfalls when working in the relative setting. What you want is another abelian group object $E \to X$ with an embedding as a closed sub-object $Y \hookrightarrow E$ such that E is "contractible" in the over category, i.e. $E \simeq X$ as spaces over X. Then the quotient E/Y in spaces over X will be the correct group BY. If you phrase it like this, then most (all?) of Segal's machinery should carry over.
I am pretty sure that you know what I'm going to say below, if it's correct, but maybe you or someone else can set me straight if I'm wrong.
Let $X$ be a space, and let $\mathcal{F}$ be a sheaf of abelian groups on $X$. Then $\mathcal{F}$ defines a functor
$\mathcal{O}(X) \to \mathbf{Ab}$ from the category $\mathcal{O}(X)$ of local homeomorphisms to $X$ to the category $\mathbf{Ab}$ of abelian groups, which satisfies descent: that is, if $U \to X$ is any local homeomorphism, then $\mathcal{F}$ can be recovered in the usual way from its pull-backs to $U, U \times_X U, \dots$ (actually, you only need the first two, and no "dots").
Now, $\mathcal{F}$ also defines a functor $$\mathcal{F}: \mathcal{O}(X) \to \mathbf{Sp}$$ where $\mathbf{Sp}$ is the $\infty$-category of spectra (namely, taking values in Eilenberg-MacLane spectra in degree zero). There is a well-defined notion of a sheaf of spectra: it's one which satisfies an analogous homotopy descent condition where you take the whole cosimplicial thing for the homotopy limit rather than an equalizer (and for hypercovers rather than Cech covers).
So $\mathcal{F}$ is a sheaf of abelian groups, but it's not a sheaf of spectra. In fact, if you take the sheafification of $\mathcal{F}$ (as a sheaf of spectra), and take its homotopy groups, you get the sheaf cohomology groups of $\mathcal{F}$. If I am not mistaken, this follows from the (degenerate) descent spectral sequence: that is, to sheafify $\mathcal{F}$, you take the inverse limit of the totalizations over each hypercover, and this gives you the sheaf cohomology groups of $\mathcal{F}$ (that is, $\pi_i$ of the sheafification is $H^{-i}$ of the sheaf over that open set).
Another way to check this is to treat $\pi_i$ of the sheafification as a $\delta$-functor on sheaves of abelian groups. The main thing to check is that if you have a sheaf of injective abelian groups, then this sheafification business doesn't give you anything new. Again, this follows from the descent spectral sequence, but there's probably another way to do it.
This doesn't quite answer your question: you want to describe the higher cohomology groups of $\mathcal{F}$ in terms of its "espace étalé." I can't see how to do this in terms of the present discussion: we really needed to be in a stable context, as the sheaf cohomology groups occur in negative degrees. But again, I haven't thought too much about this.
Spring Boot ResponseEntity not showing response in proper format if it contains XML data as string
If the response object contains XML data in string format, then the response is shown in an improper format.
@ApiResponses(value = { @ApiResponse(responseCode = "404", description = "Resource name not found"),
@ApiResponse(responseCode = "415 ", description = "Unsupported Media Type"),
@ApiResponse(responseCode = "500 ", description = "Application Error") })
@GetMapping(value = "/validate-resource/{resourceName}", consumes = { MediaType.APPLICATION_JSON_VALUE,
MediaType.APPLICATION_XML_VALUE }, produces = { MediaType.APPLICATION_JSON_VALUE,
MediaType.APPLICATION_XML_VALUE })
public ResponseEntity<Response> connectionValidate(
@Parameter(description = "resourceName cannot be empty.", required = true) @PathVariable(value = "resourceName", required = true) String resourceName)
throws Throwable {
HelperService helperService = new HelperService();
if (resourceMap == null || resourceMap.get(resourceName) == null) {
Response response = new Response();
response.setStatus("error");
response.setMessage("Resource name not found");
return new ResponseEntity<>(response, HttpStatus.NOT_FOUND);
}
HashMap<String, String> resourceDataMap = resourceMap.get(resourceName);
String localResponse = helperService.execute(helperSettingsFilePath,filepath, resourceDataMap, "SAP ERP",
"validate-resource");
Response response = new Response();
response.setStatus("success");
response.setData(localResponse);
return new ResponseEntity<>(response, HttpStatus.OK);
}
When we call the API, it shows the response in the following format:
<Response>
<status>success</status>
<message/>
<data>
<?xml version="1.0" encoding="UTF-8"?>
<Return>
<Status name="SUCCESS">0</Status>
<Description>Connection to the requested SapR3 Server was established successfully.</Description>
</Return>
</data>
</Response>
Expected output:
<Response>
<status>success</status>
<message/>
<data>
<?xml version="1.0" encoding="UTF-8"?>
<Return>
<Status name = "SUCCESS">0</Status>
<Description>Connection to the requested SapR3 Server was established successfully.</Description>
</Return>
</data>
</Response>
Please let me know what is going wrong here.
I think the string should be formatted properly before sending: convert the String to a Document and the Document back to a String, and send that.
You might need to add some dependency too, I think.
Reference : https://www.journaldev.com/1237/java-convert-string-to-xml-document-and-xml-document-to-string
Sample code from reference
package com.journaldev.xml;
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;
public class StringToDocumentToString {
public static void main(String[] args) {
final String xmlStr = "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\n"+
"<Emp id=\"1\"><name>Pankaj</name><age>25</age>\n"+
"<role>Developer</role><gen>Male</gen></Emp>";
Document doc = convertStringToDocument(xmlStr);
String str = convertDocumentToString(doc);
System.out.println(str);
}
private static String convertDocumentToString(Document doc) {
TransformerFactory tf = TransformerFactory.newInstance();
Transformer transformer;
try {
transformer = tf.newTransformer();
// below code to remove XML declaration
// transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
StringWriter writer = new StringWriter();
transformer.transform(new DOMSource(doc), new StreamResult(writer));
String output = writer.getBuffer().toString();
return output;
} catch (TransformerException e) {
e.printStackTrace();
}
return null;
}
private static Document convertStringToDocument(String xmlStr) {
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder;
try
{
builder = factory.newDocumentBuilder();
Document doc = builder.parse( new InputSource( new StringReader( xmlStr ) ) );
return doc;
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
}
Mentioned output
<?xml version="1.0" encoding="UTF-8"?><Emp id="1"><name>Pankaj</name><age>25</age>
<role>Developer</role><gen>Male</gen></Emp>
Thanks for the reply. In this case I need to give the response as-is in XML string format, instead of ResponseEntity<>.
I believe ResponseEntity should be fine. Also you may refer this article as well.
https://codeburst.io/display-xml-data-on-html-page-and-highlight-the-search-text-cc1ce34c4147
And I am wondering: since the response is already in XML format and Spring tries to format it to XML again, will that cause some issue? You can try sending the response in text format since we already have XML.
Last way should be instead of string populate data to an object and send ResponseEntity<[new class]> so spring should ideally convert the object to proper xml.
If I return response ResponseEntity this way then it works fine.
Passing javascript variable through url and decoding to a php variable
I found this little script that allows users to embed a simple JavaScript snippet on their site to show an iframe from my site. This works perfectly; however, I am trying to pass one variable through the code they embed and, on my end, set it as a PHP variable.
I apologize in advance; I don't know very much about JavaScript!
Sample code a user would embed (widget type code)
<script language="JavaScript" src="http://www.mysite.com/public/iframe.js" rel='{"id":"1"}'></script>
And this is the code that generates the iframe on my end
// Set path to the iframe file
var filePath = 'http://www.mysite.com/public/iframe.php';
// Setup the iframe target
var iframe = '<iframe id="frame" name="widget" src="#" width="100%" height="1" marginheight="0" marginwidth="0" frameborder="no" scrolling="no"></iframe>';
// Write the iframe to the page
document.write(iframe);
var myIframe = parent.document.getElementById("frame");
// Setup the width and height
myIframe.height = 350;
myIframe.width = 960;
myIframe.src = filePath;
// set the style of the iframe
myIframe.style.border = "1px solid #999";
myIframe.style.padding = "8px";
The end goal is to take the rel attribute id = 1 and assign it to a PHP variable for further use. I did a lot of looking around; it seems that json_decode is the answer, but nothing seems to work.
I'm pretty sure the rel attribute is not something used in an iframe tag. That is usually used in an anchor (<a>) tag with rel="nofollow" so that search engines don't follow that link.
I haven't tried this but maybe you can just send that parameters over after a question mark in your embed code. Like this:
<script language="JavaScript" src="http://www.mysite.com/public/iframe.js?id=1"></script>
Then your PHP code just looks for that in a GET request.
<?php
echo $_REQUEST['id'];
?>
Of course this assumes that your receiving script is written in PHP. Since you show a .js extension I'm not sure the PHP code is executing? Also, I'm not 100% sure you can pass parameters like this in a javascript embed.
You can use rel on an iframe, although I'm not sure why you would - it's often used on various tags for microformats. And you can pass a param in the url like that. But I beat you to it :D
I tried passing it like you guys said in the URL itself, but it doesn't work. I am assuming the issue is that the embed code calls a JavaScript file, which calls a PHP file, and the PHP file is the one that needs the variable, not really the .js file.
Don't pass the id as a rel attribute, just add it to the url directly:
<script language="JavaScript" src="http://www.mysite.com/public/iframe.js?id=1"></script>
Then access it in your php.
$id = (array_key_exists('id', $_GET)) ? $_GET['id'] : null;
This is a shorthand if, which checks to make sure the id has been added to the url. It's the same as:
if(array_key_exists('id', $_GET) {
$id = $_GET['id'];
} else {
$id = null;
}
In your JavaScript you add this to find the script tag itself:
var allScripts = document.getElementsByTagName('script'),
currentScript = allScripts[allScripts.length - 1 ];
Then, it's up to you to either use rel attribute value or decoding the src attribute.
Btw, this may not always work when script elements are added in an asynchronous manner, in which case you have to check that the src matches your filename, or perhaps check for the rel.
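If you go the src-decoding route, a tiny helper can pull the parameter out of the script's own URL (just a sketch; `extractId` is a made-up name):

```javascript
// Pull the id query parameter out of a script src like
// http://www.mysite.com/public/iframe.js?id=1
function extractId(src) {
  var match = /[?&]id=([^&#]*)/.exec(src);
  return match ? decodeURIComponent(match[1]) : null;
}

var id = extractId('http://www.mysite.com/public/iframe.js?id=1'); // "1"
```

On the PHP side this pairs with the $_GET['id'] check shown above.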
scroll #mainContainer to element in polymer using JQuery
I am using Polymer "core-scaffold" and I want to perform some jQuery function on #mainContainer (an ID).
I tried to use the following code
$('#mainContainer').on('scroll', function() {
// custom code
});
Could anyone let me know how to handle the scroll event on #mainContainer in Polymer?
Thanks in advance.
First of all there is no point in using jQuery for this.
Looking at core_scaffold code at https://github.com/Polymer/core-scaffold/blob/master/core-scaffold.html it looks that there is an "scroll" event emitted.
scroll: function(e) {
this.fire('scroll', {target: e.detail.target}, this, false);
}
Your ID selector might be off. Just try:
document.querySelector('[name="core-scaffold"]').addEventListener('scroll', function(event){ console.log(event)}, false)
The issue is in the selector: mainContainer is in the shadow DOM of core-scaffold and cannot be pierced with regular selectors. But core-scaffold provides a property for getting the scrollable element. I don't know jQuery very well, so I will use plain JavaScript and you can convert.
var scroller = document.querySelector('core-scaffold').scroller;
scroller.onscroll = function (e) {
// do something while scrolling
};
Must be and must have been
Is any of these sentences correct?
There must be something happened in the past that traumatized him.
There must have been something happened in the past that traumatized him.
And;
Do I need to add 'that' after 'something'?
Like, There (must be or must have been) something that happened in the past that traumatized him.
There must have been something that happened in the past that traumatized him.
To me the first sentence has mixed tenses: "be" conflicts with "in the past".
How about adding "that" after "something"? Or is there another way to say it?
Timeline App Tabs Icons
What happens if you don't upload the small icon that sits below the tab image?
What image will it pull?
For example - Facebook Apps - videos will show a reel icon below a visual of the video.
It will show the generic FB app icon:
c# adding row that already belongs to a datatable
I have a DataTable DTgraph; that datatable has a column named Campaign. That column can hold one of three unique values: IVR, City1, City2. The datatable has data in this format:
........ IVR........
.........IVR........
**.........IVR........**
.........City1......
.........City1......
**.........City1......**
.........City2......
.........City2......
**.........City2......**
I want to take the last row for each unique value of that column; in other words, I want to take the rows that are bold. I did almost everything like this:
var cRows = new Dictionary<string, DataRow>(StringComparer.InvariantCultureIgnoreCase);
foreach (DataRow oRow in DTgraph.Rows)
{
var sKey = oRow["Campaign"].ToString();
if (!cRows.ContainsKey(sKey))
{
cRows.Add(sKey, oRow);
}
else
{
cRows[sKey] = oRow;
}
}
var oNewTable = DTgraph.Clone();
foreach (var oRow in cRows.Values)
{
oNewTable.Rows.Add(oRow);
}
As you see, I put the data in a dictionary and transferred the dictionary to a datatable at the end.
My problem is that on this line:
cRows.Add(sKey, oRow);
I get an error:
The row is already belongs to another datatable
Note: I need to solve that exception, I don't need a new way of doing my goal
Note: I was wrong, the exception is on this line
oNewTable.Rows.Add(oRow);
Taken over from here: http://stackoverflow.com/questions/28648984/c-sharp-datatable-select-last-row-on-a-speicfic-condition
You are making an effort to ensure the keys are unique, but then putting the values in the table, and it complains the values you put in the table aren't unique.
To be honest I don't 100% understand your question, however to fix the exception:
The row is already belongs to another datatable.
Change:
oNewTable.Rows.Add(oRow);
To:
oNewTable.ImportRow(oRow);
Alternatively create a new row and clone the ItemArray.
foreach (var oRow in cRows.Values)
{
var newRow = oNewTable.NewRow();
newRow.ItemArray = oRow.ItemArray.Clone() as object[];
oNewTable.Rows.Add(newRow);
}
I was wrong; I corrected the question. I was wrong about the line that throws the exception. Could you check, please?
@MarcoDinatsoli That is the line I have fixed, did you read the answer?
I am trying it, wait please. The DB is on the cloud and I am waiting for permission, max 4 minutes.
Use the NewRow() function of the new table, then use the oRow.ItemArray property to get the values from the source row and copy them to the newly created row's ItemArray. An example would be:
Array.Copy(oRow.ItemArray, oNewTable.NewRow().ItemArray, oRow.ItemArray.Length)
However, remember that this would not preserve original values and current state from the source row (which I don't think you're using here anyway). If those things matter, go for ImportRow() solution which preserves source row's state when copying.
I was wrong; I corrected the question. I was wrong about the line that throws the exception. Could you check, please?
@MarcoDinatsoli: That is the line you need to replace with the code I have written above. Did you try it?
Plus one; I wish I could accept more than one answer. Many thanks for the efforts, appreciate it.
Evidence that open source production processes increase efficiency and/or consumer surplus?
Is there peer reviewed evidence that open source production processes increase efficiency and/or consumer surplus? It seems that the first theorem of welfare economics requires complete markets which requires that all actors have perfect information. Yet, producers zealously seek to keep their own production processes secret rather than public.
For example, Soda Co. has an unpatented secret recipe at time $t_0$ that sells in the market. Are there economists who argue that if Soda Co. must patent and publicly reveal the secret recipe at time $t_x$, efficiency and/or consumer surplus would increase in comparison to current economy?
If not, are there good economic reasons why economic regulation does not require all suppliers to publicly reveal secret recipes, i.e., all information about the production process that would allow other producers to compete in production of the identical product (whether that be Soda Co.'s management techniques, scheduling practices, shipping arrangements, leasing terms, model number of vats, speed of conveyor belts, actual secret recipe for a beverage, etc.)? That is to say, are there good economic reasons that secrets anywhere in the entire production process are not simply barriers to entry?
Maybe Drink Co. can produce and sell Soda Co.'s recipe for 0.01 less than Soda Co. after they clone Soda Co.'s land, labor, capital, entrepreneurship, and recipe due to one tweak? Maybe consumers will only pay 0.10 less after they possess perfect information on how Soda Co. produces the beverage?
Producers are not in the business to cater to consumer surplus. And it seems you think that "economic regulation" is like a magic wand -"write the spell (sorry, the regulation), and it is done". The tension between the individual and the collective (and the compromises that must be reached) are the main reason human societies are interesting subjects to study.
There is a large economic literature on intellectual property rights. However, the issue seems far from settled on what even the optimal duration for patents are. Note that open source is even a step further than a 0 day patent duration. A strong case for your view would probably be found in Boldrin/Levine:
http://levine.sscnet.ucla.edu/general/intellectual/againstnew.htm
Here are several further starting points in the literature:
Scotchmer, Suzanne. "Standing on the shoulders of giants: cumulative research and the patent law." The Journal of Economic Perspectives (1991): 29-41.
Besen, Stanley M., and Leo J. Raskind. "An introduction to the law and economics of intellectual property." The Journal of Economic Perspectives (1991): 3-27.
Posner, Richard A. "Intellectual property: The law and economics approach." Journal of Economic Perspectives (2005): 57-73.
Fantastic answer! In case I was not clear, however, I edited the question above to stress that the question is not about eliminating patents but about requiring patents for everything in the entire production process. Within the intellectual property rights literature, have you seen the argument that Soda Co. must publicly reveal the blueprint of the entire production process (or at least more of it) for potential competitors and consumers to hold perfect (optimal) information?
Sorry, have not seen such an argument. Probably Boldrin/Levine would be closest to this idea. It is actually a straightforward consequence of their thesis. If companies no longer patent their processes but invest effort into hiding these processes, this generates inefficiencies if the patent system is optimal.
@jtd Why should the trade-offs be different at different steps in the production process? Each technology used (at any point) in the production process was invented (at some cost) by somebody. The ability to control the technology (either because you have a patent or because you keep it secret) and make profits from it is what gives people an incentive to innovate in the first place. Forcing disclosure would improve efficiency to by introducing competition in that part of the production process, but harm it to the extent that innovation is reduced. Policy must balance these two considerations.
Sending a message to a TestProbe() fails sometimes with ActorInitializationException
I have some sporadic test failures and am struggling to figure out why. I have a bunch of actors which do the work I want to test. At the beginning of the test I pass in an actor reference which I get from a TestProbe(). Later on the group of actors does some work and sends the result to the given test probe actor reference. Then I check the result with the TestProbe():
class MyCaseSpec extends Spec with ShouldMatchers{
describe("The Thingy"){
it("should work"){
val eventListener = TestProbe()
val myStuffUnderTest = Actor.actorOf(new ComplexActor(eventListener.ref)).start();
myStuffUnderTest ! "Start"
val eventMessage = eventListener.receiveOne(10.seconds).asInstanceOf[SomeEventMessage]
eventMessage.data should be ("Result")
}
}
}
Now once in a while the test fails. And when I look through the stack trace I see that I got an 'ActorInitializationException' when sending a message to the test probe actor. However, at no point do I stop the TestProbe actor.
Here's the exception:
[akka:event-driven:dispatcher:global-11] [LocalActorRef] Actor has not been started, you need to invoke 'actor.start()' before using it
akka.actor.ActorInitializationException: Actor has not been started, you need to invoke 'actor.start()' before using it
[Gamlor-Laptop_c15fdca0-219e-11e1-9579-001b7744104e]
at akka.actor.ScalaActorRef$class.$bang(ActorRef.scala:1399)
at akka.actor.LocalActorRef.$bang(ActorRef.scala:605)
at akka.mobile.client.RemoteMessaging$RemoteMessagingSupervision$$anonfun$receive$1.apply(RemoteMessaging.scala:125)
at akka.mobile.client.RemoteMessaging$RemoteMessagingSupervision$$anonfun$receive$1.apply(RemoteMessaging.scala:121)
at akka.actor.Actor$class.apply(Actor.scala:545)
....
I'm wondering if I'm missing something obvious or am I making a subtle mistake? Or maybe something is really going wrong inside my code and I can't see it?
I'm on Akka 1.2.
Update for Vitor's comment. At line 125 I send a message to an actor with the !-operator. Now in the test setup that's the TestProbe actor reference. And I can't figure out why the TestProbe actor sometimes seems to be stopped.
protected def receive = {
case msg: MaximumNumberOfRestartsWithinTimeRangeReached => {
val lastException = msg.getLastExceptionCausingRestart
faultHandling ! ConnectionError(lastException, messages.toList, self) // < Line 125. The faultHandling is the TestProbe actor
become({
// Change to failure-state behavior
}
// Snip
Anyway, I'm trying to isolate the problem further for the time being. Thanks for any hint / idea.
Why haven't you included the most interesting section? at akka.mobile.client.RemoteMessaging$RemoteMessagingSupervision$$anonfun$receive$1.apply(RemoteMessaging.scala:125)
At 125 I send a message to my TestProbe() actor: I can't figure out why the test-probe actor is not running sometimes and I get the exception.
Ok, almost certainly found the issue =). TestProbes do have a timeout: when nothing happens for 5 seconds, they stop themselves.
Now unfortunately the test takes just a little longer than 5 seconds: in that time the test probe may already stop itself, which then causes the test to fail.
Fixing it is easy, increase the timeout on the TestProbe:
val errorHandler = ignoreConnectionMsgProbe()
errorHandler.setTestActorTimeout(20.seconds)
Yes, this is exactly what happens and the way to deal with it (you may also completely disable it using Duration.Inf). I also found that confusing (it cost me some hours to find the actual cause a few weeks back), and I should have known. Anyway, the issue is gone in 2.0, where all actors are orderly shut down after the test without any timeouts (if you stop the ActorSystem, that is).
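The failure mode described here — a probe that shuts itself down after a fixed idle period while the slow test is still running — can be illustrated with a plain queue and a deadline. This is a hypothetical, language-neutral Python sketch of the behavior, not Akka code; the class and timeout values are made up:

```python
import queue
import time

class IdleProbe:
    """Toy test probe that refuses messages once it has been idle too long."""

    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.last_activity = time.monotonic()
        self.inbox = queue.Queue()

    def send(self, msg):
        # Mimics the ActorInitializationException: the probe considers
        # itself stopped once the idle timeout has elapsed.
        if time.monotonic() - self.last_activity > self.idle_timeout:
            raise RuntimeError("probe already stopped itself")
        self.inbox.put(msg)
        self.last_activity = time.monotonic()

probe = IdleProbe(idle_timeout=0.1)
probe.send("ok")             # within the timeout: accepted
time.sleep(0.2)              # simulate a test that runs longer than the timeout
try:
    probe.send("too late")   # after the timeout: rejected
except RuntimeError as e:
    print(e)                 # prints "probe already stopped itself"
```

Raising the idle timeout (as `setTestActorTimeout(20.seconds)` does above) is exactly the fix: the probe then outlives the slow test.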
You are not starting your actor here. I'm not sure why your test is working some of the time. The code above needs to have the following line modified with an .start()
val myStuffUnderTest = Actor.actorOf(new ComplexActor(eventListener.ref)).start();
Oh. That's a mistake in the example above. In the actual test it is started. Fixed the example.
Can the code button in the format bar be made to work for code blocks in numbered lists
I ran into a problem formatting a code block in a numbered list. I notice that I am not the first one to run into this problem -- there are more than a dozen questions related to this topic.
The solution is to add four extra spaces for each nested level of the list item that the code block appears in.
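That rule is simple arithmetic: four spaces mark the code block, plus four more for each nesting level of the list item. A small Python helper, purely illustrative (the function name is made up):

```python
def indent_for_code_block(code, nesting_level):
    """Indent code so Markdown treats it as a code block inside a list.

    A code block needs 4 leading spaces, plus 4 more for each nesting
    level of the list item it appears in (level 0 = not in a list).
    """
    pad = " " * (4 * (nesting_level + 1))
    return "\n".join(pad + line for line in code.splitlines())

# A code block inside a top-level list item needs 8 leading spaces.
print(indent_for_code_block("x = 1", 1))
```

A context-aware code button would effectively compute this padding from the surrounding list depth instead of always inserting four spaces.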
I wonder if the following steps can be taken to help newbies out with this issue --
In the "How to Format" reference (that appears to the right of you when edit, reword the "indent code by 4 spaces":
"indent code by 4 spaces. In lists, add an extra 4 spaces for level of the list item"
Add this information to the full reference as well. Currently, it says this:
"To put other Markdown blocks in a list; just indent four spaces for each nesting level:"
which is helpful to those who know Markdown and are familiar with the terms, but will totally fly over the head of a newbie (who is the one who actually needs to know). Add an example of a code block within a nested list.
Fix the code button on the format bar (above the edit box) to figure out how many spaces to add automatically.
Add this information to the FAQ
+1. I'd actually prefer it if the code button just added a 4-space indent each time I pressed it. Or maybe sequential pressings should add 4-space indents, but a select-new-text-then-press should remove all indentation (if that's possible). It's rare that I need to remove code-block markdown; I need to add extra spaces much more frequently.
Well it would indeed be a plus to have a context aware indentation.
This [question] shows the same problem. You either have to put a pre tag to format your code, or add an extra 4 spaces, even if the code block is at the end of the numbered list.
+1 This fix is super important. So annoying to have to copy to an external editor to get the correct indentation.
How to access different File Manager specific path in android?
I have installed different types of file managers. The different types of file managers are:
Default File Manager
ASTRO-File-Manager-v3.1.342.apk(Astro)
FileManager-1.2.apk(OI File Manager)
Root-Browser-File-Manager-v1.4.0.apk(Root Browser)
Code :
File filePath = new File(Environment.getExternalStorageDirectory().getAbsolutePath()+"/Pictures");
Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
intent.setDataAndType(Uri.parse("file:/"+filePath.getAbsolutePath()), "file/*");
startActivityForResult(intent,PICKFILE_RESULT_CODE);
When I select Astro File Manager, Root Browser, or the default file manager, I get the location path /mnt/sdcard.
But when I use OI File Manager, I get the location path /mnt/sdcard/Pictures.
How do I get the different file managers to open the location path /mnt/sdcard/Pictures?
private static final int PICKFILE_RESULT_CODE = 1;
Can someone please help me? How do I open the path /mnt/sdcard/Pictures directly using Astro File Manager, Root Browser, OI File Manager, and also the default file manager? Thanks.
Specifying initial path to a file explorer? You can't.
Read documentation for ACTION_GET_CONTENT. The contract for using this action doesn't specify initial location at all. So various file managers do different things, and it's all legal and correct. Some use the data you try to input, some use last directory where user was, some start at root of sdcard.
Btw, your mime type "file/*" looks like nonsense, there's no such known mime type. Just use "*/*" to match all files.
Mice, thanks for the reply. May I directly access the internal memory and/or external memory? My Android mobile's internal memory path is /mnt/sdcard and the external memory path is /mnt/extsdcard. Is this possible?
You can read/write in the device's mass memory (internal or external) without problems. You may need the permission WRITE_EXTERNAL_STORAGE.
Mice, thank you. Your explanation is useful to me. Can you post an example or part of the code here? How do I access the mass memory, both internal and external? Where can I change the path of the internal and external memory? Guide me. Thanks.
You should write new question for that, it's rather different topic than what you originally asked.
Matching a time interval using regex
What I needed to do was check whether a given string matches a certain pattern. The pattern is this:
00:00:00,000 --> 00:00:00,000
Things to keep in mind:
The 0s can be numbers from 0 to 9.
The pattern must be alone in a single line; the line has to only consist of the pattern.
I came up with this:
"^(\\d\\d):(\\d\\d):(\\d\\d),(\\d\\d\\d) --> (\\d\\d):(\\d\\d):(\\d\\d),(\\d\\d\\d)"
I tested it a few times and strings that respect the pattern return true, the ones that don't, return false, as they should.
Here's a test case:
import java.util.regex.Pattern;
public class TestCaseRegex {
public static void main(String[] args) {
//will return true
String testOne = "00:01:23,846 --> 00:01:26,212";
//will return false, there's a letter where a number should be
String testTwo = "00:01:23,84a --> 00:01:21,221";
//will return true
String testThree = "00:05:54,846 --> 00:01:16,450";
//will return false. The string doesn't match the format.
String testFour = "00:05:54,6 --> 00:0116,450";
System.out.println(patternMatch(testOne));
System.out.println(patternMatch(testTwo));
System.out.println(patternMatch(testThree));
System.out.println(patternMatch(testFour));
}
public static boolean patternMatch(String str) {
Pattern p = Pattern.compile("^(\\d\\d):(\\d\\d):(\\d\\d),(\\d\\d\\d) "
+ "--> (\\d\\d):(\\d\\d):(\\d\\d),(\\d\\d\\d)");
return p.matcher(str).matches();
}
}
As I'm very new to regex, I'm wondering if this is the most efficient/correct way to accomplish this.
Suppose that a string matches the pattern. Then what? What do you do with that information?
The idea will be to split the string again by --> and store each time interval separately. But the question is about the regex I came up with, and if it's the most efficient to validate the pattern I presented. I'm not thinking ahead of that (yet).
I'm far from an expert in regex. But you could have used \d{2} instead of \d\d.
@gervais.b Oh that's good. Can I also use \\d{3} for \\d\\d\\d?
@Morgan Yes, of course, and you can also go a little deeper by repeating the 00: twice: (\d{2}:){2}. And if your goal is to extract both parts, you can use capturing groups: ((?:\d{2}:){2}\d{2},\d{3}) --> ((?:\d{2}:){2}\d{2},\d{3}) http://www.regexplanet.com/cookbook/ahJzfnJlZ2V4cGxhbmV0LWhyZHNyDwsSBlJlY2lwZRiqgfU0DA/index.html
Regarding a small optimization, turn "p" into a constant (private final static Pattern PATTERN_TIMESTAMP = Pattern.compile("...");). Compiling each and every call to patternMatch is fine and good if you're planning on using it once, but I'm guessing you're using it to search through log files, am I right?
Yes, I concur with @gervais.b: it is probably better in this case to use a quantifier.
Rather than using \d, you might want to limit the matcher to valid ranges.
\d{2}(:[0-5]\d){2},\d{3}
would match either of the timestamps and it would prevent someone from entering
11:88:88,888 --> 12:99:99,999
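The thread's code is Java, but the pattern syntax is the same in Python's re module, which makes the range-limited idea easy to check. This is an illustrative sketch of a single range-limited timestamp (minutes and seconds capped at 59), anchored to the whole string:

```python
import re

# One timestamp with minutes/seconds limited to 00-59, anchored.
TIMESTAMP = re.compile(r"^\d{2}(?::[0-5]\d){2},\d{3}$")

print(bool(TIMESTAMP.match("00:01:23,846")))  # True
print(bool(TIMESTAMP.match("11:88:88,888")))  # False: 88 is not a valid minute
```

The same sub-pattern can then be used twice around the literal " --> " to validate a full interval.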
If you want to check for any valid time interval, you should consider allowing strings like:
0:0:0,0-->12:03:4,344
Try this:
^\s*\d{1,2}:\d{1,2}:\d{1,2},\d{1,3}\s*-->\s*\d{1,2}:\d{1,2}:\d{1,2},\d{1,3}\s*$
With your original regex, a string like this isn't considered a valid time interval: it requires double digits for hours, minutes, and seconds and three digits for milliseconds.
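Checked against Python's re module (the syntax carries over from the Java pattern unchanged), the looser pattern accepts the single-digit, space-optional form while still rejecting malformed input:

```python
import re

# Lenient interval pattern: 1-2 digits per field, 1-3 for milliseconds,
# optional whitespace around the arrow.
INTERVAL = re.compile(
    r"^\s*\d{1,2}:\d{1,2}:\d{1,2},\d{1,3}"
    r"\s*-->\s*"
    r"\d{1,2}:\d{1,2}:\d{1,2},\d{1,3}\s*$"
)

print(bool(INTERVAL.match("0:0:0,0-->12:03:4,344")))          # True
print(bool(INTERVAL.match("00:01:23,846 --> 00:01:26,212")))  # True
print(bool(INTERVAL.match("00:05:54,6 --> 00:0116,450")))     # False: missing colon
```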
Not able to use global variables in a @app.route block in flask app
from flask import Flask, request, render_template
app = Flask(__name__)
def fun(userAns="True", question="this is first question"):
if question == "this is first question":
if userAns == "True":
return "actual first question", "Yes", "No", None
elif question == "actual first question":
if userAns == "True":
return "2nd question to be asked after ans = true", "Yes", "No", None
elif userAns=="False":
return "2nd question to be asked after ans = false", "Yes", "No", None
question = None
userAns = None
@app.route("/", methods=["POST", "GET"])
def hello():
a=0
while a < 1:
question = "this is first question"
userAns = 'True'
returnValue = fun(userAns, question)
a += 1
if request.method == "POST":
userAns = request.form["answer"]
print(userAns)
returnValue = fun(userAns, question)
question=returnValue[0]
return render_template("index.html",question=returnValue[0], option1= returnValue[1],option2=returnValue[2],option3=returnValue[3])
if __name__ == "__main__":
app.run(debug=True)
Here I want to call the fun() function in the @app.route block, but for that I have to initialize some variables, since they will be updated on the next iteration. I observed in the debugger that every variable gets reset to its initial value at the end of the @app.route block, so I tried to make those variables global, but the global variables weren't picked up for some reason.
To update a global variable inside a function, add global variable_name at the start of the hello function.
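Here is that pattern reduced to a minimal sketch (a plain function instead of a real Flask view, so the variable names are just taken from the question for illustration):

```python
question = None
userAns = None

def hello():
    # Without these declarations, assigning to question/userAns inside
    # the function would create new LOCAL variables, and the module-level
    # ones would still be None after the function returns.
    global question, userAns
    question = "this is first question"
    userAns = "True"
    return question, userAns

hello()
print(question)  # the module-level variable was actually updated
```

Prints "this is first question". Note that for per-user state in a real Flask app, the session object is usually a better fit than module-level globals, since globals are shared across all requests.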
How can I get sudo back to work? Any sudo results in "sudo: 3 incorrect password attempts"
I had problems with an ssh connection and finally ended up disabling my sudo completely. Any sudo command results in:
Sorry, try again.
Sorry, try again.
Sorry, try again.
sudo: 3 incorrect password attempts
Surprisingly, I am never actually asked to enter a password. Maybe this is already a problem.
I would appreciate any help
"disenabling my sudo completely" how, exactly?
What did you actually do? Which commands did you run? Which config files did you edit?
I am not completely sure which command caused sudo to stop working. I tried to change authorized_keys, .ssh/config and other ssh-related files.
Could it be that it has something to do with the command cat /etc/pam.d/sudo ?
Log in to root with su root, then type passwd username. You will be able to change the password. Maybe you edited the passwd file by mistake.
'su root' gives 'su: Authentication failure'
I guess this is what happens when you "disable your sudo completely". The first step is going to be to obtain root access. If you cannot sudo or su from your user, can you ssh in as root? If not, you are going to need to boot to recovery mode. Do you have physical access to the server?
@Panther, I now booted to recovery mode. Could you please help me how to proceed?
can you post the output of id your_username to start with and make sure your user is in /etc/sudoers either by name or group.
uid=1001(ac) gid=1002(ac) groups=1002(ac), 0(root), 4(adm), 20(dialout), 21(fax), 24(cdrom), 25(floppy), 26(tape), 27(sudo), 30(dip), 44(video), 46(plugdev), 105(fuse), 110(lpadim), 123(sambashare)
post grep sudo /etc/sudoers
I guess this line is of importance
%sudo ALL=(ALL:ALL) ALL
It should be working. su your_user and try sudo; post any errors. We may need to look at the permissions of sudo, but that usually gives a different error.
su my_user gives su: Authentication failure
I am still in the recovery mode. Is this correct?
And you are sure you are using the correct password for your user account? Reset your user password with passwd user_name and run faillog -u your_user -r, then try su your_user and sudo from your account again.
This does not work. When I did passwd my_username, I was asked to enter a new password (twice) but then it said passwd: Authentication token manipulation error passwd: password unchanged. By the way, many thanks for your efforts!
Did you remount in rw? Otherwise your file system is ro. mount -o remount,rw / and try again. If that fails, it will have to wait; I have plans for the day.
the password change was now successful. But faillog -u my_username -r results in faillog: Cannot open /var/log/faillog: Read-only file system
Try su and sudo again. Check to see if other file systems are mounted ro, mount | grep ro, and remount them also. Re-run faillog .....
su my_username gives still su: Authentication failure;
mount -ro remount,rw / results in mount: /dev/sda1 already mounted or / busy mount: according to mtab, /dev/sda1 is already mounted on / ; and mount | grep ro gives /dev/sda1 on / type proc (rw,noexec,nosuid,nodev) check the /proc/mounts file.
componentWillReceiveProps state is different from render state after redux state update
First of all, all the relevant code (click on the filename for the full source code of that file).
LoginView.js
LoginView = class extends React.Component {
handleLogin = (email, password) => {
this.props.authenticationActionCreator.login(email, password);
};
componentWillMount () {
console.log('componentWillMount', 'this.props.isAuthenticated', this.props.isAuthenticated);
}
componentWillReceiveProps () {
console.log('componentWillReceiveProps', 'this.props.isAuthenticated', this.props.isAuthenticated);
}
render () {
let {
errorMessage,
isAuthenticating
} = this.props;
return <div>
<p>this.props.isAuthenticated: {this.props.isAuthenticated ? 'true' : 'false'}</p>
<button onClick={() => {
<EMAIL_ADDRESS>'nosyte');
}}>Login</button>
</div>;
}
};
authentication.js (reducer)
if (action.type === 'AUTHENTICATION.LOGIN_SUCCESS') {
return initialState.merge({
isAuthenticated: true,
token: action.data.token,
user: action.data.user
});
}
authenticationActionCreator.js
authenticationActionCreator.loginSuccess = (token) => {
let decodedToken;
// @todo Handle failure to decode token.
decodedToken = jwtDecode(token);
localStorage.setItem('token', token);
return {
type: 'AUTHENTICATION.LOGIN_SUCCESS',
data: {
token,
user: decodedToken.user
}
};
};
The flow is simple:
User opens the page.
User clicks the <button /> that invokes authenticationActionCreator.login.
The console.log output is:
componentWillMount this.props.isAuthenticated true
action AUTHENTICATION.LOGIN_REQUEST @ 16:52:50.880
componentWillReceiveProps this.props.isAuthenticated true
componentWillReceiveProps this.props.isAuthenticated false
action AUTHENTICATION.LOGIN_SUCCESS @ 16:52:51.975
The expected console.log output is:
componentWillMount this.props.isAuthenticated true
action AUTHENTICATION.LOGIN_REQUEST @ 16:52:50.880
componentWillReceiveProps this.props.isAuthenticated false
action AUTHENTICATION.LOGIN_SUCCESS @ 16:52:51.975
componentWillReceiveProps this.props.isAuthenticated true
The problem is that render has the correct state (the state after AUTHENTICATION.LOGIN_SUCCESS) and componentWillReceiveProps has the old state (the state after AUTHENTICATION.LOGIN_REQUEST).
I expected the last call to componentWillReceiveProps to have the same state object as the render method.
Is this:
a bug
I am doing something wrong
my expectations are false
?
It took me writing out all this debug trace/question to remember that the componentWillReceiveProps API is:
componentWillReceiveProps: function(nextProps) {}
In other words, my LoginView.js example should have been:
LoginView = class extends React.Component {
handleLogin = (email, password) => {
this.props.authenticationActionCreator.login(email, password);
};
componentWillReceiveProps (nextProps) {
console.log('componentWillReceiveProps', 'nextProps.isAuthenticated', nextProps.isAuthenticated);
}
render () {
let {
errorMessage,
isAuthenticating
} = this.props;
return <div>
<p>this.props.isAuthenticated: {this.props.isAuthenticated ? 'true' : 'false'}</p>
<button onClick={() => {
<EMAIL_ADDRESS>'nosyte');
}}>Login</button>
</div>;
}
};
Did you ever figure out why? Is it because componentWillReceiveProps is called with the next props before this.props is updated? Thank you for leaving the answer, btw. I too did not check the API first and was spending hours trying to figure out what was going on.
Combining validation errors in Rails
For example, if I have a user with an email address that needs validating on presence and format:
validates_presence_of :email_address, :message => "can't be blank"
validates_format_of :email_address, :with => /^([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})$/i
If nothing is entered, how can I prevent both error messages appearing? I know for this scenario I wouldn't need the validates_presence_of, this is just an example. Thanks
In this example, you can add :allow_blank => true to the validates_format_of.
In general, I think it depends on the situation, most often it can be solved with clever usage of ActiveRecord validation options.
Thanks, that does the job fine. When I think of a situation where I'd want more than that I'll post it up.
You can also introduce a conditional :if, such as:
validates_format_of :email_address, :with => EMAIL_REGEXP, :if => :email_address?
The email_address? method should return true only if that field has a non-blank value. That can be very handy for situations like this.
Yeah, this would be more relevant if the conditional could use another validates_foo...
validates_format_of :email_address, :with => EMAIL_REGEXP, :if => {validates_whatever_of :email_address}
I know that won't work, but that explains my initial question a bit more clearly.
I think it'd be a lot better if validations could be chained, like you say, but there's no support for that yet. They all run, all the time, and each has a chance to introduce an error message.
Replace all header tags with h4 in java using Jsoup
I have the following html :
<!DOCTYPE html>
<html>
<body>
<h2> heading here </h2>
<p>This is a paragraph.</p>
<p>This is another paragraph.</p>
<h3> heading here again </h3>
<p>This is another another paragraph.</p>
<h4> heading here again </h4>
</body>
</html>
How can I replace all the h tags with h4 tags using Jsoup? I don't wish to use regex for this.
JSoup's Element lets you rename the tag. Have you attempted anything yet?
Damn, I didn't know about the replace functionality in Jsoup earlier. Here's how easy it is:
Document examString111 = Jsoup.parse(examString);
Elements elements = examString111.select("h1, h2, h3, h4, h5, h6");
elements.tagName("h4");
Why is my JS only working if I put it after the HTML, even though I have a window.onload?
I just started a CS class at school, so please excuse my total lack of basic knowledge. This JS only works if I put it after the HTML code, not if I put it in the head tag. Shouldn't window.onload take care of that? Can someone please explain what's wrong? Thanks in advance!
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Document</title>
<style>
#field {
width: 600px;
height: 600px;
background-color: black;
position: relative;
}
#player {
width: 50px;
height: 50px;
background-color: red;
position: absolute;
left: 0px;
top: 0px;
}
</style>
<script>
var player = document.getElementById("player");
var playerLeft = 0;
var PlayerTop = 0;
function move(e) {
if (e.keyCode == 68) {
playerLeft += 10
player.style.left = playerLeft + "px";
}
if (e.keyCode == 65) {
playerLeft -= 10
player.style.left = playerLeft + "px";
}
if (e.keyCode == 87) {
PlayerTop -= 10
player.style.top = PlayerTop + "px";
}
if (e.keyCode == 83) {
PlayerTop += 10
player.style.top = PlayerTop + "px";
}
}
function loadEvents() {
document.onkeydown = move;
}
window.onload = loadEvents;
</script>
</head>
<body>
<div id="field">
<div id="player">
</div>
</div>
Possible duplicate of Why window.onload works and onload="" does not work
Your assignment of the player needs to ALSO be inside the onload: var player; function loadEvents() { player = document.getElementById("player"); document.onkeydown=move; }
@TimB - wrong duplicate!
The problem is that you want to get an element which doesn't exist yet:
var player = document.getElementById("player");
Put this line in the loadEvents() function which is called when the window is loaded.
Note: Avoid errors (if #player element doesn't exist) adding if (player) { ... }
<script>
var player = null;
var playerLeft = 0;
var playerTop = 0;
function move(e) {
if (e.keyCode == 68) {
playerLeft += 10
player.style.left = playerLeft + "px";
}
if (e.keyCode == 65) {
playerLeft -= 10
player.style.left = playerLeft + "px";
}
if (e.keyCode == 87) {
playerTop -= 10
player.style.top = playerTop + "px";
}
if (e.keyCode == 83) {
playerTop += 10
player.style.top = playerTop + "px";
}
}
function loadEvents() {
player = document.getElementById("player");
if (player) {
document.onkeydown = move;
}
}
window.onload = loadEvents;
</script>
Edit
For @tmslnz
var player = null is somewhat redundant, since getElementById returns null if no element is found.
From the ECMAScript2015 spec
4.3.10 undefined value
primitive value used when a variable has not been assigned a value
4.3.12 null value
primitive value that represents the intentional absence of any object value
See this thread and this answer
In addition, strict mode would invalidate this.
@mplungjan Yes, you're right, I just pasted the code too quickly without proofreading it afterwards. Thanks guys
I am hair-splitting, but for its own sake: var player = null is somewhat redundant, since getElementById returns null if no element is found.
This runs before the document is loaded.
AKA before the browser has any clue of your markup.
var player = document.getElementById("player");
…
Any time you are querying the DOM (think of it as the HTML you wrote as seen by the browser), you need to wait for it be ready.
Your script would probably be OK if you placed it at the very end of your document, before the closing </body> tag, because by then the browser would have had a chance to spot #player.
What you need to do is find a way to structure your scripts so that they don't try to access the DOM before it's ready.
The easy way is to put everything from your onload handler inside $(function(){ … }) if you are using jQuery.
Alternatively if you prefer a bit more control, you can use something like an init function, which you use to "start the engine after the tank is filled" (to use a crappy metaphor…)
For example:
var player;
var playerLeft = 0;
var playerTop = 0;
function init () {
player = document.getElementById("player");
loadEvents();
}
function move(e) {
if (e.keyCode == 68) {
playerLeft += 10
player.style.left = playerLeft + "px";
}
if (e.keyCode == 65) {
playerLeft -= 10
player.style.left = playerLeft + "px";
}
if (e.keyCode == 87) {
playerTop -= 10
player.style.top = playerTop + "px";
}
if (e.keyCode == 83) {
playerTop += 10
player.style.top = playerTop + "px";
}
}
function loadEvents() {
document.onkeydown = move;
}
window.onload = init;