| id | text | title |
|---|---|---|
doc_1100
|
public override bool OnStart()
{
// Set the maximum number of concurrent connections
ServicePointManager.DefaultConnectionLimit = 12;
DiagnosticMonitorConfiguration diagConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
var procTimeConfig = new PerformanceCounterConfiguration();
procTimeConfig.CounterSpecifier = @"\Processor(_Total)\% Processor Time";
procTimeConfig.SampleRate = TimeSpan.FromSeconds(10);
diagConfig.PerformanceCounters.DataSources.Add(procTimeConfig);
diagConfig.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagConfig);
return base.OnStart();
}
I have tried different log tables like WADLogsTable and WADDiagnosticInfrastructureLogsTable and both are created correctly.
A: This code works fine in my application. Since your ScheduledTransferPeriod is 1 minute, are you letting your role run for at least 1 minute? That's when the table will be created.
A: This problem is definitely caused by the language of the operating system. It is explained here:
Error in Azure Emulator when creating Performance Counters
My Windows is the Spanish version, so the name of the performance counters must be in Spanish:
procTimeConfig.CounterSpecifier = @"\Procesador(_Total)\% de tiempo de procesador";
Be careful, this only works locally, not on the cloud.
| |
doc_1101
|
And after a long time, it becomes unavailable with a red sign like this:
I'm sure I made a correct profile because someone else used it and it's available. We uploaded it with Organizer. Please help me solve this. Many thanks in advance!
A: This happens sometimes when there is an error on the iTunes server. I have faced it multiple times.
Just try uploading a new version of your build 2.9.1 after approximately 5 hours and you will see a processed build in that activity.
Hope this helps.
| |
doc_1102
|
I need to print an invoice on roll paper, but Crystal Reports uses a fixed height and size. I tried using the minimum height and letting the report span multiple pages, but now, after printing, there are spaces between each page.
I have tried setting the margins to zero to minimize the gap, but it doesn't affect it.
| |
doc_1103
|
var workItemHandler = new WorkItemHandler((action) =>
{
SwapChainPanel mPanel = (SwapChainPanel)inputElement;
CoreIndependentInputSource coreIndependentInputSource;
coreIndependentInputSource = mPanel.CreateCoreIndependentInputSource(CoreInputDeviceTypes.Touch | CoreInputDeviceTypes.Pen | CoreInputDeviceTypes.Mouse);
coreIndependentInputSource.PointerPressed += CoreWindow_PointerPressed;
coreIndependentInputSource.PointerMoved += CoreWindow_PointerMoved;
coreIndependentInputSource.PointerReleased += CoreWindow_PointerReleased;
coreIndependentInputSource.Dispatcher.ProcessEvents(CoreProcessEventsOption.ProcessUntilQuit);
});
But you must get keyboard input from the routed events provided by CoreWindow or UI elements. (You can also get mouse input from the window, but it suffers from the exact same issue.)
This results in keyboard input being blocked while the mouse is moving/clicking.
How can I get Keyboard input without using the routed events that are delayed by the mouse input?
| |
doc_1104
|
Could someone give me some advice on how to repair this error and make cwd work?
Some codes:
file = 'myhongze.jpg'
dirname = './项目成员资料/zgcao/test-python/'
site = '***.***.***.***'
user = ('zhigang',getpass('Input Pwd:'))
ftp = FTP(site)
ftp.login(*user)
ftp.cwd(dirname)# throw exception
Some tests:
u'./项目成员资料/zgcao/test-python/'.encode('utf-8')
Output:
b'./\xe9\xa1\xb9\xe7\x9b\xae\xe6\x88\x90\xe5\x91\x98\xe8\xb5\x84\xe6\x96\x99/zgcao/test-python/'
u'./项目成员资料/zgcao/test-python/'.encode('utf-8').decode('cp1252')
Output:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 10: character maps to <undefined>
u'./项目成员资料/zgcao/test-python/'.encode('utf-8').decode('latin-1')
Output:
'./项ç\x9b®æ\x88\x90å\x91\x98èµ\x84æ\x96\x99/zgcao/test-python/'
Using the result of decode('latin-1'), cwd still doesn't work.
Note that 项目成员资料 is shown as ÏîÄ¿×é³ÉԱ˽È˿ռä when I use retrlines('LIST').
A: No need to edit ftplib source code. Just set ftp.encoding property in your code:
ftp.encoding = "UTF-8"
ftp.cwd(dirname)
A similar question, about FTP output, rather than input:
List files with UTF-8 characters in the name in Python ftplib
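Putting it together, a minimal sketch of the fix in context (the host and credentials are placeholders from the question; ftp.encoding is a real ftplib attribute in Python 3):
from ftplib import FTP
from getpass import getpass

site = '***.***.***.***'
dirname = './项目成员资料/zgcao/test-python/'

ftp = FTP(site)
ftp.login('zhigang', getpass('Input Pwd:'))
ftp.encoding = 'utf-8'  # send path bytes as UTF-8 instead of latin-1
ftp.cwd(dirname)        # now works for non-ASCII directory names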
A: I solved this problem by editing ftplib.py. On my machine, it is under C:\Users\<user>\AppData\Local\Programs\Python\Python36\Lib.
You just need to replace encoding = "latin-1" with encoding = "utf-8"
| |
doc_1105
|
I'm using javacv version 1.2 and latest binaries.
error:
06-12 16:00:37.595 10778-11036/com.example.example E/dalvikvm: dlopen("/data/app-lib/com.example.example-1/libjniavutil.so") failed: dlopen failed: cannot locate symbol "av_version_info" referenced by "libjniavutil.so"...
I tried this https://github.com/bytedeco/javacv/issues/333 but still getting error
I'm using Android Studio 2.1.2 and here's my build.gradle:
apply plugin: 'com.android.application'
android {
compileSdkVersion 23
buildToolsVersion "23.0.1"
lintOptions { abortOnError false }
defaultConfig {
applicationId "com.example.example"
minSdkVersion 15
targetSdkVersion 18
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
packagingOptions {
pickFirst 'META-INF/maven/org.bytedeco.javacpp-presets/opencv/pom.properties'
pickFirst 'META-INF/maven/org.bytedeco.javacpp-presets/opencv/pom.xml'
pickFirst 'META-INF/maven/org.bytedeco.javacpp-presets/ffmpeg/pom.properties'
pickFirst 'META-INF/maven/org.bytedeco.javacpp-presets/ffmpeg/pom.xml'
pickFirst 'META-INF/maven/org.bytedeco.javacpp-presets/flandmark/pom.properties'
pickFirst 'META-INF/maven/org.bytedeco.javacpp-presets/flandmark/pom.xml'
}
}
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support:appcompat-v7:23.1.1'
compile files('libs/ffmpeg.jar')
compile files('libs/javacpp.jar')
compile files('libs/javacv.jar')
}
here is my libs folder:
A: After days of struggling here's what I figured out:
I had been testing on a Huawei Honor 6 running Android KitKat, so changing the target SDK to 19 in build.gradle solved the problem. But it sometimes gives the same error without changing anything, and sometimes works with no error. I realized that if I make a signed APK and install that, it works, so I think Android Studio in debug mode doesn't copy the files correctly.
build.gradle:
dependencies {
compile fileTree(include: ['*.jar'], dir: 'libs')
compile 'com.android.support:appcompat-v7:23.1.1'
compile files('libs/ffmpeg.jar')
compile files('libs/javacpp.jar')
compile files('libs/javacv.jar')
compile files('libs/opencv.jar')
}
| |
doc_1106
|
Basically, I have developed a simple AS400 Java application, which just calls some HTTP methods to send a transaction to an external party. This application is called directly from an RPGLE program. This has been working fine, but now they have decided to use HTTPS.
The client has sent me a .PFX file that contains some stuff in it from when they created the key using a utility called Digital Certificate Manager on the AS400. I have found enough information to gather that I have to have an SSL properties file in my root application directory in IFS, and the job is looking at that properties file and it seems to have the correct parameters.
What I am having a hard time with is how you can have the certificate trust the application. I'm not sure if you need the .PFX file to exist in your IFS root directory of the application, or if you have to create a trust store/key store, or if you need anything at all besides the SSL properties file? I have found answers of yes to all questions, depending on who you ask. Some answers lead you down the path of doing exhaustive things to get to a certain point, only to have nothing happen. This is more of a vent than anything. I have almost come to the conclusion that this stuff is impossible. :)
For what it's worth, below is the code I'm using for connecting to the service using just HTTP. I have been looking for a simple step by step process to explain what is required for HTTPS handshakes to occur successfully on the AS400. I don't have enough information to know whether I need to get more from the client, or if I have enough to make it work on my own.
HttpClient m_HttpClient = null;
PostMethod m_PostMthd = null;
SimpleHttpConnectionManager m_simpleHttpConMnger = new
SimpleHttpConnectionManager();
int timeoutInMilliseconds = 10000;
m_PostMthd = new PostMethod(urlEndpoint);
m_HttpClient = new HttpClient(m_simpleHttpConMnger);
HttpConnectionManagerParams lhttpConMnger =
m_simpleHttpConMnger.getParams();
lhttpConMnger.setConnectionTimeout(timeoutInMilliseconds);
lhttpConMnger.setSoTimeout(timeoutInMilliseconds);
m_PostMthd.setRequestHeader("SOAPAction",SOAPAction);
m_PostMthd.setRequestHeader("Content-Type","text/xml; charset=UTF-8");
m_PostMthd.setRequestEntity(new StringRequestEntity(inputMsg));
int l_status = m_HttpClient.executeMethod(m_PostMthd);
System.out.println("EXECUTION STATUS : " + l_status+"\n");
InputStream is = m_PostMthd.getResponseBodyAsStream();
BufferedReader rd = new BufferedReader(new InputStreamReader(is));
String line;
StringBuilder response = new StringBuilder();
while ((line = rd.readLine()) != null)
{
response.append(line);
response.append('\r');
}
rd.close();
return response.toString();
A: I think there are two separate issues here: how do I use a PFX file in Java, and what am I supposed to do with it? The first one is fairly straightforward; the second one may require a bit of extra information, but I'll answer it as best I can.
Handling PFX in Java
PFX is a container file format that can contain a bunch of different kinds of data that relate to SSL/TLS and cryptography. Java has its own container format called a JKS KeyStore. Broadly speaking, these are equivalent ways to hold the same sort of data, and you just need to export from one into the other. It's easier to do this beforehand, than to try and do it at run time. Citing this answer:
keytool -importkeystore \
-srckeystore mypfxfile.pfx -srcstoretype pkcs12 \
-destkeystore clientcert.jks -deststoretype JKS
Keytool is included with the JDK. It'll ask for some passwords along the way. You can use the keytool -list -keystore file.jks command to show the contents of the JKS file.
What am I supposed to do with this stuff?
This depends on the situation, and here your question is a bit vague. In all cases, the purpose is authentication: the client needs to verify that it's talking to the correct server, and optionally, the server needs to verify that it's talking to the correct client. This involves three bits of information and a lot of math.
*
*The private key and public key (along with some math, commonly the RSA algorithm) work to create a lock and key system. You keep the private key to yourself, and you share the public key with anyone who needs it. The way these are used in authentication is that the private key is used to mathematically make a claim that can be verified with the public key. The math is such that it's practically impossible to forge this claim; therefore, if the math checks out you can be sure that the person you're talking to holds the private key. This is digital signing in a nutshell.
*A certificate embeds a public key and adds a bunch of information to it (a key is basically just a giant number, so it's not much use on its own). The information added consists of the identity of the key holder, and the identity of the issuer who vouches for the validity of the certificate, all of this sealed with digital signatures.
Depending on the application, the PFX file you received may contain just a certificate, or it may contain a certificate and the corresponding private key (or both of these).
The output of keytool -list will show any number of trustedCertEntry records; these are certificates for which no private key is available. It may also include a privateKeyEntry, which is a record for which both a certificate and its private key are present. Take note of the alias of any private key entry.
Client Authenticates Server
When an HTTPS connection is created, the server presents a certificate to the client, along with proof that the server holds the private key that corresponds to the public key embedded in the certificate. The client will have to decide whether it trusts the server's identity, or aborts the connection. It does so by mathematically checking the certificate presented by the server against a list of pre-shared, trusted certificates called the trust store (cacerts is a common filesystem name).
Your OS'es and web browsers all ship with a default trust store; this mechanism is behind the padlock icon in your browser.
In your case, it's likely that the endpoint you're connecting to uses a certificate that's not present in the default trust store, so you have to provide your own trust store.
Assuming Java on AS/400 works the same way as Java on Windows and Linux, you can (citing this answer) pass a Java system property called javax.net.ssl.trustStore that contains the filesystem path to the JKS keystore that contains the trusted signer's certificate.
An alternative is to name the JKS file jssecacerts and put it in your jre/lib/security directory, but keep in mind that it then affects all Java processes that run using that JVM installation instead of just the one for which you're doing this work.
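As a concrete illustration, a hedged sketch of both ways to set the property (the path and password are placeholders):
// Option 1: on the command line when starting the JVM
// java -Djavax.net.ssl.trustStore=/home/myapp/clientcert.jks -Djavax.net.ssl.trustStorePassword=secret MyApp

// Option 2: programmatically, before the first HTTPS connection is opened
System.setProperty("javax.net.ssl.trustStore", "/home/myapp/clientcert.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "secret");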
Server Authenticates Client
It's not used much on the web, but HTTPS has the option of doing two-way certificate verification. That means that after the client has authenticated the server in the way described above, the server now authenticates the client. It's the exact same process, but in the other direction.
It's a bit more common in proprietary SOAP services, so it may be the case here. Your question didn't really make that clear to me.
It means your application will have to present the server with a certificate, and it will have to possess the private key that corresponds to the public key embedded in the certificate. Sadly, I've never had to do this in Java myself, so I don't know the exact commands you have to pass. What you need to look for is a way to tell your connection these things (a sketch of the standard properties follows the list):
*
*what JKS keystore file to use (+its password)
*which key alias from that JKS to use (+its password)
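For what it's worth, a hedged sketch using the standard JSSE system properties (the path and password are placeholders; note that stock JSSE chooses a key entry itself, so selecting a specific alias needs a custom KeyManager):
System.setProperty("javax.net.ssl.keyStore", "/home/myapp/clientcert.jks");
System.setProperty("javax.net.ssl.keyStorePassword", "secret");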
| |
doc_1107
|
I'm using this sensor in a 17:1 gear ratio application (1 steering wheel rotation in a car = 17~ 360 degree rotations as seen by sensor). The steering wheel can rotate several times lock to lock.
The angle sensor doesn't always read angles linearly (0,1,2,3...360,0) as the angle updates can skip numbers based on RPM (but thankfully, won't miss an entire rotation). So, I can't write code to increment/decrement based on absolute 0/360 crossing.
I'm struggling to write some code to handle the wraparound, as I need to "read" angles greater than "360 degrees".
Much research into wraparound values for Arduino refer to the time since boot overflow. This doesn't apply to my application.
The goal is a variable that contains the total sensor reading as a signed int.
A: First, 0° and 360° are the same angle. Isn't the range 0 to 359°?
Why not extend the zero crossing detection like this:
angle += sensor
if 240 <= lastsensor <= 359 and 0 <= sensor <= 120:
    angle += 360
if 240 <= sensor <= 359 and 0 <= lastsensor <= 120:
    angle -= 360
lastsensor = sensor
A: I had this problem once too, but with a 24-hour day.
If your sensor always reports jumps (far) less than 180° mechanical difference between each sample, then it will work.
// runnable version of the idea (C/Arduino style)
int diff = currentSensor - lastSensor;
if (abs(diff) < 180) {
    // no wraparound
    angle += diff;
} else {
    // wraparound; values around 180 are critical: was it a wraparound or not?
    // maps 181..359 -> -179..-1 and -181..-359 -> +179..+1
    diff = (diff > 0 ? -1 : 1) * (360 - abs(diff));
    angle += diff;
}
lastSensor = currentSensor;
| |
doc_1108
|
So my question is, which is better?
*
*Two separate projects in the same solution. To delineate or have demarcation.
*Have Receivers and Processors code in the first project keeping the solution down to just one project?
A: I recommend the second way. Azure Functions can scale out, so there is no need to create two identical function apps; Azure Functions will automatically scale out instances.
A: Thought I'd backtrack and supply the answer I discovered today to be the good working answer. Although I did upvote and mark Bowman's answer previously, I followed his suggestion and my deployment succeeded, but it broke everything. My previously working WebHooks were no longer functioning and I kept getting the error "Could not load file or assembly 'Microsoft.Extensions.Primitives, Version=5.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'".
To make a very long story short, source code version control was my savior. I went into my "WebHook Receivers" project and reverted back to a version from two days prior that had for the past weeks been working flawlessly for just the WebHook activities, redeployed to Azure, and my 3rd-party website that sends JSON data confirmed the integration was working again. A peek in the Azure message queue gave the second confirmation. Then I went back into my second project for the QueueTrigger Processors, deployed it to a new Function App container in Azure (to keep it apart from the WebHook Receivers), and all the confirmations showed it was working. The last visual confirmation was seeing that all the data sitting in the Azure message queue had successfully transferred to our SharePoint 365 lists.
The lesson to walk away with from my experience is, keep WebHooks and Message Queue data processing activities apart in their own Azure Function Apps. Especially, if the queue data processing project introduces SharePoint processing (CSOM) as part of its purpose. Apparently, the assemblies that pertain to WebHook processing in Azure are very sensitive and do not play well with the assemblies you need for other targets like SharePoint 365.
| |
doc_1109
|
A: Anything that can be packaged can be published; the PyPI is there to share your Python code with the community.
Just follow the PyPI tutorial and make sure your package follows the guidelines.
| |
doc_1110
|
System.Web.HttpContext.Current.User.Identity.Name
I feel like I could go two ways with this.
*
*Anytime the user table or any tables that reference the user table are accessed, I could add the user if it doesn't exist. I'm worried this might be very error prone if the user's existence isn't checked everywhere.
*Anytime the user visits any page on the site, check if that user exists in the DB and, if they don't, add the user. This may have a lot of overhead as it'll be checked on every page change.
I'd like to hear which of these is the better solution and also how to implement them.
A: I think a better way would be something similar to the option two.
Anytime a user visits a page, check a session variable to see if that user was checked against the DB. If the session variable is not there, check if that user exists in the DB, add the user to your table if necessary, then set the session variable.
That way you don't have to hit the DB on every request.
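For illustration, a hedged ASP.NET sketch of that pattern (the userRepository calls are hypothetical placeholders for your data access code):
string userName = System.Web.HttpContext.Current.User.Identity.Name;
if (Session["UserChecked"] == null)
{
    // hit the DB at most once per session
    if (!userRepository.Exists(userName))   // hypothetical helper
        userRepository.Add(userName);       // hypothetical helper
    Session["UserChecked"] = true;
}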
| |
doc_1111
|
A: Aliasing means that some high-frequency signal shows up as a low-frequency signal. You cannot distinguish this from any true low-frequency signal. Therefore, there is no way to determine if a signal was aliased during sampling or not, unless you have the original continuous-domain signal to compare to.
| |
doc_1112
|
I had a piece of code that was something like this where h is an object:
const getSomething = (h) => {
return new Promise(
(resolve, reject) => {
//using h (accessing element)
....
Now the function should accept an array of objects, but when I use the array inside the promise it is undefined:
const getSomething = (hs) => {
return new Promise(
(resolve, reject) => {
const a = hs[0] //hs undefined
I have tried also something like this:
const getSomething = (hs) => {
const _hs = [];
_.each(hs, (h) => { _hs.push(getH(h._id)); }); // getH returns the same object contained in the array
return new Promise(
(resolve, reject) => {
const a = _hs[0] //_hs undefined
I think this could be something memory related, but I have no idea why or what I'm doing wrong.
Any idea?
A: Sorry, my bad :) A few lines further down I was declaring a variable called hs, which shadowed the parameter.
| |
doc_1113
|
#include <cstdint> // for uint64_t/uint32_t
#include <iostream>
using namespace std;
struct S1
{
uint64_t a;
uint32_t b;
};
struct S2
{
alignas(uint64_t) char a[8];
uint32_t b;
};
int main()
{
cout << "sizeof(S1)=" << sizeof(S1) << endl;
cout << "sizeof(S2)=" << sizeof(S2) << endl;
}
The output is:
sizeof(S1)=12
sizeof(S2)=16
What is happening here? Why are S1 and S2 of different size? As I understand it, 64 bit integer values are aligned to 32 bit on 32-bit x86 machines. This does explain why the size of S1 is 12 bytes. But why does this not apply to S2?
A: The alignof keyword measures the alignment of a type as a complete object; i.e. when it is allocated as an individual object or array element. This is not necessarily the same as the alignment requirements of that type as a subobject; Are members of a POD-struct or standard layout type guaranteed to be aligned according to their alignment requirements?
The alignment of a 64-bit integer within a struct is mandated by the i386 ABI at 4 bytes; gcc is not at liberty to change this, as it would break binary compatibility with other object files and programs. However, it can align complete-object 64-bit integers to 8 bytes, as doing so does not affect the ABI and makes for more efficient memory access.
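For comparison, a small sketch: explicitly requesting 8-byte alignment for the member brings S1 up to the 16-byte layout seen for S2 (the struct name S3 is ours; the size assumes the 32-bit x86 ABI discussed above):
#include <cstdint>
#include <iostream>

struct S3
{
    alignas(8) uint64_t a; // force 8-byte alignment even as a subobject
    uint32_t b;
};

int main()
{
    // prints sizeof(S3)=16 on 32-bit x86: 8 (a) + 4 (b) + 4 padding
    std::cout << "sizeof(S3)=" << sizeof(S3) << std::endl;
}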
| |
doc_1114
|
package com.bct.internal.form.model;
public class Employee {
int id;
String name;
int salary;
String designation;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getDesignation() {
return designation;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public int getSalary() {
return salary;
}
public void setSalary(int salary) {
this.salary = salary;
}
public void setDesignation(String designation) {
this.designation = designation;
}
@Override
public String toString() {
return "Employee [ID=" + id + ", NAME=" + name
+ ", SALARY=" + salary + ", DESIGNATION=" + designation + "]";
}
}
And I am passing values for those columns in a controller.
public static final class EmployeeMapper implements RowMapper<Employee>
{
public Employee mapRow(ResultSet rs, int rowNum) throws SQLException {
Employee employee = new Employee();
employee.setId(id);
employee.setName(rs.getString("name"));
employee.setSalary(salary);
employee.setDesignation(rs.getString("designation"));
return employee;
}
}
But for id and salary I get "salary cannot be resolved to a variable".
What is the problem here? Please help me.
Sorry for my poor English.
A: It should be:
Employee employee = new Employee();
employee.setId(rs.getInt("id")); //or (1)
employee.setName(rs.getString("name"));//or (2)
employee.setSalary(rs.getInt("salary"));//or (3)
employee.setDesignation(rs.getString("designation"));//or (4)
return employee;
A: There are no id or salary variables in your mapRow(). You have to get them from the ResultSet object.
So, Instead of doing this:
employee.setId(id); //or (1)
employee.setSalary(salary); //or (3)
try this :
employee.setId(rs.getInt("id")); //or (1)
employee.setSalary(rs.getInt("salary")); //or (3)
A: The compiler cannot resolve the symbols id and salary within that scope. In simplest terms, the compiler does not know what you mean when you attempt to access a memory location at id and salary.
In your case, you have to use the appropriate ResultSet getter methods to retrieve column values from that specific row.
For more information, visit: https://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html
Sajal Gupta has the correct idea
| |
doc_1115
|
Edit: the problem is that I'm getting the same value at index 0 and index 1.
public T get(int index) {
int counter = 0;
Node<T> temp = head;
if(index < 0 || index > size() || head == null){
throw new IndexOutOfBoundsException();
} else {
if(index == size()){
temp = tail;
return temp.data;
}
if(index == 0){
return temp.data;
} else {
while (counter +1 != index){
temp = temp.next;
counter++;
}
return temp.data;
}
}
}
A: Imagine you're passed in index==1 - you'd want the second element, yes?
However, your while loop will never enter (since counter ==0 means counter+1 == index).
So change your while loop to "while (counter < index)".
You'll find you don't need the explicit "if(index==0)" then either :)
In fact, this loop then condenses into a straight for loop, so :
for (int counter=0; counter < index; counter++) {
temp = temp.next;
}
A: The condition in your while loop is wrong. You need to change it to:
while (counter != index){
temp = temp.next;
counter++;
}
| |
doc_1116
|
I am actually quite good and familiar with css, flexboxes and grids. However, I don't know how to solve the following problem:
On my shop page I have a grid list of products. But due to different lengths of headings and some other features like badges or sale flags, the products are not vertically aligned. For example, the product buttons end at different heights.
My goal is to have the image, title, price, button, etc. aligned vertically at the same height. I think this is not going to work with flexbox, because with flexbox I can just grow or shrink things and align the complete flex item either at the top or at the bottom.
But how can I align all elements (image, title, etc.) at the same height? Do I do this with a grid?
<link href="https://pagecdn.io/theme/wp-oceanwp/1.8.6/assets/css/woo/woocommerce.css" rel="stylesheet" crossorigin="anonymous" >
<link href="https://pagecdn.io/plugin/wp-elementor/3.0.15/assets/css/frontend.css" rel="stylesheet" crossorigin="anonymous" >
<link href="https://pagecdn.io/theme/wp-oceanwp/1.8.6/assets/css/style.css" rel="stylesheet" crossorigin="anonymous" >
<div class="woocommerce columns-4 ">
<ul class="products oceanwp-row clr grid">
<li class="entry has-media has-product-nav col span_1_of_4 owp-content-center owp-thumbs-layout-horizontal owp-btn-normal owp-tabs-layout-horizontal product type-product post-830 status-publish first instock product_cat-jacken product_cat-outdoor has-post-thumbnail post-password-protected taxable shipping-taxable purchasable product-type-simple">
<div class="product-inner clr">
<div class="woo-entry-image-swap woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image-swap -->
<ul class="woo-entry-inner clr">
<li class="image-wrap">
<div class="woo-entry-image-swap woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image-swap -->
</li>
<li class="category"><a href="https://google.de" rel="tag">Jacken</a>, <a href="https://google.de" rel="tag">Outdoor</a></li>
<li class="title">
<h2><a href="https://google.de">Geschützt: ISG Jacket Multicam (Kopie) This is a test This is a test</a></h2>
</li>
<li class="price-wrap">
<span class="price"><span class="woocommerce-Price-amount amount"><bdi>1.188,81 <span class="woocommerce-Price-currencySymbol">€</span></bdi></span></span>
</li>
<li class="wc-gzd">
<p class="wc-gzd-additional-info tax-info">inkl. 19 % MwSt.</p>
<p class="wc-gzd-additional-info shipping-costs-info">zzgl. <a href="https://google.de" target="_blank">Versandkosten</a></p>
</li>
<li class="rating"></li>
<li class="btn-wrap clr"><a href="?add-to-cart=830" data-quantity="1" class="button product_type_simple add_to_cart_button ajax_add_to_cart" data-product_id="830" data-product_sku="" aria-label="„ISG Jacket Multicam (Kopie)“ zu deinem Warenkorb hinzufügen" rel="nofollow">In den Warenkorb</a></li>
</ul>
</div>
<!-- .product-inner .clr -->
</li>
<li class="entry has-media has-product-nav col span_1_of_4 owp-content-center owp-thumbs-layout-horizontal owp-btn-normal owp-tabs-layout-horizontal product type-product post-585 status-publish instock product_cat-ausruestung product_cat-outdoor has-post-thumbnail taxable shipping-taxable purchasable product-type-variable">
<div class="product-inner clr">
<div class="woo-entry-image-swap woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image-swap -->
<ul class="woo-entry-inner clr">
<li class="image-wrap">
<div class="woo-entry-image-swap woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image-swap -->
</li>
<li class="category"><a href="https://google.de" rel="tag">Ausrüstung</a>, <a href="https://google.de" rel="tag">Outdoor</a></li>
<li class="title">
<h2><a href="https://google.de">Wilderness .x-navbar-inner superiörrrrr geil</a></h2>
</li>
<li class="price-wrap">
<span class="price"><span class="woocommerce-Price-amount amount"><bdi>357,00 <span class="woocommerce-Price-currencySymbol">€</span></bdi></span></span>
</li>
<li class="wc-gzd">
<p class="wc-gzd-additional-info tax-info">inkl. MwSt.</p>
<p class="wc-gzd-additional-info shipping-costs-info">zzgl. <a href="https://google.de" target="_blank">Versandkosten</a></p>
<p class="wc-gzd-additional-info delivery-time-info"></p>
<p class="wc-gzd-additional-info product-units-wrapper product-units"></p>
</li>
<li class="rating"></li>
<li class="btn-wrap clr"><a href="https://google.de" data-quantity="1" class="button product_type_variable add_to_cart_button" data-product_id="585" data-product_sku="" aria-label="Wähle Optionen für „Wilderness .x-navbar-inner superiörrrrr geil“" rel="nofollow">Ausführung wählen</a></li>
</ul>
</div>
<!-- .product-inner .clr -->
</li>
<li class="entry has-media has-product-nav col span_1_of_4 owp-content-center owp-thumbs-layout-horizontal owp-btn-normal owp-tabs-layout-horizontal has-no-thumbnails product type-product post-522 status-publish outofstock product_cat-ausruestung product_cat-outdoor has-post-thumbnail post-password-protected taxable shipping-taxable purchasable product-type-variable">
<div class="product-inner clr">
<div class="woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image -->
<ul class="woo-entry-inner clr">
<li class="image-wrap">
<div class="outofstock-badge">
Ausverkauft
</div>
<!-- .product-entry-out-of-stock-badge -->
<div class="woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image -->
</li>
<li class="category"><a href="https://google.de" rel="tag">Ausrüstung</a>, <a href="https://google.de" rel="tag">Outdoor</a></li>
<li class="title">
<h2><a href="https://google.de">Geschützt: Habicht 10×40 W</a></h2>
</li>
<li class="price-wrap"></li>
<li class="wc-gzd">
<p class="wc-gzd-additional-info tax-info">inkl. MwSt.</p>
<p class="wc-gzd-additional-info shipping-costs-info">zzgl. <a href="https://google.de" target="_blank">Versandkosten</a></p>
<p class="wc-gzd-additional-info delivery-time-info"></p>
<p class="wc-gzd-additional-info product-units-wrapper product-units"></p>
</li>
<li class="rating"></li>
<li class="btn-wrap clr"><a href="https://google.de" data-quantity="1" class="button product_type_variable" data-product_id="522" data-product_sku="" aria-label="Wähle Optionen für „Habicht 10x40 W“" rel="nofollow">Ausführung wählen</a></li>
</ul>
</div>
<!-- .product-inner .clr -->
</li>
<li class="entry has-media has-product-nav col span_1_of_4 owp-content-center owp-thumbs-layout-horizontal owp-btn-normal owp-tabs-layout-horizontal product type-product post-483 status-publish last instock product_cat-hosen product_cat-outdoor has-post-thumbnail post-password-protected taxable shipping-taxable purchasable product-type-variable has-default-attributes">
<div class="product-inner clr">
<div class="woo-entry-image-swap woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image-swap -->
<ul class="woo-entry-inner clr">
<li class="image-wrap">
<div class="woo-entry-image-swap woo-entry-image clr">
<a href="https://google.de" class="woocommerce-LoopProduct-link no-lightbox"><img src="https://picsum.photos/300/300" class="woo-entry-image-main" width="300" height="300"></a>
</div>
<!-- .woo-entry-image-swap -->
</li>
<li class="category"><a href="https://google.de" rel="tag">Hosen</a>, <a href="https://google.de" rel="tag">Outdoor</a></li>
<li class="title">
<h2><a href="https://google.de">Geschützt: MIG 4.0 Trousers</a></h2>
</li>
<li class="price-wrap">
<span class="price"><span class="woocommerce-Price-amount amount"><bdi>238,00 <span class="woocommerce-Price-currencySymbol">€</span></bdi></span></span>
</li>
<li class="wc-gzd">
<p class="wc-gzd-additional-info tax-info">inkl. MwSt.</p>
<p class="wc-gzd-additional-info shipping-costs-info">zzgl. <a href="https://google.de" target="_blank">Versandkosten</a></p>
<p class="wc-gzd-additional-info delivery-time-info"></p>
<p class="wc-gzd-additional-info product-units-wrapper product-units"></p>
</li>
<li class="rating"></li>
<li class="btn-wrap clr"><a href="https://google.de" data-quantity="1" class="button product_type_variable add_to_cart_button" data-product_id="483" data-product_sku="" aria-label="Wähle Optionen für „MIG 4.0 Trousers“" rel="nofollow">Ausführung wählen</a></li>
</ul>
</div>
<!-- .product-inner .clr -->
</li>
</ul>
</div>
A: You can do that with flex. The problem here is that you are not fixing the height of the items. Once you fix the height of your images, add this line to the flex parent class:
justify-content: space-evenly;
But first you have to fix the heights of the flex items, especially the images. You may also have to use margin-bottom.
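For illustration, a hedged sketch using the class names from the markup above (the exact values are assumptions; the idea is a fixed image area plus an auto margin to pin the button to the bottom of each card):
.products .product-inner {
  display: flex;
  flex-direction: column;
  height: 100%;
}
.products .woo-entry-image img {
  height: 300px;      /* fixed image height so the rows line up */
  object-fit: cover;
}
.products .btn-wrap {
  margin-top: auto;   /* pushes the button to the bottom of each card */
}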
| |
doc_1117
|
Dialog.html
<form>
<button id="foo">Foo</button>
</form>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script>
$( document ).ready(function(){
$( "#foo" ).click(google.script.run.myFunction());
});
</script>
Code.gs
function myFunction(){
Logger.log("I ran!");
}
When checking the logger, the function is called the moment the dialog is opened (which I don't want), but it is not called when the button is clicked.
What am I missing?
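For reference, a hedged sketch of the usual fix: pass a function to click instead of the result of calling one (as written, google.script.run.myFunction() runs immediately when the handler is registered, which matches the behavior described above):
$( document ).ready(function(){
  $( "#foo" ).click(function(e){
    e.preventDefault();            // keep the form button from submitting/reloading
    google.script.run.myFunction();
  });
});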
| |
doc_1118
|
String[] ids = {"1","2","3"};
JSONObject topoInfo = readTaskLog(); //returns an object like {Name:"Stack"}
if (topoInfo != null) {
for (String id : ids) {
JSONObject tempobj=topoInfo;
tempobj.put("id", id);
topologyInfo.put(tempobj);
}
}
I need to get 3 JSONObjects with name "Stack" and id 1, 2 and 3, but in my JSONArray all 3 objects have "id" 3.
My final result should be like
[{
"Name": "Stack",
"id": "1"
},
{
"Name": "Stack",
"id": "2"
},
{
"Name": "Stack",
"id": "3"
}]
But I'm getting as
[{
"Name": "Stack",
"id": "3"
},
{
"Name": "Stack",
"id": "3"
},
{
"Name": "Stack",
"id": "3"
}]
A: The issue here is that you are reusing the same JSONObject in each iteration of the for loop so you are overriding the "id" value.
Try cloning the object instead...
JSONArray topologyInfo = new JSONArray();
String[] ids = {"1","2","3"};
JSONObject topoInfo = readTaskLog(); //returns an object like {Name:"Stack"}
if (topoInfo != null) {
for (String id : ids) {
JSONObject tempobj=new JSONObject(topoInfo.toString());
tempobj.put("id", id);
topologyInfo.put(tempobj);
}
}
A: You're overriding the same property 'id' on every iteration. JSONObject#put behaves like the Map interface here.
That's because with:
JSONObject tempobj = topoInfo;
you're not dealing with a new JSONObject; you're simply copying its reference.
| |
doc_1119
|
Suppose A(.) is a subroutine that takes as input a number in binary, and takes linear time (that is, O(n), where n is the length (in bits) of the number).
Consider the following piece of code, which starts with an n-bit number x.
while x>1:
call A(x)
x=x-1
Assume that the subtraction takes O(n) time on an n-bit number.
(a) How many times does the inner loop iterate (as a function of n)? Leave your answer in big-O form.
(b) What is the overall running time (as a function of n), in big-O form?
My guess is that (a) is O(n^2) and (b) is O(n^3). Is this correct? The way I'm thinking about it is that the loop has to compute two steps each time it cycles through and it will cycle through x time each time subtracting 1 from n bits until x reaches 0. And for part b since A(.) takes time O(n) we multiply that with the time it takes to execute the loop and we then have the over all running time. Is my analysis correct?
A: Something that might help here is to write x = 2^n, since if x has n bits its value is O(2^n). Therefore, the loop will run O(2^n) times.
Each iteration of the loop does O(n) work, giving an upper bound on the work of O(n · 2^n). This bound ends up being tight. Notice that for the first x/2 iterations of the loop, the value of x will still need n bits. Therefore, as a lower bound on the work done, we get x/2 = 2^(n-1) iterations doing n work each, giving a total of Ω(n · 2^n) work. Thus the work done is Θ(n · 2^n).
Hope this helps!
| |
doc_1120
|
Unrecognized field "GaugeDeviceId" (Class GaugeDevice), not marked as ignorable
The problem seems to be that the service returns the property names with a leading uppercase letter, while the class properties begin with a lowercase letter.
I tried:
*changing the property names to start with an uppercase letter - same error
*adding @JsonProperty("SerialNo") to the property instantiation - same error
*adding @JsonProperty("SerialNo") to the corresponding getters - same error
*adding @JsonProperty("SerialNo") to the corresponding setters - same error
*adding @JsonProperty("SerialNo") to all of them (just for fun) - same error
(note: @JsonProperty("SerialNo") is just an example)
The strange thing is, that annotation: @JsonIgnoreProperties(ignoreUnknown = true) should suppress exactly that error, but it is still triggering...
here the Class: (note: not complete)
@JsonIgnoreProperties(ignoreUnknown = true)
public class GaugeDevice
{
private int gaugeDeviceId;
private Date utcInstallation;
private String manufacturer;
private float valueOffset;
private String serialNo;
private String comment;
private int digitCount;
private int decimalPlaces;
@JsonProperty("SerialNo")
public String getSerialNo() {
return serialNo;
}
public void setSerialNo(String serialNo) {
this.serialNo = serialNo;
}
@JsonProperty("Comment")
public String getComment() {
return comment;
}
public void setComment(String comment) {
this.comment = comment;
}
What is the way out here? Please help.
edit:
Here is the Client Class: (just a simple test client)
import ccc.android.meterdata.*;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Invocation;
import org.glassfish.jersey.jackson.JacksonFeature;
public class RestClient
{
private String connectionUrl;
private javax.ws.rs.client.Client client;
public RestClient(String baseUrl) {
client = ClientBuilder.newClient();;
connectionUrl = baseUrl;
client.register(JacksonFeature.class);
}
public GaugeDevice GetGaugeDevice(int id){
String uri = connectionUrl + "/GetGaugeDevice/" + id;
Invocation.Builder bldr = client.target(uri).request("application/json");
return bldr.get(GaugeDevice.class);
}
}
I hope the error has its root here?
A: Another thing to check out is PropertyNamingStrategy, which would allow Jackson to use "Pascal naming" and match JSON properties with POJO properties. See for example here: http://www.javacodegeeks.com/2013/04/how-to-use-propertynamingstrategy-in-jackson.html
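A hedged sketch of what that looks like with a Jackson 2.x ObjectMapper (UPPER_CAMEL_CASE is the Jackson 2.7+ name for this strategy; the json variable is a placeholder):
ObjectMapper mapper = new ObjectMapper();
// match JSON "SerialNo" to POJO field serialNo without per-property annotations
mapper.setPropertyNamingStrategy(PropertyNamingStrategy.UPPER_CAMEL_CASE);
GaugeDevice device = mapper.readValue(json, GaugeDevice.class);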
A: Given the following is your error:
Unrecognized field "GaugeDeviceId" (Class GaugeDevice), not marked as ignorable
I'm pretty sure you need to do the same thing for the GaugeDeviceId property as you've done for the SerialNo property.
@JsonProperty("SerialNo")
public String getSerialNo() {
return this.serialNo;
}
@JsonProperty("GaugeDeviceId")
public int getGaugeDeviceId() {
return this.gaugeDeviceId;
}
Here I have a quick test class that is not throwing errors.
import org.codehaus.jackson.map.ObjectMapper;
public class JsonDeserialization {
public static void main(final String[] args) {
final String json = "{ \"SerialNo\":\"123\", \"GaugeDeviceId\":\"456\"}";
final ObjectMapper mapper = new ObjectMapper();
try {
final GaugeDevice readValue = mapper.readValue(json, GaugeDevice.class);
System.out.println(readValue.getSerialNo());
System.out.println(readValue.getGaugeDeviceId());
} catch (final Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
And it outputs:
123
456
EDIT: Version information
Not that it matters, as I believe the above is all using some pretty standard stuff from jackson, I'm using version 1.9.13 of the core-asl and the mapper-asl libraries
EDIT: Client Provided
I wonder if this is related to this issue? I believe the resolution is the configuration of dependencies that you're using.
I'm not sure, but I feel like I'm close with the following dependency setup (note I'm using maven)
<dependency>
<groupId>javax.ws.rs</groupId>
<artifactId>javax.ws.rs-api</artifactId>
<version>2.0</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-jackson</artifactId>
<version>2.0</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.core</groupId>
<artifactId>jersey-client</artifactId>
<version>2.0</version>
</dependency>
<dependency>
<groupId>org.glassfish.jersey.media</groupId>
<artifactId>jersey-media-json-processing</artifactId>
<version>2.0</version>
</dependency>
These links have provided the configuration information: Link 1, Link 2
A: You can ignore the unknown properties while deserializing by using an annotation at class level.
For example :
@JsonIgnoreProperties(ignoreUnknown=true)
class Foo{
...
}
The above snippet will ignore any unknown properties.
( Annotation import : org.codehaus.jackson.annotate.JsonIgnoreProperties )
A: Make all your private fields public; Jackson works on public member variables.
public int gaugeDeviceId;
public Date utcInstallation;
....
or add public getters to the private fields.
A: I had the same issue and I resolved it by changing the annotation import from:
com.fasterxml.jackson.annotation.JsonIgnoreProperties
to
org.codehaus.jackson.annotate.JsonIgnoreProperties
Didn't have to define any NamingStrategy or ObjectMapper.
A: I had the same issue and solved it by changing the annotation import from
com.fasterxml.jackson.annotation.JsonProperty
to
org.codehaus.jackson.annotate.JsonProperty
A: The solution that worked for me is the following
*
*Add the import
import org.codehaus.jackson.map.DeserializationConfig;
*Configure the ObjectMapper
objectMapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES,false);
*The complete solution
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);
String jsonInString = objectMapper.writeValueAsString(eje);
Eje newElement = objectMapper.readValue(jsonInString, Eje.class);
this.eje = newElement;
| |
doc_1121
|
>Loading package http-enumerator-0.7.1.1 ... linking ... done.
>Loading package double-conversion-0.2.0.1 ... can't load .so/.DLL for: stdc++ (libstdc++.so: cannot open shared object file: No such file or directory)
Further investigation reveals I have multiple stdc++ libraries installed
>locate libstdc++.so
>/usr/lib/libstdc++.so.6
>/usr/lib/libstdc++.so.6.0.14
>/usr/lib/gcc/x86_64-linux-gnu/4.4/libstdc++.so
>/usr/lib32/libstdc++.so.6
>/usr/lib32/libstdc++.so.6.0.14
I thought maybe I could make a symlink to what it wants, but I have no idea which one. I'm using this OS
2.6.35-22-server #33-Ubuntu SMP Sun Sep 19 20:48:58 UTC 2010 x86_64 GNU/Linux
How can I tell exactly what it wants?
A: /usr/lib/libstdc++.so.6 should be a symlink to /usr/lib/libstdc++.so.6.0.14. This is probably the version you need.
/usr/lib32/libstdc++.so.6 should be a symlink to /usr/lib32/libstdc++.so.6.0.14, they are for 32-bit programs, you don't normally need them.
/usr/lib/gcc/x86_64-linux-gnu/4.4/libstdc++.so is the problem.
double-conversion-0.2.0.1 probably got linked against it, and ghci cannot find it. Normally everything should be linked against libstdc++.so.6, not libstdc++.so without a version suffix.
I think one should not have a version-less libstdc++.so at all anywhere in the system. (There's none on my gentoo box for example.) It is dangerous, as different major versions of libstdc++ are usually binary incompatible. Try removing the library you have under /usr/lib/gcc/, then reinstall gcc and see if it gets installed again.
If it does get installed, then a symlink named /usr/lib/libstdc++.so pointing to /usr/lib/libstdc++.so.6 should solve this problem. I'm not sure this would be the right way to solve it in the long run though.
These are things I have found through experiments with my own Linux box. I am not an expert in Ubuntu; it may do things differently from other Linuxes.
A: To work around the issue on 64-bit Fedora 16:
sudo ln -si /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so
A: The ones in /usr/lib symlink to one file:
$ ls -l libstdc++*
lrwxrwxrwx 1 root root 19 2011-09-24 22:14 libstdc++.so.6 -> libstdc++.so.6.0.13
-rw-r--r-- 1 root root 1044112 2010-03-26 20:16 libstdc++.so.6.0.13
Just run:
sudo ln -si /usr/lib/libstdc++.so.6 /usr/lib/libstdc++.so
and it should work.
| |
doc_1122
|
A: Select Microsoft SQL Server instead of Microsoft Access Database File.
| |
doc_1123
|
I have also attached the snippet of the algorithm here too.
I have written following python snippet for the algorithm. Here it is:
def knapsack(v,w,n,W):
V = [[None for x in range(W+1)] for x in range(len(v)+1)]
for wy in range(W+1):
V[0][wy] = 0
for i in range(1,n+1):
for wx in range(W+1):
# print i,wx
if w[i] <= wx:
V[i][wx] = max(V[i-1][wx], v[i]+V[i-1][wx-w[i]])
else:
V[i][wx] = V[i-1][wx]
return V[n][W]
print knapsack(v = [10,40,30,50],w=[5,4,6,3],n=4,W=10)
I am supposed to get output 90 at position [4,9] . What am I doing wrong here?
A: I am not sure, but I think the error is:
*
*Elements of v and w use 0-based indexing (0 to n-1)
*You are iterating in the range 1 to n
*So w[n] or v[n] will throw an IndexError
Updated CODE:
def knapsack(v,w,n,W):
V = [[None for x in range(W+1)] for x in range(len(v)+1)]
for wy in range(W+1):
V[0][wy] = 0
for i in range(1,n+1):
for wx in range(W+1):
# print i,wx
if w[i-1] <= wx:
V[i][wx] = max(V[i-1][wx], v[i-1]+V[i-1][wx-w[i-1]])
else:
V[i][wx] = V[i-1][wx]
return V[n][W]
print knapsack(v = [10,40,30,50],w=[5,4,6,3],n=4,W=10)
The output is now 90.
Check the results at Ideone
| |
doc_1124
|
By pattern matching, I mean to find some "shapes" in my matrix such as a line, a T or a U, e.g.:
0 1 0
0 1 0
1 1 1
that matrix contains a T, but it also contains 2 lines! Now if the matrix was a 4x4, the shapes don't increase but they can be positioned at more place obviously, e.g.:
0 0 0 0
1 1 1 0
0 0 1 0
1 1 1 0
That matrix would contain a U (no lines though, this is the exception, lines have the size of the matrix).
Naively, since the matrix is pretty small, I would have tried all possibilities for each shape I'm willing to support, but it's not very fun. I cannot figure out any algorithm for this though, and not being able to label this operation properly doesn't help ;) Has anyone got any idea how to do this "efficiently"? (Efficiently may be a bit of an overstatement considering the size of the matrix, but you know what I mean.)
A: There's some ambiguity in your question. For instance, does:
1 1 1
1 1 1
1 1 1
contain 6 lines, a T, a U, and a bunch of other letters of the alphabet? Or are all letters separated? Your initial question implied that letters could be discovered in overlapping fashion, because the T template contains two lines. Thus, a matrix where all elements were 'on' would contain every possible letter/line in every possible position.
Also, I'm assuming you're only concerned about 90 degree rotations and you wouldn't want to try to find 45-degree offset letters when the matrix sizes get large enough to support it.
In terms of ease-of-implementation, the brute-force approach you're talking about (test every position for all four letter rotations) really wins out, I'd say.
Alternatively, you could get pretty fancy by (warning: vague algorithm descriptions ahead!):
1) Walking along the matrix elements until you found a 1. Then essentially flood-fill from that 1 on a stack and keep track of the direction changes. Then have some sort of rotation-invariant lookup that mapped a set of 'on' pixels to found letters.
2) Use some sort of integral-image or box-filter description to take sums of subsections of the matrix. You could then do lookups on the subsections and map the subsection sums to letter/line values.
3) Since the comments have determined that you're only really looking for 4 shapes, a new approach may be worthwhile. You're only examining 4 shapes (line, cross, T, and U) if I'm not mistaken. Each of them can be in 4 orientations. One quick tip is that you can just run the algorithm 4 times but rotate the underlying matrix by 90 degrees. Then you don't have to adjust for rotation in your algorithm. Also note that the cross only needs to be found in one orientation because it looks identical in all 4 orientations and the line is identical in two orientations. Anyway, you could save yourself some work by searching for the 'hardest' things to match first. Let's say I'm looking for an upright 'U' here:
1 0 1
1 0 1
1 1 1
I start in the top left. Rather than checking to make sure that any pixels are 'off' (or 0), I go to the next place I expect to find an 'on' value (or a 1). Let's say that's the pixel below the top left. I check the middle-left pixel, and indeed it's on. Then I check below that. If you develop a simple rule set for each letter, you can quickly abandon the search for it if you don't have the required values turned 'on'. If you then run the same algorithm 4 times and only search for upright values, I'm not sure you'd be able to do much better than this!
The approaches I've mentioned are just ideas. They may be more trouble than they're worth in terms of efficiency gains, though. And who knows, they may not work at all!
Good luck!
A: I thought I could contribute what I ended up doing, so here it is, following aardvarkk's idea (Objective-C code). I wasn't very pedantic with the array size checks because my matrix is necessarily a square matrix. Also, sorry if the code is ugly :D
I made a little class structure for the shapes I want to reconize, they have a list of "directions" which are essentially values of an enum.
-(BOOL)findShape:(NSInteger)size directions:(NSArray*)directions{
NSMutableArray* current = [mgs tokens];
for (int rot = 0; rot < 4; rot++) {
for (int i = 0; i < size; i++) {
for(int j = 0; j < size; j++){
NSInteger value = [[[current objectAtIndex:i] objectAtIndex:j] integerValue];
if(value){
BOOL carryOn = [self iterateThroughDirections:directions i:i j:j tokens:current size:size];
if(carryOn){
return YES;
}
}
}
}
current = [self rotate:current];
}
return NO;
}
-(BOOL) iterateThroughDirections:(NSArray*)directions i:(NSInteger)i j:(NSInteger)j tokens:(NSMutableArray*)tokens size:(NSInteger)size{
BOOL carryOn = YES;
for(int k = 0; k < [directions count] && carryOn; k++){
NSNumber* dir = [directions objectAtIndex:k];
NSInteger d = [dir integerValue];
//move in the direction
switch (d) {
case UP:
if(i > 0){
i--;
}else{
carryOn = NO;
}
break;
case DOWN:
if(i < size-1){
i++;
}else{
carryOn = NO;
}
break;
case LEFT:
if(j > 0){
j--;
}else{
carryOn = NO;
}
break;
case RIGHT:
if(j < size-1){
j++;
}else{
carryOn = NO;
}
break;
default:
NSAssert(NO, @"invalid direction");
break;
}
NSInteger v = [[[tokens objectAtIndex:i] objectAtIndex:j] integerValue];
//now that we moved, check if the token is active, if it's not we're done
if(!v){
carryOn = NO;
break;
}
}
return carryOn;
}
-(NSMutableArray*)rotate:(NSMutableArray*)matrix{
NSInteger w = [matrix count];
NSInteger h = [[matrix objectAtIndex:0] count];
NSMutableArray* rotated = [[NSMutableArray arrayWithCapacity:h] retain];
for (int i = 0; i < h; i++) {
[rotated addObject:[NSMutableArray arrayWithCapacity:w]];
}
for(int i = 0; i < h; ++i){
for(int j = 0; j < w; ++j){
[[rotated objectAtIndex:i] addObject:[[matrix objectAtIndex:j] objectAtIndex:h-i-1]];
}
}
return rotated;
}
This seems to be working well for me! Thanks again for the help!
| |
doc_1125
|
Does anybody know if this is possible and maybe how to do that?
Thanks so far
A: It's not possible right now, but we're working on adding this to an upcoming version of Chrome.
| |
doc_1126
|
If I restart the service, it starts with 4% of RAM usage, then gradually increases.
| |
doc_1127
|
I've already made data for that username "mike_one", and it's working perfectly.
This is my Xcode result:
{
IC2ol5bimucKu2d89u2YBz0Bqot2 = {
name = mike;
photo = "https://firebasestorage.googleapis.com/v0/b/instagram-ios-1ed09.appspot.com/o/photo%2Fa2.jpg?alt=media&token=9b6c58f1-eedc-4190-bc63-3f3325c84d77";
username = "mike_one";
};}
And this is my database
So what I'm asking right now is: how do I turn the database result into a model, so I can use it as a list?
Please help me.
I appreciate your answer!
A: Strictly speaking, it would not be possible to convert textual firebase data to a List object. A SwiftUI List is defined as
A container that presents rows of data arranged in a single column
in other words it's a UI element, not a data element.
That being said a List is often backed by an Array or other data storage element. So you'll read Firebase data and store that in an array. Then that array backs your UI List object.
In this case, I would suggest creating a UserClass to hold the Firebase data
class UserClass: Identifiable {
var id = UUID() // lets SwiftUI's List iterate the array directly
var name = ""
var photoUrl = ""
var username = ""
}
and then array to store your users in
var userArray = [UserClass]()
then as data is read from Firebase, create the user objects and populate the array. Your Firebase code wasn't included so in brief
firebase.observe.... { snapshot in
let user = UserClass()
// ...populate user properties from the snapshot
self.userArray.append(user)
}
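A slightly fuller hedged sketch of that observer (the "users" path and the field names are assumptions based on the data shown above):
let ref = Database.database().reference().child("users")
ref.observe(.childAdded) { snapshot in
    let user = UserClass()
    if let dict = snapshot.value as? [String: Any] {
        user.name = dict["name"] as? String ?? ""
        user.photoUrl = dict["photo"] as? String ?? ""
        user.username = dict["username"] as? String ?? ""
    }
    self.userArray.append(user)
}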
Then in your view, access the array elements to show them in the List object
struct ContentView: View {
var body: some View {
List(userArray) { user in
//do something with each user object
// like Text(user.username)
}
}
}
| |
doc_1128
|
My USSD run code:
procedure TForm1.Button1Click(Sender: TObject);
var
Intent : JIntent ;
strNo : String;
begin
strNo := 'tel:*101%23';
Intent := TJIntent.Create ;
Intent.setAction ( TJIntent.JavaClass.ACTION_CALL ) ;
Intent.setData ( StrToJURI ( strNo ) ) ;
SharedActivity.startActivity ( Intent ) ;
end;
It works, but I need to get the USSD result into a string.
For Delphi 10.2 Tokyo - Firemonkey Android Platform.
A: There's an answer here that is probably relevant to your problem:
How to Get Response from USSD code from Android?
However the solution on Github (see the link in the answer) looks quite involved, and some parts may need to be done in Java.
| |
doc_1129
|
This is my BaseEntity
@Getter
@Setter
@Accessors(chain = true)
@MappedSuperclass
@NoArgsConstructor
@AllArgsConstructor
@EntityListeners(AuditingEntityListener.class)
public class BaseEntity {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@Size(max = 55, message = "name length more then 55")
private String name;
@Size(max = 255, message = "remark length more than 255")
private String remark;
}
And my entity
@Data
@NoArgsConstructor
@Table(name = "sys_user")
@Entity(name = "sys_user")
@Accessors(chain = true)
@ToString(callSuper = true)
@EqualsAndHashCode(callSuper = true)
public class SysUser extends BaseEntity implements Serializable {
@NonNull
private String username;
@NonNull
private String password;
}
In my controller
@Controller
@GraphQLApi
@RequiredArgsConstructor
public class SysUserController implements BaseController {
private final SysUserRepository sysUserRepository;
@GraphQLQuery
public List<SysUser> sysUsers() {
return sysUserRepository.findAll();
}
}
My GraphQL Config
@Configuration
@RequiredArgsConstructor
public class GraphQLConfig {
private final @NotNull List<BaseController> controllerLists;
@Bean
public GraphQLSchema graphqlSchema() {
GraphQLSchemaGenerator generator = new GraphQLSchemaGenerator();
generator.withOperationsFromSingletons(controllerLists.toArray());
return generator.generate();
}
}
Now, I try to get
{
sysUsers {
username
}
}
The result is right
{
"data": {
"sysUsers": [
{
"username": "Hello"
}
]
}
}
But I try to get the parent class field:
{
sysUsers {
name
}
}
I will get a error
{
"errors": [
{
"message": "Validation error of type FieldUndefined: Field 'name' in type 'SysUser' is undefined @ 'sysUsers/name'",
"locations": [
{
"line": 3,
"column": 5
}
]
}
]
}
I use io.leangen.graphql:graphql-spqr-spring-boot-starter:0.0.4
How can I resolve this?
Thanks!
A: Inherited fields will only be exposed if they're within the configured packages. This way, you don't accidentally expose framework fields, JDK fields (like hashCode) etc. If no base packages are configured, SPQR will stay within the package the directly exposed class is.
To configure the base packages, add something like:
graphql.spqr.base-packages=your.root.package,your.other.root.package
to your application.properties file.
Note: These rules will get relaxed in the next release of SPQR, so that all non-JDK fields are exposed by default, as the current behavior seems to confuse too many people.
A: I'd recommend you to add auto-generation of classes based on the types defined in your graphql schema.
It will provide you more clarity on what is exposed to the user and avoid such errors in future.
Here are the plugins:
*
*Gradle plugin: graphql-java-codegen-gradle-plugin
*Maven plugin: graphql-java-codegen-maven-plugin
| |
doc_1130
|
The class should also implement IEnumerable and have a struct which is an IEnumerator. I'm only focusing on the Add method here.
class MyList<T> : IEnumerable
{
private static readonly T[] arrayStarter = new T[1];
private int capacity = 1;
private int currentItems = 1;
public T[] TList { get; set; }
public MyList()
{
TList = arrayStarter;
TList[0] = default(T);
}
public T[] Add(T[] tArray, T item)
{
T[] temp = new T[++capacity];
for (int i = 0; i < tArray.Length; i++)
temp[i] = tArray[i];
temp[tArray.Length] = item;
currentItems++;
return temp;
}
}
When I'm creating an instance of my list and I want to add an item using the method it looks like this:
MyList<int> m = new MyList<int>();
m.TList = m.Add(m.TList, 5);
m.TList = m.Add(m.TList, 7);
m.TList = m.Add(m.TList, 13);
m.TList = m.Add(m.TList, 15);
I'm pretty sure there's a better way to make a custom list; I hope someone out there has good insight on the matter.
A: This implementation demonstrates that you are missing the point of encapsulation in general, and of using private members in particular. Since you made TList array a member of your list class, you have an ability to hide it from your users completely. Your users should be able to write
m.Add(5);
m.Add(7);
instead of
m.TList = m.Add(m.TList, 5);
m.TList = m.Add(m.TList, 7);
and not worry about m.TList's presence at all.
Fortunately, you can do it very easily: make TList a private field, rename it to something that starts with a lower-case letter to follow C#'s naming guidelines, remove it from the list of Add's parameters, and change Add to not return anything. This would match IList<T>'s signature, which you should consider implementing.
Once you've made this work, consider "divorcing" capacity from currentItems, letting the first one grow faster, and the other one catching up to it as more items are added. This would reduce the number of re-allocations.
As a final point, consider switching to using Array.Resize<T> to avoid manual copying of data.
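A minimal sketch of what that encapsulated version could look like (the doubling growth factor and member names are illustrative):
using System; // for Array.Resize
class MyList<T>
{
private T[] items = new T[1]; // backing store, hidden from callers
private int count; // number of items actually stored
public int Count => count;
public T this[int index] => items[index];
public void Add(T item)
{
if (count == items.Length)
Array.Resize(ref items, items.Length * 2); // capacity doubles, count catches up as items are added
items[count++] = item;
}
}
Callers then simply write m.Add(5); m.Add(7); with no array juggling.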
A: If you take a look at List<>'s Add() method implementation in ReferenceSource or with ILSpy, you'll see this:
private int _size;
private T[] _items;
public void Add(T item)
{
if (this._size == this._items.Length)
{
this.EnsureCapacity(this._size + 1);
}
this._items[this._size++] = item;
this._version++;
}
private void EnsureCapacity(int min)
{
if (this._items.Length < min)
{
int num = (this._items.Length != 0) ? (this._items.Length * 2) : 4;
if (num > 2146435071)
{
num = 2146435071;
}
if (num < min)
{
num = min;
}
this.Capacity = num;
}
}
| |
doc_1131
|
SELECT ongoing_portfolio.*,
Portfolio.Activation
FROM Ongoing_Portfolio
INNER JOIN Portfolio ON Ongoing_Portfolio.idPortfolio = Portfolio.idPortfolio
WHERE ongoing_portfolio.`idPortfolio`= 2 ORDER BY `Updated_Date` DESC LIMIT 4
SELECT SUM(`Transaction_Amount`) AS `Total`
FROM `transactions`
WHERE `idPortfolio`= 2 AND `Transaction_TimeStamp` <= "2016-12-17"
Actually, what I am trying to do here is this.
*
*Get the financial details of all the portfolios (first query).
*Get the total transaction for the date mentioned as Updated_Date of the particular portfolio referred by the first query. (second query)
Now, I really need to do this in one query, so I tried below.
SELECT ongoing_portfolio.*,
Portfolio.Activation,
SUM(Transactions.`Transaction_Amount`) AS `Total` WHERE `Transaction_TimeStamp` <= ongoing_portfolio.`Updated_Date`
FROM Ongoing_Portfolio
INNER JOIN Portfolio ON Ongoing_Portfolio.idPortfolio = Portfolio.idPortfolio
INNER JOIN Transactions ON Transactions.`idPortfolio` = Ongoing_Portfolio.idPortfolio
WHERE ongoing_portfolio.`idPortfolio`= 2 ORDER BY `Updated_Date` DESC LIMIT 4
However this generates errors as it says
#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE `Transaction_TimeStamp` <= ongoing_portfolio.`Updated_Date`
FROM Ongoing_' at line 3
How can I make this one query successfully?
A: You can't put a WHERE condition inside the column list.
You can use a CASE WHEN expression for the total instead:
SELECT ongoing_portfolio.*,
Portfolio.Activation,
SUM(case when `Transaction_TimeStamp` <= ongoing_portfolio.`Updated_Date`
then Transactions.`Transaction_Amount` end) AS `Total` FROM Ongoing_Portfolio
INNER JOIN Portfolio ON Ongoing_Portfolio.idPortfolio = Portfolio.idPortfolio
INNER JOIN Transactions ON Transactions.`idPortfolio` = Ongoing_Portfolio.idPortfolio
WHERE ongoing_portfolio.`idPortfolio`= 2 group by `Updated_Date`
ORDER BY `Updated_Date` DESC LIMIT 4
| |
doc_1132
|
$.ajax({
url: 'Content/data.txt',
dataType: 'text',
success: function (treeObj) {
var count = treeObj.root.length; //here
A: With dataType: 'text' you tell the Ajax call to treat the answer from the server as a plain string. Try changing that to dataType: 'json'.
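A minimal sketch of the corrected call:
$.ajax({
url: 'Content/data.txt',
dataType: 'json', // jQuery now parses the response into an object for you
success: function (treeObj) {
var count = treeObj.root.length; // works, treeObj is an object
}
});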
A: treeObj is most likely getting passed in as a string, not an object, so you need to parse the string into an object using something along these lines:
try{
treeObj = jQuery.parseJSON(treeObj);
}catch(e){}
| |
doc_1133
|
FAIL - Deployed application at context path /AsteriskTeams but context failed to start
D:\AsteriskDemo\AsteriskTeams\nbproject\build-impl.xml:758:
The module has not been deployed.
at org.netbeans.modules.j2ee.deployment.devmodules.api.Deployment.deploy(Deployment.java:210)
at org.netbeans.modules.j2ee.ant.Deploy.execute(Deploy.java:106)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.module.bridge.impl.BridgeImpl.run(BridgeImpl.java:284)
at org.apache.tools.ant.module.run.TargetExecutor.run(TargetExecutor.java:539)
at org.netbeans.core.execution.RunClassThread.run(RunClassThread.java:153)
BUILD FAILED (total time: 5 seconds)
Please help me to resolve the issue..
Thanks in advance...
| |
doc_1134
|
I've tried animating using button.imageView.frame (with image set as imageView) but it only animates the position, the image doesn't get scaled down at all.
using hateButton.frame (when image is set as backgroundImage), the backgroundImage gets scaled down but it's not animated.
How do I animate-scale a UIButton image?
A: You can call layoutIfNeeded between [UIView beginAnimations] and [UIView commitAnimations], or with [UIView animateWithDuration].
[UIView animateWithDuration:0.3f animations:^{
self.myButton.frame = aFrame;
[self.myButton layoutIfNeeded];
}];
A: I could achieve this using Quartz 2D:
// move anchor point without moving frame
CGRect oldFrame = hateButton.frame;
hateButton.layer.anchorPoint = CGPointMake(0, 1); // bottom left
hateButton.frame = oldFrame;
[UIView beginAnimations:nil context:NULL];
CGAffineTransform newTransform;
newTransform = CGAffineTransformMakeScale(0.8, 0.8);
hateButton.transform = CGAffineTransformTranslate(newTransform, 0, -160);
[UIView commitAnimations];
Make sure you import Quartz2D too:
#import <QuartzCore/QuartzCore.h>
| |
doc_1135
|
With XE2 there was iphoneall unit, which exposed UIApplication. XE4 doesn't use FPC anymore, so I can't use that.
Embarcadero documentation
says I can use SDKs only with C++ or using delphi interfaces (and still, macapi is for OSX only, not iOS). So, it seems that there is no interface for UIKit framework?!
Another solution I tried was:
_system('open http://www.google.com');
But that had no effect at all!
Is there any other ways to open urls or am I out of luck to accomplish it?
I know there is TWebBrowser component for ios, but I wouldn't want to take that road just to display a webpage.
A: By chance, someone at Embarcadero posted a code snippet to do exactly this two days ago.
If you are using XE4, look in the Samples, and you can find one (sorry, not sure of the name) where the final code is:
OpenURL('http://www.embarcadero.com');
This uses the XE4 FireMonkey framework and a class helper written by David Clegg, available in the sample.
If you are using an older version of FireMonkey, you can use the rather more cumbersome code:
function SharedApplication: UIApplication;
begin
Result := TUIApplication.Wrap(TUIApplication.OCClass.sharedApplication);
end;
procedure TForm2.Button1Click(Sender: TObject);
begin
SharedApplication.openURL(TNSURL.Wrap(TNSURL.OCClass.URLWithString(NSSTR(PChar(String('http://www.embarcadero.com'))))));
end;
(Attribution: Code snippets all copied from the linked blog post.)
There is also a very old forum post from the early days of FireMonkey showing how to tackle these problems in general (basically, string <-> NSString <-> NSURL), and while it's a bit out of date - as you can see by the above code, FireMonkey has matured greatly - it may give some insight into the underlying reason for the code.
| |
doc_1136
|
A: You're looking at it the wrong way - a FormulaFieldDefinition is simply the definition of the formula itself, and not an object on the report. Therefore manipulating the size or position of it makes no sense.
What is actually shown on the report is an IFieldObject which displays the result for the given formula. This is how you can (if needed) show the same formula several times on a report.
You need to find the name of the IFieldObject that is displaying the formula, and manipulate the location of that instead. This can be done using ReportDefinition.ReportObjects("NameOfIFieldObject") and the Top, Left, Width and Height properties of it. Remember that the Top and Left values are relative to the section the object is in, not to the report.
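A minimal sketch in C#, assuming report is a ReportDocument and the field object on the report is named "FieldObject1" (the name and the position values are illustrative):
var fieldObject = report.ReportDefinition.ReportObjects["FieldObject1"];
fieldObject.Top = 100; // in twips, relative to the containing section
fieldObject.Left = 500;
fieldObject.Width = 2000;
fieldObject.Height = 300;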
| |
doc_1137
|
But I need this gradient to be applied only to bars (CALayer) of the first view.
I haven't found any information relevant to my problem. Any help appreciated.
A: You have to apply a mask to the gradient. There are various ways you could approach this problem.
You could create a CAShapeLayer, set the shape layer's path to the shape of the bars, and set the gradient layer's mask to that shape layer.
Or you could get rid of the bar layer and instead use two gradient layers, one for the orange bars and the other for the gray bars. Put both gradient layers in a subview, side-by-side, and set the superview's layer mask to the shape layer. Here's how to do that.
You'll need two gradient layers and a shape layer:
@IBDesignable
class BarGraphView : UIView {
private let orangeGradientLayer = CAGradientLayer()
private let grayGradientLayer = CAGradientLayer()
private let maskLayer = CAShapeLayer()
You'll also need the bar width:
private let barWidth = CGFloat(9)
At initialization time, set up the gradients and add all the sublayers:
override init(frame: CGRect) {
super.init(frame: frame)
commonInit()
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
commonInit()
}
private func commonInit() {
backgroundColor = .black
initGradientLayer(orangeGradientLayer, with: .orange)
initGradientLayer(grayGradientLayer, with: .gray)
maskLayer.strokeColor = nil
maskLayer.fillColor = UIColor.white.cgColor
layer.mask = maskLayer
}
private func initGradientLayer(_ gradientLayer: CAGradientLayer, with color: UIColor) {
gradientLayer.colors = [ color, color, color.withAlphaComponent(0.6), color ].map({ $0.cgColor })
gradientLayer.locations = [ 0.0, 0.5, 0.5, 1.0 ]
layer.addSublayer(gradientLayer)
}
At layout time, set the frames of the gradient layers and set the mask layer's path. This requires a little work because you don't want a bar to be half orange and half gray.
override func layoutSubviews() {
super.layoutSubviews()
let barCount = ceil(bounds.size.width / barWidth)
let orangeBarCount = floor(barCount / 2)
let grayBarCount = barCount - orangeBarCount
var grayFrame = bounds
grayFrame.size.width = grayBarCount * barWidth
grayFrame.origin.x = frame.maxX - grayFrame.size.width
grayGradientLayer.frame = grayFrame
var orangeFrame = bounds
orangeFrame.size.width -= grayFrame.size.width
orangeGradientLayer.frame = orangeFrame
maskLayer.frame = bounds
maskLayer.path = barPath()
}
private func barPath() -> CGPath {
var columnBounds = self.bounds
columnBounds.origin.x = columnBounds.maxX
columnBounds.size.width = barWidth
let path = CGMutablePath()
for datum in barData.reversed() {
columnBounds.origin.x -= barWidth
let barHeight = CGFloat(datum) * columnBounds.size.height
let barRect = columnBounds.insetBy(dx: 1, dy: (columnBounds.size.height - barHeight) / 2)
path.addRoundedRect(in: barRect, cornerWidth: 2, cornerHeight: 2)
}
return path
}
let barData: [Double] = {
let count = 100
return (0 ..< count).map({ 0.5 + (1 + sin(8.0 * .pi * Double($0) / Double(count))) / 4 })
}()
}
Result:
The BarGraphView is transparent wherever there are no bars. If you want it on a dark background, put a dark view behind it, or make it a subview of a dark view:
| |
doc_1138
|
My model is trained with Keras and sklearn and it must do the same normalization as I did using the sklearn Normalizer, which defaults to L2 normalization. What I am doing below is apparently not equivalent to sklearn's; any ideas?
vDSP_normalizeD(vec, 1, &normalizedVec, 1, &mean, &std, vDSP_Length(count))
let (normalizedXVec, _, _) = normalize(vec: doubleArray)
Then here I convert normalizedXVec to MLMultiArray and use as input to my predictor
Note: I also tried to convert the normalizer from sklearn using coreml tools but I got errors as seen here:
A: vDSP_normalizeD uses the mean and standard deviation. That is not the same as L2.
The L2 normalization first computes the L2-norm of the vector, which is the same as sqrt(v[0]*v[0] + v[1]*v[1] + ... + v[n]*v[n]) and then it divides each element of the vector by that number.
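A minimal sketch of an L2 normalization in Swift with Accelerate (the function name is an assumption):
import Accelerate
func l2Normalize(_ vec: [Double]) -> [Double] {
var sumOfSquares = 0.0
vDSP_svesqD(vec, 1, &sumOfSquares, vDSP_Length(vec.count)) // sum of v[i]*v[i]
var norm = sumOfSquares.squareRoot() // the L2-norm
guard norm > 0 else { return vec } // avoid division by zero
var result = [Double](repeating: 0, count: vec.count)
vDSP_vsdivD(vec, 1, &norm, &result, 1, vDSP_Length(vec.count)) // v[i] / norm
return result
}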
| |
doc_1139
|
I want to create data frame by parsing data from excel using python.
The data in my Excel file looks as follows:
The first row, highlighted in yellow, contains the match, which will be one of the columns in the data frame that I want to create.
In fact, the second and fourth rows are the names of the columns that I want to create in the new data frame.
The third and fifth rows are the values of each column.
The sample here is only for one match; I have multiple matches in the Excel file.
I want to create a data frame that contains the column Match and all the names in blue in the file.
I have attached a sample file that contains multiple matches.
Download the file here.
My expected data frame is
Match 1-0 2-0 2-1 3-0 3-1 3-2 4-0 4-1 4-2 4-3.......
MOL Vivi -vs- Chelsea 14 42 20 170 85 85 225 225 225 .....
Can anyone advise me how to parse the Excel data and convert it to a data frame?
Thanks,
Zep
A: Use:
import pandas as pd
from datetime import datetime
df = pd.read_excel('test_match.xlsx')
#mask to check for a-z in column HOME -vs- AWAY
m1 = df['HOME -vs- AWAY'].str.contains('[a-z]', na=False)
#create index by matches
df.index = df['HOME -vs- AWAY'].where(m1).ffill()
df.index.name = 'Match'
#remove same index and HOME -vs- AWAY column rows
df = df[df.index != df['HOME -vs- AWAY']].copy()
#test if datetime or string
m2 = df['HOME -vs- AWAY'].apply(lambda x: isinstance(x, datetime))
m3 = df['HOME -vs- AWAY'].apply(lambda x: isinstance(x, str))
#select next rows and set new column names
df1 = df[m2.shift().fillna(False)]
df1.columns = df[m2].iloc[0]
#also remove columns that are entirely NaN
df2 = df[m3.shift().fillna(False)].dropna(axis=1, how='all')
df2.columns = df[m3].iloc[0].dropna()
#join together
df = pd.concat([df1, df2], axis=1).astype(float).reset_index().rename_axis(None, axis=1)
print (df.head())
Match 2000-01-01 00:00:00 2000-02-01 00:00:00 \
0 MOL Vidi -vs- Chelsea 14.00 42.00
1 Lazio -vs- Eintracht Frankfurt 8.57 11.55
2 Sevilla -vs- FC Krasnodar 7.87 6.63
3 Villarreal -vs- Spartak Moscow 7.43 7.03
4 Rennes -vs- FC Astana 4.95 6.38
2018-02-01 00:00:00 2000-03-01 00:00:00 2018-03-01 00:00:00 \
0 20.00 170.00 85.00
1 7.87 23.80 15.55
2 7.87 8.72 8.65
3 7.07 10.00 9.43
4 7.33 12.00 13.20
2018-03-02 00:00:00 2000-04-01 00:00:00 2018-04-01 00:00:00 \
0 85.0 225.00 225.00
1 21.3 64.30 42.00
2 25.9 14.80 14.65
3 23.9 19.35 17.65
4 38.1 31.50 34.10
2018-04-02 00:00:00 ... 0-1 0-2 2018-01-02 00:00:00 \
0 225.0 ... 5.6 6.80 7.00
1 55.7 ... 11.0 19.05 10.45
2 38.1 ... 28.0 79.60 29.20
3 38.4 ... 20.9 58.50 22.70
4 81.4 ... 12.9 42.80 22.70
0-3 2018-01-03 00:00:00 2018-02-03 00:00:00 0-4 \
0 12.5 12.0 32.0 30.0
1 48.4 27.4 29.8 167.3
2 223.0 110.0 85.4 227.5
3 203.5 87.6 73.4 225.5
4 201.7 97.6 103.6 225.5
2018-01-04 00:00:00 2018-02-04 00:00:00 2018-03-04 00:00:00
0 29.0 60.0 220.0
1 91.8 102.5 168.3
2 227.5 227.5 227.5
3 225.5 225.5 225.5
4 225.5 225.5 225.5
[5 rows x 27 columns]
| |
doc_1140
|
names = [ "Wim Duisenberg", "Jean-Claude Trichet", "Mario Draghi", "Christine Lagarde"]
And the following block of text that is scraped via beautiful soup:
print(textauthors)
<h2 class="ecb-pressContentSubtitle">Mario Draghi, President of the ECB, <br/>Vítor Constâncio, Vice-President of the ECB, <br/>Frankfurt am Main, 20 October 2016</h2>
I tried the following solution (based on this answer on stack overflow):
def exact_Match(textauthors, names):
b = r'(\s|^|$)'
res = return re.match(b + word + b, phrase, flags=re.IGNORECASE)
print(res)
It gives me a syntax error and I am not sure how to fix it. Also, let me apologize in advance if there is already an answer for this somewhere on Stack Overflow; I am a Python beginner and I am not really sure how to search for the right question. When I search for name matching I see answers that use nltk, but that is not appropriate for me because I want an exact match, and when I search for matching based on string text I can't find an answer that works for me.
A: This will give you authors from textauthors:
import re
textauthors = '<h2 class="ecb-pressContentSubtitle">Mario Draghi, President of the ECB, <br/>Vítor Constâncio, Vice-President of the ECB, <br/>Frankfurt am Main, 20 October 2016</h2>'
regex = r">(?P<name>[^\s]+\s[^\s]+),"
matches = re.findall(regex, textauthors)
print(matches) # ['Mario Draghi', 'Vítor Constâncio']
Of course, this only extracts the names from textauthors; you can then check them against your names list.
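A minimal sketch of that check:
found = [name for name in names if name in matches]
print(found) # ['Mario Draghi']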
| |
doc_1141
|
Example:
ls = [[1,1], [1,2], [1,3], [1,4], [2,2], [2,3], [3,4], [3,5], [3,6], [3,7]]
desired_result = [[1,1], [1,2], [1,3], [2,2], [2,3], [3,4], [3,5], [3,6]]
A: If the input is sorted by the first element, you could use groupby and islice:
from itertools import groupby, islice
from operator import itemgetter
ls = [[1, 1], [1, 2], [1, 3], [1, 4], [2, 2], [2, 3], [3, 4], [3, 5], [3, 6], [3, 7]]
result = [e for _, group in groupby(ls, key=itemgetter(0)) for e in islice(group, 3)]
print(result)
Output
[[1, 1], [1, 2], [1, 3], [2, 2], [2, 3], [3, 4], [3, 5], [3, 6]]
The idea is to group the elements by the first value using groupby, and then fetch the first 3 values, if they exist, using islice.
A: Probably not the shortest answer.
The idea is to count occurrences while you're iterating over ls
from collections import defaultdict
filtered_ls = []
counter = defaultdict(int)
for l in ls:
counter[l[0]] += 1
if counter[l[0]] > 3:
continue
filtered_ls += [l]
print(filtered_ls)
# [[1, 1], [1, 2], [1, 3], [2, 2], [2, 3], [3, 4], [3, 5], [3, 6]]
A: If the list is already sorted, you can use itertools.groupby then just keep the first three items from each group
>>> import itertools
>>> ls = [[1,1], [1,2], [1,3], [1,4], [2,2], [2,3], [3,4], [3,5], [3,6], [3,7]]
>>> list(itertools.chain.from_iterable(list(g)[:3] for _,g in itertools.groupby(ls, key=lambda i: i[0])))
[[1, 1], [1, 2], [1, 3], [2, 2], [2, 3], [3, 4], [3, 5], [3, 6]]
A: You can do it like below:
ls = [[1,1], [1,2], [1,3], [1,4], [2,2], [2,3], [3,4], [3,5], [3,6], [3,7]]
val_count = dict.fromkeys(set([i[0] for i in ls]), 0)
new_ls = []
for i in ls:
if val_count[i[0]] < 3:
val_count[i[0]] += 1
new_ls.append(i)
print(new_ls)
Output:
[[1, 1], [1, 2], [1, 3], [2, 2], [2, 3], [3, 4], [3, 5], [3, 6]]
A: You can use collections.defaultdict to aggregate by first value in O(n) time. Then use itertools.chain to construct a list of lists.
from collections import defaultdict
from itertools import chain
dd = defaultdict(list)
for key, val in ls:
if len(dd[key]) < 3:
dd[key].append([key, val])
res = list(chain.from_iterable(dd.values()))
print(res)
# [[1, 1], [1, 2], [1, 3], [2, 2], [2, 3], [3, 4], [3, 5], [3, 6]]
A: Ghillas BELHADJ's answer is good. But you should consider defaultdict for this task. The idea is taken from Raymond Hettinger, who suggested using defaultdict for grouping and counting tasks.
from collections import defaultdict
def remove_sub_lists(a_list, nth_occurence):
found = defaultdict(int)
for sublist in a_list:
first_index = sublist[0]
found[first_index] += 1
if found[first_index] <= nth_occurence:
yield sublist
max_3_times_first_index = list(remove_sub_lists(ls, 3))
A: Here's an option that doesn't use any modules:
countDict = {}
result = []
for i in ls:
key = str(i[0])
countDict[key] = countDict.get(key, 0) + 1
if countDict[key] <= 3:
result.append(i) # build a new list instead of removing from ls while iterating over it
| |
doc_1142
|
I read that throw preserves the original exception, but I don't understand that statement.
Can I get a brief explanation with an example?
A: Throw will rethrow original exception;
throw ex rethrows the same exception object but resets the stack trace to the current method, so you lose where it originated. That usually makes little sense; in general you should either just throw, or create a new exception and throw that, e.g.
// not great code, demo purposes only
try{
File.Read("blah");
}
catch(FileNotFoundException ex){
throw new ConfigFileNotFoundException("Oops", ex);
}
A: Yes - throw re-throws the exception that was caught, and preserves the stack trace. throw ex throws the same exception, but resets the stack trace to that method.
Unless you want to reset the stack trace (i.e. to shield public callers from the internal workings of your library), throw is generally the better choice, since you can see where the exception originated.
I would also mention that a "pass-through" catch block:
try
{
// do stuff
}
catch(Exception ex)
{
throw;
}
is pointless. It's the exact same behavior as if there were no try/catch at all.
| |
doc_1143
|
for instance, the text and checkbox tags inside the form tag?:
<s:form>
<s:text .../>
<s:checkbox .../>
</s:form>
I know that form is a tag defined in struts-tags.tld and that it extends org.apache.struts2.views.jsp.ui.FormTag,
but how does it go behind the scenes to get the nested tags?
A: use getComponentStack() method from the org.apache.struts2.components.Component class.
| |
doc_1144
|
What's on the parent page? - Just a div called "#hidden" that loads a separate php page called "getcontent.php"
What's on the Iframe modal box? - Just a form. On form submit it goes to a submission page where I do all my updating inside the database. After all of that updating is complete, then my function for closing the modal box and refreshing the div comes in inside the iframe
Here's my code:
$(document).ready(function() {
$('#hidden').load('getcontent.php');
parent.$.fallr('hide', function(){
});
});
I expect this code to a)refresh the div #hidden on the parent page and b)close my modal box. So far closing the modal box works perfectly! Refreshing the div does not
A: You have a function-ception there:
parent.$.fallr('hide', function(){
function(){
A: Within your callback function you create another function, but you don't do anything with it. It gets created and thrown away. If you remove the inner-most function I guess it'll work loads better for you:
parent.$.fallr('hide', function(){
$('#hidden').load(
'admin/getcontent.php',
{ pageID : "<? echo $_REQUEST['pageID']; ?>" }
);
});
Possibly also failing because of the query string. Not sure if it'll be able to encode that properly. It should work if you place it in the data parameter, however. See above.
So, now that we've gotten a much clearer explanation, and gotten rid of the extra function and query string (the latter of which was probably working fine anyhow), we can move forward.
With the current code you're asking the #hidden div on the dialog to load content. But there is no #hidden div on the dialog, it's on the parent. Prefixing with parent. would fix that, but we also have another issue. We're telling the dialog to refresh the div as soon as the dialog loads. That's not what we want. We want it to refresh when we're closing the dialog, right? So moving it into the fallr callback should fix that. And that should run in the context of the parent, so no parent. prefix required.
$(document).ready(function() {
parent.$.fallr('hide', function(){
$('#hidden').load('getcontent.php');
});
});
But that's more or less what we had earlier that was causing the problem with the context not existing any more.
So, to me the problem is that you're trying to do things with the parent from the child. In my opinion, the parent should be the one doing things. So when the dialog is ready to be closed it should call a function on the parent that will close the dialog and update itself:
//From an event on the dialog
parent.closeDialogAndRefresh();
//On the parent define it
function closeDialogAndRefresh() {
$.fallr('hide');
$('#hidden').load('getcontent.php');
}
If that doesn't work, try starting a timer that will call closeDialogAndRefresh after 50ms or something like that.
The real problem here I think is that you're using IFrames instead of floating divs from some decent UI-framework. Hope you'll get where you're going though.
| |
doc_1145
|
Any IDE plugins or external tools that can do this?
A: Eclipse gives you a way to see the call hierarchy using Ctrl + Alt + H, or you can choose it from the context menu, like References.
This should show you the entire call tree for this method.
A: Use PMD or CheckStyle and have it construct an abstract syntax tree. You can then use that to find indirect usages/references.
A: If you use Eclipse, take a look at the Call Hierarchy view.
A: Check out nWire for Java. It shows all the code associations in one view, which can be further expanded and explored.
| |
doc_1146
|
public class WordCountBolt extends BaseStatefulBolt<KeyValueState<String, Long>> {
private KeyValueState<String, Long> wordCounts;
private OutputCollector collector;
...
@Override
public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
this.collector = collector;
}
@Override
public void initState(KeyValueState<String, Long> state) {
wordCounts = state;
}
@Override
public void execute(Tuple tuple) {
String word = tuple.getString(0);
Integer count = wordCounts.get(word, 0);
count++;
wordCounts.put(word, count);
collector.emit(tuple, new Values(word, count));
collector.ack(tuple);
}
...
}
I would need to trigger a method that updates the state (wordCounts), say every X seconds, regardless of whether an event is received. Is this possible in Apache Storm stateful bolts? Is it possible to simply schedule a method like this to run repeatedly at a defined interval?
public void updateState() {
wordCounts.put("NewKey", 1);
}
A: I'm not sure I follow why you need to update the state periodically, but if you need to run some code every so often, tick tuples might work for you. https://kitmenke.com/blog/2014/08/04/tick-tuples-within-storm/
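A minimal sketch of what that could look like in the bolt (Config and TupleUtils are from Storm 1.x; treat this as an outline, not a drop-in):
import org.apache.storm.Config;
import org.apache.storm.utils.TupleUtils;
// in WordCountBolt: ask Storm to send this bolt a tick tuple every 10 seconds
@Override
public Map<String, Object> getComponentConfiguration() {
Map<String, Object> conf = new HashMap<>();
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
return conf;
}
@Override
public void execute(Tuple tuple) {
if (TupleUtils.isTick(tuple)) {
wordCounts.put("NewKey", 1L); // the periodic state update
return;
}
// ... normal word-count handling as before
}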
| |
doc_1147
|
Here is the html:
<select id="f-cut-off-time" name="f-cut-off-time" aria-invalid="false" class="valid">
<option value="6:00AM">6:00AM</option>
<option value="6:30AM">6:30AM</option>
<option selected="" disabled="" hidden="" value="7:30PM">7:30PM</option>
<option value="7:30AM">7:30AM</option>
<option value="8:00AM">8:00AM</option>
<option value="8:30AM">8:30AM</option>
<option value="9:00AM">9:00AM</option>
<option value="9:30AM">9:30AM</option>
<option value="10:00AM">10:00AM</option>
<option value="10:30AM">10:30AM</option>
<option value="11:00AM">11:00AM</option>
<option value="11:30AM">11:30AM</option>
<option value="12:00PM">12:00PM</option>
<option value="12:30PM">12:30PM</option>
<option value="1:00PM">1:00PM</option>
<option value="1:30PM">1:30PM</option>
<option value="2:00PM">2:00PM</option>
<option value="2:30PM">2:30PM</option>
<option value="3:00PM">3:00PM</option>
<option value="3:30PM">3:30PM</option>
<option value="4:00PM">4:00PM</option>
<option value="4:30PM">4:30PM</option>
<option value="5:00PM">5:00PM</option>
<option value="5:30PM">5:30PM</option>
<option value="6:00PM">6:00PM</option>
<option value="6:30PM">6:30PM</option>
<option value="7:00PM">7:00PM</option>
<option value="7:30PM">7:30PM</option>
<option value="8:00PM">8:00PM</option>
<option value="8:30PM">8:30PM</option>
<option value="9:00PM">9:00PM</option>
<option value="9:30PM">9:30PM</option>
<option value="10:00PM">10:00PM</option>
<option value="10:30PM">10:30PM</option>
<option value="11:00PM">11:00PM</option>
<option value="11:30PM">11:30PM</option>
<option value="12:00AM">12:00AM</option>
<option value="12:30AM">12:30AM</option>
<option value="1:00AM">1:00AM</option>
<option value="1:30AM">1:30AM</option>
<option value="2:00AM">2:00AM</option>
<option value="2:30AM">2:30AM</option>
<option value="3:00AM">3:00AM</option>
<option value="3:30AM">3:30AM</option>
<option value="4:00AM">4:00AM</option>
<option value="4:30AM">4:30AM</option>
<option value="5:00AM">5:00AM</option>
<option value="5:30AM">5:30AM</option>
</select>
A: You could just change the value like this.
<option value="6:00PM">5:30PM</option>
That way it will appear as if they are selecting 5:30, but it is outputting 6:00
A: You can retrieve the selected index of the select element, make sure there is another option that comes after it, and then locate it by index in the options collection of that select element.
$('#timeslot').on('input', function () {
var i = this.selectedIndex;
if (i + 1 < this.options.length) {
console.log("next option value is " + this.options[i + 1].value);
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<select id='timeslot'>
<option value="6:00AM">6:00AM</option>
<option value="6:30AM">6:30AM</option>
<option value="7:00AM">7:00AM</option>
<option value="7:30AM">7:30AM</option>
</select>
A: To select the next or previous option from the dropdown after or before the selected option you can use the following solution:
*
*Next option:
*
*XPath:
//select[@class='valid' and @id='f-cut-off-time']//option[@selected]//following::option[1]
*Browser snapshot:
*
*Previous option:
*
*XPath:
//select[@class='valid' and @id='f-cut-off-time']//option[@selected]//preceding::option[1]
*Browser snapshot:
A: As I understand you, you need to use:
$('option:selected').next() or
$('option:selected').prev()
| |
doc_1148
|
What we understand about LDAP up to now is (correct me if I am wrong):
*
*There is an LDAP server somewhere which contains users and their
associated roles.
*When a user browses our web application, it asks them to enter a username and
password (which are actually credentials for the LDAP server).
*When the LDAP server authenticates, it sends a token to our web
application; we put that token in cookies and use it to
authenticate users for every web application within our domain.
*So using this technique we do not have to build additional modules
for user management.
What are we missing?
Any ideas will be appreciated.
Thanks in advance
| |
doc_1149
|
a = [array([0, 1, 2, 3, 4]) array([0, 1, 2, 3]) array([0, 1, 2, 3, 4])
array([0, 1, 2, 3, 4, 5, 6, 7, 8]) array([0, 2, 3, 4, 5, 8, 9])
Is there any way to show this result in the form of key-value pairs like
[[(0),(1,2,3,4)],[(1),(0,2,3)],[(2),(0,1,3,4)],[(3),(0,1,2,4,5,6,7,8)]
[(4),(0,2,3,5,8,9)]]
The key increments by 1 each time, and that value is not included in the values list.
I tried the following, but was not able to put it into the required form.
c = [[id for id in b if id != i] for i, b in enumerate(a)]
A: List comprehension with enumerate is one way. Note the comma after each single-item key. This represents that the object type is a tuple of length 1.
from numpy import array
a = [array([0, 1, 2, 3, 4]), array([0, 1, 2, 3]), array([0, 1, 2, 3, 4]),
array([0, 1, 2, 3, 4, 5, 6, 7, 8]), array([0, 2, 3, 4, 5, 8, 9])]
res = [[(i,), tuple(j for j in arr if j != i)] for i, arr in enumerate(a)]
# [[(0,), (1, 2, 3, 4)],
# [(1,), (0, 2, 3)],
# [(2,), (0, 1, 3, 4)],
# [(3,), (0, 1, 2, 4, 5, 6, 7, 8)],
# [(4,), (0, 2, 3, 5, 8, 9)]]
Alternatively, you can create a dictionary:
res_dict = {i: tuple(j for j in arr if j != i) for i, arr in enumerate(a)}
# {0: (1, 2, 3, 4),
# 1: (0, 2, 3),
# 2: (0, 1, 3, 4),
# 3: (0, 1, 2, 4, 5, 6, 7, 8),
# 4: (0, 2, 3, 5, 8, 9)}
A: You can try a numpy approach:
import numpy as np
a = [np.array([0, 1, 2, 3, 4]),np.array([0, 1, 2, 3]),np.array([0, 1, 2, 3, 4]),
np.array([0, 1, 2, 3, 4, 5, 6, 7, 8]), np.array([0, 2, 3, 4, 5, 8, 9])]
print([[(i,),tuple(np.delete(j,np.argwhere(j==i)))] for i,j in enumerate(a)])
output:
[[(0,), (1, 2, 3, 4)], [(1,), (0, 2, 3)], [(2,), (0, 1, 3, 4)], [(3,), (0, 1, 2, 4, 5, 6, 7, 8)], [(4,), (0, 2, 3, 5, 8, 9)]]
| |
doc_1150
|
The real purpose of this is to make a button visible when the soft input keyboard appears. This button must always sit just above the soft keyboard, wherever the scroll position is.
So maybe there is another solution to reach the goal; ask if you need more details. Thanks.
A: It is possible with a RelativeLayout.
LinearLayout prevents this from happening and places the widgets in horizontal/vertical order.
A: 1) soft keyboard hidden
2) soft keyboard visible
ANDROID MANIFEST:
(add android:windowSoftInputMode="adjustResize" to your activity tag)
<activity
android:windowSoftInputMode="adjustResize"
android:name="._TestActivity"
android:label="@string/app_name" >
XML:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/RelativeLayout1"
android:layout_width="fill_parent"
android:layout_height="fill_parent" >
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_above="@+id/layout_buttons"
android:orientation="vertical" >
<EditText
android:id="@+id/editText1"
android:layout_width="match_parent"
android:layout_height="wrap_content"
/>
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="RandomText"
android:textAppearance="?android:attr/textAppearanceLarge" />
</LinearLayout>
<LinearLayout
android:id="@+id/layout_buttons"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:background="@android:color/darker_gray"
android:orientation="horizontal" >
<Button
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="1"
android:text="Ok" />
<Button
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_weight="1"
android:text="Cancel" />
</LinearLayout>
</RelativeLayout>
| |
doc_1151
|
Brief overview:
We have an unsorted list of whole numbers. The goal is to find out how far each element of this list is from the nearest '0' value.
So if we have a list similar to this: [0, 1, 2, 0, 4, 5, 6, 7, 0, 5, 6, 9]
The expected result will be: [0, 1, 1, 0, 1, 2, 2, 1, 0, 1, 2, 3]
I've tried to simplify the problem in order to come up with some naive algorithm, but I can't figure out how to keep track of previous and next zero values.
My initial thoughts were to figure out all indexes for zeros in the list and fill the gaps between those zeros with values, but this obviously didn't quite work out for me.
The poorly implemented code (so far I'm just counting down the steps to the next zero):
def get_empty_lot_index(arr: list) -> list:
''' Gets all indices of empty lots '''
lots = []
for i in range(len(arr)):
if arr[i] == 0:
lots.append(i)
return lots
def space_to_empty_lots(arr: list) -> list:
empty_lots = get_empty_lot_index(arr)
new_arr = []
start = 0
for i in empty_lots:
steps = i - start
while steps >= 0:
new_arr.append(steps)
steps -= 1
start = i + 1
return new_arr
A: One possible algorithm is to make two sweeps through the input list: once forward, once backward. Each time retain the index of the last encountered 0 and store the difference. In the second sweep take the minimum of what was stored in the first sweep and the new result:
def space_to_empty_lots(arr: list) -> list:
result = []
# first sweep
lastZero = -len(arr)
for i, value in enumerate(arr):
if value == 0:
lastZero = i
result.append(i - lastZero)
# second sweep
lastZero = len(arr)*2
for i, value in reversed(list(enumerate(arr))):
if value == 0:
lastZero = i
result[i] = min(result[i], lastZero - i)
return result
NB: this function assumes that there is at least one 0 in the input list. It is not clear what the function should do when there is no 0. In that case this implementation will return a list with values greater than the length of the input list.
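For the example from the question:
print(space_to_empty_lots([0, 1, 2, 0, 4, 5, 6, 7, 0, 5, 6, 9]))
# [0, 1, 1, 0, 1, 2, 2, 1, 0, 1, 2, 3]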
| |
doc_1152
|
Everything worked fine until I tried to install a Helm chart.
The helm binary (version: 3.5.2) ends up with the error 'http2 connection lost'.
GKE then triggers auto-repair mode.
I don't understand why, because I can use kubectl to create configmaps without any problem.
Do you know where I can find logs about the node pool machines or the GKE control plane?
A: Without information on how exactly this cluster was created, what resources were applied with Helm, and the logs from the cluster (I will address how to retrieve them), it is hard to pinpoint the issue and the reason behind it.
Overview:
GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node.
-- Cloud.google.com: Kubernetes Engine: Docs: How to: Node auto repair: Overview
Answering the question posted:
Do you know where I can find logs about the node pool machines or the GKE control plane?
Yes. There are ways that you can check your cluster health as well the logs of it.
GKE generates a log entry for automated repair events. You can check the logs by using the:
*
*gcloud container operations list
The output should look similar to the one below:
operation-XXXXXXXXXXXXX-XXXXXXXX CREATE_CLUSTER europe-west3-c example-cluster DONE 2021-03-07T11:59:55.133563829Z 2021-03-07T12:03:09.684215827Z
operation-YYYYYYYYYYYYY-YYYYYYYY AUTO_REPAIR_NODES europe-west3-c gke-example-cluster-default-pool-AAAAAAAA-AAAA DONE 2021-03-07T12:21:14.814774338Z 2021-03-07T12:24:15.6305881Z
Adding to that you can look for specific node logs with: Google Cloud's operations suite (formerly Stackdriver)
You can access it by following:
*
*GCP Cloud Console (Web UI) -> Logging -> Upgrade -> Upgrade to the New Logs Explorer
and look for those logs using below filter:
resource.type="k8s_node"
resource.labels.cluster_name="CLUSTER-NAME"
resource.labels.project_id="PROJECT-NAME"
resource.labels.location="ZONE"
Additional resources:
*
*Cloud.google.com: Kubernetes Engine: Docs: How to: Node auto repair
*Cloud.google.com: Logging: Docs
| |
doc_1153
|
I am trying to use the Microsoft.Hadoop.Avro library. I dumped the schema out using a Java Avro tool, which produces this:
{
"type": "record",
"name": "EventData",
"namespace": "Microsoft.ServiceBus.Messaging",
"fields": [
{"name": "SequenceNumber", "type": "long"},
{"name": "Offset", "type": "string"},
{"name": "EnqueuedTimeUtc", "type": "string"},
{"name": "SystemProperties", "type": {"type": "map", "values": ["long", "double", "string", "bytes"]}},
{"name": "Properties", "type": {"type": "map", "values": ["long", "double", "string", "bytes", "null"]}},
{"name": "Body", "type": ["null", "bytes"]}
]
}
However, when trying to deserialize the file to read the data back in like this:
using (var reader = AvroContainer.CreateReader<EventData>(stream))
{
using (var streamReader = new SequentialReader<EventData>(reader))
{
foreach (EventData dta in streamReader.Objects)
{
//stuff here
}
}
}
It doesn't work when passing the actual EventData type used on the Producer side so I tried to create a special class marked up with DataContract attributes like this:
[DataContract(Namespace = "Microsoft.ServiceBus.Messaging")]
public class EventData
{
[DataMember(Name = "SequenceNumber")]
public long SequenceNumber { get; set; }
[DataMember(Name = "Offset")]
public string Offset { get; set; }
[DataMember(Name = "EnqueuedTimeUtc")]
public string EnqueuedTimeUtc { get; set; }
[DataMember(Name = "Body")]
public ArraySegment<byte> Body { get; set; }
//[DataMember(Name = "SystemProperties")]
//public SystemPropertiesCollection SystemProperties { get; set; }
//[DataMember(Name = "Properties")]
//public IDictionary<string, object> Properties { get; set; }
}
It errors with the following:
System.Runtime.Serialization.SerializationException occurred
Message=Cannot match the union schema.
Is there a reason no sample code exists from MS for this use case of reading the Avro archive files using C#?
A: If you're trying to read the Avro files using Microsoft.Hadoop.Avro library, you can use the following class:
[DataContract(Name = "EventData", Namespace = "Microsoft.ServiceBus.Messaging")]
class EventData
{
[DataMember(Name = "SequenceNumber")]
public long SequenceNumber { get; set; }
[DataMember(Name = "Offset")]
public string Offset { get; set; }
[DataMember(Name = "EnqueuedTimeUtc")]
public DateTime EnqueuedTimeUtc { get; set; }
[DataMember(Name = "SystemProperties")]
public Dictionary<string, object> SystemProperties { get; set; }
[DataMember(Name = "Properties")]
public Dictionary<string, object> Properties { get; set; }
[DataMember(Name = "Body")]
public byte[] Body { get; set; }
public EventData(dynamic record)
{
SequenceNumber = (long)record.SequenceNumber;
Offset = (string)record.Offset;
DateTime.TryParse((string)record.EnqueuedTimeUtc, out var enqueuedTimeUtc);
EnqueuedTimeUtc = enqueuedTimeUtc;
SystemProperties = (Dictionary<string, object>)record.SystemProperties;
Properties = (Dictionary<string, object>)record.Properties;
Body = (byte[])record.Body;
}
}
When you're reading your avro file, you can read it as a dynamic object and then serialize it. Here's an example:
var reader = AvroContainer.CreateGenericReader(stream);
while (reader.MoveNext())
{
foreach (dynamic record in reader.Current.Objects)
{
var eventData = new EventData(record);
var sequenceNumber = eventData.SequenceNumber;
var bodyText = Encoding.UTF8.GetString(eventData.Body);
var properties = eventData.Properties;
var sysProperties = eventData.SystemProperties;
}
}
You can refer to this answer for more details.
A: I used both the Microsoft.Hadoop.Avro and apache avro C# libs and they seemed to have the same exact issue. When just trying to read the sequence, offset, and EnqueuedTimeUTC they both get the same garbled data that appears to be the codec and schema definition data. So here's what I found out. I was downloading the blob to a memorystream and then trying to deserialize from there. The issue is that the deserializer was not taking into account the header and schema in the file and was trying to deserialize from the very beginning of the stream.
To solve this and what worked was to use the Apache Avro C# library and use their gen tool to create the C# class based off of the dumped json formatted schema and then use a DataFileReader that can read from the stream.
using (var dataFileReader = Avro.File.DataFileReader<EventData>.OpenReader(stream, evtSample.Schema))
where evtSample.Schema is an instance of the EventData class which contains it's schema.
Now to find out if I can do the same thing with the Microsoft.Hadoop.Avro library.
BTW, here is the generated C# class output from the Apache AVRO gen tool:
public partial class EventData : ISpecificRecord
{
public static Schema _SCHEMA = Avro.Schema.Parse(@"{""type"":""record"",""name"":""EventData"",""namespace"":""Microsoft.ServiceBus.Messaging"",""fields"":[{""name"":""SequenceNumber"",""type"":""long""},{""name"":""Offset"",""type"":""string""},{""name"":""EnqueuedTimeUtc"",""type"":""string""},{""name"":""SystemProperties"",""type"":{""type"":""map"",""values"":[""long"",""double"",""string"",""bytes""]}},{""name"":""Properties"",""type"":{""type"":""map"",""values"":[""long"",""double"",""string"",""bytes"",""null""]}},{""name"":""Body"",""type"":[""null"",""bytes""]}]}");
private long _SequenceNumber;
private string _Offset;
private string _EnqueuedTimeUtc;
private IDictionary<string, System.Object> _SystemProperties;
private IDictionary<string, System.Object> _Properties;
private byte[] _Body;
public virtual Schema Schema
{
get
{
return EventData._SCHEMA;
}
}
public long SequenceNumber
{
get
{
return this._SequenceNumber;
}
set
{
this._SequenceNumber = value;
}
}
public string Offset
{
get
{
return this._Offset;
}
set
{
this._Offset = value;
}
}
public string EnqueuedTimeUtc
{
get
{
return this._EnqueuedTimeUtc;
}
set
{
this._EnqueuedTimeUtc = value;
}
}
public IDictionary<string, System.Object> SystemProperties
{
get
{
return this._SystemProperties;
}
set
{
this._SystemProperties = value;
}
}
public IDictionary<string, System.Object> Properties
{
get
{
return this._Properties;
}
set
{
this._Properties = value;
}
}
public byte[] Body
{
get
{
return this._Body;
}
set
{
this._Body = value;
}
}
public virtual object Get(int fieldPos)
{
switch (fieldPos)
{
case 0: return this.SequenceNumber;
case 1: return this.Offset;
case 2: return this.EnqueuedTimeUtc;
case 3: return this.SystemProperties;
case 4: return this.Properties;
case 5: return this.Body;
default: throw new AvroRuntimeException("Bad index " + fieldPos + " in Get()");
};
}
public virtual void Put(int fieldPos, object fieldValue)
{
switch (fieldPos)
{
case 0: this.SequenceNumber = (System.Int64)fieldValue; break;
case 1: this.Offset = (System.String)fieldValue; break;
case 2: this.EnqueuedTimeUtc = (System.String)fieldValue; break;
case 3: this.SystemProperties = (IDictionary<string, System.Object>)fieldValue; break;
case 4: this.Properties = (IDictionary<string, System.Object>)fieldValue; break;
case 5: this.Body = (System.Byte[])fieldValue; break;
default: throw new AvroRuntimeException("Bad index " + fieldPos + " in Put()");
};
}
}
}
| |
doc_1154
|
id | value1 | value2
-------------------------
1 | AAAA | NULL
2 | NULL | BBBB
3 | CCCC | DDDD
required to get it:
id | value
--------------
1 | AAAA
2 | BBBB
3 | CCCC
3 | DDDD
How can I get this with SQL?
A: Try something like this
select Id, value1 from yourtable
Where value1 is not null
Union all
select Id, value2 from yourtable
Where value2 is not null
A: It should work with a UNION:
SELECT col1, col2 FROM table WHERE col2 IS NOT NULL
UNION
SELECT col3, col4 FROM table WHERE col4 IS NOT NULL
Use UNION ALL if you want to keep duplicates
A: You can try something like this
insert into #temp (id, value1, value2) values
(1,'AAAA', NULL)
,(2,NULL,'BBBB')
,(3, 'CCCC','DDDD')
select id, value1 as value from #temp where value1 is not null
union all
select id, value2 as value from #temp where value2 is not null
A: This is also helpful: use IFNULL for MySQL and ISNULL for SQL Server.
SELECT id, IFNULL(value1, value2) val1 FROM table1
UNION ALL
SELECT id, value2 FROM table1 WHERE value1 IS NOT NULL AND value2 IS NOT NULL
| |
doc_1155
|
using namespace std;
//template<class T>
class Node
{
public:
// T data;
int data;
Node *next;
int priority;
};
//template<class T>
class Que
{
Node *front , *rear;
public:
Que(){front = rear = NULL;}
// void enqueue(T x);
// T dequeue();
void enqueue(int *x, int l);
int dequeue();
void display();
};
//template<class T>
//void Que<T>::enqueue(T x)
void Que::enqueue(int *x, int l)
{
Node * pt = front;
for(int i=0; i<l; i++){
Node *t = NULL;
t = new Node;
if(t==NULL)
cout<<"Queue is full"<<endl;
else
{
t->next = NULL;
t->data = x[i];
t->priority = x[i];
if(front==NULL)
front = rear =t;
else
{
if(front->priority <= t->priority)
{
t->next = front;
front = t;
}
else
{
while(pt->next!= NULL && pt->next->priority <= x[i])
pt = pt->next;
t->next = pt->next;
pt->next = t;
}
}
}
}
}
//template<class T>
//T Que<T>::dequeue()
int Que::dequeue()
{
// T x = -1;
int x = -1;
Node *t = NULL;
if(front == NULL)
cout<<"Queue is empty"<<endl;
else
{
x = front->data;
t = front;
front = front->next;
delete t;
}
return x;
}
//template<class T>
//void Que<T>::display()
void Que::display()
{
Node *t = front;
while(t)
{
cout<<t->data<<" ";
t = t->next;
}
cout<<endl;
}
int main()
{
// Que <int> q();
Que q;
int a [] = {6, 1, 2, 5, 4, 3};
q.enqueue(a, sizeof(a)/sizeof(a[0]));
// q.dequeue();
q.display();
return 0;
}
This is code for a priority queue using a linked list in C++. The while loop inside the enqueue member function causes a segmentation fault.
I think the pointer pt, which is used to walk from the front, is not pointing correctly. I have been trying to resolve it but can't figure out why. What could the reason be?
A: You initialize pt at the start of enqueue but never reset it within the loop. So when you add multiple elements to an empty list, pt will start as nullptr, and won't be updated after the first element is added to the list. When you try to add the second element to the list you dereference pt->next which cause your segmentation fault because pt is still nullptr.
The fix is easy: move Node * pt = front; to within the for loop.
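A minimal sketch of the fixed loop:
void Que::enqueue(int *x, int l)
{
for(int i = 0; i < l; i++){
Node *pt = front; // re-read the current front for every element inserted
Node *t = new Node;
// ... the rest of the insertion logic stays the same
}
}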
| |
doc_1156
|
*
*So, should I just ignore that and use the default response?
*What is its purpose?
(I'm listening to Windows events/messages in Qt (C++) to shut down some launched processes, but that's just the context and shouldn't have any bearing here...)
A: Yes, a WM_ENDSESSION with wParam==false is simply for information. Prior to receiving this, your application will have received a WM_QUERYENDSESSION. If you did something to get ready to shut down in response to the WM_QUERYENDSESSION, you can un-do it when/if you received a WM_ENDSESSION with wParam=false. If you haven't taken any steps to start shutting down, you can just return 0.
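A minimal sketch of the corresponding window-procedure handling (plain Win32; in Qt the same checks would go inside nativeEvent):
case WM_QUERYENDSESSION:
// get ready to shut down (flush state, stop child processes, ...)
return TRUE; // allow the session to end
case WM_ENDSESSION:
if (wParam == FALSE) {
// the shutdown was cancelled: undo the preparations above
} else {
// the session really is ending now
}
return 0;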
| |
doc_1157
| ||
doc_1158
|
// top-rated-movies.component.ts
// ... ... ...
import {TopRatedMoviesService} from '../../services/top-rated-movies.service';
//... ... ...
export class TopRatedMoviesComponent implements OnInit {
topRatedMovies: Object;
pageNum = 1;
constructor(private _topMoviesService: TopRatedMoviesService) { }
ngOnInit(): void {
this._topMoviesService.getPopularMovies().subscribe(data => {
this.topRatedMovies = data;
});
}
onScrollDown() {
this.pageNum++;
this._topMoviesService.getPopularMovies(this.pageNum)
.subscribe(data => this.topRatedMovies = data);
}
}
A: Make an array, push the data coming from the response onto this array, and assign that array to your view component; hope it works.
A: you can try like this
// top-rated-movies.component.ts
// ... ... ...
import {TopRatedMoviesService} from '../../services/top-rated-movies.service';
//... ... ...
export class TopRatedMoviesComponent implements OnInit {
topRatedMovies: any[] = [];
pageNum = 1;
constructor(private _topMoviesService: TopRatedMoviesService) { }
ngOnInit(): void {
this._topMoviesService.getPopularMovies().subscribe(data => {
this.topRatedMovies = data;
});
}
onScrollDown() {
this.pageNum++;
this._topMoviesService.getPopularMovies(this.pageNum)
.subscribe(data => {
this.topRatedMovies.push(data); // the problem with your code was that each response replaced the old data; pushing onto an array keeps the previous results
});
}
}
I hope it helps you out
| |
doc_1159
|
Property[Caption] = null
Property[ClassGuid] = {4d36e978-e325-11ce-bfc1-08002be10318}
Property[CompatID] = BTHENUM\{00001101-0000-1000-8000-00805f9b34fb}
Property[CreationClassName] = null
Property[Description] = Standard Serial over Bluetooth link
Property[DeviceClass] = PORTS
Property[DeviceID] = BTHENUM\{00001101-0000-1000-8000-00805F9B34FB}_LOCALMFG&0002\7&2B70A8A8&0&88C626AD9497_C00000000
Property[DeviceName] = Standard Serial over Bluetooth link
Property[DevLoader] = null
Property[DriverDate] = 20060621000000.******+***
Property[DriverName] = null
Property[DriverProviderName] = Microsoft
Property[DriverVersion] = 10.0.15063.0
Property[FriendlyName] = Standard Serial over Bluetooth link (COM5)
Property[HardWareID] = BTHENUM\{00001101-0000-1000-8000-00805f9b34fb}_LOCALMFG&0002
Property[InfName] = bthspp.inf
Property[InstallDate] = null
Property[IsSigned] = true
Property[Location] = null
Property[Manufacturer] = Microsoft
Property[Name] = null
Property[PDO] = \Device\BthModem0
Property[Signer] = Microsoft Windows
Property[Started] = null
Property[StartMode] = null
Property[Status] = null
Property[SystemCreationClassName] = null
Property[SystemName] = null
What I would like to do is find the corresponding Bluetooth device and unpair it programmatically (C# or C++). So far I haven't managed to find any way to do so. Uninstalling the ports doesn't automatically unpair the device.
I tried to map these properties to properties of the BT device driver and found no matches...
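For what it's worth, a minimal C++ sketch of one way to unpair is the Win32 Bluetooth API's BluetoothRemoveDevice (BluetoothAPIs.h, link against Bthprops.lib), which removes the pairing for a given radio address; the MAC below is taken from the DeviceID shown above and is only an example:
#include <windows.h>
#include <BluetoothAPIs.h>
#pragma comment(lib, "Bthprops.lib")

int main()
{
    // MAC 88:C6:26:AD:94:97 from the DeviceID above; rgBytes[0] is the least
    // significant byte of the address.
    BLUETOOTH_ADDRESS addr = {};
    addr.rgBytes[5] = 0x88; addr.rgBytes[4] = 0xC6; addr.rgBytes[3] = 0x26;
    addr.rgBytes[2] = 0xAD; addr.rgBytes[1] = 0x94; addr.rgBytes[0] = 0x97;

    // Removes the pairing (bonding) information for the device.
    DWORD result = BluetoothRemoveDevice(&addr);
    return result == ERROR_SUCCESS ? 0 : 1;
}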
| |
doc_1160
|
Problem is, I'm calling it like this:
export default {
middleware: 'auth',
and it is returning me the following warning:
callback-based asyncData, fetch or middleware calls are deprecated. Please switch to promises or async/await syntax
I'm new to the front-end world, and I searched but couldn't find/understand how to implement this async/await syntax in my middleware call. Can you help me?
Thanks in advance.
A: Faced a similar problem. I also use middleware: ['lang']; I got the same error and for a long time could not understand why it happened, since I had not changed anything in the code. It turned out that in lang.js I was mistakenly receiving a second argument, req:
export default async function ({ isHMR, app, store }, req) {
}
Only a server middleware function can take multiple arguments:
module.exports = function (req, res, next) {
  // only server middleware receives (req, res, next)
  next()
}
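For reference, a regular route middleware written with async/await (no callback argument) might look like this minimal sketch; the store action and redirect target are hypothetical:
// middleware/auth.js
export default async function ({ store, redirect }) {
  // await asynchronous work directly instead of using a callback
  const loggedIn = await store.dispatch('checkAuth') // hypothetical action
  if (!loggedIn) {
    return redirect('/login')
  }
}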
| |
doc_1161
|
**Example:**
ticker=['HF (NYSE) (81%);BPO (NEW YORK)]']
**Expected Output:**
Tickercode-HF;BPO
StockCode-NYSE;NEW YORK
Relevancescore-81;0
**My code**:
Tickercode=[x for x in ticker if re.match(r'[\w\.-]+[\w\.-]+', x)]
Stockcode=[x for x in ticker if re.match(r'[\w\.-]+(%)+[\w\.-]+', x)]
Relevancescore=[x for x in ticker if re.match(r'[\w\.-]+(%)+[\w\.-]+', x)]
**My output:**
['HF (NYSE) (81%);BPO (NEW YORK)]']
[]
[]
But I am getting the wrong output. Please help me to resolve the issue.
Thanks
A: First, each item of ticker contains multiple records separated by semicolons, so I recommend normalizing ticker. Then iterate over the strings and extract the info using the
pattern '(\w+) \(([\w ]+)\)( \(([\d]+)%\))?'.
import re
ticker=['HF (NYSE) (81%);BPO (NEW YORK)]']
ticker=[y for x in ticker for y in x.split(';')]
Tickercode=[]
Stockcode=[]
Relevancescore=[]
for s in ticker:
m = re.search(r'(\w+) \(([\w ]+)\)( \(([\d]+)%\))?', s)
Tickercode.append(m.group(1))
Stockcode.append(m.group(2))
Relevancescore.append(m.group(4))
print(Tickercode)
print(Stockcode)
print(Relevancescore)
Output:
['HF', 'BPO']
['NYSE', 'NEW YORK']
['81', None]
Update:
Using re.search instead of re.match which will match pattern from start of string. Your input have a leading white space, causing it failed.
You can add this to print which string doesn't match.
if m is None:
print('%s cannot be matched' % s)
continue
A: The problem with your code is that you're building up each of your lists from the input. You're telling it, "make a list of the input if the input matches my regular expression". The re.match() only matches against the beginning of a string, so the only regex that matches is the one that matches against the ticker symbol itself.
I've reorganized your code a bit below to show how it can work.
*
*Use re.compile() so the regex doesn't have to be created each time
*Use re.search() so you can find your embedded patterns
*Use match.group(1) to get the matching part of the query, not the whole of the input.
*Break up your input so you're only handling one group at a time
#!/usr/bin/env python
import re
# Example:
ticker=['HF (NYSE) (81%);BPO (NEW YORK)]']
# **Expected Output:**
# Tickercode-HF;BPO
# StockCode-NYSE;NEW YORK
# Relevancescore-81;0
tickercode=[]
stockcode=[]
relevancescore=[]
ticker_re = re.compile(r'^\s*([A-Z]+)')
stock_re = re.compile(r'\(([\w ]+)\)')
relevance_re = re.compile(r'\((\d+)%\)')
for tick in ticker:
for stockinfo in tick.split(";"):
ticker_match = ticker_re.search(stockinfo)
stock_match = stock_re.search(stockinfo)
relevance_match = relevance_re.search(stockinfo)
ticker_code = ticker_match.group(1) if ticker_match else ''
stock_code = stock_match.group(1) if stock_match else ''
relevance_score = relevance_match.group(1) if relevance_match else '0'
tickercode.append(ticker_code)
stockcode.append(stock_code)
relevancescore.append(relevance_score)
print 'Tickercode-' + ';'.join(tickercode)
print 'StockCode-' + ';'.join(stockcode)
print 'Relevancescore-' + ';'.join(relevancescore)
| |
doc_1162
|
Consider the following lines:
'this is a valid line
'this is also a valid line
Any string ' this is an invalid line
I Need a regular expression that matches the first two line and does not match third line.
Basic regex tried was '.* but it matches all the three lines:
So I need a regex that does not match when a non-empty string appears before the '
A: perhaps try anchoring at the beginning... ^\s*'.*
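A quick way to verify the suggested pattern (a sketch using Python's re module, since the question doesn't name a language):
import re

lines = [
    "'this is a valid line",
    "'this is also a valid line",
    "Any string ' this is an invalid line",
]
pattern = re.compile(r"^\s*'.*")
for line in lines:
    # prints True, True, False
    print(bool(pattern.match(line)), line)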
| |
doc_1163
|
import numpy as np
import pandas as pd
class Aclass:
pass
df = pd.DataFrame(np.random.rand(8,2),columns=['a','b'])
This works:
Aclass.a = df['a']
Aclass.a is df['a']
Out[51]: True
But not this:
Aclass.a = df['a'].values
Aclass.a is df['a'].values
Out[54]: False
I want to do this as a way to incrementally include pandas into a project without getting hit with too much extra memory usage.
A: Actually, in this case, you're not making a copy of the data, just the array "container".
There are lots of cases where df.values will return a copy (e.g. different data types for different columns or any case where the data isn't contiguous in memory), but for a simple series or a DataFrame with one datatype, it returns a view of the data.
Even if the array objects are different, they point to the same data buffer. Only a few extra bytes of memory are used.
For example:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(8,2),columns=['a','b'])
# Every time you call `values` a new array object is created:
print df.a.values is df.a.values # This will be False
# But the data is _not_ copied:
x = df['a'].values
y = df.a.values
print np.may_share_memory(x, y) #This will be True
# And if we modify "x" or "y", we'll modify the original data frame:
x[0] = -9
y[-1] = -8
print df
# However, this only holds for cases where the data can be
# viewed as a numpy array.
# This will modify the original dataframe:
z = df.values
z[0,:] = -5
print df
# But this won't, because the types are different and "values" returns
# a copy:
df['b'] = df['b'].astype(int)
arr = df.values
arr[0,:] = 10
print df
| |
doc_1164
|
Automatic sorting didn't work, so I tried to tackle it programmatically.
The Sort method takes a non-generic IComparer which I created and used, but I get the error:
DataGridView control is data-bound. The control cannot use the
comparer to perform the sort operation.
Any ideas how I can get this to sort?
Edit: More research shows that you should be sorting the source in a scenario where it's bound. I am using a BindingSource but it has no Sort() method.
A: There's a method called Sort on the DataGridView. Use the method like this:
this.dataGridView1.Sort(this.dataGridView1.Columns["Name"],ListSortDirection.Ascending);
Hope this helps:)
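Regarding the edit about sorting the source: a minimal sketch of that approach (Person and bindingSource are hypothetical names) is to sort the list behind the BindingSource and then refresh the bindings:
using System;
using System.Collections.Generic;
using System.Windows.Forms;

class Person { public string Name { get; set; } }

class Demo
{
    static void SortByName(BindingSource bindingSource)
    {
        // Sort the underlying list itself, then tell bound controls to refresh.
        var people = (List<Person>)bindingSource.DataSource;
        people.Sort((a, b) => string.Compare(a.Name, b.Name, StringComparison.Ordinal));
        bindingSource.ResetBindings(false);
    }
}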
| |
doc_1165
|
Then I created an Azure API management resource and added this App service as API. After the deployment, all the methods are empty. I do not see any endpoints. What can I do to fix this issue ?
I added the API to Azure API management by following steps:
*
*Under API Management > APIs
*
*
*Add API
*Select app Service
*In the pop-up browse for App Service & provide display name, name, API URL suffix and click Create.
A: You have to install Swashbuckle in your App Service application so that it exposes an endpoint where APIM can find the OpenAPI spec (the Swagger definition) of your APIs and use it to import them into APIM.
Assuming you use .NET Core, you have to install the Swashbuckle packages (Swashbuckle.AspNetCore) and insert the lines of code below in your Startup.cs (the Configure method):
app.UseSwagger(options =>
{
options.SerializeAsV2 = true; // this is optional to control the swagger version
});
app.UseSwaggerUI(); //optional
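For completeness, the generator also has to be registered on the services side of Startup.cs; a minimal sketch assuming Swashbuckle.AspNetCore:
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    // Registers the Swagger/OpenAPI document generator consumed by UseSwagger()
    services.AddSwaggerGen();
}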
| |
doc_1166
|
viewModelScope.launch {
repository.cacheAccount(person)
.flatMapConcat { it->
Log.d(App.TAG, "[2] create account call (server)")
repository.createAccount(person)
}
.flatMapConcat { it ->
if (it is Response.Data) {
repository.cacheAccount(it.data)
.collect { it ->
// no op, just execute the command
Log.d(App.TAG, "account has been cached")
}
}
flow {
emit(it)
}
}
.catch { e ->
Log.d(App.TAG, "[3] get an exception in catch block")
Log.e(App.TAG, "Got an exception during network call", e)
state.update { state ->
val errors = state.errors + getErrorMessage(PersonRepository.Response.Error.Exception(e))
state.copy(errors = errors, isLoading = false)
}
}
.collect { it ->
Log.d(App.TAG, "[4] collect the result")
updateStateProfile(it)
}
}
*
*cache an account on the local disk
*create an account on the backend
*in positive scenario, cache the newly create account in the local disk
Now I have to add more calls to a new API endpoint, and the scenario becomes even more sophisticated. This endpoint is an Ethereum chain.
4a. In the positive scenario, put the initiated transaction in the local disk (cache) via cacheRepository.createChainTx().
4b. In the negative scenario, just pass the response from the backend further along.
4a.->5. Register the user on the 2nd endpoint via repository.registerUser().
*The response from the 2nd endpoint is put in the cache by updating the existing row. Even negative cases, except for exceptions, should be cached to update the status of the tx.
viewModelScope.launch {
lateinit var newTx: ITransaction
cacheRepository.createChainTxAsFlow(RegisterUserTransaction(userWalletAddress = userWalletAddress))
.map { it ->
newTx = it
repository.registerUserOnSwapMarket(userWalletAddress)
}
.onEach { it -> preProcessResponse(it, newTx) }
.flowOn(backgroundDispatcher)
.collect { it -> processResponse(it) }
}
This is a scenario which should be integrated into the 1st Flow chain.
The issue is that I do not see how to do it cleanly in a Flow chain. I can rewrite the code without chaining, but that also brings in a variety of if-else statements.
How would you do this scenario in a human-readable way?
A: I ended up with this code for the transition period:
viewModelScope.launch(backgroundDispatcher) {
try {
var cachedPersonProfile = repository.cacheAccount(person)
var createAccountResponse = repository.createAccount(person)
when(createAccountResponse) {
is Response.Data -> {
repository.cacheAccount(createAccountResponse.data)
val cachedTx = cacheRepository.createChainTx(RegisterUserTransaction(userWalletAddress = person.userWalletAddress))
val chainTx = walletRepository.registerUserOnSwapMarket(userWalletAddress = person.userWalletAddress)
when(chainTx) {
is ru.home.swap.core.network.Response.Data -> {
if (chainTx.data.isStatusOK()) {
cachedTx.status = TxStatus.TX_MINED
} else {
cachedTx.status = TxStatus.TX_REVERTED
}
}
is ru.home.swap.core.network.Response.Error.Message -> {
cachedTx.status = TxStatus.TX_EXCEPTION
}
is ru.home.swap.core.network.Response.Error.Exception -> {
cachedTx.status = TxStatus.TX_EXCEPTION
}
}
cacheRepository.createChainTx(cachedTx)
withContext(Dispatchers.Main) {
state.update { state ->
if (cachedTx.status == TxStatus.TX_MINED) {
state.copy(
isLoading = false,
profile = createAccountResponse.data,
status = StateFlagV2.PROFILE
)
} else {
val txError = "Failed register the profile on chain with status ${TxStatus.TX_MINED}"
state.copy(
isLoading = false,
errors = state.errors + txError
)
}
}
}
}
else -> { updateStateProfile(createAccountResponse) }
}
} catch (ex: Exception) {
withContext(Dispatchers.Main) {
state.update { state ->
val errors = state.errors + getErrorMessage(PersonRepository.Response.Error.Exception(ex))
state.copy(errors = errors, isLoading = false)
}
}
}
}
If you have a better alternative, please share it in the post as an answer.
| |
doc_1167
|
Emp_id, Emp_Name, Region_Code
Due to a data load failure, there were duplicate entries in the master table.
I found that the duplicate entries are for Region_Code = 5, 10 & 13.
How can I find the duplicate Emp_id values for these?
Also how can I write a query to find duplicates in a given table?
A: The following solution works on SQL Server 2005 and later versions:
-- Find Duplicate Rows
SELECT
Emp_id,
Emp_Name,
MAX(Region_Code) as Region_Code
FROM
Employee_Master
GROUP BY
Emp_id,
Emp_Name
HAVING
COUNT(*) > 1
-- Delete Duplicate Rows
DELETE FROM
Employee_Master
WHERE
Region_Code IN
(
SELECT
MAX(Region_Code)
FROM
Employee_Master
GROUP BY
Emp_id,
Emp_Name
HAVING
COUNT(*) > 1
)
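An alternative sketch using ROW_NUMBER() (also SQL Server 2005+), which keeps one arbitrary row per Emp_id/Emp_Name pair instead of relying on MAX(Region_Code):
-- Find and delete duplicates, keeping the first row per group
WITH ranked AS
(
    SELECT
        Emp_id,
        Emp_Name,
        Region_Code,
        ROW_NUMBER() OVER (PARTITION BY Emp_id, Emp_Name
                           ORDER BY Region_Code) AS rn
    FROM
        Employee_Master
)
DELETE FROM ranked WHERE rn > 1;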
| |
doc_1168
|
Each df has an id variable, and the time intervals are unique to the id.
I have two data frames, both have a common 'id' variable. The first df has the start and stop time, and the second has the data that I wish to trim by the start and stop time for each id.
df_1:
id start stop
<dbl> <dttm> <dttm>
12 2018-04-10 12:00:00 2018-04-10 18:00:00
18 2018-02-01 04:00:00 2018-02-01 09:00:00
df_1 <- structure(list(id = c(12, 18),
start = structure(c(1523361600, 1517457600),
class = c("POSIXct", "POSIXt"), tzone = "UTC"),
stop = structure(c(1523383200, 1517475600),
class = c("POSIXct", "POSIXt"), tzone = "UTC")),
.Names = c("id", "time_on", "time_off"),
class = c("tbl_df", "tbl", "data.frame"), row.names = c(NA, -2L))
df_2:
id timestamp
12 10/04/2018 11:00
12 10/04/2018 12:00
12 10/04/2018 13:00
12 10/04/2018 14:00
12 10/04/2018 15:00
12 10/04/2018 16:00
12 10/04/2018 17:00
12 10/04/2018 18:00
12 10/04/2018 19:00
12 10/04/2018 20:00
18 01/02/2018 01:00
18 01/02/2018 02:00
18 01/02/2018 03:00
18 01/02/2018 04:00
18 01/02/2018 05:00
18 01/02/2018 06:00
18 01/02/2018 07:00
18 01/02/2018 08:00
18 01/02/2018 09:00
18 01/02/2018 10:00
df_2 <- structure(list(id = c(12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
18, 18, 18, 18, 18, 18, 18, 18, 18, 18),
timestamp = structure(c(1523358000, 1523361600, 1523365200,
1523368800, 1523372400, 1523376000, 1523379600, 1523383200,
1523386800, 1523390400, 1517446800, 1517450400, 1517454000.005,
1517457600.01, 1517461200.015, 1517464800.02, 1517468400.025,
1517472000.03, 1517475600.035, 1517479200.04),
class = c("POSIXct", "POSIXt"), tzone = "UTC")),
.Names = c("id", "timestamp"),
class = c("tbl_df", "tbl", "data.frame"),
row.names = c(NA, -20L))
Using the lubridate package I have created an 'interval' variable:
df_1 <- mutate(df_1,
interval = (interval(start, stop)))
Do I need to use a loop to crop df_2 by the intervals unique to the 'id' variable? Possibly using %within%?
I would like to end up with:
df_2:
12 10/04/2018 12:00
12 10/04/2018 13:00
12 10/04/2018 14:00
12 10/04/2018 15:00
12 10/04/2018 16:00
12 10/04/2018 17:00
12 10/04/2018 18:00
18 01/02/2018 04:00
18 01/02/2018 05:00
18 01/02/2018 06:00
18 01/02/2018 07:00
18 01/02/2018 08:00
18 01/02/2018 09:00
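For reference, a minimal sketch without a loop: join the bounds onto df_2 by id and filter (note that the dput above actually names the columns time_on/time_off, and the fractional seconds on the id 18 rows may need rounding first):
library(dplyr)

df_2 %>%
  inner_join(df_1, by = "id") %>%                         # attach the window per id
  filter(timestamp >= time_on, timestamp <= time_off) %>% # keep rows inside it
  select(id, timestamp)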
| |
doc_1169
|
Here is the difference between browsers:
https://i.imgur.com/7ZoA6Fz.jpg
To try this out for yourself, use Chrome and check out the results from the link below and resize the preview window to trigger the breakpoints.
https://www.codeply.com/go/DUmlhXetYE
If you cannot repeat the problem then maybe it's a browser update issue...
A: If you remove display: flex; on .gallery-items the margin is consistent.
Seems ok in chrome, ff and edge.
| |
doc_1170
|
I've done nothing to the code that VS auto-generates except add the following to Global.asax.cs:
string myself = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
string me = User.Identity.Name;
String myself stores my current windows logon ID successfully. But string me is null, because when I debug, User is null. However, Visual Studio shows no errors before debugging.
How do I get the proper value for string me using User.Identity.Name?
A: Input code in Session_Start :
void Session_Start(object sender, EventArgs e)
{
string myself = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
string me = User.Identity.Name;
Response.Write("myself:" + myself + "<br>me:" + me);
}
A: What authentication are you using? I don't remember off-hand, but anonymous authentication may present a null or null-like object in User.
| |
doc_1171
|
The method below is the solution I came up with. All it does is subtract powers of ten until the original number is less than 1. It works well, but seems like a brute-force approach. Is there a more efficient way to do this?
template< typename T >
class Math
{
public:
static T modf( T const & x, T & intpart )
{
T sub = 1;
T ret = x;
while( x >= sub )
{
sub *= 10;
}
sub /= 10;
while( sub >= 1 )
{
while( ret >= sub )
{
ret -= sub;
}
sub /= 10;
}//while [sub] > 0
intpart = x - ret;
return ret;
}
};
Note that I've removed the sign management code for brevity.
A: You could perhaps replace the subtraction loop with a binary search, although that's not an improvement in complexity class.
What you have requires a number of subtractions approximately equal to the sum of the decimal digits of x, whereas a binary search requires a number of addition-and-divide-by-two operations approximately equal to 3-and-a-bit times the number of decimal digits of x.
With what you're doing and with the binary search, there's no particular reason to use powers of 10 when looking for the upper bound, you could use any number. Some other number might be a bit quicker on average, although it probably depends on the type T.
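As a sketch of that idea (greedy subtraction of powers of two rather than ten, assuming T supports +, -, /, and comparisons, and that x is non-negative), which needs O(log x) subtractions instead of one per unit of each decimal digit:
template <typename T>
T modf_pow2(T const & x, T & intpart)
{
    T rem = x;
    T p = 1;
    while (p + p <= x)   // find the largest power of two <= x
        p += p;
    intpart = 0;
    while (p >= 1)       // peel off each power of two at most once
    {
        if (rem >= p)
        {
            rem -= p;
            intpart += p;
        }
        p /= 2;
    }
    return rem;
}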
Btw, I would also be tempted to make modf a function template within Math (or a free template function in a namespace), rather than Math a class template. That way you can specialize or overload one function at a time for particular types (especially the built-in types) without having to specialize the whole of Math.
Example:
namespace Math
{
template <typename T>
T modf( T const & x, T & intpart )
{ ... }
}
Call it like this:
float f = 1.5, fint;
std::cout << Math::modf(f, fint) << '\n';
double d = 2.5, dint;
std::cout << Math::modf(d, dint) << '\n';
mpf_class ff(3.5), ffint(0); // GNU multi-precision
std::cout << Math::modf(ff, ffint) << '\n';
Overload it like this:
namespace Math {
double modf(double x, double &intpart) {
return std::modf(x, &intpart);
}
mpf_class modf(const mpf_class &x, mpf_class &intpart) {
intpart = floor(x);
return x - intpart;
}
}
A: Maybe using std::modf is better?
For a custom type you can provide a Math class specialization.
#include <cmath>
#include <iostream>
template<typename T>
class Math
{
public:
static T modf(const T& x, T& integral_part)
{
return std::modf(x, &integral_part);
}
};
int main()
{
double d_part = 0.;
double res = Math<double>::modf(5.2123, d_part);
std::cout << d_part << " " << res << std::endl;
}
A: I don't know how strict your "ideally use only mathematical operations" constraint is, but nonetheless, for the fractional part, could you extract it to a string and convert back to a float?
| |
doc_1172
|
EWSService = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
EWSService.TraceListener = tr;
EWSService.TraceFlags = TraceFlags.DebugMessage | TraceFlags.EwsRequest | TraceFlags.EwsResponse;
EWSService.TraceEnabled = true;
EWSService.Credentials = new WebCredentials(user, psw,domain);
EWSService.Url = new Uri("https://----/EWS/Exchange.asmx");
FolderId id = Test(EWSService, "inbox", null);
Folder source = Microsoft.Exchange.WebServices.Data.Folder.Bind(EWSService, id);
List<SearchFilter> slist = new List<SearchFilter>();
slist.Add(new SearchFilter.IsEqualTo(EmailMessageSchema.From, "some@emailaddress.com"));
SearchFilter filter = new SearchFilter.SearchFilterCollection(LogicalOperator.Or, slist);
ItemView messageView = new ItemView(99);
FindItemsResults<Item> list = source.FindItems(filter,messageView);
the list sometimes contains 0 items when I use a specific email address in the searchFilter even when the mail item is present in the folder.
When I don't use a SearchFilter with FindItems it is present in the list.
How come the SearchFilter is not working ?
A: First off.
You DON'T need a List of SearchFilter if you only want to look for ONE email address:
List<SearchFilter> slist = new List<SearchFilter>();
Now on to some recommendations:
*
*I'd recommend using a query string instead of a SearchFilter.
// Find all items where the From: contains "some@emailaddress.com".
string filter= "From:\"some@emailaddress.com\"";
Source: https://msdn.microsoft.com/en-us/library/office/dn579420(v=exchg.150).aspx
*Do not pull 99 items in the ItemView instead pull 20 and use pagination
ItemView messageView = new ItemView(20, 0, OffsetBasePoint.Beginning);
*Load only the properties that you NEED
messageView.PropertySet = BasePropertySet.IdOnly;
*Define how deep do you want to search
messageView.Traversal = ItemTraversal.Shallow
The code below is ONLY an example of how I've used the findItems method in the past for my own projects using VB... FOR DEMONSTRATION PURPOSES
Private Function GetAllSyncedContactIdsInExchange(pService As ExchangeService) As List(Of Integer)
Dim oInternalContactIdDefinition As New ExtendedPropertyDefinition(DefaultExtendedPropertySet.PublicStrings, conContactIdPropertyName, MapiPropertyType.Integer)
Dim oInternalContactIdFilter As New SearchFilter.Exists(oInternalContactIdDefinition)
Dim oResults As FindItemsResults(Of Item) = Nothing
Dim oPropertySet As New PropertySet(oInternalContactIdDefinition)
Dim lstSyncedContactIds As New List(Of Integer)
Dim intDBId As Integer
Dim lstEESContactFolders As List(Of FolderId) = GetAllCustomEESFolderIds(pService)
For Each oFolderId As FolderId In lstEESContactFolders
Dim blnMoreAvailable As Boolean = True
Dim intSearchOffset As Integer = 0
Dim oView As New ItemView(conMaxChangesReturned, intSearchOffset, OffsetBasePoint.Beginning)
oView.PropertySet = BasePropertySet.IdOnly
Do While blnMoreAvailable
oResults = pService.FindItems(oFolderId, oInternalContactIdFilter, oView)
blnMoreAvailable = oResults.MoreAvailable
If Not IsNothing(oResults) AndAlso oResults.Items.Count > 0 Then
pService.LoadPropertiesForItems(oResults, oPropertySet)
For Each oExchangeItem As Item In oResults.Items
If oExchangeItem.TryGetProperty(oInternalContactIdDefinition, intDBId) Then
lstSyncedContactIds.Add(intDBId)
End If
Next
If blnMoreAvailable Then oView.Offset = oView.Offset + conMaxChangesReturned
End If
Loop
Next
Return lstSyncedContactIds
End Function
| |
doc_1173
|
has_attached_file :image, styles: { medium: "300x300>", thumb: "100x100>" }
validates_attachment_content_type :image, content_type: /\Aimage\/.*\z/
then in my index.html.erb i got this
<div class="row">
<% @posts.each do |post| %>
<div class="col-md-4">
<div class="card mb-4 shadow-sm ">
<%= image_tag post.image.url(:medium), class: "bd-placeholder-img card-img-top img-fluid" %>
<div class="card-body">
<p class="card-text"><h3><%= post.title %> </h3> <%= post.body %></p>
<div class="d-flex justify-content-between align-items-center">
<div class="btn-group">
<%= link_to "View", post_path(post), :class =>'btn btn-md btn-outline-secondary' %>
<%= link_to "Edit", edit_post_path(post), :class =>'btn btn-md btn-outline-secondary' %>
</div>
<small class="text-muted">9 mins</small>
</div>
</div>
</div>
</div>
<% end %>
</div>
When I upload different images, it doesn't resize them properly: look at this: https://iv.pl/image/resize-ruby.Gt21xMz
I use Paperclip and ImageMagick to upload images.
A: I checked through the docs for Paperclip, and it appears that 300x300> and 100x100> are directives passed to ImageMagick.
https://www.imagemagick.org/script/command-line-processing.php#geometry says that 300x300> will shrink the image so that it fits within 300x300. That means if you have a picture that's 500x300, it'll shrink down to 300x180.
{ medium: "300x300>", thumb: "100x100>" } looks like the line of code to modify, to change your resizing.
The picture that renders correctly is probably close to square in its original form. The picture that does not is probably a wide rectangle.
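If uniform card images are the goal, one common fix (a sketch; note that it crops the image) is to switch the geometry from '>' (shrink to fit) to '#' (resize and crop to fill):
has_attached_file :image, styles: { medium: "300x300#", thumb: "100x100#" }
validates_attachment_content_type :image, content_type: /\Aimage\/.*\z/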
| |
doc_1174
|
For example:
Example input/output
findMyCampsites(campgrounds, 'ocean', 8) //-> [1]
findMyCampsites(campgrounds, 'forest', 4) //-> [18]
findMyCampsites(campgrounds, 'forest', 6) //-> 'Sorry, no campsites with that view are available to host your party'
This is the data set I've been given
let campgrounds = [
{ number: 1, view: 'ocean', partySize: 8, isReserved: false },
{ number: 5, view: 'ocean', partySize: 4, isReserved: false },
{ number: 12, view: 'ocean', partySize: 4, isReserved: true },
{ number: 18, view: 'forest', partySize: 4, isReserved: false },
{ number: 23, view: 'forest', partySize: 4, isReserved: true }
];
I've tried this
function findMyCampsites(campgrounds, location, groupSize) {
var available = []
for (var i = 0; i < campgrounds.length; i++) {
if (campgrounds[i].isReserved == false && campgrounds[i].view == location) {
available.push(campgrounds[i].number)
}
}
return available
}
My issue is that every time I add in the partySize variable, whether through an else/if conditional statement or by just adding it to the existing condition, I come up short on the output, and I have no clue what I'm doing wrong.
I'm admittedly new to this, so I apologize if I'm being unclear; I'll add screenshots to help with that:
A: This works fine for me. Note that you should use let instead of var.
let campgrounds = [
{ number: 1, view: 'ocean', partySize: 8, isReserved: false },
{ number: 5, view: 'ocean', partySize: 4, isReserved: false },
{ number: 12, view: 'ocean', partySize: 4, isReserved: true },
{ number: 18, view: 'forest', partySize: 4, isReserved: false },
{ number: 23, view: 'forest', partySize: 4, isReserved: true }
];
function findMyCampsites(location, groupSize)
{
let available = [];
for (let i = 0; i < campgrounds.length; i++)
{
const campground = campgrounds[i];
if (campground.isReserved == false && campground.view == location && groupSize <= campground.partySize)
{
available.push(campground.number)
}
}
return available;
}
// The following is just for the HTML form
let inputs = document.querySelectorAll('input');
let submit = document.querySelector('button');
let results = document.querySelector('div#results');
submit.addEventListener('click', function()
{
let location = inputs[0].value;
let groupSize = inputs[1].value;
results.innerHTML += '<p><b>' + location + ', ' + groupSize.toString() + '</b>' + ' '.repeat(4) + findMyCampsites(location, groupSize) + '</p>\n';
inputs[0].value = '';
inputs[1].value = '';
});
<input id="location" type="text" placeholder="Location">
<input id="groupSize" type="number" placeholder="Group size">
<button>Find</button>
<div id="results">
</div>
| |
doc_1175
|
let activeUrl = new URL(this.serverAddress);
let targetUrl = activeUrl.origin + environment.apiBasePath + '/named_selection/' + componentId;
let params = new HttpParams().set('name', name);
if (this.customLookupAddress != "") {
params.set('lookup', this.customLookupAddress);
}
if (this.customGatewayAddress != "") {
params.set('gateway', this.customGatewayAddress);
}
return this.httpClient.get(targetUrl, { headers: { 'responseType': 'json' }, params: params }).pipe(
map((namedSelection) => {
this.namedSelections.set(componentId.toString() + '.' + name, namedSelection);
I converted the entire code to use the fetch API. Here is what it looks like now:
let activeUrl = new URL(this.serverAddress);
let targetUrl = activeUrl.origin + environment.apiBasePath + '/named_selection/' + componentId;
let params = new HttpParams().set('name', name);
if (this.customLookupAddress != "") {
params.set('lookup', this.customLookupAddress);
}
if (this.customGatewayAddress != "") {
params.set('gateway', this.customGatewayAddress);
}
const data$ = new Observable(observer => {
fetch(targetUrl, { headers: { 'responseType': 'json'}, method: 'GET'})
.then(response => response.json())
.then(namedSelection => {
observer.next(namedSelection);
observer.complete();
})
.catch(err => observer.error(err));
});
return data$.pipe(
tap((namedSelection) => {
this.namedSelections.set(componentId.toString() + '.' + name, namedSelection);
})
);
}
However, I am unable to pass in the parameters ('params' in this case). Can you please help me out on how to go about it, and how the code should be structured when it comes to the fetch function?
A: To add URL params to a request using fetch, you need to append them to the fetch url (and also set the correct header names):
const params = new URLSearchParams({ name });
if (this.customLookupAddress) {
params.set('lookup', this.customLookupAddress);
}
if (this.customGatewayAddress) {
params.set('gateway', this.customGatewayAddress);
}
fetch(`${targetUrl}?${params}`, { headers: { 'Accept': 'application/json' } })
.then(res => res.json())
// ...
| |
doc_1176
|
jQuery(function ($) {
$('table#<%= MyTable.ClientID %> tr td input[type=text]').filter('input[id*=tCell1]')
.bind('blur', validateCell1);
$('table#<%= MyTable.ClientID %> tr td input[type=text]').filter('input[id*=tCell2]')
.bind('blur', ValidateCell2);
$('table#<%= MyTable.ClientID %> tr td input[type=text]').filter('input[id*=tCell3]')
.bind('blur', ValidateCell3);
$('table#<%= MyTable.ClientID %> tr td input[type=text]').filter('input[id*=tCell4],input[id*=tCell5]')
.bind('blur', ValidateCell4and5);
});
Since I am attaching an "onblur" event to each cell, it is loading slowly. Is there any other way to do this?
Thanks,
sridhar.
A: Instead of using redundant wildcard attribute filters (like input[id*=tCell1]) which tend to slow things down, try selecting elements more directly. If you know the positions of the elements beforehand, it should not be a problem. For example:
$("table tr").each(function() {
$(this).find("td:eq(0) input[type=text]").bind('blur', validateCell1);
$(this).find("td:eq(1) input[type=text]").bind('blur', validateCell2);
$(this).find("td:eq(3) input[type=text]").bind('blur', validateCell3);
$(this).find("td:eq(4) input[type=text]").bind('blur', ValidateCell4and5);
$(this).find("td:eq(5) input[type=text]").bind('blur', ValidateCell4and5);
});
Also, I don't think there is any need to use filter. A well thought-out selector should be enough.
A: you can try with classes:
$(".classMyTable tbody tr").each(function() {
$(".classMyInput_1", $(this)).bind('blur', validateCell1);
$(".classMyInput_2", $(this)).bind('blur', validateCell2);
$(".classMyInput_3", $(this)).bind('blur', validateCell3);
$(".classMyInput_4, .classMyInput_5", $(this)).bind('blur', ValidateCell4and5);
});
If this solution serves you, you should accept @karim79's answer as the correct one, because mine is based on it.
| |
doc_1177
|
A: With the help of Android's startActivityForResult() method, you can get a result from another activity.
With its help you can send information from one activity to another and vice versa: startActivityForResult() requests a result from the second activity (the activity to be invoked).
In such a case, we need to override the onActivityResult method, which is invoked automatically when the second activity returns its result.
MainActivity.java
public class MainActivity extends Activity {
TextView textView1;
Button button1;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textView1=(TextView)findViewById(R.id.textView1);
button1=(Button)findViewById(R.id.button1);
button1.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View arg0) {
Intent intent=new Intent(MainActivity.this,SecondActivity.class);
startActivityForResult(intent, 2); // Activity is started with requestCode 2
}
});
}
// Call Back method to get the Message form other Activity
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
super.onActivityResult(requestCode, resultCode, data);
// check if the request code is same as what is passed here it is 2
if(requestCode==2)
{
String message=data.getStringExtra("MESSAGE");
textView1.setText(message);
}
}
SecondActivity.java
public class SecondActivity extends Activity {
EditText editText1;
Button button1;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_second);
editText1=(EditText)findViewById(R.id.editText1);
button1=(Button)findViewById(R.id.button1);
button1.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View arg0) {
String message=editText1.getText().toString();
Intent intent=new Intent();
intent.putExtra("MESSAGE",message);
setResult(Activity.RESULT_OK,intent);
finish();//finishing activity
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.second, menu);
return true;
}
}
A: // do some work
Intent i = new Intent(this, MainActivity2.class);
startActivityForResult(i, 12);
}
In MainActivity2.class
// after your work complete
Intent i =new Intent();
i.putExtra("result",true);// any data you want to pass
setResult(RESULT_OK,i);
After this, we handle the result:
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
switch(requestCode){
case 12:
if(resultCode == Activity.RESULT_OK){// onsuccess do something
boolean isSuccess = data.getBooleanExtra("result", false);
if(isSuccess)// perform action
{// show toast}
}
}
}
A: From your MainActivity call the TargetActivity using startActivityForResult()-
For example:
Intent intent = new Intent(this, TargetActivity.class);
intent.putExtra(); // set your putExtra data here to pass through the intent
startActivityForResult(intent, 1000);
In your intent set the data which you want to return back to MainActivity. If you don't want to return back any data then you don't need to set any data.
For example:
In TargetActivity if you want to send back data:
Intent returnIntent = new Intent();
returnIntent.putExtra("result", result);
setResult(Activity.RESULT_OK, returnIntent);
finish();
If you don't want to return data:
Intent returnIntent = new Intent();
setResult(Activity.RESULT_CANCELED, returnIntent);
finish();
Now in your MainActivity class write following code for the onActivityResult() method.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == 1000) {
if(resultCode == Activity.RESULT_OK){
String result=data.getStringExtra("result");
}
if (resultCode == Activity.RESULT_CANCELED) {
// Do your task here.
}
}
}
A: I find it best to use callbacks.
in Loader:
Create inner class
MyCallback callback;
void setCallback(MyCallback callback){
this.callback = callback;
}
void onStop(){
callback = null;
}
interface MyCallback{
void doSomething(Params params);
}
in MainActivity:
implement MyCallback
set reference in onCreate
Loader loader = new Loader();
loader.setCallback(this);
override method doSomething()
@override
void doSomething(Params params){
//do your thing with the params…
}
when the job is done inside Loader call MainActivity:
callback.doSomething(params);
destroy reference inside MainActivity in onStop()
loader.onStop();
| |
doc_1178
|
01:30:00 h
03:45:00 h
are summed to 47500.0, because Sum converts the times to integers.
I changed my admin.py like it is described here:
django-admin: Add extra row with totals
class MyChangeList(ChangeList):
def get_results(self, *args, **kwargs):
super(MyChangeList, self).get_results(*args, **kwargs)
q = self.result_list.aggregate(status_sum=Sum('duration'))
self.status_count = q['status_sum']
...
class ActionAdmin(admin.ModelAdmin):
def get_changelist(self, request):
return MyChangeList
class Meta:
model = Status
list_display = ('name', 'duration')
Duration is defined as a TimeField in models.py:
class Action (models.Model):
duration = models.TimeField()
Does somebody know how to change the aggregate() function in MyChangeList?
I think I have to convert the time values to float or integer, compute the sum, and then convert it back.
Any suggestions?
Thanks a lot.
A: I suppose you are using MySQL. It has an old bug on summing up times (I don't know if it's fixed now).
You can store the duration in minutes (or 15-minute blocks if that's OK). Or you can use django-durationfield.
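On newer Django versions there is also a built-in option; a sketch assuming the field is switched to DurationField, which stores a timedelta and aggregates correctly:
from django.db import models
from django.db.models import Sum

class Action(models.Model):
    duration = models.DurationField()  # stored as a timedelta

# Sum() over a DurationField yields a datetime.timedelta
total = Action.objects.aggregate(total=Sum('duration'))['total']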
| |
doc_1179
|
Possible Duplicate:
Print an XML document without the XML header line at the top
I'm trying to create a fragment of XML using the Nokogiri::XML::Builder but I can't find any documentation on how to exclude the processing instruction (<?xml version=...)
Can anyone point me in the right direction?
A: Now I can answer:
doc.to_xml :save_with => Nokogiri::XML::Node::SaveOptions::NO_DECLARATION
| |
doc_1180
|
However, if the user leaves the app (placing it into the background) and then later comes back to it, the nested map fragment is no longer displayed. It's as if someone called map.setVisibility(View.INVISIBLE) (not View.GONE) on it (except that this wasn't called - I'm just stating it to put the issue in context).
Here is the relevant code for the first fragment (note that I'm leaving a lot of code out, so please don't worry if there are any variables that aren't initialized in what you see here):
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
Log.d("resume", "today onCreateView");
view = inflater.inflate(R.layout.fragment_today, container, false);
fragmentManager = getFragmentManager();
return view;
}
@Override
public void onResume() {
super.onResume();
addMapFragment(new LatLng(location.getLatitude(), location.getLongitude()), location.getLocationName());
}
private void addMapFragment(LatLng latLng, String marker_name) {
Log.d("resume", Double.toString(latLng.latitude));
Bundle mapBundle = new Bundle();
mapBundle.putDouble("lat", latLng.latitude);
mapBundle.putDouble("lng", latLng.longitude);
mapBundle.putString("marker_title", marker_name);
// capture this for use in getting directions if requested
destinationLatLng = new LatLng(latLng.latitude, latLng.longitude);
mMapFragment = new MapFragment();
mMapFragment.setArguments(mapBundle);
fragmentTransaction = fragmentManager.beginTransaction();
fragmentTransaction.add(R.id.map_today, mMapFragment);
fragmentTransaction.commitAllowingStateLoss();
Log.d("resume", "map added");
}
fragment_today.xml
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
tools:context="com.mysite.myapp.Activities.ViewControllers.TodayFragment"
android:id="@+id/today_background"
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<FrameLayout
android:id="@+id/title_frame_today"
android:layout_width="match_parent"
android:layout_height="wrap_content"/>
<FrameLayout
android:id="@+id/map_today"
android:layout_width="match_parent"
android:layout_height="150dp"
android:paddingLeft="15dp"
android:paddingRight="15dp" />
<!-- lots of other textviews follow. omitted for brevity-->
</LinearLayout>
</RelativeLayout>
fragment_map.xml
<?xml version="1.0" encoding="utf-8"?>
<fragment
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:map="http://schemas.android.com/apk/res-auto"
android:id="@+id/fragment_map"
android:name="com.google.android.gms.maps.SupportMapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"
map:mapType="normal"
map:uiCompass="true"
map:uiRotateGestures="true"
map:uiScrollGestures="true"
map:uiTiltGestures="true"
map:uiZoomControls="true"
map:uiZoomGestures="true" />
Then, finally, in the map fragment, I have this code. I've played around with where I put the code to generate the map (switching between onCreateView and onResume). I'm putting as much code in onResume as possible. Note that none of the values ever changes between when the user places the app into the background and when they return to it, so that's not an issue:
MapFragment.java
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
Log.d("resume", "map oncreateview");
view = inflater.inflate(R.layout.fragment_map, container, false);
// set the title text (if given by parent fragment)
Bundle bundle = getArguments();
if (bundle != null) {
if (bundle.containsKey("lat") && bundle.containsKey("lng")) {
myLocation = new LatLng(bundle.getDouble("lat"), bundle.getDouble("lng"));
}
if (bundle.containsKey("marker_title"))
markerTitle = bundle.getString("marker_title");
}
return view;
}
@Override
public void onResume() {
super.onResume();
Log.d("resume", "map onResume");
if (myLocation != null && markerTitle != null) {
CameraPosition cameraPosition = new CameraPosition.Builder()
.target(myLocation)
.zoom(12)
.tilt(30)
.build();
GoogleMap map = ((SupportMapFragment) getChildFragmentManager().findFragmentById(R.id.fragment_map)).getMap();
Marker marker = map.addMarker(new MarkerOptions()
.position(myLocation)
.title(markerTitle)
);
marker.showInfoWindow();
map.setBuildingsEnabled(true);
map.animateCamera(CameraUpdateFactory.newCameraPosition(cameraPosition));
}
}
At runtime, this all generates the following output:
on initial instantiation
today onResume
current 44.915647
map added
map oncreateview
map onResume
when the app goes into the background
today onPause
onStop
when the app comes back into the foreground
today onCreateView
map oncreateview
today onCreateView
Activity.onPostResume() called
today onResume
current 44.915647
map added
map onResume
today onResume
current 44.915647
map added
map oncreateview
map onResume
map oncreateview
map onResume
I think that this all looks good, although it's curious that 'map oncreateview' and 'map onResume' appear twice (I don't think that this is an issue).
Long post, but an interesting dilemma. Thanks to anyone who got this far!
A: Try using the new API which gets your map asynchronously.
Have a reference to your Google Map object. In onResume(), check if it is null and then get your map.
Also, I suggest adding your child map as you would a regular fragment. That is, instead of using a <fragment> in the layout, use a FrameLayout and add a (Support)MapFragment to it. I've found a lot of problems when using nested child fragments in my fragments.
So, I suggest something like this:
private GoogleMap googleMap;
private void setUpMapIfNeeded(){
if(googleMap == null){
SupportMapFragment mapFragment = SupportMapFragment.newInstance();
// mapContainerLayout would be some FrameLayout to replace the <fragment> you use for map
getChildFragmentManager().beginTransaction().replace(R.id.mapContainerLayout, mapFragment, "MapTag").commit();
mapFragment.getMapAsync(this);
}
}
@Override
public void onMapReady(GoogleMap map) {
this.googleMap = map;
// Do other stuff as needed
}
Then simply call setUpMapIfNeeded() in onResume().
EDIT: change your fragmentManager to getChildFragmentManager();
| |
doc_1181
|
However, how do you friend a template parameter's constructor?
class BeMyFriend
{
public:
BeMyFriend& operator=(const BeMyFriend& rhs) = default;
BeMyFriend(const BeMyFriend& rhs) = default;
};
template<class T>
class Test
{
friend T& T::operator=(const T&); //Works fine, no error
friend T::T(const T&); //error: prototype for 'BeMyFriend::BeMyFriend(const BeMyFriend&)' does not match any in class 'BeMyFriend'
};
int main()
{
Test<BeMyFriend> hmm;
return 0;
}
I'm able to friend the template parameter's operator= just fine, but I'm unable to friend T::T(const T&).
How can I make friend T::T(const T&); work?
edit:
This appears to be a different issue than what is solved in Make Friend the constructor of a template class. The issue there is dealing with circular template parameters in a declaration. It doesn't deal with the constructor of an actual template parameter.
The type Foo there is a normal templated class, not a template parameter like T in my example. Something like friend Foo<B>::Foo<B>() from that submission should compile just fine, unlike the issue I'm having here with friend T::T(const T&).
edit:
In case this ends up mattering, I'm compiling with gcc 7.2.
edit:
I also want to clarify that C++ does support making constructors friends. For example, friend X::X(char), X::~X(); in the first example at http://en.cppreference.com/w/cpp/language/friend.
The issue here is how to make a template parameter's constructor a friend.
A: Somehow I think T::T(const T&) isn't parsed well by the compiler, because it doesn't treat the T:: before the member constructor T() as a reference to an external class. This is plainly visible in the error console: ISO C++ forbids declaration of 'T' with no type and prototype for 'BeMyFriend::BeMyFriend(const BeMyFriend&)' does not match any in class 'BeMyFriend', where the compiler is blatantly trying to find a definition or declaration exported from outside the class. Thus, T:: should be explicitly introduced to the compiler as referring to the befriended class, like so: T&::, which is sufficient to remove the ambiguity.
You can check here that the instantiation "works" perfectly and the value is properly befriended.
If, nevertheless, you see in this example the error std::__cxx11::string Test<BeMyFriend>::mything is private within this context at areyoumyfriend.mything, which indicates a violation of access to a private value, it is simply because the member function is not befriended to the host class.
A: I think the C++ standard prohibits what you are trying to do. Maybe it's a defect/oversight; maybe it was intentional. I'm not sure, so I'll just lay out the pieces I found in the standard (it's safe to skim over the section numbers unless/until someone is checking my analysis):
*
*In 6.4.3.1.2, there is a description of when something is considered to name a constructor. This involves the "injected-class-name" which (according to 12.2) means the name of the class inserted into the class' scope. In your context, I read these sections as saying that to name the constructor, you need T:: followed by the class-name of T. You have T::T, which works if T is considered the class-name of T. Let's see how this plays out.
*In 15.1.1, it is stated that your friend declaration would need to name a constructor. It goes on to say that the class-name shall not be a typedef-name. So we should be fine as long as T is not a typedef-name.
*In 17.1.3, it is stated that in your class Test, the identifier T is a typedef-name. Uh-oh.
So it's kind of weird. If I'm reading things correctly, you would need to use friend T::BeMyFriend to name the constructor, which of course only works in this one particular example. For other template parameters, this would look for a member function named "BeMyFriend". Not what you are looking for.
(You might notice that in the examples of constructors declared as friends, the class name being used is always the name used when defining the class. In that situation, there is no typedef'ing going on, so this issue does not arise.)
Solution? I think you need to make the class T a friend, make a static function in T a friend and call that from the constructor, or find a (better?) way to do what you want to do without using friends. There is a warning flag waving in the back of my mind when I see a template parameter made a friend -- it's legal, but often goes against the principles of encapsulation.
A: Constructors are very special methods. They have no return type (not even void) and no real name. AFAIK that just cannot be used in a friend declaration.
A possible workaround is to build a copy factory function in class BeMyFriend:
class BeMyFriend
{
public:
BeMyFriend& operator=(const BeMyFriend& rhs) = default;
BeMyFriend(const BeMyFriend& rhs) = default;
static BeMyFriend makeCopy(const BeMyFriend& rhs) {
BeMyFriend tmp(rhs);
return tmp;
}
};
template<class T>
class Test
{
friend T& T::operator=(const T&); //Works fine, no error
//friend T::T(const T&); //error: prototype for 'BeMyFriend::BeMyFriend(const BeMyFriend&)' does not match any in class 'BeMyFriend'
friend T T::makeCopy(const T&);
};
int main()
{
Test<BeMyFriend> hmm;
return 0;
}
This does not really answer your question, because a default copy construction would not be a friend, but in the following code:
BeMyFriend foo;
BeMyFriend bar = BeMyFriend::makeCopy(foo);
you get a friend copy construction inside makeCopy, and the next one is likely to be elided.
Anyway, I cannot really imagine a real use case for befriending only a specific constructor of a class and not the whole class...
A: I was able to get it to compile in GCC by adding void after friend:
friend void T::T(const T&);
I was then able to access one of Test's private members from BeMyFriend's constructor. However, note that this is compiler specific. I tried it in clang, and it did not work.
| |
doc_1182
|
Results results = Runner.parallel(tagQuery, featurePaths, null, new ArrayList<>(), 3, karateOutputPath);
With @parallel=false works fine however when I removed, it fails with the following error:
com.intuit.karate.exception.KarateException: test_input.feature:50 - driver config / start failed:
org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect, options: {type=chrome, target=null}
This occurs at the * driver <url> phase. This is an intermittent failure which most of the time 2/3 scenarios pass and one fails with this error.
Version: 0.9.6.RC3
A: I recommend using Zalenium as a remote browser grid to achieve parallelism, and make sure that your scenarios are self-contained, or at least keep features independent.
A: Sorry, running browser tests in parallel is non-trivial which is why we have the Docker option.
EDIT: and in case you landed here because you wanted karate.callSingle() to work for UI tests, sorry that is not possible as well. But you are encouraged to perform an API sign-in via karate.callSingle() and then speed up your UI tests: https://github.com/intuit/karate/tree/develop/karate-core#hybrid-tests
Consider that this is not supported on a single node. It can be made to work if you know what you are doing, but you need to figure this out depending on whether you are using Chrome or WebDriver.
Please refer the docs: https://twitter.com/ptrthomas/status/1159295560794308609 | https://github.com/intuit/karate/tree/master/karate-core#configure-drivertarget
EDIT - also see this answer: https://stackoverflow.com/a/60387907/143475
| |
doc_1183
|
I am trying to load JSON data from my local JSON file!
I have searched many, many posts, but nothing really helped me, which is why I have chosen to make this post.
I will show you how I tried it.
Path: Data/test.json
[
{
"id": 1,
"name": "",
"trade_name": "",
"short_name": "",
"test": "",
"test1": "",
"test2": "",
"test3": "",
"test4": "",
"test5": "",
"test6": "",
"test7": "",
"test8": [],
"test9": "",
"test10": "",
"test11": "",
"test12": ""
}
]
Path: App/DataLoader.swift
import Foundation
public class DataLoader {
@Published var contentData = [JSONData]()
init(){
load()
sort()
}
func load(){
if let fileLocation = Bundle.main.url(forResource: "test", withExtension: "json"){
do {
let data = try Data(contentsOf: fileLocation)
let jsonDecoder = JSONDecoder()
let dataFromJson = try jsonDecoder.decode([JSONData].self, from: data)
self.contentData = dataFromJson
} catch {
print(error)
}
}
}
func sort(){
self.contentData = self.contentData.sorted(by: { $0.id < $1.id})
}
}
Here in this file I add the variables.
Path: App/JsonData.swift
import Foundation
struct JSONData: Codable {
var id: Int
var name: String
var trade_name: String
var short_name: String
var test: String
var test1: String
var test2: [String:String]
var test3: [String:String]
var test4: String
var test5: String
var test6: [String:String]
var test7: String
var test8: [String:String]
var test9: String
var test10: String
var test11: String
var test12: String
}
I didn't set up the display yet, but I want it in a List.
In my ContentView.swift file I would like to load the data from that JSON file!
Path: App/ContentView.swift
import SwiftUI
struct ContentView: View {
var body: some View {
NavigationView {
Home()
.navigationTitle("Test Interface")
.navigationBarTitleDisplayMode(.inline)
}
}
}
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
}
}
struct Home : View {
let data = DataLoader().contentData
var body: some View {
data[IndexPath.row].id
}
}
When I did it like that, it shows some errors.
Return type of property 'body' requires that 'Int' conform to 'View'
Instance member 'row' cannot be used on type 'IndexPath'; did you mean to use a value of this type instead?
Regards
CreatingBytes
A: SwiftUI is not UIKit, there are no index paths.
The body property of a View must return some View, nothing else.
*
*First of all adopt Identifiable
struct JSONData: Codable, Identifiable { ...
*Then adopt ObservableObject in DataLoader
public class DataLoader : ObservableObject {
*You have to observe the data in the root view, replace ContentView with
struct ContentView: View {
@StateObject var data = DataLoader()
var body: some View {
NavigationView {
Home(data: data)
.navigationTitle("Test Interface")
.navigationBarTitleDisplayMode(.inline)
}
}
}
*In Home hand over data and use ForEach
struct Home : View {
@ObservedObject var data : DataLoader
var body: some View {
VStack() {
ForEach(data.contentData) { item in
HStack {
Text(item.name)
Spacer()
Text("\(item.id)")
}
}
}
}
}
Alternatively replace ForEach with List to get a table view
I highly recommend watching the WWDC 2019 and 2020 videos about SwiftUI.
| |
doc_1184
|
Update :
I boiled it down to :
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* for strcmp */
#include <iostream>
#include <readline/readline.h>
#include <readline/history.h>
int main()
{
char *buf;
std::wcout << std::endl; /* ADDING THIS LINE MAKES PRINTF VANISH!!! */
rl_bind_key('\t',rl_abort);//disable auto-complete
while((buf = readline("my-command : "))!=NULL)
{
if (strcmp(buf,"quit")==0)
break;
std::wcout<<buf<< std::endl;
if (buf[0]!=0)
add_history(buf);
}
free(buf);
return 0;
}
So I guess it might be a flushing problem, but it still looks strange to me; I'll have to look into it.
Update -> Work around :
First of all, the same problem arises with wprintf. But I found that adding:
std::ios::sync_with_stdio(false);
actually did the trick... (note false, and not true as I would expect). The only thing that bothers me is that I don't understand why, or how to figure it out :-(
A: I think you're talking about std::ios_base::sync_with_stdio, but IIRC it is on by default.
A: You should be able to mix them, but they typically use separate buffering mechanisms so they overlap each other:
printf("hello world");
cout << "this is a suprise";
can result in:
hellothis is a suprise world
You don't provide enough information to diagnose your problem with printf() in your application, but I suspect you have more than one C runtime (one in your code, one in the printf() code) and there is a conflict.
A: The printf() and cout buffers are either synchronised by default, or are in fact the same buffer. If you are having problems with buffering, the obvious solution is to flush the buffer after each output:
fflush( stdout );
cout.flush();
this flushes the buffer(s) to the operating system, and once done there is no possibility of interleaving, or of output being lost.
A: Buffering headaches. Typically, you can, though, since they are synced. The people who tell you not to are probably people who remember the pain of using multiple I/O methods and want to save you from it. (Just don't mix either with system calls; that would be painful.)
A: Libraries should not use printf, cout, or any other I/O to standard handles. They should use a callback routine to delegate output to a method of the main program's choice.
An obvious exception is a library whose sole purpose is output, but then it's the main program's choice to use that. And this rule doesn't forbid I/O to file descriptors opened by the library.
Not only would that solve the issue raised here, but it also takes care of disconnected operation (linux program with no tty, eg run via nohup, or Win32 service).
A: My opinion, not necessarily right, just for your reference. 😄
Overview
*
*cout/wcout are concepts from the C++ world; the equivalent terms in the C world are printf/wprintf. These mechanisms transfer the content our application provides into the standard output stream.
*Compatibility with the C language is one of C++'s design objectives. That is to say, a C++ implementation must guarantee that mixed C++ and C code executes correctly.
*The problem is obvious: C and C++ are two different languages, after all. C++'s cout/wcout and C's printf/wprintf have their own implementations, such as their own application-layer buffers, their own buffer-flush strategies, their own locale dependencies... In a word, cout/wcout and printf/wprintf are two separate operating mechanisms, and naive mixing can lead to unexpected results, so the C++ standard designers had to develop a workable solution.
*From the coder's point of view, the correct behavior when mixing cout and printf should be that the result is the same as if only cout or only printf had been used. To achieve this effect, the concept of syncing with stdio was introduced. In my opinion, 'stdio' refers specifically to the C language's I/O mechanism, to which printf/wprintf belong. So 'sync with stdio' means syncing C++'s I/O mechanism to C's mechanism. We can use the method std::ios::sync_with_stdio to enable/disable the sync; it is enabled by default.
*So what does 'sync' mean exactly, and how is the sync with stdio done? It depends on the implementation, and I do not know the details. It seems that the 'sync' concept is more like 'share': the buffer of printf/wprintf is shared with cout/wcout, or cout/wcout use printf/wprintf's buffer directly. In brief, if we enable 'sync', cout/wcout are no longer independent (typically in their buffering); they depend on C stdio. The 'sync' makes the I/O of C++ and C behave as if they came from one language.
*Therefore, we can mix cout+printf or wcout+wprintf at ease in C++ code by enabling sync with stdio (it's enabled by default, so we don't need to do anything). That problem is solved by 'sync'. The other puzzle is mixing cout+wcout or printf+wprintf: is that OK? This topic is discussed below.
*cout and printf work on the char storage unit, while wcout and wprintf work on wchar_t. Mixing cout+wcout or printf+wprintf involves no language-level conflict, because cout and wcout belong to the same language, as do printf and wprintf. The crux of the question here is 'the orientation of the standard output stream'.
*What does orientation mean? It can be interpreted as granularity, I think. As we know, OOD is object-oriented design, which means the granularity of the design is the object. Another example is a byte-stream-oriented protocol, TCP, which means we must think in bytes when building an application on top of TCP. Getting back to the point: what is the orientation of the standard output stream? The answer is byte, or wide byte, or possibly something else I cannot confirm...
*The standard output stream is either byte-oriented or wide-byte-oriented, so how do C and C++ determine which one? The strategy is to look at the first I/O function or method executed: if it is a byte version, like cout or printf, the standard output stream is set to byte-oriented; if it is a wide version, like wcout or wprintf, the standard output stream is set to wide-byte-oriented.
*Why should we care about the orientation of the standard output stream? Because a different orientation affects how the content is presented. It's easy to think of it this way: the orientation decides the granularity at which the standard output stream processes/extracts/translates the content coming from cout/wcout/printf/wprintf. Just imagine that we first use wcout, which sets the standard output stream to wide-byte-oriented, and then provide some content via cout: the wide-byte-oriented standard output stream receives a chunk of content that was processed byte-by-byte, and what finally gets printed to the device connected to the standard output stream is undefined.
*In addition, in my actual development I've discovered another orientation situation. After disabling the sync with stdio via std::ios::sync_with_stdio(false), no matter whether I call wcout then cout, or cout then wcout, the printed content is all OK! (Of course, we must set the locale, but that's another topic.) In this condition, calling fwide(stdout, 0) always returns 0, which means the current standard output stream orientation is undecided. Is the undecided state due to the standard output stream being able to switch automatically to a suitable orientation? Or does undecided mean the standard output stream is in some omnipotent state? I don't know...
*A special case. Windows provides a function called _setmode, used to set a specific stream's translation mode. After some experiments, I guess the so-called 'translation mode' is equivalent to the stream orientation. But using _setmode to set the stream orientation seems to set some nonstandard flag value underneath! After I call _setmode with certain modes (e.g. _O_WTEXT, _O_U8TEXT, _O_U16TEXT...), a subsequent call to fwide to check the current stream orientation crashes. Perhaps fwide reads an unintelligible value, and so it crashes!
Some demos
I wrote a function checkStdoutOrientation to get the standard output stream orientation:
void checkStdoutOrientation()
{
std::fstream fs("result.txt", std::ios::app);
int ret = fwide(stdout, 0); // mode 0 queries the orientation without changing it
if (ret > 0) {
fs << "wide byte oriented\n";
} else if (ret < 0) {
fs << "byte oriented\n";
} else {
fs << "undecided oriented\n";
}
fs.close();
}
Demo01: first call wcout, then call cout
#include <cstdio>
#include <iostream>
#include <fstream>
int main()
{
checkStdoutOrientation();
std::wcout << "456" << std::endl;
checkStdoutOrientation();
std::cout << "123" << std::endl;
checkStdoutOrientation();
return 0;
}
debian10 buster + gcc 8.3.0
Output:
result.txt:
My understanding:
*
*wcout is called first, so the standard output stream becomes wide-byte-oriented;
*because of that, the content produced by cout cannot be processed by the standard output stream, so the "123" does not print;
Win10 + vs2022
Output:
result.txt:
My understanding:
*
*it seems that Windows does not conform to the standard here: the standard output stream always stays undecided;
*precisely because the standard output stream stays undecided, all content is printed.
Demo02: first call cout, then call wcout
#include <cstdio>
#include <iostream>
#include <fstream>
int main()
{
checkStdoutOrientation();
std::cout << "123" << std::endl;
checkStdoutOrientation();
std::wcout << "456" << std::endl;
checkStdoutOrientation();
return 0;
}
debian10 buster + gcc 8.3.0
Output:
result.txt:
My understanding:
*
*cout is called first, so the standard output stream becomes byte-oriented;
*because a byte is a smaller granularity than a wide byte, wcout's content sent to the byte-oriented standard output stream can still end up printed to the console.
Win10 + vs2022
Output:
result.txt:
My understanding:
*
*the standard output stream always stays undecided;
*so all content is printed.
Demo03: first call wprintf, then call printf
#include <cstdio>
#include <iostream>
#include <fstream>
int main()
{
checkStdoutOrientation();
wprintf(L"456\n");
checkStdoutOrientation();
printf("123\n");
checkStdoutOrientation();
return 0;
}
The conclusion is the same as in Demo01.
Demo04: first call printf, then call wprintf
#include <cstdio>
#include <iostream>
#include <fstream>
int main()
{
checkStdoutOrientation();
printf("123\n");
checkStdoutOrientation();
wprintf(L"456\n");
checkStdoutOrientation();
return 0;
}
debian10 buster + gcc 8.3.0
Output:
result.txt:
My understanding:
*
*printf is called first, so the standard output stream becomes byte-oriented;
*the result differs from Demo02, and I don't know why the "456" does not show.
Win10 + vs2022
Output:
result.txt:
My understanding:
*
*the same result as Demo02.
Demo05: disable sync with stdio, then wcout, cout
#include <cstdio>
#include <iostream>
#include <fstream>
int main()
{
std::ios::sync_with_stdio(false);
checkStdoutOrientation();
std::wcout << "456" << std::endl;
checkStdoutOrientation();
std::cout << "123" << std::endl;
checkStdoutOrientation();
return 0;
}
debian10 buster + gcc 8.3.0
Output:
result.txt:
My understanding:
*
*after disabling sync with stdio, the standard output stream always stays undecided;
*so all content can be printed.
Win10 + vs2022
Output:
result.txt:
My understanding:
*
*the same result as Demo01.
Demo06: mix cout, printf
#include <cstdio>
#include <iostream>
int main()
{
printf("1\n");
std::cout << "2\n";
printf("3\n");
std::cout << "4\n";
printf("5\n");
printf("\n");
std::ios::sync_with_stdio(false);
printf("1\n");
std::cout << "2\n";
printf("3\n");
std::cout << "4\n";
printf("5\n");
return 0;
}
debian10 buster + gcc 8.3.0
Output:
My understanding:
*
*sync with stdio is enabled by default, so mixing cout and printf works just like calling only cout or only printf.
*after disabling sync with stdio, cout works independently; cout and printf each do their own thing, so the printed content is out of order.
Win10 + vs2022
Output:
My understanding:
*
*Windows is special once again: whether sync with stdio is disabled or not, mixing cout and printf works just like calling only cout or only printf.
Demo07: print Non-ASCII Characters -- Method A
#include <cstdio>
#include <iostream>
int main()
{
std::locale myloc("en_US.UTF-8");
std::locale::global(myloc); // this setting does not affect wcout
std::wcout << L"漢字\n";
wprintf(L"漢字\n");
return 0;
}
debian10 buster + gcc 8.3.0
Output:
My understanding:
*
*setting the global locale does not affect wcout; wcout's locale is still the C locale. That is because wcout is an object whose locale was already set when the object was constructed.
*so in that case, why can wcout print the content? Don't forget that C++ iostreams sync with stdio by default: we can simply think of wcout as working on stdio's buffer, and stdio has already been set to en_US.UTF-8 by the global(myloc) call.
Win10 + vs2022
Output:
My understanding:
*
*nothing special;
Demo08: print Non-ASCII Characters -- Method B
#include <cstdio>
#include <iostream>
int main()
{
std::ios::sync_with_stdio(false);
std::locale myloc("en_US.UTF-8");
// std::locale::global(myloc);
std::wcout.imbue(myloc);
std::wcout << "wcout> " << L"漢字\n";
wprintf(L"wprintf> 漢字\n");
return 0;
}
debian10 buster + gcc 8.3.0
Output:
My understanding:
*
*because sync with stdio is disabled, wprintf and wcout work separately, so they print out of order;
*also because the sync is disabled, wcout works independently, so we must set wcout's locale to en_US.UTF-8 via the imbue method; if we don't, wcout will print content like "??";
*wprintf prints "??" because we commented out std::locale::global(myloc);, so the locale for stdio is still the C locale.
Win10 + vs2022
Output:
My understanding:
*
*printf and cout always print in order; this is Windows' special behavior, mentioned several times above;
*wprintf printing nothing is the equivalent of the "??" on Linux;
*so what's new here is the garbled text! If I uncomment the std::locale::global(myloc); line, the printed content is OK! So I think the Windows implementation is a little special: wcout may depend on more things whose locale can only be changed through the global locale setting.
Demo09: print Non-ASCII Characters -- Method C
This demo is specific to the Windows platform.
#include <cstdio>
#include <io.h>    // for _setmode / _fileno (Windows-only)
#include <fcntl.h> // for _O_WTEXT
#include <iostream>
int main()
{
_setmode(_fileno(stdout), _O_WTEXT); // Unique function to windows
std::wcout << "wcout> " << L"漢字\n";
wprintf(L"wprintf> 漢字\n");
return 0;
}
Win10 + vs2022
Output:
My understanding:
*
*after _setmode, the global locale and wcout's locale are still the C locale;
*so why can both wcout and wprintf print the content correctly? I guess Windows implements a mechanism behind the standard output stream that translates the content according to the mode specified by _setmode.
| |
doc_1185
|
System.IO.FileNotFoundException: C:\Windows\DtlDownloads\VisualStudioRemoteDeployer3be2a227-e8d4-4e1f-b155-d1a53666793b\NeuronExplorer.exe
I cannot figure out how to actually get the NeuronExplorer.exe into that transient directory to be available for execution.
I have tried GACing it. It GACs successfully, but I get the same error.
I have tried adding the path to the PATH environment variable, also resulting in the same error.
I have been successful if I copy the NeuronExplorer.exe into the transient folder but I have to be very fast to actually make it work. There MUST be a way to get this file available in the remote context.
A: The easiest workaround here would be to invoke the executable via the full, explicit path.
Also, you didn't specify if you changed the PATH environment variable at the machine level, but if you did, you most likely need to restart the agent on that machine for the changes to be reflected in the agent process.
A: The way I ended up solving this was to copy the .exe into my System.AppDomain.CurrentDomain.BaseDirectory prior to calling any API code that depends on the .exe.
open System.IO

let fileName = (FileInfo pathToExe).Name
let targetFile = sprintf "%s\\%s" System.AppDomain.CurrentDomain.BaseDirectory fileName
File.Copy(pathToExe, targetFile, true)
I could possibly accomplish the same thing by using System.AppDomain.CurrentDomain.Load(assemblyBytes), but the File.Copy works for me.
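As a side note, a slightly more robust sketch of the same copy (pathToExe is assumed to hold a valid path, as above), using Path.Combine instead of manual string formatting:
open System.IO

let targetFile = Path.Combine(System.AppDomain.CurrentDomain.BaseDirectory, (FileInfo pathToExe).Name)
File.Copy(pathToExe, targetFile, true)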
Hopefully this helps someone else not have to struggle with this particular issue.
| |
doc_1186
|
But the default implementation renders as shown here:
The implementation looks like:
baseGroup.append("g")
.attr("class", "xaxis")
.attr("transform", "translate(5," + (height - marginBottom) + ")")
.style({ 'stroke': 'Black', 'fill': 'none', 'stroke-width': '0.5px','font-size': '14px', 'shape-rendering': 'crispEdges'})
.call(xBar);
I want to remove the vertical bars (tick lines) on each tick.
Can anyone suggest the right way to style a D3 x axis?
A: According to the D3 changes documentation, innerTickSize and outerTickSize have been renamed to tickSizeInner and tickSizeOuter respectively.
Therefore, updating Gerardo Furtado's answer, the snippet would be:
var xBar = d3.svg.axis()
.scale(scale)
.orient("bottom")
.tickSizeOuter(0)
.tickSizeInner(0)
A: For setting the size of the inner ticks, you have to use axis.innerTickSize. According to the API:
If size is specified, sets the inner tick size to the specified value and returns the axis. If size is not specified, returns the current inner tick size, which defaults to 6.
Therefore, in your case, the axis generator should be like this:
var xBar = d3.svg.axis()
.scale(scale)
.orient("bottom")
.outerTickSize(0)
.innerTickSize(0)
Here is a demo:
var svg = d3.select("svg");
var scale = d3.scale.linear()
.domain([0, 100])
.range([20, 480]);
var axis = d3.svg.axis()
.scale(scale)
.orient("bottom")
.outerTickSize(0)
.innerTickSize(0)
.tickPadding(10);
svg.append("g")
.attr("class", "x axis")
.style({
'stroke': 'gray',
'fill': 'none',
'stroke-width': '1px',
'font-size': '14px',
'shape-rendering': 'crispEdges'
})
.attr("transform", "translate(0,50)")
.call(axis)
<script src="https://d3js.org/d3.v3.min.js"></script>
<svg width="500" height="100"></svg>
A: It can be achieved through CSS properties. Reduce the stroke-opacity of the axis tick lines to 0:
var svg = d3.select("svg");
var scale = d3.scale.linear()
.domain([0, 100])
.range([20, 480]);
var axis = d3.svg.axis()
.scale(scale)
.orient("bottom")
.tickPadding(10);
svg.append("g")
.attr("class", "x axis")
.style({
'stroke': 'gray',
'fill': 'none',
'stroke-width': '1px',
'font-size': '14px',
'shape-rendering': 'crispEdges'
})
.attr("transform", "translate(0,50)")
.call(axis);
.x.axis > .tick > line {
stroke-opacity: 0;
}
<script src="https://d3js.org/d3.v3.min.js"></script>
<svg width='500' height='200'>
| |
doc_1187
|
I also saw this post: Zend expressive - php error reporting, which cleared up a lot of things for me but still didn't quite solve the question asked.
Things I did: I defined my own ErrorHandlerFactory where I attached two listeners to zend-stratigility's ErrorHandler:
*
*First Listener uses Zend Log to log into my application's log file. (Just thought it would be nice to have errors in my application.log too.)
*In the second listener, I want to log to PHP's error log file, so I have used the error_log() function from PHP.
Questions:
*
*error_log() is not printing the log entry the way it appears when printed by PHP's error handler. What I mean:
When an error is printed by the php's error handler, it looks something like this:
[08-Feb-2018 08:22:51 US/Central] PHP Warning: array_push() expects at least 2 parameters, 1 given in C:\webserver\webroot\myapi\src\App\src\Action\PageAction.php on line 38
While when I print the log using error_log() it looks something like this:
[08-Feb-2018 09:03:49 US/Central] array_push() expects at least 2 parameters, 1 given in C:\webserver\webroot\myapi\src\App\src\Action\PageAction.php on line 38
What I am missing here is PHP's error type: "PHP Warning". Is this the error code? The error code I get is an integer; how do I parse that code? Should I map the error codes to the PHP error constants which appear in the logs, for example WARNING, NOTICE, etc.? I can even do that, but the problem is: I got the same error code of 0 both times, once when PHP's error handler printed a WARNING log and once when it printed a Fatal error log.
*Is it right to log errors to PHP's error log file like this? Should I do the job of PHP's error handler? The error handler could be doing a lot of things, for example logging only the error message for some errors but also logging the stack trace for others. If this is not right, then how else can I send the error to PHP's error handler?
From my understanding:
My own error handler prevents users from seeing exceptions and stack traces and instead returns a generic message. This also means that the error handler consumes the error and doesn't rethrow it further, i.e. it will not pass it to PHP's error handler.
A: Answering question 1:
I am able to almost simulate the way PHP error handler logs PHP errors.
Things I did:
*
*Went through the docs and this SO question. Using these I was able to attach listeners to Zend-stratigility's ErrorHandler
*Went through PHP's Error Constants and set_error_handler(), which gave me some ideas on how to find out which type of Error or Exception occurred.
Below is the code for my ErrorHandlerFactory where I attach the listeners.
<?php
// TODO: PHP 7.0.8 is giving strict errors even if this directive is not enabled. And besides, from my understanding it should be enabled per file.
//declare(strict_types = 1);
namespace App\Factories;
use Interop\Container\ContainerInterface;
use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;
use Zend\Log\Logger as ZendLogger;
use Throwable;
use Zend\Diactoros\Response;
use Zend\Expressive\Middleware\ErrorResponseGenerator;
use Zend\Stratigility\Middleware\ErrorHandler;
class ErrorHandlerFactory
{
/**
* @param ContainerInterface $container
* @return ErrorHandler
* @throws \Psr\Container\ContainerExceptionInterface
* @throws \Psr\Container\NotFoundExceptionInterface
*/
public function __invoke(ContainerInterface $container)
{
$generator = $container->has(ErrorResponseGenerator::class)
? $container->get(ErrorResponseGenerator::class)
: null;
$errorHandler = new ErrorHandler(new Response(), $generator);
// attaching a listener for logging into application's log file.
if ($container->has(ZendLogger::class)) {
/** @var ZendLogger $logger */
$logger = $container->get(ZendLogger::class);
$errorHandler->attachListener(function (
Throwable $throwable,
RequestInterface $request,
ResponseInterface $response
) use ($logger) {
$logger->err(NULL, [
'method' => $request->getMethod(),
'uri' => (string) $request->getUri(),
'message' => $throwable->getMessage(),
'file' => $throwable->getFile(),
'line' => $throwable->getLine(),
]);
});
}
// Attaching second listener for logging the errors into the PHP's error log
$errorHandler->attachListener(function (
Throwable $throwable,
RequestInterface $request,
ResponseInterface $response
) {
// Default Error type, when PHP Error occurs.
$errorType = sprintf("Fatal error: Uncaught %s", get_class($throwable));
if (get_class($throwable) === "ErrorException") {
// this is an Exception
/** @noinspection PhpUndefinedMethodInspection */
$severity = $throwable->getSeverity();
switch($severity) {
case E_ERROR:
case E_USER_ERROR:
$errorType = 'Fatal error';
break;
case E_USER_WARNING:
case E_WARNING:
$errorType = 'Warning';
break;
case E_USER_NOTICE:
case E_NOTICE:
case E_STRICT:
$errorType = 'Notice';
break;
case E_RECOVERABLE_ERROR:
$errorType = 'Catchable fatal error';
break;
case E_USER_DEPRECATED:
case E_DEPRECATED:
$errorType = "Deprecated";
break;
default:
$errorType = 'Unknown error';
}
error_log(sprintf("PHP %s: %s in %s on line %d", $errorType, $throwable->getMessage(), $throwable->getFile(), $throwable->getLine()), 0);
}
else {
// this is an Error.
error_log(sprintf("PHP %s: %s in %s on line %d \nStack trace:\n%s", $errorType, $throwable->getMessage(), $throwable->getFile(), $throwable->getLine(), $throwable->getTraceAsString()), 0);
}
});
return $errorHandler;
}
}
Apart from this, this Factory needs to be added to the dependencies.
In the file:
dependencies.global.php, in the factories array:
Replace
Zend\Stratigility\Middleware\ErrorHandler::class => Container\ErrorHandlerFactory::class,
with
Zend\Stratigility\Middleware\ErrorHandler::class => \App\Factories\ErrorHandlerFactory::class
And this should almost simulate how the PHP error handler logs.
Answering Question 2:
I think it is fine to do this, since PHP itself provides set_error_handler() and in any case we have to handle the errors ourselves rather than pass them to PHP's error handler. If our ErrorHandler listener can replicate the messages and log them into PHP's error log using error_log(), then it is fine.
| |
doc_1188
|
I tried forcing an setState update of graphicLayers, but it doesn't seem to be working. Like this:
let graphicLayersCopy = Object.assign([], this.state.graphicLayers);
this.setState({graphicLayers: graphicLayersCopy});
but that's not working. I know through the debugger that it's setting the data correctly, and if I refresh (it saves state and reloads the state), the GUI is then rendered correctly.
Is there anyway I can force a re-render of a variable some how even if it doesn't change value?
constructor
constructor(props, context) {
    super(props, context);
    this.state = {
        graphicLayers: [id1, id2, id3],
        graphicLayersById: {
            id1: { ... },
            id2: { ... },
            id3: { ... }
        }
    };
    this.addLayerClick = this.addLayerClick.bind(this);
}
render
render() {
return (
<div>
{this.state.graphicLayers.map((id) =>
<GraphicLayer addLayerClick={this.addLayerClick.bind(this)} />
)}
</div>
);
}
addLayerClick
addLayerClick() {
... change some property in graphicLayersById dictionary ...
self.setState({ graphicLayersById: newDataHere });
}
EDIT: I found the problem on my end, and it's not exactly shown here.
My addLayerClick() actually calls another function that is listening for a callback, and it sets the state inside. It's weird because setState gets called in the callback function, but I got it to work by putting the setState in addLayerClick() itself... I still don't know why the callback version doesn't work, but I will upvote all of you at least.
listenFunction() {
let self = this;
this.firebaseWrapper.storage.on('graphicLayersById', function (save) {
if (save) {
self.setState({ graphicLayersById: save }); // FOR SOME REASON THIS DOESN'T UPDATE THE GUI THE 2nd CLICK. The data is correct though and I see it going here on a breakpoint, but GUI won't update unless I setState in the actual button
}
else {
self.setState({ graphicLayersById: undefined });
}
});
}
A: In addLayerClick() you're only updating graphicLayersById, but rendering depends on graphicLayers. You should be updating the graphicLayers state in addLayerClick() as well.
addLayerClick() {
this.setState({
graphicLayers: ...
graphicLayersById: ....
});
}
On a side note, you shouldn't bind methods inside render() since that creates a brand new function on every render (and could impact performance). Instead of
<GraphicLayer addLayerClick={this.addLayerClick.bind(this)} />
do
<GraphicLayer addLayerClick={this.addLayerClick} />
and leave the binding in your constructor (the way you already have).
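As an aside, if your toolchain supports class properties (an assumption about your build setup, not something shown in the question), an arrow-function field sidesteps the binding question entirely:
addLayerClick = () => {
    // 'this' is captured lexically, so no constructor binding is needed
    this.setState({ graphicLayersById: newDataHere });
}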
A: Actually, you have bound the addLayerClick() function to the component, so you can use this instead of self.
You should revise your code like this: (there are about 2 changes)
constructor(props, context) {
super(props, context);
this.state = {
graphicLayers: [id1, id2, id3],
graphicLayersById: {
id1: { ... },
id2: { ... },
id3: { ... }
}
// don't need to bind here anymore, since you bind it in the click
//this.addLayerClick = this.addLayerClick.bind(this);
};
addLayerClick() {
//... change some property in graphicLayersById dictionary ...
// just use 'this' here
this.setState({ graphicLayersById: newDataHere });
// the line below is NOT recommended; it forces the component to update
// this.forceUpdate(); // this line will update the component even though there's no change
}
If this doesn't work yet, please post here how you handle onCLick function in the child component, and also post some errors if any, thanks
A: Hope one of these two possible ways will re-render your view:
this.setState({graphicLayersById: newDataHere} , ()=>{
console.log(this.state.graphicLayersById);
});
OR
var update = require('react-addons-update');
var graphicLayers = update(this.state, {
graphicLayersById: {$set: newDataHere}
});
this.setState(graphicLayers);
| |
doc_1189
|
[0;0;0;0;0];
[0;0;0;0;0];
[0;0;1;0;0];
[0;0;0;0;0];
I can use as many functions as necessary, but only one function may use a print function. Here is what I have so far:
let rec rowToString(row) =
if (row == []) then []
else string_of_int(List.hd row) :: ";" :: rowToString(List.tl row);;
let rec pp_my_image s =
print_list(rowToString(List.hd s)) :: pp_my_image(List.tl s);;
I know this is wrong, but I can't figure out a way to do it.
A: Here is one way to do it:
let rec rowToString r =
match r with
| [] -> ""
| h :: [] -> string_of_int h
| h :: t -> string_of_int h ^ ";" ^ (rowToString t)
let rec imageToString i =
match i with
| [] -> ""
| h :: t -> "[" ^ (rowToString h) ^ "];\n" ^ (imageToString t)
let pp_my_image s =
print_string (imageToString s)
The rowToString function will create a string with the items in each inner list. Notice that case h :: [] is separated so that a semicolon is not added after the last item.
The imageToString function will create a string for each inner list with a call to rowToString. It will surround the result of each string with brackets and add a semicolon and newline to the end.
pp_my_image will simply convert the image to a string and print the result.
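As a usage sketch, applying it to the sample image from the question (my_image is just that data transcribed as an int list list):
let my_image =
  [[0;0;0;0;0];
   [0;0;0;0;0];
   [0;0;1;0;0];
   [0;0;0;0;0]]

let () = pp_my_image my_image
This prints each row in brackets, one per line, matching the desired output.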
| |
doc_1190
|
My user migration is this:
exports.up = function(knex) {
return knex.schema.createTable('users', table => {
table.increments('id').primary()
table.string('name').notNull()
table.integer('age').notNull()
table.string('city').notNull()
table.string('email').notNull().unique()
table.string('primary').notNull()
table.timestamp('deletedAt')
table.timestamp('updatedAt')
});
};
exports.down = function(knex) {
return knex.schema.dropTable('users');
};
While the post migration is this:
exports.up = function(knex) {
return knex.schema.createTable('posts', table => {
table.increments('id').primary()
table.string('description').notNull()
table.string('image_url', 1000).notNull()
table.string('latitude').notNull()
table.string('longitude').notNull()
table.integer('userId').references('id')
.inTable('users').notNull()
table.timestamp('deletedAt')
table.timestamp('updatedAt')
});
};
exports.down = function(knex) {
return knex.schema.dropTable('posts');
};
And my last table is the evaluations:
exports.up = function(knex) {
return knex.schema.createTable('evaluations', table => {
table.increments('id').primary()
table.integer('review').notNull()
table.decimal('rate', (1,2)).notNull()
table.integer('postId').references('id')
.inTable('posts').notNull()
table.integer('userId').references('id')
.inTable('users').notNull()
table.timestamp('deletedAt')
table.timestamp('updatedAt')
});
};
exports.down = function(knex) {
return knex.schema.dropTable('evaluations');
};
When I run the command knex migrate:latest I create the users table and posts table in the database, but the evaluations table is not created and I get this error at the prompt:
migration file "20210902232536_create_posts_table.js" failed
migration failed with error: alter table `posts` add constraint `posts_userid_foreign` foreign key (`userId`) references `users` (`id`) - Referencing column 'userId' and referenced column 'id' in foreign key constraint 'posts_userid_foreign' are incompatible.
alter table `posts` add constraint `posts_userid_foreign` foreign key (`userId`) references `users` (`id`) - Referencing column 'userId' and referenced column 'id' in foreign key constraint 'posts_userid_foreign' are incompatible.
Error: alter table `posts` add constraint `posts_userid_foreign` foreign key (`userId`) references `users` (`id`) - Referencing column 'userId' and referenced column 'id' in foreign key constraint 'posts_userid_foreign' are incompatible.
at Packet.asError (F:\nodejs\test\node_modules\mysql2\lib\packets\packet.js:722:17)
at Query.execute (F:\nodejs\test\node_modules\mysql2\lib\commands\command.js:28:26)
at Connection.handlePacket (F:\nodejs\test\node_modules\mysql2\lib\connection.js:456:32)
at PacketParser.onPacket (F:\nodejs\test\node_modules\mysql2\lib\connection.js:85:12)
at PacketParser.executeStart (F:\nodejs\test\node_modules\mysql2\lib\packet_parser.js:75:16)
at Socket.<anonymous> (F:\nodejs\test\node_modules\mysql2\lib\connection.js:92:25)
at Socket.emit (events.js:314:20)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:272:9)
at Socket.Readable.push (_stream_readable.js:213:10
I'm not sure why this is happening with the association, because I used to create it this same way before; but now it seems something is incompatible between the id in the users table (or the posts table) and the referencing columns.
And here is my package.json:
{
"name": "test",
"version": "1.0.0",
"main": "index.js",
"license": "MIT",
"scripts": {
"dev": "nodemon"
},
"dependencies": {
"bcrypt": "^5.0.1",
"consign": "^0.1.6",
"cors": "^2.8.5",
"express": "^4.17.1",
"jwt-simple": "^0.5.6",
"knex": "^0.95.10",
"moment": "^2.29.1",
"mysql2": "^2.3.0",
"node-schedule": "^2.0.0",
"passport": "^0.4.1",
"passport-jwt": "^4.0.0"
},
"devDependencies": {
"nodemon": "^2.0.12"
}
}
I appreciate any help someone can give me
A: Since I got no answers here, I had to track down the documentation of the knex lib, and I saw that many things are different from 3 years ago. After a long research I finally was able to solve this problem by myself:
My users migration is the same as before.
But the posts migration now had to be created this way:
exports.up = function(knex) {
return knex.schema.createTable('posts', table => {
table.increments('id').primary()
table.string('description').notNull()
table.string('image_url', 1000).notNull()
table.string('latitude').notNull()
table.string('longitude').notNull()
table.integer('userId').unsigned().notNullable()
.references('id').inTable('users')
table.timestamp('updatedAt')
});
};
exports.down = function(knex) {
return knex.schema.dropTable('posts');
};
And the evaluations migration was set like this:
exports.up = function(knex) {
return knex.schema.createTable('evaluations', table => {
table.increments('id').primary()
table.integer('review').notNull()
table.decimal('rate', (1,2)).notNull()
table.integer('postId').unsigned().notNullable()
.references('id').inTable('posts')
table.integer('userId').unsigned().notNullable()
.references('id').inTable('users')
table.timestamp('deletedAt')
});
};
exports.down = function(knex) {
return knex.schema.dropTable('evaluations');
};
If someone else faces this trouble with the new way to create a foreign key in a migration using knex, this is the way I found.
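For anyone wondering why .unsigned() is the fix (my reading of it, not something the error message states outright): on MySQL, table.increments('id') creates an INT UNSIGNED primary key, so a plain signed table.integer(...) column is incompatible as a foreign key. The referencing column has to match, which is exactly what the added call does:
table.integer('userId').unsigned().notNullable()
    .references('id').inTable('users')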
| |
doc_1191
|
I'm not clear what the file_name.file_extension corresponds to. Also, do I need to use some additional scripts?
A:
I'm not clear what the file_name.file_extension corresponds to.
Your example onclick="pageTracker._trackPageview('/file_name.file_extension')" logs every click on that link as a page view for file_name.file_extension.
You can edit file_name.file_extension to be whatever you want. It is simply the name of the "page view" that gets passed to Google Analytics and is what will show up in your analytics reports.
Also, do I need to use some additional scripts?
No, adding the above onlick attribute to each link you want tracked will be enough.
See here for reference. Hope that helps.
Edit:
I assumed you knew you needed the general Google Analytics script for this to work (Thanks to Ryan in the comments for clarifying). The script looks like the following, but contains your Google Analytics account number in place of the X's in UA-XXXXXX-X:
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try{
var pageTracker = _gat._getTracker("UA-XXXXXX-X");
pageTracker._trackPageview();
} catch(err) {}
</script>
To obtain the script, you'll need a Google Analytics account. Once signed into your account and after adding a new "Website Profile", you'll be given a snippet of Javascript (using your account number) to include in each page you want tracked, along with instructions. That should be enough to get you started, but let me know if I can clarify anything.
Edit 2:
As pointed out in the comments, I erroneously posted the latest, asynchronous version of the Google Analytics script which is actually incompatible with _trackPageview. I've edited my answer to include the "traditional" script that you'll want to use. See here for more info.
A: For a similar problem on a client site, we're using GA Events (rather than Page view tracking) to track the downloads.
Google's Event Tracking guide has all the details but essentially, instead of
pageTracker._trackPageview("download name");
you can call something like
pageTracker._trackEvent(category, action, opt_label, opt_value)
which ends up as a set of datatables. category defines which table the data goes into, each action is a different row in the table, counted separately.
We have six different downloads that can be delivered in different ways (PDF download, by email, by snailmail etc), so we track the delivery method as the category and the brochure name as the action.
| |
doc_1192
|
A: Query caching will only cache the primary key of the results of the query. From the query cache documentation:
Note that the query cache does not
cache the state of any entities in the
result set; it caches only identifier
values and results of value type. So
the query cache should always be used
in conjunction with the second-level
cache.
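A minimal sketch of what that implies in practice (the Post entity and the session variable here are hypothetical; it assumes hibernate.cache.use_query_cache=true and a configured second-level cache provider):
Query q = session.createQuery("from Post p where p.author = :a");
q.setParameter("a", author);
q.setCacheable(true);  // the query cache stores only ids and value-typed results
List posts = q.list(); // entity state is then resolved via the second-level cache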
| |
doc_1193
|
Please find the code for the same.
myApp.requires.push('kendo.directives');
myApp.controller('CalenderController',['$scope', '$http', 'StatsService', function ($scope, $http, StatsService) {
var self=this;
$scope.schedulerOptions = {
date: new Date(),
startTime: new Date(),
showWorkHours: true,
height: 600,
views: [
"day",
{type: "week", selected: true},
],
editable: {
destroy: false,
create: false,
template: $("#editor").html()
},
timezone: "GMT",
dataSource: {
batch: true,
transport: {
read: function (options) {
var url = '/consultants/applications/interviews'; // declared locally to avoid an implicit global
$http.get(url).success(function (data, status, headers, config) {
options.success(data.result);
}).error(function (data, status, headers, config) {
options.error(data);
});
},
parameterMap: function (options, operation) {
if (operation !== "read" && options.models) {
return {models: kendo.stringify(options.models)};
}
}
},
schema: {
model: {
id: "interviewId",
fields: {
taskId: {from: "id", type: "number", editable: false},
candidateName: {from: "candidateName" , editable: false},
title: {from: "title", defaultValue: "No title" , editable: false},
companyName: {from: "companyName" , editable: false},
start: {type: "date", from: "interviewTiming", editable: false},
end: {type: "date", from: "interviewEndTiming" , editable: false},
candidateEmail: {from: "candidateEmail" , editable: false},
candidateMobile: {from: "candidateMobile" , editable: false}
}
}
}
}
};
}]);
A: Use a custom combined script, which is smaller than kendo.all.min.js...
http://docs.telerik.com/kendo-ui/intro/installation/what-you-need#build-scripts
... or use individual script files:
http://docs.telerik.com/kendo-ui/intro/installation/what-you-need#individual-widget-scripts
In both cases, the size of the loaded script file(s) will be reduced.
The same option does not exist for CSS code, unfortunately.
| |
doc_1194
|
expose.pro
TEMPLATE = lib
CONFIG += qt plugin
QT += qml gui
DISTFILES += expose.json
DESTDIR = ../g/Expose
TARGET = expose
SOURCES += expose.cpp
expose.cpp
#include <QQmlExtensionPlugin>
#include <QQmlEngine>
#include <QGuiApplication>
class QGuiApplicationWrapper : public QGuiApplication {
int argc;
public:
QGuiApplicationWrapper() : QGuiApplication(argc, nullptr) {}
};
class QExampleQmlPlugin : public QQmlExtensionPlugin {
Q_OBJECT
Q_PLUGIN_METADATA(IID QQmlExtensionInterface_iid)
public:
void registerTypes(const char *uri) {
Q_ASSERT(uri == QLatin1String("g.Expose"));
qmlRegisterType<QGuiApplicationWrapper>(uri, 1, 0, "GuiApplication");
}
};
But when I try to import g.Expose 1.0 from QML, the following error appears:
plugin cannot be loaded for module "g.Expose": Failed to extract plugin meta data from 'g/Expose/libexpose.so
I looked into the code of the loader (qlibrary.cpp):
bool ret = false;
if (pos >= 0) {
if (hasMetaData) {
const char *data = filedata + pos;
QJsonDocument doc = QLibraryPrivate::fromRawMetaData(data);
lib->metaData = doc.object();
if (qt_debug_component())
qWarning("Found metadata in lib %s, metadata=\n%s\n",
library.toLocal8Bit().constData(), doc.toJson().constData());
ret = !doc.isNull();
}
}
if (!ret && lib)
lib->errorString = QLibrary::tr("Failed to extract plugin meta data from '%1'").arg(library);
So I guess some JSON must be provided and specified in Q_PLUGIN_METADATA, but I cannot find a place in the documentation which describes its format and inclusion procedure.
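A: One hedged guess, not verified against this exact project: when Q_OBJECT is declared inside a .cpp file, the moc-generated code (which is where the plugin metadata gets embedded) is never compiled in unless you include it explicitly. Adding this at the very end of expose.cpp may fix the extraction error:
// end of expose.cpp: compile the moc output, which carries the plugin metadata
#include "expose.moc"
And if you do want to attach the JSON file explicitly, the documented syntax is Q_PLUGIN_METADATA(IID QQmlExtensionInterface_iid FILE "expose.json").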
| |
doc_1195
|
Without using Hadoop, a typical distributed solution I can think of is: split the logs across different machines using hashing, etc.;
each machine parses its own log files and calculates different metrics for them. The results can be stored as SQL, XML, or some other format in files. Then a master machine parses these intermediate files, aggregates the metrics, and stores the final results in another file.
Using Hadoop, how do I obtain the final results? All the examples I have seen are very simple cases, like counting words.
I just cannot figure out how Hadoop's mappers and reducers cooperate to aggregate the intermediate files intelligently into a final result. I thought maybe my mapper should save intermediate files somewhere and my reducer should parse those intermediate files to get the final results. I must be wrong, since I do not see any benefit if my mapper and reducer are implemented that way.
It is said the format of map and reduce should be:
map: (K1, V1) → list(K2, V2)
combine: (K2, list(V2)) → list(K2, V2)
reduce: (K2, list(V2)) → list(K3, V3)
In summary, how should I design my mapper and reducer code (suppose using Python; another language is also fine)? Can anybody answer my question or provide a link for me to read?
A: Start thinking about how to solve challenges in a MapReduce way. Here (1, 2) are some resources; they cover some of the MR algorithms, which can be implemented in any language.
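To make the shapes concrete, here is a minimal Hadoop Streaming sketch in Python; the log layout and the metric (requests per HTTP status code, with the status assumed to be the 9th whitespace-separated field) are invented for illustration:
# mapper.py -- reads raw log lines from stdin, emits (status_code, 1) pairs
import sys
for line in sys.stdin:
    fields = line.split()
    if len(fields) > 8:                 # crude guard against malformed lines
        print("%s\t%d" % (fields[8], 1))

# reducer.py -- streaming sorts by key, so equal keys arrive consecutively
import sys
current, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print("%s\t%d" % (current, total))
        current, total = key, 0
    total += int(value)
if current is not None:
    print("%s\t%d" % (current, total))
A master step is unnecessary here: Hadoop's shuffle groups the intermediate (key, value) pairs by key and hands each group to a reducer, which plays the role of your aggregating machine.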
| |
doc_1196
|
string valid2Regex = @"\d{4}"; // regex to check for 4 integers
Regex rgx = new Regex(valid2Regex);
string idCheck = id;
if (rgx.Matches(idCheck, rgx))
{
parameters.Add(DataAccess.CreateParameter("@YEAR", SqlDbType.NVarChar, HttpContext.Request.QueryString.Get("Year")));
}
A: ^\d{4}$
This constrains it to just 4 digits. Otherwise any 4 digits together within a string would work with yours.
Also, there is no instance overload which takes those 2 parameters, instead use IsMatch:
if (rgx.IsMatch(idCheck))
{
...
A: talking about syntax, you can use Regex in different ways:
Match match = rgx.Match(idCheck); // Match returns a Match object; match.Value holds the matched text
in the this case you look for expression and expect a single result, es:
expr:"\d{4}" text:"asdfas1234asdfasd" -> "1234"
expr:"\d{4}" text:"1234" -> "1234"
expr:"^\d{4}$" text:"asdfas1234asdfasd" -> null
expr:"^\d{4}$" text:"1234" -> "1234"
if you want only to check if the string matches you can use:
bool found = rgx.IsMatch(idCheck);
that works as:
expr:"\d{4}" text:"asdfas1234asdfasd" -> true
expr:"\d{4}" text:"1234" -> true
expr:"^\d{4}$" text:"asdfas1234asdfasd" -> false
expr:"^\d{4}$" text:"1234" -> true
the method (Matches) in your code is used find multiple instances and returns a MatchCollection:
MatchCollection result = rgx.Matches(idCheck, 0);
probably the error in your code is about the second parameter; according to MSDN it is an integer and represents the start position in the string.
| |
doc_1197
|
Code:
db.collection('posts', function(err, collection) {
collection.remove({_id: '4d512b45cc9374271b00000f'});
});
A: You need to pass the _id value as an ObjectID, not a string:
var mongodb = require('mongodb');
db.collection('posts', function(err, collection) {
collection.deleteOne({_id: new mongodb.ObjectID('4d512b45cc9374271b00000f')});
});
A: MongoDb has now marked the remove method as deprecated. It has been replaced by two separate methods: deleteOne and deleteMany.
Here is their relevant getting started guide: https://docs.mongodb.org/getting-started/node/remove/
and here is a quick sample:
var mongodb = require('mongodb');
db.collection('posts', function(err, collection) {
collection.deleteOne({_id: new mongodb.ObjectID('4d512b45cc9374271b00000f')}, function(err, results) {
if (err){
console.log("failed");
throw err;
}
console.log("success");
});
});
A: With TypeScript, you can do it using imports, instead of requiring the whole library
import { ObjectID } from 'mongodb'
db.collection('posts', function(err, collection) {
collection.deleteOne({_id: new ObjectID('4d512b45cc9374271b00000f')});
});
A: First include mongodb
var mongodb = require("mongodb");
You have to include the ObjectID from mongodb
var ObjectID = require('mongodb').ObjectID;
Then Use
var delete_id = request.params.id;//your id
collection.deleteOne({_id: new mongodb.ObjectID(delete_id.toString())});
1000% works...
A: I think we have to require mongodb as a const and use it when constructing the ObjectID.
A: I stumbled on this problem today, and I found that the fix is:
const mongodb = require('mongodb');
const ObjectID = require('mongodb').ObjectID;
databaseName.collectionName.deleteOne({_id: new mongodb.ObjectID(id)}, (err) => {
if (err) throw err;
console.log('Deleted'+id);
});
| |
doc_1198
|
9.7077E-4
4.25514E-4
These inaccuracies make my text scale more than I want. Here is my code (I am using LibGdx):
@Override
public void render(float delta) {
// Update
System.out.println(delta);
if(titleEnlargement)
{
if(title.getScaleX() < 1.1)
title.setScale(title.getScaleX() + (delta * 0.4f));
else
titleEnlargement = false;
}
else if(!titleEnlargement)
{
title.setScale(title.getScaleX() - (delta * 0.4f));
if(title.getScaleX() <= 1)
titleEnlargement = true;
}
// Render
batch.draw(background, 0, 0);
logo.draw(batch);
title.draw(batch);
}
A: You can clip the delta so that it never exceeds a certain value. This is what Stage does to prevent jitters with actions.
Place the following at the start of your render method:
delta = Math.min(delta, 1 / 30f);
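Applied to the render method from the question, the clamp sits at the top; the 1/30f cap is an arbitrary choice of mine, tune it to taste:
@Override
public void render(float delta) {
    delta = Math.min(delta, 1 / 30f); // ignore abnormally large frame times
    // ... the rest of the update and render code stays unchanged ...
}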
| |
doc_1199
|
export class ApiService {
data$: Observable<any>;
constructor(
private http: HttpClient,
) { }
loadData(lang:string) {
this.data$ = this.http.get(APP_ENDPOINT + '?lang=' + lang).pipe(
shareReplay(1)
);
}
getData() {
return this.data$;
}
}
I am trying to figure out how to update my data$ observable when the language in my app changes, since I would need to make a new http request and pass the lang variable in it.
What is the best approach for 'resetting' the cached data$ and creating a new http call to get the new data for that language?
A: You can leverage observables/subjects to trigger a new response; something like this:
export class ApiService {
lang$ = new BehaviorSubject("myDefaultLang");
data$ = this.lang$.pipe(
switchMap((lang)=> this.http.get(APP_ENDPOINT + '?lang=' + lang)),
shareReplay(1),
);
constructor(
private http: HttpClient,
) { }
loadData(lang:string) {
this.lang$.next(lang)
}
getData() {
return this.data$;
}
}
A: You could make language an observable, then define your data$ from language$:
export class ApiService {
private language$ = new BehaviorSubject('en-US');
data$: Observable<any>;
constructor(
private http: HttpClient,
) { }
loadData(lang:string) {
this.data$ = this.language$.pipe(
switchMap(lang => this.http.get(`${APP_ENDPOINT}?lang=${lang}`)),
shareReplay(1)
);
}
getData() {
return this.data$;
}
setLanguage(language: string) {
this.language$.next(language);
}
}
Note: since observables are lazy, you don't really need a getData method. Consumers can simply refer to data$. The act of subscribing is what "gets" the data.
You can also define data$ directly where it is declared on the service:
export class ApiService {
private language$ = new BehaviorSubject('en-US');
data$: Observable<any> = this.language$.pipe(
switchMap(lang => this.http.get(`${APP_ENDPOINT}?lang=${lang}`)),
shareReplay()
);
constructor(
private http: HttpClient,
) { }
setLanguage(language: string) {
this.language$.next(language);
}
}
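Since subscribing is what triggers the request, a consuming component can simply bind to data$ with the async pipe. A sketch (the component and its template are made up; it assumes ApiService is provided in the root injector):
import { Component } from '@angular/core';
import { ApiService } from './api.service'; // path assumed

@Component({
  selector: 'app-data-view',
  template: `<pre>{{ api.data$ | async | json }}</pre>`
})
export class DataViewComponent {
  constructor(public api: ApiService) {}

  switchLanguage(lang: string) {
    this.api.setLanguage(lang); // triggers a fresh request via switchMap
  }
}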
|