id
stringlengths 5
11
| text
stringlengths 0
146k
| title
stringclasses 1
value |
|---|---|---|
doc_23526700
|
What can I do to fix it?
CREATE DATABASE `moodle`
DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER ‘moodle-owner’@’localhost’;
CREATE USER ‘moodle-owner’@’127.0.0.1’;
CREATE USER ‘moodle-owner’@’::1′;
SET PASSWORD FOR ‘moodle-owner’@’localhost’ = PASSWORD(‘moodle123$%’);
SET PASSWORD FOR ‘moodle-owner’@’127.0.0.1’ = PASSWORD(‘moodle123$%’);
SET PASSWORD FOR ‘moodle-owner’@’::1′ = PASSWORD(‘moodle123$%’);
A: I'm not sure what version of MySQL you are running, but in 5.7, the SET PASSWORD is deprecated. Try setting the user passwords with IDENTIFIED BY in the same query as CREATE USER.
CREATE USER 'moodle-owner'@'localhost' IDENTIFIED BY 'moodle123%';
CREATE USER 'moodle-owner'@'127.0.0.1' IDENTIFIED BY 'moodle123%';
CREATE USER 'moodle-owner'@'::1' IDENTIFIED BY 'moodle123%';
| |
doc_23526701
|
getContext().getResources().getIdentifier(resName, "string", getContext().getPackageName());
where Context would be MyApplication in the App and TestMyApplication in Robolectric tests.
With Robolectric 3.0 this no longer works when an applicationIdSuffix is added to the build file, the call returns 0.
Is this a known issue? This is on com.android.tools.build:gradle:1.2.0-beta1 and org.robolectric:robolectric:3.0-rc2
Update https://github.com/robolectric/robolectric/issues/1623
A: There is actually a simple fix for this now, just add @Config(constants = BuildConfig.class, packageName = com.your.package)
See https://github.com/robolectric/robolectric-samples/tree/master/android-flavors
A: As workaround I created another build flavour jenkins where I removed suffix editing. Unfortunately it is not proper solution if you want to test something specific/customised for flavour.
A: For Robolectric 3.0 you should probably use the RobolectricGradleTestRunner instead of the basic RobolectricTestRunner. This test runner allows you to specify all the details specific to a build variant with a constant class. For your specific case, it also allows the specification of a proper applicationId. This should get you around the issue your having with the applicationIdSuffix for your tests.
Usage is as follows:
@RunWith(RobolectricGradleTestRunner.class)
@Config(emulateSdk = Build.VERSION_CODES.KITKAT,
constants = SomeTestClass.BuildConfig.class)
public class SomeTestClass {
@Test
public void someTest() throws Exception {
String s = RuntimeEnvironment.application.getResources().getString(R.string.app_name);
assert(s).equals("Some Debug Name"); // <-- defined in src/debug/res/values/strings.xml
}
public static class BuildConfig {
public static final String APPLICATION_ID = "com.some.company.special";
public static final String BUILD_TYPE = "debug";
public static final String FLAVOR = "";
}
}
There are a couple of key things here to note:
*
*We're running with the Gradle specific test runner @RunWith(RobolectricGradleTestRunner.class)
*We're configuring this test to use specific application configurations via the constants property in the @Config block. The empty string for BuildConfig.FLAVOR is for when there is no specific simple or compound flavor.
*The APPLICATION_ID specified in the BuildConfig class should match applicationId you have specified for the build variant in your build.gradle file.
*We're counting on the android-gradle plugin to properly merge all of our resources for the particular build variant.
A: with applicationIdSuffix the generated BuildConfig will use a package name with the suffix, this doesn't work under Robolectric 3.0.
My solution is creating a subclass of RobolectricGradleTestRunner and "override" the getPackageName() function (cannot override it directly, it's private static) to return the hard coded package name without the suffix.
| |
doc_23526702
|
I have an HTML input element where the user types in a search query. Now, if the user starts by entering a diacritic mark, I'd like to be able to display that to the user, and if a diacritic it written without a preceding base character in front of it it will be placed a bit off to the left – and I'd thought I'd just add some padding-left to my text input element to make sure the first character would be visible even if it’s a diacritic... However, Chromium doesn't like that.
Below is an image of what it looks like in Chromium. To the left is a (yellow) input element, and to the right is (purple) textarea element (both containing the same string). The textarea shows the desired result, while the input element truncates the diacritics at the top and to the left.
If one inspects the elements, one sees the following. (The text is truncated in the padding area of the input element, but not in the textarea.)
I would prefer to use an input element if I can.
Is there any way I could style the input element so that it won't truncate my text in the padding region?
This problem does not appear in Firefox, where both the textarea an input elements look the same, with no truncating going on.
The following snippet also shows the problem (at least if you're running Chromium).
@font-face {
font-family: "FreeSans-SWL";
src: url("https://zrajm.github.io/teckentranskription/freesans-swl.ttf");
}
input, textarea {
font-family: "FreeSans-SWL";
/* make textarea & input look the same */
vertical-align: top;
border: .25em inset #888;
padding: .25em .5em;
}
<input value="">
<textarea rows=1></textarea>
| |
doc_23526703
|
Part of my UI also enables cells to be repositioned horizontally. I would like these cells to not be re-positioned each time the table's data is refreshed, but unfortunately, [tableview reloadData] does just that.
What is the ideal way to update the data in a tableview without repositioning it's rows? Should I override the tableview's reloadData method and do something fancy there or perhaps override the tableview cells layoutSubviews method?
I position the cell like so:
CGRect newFrame = cell.frame;
newFrame.origin.x = -cell.frame.size.width;
cell.frame = newFrame;
After the NSFetched Results Controller receives more data from the network call, it calls it's delegate's method:
- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
[self.eventTableView reloadData];
}
Which calls tableview: cellForRowAtIndexPath: and the cell that is returned from the dequeue has it's origin at (0,0)
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *CellIdentifier = @"EventCell";
EventCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
if (cell == nil) {
[cellNib instantiateWithOwner:self options:nil];
cell = self.customCell;
}
// Configure the cell...
Event *event = [fetchedResultsController objectAtIndexPath:indexPath];
[cell configureCellWithEvent:event];
return cell;
}
A: You might try the UITableViewDelegate method tableView:willDisplayCell:forRowAtIndexPath:, which is invoked just before any cell is added to the table view or scrolls into the table's visible area. In there, you can position the cell if needed, and this will work after reloading.
This isn't essential to your question, but I'd also suggest changing the cell's transform property instead of its frame. That way you can't accidentally move it farther than you want it to go (say if you shift it twice).
- (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath {
//Determine if the cell should be shifted.
if (cellShouldShift) {
cell.transform = CGAffineTransformMakeTranslation(0 - cell.bounds.size.width, 0);
} else {
cell.transform = CGAffineTransformIdentity;
}
}
| |
doc_23526704
|
in{1} = [10, 20, 30, 40, 50, 60, 70, 80, 90];
in{2} = inf;
in{3} = "last";
in{4} = "first";
out = cell(4, 1);
[out{1:3}] = find(in{1 : 3}); % line which I do not understand
So at the end of this section, we have in looking like:
in =
{
[1,1] =
10 20 30 40 50 60 70 80 90
[1,2] = Inf
[1,3] = last
[1,4] = first
}
and out looking like:
out =
{
[1,1] =
1 1 1 1 1 1 1 1 1
[2,1] =
1 2 3 4 5 6 7 8 9
[3,1] =
10 20 30 40 50 60 70 80 90
[4,1] = [](0x0)
}
Here, find is called with 3 output parameters (forgive me if I'm wrong on calling them output parameters, I am pretty new to Octave) from [out{1:3}], which represents the first 3 empty cells of the cell array out.
When I run find(in{1 : 3}) with 3 output parameters, as in:
[i,j,k] = find(in{1 : 3})
I get:
i = 1 1 1 1 1 1 1 1 1
j = 1 2 3 4 5 6 7 8 9
k = 10 20 30 40 50 60 70 80 90
which kind of explains why out looks like it does, but when I execute in{1:3}, I get:
ans = 10 20 30 40 50 60 70 80 90
ans = Inf
ans = last
which are the 1st to 3rd elements of the in cell array.
My question is: Why does find(in{1 : 3}) drop off the 2nd and 3rd entries in the comma separated list for in{1 : 3}?
Thank you.
A: The documentation for find should help you answer your question:
When called with 3 output arguments, find returns the row and column indices of non-zero elements (that's your i and j) and a vector containing the non-zero values (that's your k). That explains the 3 output arguments, but not why it only considers in{1}. To answer that you need to look at what happens when you pass 3 input arguments to find as in find (x, n, direction):
If three inputs are given, direction should be one of "first" or
"last", requesting only the first or last n indices, respectively.
However, the indices are always returned in ascending order.
so in{1} is your x (your data if you want), in{2} is how many indices find should consider (all of them in your case since in{2} = Inf) and {in3}is whether find should find the first or last indices of the vector in{1} (last in your case).
| |
doc_23526705
|
The date is a DatePicker.
What I have done is added a binding to a "submit" button to make sure that the users have inputted all the information before the button becomes available, however, I don't know how to bind to the LocalDate.
I tried this method which works (except for the date.getValue() at the end):
public BooleanBinding isEitherFieldEmpty() { //This is for the "SUBMIT binding" to check if all text fields are empty
return txtFirstName.textProperty().isEmpty().or(txtSurname.textProperty().isEmpty()).or
(txtPNumber.textProperty().isEmpty()).or(txtEmail.textProperty().isEmpty()).or(date.getValue() == null);
}
the last part: .or(date.getValue() == null); gives me an error which says
The method or(ObservableBooleanValue) in the type BooleanExpression is not applicable for the arguments (boolean)
I was wondering whether there was another way to bind the button to the LocalDate as well.
Thank you
A: Assuming date is a DatePicker, you can just use
date.valueProperty().isNull()
which returns a BooleanBinding. I.e.:
public BooleanBinding isEitherFieldEmpty() { //This is for the "SUBMIT binding" to check if all text fields are empty
return txtFirstName.textProperty().isEmpty()
.or(txtSurname.textProperty().isEmpty())
.or(txtPNumber.textProperty().isEmpty())
.or(txtEmail.textProperty().isEmpty())
.or(date.valueProperty().isNull());
}
| |
doc_23526706
|
<div class="image" id="image">
<img src="blah.jpg">
<div class="btn-edit">Edit</div>
</div>
I've also tried it with preventDefault() and stopPropagation() but the bug is still there. The console.log statement only prints once so subsequent click events are not registered.
$('.btn-edit').click(function(e){
// e.preventDefault();
// e.stopPropagation();
console.log('edit clicked');
var parent = $(this).parent();
parent.hide();
parent.prev().show();
});
On another part of the code, I have shown the image and edit button again, so the Edit button is visible. I'm not showing it here because it's not really relevant. You can assume the Edit button is visible the second time but it's not clickable.
A: So I changed this to use event delegation and it works. I guess this is why event delegation exists, so that you don't bind click events to classes! Thanks to @hjpotter92 for his suggestion!
$('.image').on('click', '.btn-edit', function(e){
e.stopPropagation();
console.log('edit clicked');
var parent = $(this).parent();
parent.hide();
parent.prev().show();
});
| |
doc_23526707
|
Possible Duplicate:
What's the correct encoding of HTTP get request strings?
One of my clients sent me they require HTTP requests to be encoded in ISO-8859-2,
so I wonder about what charset is used for HTTP communication, and if this request is somehow technicaly right.
A: Pure ASCII is all that's allowed in HTTP headers. But, as far as HTTP is concerned, anything goes in the request body of a POST. The headers and body are always separated by a blank line. A set of headers will normally identify the format of the content/body. Responses work the same way. However, HTML has some additional rules regarding what normally goes in a POST.
EDIT: Sorry, I missed the word 'GET' in your title. Might be nice to duplicate that in the body of your question.
At any rate, I believe I am correct in saying ONLY ASCII (ANSI X3.4-1986) is allowed in the headers of any HTTP request, GET or POST. So no, ISO-8859-2 requests are not strictly valid HTTP. That said, there's probably a way to escape the desired special characters in the query string if that's what you're really asking for here.
SOURCE: https://www.rfc-editor.org/rfc/rfc2616
A: It depends. A "smart" server will always use percent-escaped UTF-8, but you can't rely on that.
| |
doc_23526708
|
// Block I
if(condition1)
{
// Do something
}
else
{ if(condition2)
{
// Do something
}
else
{ if(condition3)
{
// Do something
}
else
{ if(condition4)
{
// Do something
}
}
}
}
//--------------------------------
// Block II
if(condition1)
{
// Do something
}
else if(condition2)
{
// Do something
}
else if(condition3)
{
// Do something
}
else if(condition4)
{
// Do something
}
Help me!
A: Assuming the language is C (you didn't specify), then you can verify that your two snippets generate exactly the same code by comparing the assembly output of gcc:
#!/bin/bash
diff <(gcc -O0 -S -o - -x c - <<EOF
extern int condition1();
extern int condition2();
extern int condition3();
extern int condition4();
extern void do_something1();
extern void do_something2();
extern void do_something3();
extern void do_something4();
void main() {
if(condition1())
{
do_something1();
}
else
{ if(condition2())
{
do_something2();
}
else
{ if(condition3())
{
do_something3();
}
else
{ if(condition4())
{
do_something4();
}
}
}
}
}
EOF
) <(
gcc -O0 -S -o - -x c - <<EOF
extern int condition1();
extern int condition2();
extern int condition3();
extern int condition4();
extern void do_something1();
extern void do_something2();
extern void do_something3();
extern void do_something4();
void main() {
if(condition1())
{
do_something1();
}
else if(condition2())
{
do_something2();
}
else if(condition3())
{
do_something3();
}
else if(condition4())
{
do_something4();
}
}
EOF
)
This generates no output (you can prove the test is valid by (e.g.) removing the last condition from one of the functions and observing that it now shows a difference).
Since the assembly language output is identical for the two blocks, you can deduce that the performance characteristics must be exactly the same.
| |
doc_23526709
|
Input Size | Encrypted Size
. | .
. | .
6 bytes | 8 bytes
7 bytes | 8 bytes
8 bytes | 16 bytes
9 bytes | 16 bytes
. | .
. | .
Is it normal? Is it the way it is supposed to work. Here is how I am trying to use triple DES:
class TripleDESEncryption
{
private readonly TripleDESCryptoServiceProvider engine;
public TripleDESEncryption () : this (256) { }
public TripleDESEncryption (int keySizeInBits) {
engine = new TripleDESCryptoServiceProvider { KeySize = keySizeInBits };
engine.GenerateKey ();
}
public byte[] Encrypt (byte[] plain) {
return engine.CreateEncryptor ().TransformFinalBlock (plain, 0, plain.Length);
}
public byte[] Decrypt (byte[] encrypted) {
return engine.CreateDecryptor ().TransformFinalBlock (encrypted, 0, encrypted.Length);
}
}
class Program
{
static readonly int MAX_TEXT_LENGTH = 128;
static void Main (string[] args) {
Console.WriteLine ("{0,10}{1,10}{2,10}{3,10}", "Algo", "Key Size", "Input Size", "Encrypted Size");
var tripleDES = new TripleDESEncryption ();
var input = new List<byte> ();
for (int i = 0; i <= MAX_TEXT_LENGTH; i++) {
var plain = input.ToArray ();
var encrypted = tripleDES.Encrypt (plain);
Console.WriteLine ("{0,10}{1,10}{2,10}{3,10}", "Triple DES", keySize, input.Count, encrypted.Length);
input.Add (0x65);
}
Console.ReadLine ();
}
}
A: TripleDESCryptoServiceProvider defaults to using PKCS7-padding. This pads any message to the next multiple of the block-size.
To avoid using padding, just set the Padding-property to PaddingMode.None
new TripleDESCryptoServiceProvider {
KeySize = keySizeInBits,
Padding = PaddingMode.None
};
| |
doc_23526710
|
Is the reference here to the actual condition variable declared as pthread_cond_t
OR
A normal shared variable count whose values decide the signaling and wait.
?
A:
is the reference here to the actual condition variable declared as pthread_cond_t or a normal shared variable count whose values decide the signaling and wait?
The reference is to both.
The mutex makes it so, that the shared variable (count in your question) can be checked, and if the value of that variable doesn't meet the desired condition, the wait that is performed inside pthread_cond_wait() will occur atomically with respect to that check.
The problem being solved with the mutex is that you have two separate operations that need to be atomic:
*
*check the condition of count
*wait inside of pthread_cond_wait() if the condition isn't met yet.
A pthread_cond_signal() doesn't 'persist' - if there are no threads waiting on the pthread_cond_t object, a signal does nothing. So if there wasn't a mutex making the two operations listed above atomic with respect to one another, you could find yourself in the following situation:
*
*Thread A wants to do something once count is non-zero
*Thread B will signal when it increments count (which will set count to something other than zero)
*
*thread "A" checks count and finds that it's zero
*before "A" gets to call pthread_cond_wait(), thread "B" comes along and increments count to 1 and calls pthread_cond_signal(). That call actually does nothing of consequence since "A" isn't waiting on the pthread_cond_t object yet.
*"A" calls pthread_cond_wait(), but since condition variable signals aren't remembered, it will block at this point and wait for the signal that has already come and gone.
The mutex (as long as all threads are following the rules) makes it so that item #2 cannot occur between items 1 and 3. The only way that thread "B" will get a chance to increment count is either before A looks at count or after "A" is already waiting for the signal.
A: A condition variable must always be associated with a mutex, to avoid the race condition where a thread prepares to wait on a condition variable and another thread signals the condition just before the first thread actually waits on it.
More info here
Some Sample:
Thread 1 (Waits for the condition)
pthread_mutex_lock(cond_mutex);
while(i<5)
{
pthread_cond_wait(cond, cond_mutex);
}
pthread_mutex_unlock(cond_mutex);
Thread 2 (Signals the condition)
pthread_mutex_lock(cond_mutex);
i++;
if(i>=5)
{
pthread_cond_signal(cond);
}
pthread_mutex_unlock(cond_mutex);
As you can see in the same above, the mutex protects the variable 'i' which is the cause of the condition. When we see that the condition is not met, we go into a condition wait, which implicitly releases the mutex and thereby allowing the thread doing the signalling to acquire the mutex and work on 'i' and avoid race condition.
Now, as per your question, if the signalling thread signals first, it should have acquired the mutex before doing so, else the first thread might simply check the condition and see that it is not being met and might go for condition wait and since the second thread has already signalled it, no one will signal it there after and the first thread will keep waiting forever.So, in this sense, the mutex is for both the condition & the conditional variable.
A: Per the pthreads docs the reason that the mutex was not separated is that there is a significant performance improvement by combining them and they expect that because of common race conditions if you don't use a mutex, it's almost always going to be done anyway.
https://linux.die.net/man/3/pthread_cond_wait
Features of Mutexes and Condition Variables
It had been suggested that the mutex acquisition and release be
decoupled from condition wait. This was rejected because it is the
combined nature of the operation that, in fact, facilitates realtime
implementations. Those implementations can atomically move a
high-priority thread between the condition variable and the mutex in a
manner that is transparent to the caller. This can prevent extra
context switches and provide more deterministic acquisition of a mutex
when the waiting thread is signaled. Thus, fairness and priority
issues can be dealt with directly by the scheduling discipline.
Furthermore, the current condition wait operation matches existing
practice.
A: I thought that a better use-case might help better explain conditional variables and their associated mutex.
I use posix conditional variables to implement what is called a Barrier Sync. Basically, I use it in an app where I have 15 (data plane) threads that all do the same thing, and I want them all to wait until all data planes have completed their initialization. Once they have all finished their (internal) data plane initialization, then they can start processing data.
Here is the code. Notice I copied the algorithm from Boost since I couldnt use templates in this particular application:
void LinuxPlatformManager::barrierSync()
{
// Algorithm taken from boost::barrier
// In the class constructor, the variables are initialized as follows:
// barrierGeneration_ = 0;
// barrierCounter_ = numCores_; // numCores_ is 15
// barrierThreshold_ = numCores_;
// Locking the mutex here synchronizes all condVar logic manipulation
// from this point until the point where either pthread_cond_wait() or
// pthread_cond_broadcast() is called below
pthread_mutex_lock(&barrierMutex_);
int gen = barrierGeneration_;
if(--barrierCounter_ == 0)
{
// The last thread to call barrierSync() enters here,
// meaning they have all called barrierSync()
barrierGeneration_++;
barrierCounter_ = barrierThreshold_;
// broadcast is the same as signal, but it signals ALL waiting threads
pthread_cond_broadcast(&barrierCond_);
}
while(gen == barrierGeneration_)
{
// All but the last thread to call this method enter here
// This call is blocking, not on the mutex, but on the condVar
// this call actually releases the mutex
pthread_cond_wait(&barrierCond_, &barrierMutex_);
}
pthread_mutex_unlock(&barrierMutex_);
}
Notice that every thread that enters the barrierSync() method locks the mutex, which makes everything between the mutex lock and the call to either pthread_cond_wait() or pthread_mutex_unlock() atomic. Also notice that the mutex is released/unlocked in pthread_cond_wait() as mentioned here. In this link it also mentions that the behavior is undefined if you call pthread_cond_wait() without having first locked the mutex.
If pthread_cond_wait() did not release the mutex lock, then all threads would block on the call to pthread_mutex_lock() at the beginning of the barrierSync() method, and it wouldnt be possible to decrease the barrierCounter_ variables (nor manipulate related vars) atomically (nor in a thread safe manner) to know how many threads have called barrierSync()
So to summarize all of this, the mutex associated with the Conditional Variable is not used to protect the Conditional Variable itself, but rather it is used to make the logic associated with the condition (barrierCounter_, etc) atomic and thread-safe. When the threads block waiting for the condition to become true, they are actually blocking on the Conditional Variable, not on the associated mutex. And a call to pthread_cond_broadcast/signal() will unblock them.
Here is another resource related to pthread_cond_broadcast() and pthread_cond_signal() for an additional reference.
| |
doc_23526711
|
import tensorflow as tf
a = tf.ones([1000])
b = tf.ones([1000])
for i in range(int(1e6)):
a = a * b
My intuition is that this should require very little memory. Just the space for the initial array allocation and a string of commands that utilizes the nodes and overwrites the memory stored in tensor 'a' at each step. But memory usage grows quite rapidly.
What is going on here, and how can I decrease memory usage when I compute a tensor and overwrite it a bunch of times?
Edit:
Thanks to Yaroslav's suggestions the solution turned out to be using a while_loop to minimize the number of nodes on the graph. This works great and is much faster, requires far less memory, and is all contained in-graph.
import tensorflow as tf
a = tf.ones([1000])
b = tf.ones([1000])
cond = lambda _i, _1, _2: tf.less(_i, int(1e6))
body = lambda _i, _a, _b: [tf.add(_i, 1), _a * _b, _b]
i = tf.constant(0)
output = tf.while_loop(cond, body, [i, a, b])
with tf.Session() as sess:
result = sess.run(output)
print(result)
A: Your a*b command translates to tf.mul(a, b), which is equivalent to tf.mul(a, b, g=tf.get_default_graph()). This command adds a Mul node to the current Graph object, so you are trying to add 1 million Mul nodes to the current graph. That's also problematic since you can't serialize Graph object larger than 2GB, there are some checks that may fail once you are dealing with such a large graph.
I'd recommend reading Programming Models for Deep Learning by MXNet folks. TensorFlow is "symbolic" programming in their terminology, and you are treating it as imperative.
To get what you want using Python loop you could construct multiplication op once, and run it repeatedly, using feed_dict to feed updates
mul_op = a*b
result = sess.run(a)
for i in range(int(1e6)):
result = sess.run(mul_op, feed_dict={a: result})
For more efficiency you could use tf.Variable objects and var.assign to avoid Python<->TensorFlow data transfers
| |
doc_23526712
|
Will the timestamp passed to the punctuator always represent milliseconds since UNIX epoch? It'd be helpful to know what Java code is being used to get wall clock time?
A: Yes, for WALL_CLOCK_TIME punctuation the passed timestamp will be the system timestamp, i.e., UNIX epoch ms timestamp, returned by System.currentTimeMillis().
| |
doc_23526713
|
So in my plugin I have this swift File: /ios/src/TestClass.swift
open class TestClass {
@objc public func testTestClass() {
return "It Works!"
}
}
but when I try to generate type information for this class using ns typings ios, No types are generated for this class.
I've also tried annotating the class with @objc(TestClass) and still the same result.
I have some cocapod libraries for which types are being generated.
Is it possible in Nativescript to use Swift files directly? or have I misunderstood the documentation?.
Edit: Turning the package into a coca pod makes everything work.
| |
doc_23526714
|
var req = new XMLHttpRequest();
req.addEventListener("load", reqListener);
req.open('GET', tab.url,false);
req.send();
if(req.status == 200)
alert(req.responseText);
But there is only one issue. I could see the code in the browser View Source section in Chrome But could not able to extract it. I guess this is a dynamic content.
So what I was thinking to achieve is to load the page as view source and then extract it using some chrome extension. But I am not getting the way out how I can get it. I know one thing that I can get the source code of the web page if I scrape the View Source page from the chrome browser.
Kindly suggest me what can be best possibly done to achieve it.
| |
doc_23526715
|
If Index.html runs first.
Main.ts or better say Main.js (after transpilation) can't run by itself as it is a javascript file at the end, and Index.html file is the one that contains the reference of main.js at the bottom before the closing body tag, obviously, the webpack does all this.
Now, let's say from the configuration that is Angular.json file, the angular knows that Index is the main HTML file that should be served first.
Then again, as Main.js is unknown at this point, so there is no way that the angular would know about the root component. And it must throw an error while parsing but it doesn't throw the error. This means, it already knows about app-root, which means Main.js is the entry point. But how is this possible, how a javascript file can be triggered without Html page?
first way:- Angular.json ---> Main.js--->Index.html (but how is this possible? who triggers Main.js?)
second way:- Angular.json--->Index.html---->Main.js (but then how do angular know about ??)
also,
My question is, If I write huge "ts" code inside the App-Component itself, then also it will not be executed even after the flow reaches the <app-root> as angular has no idea about what <app-root> actually is until it finds the Main.js in the body tag and executes it and then only it could know about it.
A: You can tested it easily. Put console.log(1) in the index.html, and console.log(2) in main.ts. First console will be 1, so index.html runs first.
When application is opened, initially index.html start to render and it will render with empty <app-root></app-root> - because Angular app is still to be loaded (you can test that easily with CRTL + U - that is initial content that browser see). That was the big bottleneck for the SEO of SPA apps.
Once the Angular app is loaded, it will dynamically populate the content to the <app-root></app-root> of the index.html.
UPDATE
I missed the part why the error is not thrown when index.html comes to <app-root></app-root> tag. @Ashish explained that really well in his answer (and definitely deserves an upvote), so I will just quote his answer here:
Reason is, index.html is not an Angular template file, it is pure
html, you can place any element inside it and it will never
throw an error. But for Angular template files, during compile time it
checks if is defined or not and throws compile time error if not
defined.
A: As it is clear from Nenad's answer index.html loads first followed by main.js. Once main.js is loaded it renders the root component inside <app-root>.
Your main confusion here seems, Angular encounters <app-root> in html before main.js/main.ts is loaded or executed, then why it doesn't throw any error or exception because if main.js is not loaded that means <app-root> is not defined yet.
Reason is, index.html is not an Angular template file, it is pure html, you can place any <xyz> element inside it and it will never throw an error. But for Angular template files, during compile time it checks if <xyz> is defined or not and throws compile time error if not defined.
| |
doc_23526716
|
The strange thing is that I can not editing "href" attribute. Other attributes can be edited.
This element does not work:
{
type: 'text',
id: 'url',
label: 'URL',
commit: function(element) {
element.setAttribute('href', this.getValue());
},
setup: function(element) {
this.setValue(element.getAttribute('href'));
}
}
When I create a link, href attribute is written. When I editing a link "href" attribute is not changed. Strange!
When I change the code above and rewrite name of attribute for example to "href-s":
{
type: 'text',
id: 'url',
label: 'URL',
commit: function(element) {
element.setAttribute('href-s', this.getValue());
},
setup: function(element) {
this.setValue(element.getAttribute('href-s'));
}
}
Creation and editing attribute works perfectly.
You do not know what's the problem?
Thank you.
A: For various internal reasons, CKEditor uses data-cke-saved-href attribute to duplicate href during runtime. So what in the output would look like
<p>I'm a <a href="http://foo.com">plain link</a>.</p>
<p>I'm a <a href="mailto:foo@bar.com?subject=Subject&body=Body">mailto link</a>.</p>
is actually something different in editor DOM
<p>I'm a <a data-cke-saved-href="http://foo.com" href="http://foo.com">plain link</a>.</p>
<p>I'm a <a data-cke-saved-href="mailto:foo@bar.com?subject=Subject&body=Body" href="mailto:foo@bar.com?subject=Subject&body=Body">mailto link</a>.</p>
Update the data-cke-saved-href attribute each time you change href (i.e., set both attributes in your dialog's commit function) and things should work.
| |
doc_23526717
|
A: Solr is awesome. I don't know your exact use case, but Solr will probably handle it.
A: I never ended up finding any better in-app solutions other than Compass and Hibernate Search. We implemented search with Compass. In retrospect, I find it hard to get answers to my questions and while it works respectably well, I can't help but think that in-app searching is not the way to go. While it's not necessarily a great idea to muck up your environment with several interconnecting applications, from the in-app search land the grass is certainly greener looking over in the Solr pasture (and the ElasticSearch one for that matter).
A: There is always solrj
http://wiki.apache.org/solr/Solrj
| |
doc_23526718
|
In hive, I have a table t with two columns:
Name, Value
Bob, 2
Betty, 4
Robb, 3
I want to do a case when that uses the total of the Value column:
Select
Name
, CASE
When value>0.5*sum(value) over () THEN '0'
When value>0.9*sum(value) over () THEN '1'
ELSE '2'
END as var
From table
I don't like the fact that sum(value) over () is computed twice. Is there a way to compute this only once? Added twist: I want to do this in one query, so without declaring user variables.
I was thinking of scalar queries:
With total as
(Select sum(value) from table)
Select
Name
, CASE
When value>0.5*(select * from total) THEN '0'
When value>0.9*(select * from total) THEN '1'
ELSE ‘2’
END as var
From table;
But this doesn’t work.
TL;DR: Is there a way to simplify the first query without user variables?
A: Don't worry about that. Let the optimizer worry about it. But, you can use a subquery or CTE if you don't want to repeat the expression:
select Name,
(case when value > 0.9 * total then '1'
when value > 0.5 * total then '0'
else '2'
end) as var
From (select t.*, sum(value) over () as total
from table t
) t;
A: Cross join a subquery that fetches the sum to the table:
Select
t.Name
, CASE
When t.value>0.9*tt.value THEN '1'
When t.value>0.5*tt.value THEN '0'
ELSE '2'
END as var
From table t cross join (select sum(value) value from table) tt
and change the order of the WHEN clauses in the CASE expression because as they are, the 2nd case will never succeed.
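The cross-join approach above can be sanity-checked against the sample rows from the question. A minimal sketch using Python's sqlite3, purely for illustration (Hive syntax differs slightly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Name TEXT, Value REAL)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("Bob", 2), ("Betty", 4), ("Robb", 3)])

# Cross join the single-row total; test the 0.9 threshold first
rows = conn.execute("""
    SELECT t.Name,
           CASE
             WHEN t.Value > 0.9 * tt.total THEN '1'
             WHEN t.Value > 0.5 * tt.total THEN '0'
             ELSE '2'
           END AS var
    FROM t CROSS JOIN (SELECT SUM(Value) AS total FROM t) tt
""").fetchall()
# With a total of 9, no single value exceeds 4.5, so every row gets '2'
print(rows)
```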
A: Since I/O is the major factor that slows down Hive queries, we should strive to reduce the number of stages to get better performance.
So it's better not to use a sub-query or CTE here.
Try this SQL with a global window clause:
select
name,
case
when value > 0.9*sum(value) over w then '1'
when value > 0.5*sum(value) over w then '0'
else '2'
end as var
from my_table
window w as (ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
In this case the window clause is the recommended way to reduce code repetition.
Both the windowing and the sum aggregation will be computed only once. You can run explain select ... to confirm that only ONE meaningful MR stage will be launched.
Edit:
1. A simple select clause on a subquery is not something to worry about. It can be pushed down to the last phase of the subquery, avoiding an additional MR stage.
2. Two identical aggregations residing in the same query block will only be evaluated once. So don't worry about potential repeated calculation.
| |
doc_23526719
|
For example, in the following image I have a project "Proj" and I want to create a new issue in one of the subprojects. As one can see, there are more statuses available to choose from than I would like to have for this project. I only need 4 of the 7 statuses displayed.
Is it possible to limit the available statuses by project?
A: Well, it is possible to edit the statuses via the Workflow settings. If you click Settings, then Workflow, then the Status Transitions tab, it allows you to select which statuses can follow a given status. For example, if your issue has a status of New, you can choose which statuses should show in the dropdown.
You can manage Issue Categories and Trackers per project, but as far as I know Redmine does not allow us to manage statuses per project.
| |
doc_23526720
|
Is this possible to be done in asp.net (or asp.net mvc4)?
*I have the username/password
*the site login form is: http://exat.ru/toursearch/
Thanks,
A: I think you are talking about web scraping, and ASP.net might not be the best fit for what you are trying to do. There are a number of web scraping frameworks out there, e.g.
http://scrapy.org/ for python
or
http://spyderwebtech.wordpress.com/2008/08/07/scraping-websites-with-curl/ using CURL
A: You can have a look at 'HttpWebRequest', which can fetch the site data for you, although you may have to parse the response with a custom solution.
| |
doc_23526721
|
I get the following crash frequently while playing the video:
08-03 11:18:25.289 15393 15393 E AndroidRuntime: java.lang.NullPointerException: Attempt to invoke virtual method 'void iqe.a(boolean)' on a null object reference
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at ioy.onFilterTouchEventForSecurity(SourceFile:115)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2091)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTransformedTouchEvent(ViewGroup.java:2561)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewGroup.dispatchTouchEvent(ViewGroup.java:2199)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at com.android.internal.policy.PhoneWindow$DecorView.superDispatchTouchEvent(PhoneWindow.java:2419)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at com.android.internal.policy.PhoneWindow.superDispatchTouchEvent(PhoneWindow.java:1744)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.app.Activity.dispatchTouchEvent(Activity.java:2771)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.support.v7.view.WindowCallbackWrapper.dispatchTouchEvent(WindowCallbackWrapper.java:71)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.support.v7.view.WindowCallbackWrapper.dispatchTouchEvent(WindowCallbackWrapper.java:71)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at com.android.internal.policy.PhoneWindow$DecorView.dispatchTouchEvent(PhoneWindow.java:2380)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.View.dispatchPointerEvent(View.java:9529)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$ViewPostImeInputStage.processPointerEvent(ViewRootImpl.java:4230)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:4096)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3642)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3695)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3661)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3787)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3669)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3844)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3642)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3695)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3661)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3669)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3642)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl.deliverInputEvent(ViewRootImpl.java:5922)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl.doProcessInputEvents(ViewRootImpl.java:5896)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl.enqueueInputEvent(ViewRootImpl.java:5857)
08-03 11:18:25.289 15393 15393 E AndroidRuntime: at android.view.ViewRootImpl$WindowInpu
The problem is that in order to find the root cause of the crash I need a line number in my code, but the crash log contains only obfuscated Google code.
I have tried to google the crash in this method:
Attempt to invoke virtual method 'void iqe.a(boolean)' on a null
object reference
However, this crash happens sometimes with method isu.a(boolean) instead of iqe.a(boolean).
Google and Stack Overflow do not have any content or information about this crash or method.
I have tried to search for this method in my entire project, but this yields no results.
Since the crash log does not contain any line from my project, I find it very difficult to analyse what the problem is.
When the crash occurs:
*
*Play a video; a new Activity opens. Then press the Back button.
*Repeat step 1 10-20 times.
| |
doc_23526722
|
So the query will only involve the Users table and I have to do a query like:-
Select Users
FROM Users
WHERE Dateleft is less than 30 days from the date joined.
Database is MS SQL 2008.
What I have so far is:-
SELECT * FROM Users WHERE (Dateleft >= Datejoined - 30)
But it doesn't work.
http://sqlfiddle.com/#!3/f2da70/14
A: you should use the DATEDIFF function:
http://msdn.microsoft.com/de-de/library/ms189794.aspx
A:
SELECT * FROM [Users] WHERE Dateleft < DATEADD(dd,30,Datejoined)
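The DATEADD logic can be sanity-checked with any SQL engine; here is a small sketch using Python's sqlite3 (sqlite has no DATEADD, so date(x, '+30 days') stands in; the sample names and dates are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Name TEXT, Datejoined TEXT, Dateleft TEXT)")
conn.executemany("INSERT INTO Users VALUES (?, ?, ?)", [
    ("early_leaver", "2014-01-01", "2014-01-15"),  # left within 30 days
    ("late_leaver",  "2014-01-01", "2014-03-01"),  # left after 30 days
])

# date(Datejoined, '+30 days') plays the role of DATEADD(dd, 30, Datejoined)
rows = conn.execute("""
    SELECT Name FROM Users
    WHERE Dateleft < date(Datejoined, '+30 days')
""").fetchall()
print(rows)  # only the user who left within 30 days
```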
| |
doc_23526723
|
I have a content security policy that works as expected on desktop, but it breaks the site on mobile (safari). The content security policy is inside meta tags. I am using nonces and hashes. On mobile I get the error stating that it refused to execute inline script because it violates the Content Security Policy directive which includes the hashes and nonces. The error also states that I need either a hash or nonce in the code to execute the code, but they are already present there, and that's how it works well on desktop. The problem is that on mobile it's acting as if the hashes and nonces didn't exist. Any tips are appreciated.
A: In CSP, if you include a nonce for script-src or style-src, unsafe-inline will be ignored if the browser understands nonces. Therefore, in order to be compatible with older browsers that don't understand CSP2 (for example, Safari on iOS 9 and earlier), include both your nonce AND unsafe-inline.
The newer browsers will follow the nonce and ignore the unsafe-inline. The older browsers will not understand the nonce, and thus fall back to the unsafe-inline.
See https://csp.withgoogle.com/docs/strict-csp.html
script-src 'nonce-{random}' 'unsafe-inline'
The nonce directive means that <script> elements will be allowed to execute only if they contain a nonce attribute matching the randomly-generated value which appears in the policy.
Note: In the presence of a CSP nonce the unsafe-inline directive will be ignored by modern browsers. Older browsers, which don't support nonces, will see unsafe-inline and allow inline scripts to execute.
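For reference, a meta-tag policy that combines a nonce with the 'unsafe-inline' fallback might look like this (the nonce value R4nd0m is just a placeholder; a real one should be freshly random per response):

```html
<meta http-equiv="Content-Security-Policy"
      content="script-src 'nonce-R4nd0m' 'unsafe-inline'">
<!-- CSP2 browsers run this via the matching nonce;
     older browsers ignore the nonce and rely on 'unsafe-inline' -->
<script nonce="R4nd0m">
  console.log('inline script allowed');
</script>
```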
| |
doc_23526724
|
A: When (x-1)! is divided by (x-1) for x > 1, the remainder will always be 0. Since it's given that the remainder is x, you need to find all x such that x is congruent to 0 modulo x-1. (Notice that x itself is congruent to 1 mod x - 1).
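Both claims are easy to verify numerically; a quick Python sketch, just to illustrate the argument:

```python
from math import factorial

# (x-1)! is always divisible by (x-1) for x > 1, since x-1 is one of its factors
assert all(factorial(x - 1) % (x - 1) == 0 for x in range(2, 50))

# So the stated remainder x must itself be congruent to 0 mod (x-1).
# x mod (x-1) equals 1 for every x > 2, and 0 only when x = 2.
solutions = [x for x in range(2, 50) if x % (x - 1) == 0]
print(solutions)  # [2]
```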
| |
doc_23526725
|
$test = array("a","b","c");
$treevar = "test";
${$treevar}['k'] = array(1,2,3); # Works
$letter = "l";
${$treevar[$letter]} = array(1,2,3); # Gives error
$treevar = "test['m']";
${$treevar} = array(1,2,3); # Does nothing (visible)
$treevar = 'test["n"]';
${$treevar} = array(1,2,3); # Does nothing (visible)
$test["o"] = array(1,2,3); # Works
print_r($test);
gives
Warning: Illegal string offset 'l' in /home/backiiq199/domains/mvantloo.nl/public_html/kb/index_dropbox.php on line 17
Array
(
[0] => a
[1] => b
[2] => c
[k] => Array
(
[0] => 1
[1] => 2
[2] => 3
)
[o] => Array
(
[0] => 1
[1] => 2
[2] => 3
)
)
What I eventually want is to build a <UL>/<LI> tree from a directory tree including folders and files.
As an intermediate step, I want to build a multi-level array, so I can sort and filter and all that kind of fun, then transform the array into a <UL>/<LI> structure.
To create the array, I want something like this to work (while it doesn't):
$treevar = "tree['level1']['level2']['level3']";
${$treevar} = array(1,2,3);
So I first build the entire variable name, including all array elements, and then create the variable variable.
The main reason is that the order of files and folders is chaotic, so I need to be able to add all missing parts of the tree. I want to build this by looping through the path structure.
I hope I'm being a bit clear. Other ideas are welcome too, of course.
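Variable variables can't reach into nested array offsets like "tree['level1']['level2']"; the usual pattern is instead to walk the path segments and create missing levels as you go. The same idea, sketched in Python rather than PHP purely for illustration:

```python
def insert_path(tree, path, value):
    """Walk/create one nested dict per path segment, then set the leaf."""
    node = tree
    for part in path[:-1]:
        node = node.setdefault(part, {})   # create missing levels on the fly
    node[path[-1]] = value
    return tree

tree = {}
insert_path(tree, ["level1", "level2", "level3"], [1, 2, 3])
insert_path(tree, ["level1", "other"], [4])   # order of arrival doesn't matter
print(tree)
# {'level1': {'level2': {'level3': [1, 2, 3]}, 'other': [4]}}
```

In PHP the equivalent trick is walking the array with a reference (`$node = &$node[$part];`), which also handles files and folders arriving in chaotic order.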
| |
doc_23526726
|
So before editing it, I thought I should try compiling it as it is to see if this works fine. If not, I would have to solve that problem first before editing the code.
And here I am, since it's not working. I get errors because of "undefined references" and I don't know why.
99% of the errors are because emlrtAlias and other functions have undefined references. Those functions are declared in the emlrt.h file, but whether I include the folder in the path or copy the file into the directory with all the .cpp files, it's still not working and I don't know why.
Here is the code I am compiling with as well as the errors:
mex('-v','-compatibleArrayDims',['-I',matlabroot,'\extern\include'],'*.cpp')
Verbose mode is on.
... Looking for compiler 'MinGW64 Compiler (C++)' ...
... Looking for environment variable 'MW_MINGW64_LOC' ...Yes ('C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset').
... Looking for file 'C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++.exe' ...Yes.
... Looking for folder 'C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset' ...Yes.
... Looking for environment variable 'MW_MINGW64_LOC' ...Yes ('C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset').
... Executing command 'C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -dumpmachine' ...Yes ('x86_64-w64-mingw32').
... Looking for environment variable 'MW_MINGW64_LOC' ...Yes ('C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset').
... Executing command 'C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -dumpversion' ...Yes ('6.3.0').
Found installed compiler 'MinGW64 Compiler (C++)'.
Set PATH = C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin;C:\Program Files\MATLAB\R2020b\extern\include\win64;C:\Program Files\MATLAB\R2020b\extern\include;C:\Program Files\MATLAB\R2020b\simulink\include;C:\Program Files\MATLAB\R2020b\lib\win64;C:\Program Files (x86)\ImageMagick-7.0.8-Q16-HDRI;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0\;C:\WINDOWS\System32\OpenSSH\;C:\Program Files\MATLAB\R2020b\runtime\win64;C:\Program Files\MATLAB\R2020b\bin;C:\Program Files\MATLAB\R2019b\runtime\win64;C:\Program Files\MATLAB\R2019b\bin;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\dotnet\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\Program Files (x86)\IncrediBuild;C:\Program Files\MiKTeX\miktex\bin\x64\;C:\Users\Marc\AppData\Local\Microsoft\WindowsApps;
Set INCLUDE = C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\include;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include\c++;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include\c++\x86_64-w64-mingw32;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include\c++\backward;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\x86_64-w64-mingw32\include;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\include;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include\c++;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include\c++\x86_64-w64-mingw32;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib\gcc\x86_64-w64-mingw32\6.3.0\include\c++\backward;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\x86_64-w64-mingw32\include;
Set LIB = C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib;;C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\lib;;
Set MW_TARGET_ARCH = win64;win64;
Set LIBPATH = C:\Program Files\MATLAB\R2020b\extern\lib\win64;C:\Program Files\MATLAB\R2020b\extern\lib\win64;
Options file details
-------------------------------------------------------------------
Compiler location: C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset
Options file: C:\Users\Marc\AppData\Roaming\MathWorks\MATLAB\R2020b\mex_C++_win64.xml
CMDLINE2 : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -m64 -Wl,--no-undefined -shared -static -s -Wl,"C:\Program Files\MATLAB\R2020b/extern/lib/win64/mingw64/exportsmexfileversion.def" C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\rt_nonfinite.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_info.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\cpp_mexapi_version.obj -L"C:\Program Files\MATLAB\R2020b\extern\lib\win64\mingw64" -llibmx -llibmex -llibmat -lm -llibmwlapack -llibmwblas -llibMatlabDataArray -llibMatlabEngine -o grain_struct_grower_para.mexw64
CXX : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++
COMPILER : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++
DEFINES : -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE
MATLABMEX : -DMATLAB_MEX_FILE
CFLAGS : -fexceptions -fno-omit-frame-pointer
CXXFLAGS : -fexceptions -fno-omit-frame-pointer -std=c++11
INCLUDE : -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include"
CXXOPTIMFLAGS : -O2 -fwrapv -DNDEBUG
CXXDEBUGFLAGS : -g
LDXX : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++
LINKER : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++
LDFLAGS : -m64 -Wl,--no-undefined
LDTYPE : -shared -static
LINKEXPORT : -Wl,"C:\Program Files\MATLAB\R2020b/extern/lib/win64/mingw64/mexFunction.def"
LINKEXPORTVER : -Wl,"C:\Program Files\MATLAB\R2020b/extern/lib/win64/mingw64/exportsmexfileversion.def"
LIBLOC : C:\Program Files\MATLAB\R2020b\extern\lib\win64\mingw64
LINKLIBS : -L"C:\Program Files\MATLAB\R2020b\extern\lib\win64\mingw64" -llibmx -llibmex -llibmat -lm -llibmwlapack -llibmwblas -llibMatlabDataArray -llibMatlabEngine
LDOPTIMFLAGS : -s
LDDEBUGFLAGS : -g
OBJEXT : .obj
LDEXT : .mexw64
SETENV : set COMPILER=C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\gcc
set CXXCOMPILER=C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++
set COMPFLAGS=-c -fexceptions -fno-omit-frame-pointer -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -DMATLAB_MEX_FILE
set CXXCOMPFLAGS=-c -fexceptions -fno-omit-frame-pointer -std=c++11 -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -DMATLAB_MEX_FILE
set OPTIMFLAGS=-O2 -fwrapv -DNDEBUG
set DEBUGFLAGS=-g
set LINKER=C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\gcc
set CXXLINKER=C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++
set LINKFLAGS=-m64 -Wl,--no-undefined -shared -static -L"C:\Program Files\MATLAB\R2020b\extern\lib\win64\mingw64" -llibmx -llibmex -llibmat -lm -llibmwlapack -llibmwblas -llibMatlabDataArray -llibMatlabEngine -Wl,"C:\Program Files\MATLAB\R2020b/extern/lib/win64/mingw64/mexFunction.def"
set LINKDEBUGFLAGS=-g
set NAME_OUTPUT= -o "%OUTDIR%%MEX_NAME%%MEX_EXT%"
MINGWROOT : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset
MINGWTARGET : x86_64-w64-mingw32
VERSION : 6.3.0
MATLABROOT : C:\Program Files\MATLAB\R2020b
ARCH : win64
SRC : "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\grain_struct_grower_para.cpp";"C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\rt_nonfinite.cpp";"C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\_coder_grain_struct_grower_para_info.cpp";"C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\_coder_grain_struct_grower_para_mex.cpp";"C:\Program Files\MATLAB\R2020b\extern\version\cpp_mexapi_version.cpp"
OBJ : C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj;C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\rt_nonfinite.obj;C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_info.obj;C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj;C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\cpp_mexapi_version.obj
OBJS : C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\rt_nonfinite.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_info.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\cpp_mexapi_version.obj
SRCROOT : C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\grain_struct_grower_para
DEF : C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.def
EXP : "grain_struct_grower_para.exp"
LIB : "grain_struct_grower_para.lib"
EXE : grain_struct_grower_para.mexw64
ILK : "grain_struct_grower_para.ilk"
MANIFEST : "grain_struct_grower_para.mexw64.manifest"
TEMPNAME : grain_struct_grower_para
EXEDIR :
EXENAME : grain_struct_grower_para
OPTIM : -O2 -fwrapv -DNDEBUG
LINKOPTIM : -s
CMDLINE1_0 : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\grain_struct_grower_para.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj
CMDLINE1_1 : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\rt_nonfinite.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\rt_nonfinite.obj
CMDLINE1_2 : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\_coder_grain_struct_grower_para_info.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_info.obj
CMDLINE1_3 : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\_coder_grain_struct_grower_para_mex.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj
CMDLINE1_4 : C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Program Files\MATLAB\R2020b\extern\version\cpp_mexapi_version.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\cpp_mexapi_version.obj
-------------------------------------------------------------------
Building with 'MinGW64 Compiler (C++)'.
C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\grain_struct_grower_para.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj
C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\rt_nonfinite.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\rt_nonfinite.obj
C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\_coder_grain_struct_grower_para_info.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_info.obj
C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Users\Marc\Promo\Promo_matlab\linked_grains_modelCPP\codegen\mex\grain_struct_grower_para\_coder_grain_struct_grower_para_mex.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj
C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -c -DMX_COMPAT_32 -DMATLAB_DEFAULT_RELEASE=R2017b -DUSE_MEX_CMD -m64 -DMATLAB_MEX_FILE -I"C:\Program Files\MATLAB\R2020b\extern\include" -I"C:\Program Files\MATLAB\R2020b/extern/include" -I"C:\Program Files\MATLAB\R2020b/simulink/include" -fexceptions -fno-omit-frame-pointer -std=c++11 -O2 -fwrapv -DNDEBUG "C:\Program Files\MATLAB\R2020b\extern\version\cpp_mexapi_version.cpp" -o C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\cpp_mexapi_version.obj
C:\ProgramData\MATLAB\SupportPackages\R2020b\3P.instrset\mingw_w64.instrset\bin\g++ -m64 -Wl,--no-undefined -shared -static -s -Wl,"C:\Program Files\MATLAB\R2020b/extern/lib/win64/mingw64/exportsmexfileversion.def" C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\rt_nonfinite.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_info.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\cpp_mexapi_version.obj -L"C:\Program Files\MATLAB\R2020b\extern\lib\win64\mingw64" -llibmx -llibmex -llibmat -lm -llibmwlapack -llibmwblas -llibMatlabDataArray -llibMatlabEngine -o grain_struct_grower_para.mexw64
Error using mex
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x29): undefined
reference to `emlrtAlias'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x35): undefined
reference to `emlrtAlias'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x6b): undefined
reference to `emlrtCheckBuiltInR2012b'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x74): undefined
reference to `emlrtMxGetData'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x80): undefined
reference to `emlrtDestroyArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x89): undefined
reference to `emlrtDestroyArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x92): undefined
reference to `emlrtDestroyArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0xd0): undefined
reference to `emlrtAlias'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0xdc): undefined
reference to `emlrtAlias'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x112): undefined
reference to `emlrtCheckBuiltInR2012b'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x11b): undefined
reference to `emlrtMxGetData'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x128): undefined
reference to `emlrtDestroyArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x131): undefined
reference to `emlrtDestroyArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x13a): undefined
reference to `emlrtDestroyArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x17b): undefined
reference to `emlrtCreateNumericArray'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\grain_struct_grower_para.obj:grain_struct_grower_para.cpp:(.text+0x189): undefined
reference to `emlrtMxSetData'
.
.
.
.
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x1ba):
undefined reference to `omp_destroy_nest_lock'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x1dc):
undefined reference to `omp_destroy_lock'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x1e8):
undefined reference to `omp_destroy_nest_lock'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x1f1):
undefined reference to `emlrtReportParallelRunTimeError'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x1f9):
undefined reference to `emlrtCleanupOnException'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x229):
undefined reference to `omp_get_num_procs'
C:\Users\Marc\AppData\Local\Temp\mex_788353070741094_6308\_coder_grain_struct_grower_para_mex.obj:_coder_grain_struct_grower_para_mex.cpp:(.text+0x246):
undefined reference to `emlrtCreateRootTLS'
collect2.exe: error: ld returned 1 exit status
Would be glad if somebody could tell me where the problem is.
| |
doc_23526727
|
I followed his indications on what to put in my html/css/js files, but after a week of not getting anywhere I came here to ask for a little help.
Here is my javascript file :
'use strict';
angular
.module('myApp', ['mwl.calendar', 'ui.bootstrap', 'ngTouch', 'ngAnimate', 'oc.lazyLoad', 'hljs'])
.config(function(calendarConfig) {
calendarConfig.dateFormatter = 'moment';
})
.controller('KitchenSinkCtrl', function($http, $rootScope, $compile, $q, $location, $ocLazyLoad, plunkGenerator, moment, alert) {
var vm = this;
vm.calendarView = 'month';
vm.viewDate = new Date();
vm.events = [
{
title: 'Un event',
type: 'warning',
startsAt: moment().startOf('week').subtract(2, 'days').add(8, 'hours').toDate(),
endsAt: moment().startOf('week').add(1, 'week').add(9, 'hours').toDate(),
draggable: true,
resizable: true
}
];
vm.isCellOpen = true;
vm.eventClicked = function(event) {
alert.show('Clicked', event);
};
vm.eventEdited = function(event) {
alert.show('Edited', event);
};
vm.eventDeleted = function(event) {
alert.show('Deleted', event);
};
vm.eventTimesChanged = function(event) {
alert.show('Dropped or resized', event);
};
vm.toggle = function($event, field, event) {
$event.preventDefault();
$event.stopPropagation();
event[field] = !event[field];
};
});
Here is my HTML file :
<!DOCTYPE html>
<!--[if lt IE 7]> <html lang="en" ng-app="myApp" class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]-->
<!--[if IE 7]> <html lang="en" ng-app="myApp" class="no-js lt-ie9 lt-ie8"> <![endif]-->
<!--[if IE 8]> <html lang="en" ng-app="myApp" class="no-js lt-ie9"> <![endif]-->
<!--[if gt IE 8]><!--> <html lang="en" ng-app="myApp" class="no-js"> <!--<![endif]-->
<head>
<meta charset="utf-8">
<title>Calendrier</title>
<meta name="description" content="Calendrier">
<meta name="viewport" content="width=device-width">
<style type="text/css">
[ng-cloak] {
display: none;
}
</style>
<link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet">
<link href="bower_components/angular-bootstrap-calendar/dist/css/angular-bootstrap-calendar.min.css" rel="stylesheet">
<script src="bower_components/angular-bootstrap-calendar/docs/examples/kitchen-sink/javascript.js"></script>
<script src="bower_components/angular-bootstrap-calendar/dist/js/angular-bootstrap-calendar-tpls.min.js"></script>
<link href="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.3.0/styles/github.min.css" rel="stylesheet">
<link href="app.css" rel="stylesheet">
</head>
<div>
<h2 class="text-center ng-binding">{{ vm.calendarTitle }}</h2>
<div class="row">
<div class="col-md-6 text-center">
<div class="btn-group">
<button
class="btn btn-primary"
mwl-date-modifier
date="vm.viewDate"
decrement="vm.calendarView">
Précédent
</button>
<button
class="btn btn-default"
mwl-date-modifier
date="vm.viewDate"
set-to-today>
Aujourd'hui
</button>
<button
class="btn btn-primary"
mwl-date-modifier
date="vm.viewDate"
increment="vm.calendarView">
Suivant
</button>
</div>
</div>
<br class="visible-xs visible-sm">
<div class="col-md-6 text-center">
<div class="btn-group">
<label class="btn btn-primary" ng-model="vm.calendarView" uib-btn-radio="'year'">Année</label>
<label class="btn btn-primary" ng-model="vm.calendarView" uib-btn-radio="'month'">Mois</label>
<label class="btn btn-primary" ng-model="vm.calendarView" uib-btn-radio="'week'">Semaine</label>
<label class="btn btn-primary" ng-model="vm.calendarView" uib-btn-radio="'day'">Jour</label>
</div>
</div>
</div>
<br>
<mwl-calendar
events="vm.events"
view="vm.calendarView"
view-title="vm.calendarTitle"
view-date="vm.viewDate"
on-event-click="vm.eventClicked(calendarEvent)"
on-event-times-changed="vm.eventTimesChanged(calendarEvent); calendarEvent.startsAt = calendarNewEventStart; calendarEvent.endsAt = calendarNewEventEnd"
edit-event-html="'<i class=\'glyphicon glyphicon-pencil\'></i>'"
delete-event-html="'<i class=\'glyphicon glyphicon-remove\'></i>'"
on-edit-event-click="vm.eventEdited(calendarEvent)"
on-delete-event-click="vm.eventDeleted(calendarEvent)"
cell-is-open="vm.isCellOpen"
day-view-start="06:00"
day-view-end="22:00"
day-view-split="30"
cell-modifier="vm.modifyCell(calendarCell)"
class="ng-isolate-scope">
</mwl-calendar>
<br><br><br>
<h3 id="event-editor">
Modifier les events
<button
class="btn btn-primary pull-right"
ng-click="vm.events.push({title: 'New event', type: 'important', draggable: true, resizable: true})">
Ajouter
</button>
<div class="clearfix"></div>
</h3>
<table class="table table-bordered">
<thead>
<tr>
<th>Libellé</th>
<th>Type</th>
<th>Date</th>
<th>Durée</th>
<th>Annuler</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="event in vm.events track by $index">
<td>
<input
type="text"
class="form-control"
ng-model="event.title">
</td>
<td>
<select ng-model="event.type" class="form-control">
<option value="important">Réunion</option>
<option value="warning">Evènement</option>
<option value="info">Visite</option>
</select>
</td>
<td>
<p class="input-group" style="max-width: 250px">
<input
type="text"
class="form-control"
readonly
uib-datepicker-popup="dd MMMM yyyy"
ng-model="event.startsAt"
is-open="event.startOpen"
close-text="Close" >
<span class="input-group-btn">
<button
type="button"
class="btn btn-default"
ng-click="vm.toggle($event, 'startOpen', event)">
<i class="glyphicon glyphicon-calendar"></i>
</button>
</span>
</p>
<uib-timepicker
ng-model="event.startsAt"
hour-step="1"
minute-step="15"
show-meridian="true">
</uib-timepicker>
</td>
<td>
<p class="input-group" style="max-width: 250px">
<input
type="text"
class="form-control"
readonly
uib-datepicker-popup="dd MMMM yyyy"
ng-model="event.endsAt"
is-open="event.endOpen"
close-text="Close">
<span class="input-group-btn">
<button
type="button"
class="btn btn-default"
ng-click="vm.toggle($event, 'endOpen', event)">
<i class="glyphicon glyphicon-calendar"></i>
</button>
</span>
</p>
<uib-timepicker
ng-model="event.endsAt"
hour-step="1"
minute-step="15"
show-meridian="true">
</uib-timepicker>
</td>
<td>
<button
class="btn btn-danger"
ng-click="vm.events.splice($index, 1)">
Supprimer
</button>
</td>
</tr>
</tbody>
</table>
</div>
<script src="//cdnjs.cloudflare.com/ajax/libs/moment.js/2.13.0/moment.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/interact.js/1.2.4/interact.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular-touch.min.js"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.5.5/angular-animate.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/1.3.2/ui-bootstrap-tpls.min.js"></script>
<script src="//cdn.rawgit.com/ocombe/ocLazyLoad/1.0.9/dist/ocLazyLoad.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/highlight.js/9.3.0/highlight.min.js"></script>
<script src="//cdn.rawgit.com/pc035860/angular-highlightjs/v0.6.1/build/angular-highlightjs.min.js"></script>
<script src="bower_components/angular-bootstrap-calendar/dist/js/angular-bootstrap-calendar-tpls.min.js"></script>
<script src="bower_components/angular-bootstrap-calendar/docs/docs.js"></script>
<script src="bower_components/angular-bootstrap-calendar/docs/examples/helpers.js"></script>
</html>
I don't think the css file is needed but please tell me if you need it.
And here is a quick screenshot of my project structure, because it might be what I'm doing wrong. Since I'm using mattlewis92's calendar, I'm not sure where to put my files, and what dependencies to add.
http://www.hostingpics.net/viewer.php?id=373355574Ev.png
Thanks for your help; if anything else is needed, please tell me so.
A: Well, after looking at the demo code in the plunker edit, something I should have done way earlier, I found that the problem was that I was referring to "myApp", in both my html and js files, instead of "mwl.calendar.docs".
Now I have found some other issues, like the buttons to display a small calendar to choose a date when you create an event not working, or the default date not being set when I open it, so nothing displays until I try to change the date, but I should be able to solve those by myself.
(Even so, I don't mind receiving advice on what I've done so far. I know it's a lot of copy-paste in the end, but since I'm only a beginner, I might have made some mistakes even by simply doing that.)
| |
doc_23526728
|
Exception: cvc-complex-type.2.1: Element 'Date' must have no character or element information item [children], because the type's
content type is empty.
Basically in my XML file Date element is empty
My XML Date element:
<Date> </Date>
Generated XSD file:
<xs:element name="Date">
<xs:complexType/>
</xs:element>
Based on this I created the XSD file, and when validating I get the above exception.
But if I do it without the space between the Date tags:
Example:
<Date></Date>
Then it works fine. How can I handle that empty space?
A: The generator is mistaken:
<Date> </Date>
and
<Date></Date>
are not equivalent.
To accept both, use the following definition for Date instead:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="Date">
<xs:simpleType>
<xs:restriction base="xs:string">
<xs:whiteSpace value="collapse"/>
<xs:pattern value=""/>
</xs:restriction>
</xs:simpleType>
</xs:element>
</xs:schema>
Perhaps there is a setting where you can direct your generator not to ignore whitespace in Date such that it could be guided toward generating the above definition automatically; otherwise, you may just have to replace it manually.
If you're interested in actually allowing dates in your Date element too, see Allow XSD date element to be empty string.
A: It looks to me as if everything is behaving as it should. If the schema defines an element to have empty content, then whitespace content is not allowed. If you want to allow whitespace, then don't define an empty content model. You could for example define it with a simple type of xs:string restricted by a pattern that only permits whitespace.
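As a sketch of that whitespace-only alternative (untested; `\s*` matches a value consisting of zero or more whitespace characters, so both `<Date></Date>` and `<Date> </Date>` would validate):
<xs:element name="Date">
  <xs:simpleType>
    <xs:restriction base="xs:string">
      <!-- zero or more whitespace characters: accepts both empty and blank content -->
      <xs:pattern value="\s*"/>
    </xs:restriction>
  </xs:simpleType>
</xs:element>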
| |
doc_23526729
|
I've been using this code to generate US city names with an LSTM model. The code works fine and I do manage to get city names.
Right now, I am trying to save the model so I can load it in a different application without training the model again.
Here is the code of my basic application :
from __future__ import absolute_import, division, print_function
import os
from six import moves
import ssl
import tflearn
from tflearn.data_utils import *
path = "US_cities.txt"
maxlen = 20
X, Y, char_idx = textfile_to_semi_redundant_sequences(
path, seq_maxlen=maxlen, redun_step=3)
# --- Create LSTM model
g = tflearn.input_data(shape=[None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 512, return_seq=True, name="lstm1")
g = tflearn.dropout(g, 0.5, name='dropout1')
g = tflearn.lstm(g, 512, name='lstm2')
g = tflearn.dropout(g, 0.5, name='dropout')
g = tflearn.fully_connected(g, len(char_idx), activation='softmax', name='fc')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
learning_rate=0.001)
# --- Initializing model and loading
model = tflearn.models.generator.SequenceGenerator(g, char_idx)
model.load('myModel.tfl')
print("Model is now loaded !")
#
# Main Application
#
while(True):
    user_choice = input("Do you want to generate a U.S. city names ? [y/n]")
    if user_choice == 'y':
        seed = random_sequence_from_textfile(path, 20)
        print("-- Test with temperature of 1.5 --")
        model.generate(20, temperature=1.5, seq_seed=seed, display=True)
    else:
        exit()
And here is what I get as an output :
Do you want to generate a U.S. city names ? [y/n]y
-- Test with temperature of 1.5 --
rk
Orange Park AcresTraceback (most recent call last):
File "App.py", line 46, in <module>
model.generate(20, temperature=1.5, seq_seed=seed, display=True)
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/generator.py", line 216, in generate
preds = self._predict(x)[0]
File "/usr/local/lib/python3.5/dist-packages/tflearn/models/generator.py", line 180, in _predict
return self.predictor.predict(feed_dict)
File "/usr/local/lib/python3.5/dist-packages/tflearn/helpers/evaluator.py", line 69, in predict
o_pred = self.session.run(output, feed_dict=feed_dict).tolist()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 717, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 894, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 25, 61) for Tensor 'InputData/X:0', which has shape '(?, 20, 61)'
Unfortunately, I can't see why the shape has changed when using generate() in my app. Could anyone help me solve this problem?
Thank you in advance
William
A: SOLVED?
One solution would be to simply add "modes" to the Python script thanks to the argument parser:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("mode", help="Train or/and test", nargs='+', choices=["train","test"])
args = parser.parse_args()
And then
if "train" in args.mode:  # args.mode is a list, because of nargs='+'
    # define your model
    # train the model
    model.save('my_model.tflearn')
if "test" in args.mode:
    model.load('my_model.tflearn')
    # do whatever you want with your model
I don't really understand why this works, or why loading a model from a different script doesn't.
But I guess this should be fine for the moment...
| |
doc_23526730
|
I have all the movement of the pieces down apart from the Pawn, which is the hardest because the Pawn has to be able to make two different moves.
The Pawn should be able to move two squares at the start and then only one after that.
Currently I have set the pawn to only move two squares, but I am stuck on getting the rest of the logic to work.
I have been working with the idea of an if/else statement.
I could use some help writing it.
Here is the code so far for the Pawn, and I have included comments for your use.
Update to the problem: it is only the black pawn that is not moving right. I was able to set the white one up the right way but not the black one, and I don't know why it doesn't work.
//Pawn Movement
private boolean isValidPawnMove(int sourceRow, int sourceColumn, int targetRow, int targetColumn) {
    boolean isValid = false;
    if( isTargetLocationFree() ){
        if( sourceColumn == targetColumn){
            if( sourcePiece.getColor() == Piece.COLOR_WHITE ){
                // White Pawn
                if( sourceRow+1 == targetRow || sourceRow == 1 && targetRow == 3){//Pawns can move 2 at the start then only 1 after that
                    isValid = true;
                }else{
                    isValid = false;
                }
            }
            else{
                // Black Pawn
                if( sourceRow-1 == targetRow || sourceRow == -1 && targetRow == -3){
                    isValid = true;
                }else{
                    isValid = false;
                }
            }
        }else{
            //If you try to move left or right into a different Column
            isValid = false;
        }
    //Take square occupied by an opponent's piece, which is diagonally in front
    }else if( isTargetLocationCaptureable() ){
        if( sourceColumn+1 == targetColumn || sourceColumn-1 == targetColumn){
            //One column to the right or left
            if( sourcePiece.getColor() == Piece.COLOR_WHITE ){
                //White Piece
                if( sourceRow+1 == targetRow ){
                    //Move one up
                    isValid = true;
                }else{
                    //Not moving one up
                    isValid = false;
                }
            }else{
                //Black Piece
                if( sourceRow-1 == targetRow ){
                    //Move one down
                    isValid = true;
                }else{
                    //Not moving one down
                    isValid = false;
                }
            }
        }else{
            //Not one column to the left or right
            isValid = false;
        }
    }
    return isValid;
}
Thanks for any help you can provide
A: In your class for the pawns, you can have an instance boolean, say,
boolean hasMoved
and initially make it false. If this boolean is false, then the pawn can move one OR two units. Whenever you move a pawn, you check this boolean, and if it's false, set it to true after moving it. That should work out.
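A minimal sketch of that idea in Java (the class and method names here are hypothetical, not taken from the question's code, and captures are left out):

```java
// Minimal sketch of the hasMoved idea; Pawn, isValidStep and markMoved
// are hypothetical names, not taken from the question's code.
public class Pawn {
    private boolean hasMoved = false;

    // White pawns move toward higher rows, black pawns toward lower rows.
    public boolean isValidStep(int sourceRow, int targetRow, boolean isWhite) {
        int direction = isWhite ? 1 : -1;
        if (targetRow == sourceRow + direction) {
            return true;                                   // normal single step
        }
        // Two-square advance is only allowed while the pawn has never moved.
        return !hasMoved && targetRow == sourceRow + 2 * direction;
    }

    // Call this once the move has actually been executed on the board.
    public void markMoved() {
        hasMoved = true;
    }

    public static void main(String[] args) {
        Pawn pawn = new Pawn();
        System.out.println(pawn.isValidStep(1, 3, true));  // first move, two squares: allowed
        pawn.markMoved();
        System.out.println(pawn.isValidStep(3, 5, true));  // two squares again: rejected
        System.out.println(pawn.isValidStep(3, 4, true));  // single step: still fine
    }
}
```

The same flag works for both colors, since only the step direction differs.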
A: I think the easiest solution is to explicitly check the source and target rows as white pawns can only move two forward from the second rank, so your logic becomes (for white):
if( sourceRow+1 == targetRow || sourceRow == 2 && targetRow == 4) {
Obviously you will also need to check that (sourceColumn, 3) is also empty.
A: In chess a pawn can choose whether it wants to go one or two squares forward if it has never moved and therefore still stands on the second row. You have to check whether it's on the second row and whether the two squares in front of it are empty, not only the "target location".
if( sourcePiece.getColor() == Piece.COLOR_WHITE ){
    // White Pawn
    if( sourceRow+1 == targetRow ){
        isValid = true;
    } else if (sourceRow+2 == targetRow && sourceRow == ROW_2) {
        if (isFreeSquare(sourceColumn+1) && isFreeSquare(sourceColumn+2)) {
            isValid = true;
        } else {
            isValid = false;
        }
    } else {
        isValid = false;
    }
}
else{
    // Black Pawn
    ...
}
You can leave out the isFreeSquare(sourceColumn+2) check because you already covered it with isTargetLocationFree(). For black you have to ask if the pawn is still on ROW_7.
| |
doc_23526731
|
Currently it's above, as shown in the image below.
Does anyone know how to do this?
My layout is:
1st, for the Facebook comments:
<reference name="product.info">
    <block type="facebookcomments/catalog_product_comments" name="product.info.facebookcomments" template="facebookcomments/catalog/product/comments.phtml"/>
</reference>
2nd, for the Facebook like:
<reference name="product.info">
    <block type="facebookilike/catalog_product_facebookilike" name="product.info.facebookilike" template="facebookilike/catalog/product/facebookilike.phtml"/>
</reference>
I'm still not getting this to work.
A: In catalog_product_view handle of your theme/module layout:
(Note: I'm pseudo-coding based on the debug block names in your screenshot. I may not have the correct block/template/handles, but you get the idea.)
<reference name="content">
<block type="mageplace_facebook/like_catalog_product_facebooklike"
after="product.info"
name="facebook.like.block.name"
template="path/to/facebooklike/template.phtml" />
</reference>
Check the layout xml of the Facebook Comments module, as it's obviously in the right place.
| |
doc_23526732
|
A: I think that's a problem, because you can only add a role to an existing app via the Graph API if you have a User Access Token of one of the administrators of this app:
https://developers.facebook.com/docs/graph-api/reference/app#roles
An App Access Token (which you could generate with App Id and App Secret) is not enough.
| |
doc_23526733
|
state     | district            | month    | rainfall | max_temp | min_temp | max_rh | min_rh | wind_speed | advice
----------|---------------------|----------|----------|----------|----------|--------|--------|------------|-------
Orissa    | Kendrapada          | february | 0.0      | 34.6     | 19.4     | 88.2   | 29.6   | 12.0       | chances of foot rot disease in paddy crop; apply urea at 3 weeks after transplanting at active tillering stage for paddy;......
Jharkhand | Saraikela Kharsawan | february | 0        | 35.2     | 16.6     | 29.4   | 11.2   | 3.6        | provide straw mulch and go for intercultural operations to avoid moisture losses from soil; chance of leaf blight disease in potato crop; .......
Below is my code through which the model is made.
def create_model():
    input1 = tf.keras.layers.Input(shape=(1,), name='state')
    input2 = tf.keras.layers.Input(shape=(1,), name='district')
    input3 = tf.keras.layers.Input(shape=(1,), name='month')
    input4 = tf.keras.layers.Input(shape=(1,), name='rainfall')
    input5 = tf.keras.layers.Input(shape=(1,), name='max_temp')
    input6 = tf.keras.layers.Input(shape=(1,), name='min_temp')
    input7 = tf.keras.layers.Input(shape=(1,), name='max_rh')
    input8 = tf.keras.layers.Input(shape=(1,), name='min_rh')
    input9 = tf.keras.layers.Input(shape=(1,), name='wind_speed')
    xz = [input1, input2, input3, input4, input5, input6, input7, input8, input9]
    x1 = layers.Dense(128, activation='relu')(input1)
    x2 = layers.Dense(128, activation='relu')(input2)
    x3 = layers.Dense(128, activation='relu')(input3)
    x4 = layers.Dense(128, activation='relu')(input4)
    x5 = layers.Dense(128, activation='relu')(input5)
    x6 = layers.Dense(128, activation='relu')(input6)
    x7 = layers.Dense(128, activation='relu')(input7)
    x8 = layers.Dense(128, activation='relu')(input8)
    x9 = layers.Dense(128, activation='relu')(input9)
    base_model = layers.Add()([x1, x2, x3, x4, x5, x6, x7, x8, x9])
    first_output = layers.Dense(30, name='output_1')(base_model)
    second_output = layers.Dense(30, name='output_2')(base_model)
    third_output = layers.Dense(30, name='output_3')(base_model)
    fourth_output = layers.Dense(30, name='output_4')(base_model)
    fifth_output = layers.Dense(30, name='output_5')(base_model)
    models = tf.keras.Model(inputs=xz,
                            outputs=[first_output, second_output, third_output, fourth_output, fifth_output])
    return models
The code for my model compilation.
model = create_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(optimizer=optimizer,
              loss={'output_1': 'categorical_crossentropy',
                    'output_2': 'categorical_crossentropy',
                    'output_3': 'categorical_crossentropy',
                    'output_4': 'categorical_crossentropy',
                    'output_5': 'categorical_crossentropy'},
              metrics={'output_1': tf.keras.metrics.Accuracy(),
                       'output_2': tf.keras.metrics.Accuracy(),
                       'output_3': tf.keras.metrics.Accuracy(),
                       'output_4': tf.keras.metrics.Accuracy(),
                       'output_5': tf.keras.metrics.Accuracy()})
Finally, the problem I am facing: the loss and accuracy. The loss is too high.
Epoch 499/500
2/2 [==============================] - 0s 11ms/step - loss: 66362.0130 - output_1_loss: 5827.9458 - output_2_loss: 10478.4935 - output_3_loss: 16566.5957 - output_4_loss: 16831.8887 - output_5_loss: 16657.0967 - output_1_accuracy: 0.0000e+00 - output_2_accuracy: 0.0000e+00 - output_3_accuracy: 0.0000e+00 - output_4_accuracy: 0.0000e+00 - output_5_accuracy: 0.0000e+00
Epoch 500/500
2/2 [==============================] - 0s 11ms/step - loss: 66362.0130 - output_1_loss: 5827.9458 - output_2_loss: 10478.4935 - output_3_loss: 16566.5957 - output_4_loss: 16831.8887 - output_5_loss: 16657.0967 - output_1_accuracy: 0.0000e+00 - output_2_accuracy: 0.0000e+00 - output_3_accuracy: 0.0000e+00 - output_4_accuracy: 0.0000e+00 - output_5_accuracy: 0.0000e+00
Kindly help me and correct me where I am wrong. I am total newbie to this field.
Alternative Model Update
model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dropout(.1),
    layers.Dense(150),
])
opt = Adam(learning_rate=0.01)
model.compile(optimizer=opt,
              loss='mean_squared_error',
              metrics=['accuracy'])
It has the [5, 30]-shaped target reshaped to [150].
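(For reference, that [5, 30] → [150] reshape is just a flatten. A tiny pure-Python sketch with made-up one-hot data:)

```python
# Flatten a [5, 30]-shaped target (5 output heads, 30 classes each)
# into the single [150] vector expected by the Dense(150) layer above.
def flatten_target(target_5x30):
    flat = []
    for head in target_5x30:
        flat.extend(head)
    return flat

# Made-up example: five 30-way one-hot rows.
label = [[1 if i == k else 0 for i in range(30)] for k in range(5)]
print(len(flatten_target(label)))  # 150
```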
A: To enhance the model structure, please see the following example code, including a "model_simple" alternative for the original network. Train both with the same input data, vary the structure of "model_simple", and find out which structure results in the best accuracy.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
def create_model():
    input1 = tf.keras.layers.Input(shape=(1,), name='state')
    input2 = tf.keras.layers.Input(shape=(1,), name='district')
    input3 = tf.keras.layers.Input(shape=(1,), name='month')
    input4 = tf.keras.layers.Input(shape=(1,), name='rainfall')
    input5 = tf.keras.layers.Input(shape=(1,), name='max_temp')
    input6 = tf.keras.layers.Input(shape=(1,), name='min_temp')
    input7 = tf.keras.layers.Input(shape=(1,), name='max_rh')
    input8 = tf.keras.layers.Input(shape=(1,), name='min_rh')
    input9 = tf.keras.layers.Input(shape=(1,), name='wind_speed')
    xz = [input1, input2, input3, input4, input5, input6, input7, input8, input9]
    x1 = layers.Dense(128, activation='relu')(input1)
    x2 = layers.Dense(128, activation='relu')(input2)
    x3 = layers.Dense(128, activation='relu')(input3)
    x4 = layers.Dense(128, activation='relu')(input4)
    x5 = layers.Dense(128, activation='relu')(input5)
    x6 = layers.Dense(128, activation='relu')(input6)
    x7 = layers.Dense(128, activation='relu')(input7)
    x8 = layers.Dense(128, activation='relu')(input8)
    x9 = layers.Dense(128, activation='relu')(input9)
    base_model = layers.Add()([x1, x2, x3, x4, x5, x6, x7, x8, x9])
    first_output = layers.Dense(30, name='output_1')(base_model)
    second_output = layers.Dense(30, name='output_2')(base_model)
    third_output = layers.Dense(30, name='output_3')(base_model)
    fourth_output = layers.Dense(30, name='output_4')(base_model)
    fifth_output = layers.Dense(30, name='output_5')(base_model)
    models = tf.keras.Model(inputs=xz,
                            outputs=[first_output, second_output, third_output, fourth_output, fifth_output])
    return models
def create_model_simple():
    input1 = tf.keras.layers.Input(shape=(1,), name='state')
    input2 = tf.keras.layers.Input(shape=(1,), name='district')
    input3 = tf.keras.layers.Input(shape=(1,), name='month')
    input4 = tf.keras.layers.Input(shape=(1,), name='rainfall')
    input5 = tf.keras.layers.Input(shape=(1,), name='max_temp')
    input6 = tf.keras.layers.Input(shape=(1,), name='min_temp')
    input7 = tf.keras.layers.Input(shape=(1,), name='max_rh')
    input8 = tf.keras.layers.Input(shape=(1,), name='min_rh')
    input9 = tf.keras.layers.Input(shape=(1,), name='wind_speed')
    #xz = [input1, input2, input3, input4, input5, input6, input7, input8, input9]
    #x1 = layers.Dense(128, activation='relu')(input1)
    #x2 = layers.Dense(128, activation='relu')(input2)
    #x3 = layers.Dense(128, activation='relu')(input3)
    #x4 = layers.Dense(128, activation='relu')(input4)
    #x5 = layers.Dense(128, activation='relu')(input5)
    #x6 = layers.Dense(128, activation='relu')(input6)
    #x7 = layers.Dense(128, activation='relu')(input7)
    #x8 = layers.Dense(128, activation='relu')(input8)
    #x9 = layers.Dense(128, activation='relu')(input9)
    yhdistelma = layers.concatenate([input1, input2, input3, input4, input5, input6, input7, input8, input9])
    #base_model = layers.Add()([x1, x2, x3, x4, x5, x6, x7, x8, x9])
    first_output = layers.Dense(30, name='output_1')(yhdistelma)
    second_output = layers.Dense(30, name='output_2')(yhdistelma)
    third_output = layers.Dense(30, name='output_3')(yhdistelma)
    fourth_output = layers.Dense(30, name='output_4')(yhdistelma)
    fifth_output = layers.Dense(30, name='output_5')(yhdistelma)
    models = tf.keras.Model(inputs=[input1, input2, input3, input4, input5, input6, input7, input8, input9],
                            outputs=[first_output, second_output, third_output, fourth_output, fifth_output])
    return models
model = create_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
model.compile(optimizer=optimizer,
              loss={'output_1': 'categorical_crossentropy',
                    'output_2': 'categorical_crossentropy',
                    'output_3': 'categorical_crossentropy',
                    'output_4': 'categorical_crossentropy',
                    'output_5': 'categorical_crossentropy'},
              metrics={'output_1': tf.keras.metrics.Accuracy(),
                       'output_2': tf.keras.metrics.Accuracy(),
                       'output_3': tf.keras.metrics.Accuracy(),
                       'output_4': tf.keras.metrics.Accuracy(),
                       'output_5': tf.keras.metrics.Accuracy()})
model.summary()
keras.utils.plot_model(model,'model_structure.png',show_dtype=True)
#Let's create a more simple model version:
model_simple=create_model_simple()
model_simple.compile(optimizer=optimizer,
loss={'output_1': 'categorical_crossentropy',
'output_2': 'categorical_crossentropy',
'output_3': 'categorical_crossentropy',
'output_4': 'categorical_crossentropy',
'output_5': 'categorical_crossentropy'},
metrics={'output_1':tf.keras.metrics.Accuracy(),
'output_2':tf.keras.metrics.Accuracy(),
'output_3':tf.keras.metrics.Accuracy(),
'output_4':tf.keras.metrics.Accuracy(),
'output_5':tf.keras.metrics.Accuracy()})
model_simple.summary()
keras.utils.plot_model(model_simple,'model_simple_structure.png',show_dtype=True)
Especially note that the key difference between your original model and the simpler one is that "Add" has been replaced with "Concatenate". "Add" produces an output the same size as one of its inputs, while the output of "Concatenate" is much larger; that kind of difference can affect performance.
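The shape arithmetic behind the Add-vs-Concatenate remark can be sketched without TensorFlow; plain Python, where nine length-1 lists stand in for the nine scalar model inputs above:

```python
# Sketch: why Concatenate yields a wider layer input than Add.
# Each of the nine model inputs is a length-1 feature vector.
inputs = [[float(i)] for i in range(9)]  # nine shape-(1,) inputs

# Add: elementwise sum -> output has the SAME length as one input.
added = [sum(vec[k] for vec in inputs) for k in range(len(inputs[0]))]

# Concatenate: vectors are joined -> length is the sum of the lengths.
concatenated = [x for vec in inputs for x in vec]

print(len(added))         # 1
print(len(concatenated))  # 9
```

So after concatenation the first Dense layer sees a 9-wide input instead of a 1-wide one.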
| |
doc_23526734
|
HTML:
<html>
<body>
<div></div>
<div></div>
</body>
</html>
CSS:
body {
width: 80%;
height: 100%;
}
div {
width: 40%;
max-width: 500px;
padding-top: 100%;
background-image: url(http://placehold.it/500x500);
background-repeat: no-repeat;
background-position: center center;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
margin: 2.5%;
float: left;
}
A: You have max-width: 500px, and then you overwrite it with background-size: cover. If you want to use background-size: cover, put a second div inside that div and apply background-size: cover to that nested div, so it covers the parent div, which has a max-width of 500px.
For example:
<div class="parent">
<div class="child">
</div>
</div>
.parent {max-width: 500px}
.child {background-size: cover; /* include image */}
Did not test the code, but you get the gist of it.
| |
doc_23526735
|
Now, my question is: there are multiple processes running on the system, so how is it possible for all the processes to have a one-to-one mapping with the physical addresses?
For example, when the kernel is accessing a kernel logical address in process A's context and preemption happens, what happens when the kernel accesses the logical address in process B's context?
Along similar lines, what happens on PCs with only 512 MB of RAM? How does the one-to-one mapping of the 1 GB kernel space happen on those PCs?
A: It may help first to consider that the kernel part (let's say 1GB) of total virtual address space does not all get used. And the total physical memory isn't all mapped to kernel space.
Kernel space will have virtual memory mappings for the physical RAM that it uses, plus any memory mapped peripherals that are defined. Those aren't paged.
Each process in user space could have as much as 3 GB of virtual memory for its code+data. For physical memory there are two extremes, it may shed light to look at each.
Large physical memory: if the processor supports big physical addressing e.g. 36-bit, there could be 64 GB of physical memory. You could have multiple processes, each with 3 GB code+data, and they would not even have to swap pages out to secondary storage. Each context switch would set up MMU to map the new executing process's physical memory back into user space.
Small physical memory: let's say 512 MB is there, and the kernel uses 128 MB of that. The remaining 384 MB will hold user processes' code+data. If user processes need more than that, pages will swap between secondary storage and RAM as needed.
A: Here is a link, which provides pretty good clarification for my question.
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
"In Linux, kernel space is constantly present and maps the same physical memory in all processes. Kernel code and data are always addressable, ready to handle interrupts or system calls at any time. By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens:"
Answer to the first part of the question: Linux kernel space remains the same across all processes, and a process context switch doesn't matter. Kernel space remains mapped to the same RAM pages across all process contexts.
Answer to the second part of the question: the physical RAM size (512 MB or 2 GB) is irrelevant for the kernel address space. As a rule, the kernel has 1 GB of kernel address space available, and whatever allocation it does is done with those addresses. Mapping those addresses to the available RAM (512 MB or 2 GB) is the job of the MMU.
In the case of 1 GB or more of RAM, the entire 1 GB will be mapped for the kernel address space, whereas in the 512 MB RAM case it will be 512 MB. This doesn't hurt the user-space addresses, as everything is virtual addresses, and they will be swapped out on demand, including those of the kernel-space pages.
Note: here I assume a 1G/3G split, and that's not a hard rule.
A: Well, in a traditional multi-core system, all processors have access to all RAM. In Linux, each process has its own address space on the 3 GB side, while the 1 GB side stays constant (I think) because the kernel is, in a way, a process that's always there. Because the kernel part of virtual memory stays the same (and because of that, there is one kernel address space), the kernel's address space doesn't change when it preempts a process.
Quite simply, the kernel only maps those 512 MB. The other 512 MB of virtual address space is just mapped to the nothing page entry, which just tells the CPU that no memory should be accessible at that address, and to raise a CPU exception whenever it is accessed.
| |
doc_23526736
|
I am using the following code to implement authentication
<?php
set_time_limit(0);
ini_set('default_socket_timeout',300);
session_start();
//----------Instagram API Keys-----------//
define("CLIENT_ID",'7f56a1c25fea4949bb8d718809e11a88');
define("CLIENT_SECRET",'purposely hidden');
define("REDIRECT_URI",'localhost/dp/index.php');
define("imgDir",'pics/');
?>
<html>
<head>
</head>
<body>
<a href="https://api.instagram.com/oauth/authorize/?client_id=<?php echo CLIENT_ID;?>&redirect_uri=<?php echo REDIRECT_URI; ?>&response_type=code">Login</a>
</body>
</html>
| |
doc_23526737
|
this query should return all records whose "registrationType1" field is not empty/blank
query:
{
"size": 20,
"_source": [
"registrationType1"
],
"query": {
"bool": {
"must_not": [
{
"term": {
"registrationType1": ""
}
}
]
}
}
}
the results below still contain "registrationType1" with empty values
results:
{
  "_source": {
    "registrationType1": ""
  }
},
{
  "_index": "oh_animal",
  "_type": "animals",
  "_id": "3842002",
  "_score": 1,
  "_source": {
    "registrationType1": "A&R"
  }
},
{
  "_index": "oh_animal",
  "_type": "animals",
  "_id": "3842033",
  "_score": 1,
  "_source": {
    "registrationType1": "AMHA"
  }
},
{
  "_index": "oh_animal",
  "_type": "animals",
  "_id": "3842213",
  "_score": 1,
  "_source": {
    "registrationType1": "AMHA"
  }
},
{
  "_index": "oh_animal",
  "_type": "animals",
  "_id": "3842963",
  "_score": 1,
  "_source": {
    "registrationType1": ""
  }
},
{
  "_index": "oh_animal",
  "_type": "animals",
  "_id": "3869063",
  "_score": 1,
  "_source": {
    "registrationType1": ""
  }
}
PFB mappings for the field above
"registrationType1": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
A: You need to use the keyword subfield in order to do this:
{
"size": 20,
"_source": [
"registrationType1"
],
"query": {
"bool": {
"must_not": [
{
"term": {
"registrationType1.keyword": "" <-- change this
}
}
]
}
}
}
A: If you do not specify any text value on the text fields, there is basically nothing to analyze and return the documents accordingly.
In similar way, if you remove must_not and replace it with must, it would show empty results.
What you can do is, looking at your mapping, query must_not on keyword field. Keyword fields won't be analysed and in that way your query would return the results as you expect.
Query
POST myemptyindex/_search
{
"query": {
"bool": {
"must_not": [
{
"term": {
"registrationType1.keyword": ""
}
}
]
}
}
}
Hope this helps!
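To see why the term query behaves differently on the analyzed text field versus the keyword subfield, here is a toy sketch of the indexing step (plain Python, no Elasticsearch; the lowercase/whitespace tokenizer is a crude stand-in for the standard analyzer):

```python
# Toy sketch: an analyzed "text" field tokenizes the value, so an empty
# string produces no tokens and a term query for "" can never match it.
# A "keyword" field stores the value verbatim, so "" is indexed as-is.
docs = {"1": "", "2": "A&R", "3": "AMHA"}

text_index = {}     # token -> set of doc ids (analyzed)
keyword_index = {}  # exact value -> set of doc ids (not analyzed)
for doc_id, value in docs.items():
    for token in value.lower().split():       # crude stand-in analyzer
        text_index.setdefault(token, set()).add(doc_id)
    keyword_index.setdefault(value, set()).add(doc_id)

print(text_index.get("", set()))     # set(): "" was never tokenized
print(keyword_index.get("", set()))  # {'1'}: keyword keeps the empty value
```

That is why must_not on `registrationType1.keyword` can exclude the empty docs while the same clause on the analyzed field cannot.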
A: I am using Elasticsearch version 7.2.
I replicated your data, ingested it into my Elastic index, and tried querying with and without .keyword.
I am getting the desired result when using ".keyword" in the field name: it does not return the docs which have registrationType1 = "".
Note: the query does not work when not using ".keyword".
I have added my sample code below; have a look if that helps.
from elasticsearch import Elasticsearch
es = Elasticsearch()
es.indices.create(index="test", ignore=400, body={
"mappings": {
"_doc": {
"properties": {
"registrationType1": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
})
data = {
"registrationType1": ""
}
es.index(index="test",doc_type="_doc",body=data,id=1)
search = es.search(index="test", body={
"size": 20,
"_source": [
"registrationType1"
],
"query": {
"bool": {
"must_not": [
{
"term": {
"registrationType1.keyword": ""
}
}
]
}
}
})
print(search)
Executing the above should not return any results, as we are inserting an empty value for the field
A: There was some issue with the mappings itself, I deleted the index and re-indexed it with new mappings and its working now.
| |
doc_23526738
|
using
* pytest 3.4.1
* python 3.5 and above
This is my test case under tests/test_8_2_openpyxl.py
import unittest

class TestSomething(unittest.TestCase):
    def setUp(self):
        # do setup stuff here
        pass

    def tearDown(self):
        # do teardown stuff here
        pass

    def test_case_1(self):
        # test case here...
        pass
I use unittest style to write my test case. I use pytest to run the tests.
I have also setup and teardown functions following unittest conventions
My commandline to run the tests become
pytest -s -v tests/test_8_2_openpyxl.py
It works as expected
What I want
When I debug sometimes, i want to be able to easily turn off either setup or teardown or both at the same time using some kind of commandline option
pytest -s -v tests/test_8_2_openpyxl.py --skip-updown
in order to skip both teardown and setup
pytest -s -v tests/test_8_2_openpyxl.py --skip-setup
in order to skip setup
pytest -s -v tests/test_8_2_openpyxl.py --skip-teardown
in order to skip teardown
What I tried and didn't work
Tried sys.argv
I have tried using sys.argv
class TestSomething(unittest.TestCase):
def setUp(self):
if '--skip-updown' in sys.argv:
return
# do setup stuff here
and then
pytest -s -v tests/test_8_2_openpyxl.py --skip-updown
This didn't work and my error message is
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: argument --skip-updown: expected one argument
Tried conftest.py and config.getoption
I setup a conftest.py in the project root
import pytest

def pytest_addoption(parser):
    parser.addoption("--skip-updown", default=False)

@pytest.fixture
def skip_updown(request):
    return request.config.getoption("--skip-updown")
And then
class TestSomething(unittest.TestCase):
def setUp(self):
if pytest.config.getoption("--skip-updown"):
return
# do setup stuff here
and then
pytest -s -v tests/test_8_2_openpyxl.py --skip-updown
Then I get
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: argument --skip-updown: expected one argument
What I tried that worked, but is not ideal
Tried conftest.py and config.getoption, but this time declaring --skip-updown=True
Exactly the same as before, except this time on the command line I declare --skip-updown=True
pytest -s -v tests/test_8_2_openpyxl.py --skip-updown=True
My question
This is very close to what I want, but I was hoping not to have to declare the value --skip-updown=True
Or maybe I am doing it all wrong in the first place and there's an easier way using sys.argv.
A: Fix addoption:
def pytest_addoption(parser):
parser.addoption("--skip-updown", action='store_true')
See the docs at https://docs.python.org/3/library/argparse.html
Or maybe I am doing it all wrong in the first place and there's an easier way using sys.argv.
No, what you're doing is the right and the only way.
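The "expected one argument" error comes from argparse itself: the default action is "store", which consumes a value, while action='store_true' turns the option into a value-less flag. A minimal stdlib sketch of the difference (outside pytest, same add_argument semantics that pytest's addoption builds on):

```python
import argparse

# With the default action ("store"), --skip-updown would demand a value,
# producing "expected one argument" when passed bare.
# With action="store_true", the bare flag parses to True, absent -> False.
parser = argparse.ArgumentParser()
parser.add_argument("--skip-updown", action="store_true")

print(parser.parse_args([]).skip_updown)                 # False
print(parser.parse_args(["--skip-updown"]).skip_updown)  # True
```

Note that argparse converts the dashes in `--skip-updown` to an underscore in the attribute name.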
| |
doc_23526739
|
Short example:
(arrays used with a single value for this example, but that's just to shorten the example)
=COUNTIFS(B:B;">="&A1) --> does work
=COUNTIFS(B:B;{">="&A1}) --> returns an error
Same issue if I try to nest a formula within the array
=COUNTIFS(B:B;">="&EDATE(TODAY();-6)) --> does work
=COUNTIFS(B:B;{">="&EDATE(TODAY();-6)}) --> returns an error
Full example:
Consider those values
| A | B |
|----------|---------|
| =today() | 1/1/15 |
|----------|---------|
| | |
|----------|---------|
| | 1/7/15 |
|----------|---------|
| | |
|----------|---------|
| | 1/1/16 |
|----------|---------|
| | 1/7/16 |
|----------|---------|
Note that the date notation is d/m/yy (months in the middle).
What I want to achieve is to count all the dates in column B that are greater than or equal to a given date, OR cells that are empty.
=SUM(COUNTIFS(B1:B6;{">=42483";""})) --> does work and returns 3 (42483 being today's value)
=SUM(COUNTIFS(B1:B6;{">="&A1;""})) --> error in formula
Same issue with a formula if I want to count all the dates for the past 6 months OR space cells.
=SUM(COUNTIFS(B1:B6;{">=42300";""})) --> does work and returns 4 (42300 being 6 months ago's value)
=SUM(COUNTIFS(B1:B6;{">="&EDATE(TODAY();-6);""})) --> error in formula
Any idea if that's even possible ?
Thanks
A: Your semi-colon needs to be a comma. Try:
=COUNTIF(A:A,">="&A1)
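For reference, the logic being expressed ("on/after a date OR blank") can be sketched outside the spreadsheet. A plain-Python analogue of the two COUNTIFS passes, using the sample column B from the question and with 2015-10-23 standing in for the serial 42300 used in the working formula:

```python
from datetime import date

# Column B from the example; None stands for a blank cell.
column_b = [date(2015, 1, 1), None, date(2015, 7, 1),
            None, date(2016, 1, 1), date(2016, 7, 1)]
threshold = date(2015, 10, 23)  # the question's serial 42300

# SUM(COUNTIFS(...;{">=...";""})) counts each criterion separately, then adds.
recent = sum(1 for c in column_b if c is not None and c >= threshold)
blanks = sum(1 for c in column_b if c is None)

print(recent + blanks)  # 4, matching the working serial-number formula
```

Two dates fall on/after the threshold and two cells are blank, giving the same 4 that the hard-coded `">=42300"` criterion returns.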
| |
doc_23526740
|
Please help! Thanks!!
dateDiff: function(date1, date2){
var diff = {}
var tmp = date2 - date1;
tmp = Math.floor(tmp/1000);
diff.sec = tmp % 60;
tmp = Math.floor((tmp-diff.sec)/60);
diff.min = tmp % 60;
tmp = Math.floor((tmp-diff.min)/60);
diff.hour = tmp % 24;
tmp = Math.floor((tmp-diff.hour)/24);
diff.day = tmp;
return diff;
},
A: Try this function
function addZero(number)
{
if(number<10)
return "0"+number;
else
return number;
}
A: You can use the slice method
diff.sec = tmp % 60;
if( diff.sec < 10 ){
diff.sec = ("0" + diff.sec).slice(-2);
}
JSFiddle with sample value
var test = 9;
if( test < 10 ){
test = ("0" + test).slice(-2);
}
console.log(test);
| |
doc_23526741
|
<html>
<body>
<div id="parent" style="width:300px;overflow:scroll;">
<div class="child" style="width:80px; float:left;">lorem</div>
<div class="child" style="width:80px; float:left;">ipsum</div>
<div class="child" style="width:80px; float:left;">dolore</div>
<div class="child" style="width:80px; float:left;">lorem</div>
</div>
</body>
</html>
A: .child {
display: inline-block;
}
#parent {
white-space: nowrap;
}
Here is example: http://jsfiddle.net/qnpGm/
UPDATE:
in ie6/ie7 this will work only on elements with a natural display: inline.
Thanks for comments :)
A: try adding white-space:nowrap; to your #parent style. Haven't tested this on your code, but I've used this in similar situations where I've needed to expand a div containing a number of child divs without setting a width for the parent.
A: Since "inline-block" comes with cross-browser compatibility problems (IE 6), I guess this isn't possible without a bit of extra markup.
So let's assume "parent" is your viewport. Then you'll need a container wrapped around the "child" elements, having the same width as all of them together - in your case 320px. You can calculate the width using either server-side languages or JavaScript.
<html>
<body>
<div id="parent" style="width:300px;overflow:scroll;">
<div id="view" style="width:320px;">
<div class="child" style="width:80px; float:left;">lorem</div>
<div class="child" style="width:80px; float:left;">ipsum</div>
<div class="child" style="width:80px; float:left;">dolore</div>
<div class="child" style="width:80px; float:left;">lorem</div>
</div>
</div>
</body>
</html>
| |
doc_23526742
|
I can't access my web service at all. When I want to consume my web service I get this error (it is not in my code but in System.ServiceModel.WasHosting.dll):
[NullReferenceException: Object reference not set to an instance of an object] System.Runtime.AsyncResult.End(IAsyncResult result) +390
System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +175
System.ServiceModel.Activation.AspNetRouteServiceHttpHandler.EndProcessRequest(IAsyncResult result) +7
System.Web.CallHandlerExecutionStep.InvokeEndHandler(IAsyncResult ar) +152 System.Web.CallHandlerExecutionStep.OnAsyncHandlerCompletion(IAsyncResult ar) +126
My computer : windows 10.
Version Microsoft .NET Framework :4.0.30319; Version ASP.NET :4.7.2556.0
I think this is not a programming error in my code, because the web service works on other computers. This error happens only on my computer.
I found this on the subject: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/d72570c0-98ea-41cb-8423-94c96abcb2e8/wcf-service-activation-problem?forum=wcf but that hotfix isn't for Windows 10, and I'm not sure these two errors are related.
I'm not asking what a NullReferenceException is, but how to fix my problem in System.ServiceModel.WasHosting.dll.
If anyone knows this error please help me.
A: I was having this exact same problem and have just managed to fix it after wasting many days!
I tried uninstalling all .NET Core / Framework / Visual Studio (I had 2013 - 2019, reinstalled only 2019), .NET repair and uninstall tools etc and was stuck with the exact same problem.
So I figured it must be some sort of IIS express config laying around.... I found and renamed (just incase, seems like it recreates and this can be safely removed) this folder %USERPROFILE%\Documents\IISExpress to IISExpress.old , ran my solution.... and it now seems to be working fine!
| |
doc_23526743
|
dyld: Library not loaded: /usr/local/opt/openssl/lib/libcrypto.1.0.0.dylib
Referenced from: /usr/local/opt/libevent/lib/libevent-2.1.6.dylib
Reason: image not found
Trace/BPT trap: 5
A: These steps worked for me.
brew uninstall --ignore-dependencies openssl
brew install openssl
A: Reinstalling openssl did not work for me, but
brew upgrade tmux
got me to version 3.0a and got rid of the error.
A: Looks like you don't have openssl lib. Try to install with brew install openssl
A: I tried the answer by @Kaifei, but ran into problems with openssh. In the end, I also had to do:
brew upgrade openssh
to resolve the issue. This blog talks about this solution.
| |
doc_23526744
|
Some documents say the Python shell job is suitable for simple jobs whereas Spark is for more complicated jobs; is that correct? Could you please share more experience on this?
Many thanks
A: Use AWS Glue Python shell when you do not need too much of a compute power to run light ETL workloads. Use AWS Glue with Spark when you must scale either horizontally, vertically, or both.
Source: What are the best use cases for aws glue python shell jobs vs. spark jobs?
A:
which are the best/typical use cases for each of them? Some document
says python shell job is suitable for simple jobs whereas spark for
more complicated jobs, is that correct?
AWS Glue is a quick-development facility/service for ETL jobs, provided by AWS.
IMHO it makes for very quick development if you know what needs to be done in your ETL pipeline.
* Glue has components like Discover, Develop, Deploy. In Discover, automatic crawling (run or schedule a crawler multiple times) is the important feature which differentiates it from other tools I have observed.
* Glue has integration features to connect to AWS ecosystem services (whereas with Spark you need to do that yourself).
Typical use cases of AWS Glue could be:
1) Load data from data warehouses.
2) Build a data lake on Amazon S3.
See this presentation by AWS for more insight.
A custom Spark job can also do the same thing, but it needs to be developed from scratch, and it doesn't have a built-in automatic-crawling kind of feature.
But if you develop a Spark job for ETL, you have fine-grained control to implement complicated jobs.
Both Glue and Spark have the same goal for ETL. AFAIK, Glue is for simple jobs such as loading from source to destination, whereas a Spark job can do a wide variety of transformations in a controlled way.
Conclusion: for simple ETL use cases (which can be done without much development experience), go with Glue. For customized ETL with many dependencies/transformations, go with a Spark job.
| |
doc_23526745
|
I have two grids and a button. Initially the second grid remains empty and the first grid has some records. When I select a few records in the first grid and click the button, the second grid should be populated with only the selected rows of the first grid.
Here is my code:
Ext.QuickTips.init();
var getLocalStore = function() {
return Ext.create('Ext.data.ArrayStore', {
model: 'Company',
data: Ext.grid.dummyData
});
};
var getSelectedStore = function() {
return Ext.create('Ext.data.ArrayStore', {
model: 'Company'
});
};
var sm = Ext.create('Ext.selection.CheckboxModel');
var grid1 = Ext.create('Ext.grid.Panel', {
id: 'grid1',
store: getSelectedStore(),
columns: [
{text: "Company", width: 200, dataIndex: 'company'},
{text: "Price", renderer: Ext.util.Format.usMoney, dataIndex: 'price'},
{text: "Change", dataIndex: 'change'},
{text: "% Change", dataIndex: 'pctChange'},
{text: "Last Updated", width: 135, renderer: Ext.util.Format.dateRenderer('m/d/Y'), dataIndex: 'lastChange'}
],
columnLines: true,
width: 600,
height: 300,
frame: true,
title: 'Framed with Checkbox Selection and Horizontal Scrolling',
iconCls: 'icon-grid',
renderTo: 'grid1'
});
var grid2 = Ext.create('Ext.grid.Panel', {
id: 'grid2',
store: getLocalStore(),
selModel: sm,
columns: [
{text: "Company", width: 200, dataIndex: 'company'},
{text: "Price", renderer: Ext.util.Format.usMoney, dataIndex: 'price'},
{text: "Change", dataIndex: 'change'},
{text: "% Change", dataIndex: 'pctChange'},
{text: "Last Updated", width: 135, renderer: Ext.util.Format.dateRenderer('m/d/Y'), dataIndex: 'lastChange'}
],
columnLines: true,
width: 600,
height: 300,
frame: true,
title: 'Framed with Checkbox Selection and Horizontal Scrolling',
iconCls: 'icon-grid',
renderTo: 'grid'
});
Ext.widget('button', {
text: 'Click Me',
renderTo: 'btn',
listeners: {
click: function(this1, evnt, eOpts ){
var records = sm.getSelection();
getSelectedStore().loadData(records,true);
grid1.getView().refresh();
/*Ext.each(records, function (record) {
alert(record.get('company'));
});*/
}
}
});
Please let me know what's going wrong.
A: First, you are defining the functions getSelectedStore and getLocalStore which return new store instances when invoked. That way in your click handler you would be grabbing an empty store each time! Lose the function bit and just set variables like this:
var storeToSelectFrom = Ext.create('Ext.data.ArrayStore', {
model: 'Company',
data: someDataToChooseFrom
});
var storeToPutTo = Ext.create('Ext.data.ArrayStore', {
model: 'Company'
});
Then, define your grids using those variables as the stores:
var grid1 = Ext.create('Ext.grid.Panel',{
store: storeToSelectFrom,
selType: 'checkboxmodel'
// rest of your configs
});
var grid2 = Ext.create('Ext.grid.Panel',{
store: storeToPutTo
// rest of your configs
});
Then, create the button with a click handler:
Ext.widget('button', {
handler: function (button, event) {
var selected = grid1.getSelectionModel().getSelection();
grid2.getStore().add(selected);
}
// rest of your configs
});
| |
doc_23526746
|
So I installed YouCompleteMe and compiled it. Originally I got an error because a trial of Kite shut the server down. But I deactivated that, and now I restart the server again and again only to have it shut down.
And the YcmToggleLogs does not show anything :-(
I followed all of the advice given here: YCM error. The ycmd server SHUT DOWN (restart wit...the instructions in the documentation
But still it doesn't work.
And of course I followed along with the official install manual:
Install YCM plugin via Vundle
Install cmake, macvim and python
Note that the system vim is not supported.
brew install cmake macvim python
Install mono, go, node and npm
brew install mono go nodejs
Compile YCM
cd ~/.vim/bundle/YouCompleteMe
python3 install.py --all
Btw, when compiling everything I get several warning along these lines:
ld: warning: text-based stub file /*****/CoreFoundation.tbd and library file
/****//CoreFoundation.framework/CoreFoundation are out of sync.
Falling back to library file for linking.
Any ideas what I can do to get it off the ground?
Thanks.
A: On Linux, with VIM >8.1
To solve the issue, run the installer from the plugin.
Go to vim folder
cd ~/.vim/plugged/YouCompleteMe
and run the installer.sh
./installer.sh
hope this solution solves yours as well.
| |
doc_23526747
|
import matplotlib.pyplot as plt
import numpy as np
from pylab import *
l1=1.
l2=5.
t1=20.
t2=50.
tf=120.
def f1(t):
if t<t1:
L = l1
elif t1<=t<t2:
L = l2
else:
L=l1
g=L*t
return g
a=np.linspace(0.,100,1000)
values1=list(map(f1,a))  # wrap in list(): under Python 3, map returns an iterator
fig1=plt.figure(1)
plt.plot(a,values1,color='red')
plt.show()
The plot of the pulse is the following
def f2(t):
if t<t1:
L = l1
elif t1<=t<t2:
L = l2
else:
L=l1
return L
values2=list(map(f2,a))  # wrap in list(): under Python 3, map returns an iterator
fig2=plt.figure(2)
plt.plot(a,values2,color='blue')
plt.show()
I want to make a figure with the red curve as the main plot and a little inset in the top margin of the figure showing the blue curve, without any x axis or y axis, just to make the viewer understand when the change in the parameter L happens.
A: Maybe you could use inset_axes from mpl_toolkits.axes_grid1.inset_locator
See for example: https://matplotlib.org/gallery/axes_grid1/inset_locator_demo.html
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
fig, axs = plt.subplots(1, 1)
# Create inset of width 1.3 inches and height 0.9 inches
# at the default upper right location
axins = inset_axes(axs, width='20%', height='20%', loc=2)
And then plot your data in axins:
axins.plot(data)
You can also switch off the ticks and labes using:
axins.axes.get_yaxis().set_visible(False)
axins.axes.get_xaxis().set_visible(False)
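Putting those pieces together, a runnable sketch (assuming matplotlib is installed; the plotted points are placeholder step data standing in for the red and blue curves):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes

fig, ax = plt.subplots()
# Main plot: stand-in for the red g = L*t curve
ax.plot([0, 20, 20, 50, 50, 100], [20, 20, 250, 250, 100, 100], color="red")

# 20% x 20% inset in the upper-left corner (loc=2), stand-in for the blue pulse
axins = inset_axes(ax, width="20%", height="20%", loc=2)
axins.plot([0, 20, 20, 50, 50, 100], [1, 1, 5, 5, 1, 1], color="blue")
axins.get_xaxis().set_visible(False)  # hide both axes on the inset
axins.get_yaxis().set_visible(False)

fig.savefig("inset_demo.png")
```

The figure ends up with two axes objects: the main one and the tickless inset.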
A: I think that subplots will do what you want. If you make the top subplot smaller, and take the ticks/labels off it looks like its in the margins. Here's a code snippet that sets up the plot.
f = plt.figure()
# Make 2 subplots arranged vertically with different ratios
(ax, ax2) = f.subplots(2,1, gridspec_kw={'height_ratios':[1,4]})
#remove the labels on your top subplot
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax.plot(a, f2(a))
ax2.plot(a, f1(a), 'r:') #red curve main plt
plt.show()
I used this code to plot a few sinusoids and it came out as follows:
Is this what you're looking for?
| |
doc_23526748
|
PHP is set to GMT and JavaScript is set to UTC; how do these standards differ, and could this be causing the problem?
A: From Coordinated Universal Time on Wikipedia:
Coordinated Universal Time (UTC) is a time standard based on International Atomic Time (TAI) with leap seconds added at irregular intervals to compensate for the Earth's slowing rotation.
From Greenwich Mean Time on Wikipedia:
UTC is an atomic time scale which only approximates GMT with a tolerance of 0.9 second
A: One is measured from the sun and another from an atomic clock.
For your purposes, they are the same.
A: For computers, GMT is UTC+0 - so they are the equivalent.
A: If you strictly go by the definition of what UTC and GMT are, there is no real practical difference as others have pointed out.
However one needs to be careful as there are certain cases where (possibly legacy) terminology is used such as in the Microsoft Timezone index values
The difference is that in that context, what is referred to as the "GMT timezone" (code 55) is, in reality, the "GMT locale" which is the locale used by Dublin, Edinburgh, Lisbon, London (all of which observe daylight savings time) which is differentiated from Greenwich Standard Time (code 5A) which is used by Monrovia and Reykjavik both of which do not observe daylight savings time.
The practical difference is that if a system is set up to use UTC (code 80000050 under the semantics specified above) then it will not automatically switch to daylight savings time while if you set your time zone to GMT (code 55) then there's a good chance it automatically switches to BST during the summer without you noticing.
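For code purposes, "GMT" and UTC are both just a zero offset; a stdlib Python sketch (the "GMT" zone here is constructed by hand, purely for illustration):

```python
from datetime import datetime, timezone, timedelta

# A hand-built "GMT" as a fixed zero-offset zone, compared with UTC.
gmt = timezone(timedelta(0), "GMT")
now_utc = datetime.now(timezone.utc)

print(now_utc.utcoffset())                 # 0:00:00
print(now_utc.astimezone(gmt) == now_utc)  # True: same instant, same offset
```

Note this fixed-offset "GMT" never switches to daylight saving time, unlike the "GMT locale" (Dublin/Edinburgh/Lisbon/London) described above.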
| |
doc_23526749
|
omp_set_num_threads(num_t);
#pragma omp parallel shared(a,b,c) private(i,j,k) num_threads(num_t)
{
#pragma omp for schedule(static)
for (int i = 0; i < m; i++)
{
std::cout << omp_get_thread_num()<< "\n";
for (int j = 0; (j < n); j++)
{
c[i + j*m] = 0.0;
for (int k = 0; k < q; k++)
{
c[i+j*m] += a[i*q + k]*b[j*q + k];
}
}
}
}
A: To test first, I recommend you use this:
#pragma omp parallel for private(...) shared(...) schedule(...) num_threads(X)
where "X" is the number of threads to be created. In theory, the previous line should have an effect similar to yours, but C++ can be picky sometimes (especially with the "parallel" clause).
Btw, maybe it is not your case, but be careful with curly braces {}. OpenMP's functionality can differ depending on whether you add them around the code block or not.
| |
doc_23526750
|
Could somebody explain briefly what it is, and maybe how to do it, if you could refer me to a site which explains in an easy manner i would be grateful.
an example of code one could deadlist:
\x90\xb8\x02\x00\x00\x00\x83\xf8\x03\x74\x07\xb8\x73\x80\x04\x08\xeb\x01\xd8\x31\xc0\x50\xbb\x9e\x9a\x9a\x99\xf7\xdb\x53\xbb\x9c\x9a\x9e\x9b\xf7\xdb\x53\x90\xcc
| |
doc_23526751
|
The problem is that when I run the app I get this:
Neither of the buttons that I added are there. Here is my .h file:
//
// RootBeerTVCViewController.h
// BaseApp
//
// Created by Blaine Anderson on 10/12/12.
// Copyright (c) 2012 UIEvolution, Inc. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface RootBeerTVCViewController : UIViewController <UITableViewDelegate, UITableViewDataSource>
@property (weak, nonatomic) IBOutlet UIButton *nameButton;
@property (weak, nonatomic) IBOutlet UIButton *locationButton;
@property(strong, nonatomic) NSMutableArray* rootList;
-(IBAction)sort:(id)sender;
@end
And here is the .M file:
//
// RootBeerTVCViewController.m
// BaseApp
//
// Created by Blaine Anderson on 10/12/12.
// Copyright (c) 2012 UIEvolution, Inc. All rights reserved.
//
#import "RootBeerTVCViewController.h"
#import "GlobalData.h"
@interface RootBeerTVCViewController ()
@end
@implementation RootBeerTVCViewController
@synthesize rootList;
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
if (self) {
// Custom initialization
}
return self;
}
- (void)viewDidLoad
{
[super viewDidLoad];
[GlobalData sharedData].mViewManager.mNavController.navigationBarHidden=NO;
// Do any additional setup after loading the view from its nib.
rootList = [[GlobalData sharedData] mRootList ];
NSLog(@"RootList in view did load: %@", rootList);
UITableView *tableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame] style:UITableViewStylePlain];
tableView.autoresizingMask = UIViewAutoresizingFlexibleHeight|UIViewAutoresizingFlexibleWidth;
tableView.delegate = self;
tableView.dataSource = self;
//[tableView addSubview:sortingView];
[tableView reloadData];
self.view = tableView;
}
- (void)viewDidUnload
{
[self setLocationButton:nil];
[self setNameButton:nil];
[super viewDidUnload];
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
}
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
#pragma mark - Table view data source
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
    rootList = [[GlobalData sharedData] mRootList];
    NSLog(@"RootList: %@", rootList);
    NSLog(@"RootList count: %i", rootList.count);
    // Return the number of sections.
    return 1;
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    // Return the number of rows.
    NSLog(@"RootList row count: %i", rootList.count);
    return rootList.count;
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    rootList = [[GlobalData sharedData].mRootBeerParser rootBeerList];
    NSLog(@"RootList Cell: %@", rootList);
    RootBeers *mRootBeer = [rootList objectAtIndex:indexPath.row];
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier];
        cell.selectionStyle = UITableViewCellSelectionStyleNone;
    }
    NSLog(@"Cell Name: %@", mRootBeer.brewer);
    // Configure the cell...
    cell.textLabel.text = mRootBeer.name;
    cell.detailTextLabel.text = mRootBeer.location;
    return cell;
}
/*
// Override to support conditional editing of the table view.
- (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
{
// Return NO if you do not want the specified item to be editable.
return YES;
}
*/
/*
// Override to support editing the table view.
- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
{
if (editingStyle == UITableViewCellEditingStyleDelete) {
// Delete the row from the data source
[tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
}
else if (editingStyle == UITableViewCellEditingStyleInsert) {
// Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view
}
}
*/
/*
// Override to support rearranging the table view.
- (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath
{
}
*/
/*
// Override to support conditional rearranging of the table view.
- (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath
{
// Return NO if you do not want the item to be re-orderable.
return YES;
}
*/
#pragma mark - Table view delegate
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    RootBeers *mRootBeer = [rootList objectAtIndex:indexPath.row];
    [GlobalData sharedData].mRootBeer = mRootBeer;
    [[GlobalData sharedData].mViewManager pushView:DETAILVIEW animated:YES];
}
- (IBAction)sort:(id)sender {
    rootList = [[GlobalData sharedData].mRootBeerParser rootBeerList];
    [[GlobalData sharedData].mRootBeerParser sortRootBeerByName:rootList];
}
@end
If someone could give me an idea of what I'm doing wrong, that would be great. I hope that I have provided enough information, if not, please let me know and I'll be happy to supply more.
A: In your viewDidLoad you create a new UITableView of screen size and then set it as VC view, thus overriding the view that was loaded from xib. As a result you are only seeing the table you created in code.
| |
doc_23526752
|
*
*Calculate distances between all points and the initial centroids.
*Assign all points to their closest centroid.
Here is my code:
def init(ds, k, random_state=42):
    np.random.seed(random_state)
    centroids = [ds[0]]
    for _ in range(1, k):
        dist_sq = np.array([min([np.inner(c-x, c-x) for c in centroids]) for x in ds])
        probs = dist_sq/dist_sq.sum()
        cumulative_probs = probs.cumsum()
        r = np.random.rand()
        for j, p in enumerate(cumulative_probs):
            if r < p:
                i = j
                break
        centroids.append(ds[i])
    return np.array(centroids)
k = 4
centroids = init(pixels, k, random_state=42)
print(centroids)
# First centroid
centroids[0]
#Calculate distances between all points and the initial centroids.
# Assign all points to their closest centroid.
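The two remaining steps, computing distances to the centroids and assigning each point to its closest one, can be sketched with NumPy. This is a hedged sketch, not part of the original code: it assumes ds (the pixels array) has shape (N, D) and that centroids comes from init above; the helper name is illustrative.

```python
import numpy as np

def assign_to_centroids(ds, centroids):
    """Return, for each point in ds, the index of its closest centroid.

    ds: (N, D) array of points; centroids: (k, D) array.
    """
    # Pairwise squared distances via broadcasting -> shape (N, k)
    diffs = ds[:, None, :] - centroids[None, :, :]
    dist_sq = (diffs ** 2).sum(axis=2)
    # Closest centroid index per point
    return dist_sq.argmin(axis=1)

# Tiny demo with 2-D points and 2 centroids
points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
cents = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = assign_to_centroids(points, cents)
print(labels)  # -> [0 0 1]
```

With the labels in hand, the next k-means iteration would recompute each centroid as the mean of its assigned points.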
| |
doc_23526753
|
I have also seen the thousands of posts online that say that you can put any generic class that extends a particular base class into a collection of the type of the base class. I understand this perfectly well.
My problem differs from the linked post above and the others in one basic way - my generic classes have a base class that is also generic. Worse still, this is part of a very large MVVM application framework (built in-house) and the generic base class also has a base class and so on... each base class adds certain properties and functionality and it looks a bit like this:
DataListEntry<T> <-- BaseSynchronisableDataType<T> <--
BaseAuditDataType <-- BaseDataType <-- BaseAnimatableDataType
The application collection classes use similar inheritance:
DataList<T> <-- BaseSynchronisableCollection<T> <--
BaseAnimatableCollection<T> <-- BaseCollection<T> <--
SortableObservableCollection<T>
Even worse still, each generic declaration has constraints so that for instance, the definition of BaseSynchronisableDataType<T> looks like this:
public abstract class BaseSynchronisableDataType<T> : BaseAuditDataType,
ISynchronisable<T>, ICloneable<T>, IEquatable<T> where T : class, ICloneable<T>,
IEquatable<T>, new()
So each generic collection type is tied through these constraints to a generic base class. Hopefully by now, you can see the scale of my problem.
I have tried using interfaces (not shown above) to remove the link from the collections to their respective base classes, but this is also failing because of some of the generic constraints on related classes. For example, I couldn't create a collection of the type of an interface because there are neccessary generic 'class' constraints on some base classes and so I get errors saying that the type of the collection 'must be a non-abstract type with a public parameterless constructor in order to use it as parameter 'T' in the generic type or method'.
One last point to note describes exactly what I am trying to do:
I need to populate a collection with different classes that all extend the DataList<T> base class. These classes are different only in name and have exactly the same properties in them. They are declared as follows:
public class Writer : DataListEntry<Writer>
public class Artist : DataListEntry<Artist>etc.
If you have any ideas, then please let me know... I've suffered for 2 days on this problem already and my boss is none too pleased! Many thanks in advance, Sheridan.
A: There's a key principle in play here that you need to understand: a class Foo<Child> is not a subclass of Foo<Parent> even when Child is a subclass of Parent. That means that a List<Foo<Parent>> cannot contain instances of List<Foo<Child>> any more than it can contain Strings or Int32s.
To understand why this is the case, imagine the following code (which doesn't compile, but illustrates why the above statement needs to be true):
var myIntList = new List<int>();
var myObjectList = (List<Object>)myIntList;
// Uh oh, now I can add a string to a list of integers...
myObjectList.Add("Foo");
Your use of the curiously recurring template pattern eliminates the inheritance hierarchy between all of your classes. Because they don't share a base class anymore, they cannot be put into a list more specific than List<Object>.
The best approach in your case is probably to make a non-generic interface which DataListEntry implements, and make your list of that interface. If the interface provides all of the members that you need in an instance of that type, you are all set.
For example:
public interface IDataListEntry {
    bool QuacksLikeADuck { get; }
    bool WalksLikeADuck { get; }
}
public abstract class DataListEntry<T> : IDataListEntry where ... {
    // Implement these in subclasses
    public abstract bool QuacksLikeADuck { get; }
    public abstract bool WalksLikeADuck { get; }
}
Then you can:
List<IDataListEntry> myDataListEntries = new List<IDataListEntry>();
myDataListEntries.Add(new Writer(...));
myDataListEntries.Add(new Artist(...));
IEnumerable ducks = myDataListEntries.Where(dle => dle.WalksLikeADuck && dle.QuacksLikeADuck);
Or (probably more appropriate to your situation), if you need to know the Type of the T in the particular instance of the IDataListEntry:
public interface IDataListEntry {
    Type TheTypeOfT { get; }
}
public class DataListEntry<T> : IDataListEntry where ... {
    public Type TheTypeOfT { get { return typeof(T); } }
}
and then do:
List<IDataListEntry> myDataListEntries = new List<IDataListEntry>();
myDataListEntries.Add(new Writer(...));
myDataListEntries.Add(new Artist(...));
IEnumerable artists = myDataListEntries.Where(dle => typeof(Artist).IsAssignableFrom(dle.TheTypeOfT));
A: I would go all the way back to basics and use some simple LINQ to get the class-filtered list.
Declare your list as DataList<object> L; then, when you get asked for a type, call L.OfType<Type>(), which filters the list. object is going to be the most generic thing you can use, after all. You might be able to use the base type that they all extend, but because it's abstract I don't know if you can declare a list of that type or not.
in my own code, I use generic constraints to achieve something similar.
public abstract class BusinessObjectBase : //some interfaces
{
//class stuff and events
}
I have a bunch of objects declared that extend my base class, and now I can do this:
Collection<BusinessObjectBase> temp = new Collection<BusinessObjectBase>();
temp.Add(new RXEvents());
temp.Add(new RXBattery());
temp.Add(new RXBHA());
where each of those classes I'm adding to the list is created by extending BusinessObjectBase. You are attempting something very similar, but your base implementation is different. By declaring the base type itself as generic, you are breaking the shared base type: Base<X> is not the same as Base<Y>, and the two don't implement anything in common other than object.
Base<> is not related to Base<X> or Base<Y>. Now if you declared it like this:
public abstract class BaseSynchronisableDataType<T> : BaseAuditDataType, ISynchronisable<T>, ICloneable<T>, IEquatable<T> where T : MyCustomBaseClass, ICloneable<T>, IEquatable<T>, new()
You could then use MyCustomBaseClass as the list type, because you are guaranteed that all the objects that <T> represents are its children. This would seem to defeat the purpose of creating the BaseSynchronisableDataType<T>, though...
A: Ok, so the problem was that I couldn't declare the following
DataList<Artist> artists = new DataList<Artist>();
when the BaseDataListEntry class was NOT generic because of a generic constraint on the generic BaseSynchronisableCollection<T> class that DataList<T> extends. It required T to be of the type BaseSynchronisableDataType<T> which the BaseDataListEntry class could not extend because it was not generic. I needed the BaseDataListEntry class to NOT be generic so that I could put all the different collections of DataList<T> into a collection of DataList<BaseDataListEntry> in the view model (one at a time dependant on the user's selection).
The solution was to create an IBaseSynchronisableDataType<T> interface and use that in the generic constraint instead of the concrete BaseSynchronisableDataType<T> class. Then, I implemented that in the BaseDataListEntry class, so now all the constraints are satisfied. Not sure why I didn't see it earlier.
Thanks for all you time.
| |
doc_23526754
|
I originally used the AVAudioPlayer for playback, and in the simulator at 120 bpm, playing 16th notes it sung beautifully, but on my handset, as soon as I
upped the tempo a little over 60 bpm playing just 1/4 notes, it ran like a dog and wouldn't keep in time. My elation was very short lived.
To reduce latency, I tried to implement playback via Audio Units using the Apple MixerHost project as a template for an audio engine, but kept getting a bad access error after I bolted it on and connected everything up.
After many hours of it doing my head in, I gave up on that avenue of thought and I bolted on the Novocaine audio engine instead.
I have now run into a brick wall trying to connect it up to my model.
On the most basic level, my model is a Neck object containing an NSDictionary of Note objects.
Each Note object knows what string and fret of the guitar neck it's on and contains its own AVAudioPlayer.
I build a chromatic guitar neck containing either 122 notes (6 strings by 22 frets) or 144 notes (6 strings by 24 frets) depending on the neck size selected in the user preferences.
I use these Notes as my single point of truth so all scalar Notes generated by the music engine are pointers to this chromatic note bucket.
@interface Note : NSObject <NSCopying>
{
    NSString *name;
    AVAudioPlayer *soundFilePlayer;
    int stringNumber;
    int fretNumber;
}
I always start off playback with the root Note or Chord of the selected scale and then generate the note to play next so I am always playing one note behind the generated note. This way, the next Note to play is always queued up ready to go.
Playback control of these Notes is a achieved with the following code:
- (void)runMusicGenerator:(NSNumber *)counter
{
    if (self.isRunning) {
        Note *NoteToPlay;
        // pulseRate is the time interval between beats
        // staticNoteLength = 1/4 notes, 1/8th notes, 16th notes, etc.
        float delay = self.pulseRate / [self grabStaticNoteLength];
        // user setting to play single, double or triplet notes.
        if (self.beatCounter == CONST_BEAT_COUNTER_INIT_VAL) {
            NoteToPlay = [self.GuitarNeck generateNoteToPlayNext];
        } else {
            NoteToPlay = [self.GuitarNeck cloneNote:self.GuitarNeck.NoteToPlayNow];
        }
        self.GuitarNeck.NoteToPlayNow = NoteToPlay;
        [self callOutNoteToPlay];
        [self performSelector:@selector(runDrill:) withObject:NoteToPlay afterDelay:delay];
    }
}
- (Note *)generateNoteToPlayNext
{
    if ((self.musicPaused) || (self.musicStopped)) {
        // grab the root note on the string to resume
        self.NoteToPlayNow = [self grabRootNoteForString];
        // reset the flags
        self.musicPaused = NO;
        self.musicStopped = NO;
    } else {
        // Set NoteRingingOut to NoteToPlayNow
        self.NoteRingingOut = self.NoteToPlayNow;
        // Set NoteToPlayNow to NoteToPlayNext
        self.NoteToPlayNow = self.NoteToPlayNext;
        if (!self.NoteToPlayNow) {
            self.NoteToPlayNow = [self grabRootNoteForString];
            // now prep the note's audio player for playback
            [self.NoteToPlayNow.soundFilePlayer prepareToPlay];
        }
    }
    // Load NoteToPlayNext
    self.NoteToPlayNext = [self generateRandomNote];
    return self.NoteToPlayNow;
}
- (void)callOutNoteToPlay
{
    self.GuitarNeck.NoteToPlayNow.soundFilePlayer.delegate = (id)self;
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setVolume:1.0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer setCurrentTime:0];
    [self.GuitarNeck.NoteToPlayNow.soundFilePlayer play];
}
Each Note's AVAudioPlayer is loaded as follows:
- (AVAudioPlayer *)buildStringNotePlayer:(NSString *)nameOfNote
{
    NSString *soundFileName = @"S";
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", stringNumber]];
    soundFileName = [soundFileName stringByAppendingString:@"F"];
    if (fretNumber < 10) {
        soundFileName = [soundFileName stringByAppendingString:@"0"];
    }
    soundFileName = [soundFileName stringByAppendingString:[NSString stringWithFormat:@"%d", fretNumber]];
    NSString *soundPath = [[NSBundle mainBundle] pathForResource:soundFileName ofType:@"caf"];
    NSURL *fileURL = [NSURL fileURLWithPath:soundPath];
    AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
    return audioPlayer;
}
Here is where I come a cropper.
According to the Novocaine Github page ...
Playing Audio
Novocaine *audioManager = [Novocaine audioManager];
[audioManager setOutputBlock:^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels) {
// All you have to do is put your audio into "audioToPlay".
}];
But in the downloaded project, you use the following code to load the audio ...
// AUDIO FILE READING OHHH YEAHHHH
// ========================================
NSURL *inputFileURL = [[NSBundle mainBundle] URLForResource:@"TLC" withExtension:@"mp3"];
fileReader = [[AudioFileReader alloc]
              initWithAudioFileURL:inputFileURL
              samplingRate:audioManager.samplingRate
              numChannels:audioManager.numOutputChannels];
[fileReader play];
fileReader.currentTime = 30.0;
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels)
{
    [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels];
    NSLog(@"Time: %f", fileReader.currentTime);
}];
Here is where I really start to get confused because the first method uses a float and the second one uses a URL.
How do you pass a "caf" file to a float? I am not sure how to implement Novocaine - it is still fuzzy in my head.
My questions that I hope someone can help me with are as follows ...
*
*Are Novocaine objects similar to AVAudioPlayer objects, just more versatile and tweaked to the max for minimum latency? i.e. self contained audio playing (/recording/generating) units?
*Can I use Novocaine in my model as it is? i.e. 1 Novocaine object per chromatic note, or should I have 1 Novocaine object that contains all the chromatic Notes? Or do I just store the URL in the note instead and pass that to a Novocaine player?
*How can I put my audio into "audioToPlay" when my audio is a "caf" file and "audioToPlay" take a float?
*If I include and declare a Novocaine property in Note.m do I then have to rename the class to Note.mm in order to use the Novocaine object?
*How do I play multiple Novocaine objects concurrently in order to reproduce chords and intervals?
*Can I loop a Novocaine object's playback?
*Can I set the playback length of a note? i.e. play a 10 sec note for only 1 sec?
*Can I modify the above code to use Novocaine?
*Is the method I am using for runMusicGenerator the correct one to use in order to maintain a tempo that is up to professional standards?
A: Novocaine makes your life easier by eliminating the need for you to set up the RemoteIO AudioUnit manually. This includes having to painfully fill a bunch of CoreAudio structs and provide a bunch of callbacks such as this audio process callback.
static OSStatus PerformThru(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData);
Instead Novocaine handles that in its implementation and then calls your block, which you set by doing this.
[audioManager setOutputBlock: ^(float *audioToPlay, UInt32 numSamples, UInt32 numChannels){} ];
Whatever you write to audioToPlay gets played.
*
*Novocaine sets up the RemoteIO AudioUnit for you. This is a low-level CoreAudio API, different from the high-level AVFoundation, and very low-latency as expected. You are right in that Novocaine is self-contained. You can record, generate, and process audio in realtime.
*Novocaine is a singleton, you cannot have multiple Novocaine instances. One way to do it is to store your guitar sound/sounds in a separate class or array, and then write a bunch of methods, using Novocaine to play them.
*You have a bunch of options. You can use Novocaine's AudioFileReader to play your .caf file for you. You do this by allocating an AudioFileReader and then passing the URL of the .caf file you want to play, as per example code. You then stick [fileReader retrieveFreshAudio:data numFrames:numFrames numChannels:numChannels] in your block, as per example code. Each time your block is called, AudioFileReader grabs and buffers a chunk of audio from disk and puts it in audioToPlay which subsequently gets played. There are some disadvantages with this. For short sounds (such as your guitar sound I'm assuming) repeatedly calling retrieveFreshAudio is a performance hit. It is generally a better idea (for short sounds) to perform a synchronous, sequential read of the entire file into memory. Novocaine does not provide a way to do this (yet). You will have to use ExtAudioFileServices to do this. The Apple example project MixerHost details how to do this.
*If you are using AudioFileReader yes. You only rename to .mm when you are #import ing from Obj-C++ headers or #include ing C++ headers.
*As mentioned earlier, only 1 Novocaine instance is allowed. You can achieve polyphony by mixing multiple audio sources. This is simply just adding buffers together. If you have made multiple versions of the same guitar sound at different pitches, just read them all in to memory, and mix away. If you only want to have one guitar sound, then you have to, in realtime, change the playback rate of however many notes you are playing and then mixdown.
*Novocaine is agnostic to what you are actually playing and does not care how long you are playing a sample for. In order to loop a sound, you have to maintain a count of how many samples have elapsed, check if you are at the end of your sound, and then set that count back to 0.
*Yes. Assuming a 44.1k sample rate, 1 sec of audio = 44100 samples. You would then reset your count when it reaches 44100.
*Yes. It looks something like this. Assuming you have 4 guitar sounds which are mono and longer than 1 second long, and you have read them into memory float *guitarC, *guitarE, *guitarG, *guitarB; (jazzy CMaj7 chord w00t), and want to mix them down for 1 second and loop that back in mono:
[audioManager setOutputBlock:^(float *data, UInt32 numFrames, UInt32 numChannels){
    static int count = 0;
    for(int i = 0; i < numFrames; ++i){
        // Mono mix each sample of each sound together. Since the result can be 4x louder, divide the total amp by 4.
        // You should be using `vDSP_vadd` from the Accelerate framework for added performance.
        data[i] = (guitarC[count] + guitarE[count] + guitarG[count] + guitarB[count]) * 0.25;
        if(++count >= 44100) count = 0; // Plays the mix for 1 sec, then loops
    }
}];
*
*Not exactly. Using performSelector or any mechanism scheduled on a runloop or thread is not guaranteed to be precise. You might experience timing irregularities when the CPU load fluctuates, for example. Use the audio block if you want sample accurate timing.
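The mix-and-loop arithmetic in the answer above is language-independent. Below is a small NumPy sketch of the same idea, with hypothetical constant buffers standing in for the four guitar sounds; none of this is Novocaine API, just the counter-and-wrap logic.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second, so one second == 44100 samples

def render(sources, num_frames, start):
    """Mix equal-length mono buffers, looping after SAMPLE_RATE samples.

    sources: list of 1-D arrays at least SAMPLE_RATE long.
    start: current loop position. Returns (mixed_frames, new_position).
    """
    # Indices into the source buffers, wrapping at the 1-second mark
    idx = (start + np.arange(num_frames)) % SAMPLE_RATE
    # Sum the sources and scale down so the mix is no louder than one source
    mixed = sum(src[idx] for src in sources) / len(sources)
    return mixed, int(idx[-1] + 1) % SAMPLE_RATE

# Demo: four constant "notes" mixed together, rendered one 512-frame chunk at a time
srcs = [np.full(SAMPLE_RATE, v) for v in (0.1, 0.2, 0.3, 0.4)]
chunk, pos = render(srcs, 512, start=0)
print(round(float(chunk[0]), 3), pos)  # -> 0.25 512
```

In the Objective-C version, count plays the role of start here, and data[i] receives each mixed sample.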
| |
doc_23526755
|
A: Just use a before_action callback to set the default locale.
class Admin::DashboardController
  before_action :set_default_locale

  # ...

  private

  def set_default_locale
    I18n.default_locale = :en
  end
end
A: before_action :set_locale

def set_locale
  I18n.locale = params[:locale] || I18n.default_locale
end

More details in the Rails i18n guide: http://guides.rubyonrails.org/i18n.html
| |
doc_23526756
|
This is my PagerTabStrip class
public class maintabs extends Fragment {
    FragmentPagerAdapter adapterViewPager;
    PagerTabStrip pagerTabStrip;
    View rootView;

    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        rootView = inflater.inflate(R.layout.tabs_main, container, false);
        ViewPager vpPager = (ViewPager) rootView.findViewById(R.id.vpPager);
        pagerTabStrip = (PagerTabStrip) rootView.findViewById(R.id.pager_header);
        pagerTabStrip.setDrawFullUnderline(true);
        adapterViewPager = new SampleFragmentPagerAdapter(getFragmentManager());
        vpPager.setAdapter(adapterViewPager);
        return rootView;
    }
}
A: I've found a solution.
Just use a FragmentStatePagerAdapter instead of a FragmentPagerAdapter.
| |
doc_23526757
|
I am using following code.
$headers = 'From: My Name <test12df432abc@gmail.com>' . "\r\n";
wp_mail($to, $subjects, $message, $headers); // not working
wp_mail($to, $subjects, $message ); // working
I think this happening is because my From: address doesn't match to the domain i'm sending the email from. But is there any way for me to accomplish above using wp_mail.
I am getting following debugging information using smtp debug
2017-02-12 13:48:40 CLIENT -> SERVER: QUIT
2017-02-12 13:48:40 SMTP -> get_lines(): $data is ""
2017-02-12 13:48:40 SMTP -> get_lines(): $str is "* ..* Service closing transmission channel
"
2017-02-12 13:48:40 SERVER -> CLIENT: * ..* Service closing transmission channel
2017-02-12 13:48:40 Connection: closed
A: Try this:
$headers = array(
    'From: My Name <test12df432abc@gmail.com>'
);
$headers = implode( PHP_EOL, $headers );
wp_mail( $to, $subjects, $message, $headers );
PHP_EOL will add the proper line break based on the system it's on (where \r\n is for Windows, \n is for Unix). Using implode() will make sure it's only added if needed. In this case, you're only sending one header so the line break isn't needed anyway. But if you want to send more headers:
$headers = array(
    'Bcc: secretuser123@aol.com',
    'From: My Name <test12df432abc@gmail.com>',
    'Reply-To: webmaster@hotmail.com'
);
$headers = implode( PHP_EOL, $headers );
wp_mail( $to, $subjects, $message, $headers );
| |
doc_23526758
|
header("Content-Type: text/x-vcard;charset=utf-8;");
header("Content-Disposition: attachment; filename=card.vcf");
header("Pragma: no-cache");
header("Expires: 0");
echo $vcard_serialized;
on chrome from Pc, it downloads card.vcf, but from mobile it downloads card.vcf.html... why?
A: I had the same issue, but I fixed it using the code below:
header('Content-Description: Download vCard');
header('Content-Type: text/vcard');
header('Content-Disposition: attachment; filename='.$your_filename_here);
header('Content-Transfer-Encoding: binary');
header('Expires: 0');
header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
header('Pragma: public');
ob_clean();
flush();
echo $vcard_serialized; //echo the content
exit;
| |
doc_23526759
|
Then I have an ArrayList<GameObject> selector that contains items that the user currently has selected. Let's say the user clicks on a tank, then this tank would be stored in selector. If he then right click somewhere he is telling the tank to go to the mouse's coordinates. And I also need to tell all other players this by sending the selector ArrayList and the mouse coordinates to the server so that the server can pass it on to the other clients.
Now to the problem. Sending the selector means sending a lot of unnecessary data (for instance textures), since the GameObject class holds this info. I would also have to implement Serializable in every class GameObject uses. So my question is whether I can somehow have an ArrayList that only stores some sort of pointers into the actual gameObjects ArrayList, so that when the user selects the tank, selector holds a lightweight reference to the tank object that lives in gameObjects.
I realize it might be a bit confusing. Hope you understand.
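One common way to get the effect described, without serializing textures or the objects themselves, is to key the objects by an id and let the selection (and the network message) carry only ids. This is a hedged sketch in Python rather than the asker's Java, with hypothetical fields, since the idea is language-independent:

```python
# One authoritative store of game objects, keyed by id
game_objects = {
    1: {"type": "tank", "x": 10, "y": 20},
    2: {"type": "jeep", "x": 30, "y": 40},
}
selector = [1]  # the user clicked the tank: store its id, not the object

# To act on the selection locally, look the objects back up:
selected = [game_objects[i] for i in selector]
print(selected[0]["type"])  # -> tank

# To tell the server, send only the ids plus the command payload:
message = {"cmd": "move", "ids": selector, "to": (55, 60)}
```

The server and the other clients keep their own id-keyed stores, so the small message is enough to reconstruct the command on every machine.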
| |
doc_23526760
|
I wanted to understand the behaviour a little better; I request the location and Bluetooth permissions.
After the scan starts, I turn Bluetooth off on my phone. On a Moto G2 with Android 6.0 the scan still keeps giving me the expected results, but when I test on a Samsung S8 with Android 9 and a Sony Xperia T2 Ultra with Android 5.1, the log shows that Bluetooth was disabled and the scan was stopped.
I obtain the scanner as follows:
bluetoothManager = getSystemService(Context.BLUETOOTH_SERVICE) as BluetoothManager
bluetoothAdapter = bluetoothManager.adapter
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    bluetoothScanner = bluetoothAdapter.bluetoothLeScanner
}
@TargetApi(Build.VERSION_CODES.M)
class BleScanCallback(resultMap: MutableMap<String?, BluetoothDevice?>) : ScanCallback() {
    var resultOfScan = resultMap

    @RequiresApi(Build.VERSION_CODES.LOLLIPOP)
    @TargetApi(Build.VERSION_CODES.M)
    override fun onScanResult(callbackType: Int, result: ScanResult?) {
        addScanResult(result)
        Log.v("Main Activity", "I found a ble device ${result}")
        Log.v("Main Activity", "I found a ble device ${result?.device?.address}")
    }

    override fun onBatchScanResults(results: MutableList<ScanResult>?) {
        results?.forEach { result -> addScanResult(result) }
    }

    override fun onScanFailed(errorCode: Int) {
        Log.v("Main Activity", "Bluetooth LE scan failed. Error code: $errorCode")
    }

    fun addScanResult(scanResult: ScanResult?) {
        val bleDevice = scanResult?.device
        val deviceAddress = bleDevice?.address
        resultOfScan.put(deviceAddress, bleDevice)
    }
}
scanResult brings the necessary information when Bluetooth is on. I have already set it up as in the image below:
https://i.stack.imgur.com/o9jGRm.png
I would like to understand what makes scanning keep working even with Bluetooth off.
A: There is no way to detect BLE devices with bluetooth off
Bluetooth must be enabled
Set up BLE
Before your application can communicate over BLE, you need
to verify that BLE is supported on the device, and if so, ensure that
it is enabled.
A: On some Android devices including Pixel phones, Android One devices, and unmodified AOSP builds, turning off bluetooth in the quick settings panel doesn't really turn off bluetooth. Instead, it merely blocks bluetooth connections and pairing in software, yet allows Bluetooth LE scans to continue unaffected. As @Jorgesys correctly notes, it is impossible to detect BLE devices if the Bluetooth radio is really turned off, so let me say again: despite what the quick settings panel says, bluetooth is not necessarily powered off.
On supported devices, this happens only if two things are true:
*
*Bluetooth is turned on in the full settings menu (On Android 9: Settings -> Connected Devices -> Connection preferences -> Bluetooth ON)
*The user has selected to "Allow apps and services to scan for nearby devices at any time, even when Bluetooth is off. This can be used, for example, to improve location-based features and services." (Settings -> Security & Location -> Location -> Advanced -> Scanning -> Bluetooth scanning ON)
| |
doc_23526761
|
I have this:
protected void onPostExecute(String result) {
    super.onPostExecute(result);
    try {
        JSONObject jsonObject = new JSONObject(result);
        JSONArray jsonArray = new JSONArray(jsonObject.getString("cast"));
But I want to avoid calling JSONArray jsonArray = new JSONArray(jsonObject.getString("cast")); because then I make an array of the entire JSONObject instead of just the first five items.
Thanks!
A: You can either modify the JSON response or create a new JSONArray and put in the first five items. Please refer to the code:
JSONArray studentArray = jsonObject.optJSONArray("students");
JSONArray firstFiveStudentArray = new JSONArray();
for (int i = 0; i < 5; i++) {
    JSONObject studentObj = studentArray.optJSONObject(i);
    if (studentObj != null) {
        firstFiveStudentArray.put(studentObj);
    }
}
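Independent of the Android JSON API, the "first five" idea is just a bounded copy. Here is a quick hedged Python sketch with a made-up payload (the field names are illustrative, not the asker's data):

```python
import json

# Hypothetical response with a "cast" array of seven items
payload = '{"cast": [{"n": 1}, {"n": 2}, {"n": 3}, {"n": 4}, {"n": 5}, {"n": 6}, {"n": 7}]}'
cast = json.loads(payload)["cast"]
first_five = cast[:5]  # slicing also copes with arrays shorter than five
print(len(first_five))  # -> 5
```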
| |
doc_23526762
|
When I call my child component in my parent component, I get an error:
Type '{}' is missing the following properties from type 'IProps': className, disabled ts(2739)
I thought that because I have default props on my child component, they would fill in for any missing props when calling the component from other components.
I know I can make individual props optional in the interface IProps in my child component using className?: string but this is not a solution I'm looking for as it presents more problems than it solves.
I'd prefer not to have to note each default prop when I call a child from another component like below as for some components, I have many props:
<Child className={''} disabled={false} />
I'm sure there's a fairly simple solution for this but I can't find any direction so far. Any advice or direction would be welcome.
// Parent component:
import React, { FC } from 'react'
import Child from './child'

const Parent: FC = () => {
  return (
    <Child />
  )
}

export default Parent

// Child component:
import React, { FC } from 'react'

interface IProps {
  className: string
  disabled: boolean
}

const Child: FC<IProps> = ({ className, disabled }: IProps) => {
  return (
    <button className={className} disabled={disabled}>
      Click here
    </button>
  )
}

Child.defaultProps = {
  className: '',
  disabled: false,
}

export default Child
A: Solved it, for anyone looking at this answer: define the default props first, include their type in the component's props, and then assign them to Child.defaultProps:

import React, { FC } from 'react'

interface IProps {
  className: string
  disabled: boolean
}

const defaultProps = {
  className: '',
  disabled: false,
}

const Child: FC<IProps & typeof defaultProps> = ({ className, disabled }: IProps) => {
  return (
    <button className={className} disabled={disabled}>
      Click here
    </button>
  )
}

Child.defaultProps = defaultProps
A: You can provide default arguments to stop this happening:
const Child: FC<IProps> = ({ className = 'foo', disabled = false }: IProps) => {
...
}
This also gets around the optional-props problem you mentioned: with default arguments in place, you can safely mark the props optional in IProps, since callers who omit them (i.e., lazy devs not checking which props are required) still get sensible values.
A: You did it right, but you forgot to pass the className and disabled props to the child component:
const Parent: FC = () => {
return (
<Child className={''} disabled={false} />
)
}
| |
doc_23526763
|
I think the answer will be "port-forwarding".But how can I do that ?
A: You can use SSH port forwarding to access your services from the host machine in the following way:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
where 8001 is the port on which your service is exposed and 192.168.0.20 is the minikube IP.
Now you'll be able to access your application from your laptop pointing the browser to http://192.168.0.20:30000
A: If you mean accessing your machine from the internet, then the answer is yes, "port-forwarding", using your external IP address [https://www.whatismyip.com/]. The configuration goes into your router settings; check your router manual.
| |
doc_23526764
|
root\
default.aspx
web.config
subfolder\
page.aspx
web.config
If I access page.aspx by going to locahost/subfolder/page.aspx it reads the web.config in the subfolder just fine.
However, I have a route to the page setup like so:
protected void Application_Start(object sender, EventArgs e)
{
RegisterRoutes(RouteTable.Routes);
}
public void RegisterRoutes(RouteCollection routes)
{
routes.MapPageRoute("", "test", "~/subfolder/page.aspx");
}
And when I try to access the page via that route, by going to localhost/test, the page loads just fine but it fails to read the values from the web.config in the sub folder.
Am I missing something? Is there some other step to allow a sub web.config to work with routes?
I'm accessing the sub web.config using:
var test = WebConfigurationManager.AppSettings["testSetting"];
A: I've been able to solve my issue by adding the following to my Global.asax:
protected void Application_BeginRequest(object sender, EventArgs e)
{
HttpRequest request = HttpContext.Current.Request;
Route route = RouteTable.Routes.Where(x => (x as Route)?.Url == request.Url.AbsolutePath.TrimStart('/')).FirstOrDefault() as Route;
if (route != null)
{
if (route.RouteHandler.GetType() == typeof(PageRouteHandler))
{
HttpContext.Current.RewritePath(((PageRouteHandler)route.RouteHandler).VirtualPath, request.PathInfo, request.Url.Query.TrimStart('?'), false);
}
}
}
By doing this, I fake out the Url property of the Request object to use the "real" URL to the page for any request with a Url that matches an existing page route. This way, when WebConfigurationManager pulls up config (which it does by current virtual path), it pulls it up using the appropriate page.
| |
doc_23526765
|
Below is the code I tried:
#include "contiki.h"
#include "stdio.h" /* For printf() */
#include "stdlib.h"
PROCESS(random_process, "Random process");
AUTOSTART_PROCESSES(&random_process);
PROCESS_THREAD(random_process, ev, data)
{
PROCESS_BEGIN();
int r=rand();
printf("Hello, world. Random Number is %d",r);
PROCESS_END();
}
When building with make I get the error below:
user@instant-contiki:~/Desktop/Random$ make target=native random_sample
TARGET not defined, using target 'native'
CC random_sample.c
LD random_sample.native
contiki-native.a(broadcast-annou): In function `set_timers':
/home/user/contiki-2.7/core/net/rime/broadcast-announcement.c:171: undefined reference to `random_rand'
collect2: ld returned 1 exit status
make: *** [random_sample.native] Error 1
rm random_sample.co
Can someone please help me with this? Thanks in advance.
A: You have not configured your project properly; you need to set up a Makefile and project-conf.h to start with Contiki. See the following hello-world example: http://github.com/contiki-os/contiki/tree/master/examples/hello-world.
I recommend you use the example in the link as a starting point for your project files.
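For reference, a minimal Contiki application Makefile follows the pattern used by the hello-world example; the project name and the CONTIKI path below are assumptions that must match your own tree layout:

```makefile
# Minimal Contiki application Makefile (sketch; names/paths are assumptions)
CONTIKI_PROJECT = random_sample
all: $(CONTIKI_PROJECT)

# Relative path to your Contiki source tree -- adjust for your layout
CONTIKI = ../..
include $(CONTIKI)/Makefile.include
```

With this in place, `make TARGET=native random_sample` links against the full Contiki core, which is what resolves symbols like `random_rand`.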
| |
doc_23526766
|
I'm having problems while deploying because npm started using ^1.2.3 version notations and it's not compatible with the current npm in my application:
remote: npm ERR! Error: No compatible version found: through@'^2.3.4'
remote: npm ERR! Valid install targets:
remote: npm ERR! ["0.0.1","0.0.2","0.0.3","0.0.4","0.1.0","0.1.1","0.1.2","0.1.3","0.1.4","1.0.0","1.1.0","1.1.1","1.1.2","2.0.0","2.1.0","2.2.0","2.2.1","2.2.2","2.2.4","2.2.5","2.2.6","2.2.7","2.3.1","2.3.2","2.3.3","2.3.4"]
Is there a way of fixing this, or I'll have to go back to outdated packages?
A: OpenShift does not provide root access to developers, but you can still select a custom version of npm by running your own nodejs binary in user space.
Developers can also package up their own custom nodejs cartridge, allowing teams to define and standardize their dependencies in a reusable way.
Here is an answer that helps you run a custom version of Nodejs on OpenShift
You can also try working with user-defined npm globals on OpenShift
| |
doc_23526767
|
$html = file_get_contents($url);
$pattern = '/[A-Z0-9._%+-]+(@|\(at\)|\[at\])[A-Z0-9.-]+\.[A-Z]{2,4}\b/i'; //also (at) and [at]
preg_match_all($pattern,$html,$emails);
foreach ($emails[0] as $m)
{
$m[] = $m;
}
foreach($m as $n){echo $n."<br>";}
This is just an example for illustration of the question! Don't judge it on common sense.
NOW what i want is 2 things:
*
*Stop the process when the user clicks a button. This means: outputting the already collected array $m[].
*Stop the process (or better: stop the array-collecting process and jump to echoing the already collected array) based on time (for example, max. 1 minute collecting the array, THEN jumping to echoing).
I don't want to echo live, and setting max_execution_time will stop the script without echoing.
Thanks for your wise advice on both subquestions.
A: You have to run the script through AJAX, and on button click register a session flag via another AJAX call:
foreach ($emails[0] as $m)
{
    if (!isset($_SESSION['STOP'])) {
        $collected[] = $m;
    } else {
        break; // stop collecting and fall through to echoing
    }
}
| |
doc_23526768
|
// Get the modal
var modal = document.getElementById('reserveer-modal');
// Get the button that opens the modal
var btn = document.getElementById("reserveer-knop");
// Get the <span> element that closes the modal
var span = document.getElementsByClassName("close")[0];
// When the user clicks the button, open the modal
btn.onclick = function() {
var x = window.innerWidth;
if (x > 768) {
//event.preventDefault();
modal.style.display = "block";
} else {
//event.preventDefault();
}
}
// When the user clicks on <span> (x), close the modal
span.onclick = function() {
modal.style.display = "none";
}
// When the user clicks anywhere outside of the modal, close it
window.onclick = function(event) {
if (event.target == modal) {
modal.style.display = "none";
}
}
A: Try changing onclick to addEventListener and see if that helps you.
// When the user clicks the button, open the modal
btn.addEventListener('click', function () {
var x = window.innerWidth;
if (x > 768) {
//event.preventDefault();
modal.style.display = "block";
} else {
//event.preventDefault();
}
});
You can also pass named function to addEventListener
A: Binding the click event listener to the element should fix the problem you've been having.
btn.addEventListener("click", function() {
var x = window.innerWidth;
if (x > 768) {
//event.preventDefault();
modal.style.display = "block";
} else {
//event.preventDefault();
}
});
Alternatively, you could try using the touchstart event, which works just like the "mousedown" event, just for mobile.
elem.addEventListener("touchstart", handler);
Your code would look like this:
btn.addEventListener("touchstart", function() {
var x = window.innerWidth;
if (x > 768) {
//event.preventDefault();
modal.style.display = "block";
} else {
//event.preventDefault();
}
});
A: Had the same issue; in my case it turned out to be a z-index problem. After setting z-index to 100, it worked.
A: Make sure you don't have any async/await functions in your code or any arrow functions () => {}. Mobile browsers seem to use older versions of JavaScript from before async/await or arrow functions were introduced.
| |
doc_23526769
|
Can Visual Studio do the same thing as Xcode?
Thank you!
A: I believe what you are looking for is solution build configurations, check this link out:
http://msdn.microsoft.com/en-us/library/kwybya3w(v=vs.110).aspx
Here is a good example of including a reference for a specific configuration.
Visual Studio Project: How to include a reference for one configuration only?
You will need to research the topic a bit, but here is how to get started:
1. Open your Solution
2. In solution explorer right click the solution
3. Select Configuration Manager
4. Create a new configuration or modify one of the default ones.
Example of a solution with many build configurations:
Each of these configurations have custom configs and some have different references based on the configuration.
| |
doc_23526770
|
My question is, how can I see the exact xcodebuild command line that xcode is using to build a working simulator build. I just need to copy that into my shell script but it's proving elusive. I did a find in the build logs from xcode but there's no mention of xcodebuild there.
A: You can't. Xcode itself doesn't invoke xcodebuild during the build process. This post has more information on executing xcodebuild to build for the simulator.
| |
doc_23526771
|
I want the audio data as numpy array to process it, but I don't seem to be able to convert the blob properly.
The audio blob contains:
[Float32Array[32768], Float32Array[32768]]
In python, I tried:
@socketio.on('gotaudio')
def get_audio(blob):
    # CONVERT THE BLOB
    data = blob[0]
    dat = np.array(json.loads(data))
    # DO SOME SIGNAL PROCESSING
    fftData = abs(np.fft.rfft(dat)) ** 2
    ....
But this throws the error:
TypeError: expected string or buffer
How can I transform the audio blob correctly so that it can be processed with np.fft?
A: Have you tried using base64.b64decode() on it first? (base64 is in the standard lib)
It would help to get an example blob.
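If the blob reaches Python as raw bytes of 32-bit floats (one possible serialization of a Float32Array — the exact wire format depends on how the client sends it), a sketch using np.frombuffer avoids json.loads entirely:

```python
import numpy as np

def blob_to_array(blob_bytes: bytes) -> np.ndarray:
    # Interpret the raw bytes as an array of 32-bit floats
    return np.frombuffer(blob_bytes, dtype=np.float32)

def power_spectrum(blob_bytes: bytes) -> np.ndarray:
    # The decoded array can go straight into np.fft
    data = blob_to_array(blob_bytes)
    return np.abs(np.fft.rfft(data)) ** 2
```

The function names here are illustrative; the key call is np.frombuffer, which needs the dtype to match whatever the JavaScript side actually transmits.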
| |
doc_23526772
|
For the past 2 days I was trying to export this table to an Excel file. Finally I was able to do this by using an XML builder template.
Here is my file
xml.instruct! :xml, :version=>"1.0", :encoding=>"UTF-8"
xml.Workbook({
'xmlns' => "urn:schemas-microsoft-com:office:spreadsheet",
'xmlns:o' => "urn:schemas-microsoft-com:office:office",
'xmlns:x' => "urn:schemas-microsoft-com:office:excel",
'xmlns:html' => "http://www.w3.org/TR/REC-html40",
'xmlns:ss' => "urn:schemas-microsoft-com:office:spreadsheet"
}) do
xml.Styles do
xml.Style 'ss:ID' => 'Default', 'ss:Name' =>'Normal' do
xml.Alignment 'ss:Vertical' => 'Bottom','ss:Horizontal' => 'Center'
xml.Borders
xml.Font 'ss:FontName' => 'Verdana'
xml.Interior
xml.NumberFormat
end
xml.Style 'ss:ID' => 'header' do
xml.Alignment 'ss:Vertical' => 'Bottom',
'ss:Horizontal' => 'Center'
xml.Font 'ss:FontName' => 'Arial','ss:Bold'=>'1'
xml.Interior 'ss:Color'=>'#99CCFF', 'ss:Pattern'=>'Solid'
end
end
xml.Worksheet 'ss:Name' => 'Projects Reports' do
xml.Table 'ss:DefaultColumnWidth'=>'100','ss:DefaultRowHeight' => '15' do
# Header
xml.Row 'ss:StyleID' => 'header' do
xml.Cell { xml.Data 'ID', 'ss:Type' => 'String' }
xml.Cell { xml.Data 'NAME', 'ss:Type' => 'String' }
xml.Cell { xml.Data 'Actual Hours', 'ss:Type' => 'String' }
xml.Cell { xml.Data 'Estimated Hours', 'ss:Type' => 'String' }
xml.Cell { xml.Data 'Deadline', 'ss:Type' => 'String' }
end
# Rows
for project in @projects
xml.Row do
xml.Cell { xml.Data project.id, 'ss:Type' => 'Number' }
xml.Cell { xml.Data project.name, 'ss:Type' => 'String' }
xml.Cell { xml.Data project.working_hours, 'ss:Type' => 'String' }
xml.Cell { xml.Data project.estimated_hours, 'ss:Type' => 'String' }
xml.Cell { xml.Data project.deadline, 'ss:Type' => 'String' }
end
end
end
end
end
In my original HTML view I display the sum of project working hours at the bottom of the table, and I want that sum to appear in the Excel file as well.
I did some searching on Google but didn't find anything that can help me. I would appreciate it if anyone could give me some direction on how to write this in the XML builder file.
| |
doc_23526773
|
ODATA -> Blob storage (JSON)
JSON -> Snowflake table
Copy Data -> Copy Data - Lookup
Both copy data is working fine.
In the lookup (query), I have given the following. (I need to add 1 value in the table; it's a variant column.)
Update T1 set source_json = object_insert(source_json, device_type, web_browser, TRUE);
When i use the above query in snowflake database it works fine, the table has 25K rows.
When run from pipeline, it gives the below error.
Multiple SQL statements in a single API call are not supported; use one API call per statement instead.
Any suggestions please.
A: Some of the workarounds are provided below.
Execute multiple SQL files using SnowSql (command line utility) as described below:
snowsql -c cc -f file1.sql -f file2.sql -f file3.sql
Once we have downloaded and installed the snowsql tool, we can wrap up all our SQL queries in a single .sql file and call that file using bash.
For example, suppose that we have written all the queries which we would like to run around in a file named abc.sql stored in /tmp.
We can then run the following command:
snowsql -a enter_accountname -u enter_your_username -f /tmp/abc.sql
For reference:
Workaround for multiple sql statement in a single api call are not supported
Multiple single api call are not supported use one api call per statement instead
A: Thanks for the reply. The requirement changed.
Our flow
Lookup1 -> Copy data -> Copy data > Lookup2
We passed the values from Lookup1 and ran the stored procedure.
| |
doc_23526774
|
The problem is that my heroku app fails to connect to the socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((daemon_socket_vars['host'], daemon_socket_vars['port']))
s.send("Hi!")
s.close()
The heroku app fails on the second line after timing out. When I run something identical on either my laptop, or a friend's laptop (while the python script that's acting as the server is running on my laptop in both cases) it works. Does anyone know why heroku would have problems with this? Thanks!
A: When running on Heroku, your server should bind to port specified in the environment variable PORT (say 7880, just for the sake of this discussion). It is not guaranteed to be 80, 5000, 8000, 8080, or anything else.
To the outside world, however, this will appear as port 80 or port 443. That is, if connecting from outside of Heroku, your client will be connecting to port 80.
One final caveat: when connecting from outside Heroku, your client will go through the "Heroku Routing Mesh", which among other things does the 80-->something port "translation". The thing is, the routing mesh is an HTTP routing mesh: it will only accept incoming HTTP requests, and will route them (after sometimes altering them, like adding headers etc.) to your dyno.
So you can't just write a plain-sockets app on the Heroku and connect to it directly, you'll have to use HTTP as your transport.
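A minimal sketch of the port-resolution logic on the server side (PORT is Heroku's convention; the local fallback value is an assumption for development runs):

```python
import os

def resolve_bind_port(environ=os.environ, default=5000):
    # Heroku injects the port to bind via the PORT environment variable
    # (e.g. 7880, as in the discussion above); locally it is usually
    # absent, so fall back to a development default.
    return int(environ.get("PORT", default))
```

Whatever HTTP framework you use would then bind to `resolve_bind_port()` rather than a hard-coded port.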
| |
doc_23526775
|
How can I best go about this without crucially breaking the Play framework?
A: The simple answer would be to tell you that ZooKeeper is not meant to be used as a general datastore/database; however, I am inclined to believe that you are really looking for something like MongoDB.
Check out MongoDB Replica Sets and Election/Voting. This should give you want you want. Much easier to manage than ZooKeeper and more useful for general application data storage needs.
| |
doc_23526776
|
The usual use-case is N <= 8 and M <= 128
I do this operation a lot in an innerloop on an embedded device. Writing a trivial implementation is easy but not fast enough for my taste (e.g. brute force search until a solution is found).
I wonder if anyone has a more elegant solution in his bag of tricks.
A: int nr = 0;
for ( int i = 0; i < M; ++i )
{
if ( bits[i] )
++nr;
else
{
nr = 0; continue;
}
if ( nr == n ) return i - nr + 1; // start position
}
What do you mean by brute force? O(M*N) or this O(M) solution? if you meant this, then I'm not sure how much more you can optimize things.
It's true we could achieve constant improvements by walking over every byte instead of every bit. This comes to mind:
When I say byte I mean a sequence of N bits this time.
for ( int i = 0; i < M; i += N )
if ( bits[i] == 0 ) // if the first bit of a byte is 0, that byte alone cannot be a solution. Neither can it be a solution in conjunction with the previous byte, so skip it.
continue;
else // if the first bit is 1, then either the current byte is a solution on its own or it is a solution in conjunction with the previous byte
{
// search the bits in the previous byte.
int nrprev = 0;
while ( i - nrprev >= 0 && bits[i - nrprev] ) ++nrprev;
// search the bits in the current byte;
int nrcurr = 0;
while ( bits[i + nrcurr + 1] && nrcurr + nrprev <= N ) ++nrcurr;
if ( nrcurr + nrprev >= N ) // solution starting at i - nrprev + 1.
return i - nrprev + 1;
}
Not tested. Might need some additional conditions to ensure correctness, but the idea seems sound.
A: Hacker's Delight, chapter 6-2.
A: Unroll the inner loop with a lookup table.
There are four classes of byte:
00000001 - // Bytes ending with one or more 1's. These start a run.
11111111 - // All 1's. These continue a run.
10000000 - // Bytes starting with 1's but ending with 0's. These end a run.
10111000 - // All the rest. These can be enders or short runs.
Make a lookup table that lets you distinguish these. Then process the bit array one byte at a time.
edit
I'd like to be a little less vague about the contents of the lookup table. In specific, I'll suggest that you need three tables, each with 256 entries, for the following characteristics:
Number of bits set.
Number of bits set before first zero.
Number of bits set after last zero.
Depending on how you do it, you may not need the first.
A: I do something similar on an embedded device running on a MIPS core. The MIPS architecture includes the CLZ instruction ("Count Leading Zeroes") which will return the number of leading zero-bits for the specified register. If you need to count the leading one-bits, simply invert the data before calling CLZ.
Example, assuming you have a C-language function CLZ as an alias for the assembly instruction:
unsigned numbits = 0, totalbits = 0;
while (data != 0 && numbits != N) {
numbits = CLZ(data); // count leading zeroes
data <<= numbits; // shift off leading zeroes
totalbits += numbits; // keep track of how many bits we've shifted off
numbits = CLZ(~data); // count leading ones
data <<= numbits; // shift off leading ones
totalbits += numbits; // keep track of how many bits we've shifted off
}
At the end of this loop, totalbits will indicate the offset (in bits, from the left) of the first run of N consecutive one-bits. Each line inside the loop can be represented in a single assembly instruction (except the fourth line, which requires a second for the invert operation).
Other non-MIPS architectures may have similar instructions available.
A: Simple SWAR answer:
Given the value V you're inspecting, take N M-bit-wide registers. For all n in N, set register n to V >> n.
Dump bitwise AND(all N) into another M-wide register. Then simply find the bits set in that register and that will be the start of the an all-bits run.
I'm sure if you don't have M-bit-wide registers you can adapt this to a smaller register size.
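That shift-and-AND idea can be sketched in Python as follows (an illustrative translation, not a register-level implementation; bit 0 here is the least significant bit):

```python
def first_run_of_ones(x: int, n: int) -> int:
    """Return the LSB-side index of the first run of n consecutive
    1-bits in x, or -1 if no such run exists."""
    y = x
    for i in range(1, n):
        y &= x >> i  # surviving bits mark positions that start a run of >= n ones
    if y == 0:
        return -1
    return (y & -y).bit_length() - 1  # index of the lowest surviving bit
```

For the stated problem sizes (N <= 8), this is at most seven shift-AND steps per word, regardless of M.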
A: This can be easily solved, and you don't need a count-zeroes instruction.
y = x ^ x-1
gives you a string of 1's up to the least-significant 1-bit in x.
y + 1
is the next individual bit which may be 1 or 0, and
x ^ (x - (y + 1))
gives you a string of 1's from that bit until the next 1-bit.
Then you can multiply the search pattern by (y+1) and recurse…
I'm working on an algorithm to fetch the strings… hold on…
Yeah… easily solved… while I'm working on that, note there's another trick. If you divide a word into substrings of n bits, then a series of ≥2n-1 1's must cover at least one substring. For simplicity, assume the substrings are 4 bits and words are 32 bits. You can check the substrings simultaneously to quickly filter the input:
const unsigned int word_starts = 0x11111111;
unsigned int word = whatever;
unsigned int flips = word + word_starts;
if ( carry bit from previous addition ) return true;
return ~ ( word ^ flips ) & word_starts;
This works because, after the addition operation, each bit (besides the first) in flips corresponding to a 1-bit in in word_starts equals (by the definition of binary addition)
word ^ carry_from_right ^ 1
and you can extract the carry bits by xoring with word again, negating, and ANDing. If no carry bits are set, a 1-string won't exist.
Unfortunately, you have to check the final carry bit, which C can't do but most processors can.
A: If you're on an intel-compatible platform, the BSF (Bit Scan Forward) and BSR (Bit Scan Reverse) asm instructions could help you drop the first and last zero bits. This would be more efficient than the brute-force approach.
A: This might be a bit over the top for what you are doing but I needed something heavyweight for a custom file system block allocation. If N < 32 then you can remove the second half of the the code.
For backward compatibility the most significant bit of the first word is regarded as bit 0.
Note that the algorithm uses a sentinel word (all zeros) at the end to stop any search rather than continually checking for end of array. Also note that the algorithm allows searching to start from any position in the bit array (typically the end of the last successful allocation) rather than always starting from the beginning of the bit array.
Supply your own compiler specific msbit32() function.
#define leftMask(x) (((int32_t)(0x80000000)) >> ((x) - 1)) // cast so that sign extended (arithmetic) shift used
#define rightMask(x) (1 << ((x) - 1))
/* Given a multi-word bitmap array find a run of consecutive set bits and clear them.
*
* Returns 0 if bitrun not found.
* 1 if bitrun found, foundIndex contains the bit index of the first bit in the run (bit index 0 is the most significant bit of the word at lowest address).
*/
static int findBitRun(int runLen, uint32_t *pBegin, uint32_t *pStartMap, uint32_t *pEndMap, uint32_t *foundIndex)
{
uint32_t *p = pBegin;
unsigned int bit;
if (runLen == 1)
{ // optimise the simple & hopefully common case
do {
if (*p)
{
bit = msbit32(*p);
*p &= ~(1 << bit);
*foundIndex = ((p - pStartMap) * 32ul) + (31 - bit);
return 1;
}
if (++p > pEndMap)
{
p = pStartMap;
}
} while (p != pBegin);
}
else if (runLen < 32)
{
uint32_t rmask = (1 << runLen) - 1;
do {
uint32_t map = *p;
if (map)
{
// We want to find a run of at least runLen consecutive ones within the word.
// We do this by ANDing each bit with the runLen-1 bits to the right
// if there are any ones remaining then this word must have a suitable run.
// The single bit case is handled above so can assume a minimum run of 2 required
uint32_t w = map & (map << 1); // clobber any 1 bit followed by 0 bit
int todo = runLen - 2; // -2 as clobbered 1 bit and want to leave 1 bit
if (todo > 2)
{
w &= w << 2; // clobber 2 bits
todo -= 2;
if (todo > 4)
{
w &= w << 4; // clobber 4 bits
todo -= 4;
if (todo > 8)
{
w &= w << 8; // clobber 8 bits
todo -= 8;
}
}
}
w &= w << todo; // clobber any not accounted for
if (w) // had run >= runLen within word
{
bit = msbit32(w); // must be start of left most run
*p &= ~(rmask << ((bit + 1) - runLen));
*foundIndex = ((p - pStartMap) * 32ul) + (31 - bit);
return 1;
}
else if ((map & 1) && (p[1] & 0x80000000ul)) // assumes sentinel at end of map
{
// possibly have a run overlapping two words
// calculate number of bits at right of current word
int rbits = msbit32((map + 1) ^ map);
int lmask = rmask << ((32 + rbits) - runLen);
if ((p[1] | lmask) == p[1])
{
p[0] &= ~((1 << rbits) - 1);
p[1] &= ~lmask;
*foundIndex = ((p - pStartMap) * 32ul) + (32 - rbits);
return 1;
}
}
}
if (++p > pEndMap)
{
p = pStartMap;
}
} while (p != pBegin);
}
else // bit run spans multiple words
{
pEndMap -= (runLen - 1)/32; // don't run off end
if (pBegin > pEndMap)
{
pBegin = pStartMap;
}
do {
if ((p[0] & 1) && ((p[0] | p[1]) == 0xfffffffful)) // may be first word of run
{
uint32_t map = *p;
uint32_t *ps = p; // set an anchor
uint32_t bitsNeeded;
int sbits;
if (map == 0xfffffffful)
{
if (runLen == 32) // easy case
{
*ps = 0;
*foundIndex = (ps - pStartMap) * 32ul;
return 1;
}
sbits = 32;
}
else
{
sbits = msbit32((map + 1) ^ map);
}
bitsNeeded = runLen - sbits;
while (p[1] == 0xfffffffful)
{
if (bitsNeeded <= 32)
{
p[1] = ~(0xfffffffful << (32 - bitsNeeded));
while (p != ps)
{
*p = 0;
--p;
}
*ps &= ~rightMask(sbits);
*foundIndex = ((p - pStartMap) * 32ul) + (32 - sbits);
return 1;
}
bitsNeeded -= 32;
if (++p == pBegin)
{
++pBegin; // ensure we terminate
}
}
if ((bitsNeeded < 32) & (p[1] & 0x80000000ul))
{
uint32_t lmask = leftMask(bitsNeeded);
if ((p[1] | lmask) == p[1])
{
p[1] &= ~lmask;
while (p != ps)
{
*p = 0;
--p;
}
*ps &= ~rightMask(sbits);
*foundIndex = ((p - pStartMap) * 32ul) + (32 - sbits);
return 1;
}
}
}
if (++p > pEndMap)
{
p = pStartMap;
}
} while (p != pBegin);
}
return 0;
}
| |
doc_23526777
|
A: You can also try this.
sql = "INSERT into table_name (r_id, r_name) VALUES (1, null)"
records_array = ActiveRecord::Base.connection.execute(sql)
A: How about nil instead of #{nil}
Table.column = nil
A: try this
Model.find_by_sql "SELECT * FROM table where column is NULL"
A: Model.find_by_column(nil)
This will return records whose column value is nil.
A: You can update all records using:
Model.update_all(:field_name => nil)
Then you can find all records that are nil using:
Model.where(:field_name => nil)
| |
doc_23526778
|
I am getting this error:
org.apache.jasper.JasperException: /WEB-INF/pages/calendarEntry.jsp (line: 5, column: 46) According to TLD or attribute directive in tag file, attribute var does not accept any expressions
Here's my jsp file
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
<div class="col-sm-9 col-sm-offset-3 col-md-10 col-md-offset-2 main">
<c:set var="eventDate" value="${calendarEntry.date}"/>
<h1 class="page-header">Calendar Event on <fmt:formatDate value="date" var="${eventDate}" /></h1>
The error is happening at the last line, in the fmt:formatDate tag.
Web App declartion
<web-app version="3.1"
xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_1.xsd">
Maven Depedencies
<dependency>
<groupId>jstl</groupId>
<artifactId>jstl</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>javax.servlet</groupId>
<artifactId>javax.servlet-api</artifactId>
<version>3.1.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>javax.servlet.jsp</groupId>
<artifactId>javax.servlet.jsp-api</artifactId>
<version>2.3.0</version>
<scope>provided</scope>
</dependency>
Deployment Environment - Tomcat 8
A: <fmt:formatDate value="date" var="${eventDate}" />
Switch value and var.
<fmt:formatDate var="date" value="${eventDate}" />
A: Thanks for helping, everyone. I realised it was a stupid mistake on my side.
Instead of
<fmt:formatDate value="date" var="${eventDate}" />
It should be
<fmt:formatDate type="date" value="${calendarEntry.date}" />
A: I had a similar problem, and this answer points to basically trying two different taglib declarations. Perhaps try both of them?
Format Date with fmt:formatDate JSP
Switching to the taglib you have declared in your jsp file solved my problem, ironically.
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
vs
<%@ taglib prefix="fmt" uri="http://java.sun.com/jstl/fmt" %>
A: NetBeans does not create a web.xml file automatically now (previously, in Java EE projects, it was created; it is optional in some cases). I faced the same issue with a JSTL attribute, and after I created the web.xml file the issue was gone — but the corrected one was a new project.
A: <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core" %>
Change the above to:
<%@ taglib prefix="c" uri="http://java.sun.com/jstl/core_rt" %>
A: I had a similar issue. I changed the Tomcat version to apache-tomcat-7.0.39 instead of apache-tomcat-7.0.54 in Server > Runtime Environment.
| |
doc_23526779
|
So I defined a shared LSTM Network like so:
def build_LSTM(layer_1_units=64, layer_2_units=128, dense_units_1=16, dropout=0.2, end_activation='softmax', optimizer='Adam'):
model = tf.keras.models.Sequential([
kl.LSTM(layer_1_units, return_sequences=True, input_shape=(SEQ_LEN, 56), name='Encoder/LSTM_1'),
kl.LSTM(layer_2_units, name='Encoder/LSTM_2'),
kl.BatchNormalization(name='Encoder/BatchNorm'),
kl.Dropout(dropout, name='Encoder/Dropout'),
kl.Dense(dense_units_1, activation='relu', name='Encoder/Dense')
])
return model
I also defined a class for each market, which has the following model as a member:
class MarketModel(tf.keras.Model):
def __init__(self, encoder_model, name):
super(MarketModel, self).__init__()
self.dense1 = kl.Dense(64, activation='relu', name=name + '/Dense_1')
self.out = kl.Dense(2, activation='softmax', name=name + '/Out')
self.encoder = encoder_model
def call(self, inputs):
x = self.encoder(inputs)
x = self.dense1(x)
return self.out(x)
So far so good, the models can all be trained on their respective data.
The LSTM model is built once and passed to each MarketModel as the encoder_model.
My goal is to have the LSTM learn to create a latent space which is then used by the additional Dense layers for prediction.
After checking the histograms, however, I realized that the encoder network weights are not changing at all.
I checked the trainable_variables and all layers are listed, so in theory this should work, right?
I also saved the encoder weights before a training step via
old_enc_weights = tf.identity(market.model.encoder.layers[4].weights[0])
and compared them to the weights after training
print(market.model.encoder.layers[4].weights[0] - old_enc_weights)
and sure enough, the weights did not change at all (the printed out result only contains 0's)
What am I missing? Shouldn't the gradient propagate through the Sequential LSTM network as well? Since I am only adding two layers, the gradient should not vanish, right?
| |
doc_23526780
|
Hello,
On Flutter 1.12.13+hotfix.8, when I build a release APK file I get an error like the one below:
Thanks!
| |
doc_23526781
|
I'm using this script on a landing page separate from Joomla. I checked phpinfo() and this is what it shows:
mail.add_x_header On On
mail.force_extra_parameters no value no value
mail.log no value no value
sendmail_from no value no value
sendmail_path no value no value
I'm wondering if Joomla is interfering with PHP's mail() function or if the function is not properly set up in php.ini.
Thank you!
A: I am sure there are a lot of plugins you can install.
Or,
Try PHPMailer. I have used it in the past. It's very easy to implement.
https://github.com/PHPMailer/PHPMailer
Hope this helps.
A: It means you haven't correctly set up your mail server in Joomla!
To setup the mail server go to Global Configurations > Server > Mail settings
Choose the best option for your mail settings from the dropdown and setup!
If you have a cPanel, go to Mail section and find info about your email, authentication type and more!
Hope this helps!
| |
doc_23526782
|
doesn't exist: SHOW FIELDS FROM gateway_options
A: I've had the same problem. Basically, there's a way to define the order in which extensions are loaded but not when their migrations are run.
config.extensions = [:all, :site]
More info here.
The way I do it is simply by renaming the "db" folder of the extensions whose migrations need to run later. When the others have run, I rename it back to its original name and run the migrations again. Dirty, but it works.
There could probably be a way to make a rake task and automate this.
| |
doc_23526783
|
If I put no location in and search, it returns all results regardless of location, which is fine. If I put in a location that does not exist along with some keywords, it returns all results matching the keywords and seems to ignore the location.
Also, if I leave the keywords empty and search by a location that does exist, it again seems to ignore the location and just returns all results.
So it would seem my logic for filtering on the location is not working.
$keys = explode(" ",$tag);
$search_sql = "SELECT DISTINCT providers.* FROM providers JOIN provider_tags ON providers.id = provider_tags.provider_Id JOIN tags ON provider_tags.tag_id = tags.id WHERE tags.tag_name LIKE '%$tag%' OR providers.provider_name LIKE '%$tag%' OR providers.provider_contact_name LIKE '%$tag%' OR providers.provider_features LIKE '%$tag%' ";
foreach($keys as $k){
$search_sql .= " OR tags.tag_name LIKE '%$k%' OR providers.provider_name LIKE '%$k%' OR providers.provider_contact_name LIKE '%$k%' OR providers.provider_features LIKE '%$k%' ";
}
$search_sql .= " AND (providers.provider_town LIKE '%{$location}%' OR providers.provider_local_area LIKE '%{$location}%' OR providers.provider_postcode LIKE '%{$location}%')";
echo $search_sql;
$gettags = mysqli_query($con, $search_sql) or die(mysqli_error($con));
A: You are adding a bunch of OR conditions in a loop and then a big AND condition for the location. Because AND binds tighter than OR, the AND condition combines only with the last OR from the loop; if any of the earlier OR conditions is true, you get a result regardless of the AND condition.
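This is ordinary boolean operator precedence, not something specific to MySQL. A quick sketch (in Python, just for illustration):

```python
# AND binds tighter than OR, so the right-hand side groups first:
mixed = True or False and False      # parsed as: True or (False and False)
grouped = (True or False) and False  # explicit grouping changes the result

print(mixed)    # True
print(grouped)  # False
```

The unparenthesized query behaves like `mixed` above: one true OR branch short-circuits the whole WHERE clause.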
Edit:
You'll probably get the results you want if you:
*wrap every OR condition in parentheses;
*wrap all the OR conditions together.
Something like :
$search_sql = "SELECT DISTINCT providers.* FROM providers JOIN provider_tags ON providers.id = provider_tags.provider_Id JOIN tags ON provider_tags.tag_id = tags.id WHERE ( (tags.tag_name LIKE '%$tag%' OR providers.provider_name LIKE '%$tag%' OR providers.provider_contact_name LIKE '%$tag%' OR providers.provider_features LIKE '%$tag%') ";
foreach($keys as $k){
$search_sql .= " OR (tags.tag_name LIKE '%$k%' OR providers.provider_name LIKE '%$k%' OR providers.provider_contact_name LIKE '%$k%' OR providers.provider_features LIKE '%$k%') ";
}
$search_sql .= ") AND (providers.provider_town LIKE '%{$location}%' OR providers.provider_local_area LIKE '%{$location}%' OR providers.provider_postcode LIKE '%{$location}%')";
| |
doc_23526784
|
Ex: n=3, myString = "001" or "002" or ... "999" (except number 0 at begin)
P.S.: I am using Ruby 1.8.7.
A: n.times.map { (0..9).to_a.sample }.join
A: If it's for a password or something:
require 'securerandom'
random_number = SecureRandom.random_number(10**n)
formatted_number = random_number.to_s.rjust(n, '0') # pad with leading zeros to n digits
Edit: If it doesn't need to be secure:
random_number = rand(10**n)
formatted_number = random_number.to_s.rjust(n, '0') # pad with leading zeros to n digits
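For readers outside Ruby, the same idea (a random n-digit string where leading zeros are allowed) can be sketched in Python; the helper name is mine:

```python
import random

def random_digits(n):
    """Return a string of n random digits; leading zeros are allowed."""
    return str(random.randrange(10 ** n)).rjust(n, "0")

print(random_digits(3))  # e.g. "007" or "942"
```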
| |
doc_23526785
|
Note that I don't want to reserve
A: The other way is to define a static array as large as possible and write your own malloc/free subroutines. It is simple, especially if there is no multithreading or other kind of shared usage of the allocated blocks. You keep the address of the first empty block, and at the beginning of each block you store the block's size and the address of the next free block.
PS: allocated (reserved) blocks also contain the block size as a prefix. The next-block address is not used for them and can be 0 as a flag for "reserved" memory. A simpler solution is to store only the block size and a free/used flag, but then you have to scan past multiple reserved blocks to reach a free one, which is slower than walking a chain of only free blocks.
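The free-list scheme described above can be modeled in a short sketch (a toy first-fit allocator; the bookkeeping is simplified to a Python list of (offset, length) pairs rather than headers stored in the memory itself, so it only illustrates the idea):

```python
class Arena:
    """Toy first-fit allocator: a fixed-size arena plus a sorted free list."""

    def __init__(self, size):
        self.size = size
        self.free = [(0, size)]  # (offset, length) of each free block

    def malloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:              # first fit
                if length == n:
                    self.free.pop(i)
                else:                    # shrink the free block from the front
                    self.free[i] = (off + n, length - n)
                return off
        return None                      # out of memory

    def free_block(self, off, n):
        self.free.append((off, n))
        self.free.sort()
        merged = []                      # coalesce adjacent free blocks
        for o, l in self.free:
            if merged and merged[-1][0] + merged[-1][1] == o:
                merged[-1][1] += l
            else:
                merged.append([o, l])
        self.free = [tuple(b) for b in merged]
```

Freeing both halves of a split arena merges them back into one block, exactly the behavior the free-block chain is meant to support.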
A: The mmap2 and brk system calls are the easiest ways to do this in assembly. The mmap2 syscall is more difficult to use in assembly, but if you need a large amount of dynamically allocated memory, this is the way to go.
brk is easy to use: it works by moving the "program break" (the boundary of your program's data segment), effectively allocating more memory for your program. This is the way to go if you need a small amount of dynamic memory (e.g., less than a full page).
| |
doc_23526786
|
A: The migrate task has a "target" attribute which lets you specify that.
target - The target version up to which Flyway should consider
migrations. Migrations with a higher version number will be ignored.
The special value current designates the current version of the
schema.
Doc for CommandLine: https://flywaydb.org/documentation/usage/commandline/migrate
Example for maven
mvn -Dflyway.target=5.1 flyway:migrate
A: In case you use flyway command line and you want to migrate only to V3 you should do something like this:
flyway -configFiles=myconf.conf -target=3 migrate
A: If you would like to test you migrations, you can use Java API:
@Test
public void test() {
    final FluentConfiguration fluentConfiguration = Flyway.configure()
            .dataSource(dataSource);
    fluentConfiguration.target("3") // stable version
            .load()
            .migrate();
    // ... your SQL injection queries
    fluentConfiguration
            .target("latest") // remaining versions you need to test
            .load()
            .migrate();
    // ... your SQL select checks
}
| |
doc_23526787
|
Is it possible to handle such an error in a Python script? By handle I mean keep trying to save the file after some time, using the time.sleep function. I have tried the most common approach:
import shutil
try:
shutil.copy2('Track_Changes_Testing.xlsx', destination_on_sharepoint)
except Exception as err:
print(err)
But I only get the Windows error message popping up.
A: You could do this with time.sleep() and a while loop, like you requested. However, be aware that this script will run forever if the file continues to be in use; I am not sure whether that behavior is intended or ideal.
import shutil
import time
import sys

while True:
    print("Attempting copy")  # debugging
    try:
        shutil.copy2('Track_Changes_Testing.xlsx', destination_on_sharepoint)
        break
    except shutil.Error as e:
        # shutil.Error subclasses OSError, so it must be caught first
        print(f"Error while copying: {e}")
        # wait 5 min before trying again
        time.sleep(300)
    except (OSError, IOError):
        print("OSError or IOError")
        # wait 5 min before trying again
        time.sleep(300)
    except Exception:
        print("Unexpected error:", sys.exc_info()[0])
        # wait 5 min before trying again
        time.sleep(300)
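If the run-forever behavior is a concern, a bounded variant that gives up after a fixed number of attempts might look like this (my own sketch, not the poster's code; the helper name and `max_attempts` parameter are hypothetical):

```python
import shutil
import time

def copy_with_retries(src, dst, max_attempts=5, wait_seconds=300):
    """Try to copy src to dst, retrying when the file is locked or missing.

    Returns True on success, False once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            shutil.copy2(src, dst)
            return True
        except (OSError, shutil.Error) as err:
            print(f"Attempt {attempt}/{max_attempts} failed: {err}")
            if attempt < max_attempts:
                time.sleep(wait_seconds)
    return False
```

Calling it with `wait_seconds=300` reproduces the 5-minute pause from the loop above, but the caller gets a clear success/failure result instead of an endless loop.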
| |
doc_23526788
|
>>>import scrapy
>>>dir(scrapy)
['Field', 'FormRequest', 'Item', 'Request', 'Selector', 'Spider', '__all__', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', '_txv', 'exceptions', 'http', 'item', 'link', 'selector', 'signals', 'spiders', 'twisted_version', 'utils', 'version_info']
The documentation says:
A module is a file containing Python definitions and statements.
So I tried to find the scrapy file to see the definition names inside:
$find / -name "scrapy*" -print
/usr/local/lib/python2.7/dist-packages/scrapy
/usr/local/lib/python2.7/dist-packages/scrapy/templates/project/scrapy.cfg
/usr/local/bin/scrapy
/root/tutorial/scrapy.cfg
But those files don't contain any of the names from the dir(scrapy) output.
I'm completely new to Python and OO, and am trying to understand more about the Scrapy framework.
A: After importing a module, just type the module name again to see the actual file corresponding to that module
>>> import scrapy
>>> scrapy
<module 'scrapy' from 'venv/scrapy/local/lib/python2.7/site-packages/scrapy/__init__.pyc'>
From the above output, we can see that it is from the file 'venv/scrapy/local/lib/python2.7/site-packages/scrapy/__init__.pyc' and the corresponding python source file would be 'venv/scrapy/local/lib/python2.7/site-packages/scrapy/__init__.py'
If you open the file 'venv/scrapy/local/lib/python2.7/site-packages/scrapy/__init__.py' in your favourite editor, you'll see that it imports a lot from sub-modules:
# Declare top-level shortcuts
from scrapy.spiders import Spider
from scrapy.http import Request, FormRequest
from scrapy.selector import Selector
from scrapy.item import Item, Field
So if you want to see the definition of 'Item', you have to check the file item.py under venv/scrapy/lib/python2.7/site-packages/scrapy/.
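A more general way to locate any module's source file is the standard-library `inspect` module (shown here on the built-in `json` package so the example is self-contained; the same calls work on scrapy):

```python
import inspect
import json

# Path of the file that defines the module (its __init__.py for a package)
print(inspect.getsourcefile(json))

# The names exported at the top level, minus dunder attributes
public_names = [name for name in dir(json) if not name.startswith("_")]
print(public_names[:5])
```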
| |
doc_23526789
|
On https://www.npmjs.com/package/cordova-sqlite-storage it says that:
The following features are available in litehelpers /
cordova-sqlite-ext: ...
- Pre-populated database (Android/iOS/macOS/Windows)
So what I need is an SQLite database outside the web app and a PhoneGap plugin that can read from this DB. So, is it correct that the plugin above can do that?
Or is there any other way to accomplish that task?
A: cordova-sqlite-storage stores its database in the private storage directory of the app.
This is accessible only to your app and is located on the internal data partition.
For example, if your app package ID is foo.bar.com and your database has name: store.db then it will be located at /data/data/foo.bar.com/databases/store.db.
The location data/data/foo.bar.com/ is referenced as cordova.file.applicationStorageDirectory from cordova-plugin-file.
You can use the cordova-sqlite-evcore-extbuild-free variant of cordova-sqlite-storage:
Custom Android database location (supports external storage directory)
The "external storage directory" is on the "SD card", which is usually the internal memory partition accessed via the mount points /sdcard/ or /storage/emulated/0/.
Since Android 4.4, apps only have write access in the "application sandbox" directory on the SD card e.g. /sdcard/Android/data/foo.bar.com/ (cordova.file.externalApplicationStorageDirectory).
All other areas of the SD card are read-only (e.g. the root of /sdcard/ - cordova.file.externalRootDirectory) so while you could read from a database here, to write to it you'd need to copy it to either the private or external storage directory of the app. You could do this using cordova-plugin-file, for example.
| |
doc_23526790
|
How can I handle this better? Not having it logged so many times would be nice.
fetchData = (url) => {
return new Promise((res, rej) => {
fetch(url)
.then((r) => r.text())
.then((text) => {
res(text);
})
.catch((e) => rej(e));
});
};
getLogs = async () => {
try {
const { user } = this.props.auth;
const regxmlPromise = this.fetchData(
`http://127.0.0.1:60000/onexagent/api/registerclient?name=${user.id}`
);
const clientIdPromise = this.fetchData(
`/api/avaya/${user.id}/getclientid/`
);
const resolves = await Promise.all([regxmlPromise, clientIdPromise]);
const [registration, clientId] = resolves || [];
const nextNotification = await this.fetchData(
`http://127.0.0.1:60000/onexagent/api/nextnotification?clientid=${clientId}`
);
if (registration) {
const regxml = new XMLParser().parseFromString(registration);
if (regxml.attributes.ResponseCode === "0") {
axios.post(`/api/avaya/${user.id}/register/`, regxml);
console.log("ClientId sent!");
}
}
if (nextNotification) {
const xml = new XMLParser().parseFromString(nextNotification);
if (
xml.children[0].name === "VoiceInteractionCreated" ||
xml.children[0].name === "VoiceInteractionMissed" ||
xml.children[0].name === "VoiceInteractionTerminated"
) {
console.log(xml);
console.log(xml.children[0].name);
axios.post(`/api/avaya/${user.id}/logcalls/`, xml);
}
}
} catch (error) {
console.log(error);
}
};
timer = (time) => {
const date = new Date(time);
return `${date.getHours()}:${date.getMinutes()}:${date.getSeconds()}`;
};
componentDidMount() {
this.getLogs();
this.callsInterval = setInterval(this.getLogs, 5000);
}
componentWillUnmount() {
clearInterval(this.callsInterval);
}
To make things clearer, here's a screenshot of the console. I get the GET error with connection refused AND the actual error I catch and log. So even if I catch the error and do nothing with it, I still get that connection-refused message logged in the console no matter what.
One interesting thing is that if I use node-fetch, I get a different behavior and error message. The error in Node is an object and has different props. In React, if I console.log the fetch request error, I only get "TypeError: Failed to fetch".
A: If you are just trying to hide the network error messages, you can click the gear icon in Chrome's console and check "Hide network". Apparently in Chrome, even if you catch network errors they still bubble up to the console unless you select that. I was using Firefox and did not see the problem at first.
Some more info can be found here: Catching net::ERR_NAME_NOT_RESOLVED for fixing bad img links
A: You can do exponential backoff: https://en.wikipedia.org/wiki/Exponential_backoff
Basically just increase the time between each fail attempt. Pretty common practice.
If you just don't want to see it you could just catch it but not log it.
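The backoff idea is language-agnostic; here is a sketch of the delay schedule in Python (the base delay, growth factor, and cap are arbitrary example values):

```python
def backoff_delays(base=1.0, factor=2.0, cap=60.0, attempts=6):
    """Yield the wait time before each retry: base, base*factor, ..., capped."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_delays()))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

Instead of polling every fixed 5 seconds, each failed attempt waits a little longer than the last, so a dead endpoint stops flooding the console.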
| |
doc_23526791
|
<audio src="bg.mp3" autoplay="autoplay" loop="loop"></audio>
Can anyone help me in this.
A: I think you must use a JS (jQuery) document-ready function:
$(document).ready(function()
Play an audio file using jQuery when a button is clicked
You have an answer here (if I follow you right).
A: You're going to need JavaScript for that. Remove the autoplay attribute:
<audio id="my_audio" src="bg.mp3" loop="loop"></audio>
and add a script like this:
window.onload = function() {
document.getElementById("my_audio").play();
}
Or if you use jQuery:
$(document).ready(function() {
$("#my_audio").get(0).play();
});
A: Just copy and paste this code into the body section of your HTML code.
<audio autoplay>
<source src="song.mp3" type="audio/mpeg">
</audio>
and make sure that your audio file is in the same folder.
A: For a repeated tone, place the code below:
<audio src="./audio/preloader.mp3" autoplay="autoplay" loop="loop"></audio>
and to play it a single time, remove the loop attribute. Then it should be like this:
<audio src="./audio/preloader.mp3" autoplay="autoplay"></audio>
| |
doc_23526792
|
My problem is the following:
I have a few prices and dates in number format that I want to plot, for example:
Prices = repmat([10; 5; 3; 4; 11; 12; 5; 2],10,1);
Dates = [726834:726834+8*10-1]';
If I plot them like this:
plot(Dates,Prices)
dateaxis('x',17)
I get x-axis tick values that I don't want, because they look irregular (I guess they follow certain rules, but they don't look nice). How can I best set them to, e.g., always the first of the month, or the first of January and the first of July? I know that I can probably use set(gca, 'xtick', ?? ??); but I lack an overview of how exactly to do this, and the MATLAB help doesn't help me.
A: This code labels the plot with the first day of every month. To get only every January or July, select the corresponding elements of the months array. The strategy is to get the last day of each month using eomdate and add 1. Figure 1 gives you the first day of each month, and Figure 2 gives you only the months you select in the array months_to_display.
Prices = repmat([10; 5; 3; 4; 11; 12; 5; 2],10,1);
Dates = [726834:726834+8*10-1]';
firstDate = strsplit(datestr(Dates(1)-1, 'dd,mm,yyyy'),',');
lastDate = strsplit(datestr(Dates(end), 'dd,mm,yyyy'),',');
months = mod(str2double(firstDate{2}):str2double(lastDate{2})+12*(str2double(lastDate{3})-str2double(firstDate{3})),12);
months(months == 0) = 12;
years = zeros(1,length(months));
currYear = str2double(firstDate{3});
for i = 1:length(months)
years(i) = currYear;
if (months(i) == 12)
currYear = currYear + 1;
end
end
dayCount = eomdate(years,months);
firstDates = dayCount+1;
figure(1)
plot(Dates, Prices)
xticks(firstDates);
xticklabels(datestr(firstDates));
months_to_display = [1 7];
months_to_display = months_to_display - 1;
months_to_display(months_to_display == 0) = 12;
months_to_collect = ismember(months, months_to_display);
months = months(months_to_collect);
years = years(months_to_collect);
dayCount = eomdate(years,months);
firstDates = dayCount+1;
figure(2)
plot(Dates, Prices)
xticks(firstDates);
xticklabels(datestr(firstDates));
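The underlying computation, the first day of every month touched by a date range, is simple enough to sketch in Python for comparison (standard library only; the dates are arbitrary example values):

```python
from datetime import date

def month_firsts(start, end):
    """Return the first day of every month in the span covered by start..end."""
    firsts = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        firsts.append(date(y, m, 1))
        m += 1
        if m == 13:          # roll over into the next year
            y, m = y + 1, 1
    return firsts

print(month_firsts(date(1990, 11, 5), date(1991, 2, 1)))
```

Filtering this list down to January and July entries mirrors the months_to_display selection in the MATLAB code.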
| |
doc_23526793
|
Can anyone help me with this? I am not able to find any IMAP server module in Perl.
A: I don't know how you searched, but searching for metacpan imapserver points you to Net::IMAPServer.
But this module is far from simple, because IMAP itself is a complex protocol. This means writing a server which is both simple and has enough of the functionality needed in practice might not be possible. And if you need to check the performance of your client, you'd better install a real server like Dovecot, because otherwise you might be limited by the performance of your simple test server.
| |
doc_23526794
|
For instance, I might get 08/10/2018, but I want my end result to be 08/01/2018.
Is it possible to do something along the lines of this below (obviously doesn't work but looking for suggestions).
SELECT TO_CHAR(sysdate,'MM/01/YYYY') FROM DUAL;
In this case, sysdate would be replaced with a big list of CASE statements and calculations. The only way I see to do it is:
TO_CHAR(huge_calculations,'MM')||'01'||TO_CHAR(huge_calculations_again,'YYYY')
A: Here are a few expressions that return the first day of the month for a given date:
Simply:
SELECT TRUNC(sysdate, 'MM') FROM DUAL;
Or:
SELECT TRUNC(sysdate) - TO_NUMBER(TO_CHAR(sysdate,'DD')) + 1 FROM dual
Or using LAST_DAY:
SELECT ADD_MONTHS((LAST_DAY(sysdate)+1),-1) FROM DUAL;
Demo on DB Fiddle
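As an aside, the "subtract the day-of-month, add 1" arithmetic behind the second expression can be checked in plain Python (`datetime` standing in for the database; the date is an example value):

```python
from datetime import date, timedelta

today = date(2018, 8, 10)  # example value standing in for sysdate

# Mirrors TRUNC(sysdate) - TO_NUMBER(TO_CHAR(sysdate,'DD')) + 1
first_of_month = today - timedelta(days=today.day - 1)

assert first_of_month == today.replace(day=1)
print(first_of_month)  # 2018-08-01
```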
| |
doc_23526795
|
I tried the following code, but it renders output to the browser only after the process completes.
$v = view('users.account_varification',compact('AccessToken'));
$content = $v->render();
echo $content;
Please help me.
A: I have used the following code to show a loading indicator while the process is running.
PHP's output buffer has a limit of 4 KB, so we can only push a limited amount of content to the browser while the process is going on.
//STARTs - show loading
// Turn off output buffering
ini_set('output_buffering', 'off');
// Turn off PHP output compression
ini_set('zlib.output_compression', false);
//Flush (send) the output buffer and turn off output buffering
//ob_end_flush();
while (@ob_end_flush());
// Implicitly flush the buffer(s)
ini_set('implicit_flush', true);
ob_implicit_flush(true);
ob_start(null, 4096);
$v = view('users.loading');
echo $content = $v->render();
ob_end_flush();
//ENDs - show loading
| |
doc_23526796
|
Now, Material Design generally says you should use a "500" color as the primary color and the "700" shade of that same color as the primary dark color. But since Chrome calculates this value automatically (and the 500/700 difference depends on the color), the result doesn't completely match the Material colors and is difficult to predict without the formula Chrome uses.
So my question is: what formula does Chrome use to calculate the darker shade, so I can predict it?
Related post: Meta themecolor in Chrome
| |
doc_23526797
|
I find that the posts made by the Telegram bot are not retrieved with this command, only messages from users.
My question is: what do I need to do to be able to get the bot's channel posts in my getUpdates?
| |
doc_23526798
|
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
when Name is "", what is the restart policy?
Thanks,
A: Moreover, whatever the current restart policy is, you can update it for an existing container with:
docker update --restart=unless-stopped my-container
| |
doc_23526799
|
Please see the example data structure below. In reality, all data are already normalized.
          A1  A2  A3  B1  B2  C1  C2  D1  D2  D3
protein1  15  30  28   6   7   9  30  45  66  43
protein2   2   4   3  56  54  23  25  12  13   5
protein3   2   4   3  56  54  23  25  12  13   5
protein4   2   4   3  56  54  23  25  12  13   5
A: One way to do this:
First reshape the data into a format that the model can handle. This uses the tidyverse package.
df_long <- df %>%
pivot_longer(cols = 2:ncol(.)) %>%
pivot_wider(names_from = prot, values_from = value) %>%
separate(name, into = c("trt"), sep = "\\d")
Which looks like:
trt protein1 protein2 protein3 protein4
<chr> <dbl> <dbl> <dbl> <dbl>
1 A 15 2 2 2
2 A 30 4 4 4
3 A 28 3 3 3
4 B 6 56 56 56
5 B 7 54 54 54
6 C 9 23 23 23
7 C 30 25 25 25
8 D 45 12 12 12
9 D 66 13 13 13
10 D 43 5 5 5
Then you can easily use whatever model/statistical test you would like to apply. For example, to generate an ANOVA for each column, you could define a helper function and then map over the columns:
fit_aov <- function(col) {
aov(col ~ trt, data = df_long)
}
anovas <- map(df_long[, 2:ncol(df_long)], fit_aov)
summary(anovas$protein2)
Df Sum Sq Mean Sq F value Pr(>F)
trt 3 3648 1216.0 165.8 3.69e-06 ***
Residuals 6 44 7.3
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
|