doc_23525200
|
try, someFunction('path/to/files', 'algorithm'), exit(); catch ME, warning(ME.message), exit(); end
For this, I generate the following command to handle SSH, launch MATLAB, and run the above command:
C:\plink.exe user@server -pw ****** "matlab -nodesktop -nosplash -noawt -r 'try, someFunction('path/to/files', 'algorithm'), exit(); catch ME, warning(ME.message), exit(); end'
Running the above command, I get the following error in MATLAB:
Warning: Undefined function or variable 'path/to/files'.
As it turns out, in MATLAB, the command arrives constructed as follows:
someFunction(path/to/files, algorithm)
that is, without the single quotes: thank you, plink :( .
Can you please help me generate the correct command? Or, if there is already a question about a similar problem, I would be thankful if you could direct me to it.
Thanks,
A: It's not Plink's fault. It's how the Windows command-line interpreter works.
Adding cmd and batch-file tags, so that you may get answers from experts on the field.
Anyway, I can see two solutions:
*
*Put your command in a file like:
matlab -nodesktop -nosplash -noawt -r "try, someFunction('path/to/files', 'algorithm'), exit(); catch ME, warning(ME.message), exit(); end"
And use the file (command.txt) with Plink like:
C:\plink.exe user@server -pw ****** -m command.txt
*If you do not want to use a separate file for the command, this should work too:
echo matlab -nodesktop -nosplash -noawt -r "try, someFunction('path/to/files', 'algorithm'), exit(); catch ME, warning(ME.message), exit(); end" | C:\plink.exe user@server -pw ****** -T
(Note the -T switch).
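As a side note on why the nested quoting breaks (not from the original answer, just an illustration): Python's standard library implements the Windows argument-quoting rules, and you can see that the safe layering is double quotes on the outside with MATLAB's single quotes untouched inside:

```python
import subprocess

# subprocess.list2cmdline applies Windows (MS C runtime) quoting rules:
# an argument containing spaces is wrapped in double quotes, while the
# single quotes inside it survive untouched.
args = ["matlab", "-nodesktop", "-r",
        "try, someFunction('path/to/files', 'algorithm'), exit(); end"]
print(subprocess.list2cmdline(args))
```

This mirrors what the `-m command.txt` approach achieves: the double-quoted `-r` argument reaches MATLAB intact.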
| |
doc_23525201
|
When the third combo box is changed, I am using a nested If statement to determine what row this combination lies in (so I can populate textboxes on the form). However, the first If Statement is failing to trigger (i.e. return a 'true' value). There is an acceptable value in the cell, so it should progress to the next If statement, but it just jumps to the end of my While statement.
Private Sub cmb_State_Change()
    Dim Project, licence, state As String
    Dim selectedrow As Integer
    Dim LastRow As Integer
    Dim i, j As Integer
    selededrow = 0
    Project = cmb_Project.Value
    licence = cmb_Licence.Value
    state = cmb_State.Value
    i = 1
    j = 3
    While selectedrow = 0
        If Worksheets("Entitlements").Cells(i, j) = Project Then
            i = i + 6
            If Worksheets("Entitlements").Cells(i, j) = licence Then
                i = i - 1
                If Worksheets("Entitlements").Cells(i, j) = state Then
                    selectedrow = j
                End If
            End If
        Else
            j = j + 1
            i = i - 5
        End If
    Wend
End Sub
Can anybody see why it would be behaving like this?
A: Cells takes its arguments as rows then columns so you need to reverse i and j in your code. When you do Range("C4") it is columns then rows i.e. column C, row 4 - but Cells is the other way around.
So, currently you have
If Worksheets("Entitlements").Cells(i, j) = Project Then
    i = i + 6
    If Worksheets("Entitlements").Cells(i, j) = licence Then
        i = i - 1
        If Worksheets("Entitlements").Cells(i, j) = state Then
            selectedrow = j
Which is making your second lookup 6 rows down - not 6 columns across.
Rewrite those as:
If Worksheets("Entitlements").Cells(j, i) = Project Then
    i = i + 6
    If Worksheets("Entitlements").Cells(j, i) = licence Then
        i = i - 1
        If Worksheets("Entitlements").Cells(j, i) = state Then
            selectedrow = j
Another option
You can just rewrite the code block as this:
r = 3
While selectedrow = 0
    If Worksheets("Entitlements").Cells(r, 1) = Project And _
       Worksheets("Entitlements").Cells(r, 7) = licence And _
       Worksheets("Entitlements").Cells(r, 6) = state Then
        selectedrow = r
    Else
        r = r + 1
    End If
Wend
An even better option
Using the While..Wend loop means the code will run to the last row (over a million rows) in the sheet if there is no match. You can use a standard bit of code to find the last row in your data:
Set ws = Worksheets("Entitlements")
LastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
Then use a For..Next loop over that range. For example:
Option Explicit

Private Sub cmb_State_Change()
    Dim Project As String, licence As String, state As String
    Dim selectedrow As Integer
    Dim LastRow As Integer
    Dim r As Integer
    Dim ws As Worksheet
    selectedrow = 0
    Project = "hello" 'cmb_Project.Value
    licence = "world" 'cmb_Licence.Value
    state = "stuff" 'cmb_State.Value
    Set ws = Worksheets("Entitlements")
    LastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
    For r = 3 To LastRow
        If ws.Cells(r, 1) = Project And _
           ws.Cells(r, 7) = licence And _
           ws.Cells(r, 6) = state Then
            selectedrow = r
            Exit For
        End If
    Next r
End Sub
Note the use of Option Explicit to catch any typos in your code. In your original question you had Dim selectedrow As Integer and selededrow = 0, which would have thrown a compile-time error if you were using `Option Explicit`.
| |
doc_23525202
|
Any help would be appreciated.
A: You can just go to http://sourceforge.net/projects/itext/files/latest/download
The file you download will be a zip file; you need to unzip it and grab itextpdf-5.5.6.jar. Its size is around 2087 KB. You may also need itext-pdfa-5.5.6.jar or itext-xtra-5.5.6.jar, but itextpdf-5.5.6.jar is the core one. I don't think you need to worry about the other jars.
| |
doc_23525203
|
A: You can use the global 'allowInterrupts' property for this:
set the allowInterrupts to false
... task you don't want interrupted ...
set the allowInterrupts to true
While this property is false, Ctrl-Period will have no effect.
| |
doc_23525204
|
import pytest
import tornado
from tornado.testing import AsyncTestCase
from tornado.httpclient import AsyncHTTPClient
from tornado.web import Application, RequestHandler
import urllib.parse
class TestRESTAuthHandler(AsyncTestCase):
    @tornado.testing.gen_test
    def test_http_fetch_login(self):
        data = urllib.parse.urlencode(dict(username='admin', password='123456'))
        client = AsyncHTTPClient(self.io_loop)
        response = yield client.fetch("http://localhost:8080//#/login", method="POST", body=data)
        # Test contents of response
        self.assertIn("Automation web console", response.body)
I received this error when running the test:
raise TimeoutError('Operation timed out after %s seconds' % timeout)
tornado.ioloop.TimeoutError: Operation timed out after 5 seconds
A: Set ASYNC_TEST_TIMEOUT environment variable.
Runs the IOLoop until stop is called or timeout has passed.
In the event of a timeout, an exception will be thrown. The default timeout is 5 seconds; it may be overridden with a timeout keyword argument or globally with the ASYNC_TEST_TIMEOUT environment variable. -- from http://www.tornadoweb.org/en/stable/testing.html#tornado.testing.AsyncTestCase.wait
A: You need to use AsyncHTTPTestCase, not just AsyncTestCase. A nice example is in Tornado's self-tests:
https://github.com/tornadoweb/tornado/blob/d7d9c467cda38f4c9352172ba7411edc29a85196/tornado/test/httpclient_test.py#L130-L130
You need to implement get_app to return an application with the RequestHandler you've written. Then, do something like:
class TestRESTAuthHandler(AsyncHTTPTestCase):
    def get_app(self):
        # implement this
        pass

    def test_http_fetch_login(self):
        data = urllib.parse.urlencode(dict(username='admin', password='123456'))
        response = self.fetch("http://localhost:8080//#/login", method="POST", body=data)
        # Test contents of response
        self.assertIn("Automation web console", response.body)
AsyncHTTPTestCase provides convenient features so you don't need to write coroutines with "gen.coroutine" and "yield".
Also, I notice you're fetching a url with a fragment after "#", please note that in real life web browsers do not include the fragment when they send the URL to the server. So your server would see the URL only as "//", not "//#/login".
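The fragment behaviour is easy to confirm with the standard library (a quick illustration, not part of the original test case):

```python
import urllib.parse

# A browser keeps everything after '#' on the client side; only the
# path (and query) is sent to the server in the request line.
parts = urllib.parse.urlsplit("http://localhost:8080//#/login")
print(parts.path)      # what the server would see: '//'
print(parts.fragment)  # never sent over the wire: '/login'
```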
| |
doc_23525205
|
class MyMigrationName < ActiveRecord::Migration[5.2]
  def up
    sql = <<~SQL
      ...
      create materialized view if not exists foo_1 as ... ;
      create materialized view if not exists foo_2 as ... ;
      ...
    SQL
    execute sql
  end

  def down
    ...
  end
end
I am considering switching from this current approach to a different one, where the SQL code is stored inside separate SQL files, for example in db/migrate/concerns/create_foo_matviews.sql. The code is read from the file and executed from inside the migrations, like so:
class MyMigrationName < ActiveRecord::Migration[5.2]
  def up
    execute File.read(File.expand_path('./concerns/create_foo_matviews.sql', __FILE__))
  end

  def down
    ...
  end
end
The pros of this approach are:
*
*It is easier to see the differences between the old and the new SQL code using git diff (especially important given that materialized views' definitions are big, but the actual changes in migrations are relatively small).
*The SQL file adds syntax highlighting to the SQL code.
*There is less copy/pasted code if I only change the relevant parts in the SQL file.
Are there any problems associated with this proposed approach? If yes, what would be an alternative solution to maximize maintainability?
See also
*
*Is it possible to use an external SQL file in a Rails migration?
*Running sql file using rails migration file
*Execute SQL-Statement from File with ActiveRecord
A: I'd leave it in the Migration.
Mainly because the migration then contains everything that actually makes up the DB change.
You would need two external SQL files (up and down) that a reader has to search for and find before understanding what the migration does.
Depending on the editor you are using, you will get (limited) syntax highlighting in the heredoc.
The migrations that execute custom SQL would all look the same; just the name of the external file would be different.
What problem are you trying to solve? Just the "bulky" strings? I don't think that is a problem worth spending a lot of time on (to be honest, once the migration has run, you don't go back to it anyhow). Just do the simplest thing: SQL in a heredoc string.
There are also gems that allow you to create (materialized) views with normal migration code (by adding support for create_view or similar), but I'd not add an additional dependency for something this simple.
Also consider changing from schema.rb to structure.sql, if not yet done.
A: Sounds like you want to create your own helpers to create materialized views, something like add_index or add_column.
You could make a module named something like MaterializedMigrations in your lib directory. Then you can require it in an initializer and, finally, include it in your migration code, like this:
class MyMigrationName < ActiveRecord::Migration[5.2]
  include MaterializedMigrations

  def up
    create_materialized_view("name_of_view")
  end
end
The helper API is only a suggestion; you could design a better API for your use cases.
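As a sketch of what such a helper might look like (the module name, method name, and file layout are only illustrative, and `execute` is assumed to be provided by `ActiveRecord::Migration`):

```ruby
# Illustrative helper: reads db/views/<name>.sql and runs it through the
# including migration's `execute`. The default path is an assumption; a
# real app would probably resolve it from Rails.root instead.
module MaterializedMigrations
  def create_materialized_view(name, views_dir: File.join(Dir.pwd, "db", "views"))
    sql = File.read(File.join(views_dir, "#{name}.sql"))
    execute(sql)
  end
end
```

This keeps the migration file short while leaving the SQL in a diff-friendly, syntax-highlighted file, which was the goal stated in the question.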
| |
doc_23525206
|
import React, { Component } from 'react';
import { View } from 'react-native';
import Splash from './Splash';
import createHistory from 'history/createMemoryHistory';

const history = createHistory();

class SplashContainer extends Component {
  goToLogin = () => {
    history.push('/Login');
  }

  goToRegister = () => {
    history.push('/SignUp');
  }

  render () {
    console.log(history)
    return (
      <Splash
        goToLogin={this.goToLogin}
        goToRegister={this.goToRegister}
      />
    );
  }
}

export default SplashContainer;
import React from 'react';
import { StyleSheet, View, Text } from 'react-native';
import { Button } from 'native-base';
import { Link } from 'react-router-native';
import PropTypes from 'prop-types';

const Splash = (props) => {
  console.log(props)
  return (
    <View style={styles.container}>
      <Button light block onPress={props.goToLogin}>
        <Text>Login</Text>
      </Button>
      <Button dark block bordered style={{marginTop: 10}} onPress={props.goToRegister}>
        <Text>Register</Text>
      </Button>
    </View>
  );
}

Splash.propTypes = {
  goToLogin: PropTypes.func.isRequired,
  goToRegister: PropTypes.func.isRequired
}

export default Splash;
A: I don't know your Router config, but your methods should be:
goToLogin = () => {
  const { history } = this.props
  history.push('/Login');
}
history will be passed down via the props of components inside the Router's stack.
| |
doc_23525207
|
Success
How to include two namespace in dwl 2.0?
A: You mean just use two XML namespaces in an XML output?
%dw 2.0
output application/xml
ns orders http://www.acme.com/shemas/Orders
ns stores http://www.acme.com/shemas/Stores
---
{
    root: {
        orders#orders: {
            stores#shipNodeId: "SF01",
            stores#shipNodeId @(shipsVia: "LA01"): "NY03"
        }
    }
}
Output:
<?xml version='1.0' encoding='UTF-8'?>
<root>
  <orders:orders xmlns:orders="http://www.acme.com/shemas/Orders">
    <stores:shipNodeId xmlns:stores="http://www.acme.com/shemas/Stores">SF01</stores:shipNodeId>
    <stores:shipNodeId xmlns:stores="http://www.acme.com/shemas/Stores" shipsVia="LA01">NY03</stores:shipNodeId>
  </orders:orders>
</root>
Taken from the cookbook in the docs: https://docs.mulesoft.com/mule-runtime/4.3/dataweave-cookbook-include-xml-namespaces
| |
doc_23525208
|
Any idea?
| |
doc_23525209
|
I have tried doing so with this code, but the legends don't exactly look the same.
ggplot(mpg, aes(displ, hwy, colour = class)) +
  geom_point() +
  geom_smooth(method = "lm", se = F) +
  theme(legend.position = "bottom", legend.box = "horizontal") +
  scale_color_discrete(NULL) +
  guides(fill = guide_legend(ncol = 1, nrow = 1, byrow = TRUE))
A: You are setting nrow and ncol to be one, and you are also setting the wrong guide - you should adjust the colour legend, not fill.
library(ggplot2)
ggplot(mpg, aes(displ, hwy, colour = class)) +
  geom_point() +
  geom_smooth(method = "lm", se = F) +
  theme(legend.position = "bottom", legend.box = "horizontal") +
  scale_color_discrete(NULL) +
  guides(color = guide_legend(nrow = 1))
#> `geom_smooth()` using formula = 'y ~ x'
| |
doc_23525210
|
A: You can use Java via the JMX APIs to periodically poll for queue stats (see this guide).
For the notification approach, you'd need to use advisory messages to monitor messages delivered to a queue (see this guide).
A: For a JMX-free approach, you can also use the XML feed served by the activemq console page. The XML feed is hosted at http://ip:port/admin/xml/queues.jsp
This will have tags similar to this for each queue:
<queue name="your queue">
  <stats size="0" consumerCount="1" enqueueCount="0" dequeueCount="0"/>
  ....
</queue>
Just parse this XML in your code and you are good to go.
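Parsing that feed needs nothing beyond the standard library; here is a minimal sketch (the sample XML and the alert threshold are illustrative, not real broker output):

```python
import xml.etree.ElementTree as ET

# A sample in the shape served at http://ip:port/admin/xml/queues.jsp
feed = """
<queues>
  <queue name="orders">
    <stats size="12" consumerCount="1" enqueueCount="40" dequeueCount="28"/>
  </queue>
</queues>
"""

root = ET.fromstring(feed)
for queue in root.findall("queue"):
    stats = queue.find("stats")
    size = int(stats.get("size"))
    # Alert (here: just print) when a queue backs up past a threshold.
    if size > 10:
        print("queue {} has {} pending messages".format(queue.get("name"), size))
```

In a real monitor you would fetch the page with an HTTP client and send an email or page instead of printing.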
A: Yes, it is possible in Java.
Starting from version 5.8 of ActiveMQ, the Jolokia agent comes embedded. So it is possible to get all the stats that JMX can pull using an HTTP request, which returns the stats as JSON; you can then check the current values and raise an email alert over SMTP if they go beyond the threshold you have decided.
Let's say you want to pull broker stats using Jolokia: hit the URL below in your browser and enter the AMQ console username and password (admin by default):
http://servername.com:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost
Or, if you don't want to go through all this trouble, you can use a ready-made Python script which I have created to monitor AMQ heap, queue params and broker availability. You can take a look; it may help you in developing your custom script or program:
AMQMonitor and Alerting script
| |
doc_23525211
|
Here is the action I'm trying to do and the method I'm trying to call.
butCalcFact.setOnAction(new EventHandler<ActionEvent>() {
    @Override
    public void handle(ActionEvent event) {
        String text = tfInput.getText();
        tfResult.setText(Long.toString(tfInput.factorial()));
    }
});
/** Return the factorial for the specified number */
public static long factorial(int n) {
    if (n == 0) // Base case
        return 1;
    else
        return n * factorial(n - 1); // Recursive call
}
A: You are not passing any integer into the factorial method.
butCalcFact.setOnMouseClicked(event -> {
    tfResult.setText(factorial(Integer.parseInt(tfInput.getText())) + "");
});
| |
doc_23525212
|
std::__1::__tree<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >::__insert_unique(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 156, stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
where the fix was to upgrade a library not directly involved in the crash, to correct for a C++ ABI issue. I'm just surprised an ABI issue could have an effect so far from the cause, and wondering if the standard library itself is having some state corrupted?
A: C++ doesn't offer a protected environment. If any part of the code does something forbidden (e.g. deleting an object twice, writing off the limits of an array ...) then any other place of the code can do anything, either immediately or after a long time.
Actually often the problem is that errors just don't apparently cause any harm as the program (apparently) simply works.
An error about a violation of the ABI is very low level (for example the machine code could be required to preserve a certain register and it doesn't) and there's nothing you can be surprised about. Welcome to "undefined behavior" hell.
Specifically, std::set and std::map in certain implementations are known to depend on sentinel nodes, so overwriting a global variable can affect even maps and sets created later.
Also almost everything in C++ depends on dynamically allocated memory and a program violating the ABI can corrupt the data structures related to that and the effects can manifest millions of executed instructions later (when for example that corrupted free block gets reallocated for something else).
| |
doc_23525213
|
I need the text message/call to come from the program but when it appears on the phone to act like it's a real sms message/call. Like the emulator Control does in Eclipse.
I have tried TelephonyManager over and over, and the closest I have come is trying to fake it with a NotificationManager.
OK, so let me explain this better: I need to make an activity where my program sends a real text message and phone call to the phone, as if someone else were sending it. Any ideas?
| |
doc_23525214
|
It is my belief that this isn't possible (as the users don't know HTML to add formatting, even if I could preserve it when writing to the DB) unless you use one of those rich text element controls (http://jqueryte.com/demos) with the formatting options across the top. It would look ludicrous having one of those for each of 10 one-line text fields, wouldn't it?
Or can you have the formatting bar at the top of the page and apply it to multiple input fields? Granted, I have never seen that on any website.
Any help is much appreciated
A: You can use something simple like richtextarea and tweak it so the formatting controls are only visible while the textbox has focus.
Another option is to do what you stated: use a full-blown rich text editor, but look for one that can be restricted and preferably hides the controls while not focused; one example is TinyMCE. It should of course load the whole editor code only once, and be able to apply to multiple fields on a page.
Another lightweight one is CLEditor, but it seems to be fading away. It was capable of using the standard keystrokes Ctrl-B for bold and Ctrl-I for italic without having the toolbar visible (however, it may not work in all browsers any more, since it has not been updated in ages).
| |
doc_23525215
|
Till now I have tried all of these:
1.
<video id="v-control" width="100%" autoplay="" loop="" tabindex="0">
  <source type="video/mp4" src="assets/img/MyVideo.mp4" alt=" MyVideo" />
  <source type="video/webm" src="assets/img/MyVideo.webm" alt=" MyVideo" />
</video>
2.
Jquery plugin for background video
<div data-vide-bg="MyVideo">
  <video id="v-control" width="100%" autoplay="" loop="" tabindex="0">
    <source type="video/mp4" src="assets/img/MyVideo.mp4" alt=" MyVideo" />
    <source type="video/webm" src="assets/img/MyVideo.webm" alt=" MyVideo" />
  </video>
</div>
and included
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="src/jquery.vide.js"></script>
in index.html
Still I am facing the same issue. Can anyone help? Where am I going wrong?
A: Try using onloadedmetadata="this.muted=true" within the video tag as below. Hope this helps:
<video onloadedmetadata="this.muted=true" autoplay loop preload="auto">
  <source src="Video_url" type="video/mp4">
</video>
A: This works for me on Ionic 5+ / Angular 12+.
.html
<video autoplay muted loop playsinline preload="auto" onloadedmetadata="this.muted = true">
  <source [src]="video" type="video/mp4" />
</video>
.ts
video = 'assets/videos/background-video.mp4';
Important note: keep the video small (around 1 MB); at around 5 MB it won't work on the web.
| |
doc_23525216
|
import numpy as np
import os, sys
import argparse
from PIL import Image
from freeze_graph import freeze_graph
import tensorflow as tf
import time
from net import *
sys.path.append(os.path.join(os.path.dirname(sys.path[0]), "./"))
from custom_vgg16 import *
# gram matrix per layer
def gram_matrix(x):
assert isinstance(x, tf.Tensor)
b, h, w, ch = x.get_shape().as_list()
features = tf.reshape(x, [b, h*w, ch])
# gram = tf.batch_matmul(features, features, adj_x=True)/tf.constant(ch*w*h, tf.float32)
gram = tf.matmul(features, features, adjoint_a=True)/tf.constant(ch*w*h, tf.float32)
return gram
# total variation denoising
def total_variation_regularization(x, beta=1):
assert isinstance(x, tf.Tensor)
wh = tf.constant([[[[ 1], [ 1], [ 1]]], [[[-1], [-1], [-1]]]], tf.float32)
ww = tf.constant([[[[ 1], [ 1], [ 1]], [[-1], [-1], [-1]]]], tf.float32)
tvh = lambda x: conv2d(x, wh, p='SAME')
tvw = lambda x: conv2d(x, ww, p='SAME')
dh = tvh(x)
dw = tvw(x)
tv = (tf.add(tf.reduce_sum(dh**2, [1, 2, 3]), tf.reduce_sum(dw**2, [1, 2, 3]))) ** (beta / 2.)
return tv
parser = argparse.ArgumentParser(description='Real-time style transfer')
parser.add_argument('--gpu', '-g', default=-1, type=int,
help='GPU ID (negative value indicates CPU)')
parser.add_argument('--dataset', '-d', default='dataset', type=str,
help='dataset directory path (according to the paper, use MSCOCO 80k images)')
parser.add_argument('--style_image', '-s', type=str, required=True,
help='style image path')
parser.add_argument('--batchsize', '-b', type=int, default=1,
help='batch size (default value is 1)')
parser.add_argument('--ckpt', '-c', default=None, type=int,
help='the global step of checkpoint file desired to restore.')
parser.add_argument('--lambda_tv', '-l_tv', default=10e-4, type=float,
help='weight of total variation regularization according to the paper to be set between 10e-4 and 10e-6.')
parser.add_argument('--lambda_feat', '-l_feat', default=1e0, type=float)
parser.add_argument('--lambda_style', '-l_style', default=1e1, type=float)
parser.add_argument('--epoch', '-e', default=2, type=int)
parser.add_argument('--lr', '-l', default=1e-3, type=float)
parser.add_argument('--pb', '-pb', default=True, type=bool, help='save a pb format as well.')
args = parser.parse_args()
data_dict = loadWeightsData('./vgg16.npy')
batchsize = args.batchsize
gpu = args.gpu
dataset = args.dataset
epochs = args.epoch
learning_rate = args.lr
ckpt = args.ckpt
lambda_tv = args.lambda_tv
lambda_f = args.lambda_feat
lambda_s = args.lambda_style
style_image = args.style_image
save_pb = args.pb
gpu = args.gpu
style_name, _ = os.path.splitext(style_image.split(os.sep)[-1])
fpath = os.listdir(args.dataset)
imagepaths = []
for fn in fpath:
    base, ext = os.path.splitext(fn)
    if ext == '.jpg' or ext == '.png':
        imagepath = os.path.join(dataset, fn)
        imagepaths.append(imagepath)
data_len = len(imagepaths)
iterations = int(data_len / batchsize)
print ('Number of traning images: {}'.format(data_len))
print ('{} epochs, {} iterations per epoch'.format(epochs, iterations))
style_np = np.asarray(Image.open(style_image).convert('RGB').resize((224, 224)), dtype=np.float32)
styles_np = [style_np for x in range(batchsize)]
if gpu > -1:
    device = '/gpu:{}'.format(gpu)
else:
    device = '/cpu:0'
with tf.device(device):
    inputs = tf.placeholder(tf.float32, shape=[batchsize, 224, 224, 3], name='input')
    net = FastStyleNet()
    saver = tf.train.Saver(restore_sequentially=True)
    saver_def = saver.as_saver_def()
    target = tf.placeholder(tf.float32, shape=[batchsize, 224, 224, 3])
    outputs = net(inputs)
    # style target feature
    # compute gram matrix of style target
    vgg_s = custom_Vgg16(target, data_dict=data_dict)
    feature_ = [vgg_s.conv1_2, vgg_s.conv2_2, vgg_s.conv3_3, vgg_s.conv4_3, vgg_s.conv5_3]
    gram_ = [gram_matrix(l) for l in feature_]
    # content target feature
    vgg_c = custom_Vgg16(inputs, data_dict=data_dict)
    feature_ = [vgg_c.conv1_2, vgg_c.conv2_2, vgg_c.conv3_3, vgg_c.conv4_3, vgg_c.conv5_3]
    # feature after transformation
    vgg = custom_Vgg16(outputs, data_dict=data_dict)
    feature = [vgg.conv1_2, vgg.conv2_2, vgg.conv3_3, vgg.conv4_3, vgg.conv5_3]
    # compute feature loss
    loss_f = tf.zeros(batchsize, tf.float32)
    for f, f_ in zip(feature, feature_):
        loss_f += lambda_f * tf.reduce_mean(tf.subtract(f, f_) ** 2, [1, 2, 3])
    # compute style loss
    gram = [gram_matrix(l) for l in feature]
    loss_s = tf.zeros(batchsize, tf.float32)
    for g, g_ in zip(gram, gram_):
        loss_s += lambda_s * tf.reduce_mean(tf.subtract(g, g_) ** 2, [1, 2])
    # total variation denoising
    loss_tv = lambda_tv * total_variation_regularization(outputs)
    # total loss
    loss = loss_s + loss_f + loss_tv
    # optimizer
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    ckpt_directory = './ckpts/{}/'.format(style_name)
    if not os.path.exists(ckpt_directory):
        os.makedirs(ckpt_directory)
    # training
    tf.global_variables_initializer().run()
    if ckpt:
        if ckpt < 0:
            checkpoint = tf.train.get_checkpoint_state(ckpt_directory)
            input_checkpoint = checkpoint.model_checkpoint_path
        else:
            input_checkpoint = ckpt_directory + style_name + '-{}'.format(ckpt)
        saver.restore(sess, input_checkpoint)
        print ('Checkpoint {} restored.'.format(ckpt))
    for epoch in range(1, epochs + 1):
        imgs = np.zeros((batchsize, 224, 224, 3), dtype=np.float32)
        for i in range(iterations):
            for j in range(batchsize):
                p = imagepaths[i * batchsize + j]
                imgs[j] = np.asarray(Image.open(p).convert('RGB').resize((224, 224)), np.float32)
            feed_dict = {inputs: imgs, target: styles_np}
            loss_, _ = sess.run([loss, train_step], feed_dict=feed_dict)
            print('[epoch {}/{}] batch {}/{}... loss: {}'.format(epoch, epochs, i + 1, iterations, loss_[0]))
        saver.save(sess, ckpt_directory + style_name, global_step=epoch)
    if save_pb:
        if not os.path.exists('./pbs'):
            os.makedirs('./pbs')
        freeze_graph(ckpt_directory, './pbs/{}.pb'.format(style_name), 'output')
And when I run it, it trains on the images (I'm only using one image at the moment, just to get the whole process working) and prints this at the command line:
D:\myName\tensorflow-fast-neuralstyle>python train.py -s picasso.jpg -d trainTest -g 0
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Number of traning images: 1
2 epochs, 1 iterations per epoch
2018-05-16 18:47:33.268196: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-05-16 18:47:33.582973: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.63GiB
2018-05-16 18:47:33.590004: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-05-16 18:47:34.243696: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-16 18:47:34.247206: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-05-16 18:47:34.249841: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-05-16 18:47:34.252015: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6405 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
[epoch 1/2] batch 1/1... loss: 32216618.0
[epoch 2/2] batch 1/1... loss: 27523674.0
2018-05-16 18:47:55.451428: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-05-16 18:47:55.456462: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-05-16 18:47:55.462478: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929] 0
2018-05-16 18:47:55.465806: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0: N
2018-05-16 18:47:55.468555: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6405 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
Which is all fine, until I get this error:
InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'save/SaveV2': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Registered kernels:
device='CPU'
It seems like the script can find my GPU to run on, but something is stopping it from completing, which I don't understand. All the other posts about this error say to set the 'allow_soft_placement' argument to True, but in this script it already is.
Any help would be massively appreciated.
Thanks!
P.S. The trained model will be used by this generate.py file:
import numpy as np
import argparse
import tensorflow as tf
import os
from PIL import Image
parser = argparse.ArgumentParser(description='Real-time style transfer image generator')
parser.add_argument('--input', '-i', type=str, help='content image')
parser.add_argument('--gpu', '-g', default=-1, type=int,
help='GPU ID (negative value indicates CPU)')
parser.add_argument('--style', '-s', default=None, type=str, help='style model name')
parser.add_argument('--ckpt', '-c', default=-1, type=int, help='checkpoint to be loaded')
parser.add_argument('--out', '-o', default='stylized_image.jpg', type=str, help='stylized image\'s name')
parser.add_argument('--pb', '-pb', default=False, type=bool, help='load with pb')
args = parser.parse_args()
if not os.path.exists('./images/output/'):
    os.makedirs('./images/output/')
outfile_path = './images/output/' + args.out
content_image_path = args.input
style_name = args.style
ckpt = args.ckpt
load_with_pb = args.pb
gpu = args.gpu
original_image = Image.open(content_image_path).convert('RGB')
img = np.asarray(original_image.resize((224, 224)), dtype=np.float32)
shaped_input = img.reshape((1,) + img.shape)
if gpu > -1:
    device = '/gpu:{}'.format(gpu)
else:
    device = '/cpu:0'
with tf.device(device):
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        if load_with_pb:
            from tensorflow.core.framework import graph_pb2
            graph_def = graph_pb2.GraphDef()
            with open('./pbs/{}.pb'.format(style_name), "rb") as f:
                graph_def.ParseFromString(f.read())
            input_image, output = tf.import_graph_def(graph_def, return_elements=['input:0', 'output:0'])
        else:
            if ckpt < 0:
                checkpoint = tf.train.get_checkpoint_state('./ckpts/{}/'.format(style_name))
                input_checkpoint = checkpoint.model_checkpoint_path
            else:
                input_checkpoint = './ckpts/{}/{}-{}'.format(style_name, style_name, ckpt)
            saver = tf.train.import_meta_graph(input_checkpoint + '.meta')
            saver.restore(sess, input_checkpoint)
            graph = tf.get_default_graph()
            input_image = graph.get_tensor_by_name('input:0')
            output = graph.get_tensor_by_name('output:0')
        out = sess.run(output, feed_dict={input_image: shaped_input})
        out = out.reshape((out.shape[1:]))
        im = Image.fromarray(np.uint8(out))
        im = im.resize(original_image.size, resample=Image.LANCZOS)
        im.save(outfile_path)
A: I/O nodes such as tf.train.Saver cannot be placed on the GPU. Create them outside your tf.device context after creating your net.
| |
doc_23525217
|
I have a USB CDC device which I need to be notified of when connected/disconnected on Windows. My approach is to use RegisterDeviceNotification and an "invisible" Window to receive WM_DEVICECHANGE notifications. This part is working so far.
Now as far as I found out I need to get the list of USB devices that is plugged, iterate over it and filter out the devices with my PID/VID? I assume that I am then able to get more informations about the device including the COM port?
Is using SetupDi calls in setupapi.h the only way to achieve my goal? Do I need the WDK / DDK for this?
As soon as that is working I open-source it on http://github.com/vinzenzweber/USBEventHandler. The Mac version is available already!
A: After digging through tons of documentation on MSDN and some debugging, I found the missing link: SetupDi calls in setupapi.h. More info, as well as source code for Mac and Windows, can be found in my USBEventHandler project on github.com.
| |
doc_23525218
|
----- Updates -----
Hi,
I had used DATALENGTH earlier, but strangely it's returning wrong values. Is there any other specific issue that I should check?
A: You want DATALENGTH();
SELECT DATALENGTH(ntextcol) FROM T
A: You can use DATALENGTH itself.
For the ntext datatype, the storage size in bytes is two times the number of characters entered.
This might have caused your confusion.
A: You can use DATALENGTH to get the length of a NTEXT
A:
Create table TestTable
(
Id int identity,NtextCol NTEXT
)
GO
insert into TestTable
Select 'yogesh'
GO
insert into TestTable
Select 'bhadauriya'
Select Datalength(NtextCol) -- get length of the data
From TestTable
Go
Drop table TestTable
A: You can also use LEN ( string_expression ), where string_expression can be string expression to be evaluated. string_expression can be a constant, variable, or column of either character or binary data.
A: ssilas777 mentioned "Storage size in bytes, is two times the number of characters entered"
With this in mind, datalength([nTextColumn]) / 2 as nTextColumn_length should return the specific answer being asked in the original post.
| |
doc_23525219
|
Based on the fetched records, the function returns a real value from the if condition there.
Please explain to me, step by step, how to create a loop over a cursor in a MySQL function.
My code is like this:
DELIMITER $$
CREATE FUNCTION score_exam (e integer)
RETURNS real
BEGIN
Declare t float ;
Declare c float;
Declare d int;
Declare e int;
Declare f int;
Declare g int;
Declare h int;
DECLARE done INT DEFAULT FALSE;
DECLARE r CURSOR FOR SELECT num, quen_n, cha_nu, sele_anr, cor_ans FROM equestion WHERE exm_nm = e;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
set t=0.0;
set c=0.0;
OPEN r;
read_loop: LOOP
FETCH r INTO d,e,f,g,h;
IF done THEN
LEAVE read_loop;
END IF;
set t = t + 1.0;
IF g = h THEN
set c = c + 1.0;
END IF;
END LOOP;
CLOSE r;
IF t > 0.0 THEN
RETURN c/t;
ELSE
RETURN 0.0;
END IF;
END $$
If anyone would help me out, I will be greatful to him.
A: It looks like all you are trying to do is get a score based on an e-question (exam): the total number of questions for the test and how many correct answers. If no questions are offered for an exam, just return 0 to prevent a divide-by-zero error. That said, you are jumping through way too many hoops to write such a complex function, which can be done in a single query. Also, you have an incoming parameter "e" for the exam to be computed, yet you fetch "quen_n" into e, which will screw up your fetching, won't it? I would rename the parameter just for clarification. The extra elements in your query don't even appear to be used. Interpreting "num" as a row number or PK id in the table, quen_n as a question number from the exam, etc., the only things you care about are how many questions there were and how many were correct. So, here's the query I would write to just get the answer.
SELECT
count(*) as TotalQuestions,
sum( if( sele_anr = cor_ans, 1, 0 )) as CorrectAnswers
FROM
equestion
WHERE
exm_nm = ExamToScore
Have this as your fetch and return from that. If no record is returned from the result, return 0; if one record is returned, just do the division and return that.
| |
doc_23525220
|
I've tested it with Chrome, Safari and IE8 and it works well, just not Firefox.
I've narrowed it down to the following code:
.navigator({
// select #flowtabs to be used as navigator
navi: "#flowtabs",
// select A tags inside the navigator to work as items (not direct children)
naviItem: 'a',
// assign "current" class name for the active A tag inside navigator
activeClass: 'current',
// make browser's back button work
history: true
})
It looks like anything that has to do with the navigator plugin doesn't get fired. I used Firebug and it has no feedback.
Any ideas?
A: Set history to false or just remove the history option and it should work! Hope you don't mind much about keeping history/back button working...
A: This actually works and you can keep your history functionality. I located this in the help forums:
Inside your jQuery tools JavaScript file find:
history.pushState( {i:0} )
history.pushState( {i:c} )
replace with:
history.pushState( {i:0}, '' )
history.pushState( {i:c}, '' )
Actual credit should go to the person who found the solution:
http://flowplayer.org/tools/forum/55/83477
| |
doc_23525221
|
This is the function and all of the css and html design.
function six() {
var s = document.getElementsByClassName("trim");
for (var i = 0; i < s.length; i++) {
console.log(s[i]);
s[i].innerHTML = Math.floor((Math.random() * 37) + 1);
}
}
.lotobox {
width: 550px;
height: 100px;
border: 1px solid black;
background-color: #0080ff;
color: #E52A34;
font-size: 25px;
text-align: center;
margin: auto 0;
padding-bottom: 15px;
}
.numbers {
width: 550px;
height: 530px;
border: 1px solid black;
background-color: darkcyan;
color: black;
text-align: center;
}
th {
border: 4px solid black;
width: 70px;
height: 100px;
padding: 10px 20px 10px;
background-color: gray;
color: black;
text-align: center;
font-family: vardana;
font-size: 40px;
}
td {
text-align: center;
}
#button {
width: 110px;
height: 40px;
margin: 0 auto;
}
.table1 {
margin: 0 auto;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Loto Aplication</title>
<link rel="stylesheet" type="text/css" href="mystyle.css">
</head>
<body>
<div class="lotobox">
<h1>Loto Box</h1>
</div>
<div class="numbers">
<br><br>
<table class="table1">
<tr>
<th class="trim"></th>
<th class="trim"></th>
<th class="trim"></th>
</tr>
<tr>
<th class="trim"></th>
<th class="trim"></th>
<th class="trim"></th>
</tr>
</table>
<br>
<button id="button" onclick="six()">go!</button>
</div>
</body>
</html>
A: The Math.random() function isn't "seeded", thus likely returning the same value each time you load the page and call the function as you've exhibited.
Check out this previous StackOverflow thread about writing your own javascript random seed generator: Seeding the random number generator in Javascript
You can use that in conjunction with the array/checker functionality suggested in the comments to ensure unique numbers.
A: Create an array of the numbers from 1-37, randomly sorted, then pop() the last one from the array in your loop. pop() removes the last element of an array, so you will never have duplicates.
const nums = new Array(37).fill()
.map((_, i) => i + 1)
.sort(() => Math.random() - .5)
function six() {
var s = document.getElementsByClassName("trim");
for (var i = 0; i < s.length; i++) {
s[i].innerHTML = nums.pop();
}
}
.lotobox {
width: 550px;
height: 100px;
border: 1px solid black;
background-color: #0080ff;
color: #E52A34;
font-size: 25px;
text-align: center;
margin: auto 0;
padding-bottom: 15px;
}
.numbers {
width: 550px;
height: 530px;
border: 1px solid black;
background-color: darkcyan;
color: black;
text-align: center;
}
th {
border: 4px solid black;
width: 70px;
height: 100px;
padding: 10px 20px 10px;
background-color: gray;
color: black;
text-align: center;
font-family: vardana;
font-size: 40px;
}
td {
text-align: center;
}
#button {
width: 110px;
height: 40px;
margin: 0 auto;
}
.table1 {
margin: 0 auto;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Loto Aplication</title>
<link rel="stylesheet" type="text/css" href="mystyle.css">
</head>
<body>
<div class="lotobox">
<h1>Loto Box</h1>
</div>
<div class="numbers">
<br><br>
<table class="table1">
<tr>
<th class="trim"></th>
<th class="trim"></th>
<th class="trim"></th>
</tr>
<tr>
<th class="trim"></th>
<th class="trim"></th>
<th class="trim"></th>
</tr>
</table>
<br>
<button id="button" onclick="six()">go!</button>
</div>
</body>
</html>
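As a side note, sort(() => Math.random() - .5) is not a uniform shuffle; a Fisher-Yates shuffle avoids that bias. A minimal sketch of the same pop()-based approach (the function names here are illustrative, not from the answers):

```javascript
// Build [1..max] and shuffle in place with Fisher-Yates for an unbiased order.
function shuffledNumbers(max) {
  const nums = Array.from({ length: max }, (_, i) => i + 1);
  for (let i = nums.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // random index in 0..i
    [nums[i], nums[j]] = [nums[j], nums[i]];       // swap
  }
  return nums;
}

// pop() from the shuffled pool so no number ever repeats.
const pool = shuffledNumbers(37);
function six(cells) {
  for (const cell of cells) {
    cell.innerHTML = pool.pop();
  }
}
```

The DOM lookup from the original answer would simply pass document.getElementsByClassName("trim") as cells.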
| |
doc_23525222
|
clothes = ["shirt", "dress", "pants", "jacket", "hat"]
for colour_item in colours:
for clothes_item in clothes:
print("I am wearing a ",colour_item," ",clothes_item)
This is the code I am trying to change into while loops to produce all outcomes, i.e. 15 outcomes; the best I can get with while loops is 3.
A: You can try using a while loop while also keeping an index counter variable for each list, though it is essentially just a for loop:
colours = ["red", "green", "blue"]
clothes = ["shirt", "dress", "pants", "jacket", "hat"]
colorIndex = 0
while(colorIndex < len(colours)):
clothesIndex = 0
while(clothesIndex < len(clothes)):
print("I am wearing a",colours[colorIndex],clothes[clothesIndex])
clothesIndex += 1
colorIndex += 1
Output:
I am wearing a red shirt
I am wearing a red dress
I am wearing a red pants
I am wearing a red jacket
I am wearing a red hat
I am wearing a green shirt
I am wearing a green dress
I am wearing a green pants
I am wearing a green jacket
I am wearing a green hat
I am wearing a blue shirt
I am wearing a blue dress
I am wearing a blue pants
I am wearing a blue jacket
I am wearing a blue hat
A: If you are willing to do a little math, you can do this in a single while loop.
colours = ["red", "green", "blue"]
clothes = ["shirt", "dress", "pants", "jacket", "hat"]
n = 0
l = len(clothes)
while n < len(colours) * len(clothes):
print(f"I am wearing a {colours[n // l]} {clothes[n % l]}")
n += 1
Which prints the expected:
I am wearing a red shirt
I am wearing a red dress
I am wearing a red pants
I am wearing a red jacket
I am wearing a red hat
I am wearing a green shirt
I am wearing a green dress
I am wearing a green pants
I am wearing a green jacket
I am wearing a green hat
I am wearing a blue shirt
I am wearing a blue dress
I am wearing a blue pants
I am wearing a blue jacket
I am wearing a blue hat
A: colours = ["red", "green", "blue"]
clothes = ["shirt", "dress", "pants", "jacket", "hat"]
i=0
while i<len(colours):
j=0
colour_item = colours[i]
while j<len(clothes):
clothes_item = clothes[j]
print("I am wearing a ",colour_item," ",clothes_item)
j+=1
i+=1
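Though the exercise asks for while loops, for reference the same 15 combinations fall out of itertools.product; a brief sketch:

```python
from itertools import product

colours = ["red", "green", "blue"]
clothes = ["shirt", "dress", "pants", "jacket", "hat"]

# product() yields every (colour, clothes) pair, i.e. 3 * 5 = 15 outfits.
outfits = [f"I am wearing a {c} {item}" for c, item in product(colours, clothes)]
for line in outfits:
    print(line)
```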
| |
doc_23525223
|
Problem is that the method cannot throw exceptions and the class is anonymous.
Here is the code:
mUserMode.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
if(currentCard == 0) {
return;
}
boolean IsEmptyFields = true, isCheckedAnswers = false;
// check if all fields is fill in ...
endOfCycle: for(Component component: panelForAddingQuesions.getComponents()) {
if(component instanceof JTextField) {
JTextField question = (JTextField)component;
if(question.getText().length() == 0) {
IsEmptyFields = false;
break endOfCycle;
}
}
}
// and if there is one correct answer in every question
// check if all fields is fill in ...
for(Entry<JTextField, ArrayList<JCheckBox>> entrySets: equivalenceOfQuestionFiledsAndItsAnswers.entrySet()) {
isCheckedAnswers = false;
for(JCheckBox checkbox: entrySets.getValue()) {
if(checkbox.isSelected()) {
isCheckedAnswers = true;
}
}
}
if(IsEmptyFields) {
JOptionPane.showMessageDialog(MainActivity.this,
"Error", "Error",
JOptionPane.ERROR_MESSAGE);
}
else if(isCheckedAnswers) {
JOptionPane.showMessageDialog(MainActivity.this,
"Error","Error",
JOptionPane.ERROR_MESSAGE);
}
else {
cardLayout.last(cardPanel);
currentCard = 0;
}
// It doesn't help
//MainActivity.this.mAdminMode.setEnabled(true);
}
});
The actionPerformed method is in the anonymous class. Under a certain condition I want to cancel the switching of the checkbox item, i.e. stop this operation. But since actionPerformed completes anyway, the checkbox will be switched automatically when the view is notified. I need to prevent that directly inside the actionPerformed method.
A: You should call MainActivity.this.mAdminMode.setSelected(true);, not setEnabled(true).
| |
doc_23525224
|
private void Form1_Load(object sender, EventArgs e)
{
String cs = "Database=something;User=-;Password=-";
MySqlConnection dbconn = new MySqlConnection(cs);
dbconn.Open();
DataSet ds = new DataSet();
MySqlDataAdapter adapter = new MySqlDataAdapter(
"select * from reservasjon WHERE Rom_nr IS NULL ", dbconn);
adapter.Fill(ds);
this.listBox1.DataSource = ds.Tables[0];
this.listBox1.DisplayMember = "Rnr";
}
private void button1_Click(object sender, EventArgs e)
{
String cs = "Database=something;User=-;Password=-";
MySqlConnection dbconn = new MySqlConnection(cs);
dbconn.Open();
DataSet ds2 = new DataSet();
MySqlDataAdapter adapter2 = new MySqlDataAdapter(
"select * from rom WHERE etasje = '1' AND opptatt='1'", dbconn);
adapter2.Fill(ds2);
this.listBox2.DataSource = ds2.Tables[0];
this.listBox2.DisplayMember = "Rom_nr";
}
private void listBox1_SelectedIndexChanged(object sender, EventArgs e)
{
string value1 = listBox1.SelectedIndex.ToString();
}
private void listBox2_SelectedIndexChanged(object sender, EventArgs e)
{
string value2 = listBox2.SelectedIndex.ToString();
}
private void button4_Click(object sender, EventArgs e)
{
string value1 = listBox2.Text;
string value2 = listBox1.Text;
MySqlDataReader dr = null;
try
{
String cs = "Database=something;User=-;Password=-";
string selectStatement = "UPDATE reservasjon SET Rom_nr='102' WHERE Rnr='2';";
System.Data.DataTable dt = new System.Data.DataTable();
MySqlConnection dbconn = new MySqlConnection(cs);
dbconn.Open();
MySqlDataAdapter sqlDa = new MySqlDataAdapter();
sqlDa.SelectCommand = new MySqlCommand(selectStatement, dbconn);
MySqlCommandBuilder cb = new MySqlCommandBuilder(sqlDa);
sqlDa.Fill(dt);
dt.Rows[0]["Rnr"] = "";
sqlDa.UpdateCommand = cb.GetUpdateCommand();
sqlDa.Update(dt);
}
catch (Exception s)
{
Console.WriteLine(s.Message);
}
}
A: This should do the trick :
private void button4_Click(object sender, EventArgs e)
{
string value1 = listBox2.Text;
string value2 = listBox1.Text;
MySqlDataReader dr = null;
try
{
String cs = "Database=something;User=-;Password=-";
// Use the values selected in the list boxes instead of hard-coded ones.
// value1 comes from listBox2 (Rom_nr), value2 from listBox1 (Rnr).
string updateStatement = "UPDATE reservasjon SET Rom_nr='" + value1 + "' WHERE Rnr='" + value2 + "';";
MySqlConnection dbconn = new MySqlConnection(cs);
dbconn.Open();
// An UPDATE needs no DataAdapter round-trip; ExecuteNonQuery is enough.
MySqlCommand cmd = new MySqlCommand(updateStatement, dbconn);
cmd.ExecuteNonQuery();
}
catch (Exception s)
{
Console.WriteLine(s.Message);
}
}
A: That's because your variables value1 and value2 are not available outside the event handler methods, since you made them local. Declare them globally, or just use listBox1.SelectedIndex.ToString() directly in your SQL query.
| |
doc_23525225
|
They all work well apart from two, which try to upload test files.
This works well when run local the code is
var filePath = Path.Combine(Environment.CurrentDirectory, @"Data\image1.jpg");
addFile.SendKeys(filePath);
The test files are stored here and are set to 'Copy always'.
So they deploy OK, but they do not seem to be making their way up to the build YAML file.
Currently the steps are
clone
build
push_image
The clone step is pulling from the correct repo and the data files exists there.
Any ideas please?
Kev
A: Can you try this:
public static string GetBasePath
{
get
{
var basePath =
System.IO.Path.GetDirectoryName((System.Reflection.Assembly.GetExecutingAssembly().Location));
basePath = basePath.Substring(0, basePath.Length - 10);
return basePath;
}
}
var filePath = Path.Combine(GetBasePath, @"Data\image1.jpg");
A: Solved it.
I moved the files to the project root and changed the code:
var filePath = Path.Combine(Environment.CurrentDirectory, @"image1.jpg");
addFile.SendKeys(filePath);
Now all good, thanks for your help
| |
doc_23525226
|
Here is my HTML:
<div id='respond_4'></div>
<form method='post' id='cropper_4_infoForm' action=''>
<input type='text' name='imag_title' value='' />
<input type='hidden' name='image_filename' id='image_4_filename' value='' />
<input type='hidden' name='image_x' id='image_4_x' />
<input type='hidden' name='image_y' id='image_4_y' />
<input type='hidden' name='image_width' id='image_4_width' />
<input type='hidden' name='image_height' id='image_4_height' />
<input type='hidden' name='image_rotate' id='image_4_rotate' />
</form>
<button type='button' class='btn' onclick='SaveCropImageInfo();'><div class='icon icon-save'></div> Opslaan</button>
Here is my JavaScript:
function SaveCropImageInfo() {
console.log('going');
$('#cropper_4_infoForm').ajaxSubmit({
target: '#respond_4',
url: '/cms/modules/website/include/ajax/saveCropImageInfo.php?imag_id=4'
});
}
I have another form on this page that works on the same principle, but with a different target and a different URL, and that form does work as intended. Hope one of you guys can help me out. :)
| |
doc_23525227
|
$var= "hello world";
$var = -s $var;
print $var;
When we print the value of $var, it shows an error like:
Use of uninitialized value $var in print at line 3.
Can anyone explain how this works? What does -s do? Is it a function? I couldn't find anything about it in perldoc.
A: The -s file test operator accepts either a file name string or a valid opened file handle, and returns the size of the file in bytes. If the file doesn't exist (I presume you have no file called hello world) then it returns undef
It is documented in perldoc -f -X
There is also a perl command-line switch -s which is unrelated. It is documented in perldoc perlrun. That is the documentation that you have found, but it is irrelevant to using -s within a Perl program
A: -s is one of many file tests available in Perl. This particular test returns the file size in bytes, so it can be used to check whether a file is empty or not.
In your sample code the test returned undef, as it could not find a file named hello world.
You can read more about file tests in Perl here: http://perldoc.perl.org/functions/-X.html
A: -s is an oddly named function documented in -X. But despite the dash in its name, -s is just like any other function.
-s returns the size of the file provided as an argument. On error, it returns undef and sets $!.
To find out what error you are getting, check if the size is undefined.
defined( my $size = -s $qfn )
or die("Can't stat \"$qfn\": $!\n");
In this case, it's surely because hello world isn't a path to a file.
| |
doc_23525228
|
CREATE TABLE "ALMAT"."PRODUCT"
( "ID" NUMBER(*,0) NOT NULL ENABLE,
"NAME" VARCHAR2(50 BYTE),
"PRICE" NUMBER(*,0),
"DESCRIPTION" VARCHAR2(180 BYTE),
"CREATE_DATE" DATE,
"UPDATE_DATE" DATE,
CONSTRAINT "PRODUCT_PK" PRIMARY KEY ("ID"))
I want to update data in this table. This is my stored procedure:
CREATE OR REPLACE PROCEDURE UPDATEPRODUCT(prod_id int, prod_name varchar2 default null, prod_price int default null) AS
BEGIN
update product
set
name = prod_name,
price = prod_price,
update_date = sysdate
where id = prod_id;
commit;
END UPDATEPRODUCT;
I'm using optional parameters; how can I update only one column? For example, only "NAME" or "PRICE".
A: You can use the NVL function here. Your updated procedure would look like this:
CREATE OR REPLACE PROCEDURE UPDATEPRODUCT(prod_id int,
prod_name varchar2 default null,
prod_price int default null) AS
BEGIN
UPDATE product
SET name = NVL(prod_name, name),
price = NVL(prod_price, price),
update_date = sysdate
WHERE id = prod_id;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END UPDATEPRODUCT;
A: Use COALESCE (or NVL) to keep the current value when a NULL value is passed in (or the default is used):
CREATE OR REPLACE PROCEDURE UPDATEPRODUCT(
prod_id PRODUCT.ID%TYPE,
prod_name PRODUCT.NAME%TYPE DEFAULT NULL,
prod_price PRODUCT.PRICE%TYPE DEFAULT NULL
)
AS
BEGIN
UPDATE product
SET name = COALESCE(prod_name, name),
price = COALESCE(prod_price, price),
update_date = SYSDATE
WHERE id = prod_id;
END UPDATEPRODUCT;
Also, do not COMMIT in a stored procedure as it prevents you from chaining multiple procedures together in a single transaction and rolling them all back as a block. Instead, COMMIT from the PL/SQL block that calls the procedure.
| |
doc_23525229
|
A: Nadim Shaikh posted a tutorial on LinkedIn How-To: Salesforce & Amazon Alexa - Internet of Things, which gives a fast track to query some facts from Salesforce via Alexa.
Salesforce itself also posted some videos:
*
*Alexa Integration to SalesForce in Minutes with No Code
*Voice Enabled Salesforce Apps - Getting started with Amazon Alexa for Business
And finally a GitHub Project for a private skill with Salesforce integration from AlexaDevs /Amazon, would be a starting point to see how it could be done.
| |
doc_23525230
|
This is its structure. How do I extract this data into tables?
<course>
<lesson id="I00C8A1A645094C819BC9A0EBE2563E27">
<element name="cmi.core.student_name">Michael,Robin</element>
<element name="cmi.core.student_id">73Y4TZ0000K0</element>
<element name="cmi.core.credit">credit</element>
<element name="cmi.core.lesson_mode">normal</element>
<element name="cmi.core.lesson_status">completed</element>
<element name="cmi.core.entry" />
</lesson>
<lesson id="I66BCB22712934777BE7EB16468D43F7A">
<element name="cmi.core.student_name">Michael,Robin</element>
<element name="cmi.core.student_id">73Y4TZ0000K0</element>
<element name="cmi.core.credit">credit</element>
<element name="cmi.core.lesson_mode">normal</element>
<element name="cmi.core.lesson_status">completed</element>
<element name="cmi.core.entry" />
</lesson>
One more note is that the above record comes from just one column. For simplicity, assume there are only two columns. Column 1 is a uniqueidentifier, with values: "C00707", "C00708","C00709", etc. Each row has a record similar to above for Column 2. So I just need to break this Column 2 down, element by element, row by row.
A: select
ID,
T.N.value('(element[@name="cmi.core.student_name"])[1]', 'nvarchar(max)') as student_name,
T.N.value('(element[@name="cmi.core.student_id"])[1]', 'nvarchar(max)') as student_id,
T.N.value('(element[@name="cmi.core.credit"])[1]', 'nvarchar(max)') as credit,
T.N.value('(element[@name="cmi.core.lesson_mode"])[1]', 'nvarchar(max)') as lesson_mode,
T.N.value('(element[@name="cmi.core.lesson_status"])[1]', 'nvarchar(max)') as lesson_status
from YourTable
cross apply XMLCol.nodes('/course/lesson') as T(N)
SE-Data
| |
doc_23525231
|
class foo{
public:
int data;
};
Now I want to add a method to this class to do some comparison: to see if its data is equal to one of a set of given numbers.
Of course, I can write if(data==num1 || data==num2 || data==num3 ...), but honestly speaking, I feel sick writing data == every time I compare it to a number.
So, I hope I would be able to write something like this:
if(data is equal to one of these(num1,num2,num3,num4,num5...))
return true;
else
return false;
I want to implement this statement, data is equal to one of these(num1, num2, num3, num4, num5...)
Here is my approach:
#include <stdarg.h>
bool is_equal_to_one_of_these(int count,...){
int i;
bool equal = false;
va_list arg_ptr;
va_start(arg_ptr,count);
for(int x=0;x<count;x++){
i = va_arg(arg_ptr,int);
if( i == data ){
equal = true;
break;
}
}
va_end(arg_ptr);
return equal;
}
This piece of code will do the job for me. But every time I use this method, I'll have to count the parameters and pass the count in.
Does anyone have a better idea?
A: The answers using std::initializer_list are fine, but I want to add one more possible solution, which is exactly what you were trying with that C variadic, in a type-safe and modern way: using C++11 variadic templates:
template<typename... NUMBERS>
bool any_equal( const foo& f , NUMBERS&&... numbers )
{
auto unpacked = { numbers... };
return std::find( std::begin( unpacked ) , std::end( unpacked ) , f.data )
!= std::end( unpacked );
};
Of course this only works if all values passed are of the same type. If not, the initializer list unpacked cannot be deduced or initialized.
Then:
bool equals = any_equal( f , 1,2,3,4,5 );
EDIT: Here is an are_same metafunction to ensure that all the numbers passed are of the same type:
template<typename HEAD , typename... TAIL>
struct are_same : public and_op<std::is_same<HEAD,TAIL>::value...>
{};
Where and_op performs n-ary logical and:
template<bool HEAD , bool... TAIL>
struct and_op : public std::integral_constant<bool,HEAD && and_op<TAIL...>::value>
{};
template<>
struct and_op<> : public std::true_type
{};
This makes possible to force the usage of numbers of the same type in a simple way:
template<typename... NUMBERS>
bool any_equal( const foo& f , NUMBERS&&... numbers )
{
    static_assert( are_same<NUMBERS...>::value , 
                   "ERROR: You should use numbers of the same type" );
auto unpacked = { numbers... };
return std::find( std::begin( unpacked ) , std::end( unpacked ) , f.data )
!= std::end( unpacked );
};
A: Any optimization is going to depend on properties of the set of numbers being compared to.
If there's a definite upper bound, you can use a std::bitset. Testing membership (that is, indexing into the bitset, which behaves like an array), is O(1), effectively a few fast instructions. This is often the best solution up to limits in the hundreds, although depending on the application millions could be practical.
A: The easy way
The simplest approach is to write a member function wrapper called in() around std::find with a pair of iterators to look for the data in question. I wrote a simple template<class It> in(It first, It last) member function for that
template<class It>
bool in(It first, It last) const
{
return std::find(first, last, data) != last;
}
If you have no access to the source of foo, you can write a non-member functions of signature template<class T> bool in(foo const&, std::initializer_list<T>) etc., and call it like
in(f, {1, 2, 3 });
The hard way
But let's go completely overboard with that: just add two more public overloads:
*
*one taking a std::initializer_list parameter that calls the previous one with the begin() and end() iterators of the corresponding initializer list argument.
*one for an arbitrary container as input that will do a little tag dispatching to two more private overloads of a detail_in() helper:
*
*one overload doing a SFINAE trick with trailing return type decltype(c.find(data), bool()) that will be removed from the overload set if the container c in question does not have a member function find(), and that returns bool otherwise (this is achieved by abusing the comma operator inside decltype)
*one fallback overload that simply takes the begin() and end() iterators and delegates to the original in() taking two iterators
Because the tags for the detail_in() helper form an inheritance hierarchy (much like the standard iterator tags), the first overload will match for the associative containers std::set and std::unordered_set and their multi-cousins. All other containers, including C-arrays, std::array, std::vector and std::list, will match the second overload.
#include <algorithm>
#include <array>
#include <initializer_list>
#include <type_traits>
#include <iostream>
#include <set>
#include <unordered_set>
#include <vector>
class foo
{
public:
int data;
template<class It>
bool in(It first, It last) const
{
std::cout << "iterator overload: ";
return std::find(first, last, data) != last;
}
template<class T>
bool in(std::initializer_list<T> il) const
{
std::cout << "initializer_list overload: ";
return in(begin(il), end(il));
}
template<class Container>
bool in(Container const& c) const
{
std::cout << "container overload: ";
return detail_in(c, associative_container_tag{});
}
private:
struct sequence_container_tag {};
struct associative_container_tag: sequence_container_tag {};
template<class AssociativeContainer>
auto detail_in(AssociativeContainer const& c, associative_container_tag) const
-> decltype(c.find(data), bool())
{
std::cout << "associative overload: ";
return c.find(data) != end(c);
}
template<class SequenceContainer>
bool detail_in(SequenceContainer const& c, sequence_container_tag) const
{
std::cout << "sequence overload: ";
using std::begin; using std::end;
return in(begin(c), end(c));
}
};
int main()
{
foo f{1};
int a1[] = { 1, 2, 3};
int a2[] = { 2, 3, 4};
std::cout << f.in({1, 2, 3}) << "\n";
std::cout << f.in({2, 3, 4}) << "\n";
std::cout << f.in(std::begin(a1), std::end(a1)) << "\n";
std::cout << f.in(std::begin(a2), std::end(a2)) << "\n";
std::cout << f.in(a1) << "\n";
std::cout << f.in(a2) << "\n";
std::cout << f.in(std::array<int, 3>{ 1, 2, 3 }) << "\n";
std::cout << f.in(std::array<int, 3>{ 2, 3, 4 }) << "\n";
std::cout << f.in(std::vector<int>{ 1, 2, 3 }) << "\n";
std::cout << f.in(std::vector<int>{ 2, 3, 4 }) << "\n";
std::cout << f.in(std::set<int>{ 1, 2, 3 }) << "\n";
std::cout << f.in(std::set<int>{ 2, 3, 4 }) << "\n";
std::cout << f.in(std::unordered_set<int>{ 1, 2, 3 }) << "\n";
std::cout << f.in(std::unordered_set<int>{ 2, 3, 4 }) << "\n";
}
Live Example that -for all possible containers- prints 1 and 0 for both number sets.
The use cases for the std::initializer_list overload are membership testing of small sets of numbers that you write out explicitly in calling code. It has O(N) complexity but avoids any heap allocations.
For anything heavy-duty like membership testing of large sets, you could store the numbers in an associative container like std::set, or its multi_set or unordered_set cousins. This will go to the heap when storing these numbers, but has O(log N) or even O(1) lookup complexity.
But if you happen to have just a sequence container full of numbers around, you can also throw that to the class and it will happily compute membership for you in O(N) time.
A: It isn't pretty, but this should work:
class foo {
bool equals(int a) { return a == data; }
bool equals(int a, int b) { return (a == data) || (b == data); }
bool equals(int a, int b, int c) {...}
bool equals(int a, int b, int c, int d) {...}
private:
int data;
}
And so on. That'll give you the exact syntax you were after. But if you are after a completely variable number of arguments, then either a vector or std::initializer_list might be the way to go:
See: http://en.cppreference.com/w/cpp/utility/initializer_list
This example shows it in action:
#include <assert.h>
#include <initializer_list>
class foo {
public:
foo(int d) : data(d) {}
bool equals_one_of(std::initializer_list<int> options) {
for (auto o: options) {
if (o == data) return true;
}
return false;
}
private:
int data;
};
int main() {
foo f(10);
assert(f.equals_one_of({1,3,5,7,8,10,14}));
assert(!f.equals_one_of({3,6,14}));
return 0;
}
A:
Does anyone have a better idea? Thanks for sharing!
There's a standard algorithm for that:
using std::vector; // & std::begin && std::end
// if(data is equal to one of these(1,2,3,4,5,6))
/* maybe static const */vector<int> criteria{ 1, 2, 3, 4, 5, 6 };
return end(criteria) != std::find(begin(criteria), end(criteria), data);
Edit: (all in one place):
bool is_equal_to_one_of_these(int data, const std::vector<int>& criteria)
{
using std::end; using std::begin; using std::find;
return end(criteria) != find(begin(criteria), end(criteria), data);
}
auto data_matches = is_equal_to_one_of_these(data, {1, 2, 3, 4, 5, 6});
Edit:
I prefer the interface in terms of a vector, instead of an initializer list, because it is more powerful:
std::vector<int> v = make_candidate_values_elsewhere();
auto data_matches = is_equal_to_one_of_these(data, v);
The interface (by using a vector), doesn't restrict you to define the values, where you call is_equal_to_one_of_these.
A: set is a good option, but if you really want to roll your own, initializer_list is convienient:
bool is_in( int val, std::initializer_list<int> lst )
{
for( auto i : lst )
if( i == val ) return true;
return false;
}
use is trivial:
is_in( x, { 3, 5, 7 } ) ;
it's O(n) though; set / unordered_set is faster
A: I would recommend using a standard container like std::vector, but that would still imply linear complexity with a worst-case runtime of O(N).
class Foo{
public:
int data;
bool is_equal_to_one_of_these(const std::vector<int>& arguments){
bool matched = false;
for(int arg : arguments){ //if you are not using C++11: for(int i = 0; i < arguments.size(); i++){
if( arg == data ){ //if you are not using C++11: if(arguments[i] == data){
matched = true;
}
}
return matched;
}
};
std::vector<int> exampleRange{ {1,2,3,4,5} };
Foo f;
f.data = 3;
std::cout << f.is_equal_to_one_of_these(exampleRange); // prints "1" (i.e. true)
A: There are many ways of doing this with the STL.
If you have an incredibly large number of items and you want to test if your given item is a member of this set, use set or unordered_set. They allow you to check membership in log n and constant time respectively.
If you keep the elements in a sorted array, then binary_search will also test membership in log n time.
For small arrays, a linear search with find might however perform significantly faster (as there is no branching). A linear search might even do 3-8 comparisons in the time it takes the binary search to 'jump around'. This blog post suggests there to be a break-even point at approximately 64 items, below which a linear search might be faster, though this obviously depends on the STL implementation, compiler optimizations and your architecture's branch prediction.
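To illustrate the two lookup strategies side by side, here is a sketch in Python rather than C++ (the actual break-even point is implementation- and hardware-dependent):

```python
from bisect import bisect_left

def linear_member(sorted_vals, x):
    # O(N) scan; we can stop early because the values are sorted
    for v in sorted_vals:
        if v == x:
            return True
        if v > x:
            return False
    return False

def binary_member(sorted_vals, x):
    # O(log N) binary search over the same sorted list
    i = bisect_left(sorted_vals, x)
    return i < len(sorted_vals) and sorted_vals[i] == x

vals = sorted([1, 2, 2000, 6000])
assert linear_member(vals, 2000) and binary_member(vals, 2000)
assert not linear_member(vals, 7) and not binary_member(vals, 7)
```

Both return identical results; only the constant factors differ, which is why profiling on your own data is the only reliable way to pick one.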
A: If data is really an integral or enumerated type, you can use a switch:
switch (data) {
case 1:
case 2:
case 2000:
case 6000:
case /* whatever other values you want */:
act_on_the_group();
break;
default:
act_on_not_the_group();
break;
}
A: If data, num1, .. num6 are between 0 and 31, then you can use
int match = ((1<<num1) | (1<<num2) | ... | (1 << num6));
if( ( (1 << data) & match ) != 0 ) ...
If num1 to num6 are constants, the compiler will compute match at compile time.
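The same bitmask trick, sketched in Python for illustration (in C/C++ the values must stay below the bit width of the integer type holding match):

```python
# membership of small non-negative integers encoded as bits of one word
nums = [1, 2, 5, 9, 17, 31]        # the num1..num6 values (all < 32)
match = 0
for n in nums:
    match |= 1 << n                # set one bit per allowed value

def in_group(data):
    # a single AND tests membership in constant time
    return (1 << data) & match != 0

assert in_group(5)
assert not in_group(6)
```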
| |
doc_23525232
|
I have checked the following setting and it is set to Yes:
Library settings >>> General Settings >>> Advanced Settings >>> Allow items from this document library to appear in search results?
I have also added the Default content access account to the members group of the site (as I want to also index drafts). The documents not being indexed are a mixture of checked in and drafts.
It might be useful to know that the sites containing the document libraries use the Team Site template, the site collection has publishing turned on. Everything was created programatically.
I'm pulling my hair out with this so any suggestions of what I can check or how I can approach this issue would be greatly appreciated - I have only been working with SharePoint for 2 months so I'm still learning.
TIA
A: Check that the folder item is approved and published.
The effected libraries had the option Library Settings > Versioning Settings > Draft item security set to 'Only users who can edit items'.
My crawl account did have full control of the documents in the library and I logged in using the crawl account to confirm this.
When I changed the above option to 'Any user who can read items', then reset the index and did a full crawl, the documents were indexed and are now being served up by search.
I had previously read that the crawler ignores this option, but it seems that it interprets it in an unexpected way.
Interestingly (and frustratingly), when I changed the 'Draft Item Security' option to 'Any user who can read items' and did an incremental crawl
A: Ensure that the service account that is set up on the Search Service Application (under "Application Management" > "Manage service applications" in Central Admin) has the correct permissions to view the documents. Also ensure that this account is not too highly privileged, so that private documents are not displayed in the search results. I would recommend setting up a service account called "spsearch", which has read access granted to the content sources.
| |
doc_23525233
|
I think Anko will ignore the passed value 10 of _id and pass a new value to _id automatically, but in fact the value 10 of _id is inserted into the table.
How can I make Anko ignore the passed value of _id when _id is INTEGER + PRIMARY_KEY + AUTOINCREMENT?
Insert Data
SettingManage().addSetting(MSetting(10L,"My Settings",2000L,"This is description!"))
Design Table
class DBSettingHelper(mContext: Context = UIApp.instance) : ManagedSQLiteOpenHelper(
mContext,
DB_NAME,
null,
DB_VERSION) {
companion object {
val DB_NAME = "setting.db"
val DB_VERSION = 5
val instance by lazy { DBSettingHelper() }
}
override fun onCreate(db: SQLiteDatabase) {
db.createTable( DBSettingTable.TableNAME , true,
DBSettingTable._ID to INTEGER + PRIMARY_KEY+ AUTOINCREMENT ,
DBSettingTable.Name to TEXT,
DBSettingTable.CreatedDate to INTEGER,
DBSettingTable.Description to TEXT
)
}
override fun onUpgrade(db: SQLiteDatabase, oldVersion: Int, newVersion: Int) {
db.dropTable(DBSettingTable.TableNAME, true)
onCreate(db)
}
}
class DBSetting(val mMutableMap: MutableMap<String, Any?>) {
var _id: Long by mMutableMap
var name: String by mMutableMap
var createdDate: Long by mMutableMap
var description: String by mMutableMap
constructor(_id: Long, name: String, createdDate: Long, description: String)
: this(HashMap()) {
this._id = _id
this.name = name
this.createdDate = createdDate
this.description=description
}
}
object DBSettingTable {
val TableNAME = "SettingTable"
val _ID = "_id"
val Name = "name"
val CreatedDate = "createdDate"
val Description="description"
}
data class MSetting (
val _id: Long,
val name: String,
val createdDate: Long,
val description: String
)
Business Logic
class SettingManage {
fun addSetting(mMSetting:MSetting){
DBSettingManage().addDBSetting(DbDataMapper().convertMSetting_To_DBSetting(mMSetting))
}
}
class DBSettingManage(private val mDBSettingHelper: DBSettingHelper =DBSettingHelper.instance) {
fun addDBSetting(mDBSetting: DBSetting)=mDBSettingHelper.use{
insert(DBSettingTable.TableNAME,*mDBSetting.mMutableMap.toVarargArray())
}
}
class DbDataMapper {
fun convertMSetting_To_DBSetting(mMSetting: MSetting) =with(mMSetting){
DBSetting(_id,name,createdDate,description)
}
fun convertDBSetting_To_MSetting(mDBSetting: DBSetting)=with(mDBSetting){
MSetting(_id,name,createdDate,description )
}
}
fun <T : Any> SelectQueryBuilder.parseList(parser: (Map<String, Any?>) -> T): List<T> =
parseList(object : MapRowParser<T> {
override fun parseRow(columns: Map<String, Any?>): T = parser(columns)
})
A: Anko, in your usage, is a wrapper for SQLite. SQL itself overrides auto increment when a custom value is passed. If no value is passed -> Automatic value. Otherwise -> manual. It assumes it's unique because of PRIMARY_KEY, but that's a different problem.
As far as I know, there is no integrated feature into Anko that allows for overriding this manually. Instead, the only thing you can do is not pass a value. Any SQL query that's wrong won't be caught by Anko itself, but by the raw SQL. It's SQL that throws any exceptions meaning Anko won't check for missing data.
Simply, don't pass an ID. You can write a function that still takes in the ID but discards it if the row is set to PRIMARY_KEY and AUTO_INCREMENT.
I have done more digging in the source code and there's no support for ignoring passed values if it's set to auto increment.
This means the only way you can get it to actually automatically increment is by not passing any values. If you are using manual queries of any kind, you simply don't pass any value. So, don't give the database the ID. Because it's auto increment it'll automatically add it and that's on the base SQL level.
Anko doesn't ignore it by default because it's essentially a wrapper for SQLite, meaning the ID has to be passable in case it should be overridden (see the link in the second sentence of this answer). As such, Anko ignoring it would cause more problems than it would do good. You have to make sure it isn't passed instead.
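The underlying SQLite behaviour is easy to verify with Python's built-in sqlite3 module (a sketch of the raw SQL behaviour, not of Anko itself): an explicit _id is kept as-is, while omitting the column lets AUTOINCREMENT assign the next value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# explicit id: SQLite keeps the value you pass
conn.execute("INSERT INTO t (_id, name) VALUES (10, 'explicit')")
# no id column: SQLite auto-increments past the largest used id
conn.execute("INSERT INTO t (name) VALUES ('auto')")

rows = conn.execute("SELECT _id, name FROM t ORDER BY _id").fetchall()
assert rows == [(10, 'explicit'), (11, 'auto')]
```

So the practical fix is exactly what the answer says: build the column/value list without the _id entry before handing it to insert.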
| |
doc_23525234
|
I have a video that is 4:15 long (255 seconds). When I create such a video in Adobe Premiere and upload it to YouTube, everything is fine. The problem occurs when I create such a video in FFMPEG. Before the HD versions are processed (>= 1080p) the video works fine, but when the HD versions appear, its duration on the scrollbar becomes 4:00 (240 seconds). If you go to the end of the video at 4:00, it continues to play for another 15 seconds.
Here is an example of such a video clip: https://youtu.be/G_tMctEklM4
Screenshot: https://i.stack.imgur.com/HxWs5.png
I would be grateful for your help.
UPD
I think I figured out what the problem was.
I took 5 video files of 25 seconds each and merged them. The resulting file is ~125 seconds long; I uploaded it to YouTube - everything is fine. Then I tried to build my project - again, problems. Apparently, the problem is that I merged 42 files: 2 files of ~20 seconds each and 40 files of 5-6 seconds each. These are not integer durations; here are some of the files:
06.143000, 05.983000, 06.488000, 06.018000, 05.809000
So, I understand FFMPEG rounds them down to integer numbers, and because there are a lot of such files - 42 - there is a shift of 15-20 seconds (about 0.5 seconds from each file). But what to do with it I still do not understand.
Related topic: https://www.reddit.com/r/ffmpeg/comments/e1o3hv/ffconcat_filter_with_millisecond_precision_how/f8qvztl/
and this (???): ffmpeg - avoid duration approximation of generated files
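The size of the drift is easy to estimate with a little arithmetic (a sketch; the exact rounding the concat demuxer applies depends on the stream timebase):

```python
durations = [6.143, 5.983, 6.488, 6.018, 5.809]  # sample clip lengths in seconds
# pretend each duration is truncated to a whole second when concatenating
true_total = sum(durations)
rounded_total = sum(int(d) for d in durations)
drift = true_total - rounded_total
# each clip loses its fractional part, roughly 0.5 s on average,
# so ~42 clips accumulate a 15-20 second mismatch
assert 2.0 < drift < 3.0
```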
| |
doc_23525235
|
So the filters are:
*
*Date Range Filter (pre-existing)
*Subsite Filter (pre-existing)
*Store Filter (NEW)
Please notice that I am not doing a static report - I would like EXISTING reports to take into consideration my NEW custom filter.
I have used the ideas introduced in the post by Mohammed Syam to provide the frontend part of the new filter (and it is there, with the combobox coming up and the URL gaining an additional query string parameter). I am just not sure where I would go to tweak the "receiving end" for existing reports.
| |
doc_23525236
|
Yeoman set up a nice app folder for me, but for some reason it does not display. I thought maybe I was missing a server.js, but that does not seem to have fixed anything when I added it. Any advice? Thanks!
A: Make sure you are matching the port all the way through - your browsers URL:PORT, the EC2 routing rules and your NodeJS settings. It looks like you might be listening to a port higher than 80 on the server.
As you mentioned in your comment, if you want to listen on a port below 1024 you will need to run the command as a privileged user.
A: I didn't run node as root on my AWS server, so it was not setting up my nicely built app that Yeoman made for me.
http://www.stackoverflow.com/questions/9164915 was where I realized my mistake. I am new to the Linux OS, so I am learning. :)
| |
doc_23525237
|
I have a JSON that looks like this:
my_json = { "ak": None, "bn": None, "br": None, "sk" : None}
I then have the following class
class MyClass:
ak: str = None
bn: str = None
br: str = None
sk: str = None
When I do this:
new_instance = jsons.load(my_json, MyClass)
I am getting the following error.
jsons.exceptions.DeserializationError: NoneType cannot be deserialized into str
The end result I would like is for new_instance to look like this:
new_instance.ak = None
new_instance.bn = None
new_instance.br = None
new_instance.sk = None
I understand what the error means but I am trying to find a way to get the intended result. Is there something I can do to MyClass to allow for this? Or is there any other way to resolve this?
----EDIT----
I found that this will get me the intended result:
class MyClass:
ak: str
bn: str
br: str
sk: str
A: For what it's worth, I finally figured out why this was happening.
The fix is self-explanatory. I have to declare MyClass like so:
class MyClass:
ak: str
bn: str
br: str
sk: str
def __init__(self, ak, bn, br, sk):
self.ak = ak
self.bn = bn
self.br = br
self.sk = sk
The jsons module then works nicely. The order of the variables in the __init__ function does not matter - the module matches the variable name to the key in the JSON document. If the JSON document is missing a key, a NoneType is then passed to the __init__ function, which is very nice.
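A dataclass generates that __init__ automatically; the sketch below mimics the key-to-parameter matching in plain Python (how the jsons library does it internally may differ):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MyClass:
    ak: Optional[str] = None
    bn: Optional[str] = None
    br: Optional[str] = None
    sk: Optional[str] = None

my_json = {"ak": None, "bn": None, "br": None, "sk": None}
# dict keys are matched to the generated __init__ parameter names
new_instance = MyClass(**my_json)
assert new_instance.ak is None and new_instance.sk is None

# a missing key simply falls back to the default None
partial = MyClass(**{"ak": "value"})
assert partial.ak == "value" and partial.bn is None
```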
| |
doc_23525238
|
I am aiming to learn Excel VBA and hoping for guidance on aligning pictures horizontally, such as 5 pictures in one row, then a new row begins below, and repeat. For now I am using a hard-coded value of 5, just to have it occur once, though the results aren't what I expected. Here are the two symptoms of the problem:
*
*Seems to take the first image then make a new row right away
*Then vertically aligns two images on different new rows
I considered needing an additional counter to keep track of this, so the macro will know when to introduce a new row.
Sub pictureCode()
'Automatically space and align shapes
Dim shp As Shape
Dim counter As Long
Dim dTop As Double
Dim dLeft As Double
Dim dHeight As Double
Const dSPACE As Double = 50
'Set variables
counter = 1
ActiveSheet.Shapes.SelectAll
'Loop through selected shapes
For Each shp In Selection.ShapeRange
With shp
'If not first shape then move it below previous shape and align left.
If counter = 5 Then
.Top = dTop
.Left = dLeft + dWidth + dSPACE
Else
.Top = dTop + dHeight + dSPACE
.Left = dLeft
End If
'Store properties of shape for use in moving next shape in the collection.
dTop = .Top
dLeft = .Left
dHeight = .Height
End With
'Add to shape counter
counter = counter + 1
Next shp
End Sub
A: Try the following code, please. It aligns shapes using a row reference (Top and Left):
Sub testAlignShapes()
Dim sh As Worksheet, s As Shape, i As Long, colAlign As Long, startRow As Long
Dim dWidth As Double, dSpace As Double, rngAlign As Range, iRows As Long, nrShLine As Long
Set sh = ActiveSheet
colAlign = 9 'column number to align the shapes
startRow = 2 ' starting row
nrShLine = 3 'how many shapes on the same row
iRows = 3 ' after how many rows will start the following shapes row
For Each s In sh.Shapes
Set rngAlign = sh.cells(startRow, colAlign)
i = i + 1
If i <= nrShLine Then
s.top = rngAlign.top: s.left = rngAlign.left + dWidth + dSpace
dWidth = dWidth + s.width: dSpace = dSpace + 10
If i = nrShLine Then i = 0: dWidth = 0: dSpace = 0: startRow = startRow + iRows
End If
Next
End Sub
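The row/column arithmetic behind such a grid layout is language-independent; a Python sketch with made-up cell sizes:

```python
def grid_position(index, per_row=5, cell_w=120, cell_h=90, gap=10):
    # index // per_row gives the row, index % per_row gives the column
    row, col = divmod(index, per_row)
    left = col * (cell_w + gap)
    top = row * (cell_h + gap)
    return top, left

# shape 0 starts the first row; shape 5 wraps to the next row at left = 0
assert grid_position(0) == (0, 0)
assert grid_position(4) == (0, 4 * 130)
assert grid_position(5) == (100, 0)
```

The same divmod idea replaces the manual counter-reset logic: the shape's index alone determines its row and column.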
| |
doc_23525239
|
A: You need to be able to run a sql statement (and have proper permissions for it to execute).
DBCC CHECKIDENT
(
table_name
[, { NORESEED | { RESEED [, new_reseed_value ] } } ]
)
[ WITH NO_INFOMSGS ]
A: Visual Studio Server Explorer is just a client. You want to reset the identity column value in the server table. Do you see the difference? Say, between Email and Outlook, or the Internet and Firefox?
(I hope I've given you a good enough tip on what to google for.)
| |
doc_23525240
|
but the height of the child div is not fixed. I want to make the child div scrollable independently of the parent div's scroll. How can I do it? Is there a way to set a custom thumb size for the scrollbar of the child, as it covers the whole height of the child div?
A: Is this what you mean?
<html lang="en">
<head>
<title>Login page</title>
<style>
.custom{
width: 300px;
height: 300px;
padding: 20px;
background-color: #EEE;
border: 10px solid #CCC;
margin: 30px;
overflow: scroll;
}
.child{
height: 800px;
width: 800px;
background-color: #00F;
}
</style>
</head>
<body>
<div id="main" class="custom">
<div class="child">
child div
</div>
</div>
</body>
</html>
A: Set the overflow of the child div/element to auto to make it scroll:
overflow: auto;
For scrollbar thumb you may find these useful:
https://www.codegrepper.com/code-examples/css/custom+scrollbar+thumb+size
https://www.w3schools.com/howto/howto_css_custom_scrollbar.asp
| |
doc_23525241
|
In the first example, if I comment out the bean definition, the listener stops receiving messages. AFAIK JmsListenerContainerFactory provides the @JmsListener support and is created by default. In my understanding I can define my own if I need some customization, but in my example there is nothing special in that bean definition, just the concurrency setting.
Here you have bean definition:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setConcurrency("1-1"); // min-max number of consumers
return factory;
}
So, please enlighten me about the need of that bean.
A: If you are using Spring Boot, it automatically configures a container factory using the properties in application.yml/properties.
If you are NOT using Spring Boot, you have to define your own factory.
| |
doc_23525242
|
Thanks
A: Have you tried to set the Market option?
According to this example page, you should try something like this (note the &Market=sl-SI argument - sl-SI being the Slovenian locale code):
http://api.bing.net/json.aspx?AppId=your_AppId&Query=your_query&Sources=Web&Version=2.0&Market=sl-SI&Options=EnableHighlighting&Web.Count=10&Web.Offset=0&JsonType=callback&JsonCallback=SearchCompleted
A:
First off, Slovenia is currently not a Bing Market or Country.
There are 2 mutually exclusive options to configure a localization.
Since Slovenia is not yet supported, you might want to use 2. to combine results from relevant markets.
*
*Using mkt and setLang
The values for mkt - Market Code are here.
The query value setLang, "The language to use for user interface strings. Specify the language using the ISO 639-1 2-letter language code. For example, the language code for English is EN. The default is EN (English)."
https://api.cognitive.microsoft.com/bing/v7.0/search?q=microsoft&mkt=en-US&setLang=EN
*
*Using cc and Accept-Language
The values for cc - Country Code are here.
This allows you to specify multiple languages via the header value Accept-Language.
https://api.cognitive.microsoft.com/bing/v7.0/search?q=microsoft&cc=US
True, setting the Accept-Language does very little for the actual
result. If you want to localize outside of a Bing market country, you'll like have to include a translation service.
A: Values for the Market parameter
| |
doc_23525243
|
However when I try to launch something like diskmgmnt.msc I need to use the ShellExecute api instead. In this case how do I specify the parent of the new process.
| |
doc_23525244
|
I have implemented a solr suggester for list of cities and areas. I have user FuzzyLookupFactory for this. My schema looks like this:
<fieldType name="suggestTypeLc" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[^a-zA-Z0-9]" replacement=" " />
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
<filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
</fieldType>
synonym.txt is used for mapping older city names with new ones, like Madras=>Chennai, Saigon=>Ho Chi Minh city
My suggester definition looks like this:
<searchComponent name="suggest" class="solr.SuggestComponent">
<lst name="suggester">
<str name="name">suggestions</str>
<str name="lookupImpl">FuzzyLookupFactory</str>
<str name="dictionaryImpl">DocumentDictionaryFactory</str>
<str name="field">searchfield</str>
<str name="weightField">searchscore</str>
<str name="suggestAnalyzerFieldType">suggestTypeLc</str>
<str name="buildOnStartup">false</str>
<str name="buildOnCommit">false</str>
<str name="storeDir">autosuggest_dict</str>
</lst>
</searchComponent>
My request handler looks like this:
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
<lst name="defaults">
<str name="suggest">true</str>
<str name="suggest.count">10</str>
<str name="suggest.dictionary">suggestions</str>
<str name="suggest.dictionary">results</str>
</lst>
<arr name="components">
<str>suggest</str>
</arr>
</requestHandler>
Now the problem is that the suggester is showing the exact matches first, but it is case sensitive. For example,
/suggest?suggest.q=mumbai (starting with a lower case "m")
will give the exact result at 4th place:
{
"responseHeader":{
"status":0,
"QTime":19},
"suggest":{
"suggestions":{
"mumbai":{
"numFound":10,
"suggestions":[{
"term":"Mumbai Domestic Airport",
"weight":11536},
{
"term":"Mumbai Chhatrapati Shivaji Intl Airport",
"weight":11376},
{
"term":"Mumbai Pune Highway",
"weight":2850},
{
"term":"Mumbai",
"weight":2248},
.....
Whereas, calling /suggest?suggest.q=Mumbai (starting with an upper case "M")
is giving exact result at 1st place:
{
"responseHeader":{
"status":0,
"QTime":16},
"suggest":{
"suggestions":{
"Mumbai":{
"numFound":10,
"suggestions":[{
"term":"Mumbai",
"weight":2248},
{
"term":"Mumbai Domestic Airport",
"weight":11536},
{
"term":"Mumbai Chhatrapati Shivaji Intl Airport",
"weight":11376},
{
"term":"Mumbai Pune Highway",
"weight":2850},
...
What am I missing here? What can be done to make Mumbai the first result even when the query is the lower case "mumbai"? I thought the case sensitivity was being handled by the "suggestTypeLc" field type I've defined.
A: There is a hidden config parameter for FuzzyLookupFactory, exactMatchFirst, which is described as:
If true, the default, exact suggestions are returned first, even if they are prefixes or other strings in the FST have larger weights.
According to your config, suggestions are ranked by the searchscore field (in your config it refers to: <str name="weightField">searchscore</str>). This is why, when you query mumbai, all suggestions are sorted by weight.
But according to exactMatchFirst=true, you will have Mumbai on top (for the query Mumbai) despite the provided weighting mechanism. And this is actually how exactMatchFirst impacts the ordering.
Unfortunately, I didn't find an option for tuning your suggester other than getting rid of weightField altogether.
Try turning off weighting-by-field, or alternatively try another lookup implementation, for instance AnalyzingInfixLookupFactory.
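If changing the suggester configuration is not an option, a workaround is to reorder on the client: move a case-insensitive exact match to the front of the returned list. A sketch (the data mirrors the responses above):

```python
def exact_match_first(query, suggestions):
    # suggestions: list of (term, weight) pairs as returned by the suggester
    q = query.lower()
    exact = [s for s in suggestions if s[0].lower() == q]
    rest = [s for s in suggestions if s[0].lower() != q]
    return exact + rest

suggs = [("Mumbai Domestic Airport", 11536),
         ("Mumbai Chhatrapati Shivaji Intl Airport", 11376),
         ("Mumbai Pune Highway", 2850),
         ("Mumbai", 2248)]
assert exact_match_first("mumbai", suggs)[0] == ("Mumbai", 2248)
```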
| |
doc_23525245
|
I am trying to customize the notification email layout.
My code to send email like this :
public function toMail($notifiable)
{
return (new MailMessage)
->subject('Test')
->view('vendor.mail.markdown.message',['data'=>$this->data]);
}
The view like this :
@component('mail::layout')
{{-- Header --}}
@slot('header')
@component('mail::header', ['url' => config('app.url')])
{{ config('app.name') }}
@endcomponent
@endslot
{{-- Body --}}
{{ $slot }} test
{{-- Subcopy --}}
@isset($subcopy)
@slot('subcopy')
@component('mail::subcopy')
{{ $subcopy }}
@endcomponent
@endslot
@endisset
{{-- Footer --}}
@slot('footer')
@component('mail::footer')
© {{ date('Y') }} {{ config('app.name') }}. All rights reserved.
@endcomponent
@endslot
@endcomponent
When the code is executed, I get an error like this:
(2/2) ErrorException No hint path defined for [mail]. (View:
C:\xampp\htdocs\myshop\resources\views\vendor\mail\markdown\message.blade.php)
How can I solve the error?
A: If you are using markdown in your template, you need to use the ->markdown() method rather than the ->view() method on your MailMessage
public function toMail($notifiable)
{
return (new MailMessage)
->subject('Test')
->markdown('vendor.mail.markdown.message', ['data' => $this->data]);
}
A: In an application migrated through different Laravel versions (and now at 5.6) I had to modify the file config/mail.php, changing the parameter markdown/paths from resource_path('views/vendor/mail') to resource_path('views/vendor/mail/markdown'), so it found the base templates for my Markdown mails.
| |
doc_23525246
|
def split_matrix(k, n):
split_points = [round(i * k / n) for i in range(n + 1)]
split_ranges = [(split_points[i], split_points[i + 1],) for i in range(len(split_points) - 1)]
return split_ranges
import numpy as np
k = 100
arr = np.zeros((k,k,))
idx = 0
for i in range(k):
for j in range(i + 1, k):
arr[i, j] = idx
idx += 1
def parallel_calc(array, k, si, endi):
for i in range(si, endi):
for j in range(k):
# do some expensive calculations
for start_i, stop_i in split_matrix(k, cpu_cnt):
parallel_calc(arr, k, start_i, stop_i)
Do you have any suggestions as to the implementation or library function?
A: After a number of geometrical calculations on the side, I arrived at the following partitioning, which gives roughly the same number of matrix points in each of the vertical (or horizontal, if one prefers) partitions.
import math
def offsets_for_equal_no_elems_diag_matrix(matrix_dims, num_of_partitions):
if 2 == len(matrix_dims) and matrix_dims[0] == matrix_dims[1]: # square
k = matrix_dims[0]
# equilateral right angle triangles have area of side**2/2 and from this area == 1/num_of_partitions * 1/2 * matrix_dim[0]**2 comes the below
# the k - ... comes from the change in the axis (for the calc it is easier to start from the smallest triangle piece)
div_points = [0, ] + [round(k * math.sqrt((i + 1)/num_of_partitions)) for i in range(num_of_partitions)]
pairs = [(k - div_points[i + 1], k - div_points[i], ) for i in range(num_of_partitions - 1, -1, -1)]
return pairs
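A quick sanity check (a self-contained Python sketch re-deriving the same split points) confirms that the vertical bands hold roughly equal numbers of upper-triangular elements:

```python
import math

def offsets(k, num_of_partitions):
    # same square-root spacing as in the answer above
    div_points = [0] + [round(k * math.sqrt((i + 1) / num_of_partitions))
                        for i in range(num_of_partitions)]
    return [(k - div_points[i + 1], k - div_points[i])
            for i in range(num_of_partitions - 1, -1, -1)]

k, parts = 100, 4
# row i of the upper triangle holds k - i - 1 elements
counts = [sum(k - i - 1 for i in range(lo, hi)) for lo, hi in offsets(k, parts)]
total = k * (k - 1) // 2
assert sum(counts) == total
# each band should be within 10% of the ideal total/parts share
assert all(abs(c - total / parts) < 0.1 * total for c in counts)
```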
A: I think you should update your split_matrix method, as it returns one split range fewer than you want (setting cpu_cnt=4 will return only 3 tuples, not 4):
def split_matrix(k, n):
split_points = [round(i * k / n) for i in range(n+1)]
return [(split_points[i], split_points[i + 1],) for i in range(len(split_points) - 1)]
Edit: If your data locality is not so strong, you could try this: create a queue of tasks, to which you add all indices/entries for which the calculation shall be performed. Then you initialize your parallel workers (e.g. using multiprocessing) and let them start. Each worker now picks an element out of the queue, calculates the result, stores it (e.g. in another queue) and continues with the next item, and so on.
If this is not working for your data, I don't think you can improve it any more.
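The task-queue idea can be sketched with the standard library; threads are used here only to keep the sketch self-contained (CPU-bound work would normally use multiprocessing instead):

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

for i in range(100):          # one task per row index of the matrix
    tasks.put(i)

def worker(k=100):
    while True:
        try:
            i = tasks.get_nowait()
        except queue.Empty:
            return            # queue drained, worker exits
        # stand-in for the expensive per-row calculation
        results.put((i, sum(j for j in range(i + 1, k))))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results.qsize() == 100
```

Because workers pull rows on demand, expensive rows no longer pin an entire contiguous band to one worker.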
| |
doc_23525247
|
public virtual Author Author { get; set; }
public virtual Author Author1 { get; set; }
public virtual Author Author2 { get; set; }
And I want to have something like:
public virtual Author Writer { get; set; }
public virtual Author Supervisor { get; set; }
public virtual Author Reviewer { get; set; }
Corresponding to my SQL table columns:
[WriterId] [int] NOT NULL,
[SupervisorId] [int] NOT NULL,
[ReviewerId] [int] NOT NULL,
I know that I can add partial classes to add extra properties like in below example:
public virtual Author AuthorWriter
{
get
{
return this.Author1;
}
set
{
this.Author1 = value;
}
}
but I would have too much code to write. So my question is if there is a way in Visual Studio 2013 to configure how virtual property names are generated by Entity Framework?
I should mention that I do not want to change the way the C# model is generated, because the database is large and I do not have access to modify its structure; moreover, if the database changes I should regenerate my C# model.
Thanks in advance
| |
doc_23525248
| ||
doc_23525249
|
Here's the code I'm referencing to.
Function.prototype.method = function(name, func) {
this.prototype[name] = func;
return this;
}
/* SIMPLE CONSTRUCTOR */
function Person(name, age) {
this.name = name;
this.age = age;
}
/* ADD METHODS */
Person.method('getName', function() { return this.name; });
Person.method('getAge', function() { return this.age; });
var rclark = new Person('Ryan Clark', 22);
console.log(rclark.getName()); // string(Ryan Clark)
console.log(rclark.getAge()); // number(22)
I tried omitting 'return this' to see if the code would break but it doesn't? What exactly does 'return this' do? I'll keep progressing through this book but I want to make sure I'm understanding everything. Any help will be appreciated.
A: It allows for chaining so you can do something like this:
/* ADD METHODS */
Person.method('getName', function() { return this.name; })
.method('getAge', function() { return this.age; });
A: return this returns the object on which method() was called, and after it has been modified by adding the passed method to it.
Omitting it won't break your code, but it is a better style that allows chained method invocation, so you can for instance:
Person.method('getName', function() { return this.name; }).method('getAge', function() { return this.age; });
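The same pattern works in other languages too; in Python, returning self from a method enables the same chained calls (a sketch loosely mirroring the example above):

```python
class Builder:
    def __init__(self):
        self.methods = {}

    def method(self, name, func):
        self.methods[name] = func
        return self          # returning the object is what enables chaining

b = Builder().method("getName", lambda: "Ryan Clark") \
             .method("getAge", lambda: 22)
assert b.methods["getName"]() == "Ryan Clark"
assert b.methods["getAge"]() == 22
```

Without the return, each method call would evaluate to None and the next .method(...) in the chain would fail, which is exactly why omitting it only breaks chained usage.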
| |
doc_23525250
|
string requestMessageString = @"<soapenv:Envelope
xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/""
xmlns:inf=""http://www.informatica.com/""
xmlns:wsdl=""http://www.informatica.com/wsdl/"">
<soapenv:Header>
<inf:Security>
<UsernameToken>
<Username>john</Username>
<Password>jhgfsdjgfj</Password>
</UsernameToken>
</inf:Security>
</soapenv:Header>
<soapenv:Body>
<wsdl:doClient_ws_IbankRequest>
<wsdl:doClient_ws_IbankRequestElement>
<!--Optional:-->
<wsdl:Client_No>00460590</wsdl:Client_No>
</wsdl:doClient_ws_IbankRequestElement>
</wsdl:doClient_ws_IbankRequest>
</soapenv:Body>
</soapenv:Envelope>"
and i am sending the message like this
Message requestMsg = Message.CreateMessage(MessageVersion.Soap11, "http://tempuri.org/IService1/IbankClientOperation", requestMessageString );
Message responseMsg = null;
BasicHttpBinding binding = new BasicHttpBinding();
IChannelFactory<IRequestChannel> channelFactory = binding.BuildChannelFactory<IRequestChannel>();
channelFactory.Open();
EndpointAddress address = new EndpointAddress(this.Url);
IRequestChannel channel = channelFactory.CreateChannel(address);
channel.Open();
responseMsg = channel.Request(requestMsg);
but the problem is that the actual message which is sent over wire has a SOAP message inside a SOAP message...
I somehow want to convert my raw message into a SOAP structure.
A: You can't use Soap11 as message version and you cannot use BasicHttpBinding. Try:
Message requestMsg = Message.CreateMessage(MessageVersion.None, "http://tempuri.org/IService1/IbankClientOperation", requestMessageString );
CustomBinding binding = new CustomBinding(new HttpTransportBindingElement());
IChannelFactory<IRequestChannel> channelFactory = binding.BuildChannelFactory<IRequestChannel>();
channelFactory.Open();
But anyway, if you have a SOAP request, why don't you simply use WebClient or HttpWebRequest to post the request to the server?
A: I got the answer from this question
wcf soap message deserialization error
A: You can convert (deserialize) your SOAP message into an object that your service expects. Here's a sketch of what works for me:
var invoice = Deserialize<Invoice>(text);
var result = service.SubmitInvoice(invoice);
where Deserialize is this:
private T Deserialize<T>(string text)
{
T obj;
var serializer = new DataContractSerializer(typeof(T));
using (var ms = new MemoryStream(Encoding.Default.GetBytes(text)))
{
obj = (T)serializer.ReadObject(ms);
}
return obj;
}
Since SOAP is XML, you can easily adjust it structure (remove or change namespace, for example) before deserializing.
| |
doc_23525251
|
My project is a desktop game only and uses screen implementations combined with a game super class.
Is there any other way to get text input inside the window, i.e. not as a pop-up?
A: As said in the comment, the right way to make a login screen is to use Scene2d.ui and its TextField. Check out the Scene2d.ui wiki page and Skin Composer to get going.
| |
doc_23525252
|
Uncaught Error: Parameter 1 (filter) is required.
at validate (extensions::schemaUtils:36)
at validateListenerArguments (extensions::webRequestEvent:19)
at WebRequestEventImpl.addListener (extensions::webRequestEvent:92)
at WebRequestEvent.publicClassPrototype.(anonymous function) [as addListener] (extensions::utils:138:26)
at window.onload (bkg.js:3)
I have looked at several other questions and have been unable to find out what is going on. I also did a Google search for my error and nothing came up.
bkg.js (background script)
window.onload = function(){
chrome.webRequest.onBeforeRequest.addListener(
function(details) {
var allowed = ["*://*.google.com/*", "*://*.nbclearn.com/*"];
chrome.tabs.getSelected(null, function(tab) {
var tabUrl = tab.url;
if ($.inArray(tabUrl, allowed) == -1){
return {
cancel: true
}
}
else {
return {
cancel: false
}
}
},
{urls: ["*://*/*"]},
["blocking"]);
});
};
I expected that this would allow only websites from the allowed array to load and the others would be blocked. Instead I get the error from above and the extension does nothing. What does the error I am getting mean, and what can I do to fix it?
A: Here is your code with the indentation corrected:
window.onload = function(){
chrome.webRequest.onBeforeRequest.addListener(function(details) {
var allowed = ["*://*.google.com/*", "*://*.nbclearn.com/*"];
chrome.tabs.getSelected(null, function(tab) {
var tabUrl = tab.url;
if ($.inArray(tabUrl, allowed) == -1) {
return {cancel: true}
} else {
return {cancel: false}
}
}, {urls: ["*://*/*"]}, ["blocking"]); // all these are chrome.tabs.getSelected arguments
}); //chrome.webRequest addListener arguments are missing
};
As you can see, you are passing {urls: ["*://*/*"]}, ["blocking"] as arguments to chrome.tabs.getSelected instead of to the chrome.webRequest listener. Following the documentation example, you can do:
window.onload = function(){
chrome.webRequest.onBeforeRequest.addListener(function(details) {
return {cancel: (details.url.indexOf("google.com/") == -1 && details.url.indexOf("nbclearn.com/") == -1)} },
{urls: ["<all_urls>"]},
["blocking"]);
};
In order to block all requests except those from those 2 domains.
You can use Array.prototype.every to have the whitelisted domains in an array. For example:
window.onload = function(){
var allowed = ["chrome.com/", "nbclearn.com/", "example.com/"];
chrome.webRequest.onBeforeRequest.addListener(function(details) {
var isForbidden = allowed.every(function(url) {
return details.url.indexOf(url) == -1;
});
return {cancel: isForbidden}
}, {urls: ["<all_urls>"]}, ["blocking"]);
};
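The whitelist test at the core of that listener is plain JavaScript, so it can be sanity-checked outside the extension (the URLs below are made-up examples):

```javascript
// Same every()-based check as in the listener above, extracted so it
// can run standalone under Node:
const allowed = ["chrome.com/", "nbclearn.com/", "example.com/"];

function isForbidden(url) {
  // Forbidden only when the URL contains none of the whitelisted domains.
  return allowed.every((domain) => url.indexOf(domain) === -1);
}

console.log(isForbidden("https://www.chrome.com/extensions")); // false
console.log(isForbidden("https://somewhere.else.net/page"));   // true
```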
| |
doc_23525253
|
How can I remove the debugging information when compiling go code with gc?
Note:
Using gccgo doesn't solve the problem. If I don't compile with '-g' the executable is broken and only outputs:
no debug info in ELF executable errno -1
fatal error: no debug info in ELF executable
runtime stack:
no debug info in ELF executable errno -1
panic during panic
A: I recommend usage of -ldflags="-s -w" which removes symbol table and debugging information.
As a bonus, with Go 1.13 -trimpath can be used to reduce the length of file paths stored in the binary.
A: The go linker has a flag -w which disables DWARF debugging information generation. You can supply linker flags to go tool build commands as follows:
go build -ldflags '-w'
Another approach on Linux/Unix platforms is using command strip against the compiled binary. This seems to produce smaller binaries than the above linker option.
| |
doc_23525254
|
string xml = @"<Programs><ProgramName>in.sy.prog.n.r1.test-package</ProgramName><ProgramName>un.sy.nopr.n.r1.test-package</ProgramName><ProgramName>sr.pt.mang.n.r1.test-package</ProgramName><ProgramName>in.sy.prog.n.r1.test-packageENCAP</ProgramName><ProgramName>in.sy.prog.n.r1.test-packageENCAPTwo</ProgramName><ProgramName>in.sy.prog.n.r1.test-package2</ProgramName></Programs>";
System.Xml.Linq.XDocument doc = XDocument.Parse(xml);
var programNameCount =
(from el in doc.Descendants("Programs")
where el.Element("ProgramName").Value.ToLower().StartsWith("in.")
select el.Element("ProgramName")).Count();
A: You want to get the count of ProgramName not Programs
var programNameCount = (from el in docx.Descendants("ProgramName")
where el.Value.ToLower().StartsWith("in.")
select el)
.Count();
A: I don't think that you need el.Element in either the projection or the filter. Furthermore, you want to count the Descendants of ProgramName whose value starts with in., not the Descendants of Programs.
var programNameCount = (from el in doc.Descendants("ProgramName")
where el.Value.ToLower().StartsWith("in.")
select el).Count();
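The Programs-vs-ProgramName distinction is language-neutral; as an illustration only (the C# fix above is the actual answer), the same count can be reproduced with Python's standard-library ElementTree, using a shortened copy of the question's XML:

```python
import xml.etree.ElementTree as ET

# Shortened copy of the XML from the question.
xml = ("<Programs>"
       "<ProgramName>in.sy.prog.n.r1.test-package</ProgramName>"
       "<ProgramName>un.sy.nopr.n.r1.test-package</ProgramName>"
       "<ProgramName>in.sy.prog.n.r1.test-package2</ProgramName>"
       "</Programs>")

root = ET.fromstring(xml)
# Iterate over the ProgramName elements themselves (not their parent)
# and count those whose value starts with "in.".
count = sum(1 for el in root.iter("ProgramName")
            if el.text.lower().startswith("in."))
print(count)  # 2
```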
| |
doc_23525255
|
Recently, we changed our SVN repository to use SSL and LDAP credentials, i.e. the urls have been changed from http://sunversion.url:8080/repo/trunk to https://sunversion.url:8443/repo/trunk and we now have an AD account for anonymous SVN checkouts.
In order to force Hudson to check out the head revision we added @HEAD to the urls, e.g. http://sunversion.url:8080/repo/trunk@HEAD.
Additionally, we set up the projects to poll for SVN changes every 2 minutes.
This worked well before the changes, i.e. the poll would receive update notifications and start the build. During the build the updates would then be downloaded.
After the change to SSL the polls seem broken. Builds still get the head revision using urls with @HEAD but polls for changes don't get any notifications, i.e. the log says "No changes".
Removing @HEAD from the urls makes the polls work again, but now we can't be sure that it's actually the head revision that will be used in a build.
Any ideas?
A: Seems like there was a change in the global configuration that now allows configuring the default update strategy: the Subversion Revision Policy configuration.
From the documentation:
Queue time
revision created based on build scheduled time will be used, default value.
Build time
revision created base on build run time will be used.
Head revision
HEAD revision will be used.
This still doesn't explain why @HEAD doesn't work anymore but seems to solve our problem.
I hope this will help others who are running into similar issues.
| |
doc_23525256
|
public static function getElementByAttributeValue(\DOMDocument $domNode, $attribute, $value) {
/** @var \DOMNode $node */
foreach($domNode->childNodes as $node) {
if($node->attributes && $node->attributes->length > 0) {
$attrValue = self::getAttribute($attribute, $node->attributes);
if($attrValue && strcmp($attrValue, $value) == 0) {
return $node;
}
}
if($node->hasChildNodes()) {
return self::getElementByAttributeValue($node, $attribute, $value);
}
}
}
this returns NULL even if the element is present in DOMDocument.
and I also tried this:
$xpath = new \DOMXPath($domNode);
return $xpath->query("[@" . $attribute . "=\"" . $value . "\"]")->item(0);
the xpath->query returns false and it fails in getting ->item out of false.
Any solutions please?
A: Ah, the second solution works fine, but I needed to do this:
$xpath = new \DOMXPath($domNode);
return $xpath->query("//*[@" . $attribute . "=\"" . $value . "\"]")->item(0);
Notice the //* before the xpath expression
| |
doc_23525257
|
string[] BeneficiaryFullName1 = ex.BeneficiaryFullName1.Split(' ');
if (BeneficiaryFullName1 != null ||
BeneficiaryFullName1.Length >= 0 &&
BeneficiaryFullName1.Length < 1)
{
ex.BeneficiaryFullName1 = BeneficiaryFullName1[0];
}
if (BeneficiaryFullName1 != null ||
BeneficiaryFullName1.Length >= 0 &&
BeneficiaryFullName1.Length < 2)
{
ex.BeneficiaryFullName1 = BeneficiaryFullName1[0];
ex.BeneficiaryFullName2 = BeneficiaryFullName1[1];
}
However, I am not getting it right. When the split returns an empty result, it still goes into the second if statement and gives an error. What am I doing wrong?
A: Remove BeneficiaryFullName1 != null (which is redundant) and BeneficiaryFullName1.Length < ... (which is erroneous: if we have a name with, say, 4 words we can and should take first two of them):
// We want at most 3 items, empty ones (e.g. trailing spaces) removed
string[] BeneficiaryFullName1 = ex.BeneficiaryFullName1.Split(
' ', 3, StringSplitOptions.RemoveEmptyEntries);
// If we have at least 2 items, take the 2nd
if (BeneficiaryFullName1.Length >= 2)
ex.BeneficiaryFullName2 = BeneficiaryFullName1[1];
// If we have at least 1 item, take the 1st
if (BeneficiaryFullName1.Length >= 1)
    ex.BeneficiaryFullName1 = BeneficiaryFullName1[0];
| |
doc_23525258
|
<!ELEMENT note (to,from,heading,body)>
<!ELEMENT to (#PCDATA)>
<!ATTLIST to
type CDATA #FIXED "email"
default CDATA #FIXED "you@foo.bar"
>
<!ELEMENT from (#PCDATA)>
<!ATTLIST from
type CDATA #FIXED "email"
default CDATA #FIXED "me@foo.bar"
>
<!ELEMENT heading (#PCDATA)>
<!ATTLIST heading
type CDATA #FIXED "string"
>
<!ELEMENT body (#PCDATA)>
<!ATTLIST body
type CDATA #FIXED "string"
>
Trying to load this data with MSXML fails with error message "Cannot have a DTD declaration outside of a DTD".
When wrapping the DTD into some DOCTYPE element like this:
<?xml version="1.0"?>
<!DOCTYPE note [
... content of DTD file above ...
]>
<note/>
the parser succeeds but does not allow to iterate over the DTD structure. The (Delphi) code to load and display the document looks like this:
var
XmlDoc : iXmlDomDocument2;
i : integer;
begin
XmlDoc := CoDomDocument60.Create();
XmlDoc.SetProperty('NewParser', FALSE);
XmlDoc.SetProperty('ProhibitDTD', FALSE);
XmlDoc.Async := FALSE;
XmlDoc.ValidateOnParse := FALSE;
XmlDoc.Load(FileName);
if ( XmlDoc.ParseError.ErrorCode <> 0 ) then
raise Exception.Create(XmlDoc.ParseError.Reason);
// This will display "note"
ShowMessage('DocType: ' + XmlDoc.Doctype.Name);
// This will display the embedded DTD as a string
ShowMessage('DTD: ' + XmlDoc.Doctype.Xml);
// These loops will display nothing
for i := 0 to XmlDoc.Doctype.Entities.Length-1 do
ShowMessage('Entity: ' + XmlDoc.Doctype.Entities[i].NodeName);
for i := 0 to XmlDoc.Doctype.Notations.Length-1 do
ShowMessage('Notation: ' + XmlDoc.Doctype.Notations[i].NodeName);
for i := 0 to XmlDoc.Doctype.ChildNodes.Length-1 do
ShowMessage('Child: ' + XmlDoc.Doctype.ChildNodes[i].NodeName);
end;
Is there a way to iterate over the DTD node structure using MSXML6?
| |
doc_23525259
|
<div style="position: relative; margin: 0 auto; max-width: 800px; max-height: 600px; overflow-x: hidden; overflow-y: scroll; background-image: url('/design/clan_flag.gif'); background-size: 100%;">
<img src="/design/clan_flag.gif" width="100%" style="visibility: hidden;" />
<div style="position: absolute;">
a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>a<br>
</div>
http://jsfiddle.net/JDbHw/
As you can see, the text can't start at the top because of the "helper" image. How do I align it to the top? In addition, depending on the browser width, a horizontal scrollbar shows up (I guess because of the inner helper image too).
A: To position the div with text on top of the image you can:
*add top: 0px to the styling in conjunction with position: absolute
*or use position: relative on a parent div wrapping the two (text and picture) divs and give the text div a higher z-index
| |
doc_23525260
|
Starting here:
and finishing with this:
A: If a macro function is OK, here is sample code:
Sub Split()
Dim meargedline As String
Dim rowNumber As Integer
rowNumber = 2
Dim splitData
For i = 2 To 4
meargedline = Cells(i, 3)
splitData = VBA.Split(meargedline, Chr(10))
For j = LBound(splitData, 1) To UBound(splitData, 1)
Cells(j + rowNumber, 4) = Cells(i, 1)
Cells(j + rowNumber, 5) = Cells(i, 2)
Cells(j + rowNumber, 6) = splitData(j)
Next j
rowNumber = rowNumber + UBound(splitData, 1) + 1
Next i
End Sub
A: This will do a basic loop through and generate an output. Note the constants and make sure you adjust them for your workbook. You can see an example of it here.
Sub runSplitter()
Const topRightCellAddress = "E20"
Const startCellToSetValues = "A1" 'where new rows will be placed
Const sheetOneName = "Start" 'make sure these match"
Const sheetTwoName = "Output"
Const codeOfLineSplitter = 10 'asci code line splitter
Dim firstSheet As Worksheet, secondSheet As Worksheet
Set firstSheet = Sheets(sheetOneName)
Set secondSheet = Sheets(sheetTwoName)
Dim aCell As Range
Set aCell = firstSheet.Range(topRightCellAddress)
Dim aRR() As String
Dim r As Long
Do While Not IsEmpty(aCell)
aRR = Split(aCell.Value2, Chr(codeOfLineSplitter), -1)
Dim i As Long
With secondSheet.Range(startCellToSetValues)
For i = LBound(aRR) To UBound(aRR)
.Offset(r, 0).Value2 = aCell.Offset(0, -2).Value2
.Offset(r, 1).Value2 = aCell.Offset(0, -1).Value2
.Offset(r, 2).Value2 = aRR(i)
r = r + 1
Next i
End With
Set aCell = aCell.Offset(1, 0)
Loop
End Sub
A: You can use Power Query to achieve the target you want. Click here for reference: https://www.youtube.com/watch?v=wJ6y2anloW4.
| |
doc_23525261
|
I've been trying to get the year (année) and the "kilométrage".
so far ive been trying
année = page2.find(
but I can't seem to find how to get those specific data out of the box. I did try
everything = page2.find("div", class_= "box").text
but when I create the new Excel file, the box content all goes in the same cell, which I guess is normal, but it's not what I need. I only want the "année" and "kilométrage" data, in separate columns in the new file.
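Without the page's actual markup it is hard to give exact selectors, but assuming the box lays out each field as label/value pairs of tags (the HTML below is invented for illustration), one sketch with BeautifulSoup:

```python
from bs4 import BeautifulSoup

# Invented markup standing in for the real page.
html = """
<div class="box">
  <span>Année</span><span>2015</span>
  <span>Kilométrage</span><span>120 000 km</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
box = soup.find("div", class_="box")

# Walk the label/value pairs instead of taking .text of the whole box,
# so each field can go into its own column later.
texts = [s.get_text(strip=True) for s in box.find_all("span")]
fields = dict(zip(texts[::2], texts[1::2]))

annee = fields.get("Année")
kilometrage = fields.get("Kilométrage")
print(annee, kilometrage)
```

The same two variables can then be written to separate columns of the output spreadsheet.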
| |
doc_23525262
|
Input :
2018-04-12 14:43 Error Hello
2018-04-13 11:33 Error Hello1
2018-04-14 15:43 Error Hello2
2018-04-14 12:22 Error Hello3
2018-04-15 19:44 Error Hello4
2018-04-16 16:43 Error Hello5
Output :
2018-04-13 11:33 Error Hello1
2018-04-14 15:43 Error Hello2
2018-04-14 12:22 Error Hello3
Note: I have tried the sed command below, but it shows blank output because the mentioned times are not present in the file.
sed -n '/2018-04-12 14:50/,/2018-04-14 14:20/p' log_file
A: awk provides string comparison with the > and < operators and string concatenation by simply joining adjacent strings. A simple version to collect entries between "2018-04-12 14:50" and "2018-04-14 14:20" could be:
$ awk '$1" "$2 > "2018-04-12 14:50" && $1" "$2 < "2018-04-14 14:20"' log
2018-04-13 11:33 Error Hello1
2018-04-14 12:22 Error Hello3
(note: "2018-04-14 15:43 Error Hello2" does not fall within the requested range)
A: The line with 2018-04-14 15:43 from your sample does not fall within the range you specified in your sed command.
Anyway, here's what I've got:
awk -v a="2018-04-12 14:50" -v b="2018-04-14 14:20" \
'$1 " " $2>=a{n=1} $1 " " $2>b{n=0} n' log_file
Or, broken out for easier reading (and commenting):
awk -v a="2018-04-12 14:50" -v b="2018-04-14 14:20" '
$1 " " $2 >= a { n=1 } # If the current line is greater than our start, set mark
$1 " " $2 > b { n=0 } # If the current line is greater than our end, unset mark
n # If our mark is set, print the line
' log_file
This solution evaluates the first two "words" on each line against the input variables you set with awk's -v option.
This works because awk's > operator evaluates sorting order when used with strings, and your dates are thankfully ISO 8601, so sorting works.
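A quick way to convince yourself of that claim, independent of the log file:

```shell
# ISO 8601 timestamps sort lexicographically in chronological order,
# so awk's string '<' comparison is all that is needed:
printf '%s\n' '2018-04-12 14:50' '2018-04-14 14:20' |
awk 'NR==1 { a = $0 } NR==2 { if (a < $0) print "earlier"; else print "later" }'
# prints: earlier
```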
A: Assuming the datetime fields are sequential in nature (your third line is out of order so I'm assuming here it should have the date 2018-04-13, which I've modified it to), you can do this with a simple awk command as per the following transcripts (ignore the pax> bit, that's my prompt):
pax> awk '$1"_"$2>"2018-04-14_14:20"{exit} $1"_"$2>="2018-04-12_14:50"{print}' infile
2018-04-13 11:33 Error Hello1
2018-04-13 15:43 Error Hello2
2018-04-14 12:22 Error Hello3
The first clause simply exits when you find a date beyond the desired end. The second clause will (if the first clause hasn't yet caused an exit) print out each line where the date and time is at or beyond the start.
If those lines really are allowed to be out of order and you want lines within the date range wherever they may be in the file, you just have to process the entire file printing out those that match:
pax> awk '$1"_"$2>="2018-04-12_14:50"&&$1"_"$2<="2018-04-14_14:20"{print}' infile
2018-04-13 11:33 Error Hello1
2018-04-13 15:43 Error Hello2
2018-04-14 12:22 Error Hello3
Changing back to your original file, with the date of the third line being out of order, gives you the correct output in that case as well:
2018-04-13 11:33 Error Hello1
2018-04-14 12:22 Error Hello3
A: $ awk -v beg='2018-04-12 14:50' -v end='2018-04-14 14:20' '{cur=$1" "$2} beg<=cur && cur<=end' file
2018-04-13 11:33 Error Hello1
2018-04-14 12:22 Error Hello3
| |
doc_23525263
|
Sys.setenv(ENV_VAR = 'foo')
#' @export
my_function <- function(){
v <- Sys.getenv(ENV_VAR)
if (v == 'foo') ... else if (v == 'bar') ...
}
but when I build and reload the package in RStudio, running Sys.getenv("ENV_VAR") gives "", i.e. loading the package did not set the environment variable ENV_VAR to foo. Predictably, my_function also raises an error: Error in Sys.getenv(ENV_VAR) : object 'ENV_VAR' not found
A: Just as @joran commented, the .onLoad function is what I need.
| |
doc_23525264
|
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import datetime as dt
import pandas as pd
now_date = dt.datetime(2018,10,1,9)
d_tw_ini = now_date - dt.timedelta(hours = 1)
d_tw_fin = now_date + dt.timedelta(hours = 3)
dts = pd.date_range(start=d_tw_ini, end=d_tw_fin, freq='1H', name='ini', closed='left')
data=pd.DataFrame({'val':[0.5,0.4,0.7,0.9]})
ev1=[dt.datetime(2018,10,1,9,5),dt.datetime(2018,10,1,10,50)]
data['t']=dts.values
data.set_index('t',inplace=True)
fig = plt.figure()
gs = GridSpec(1, 1)
ax_1 = fig.add_subplot(gs[0, 0])
data.plot(ax=ax_1, y='val')
ax_1.axvspan(ev1[0],ev1[1], alpha=0.3, color= 'red')
Result
A: Juan, it looks like when you use pandas to plot, the hourly indexing causes issues with how axvspan gets plotted.
I replaced
data.plot(ax=ax_1, y='val')
with
ax_1.plot(data.index, data['val'])
which generates the image below, but unfortunately you lose the automated x-axis formatting.
Adding the two lines below will result in the same date formatting as your example.
ax_1.set_xticks([x for x in data.index])
ax_1.set_xticklabels([str(x)[11:16] for x in data.index])
Below is the full code to produce the above plot.
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
import datetime as dt
import pandas as pd
now_date = dt.datetime(2018,10,1,9)
d_tw_ini = now_date - dt.timedelta(hours = 1)
d_tw_fin = now_date + dt.timedelta(hours = 3)
dts = pd.date_range(start=d_tw_ini, end=d_tw_fin, freq='1h', name='ini',
closed='left')
data=pd.DataFrame({'val':[0.5,0.4,0.7,0.9]})
ev1=[dt.datetime(2018,10,1,9,5,0),dt.datetime(2018,10,1,10,50,0)]
data['t']=dts.values
data.set_index('t',inplace=True)
fig = plt.figure()
gs = GridSpec(1, 1)
ax_1 = fig.add_subplot(gs[0, 0])
# modified section below
ax_1.plot(data.index, data['val'])
ax_1.axvspan(ev1[0],ev1[1], alpha=0.3, color= 'red')
ax_1.set_xticks([x for x in data.index])
ax_1.set_xticklabels([str(x)[11:16] for x in data.index])
plt.show()
| |
doc_23525265
|
x <- stringdist_inner_join(tbl.nomatch, tbl.reuters, by ="company", method="jw")
Thank you very much.
| |
doc_23525266
|
After a recent update, one of the file extensions was removed, and another one uses a new icon instead of the previous one.
But after opening the new app, it seems the OS still remembers the old plist config. I tried restarting the Launch Services, but it didn't work.
What should I do to reset it? By code or by terminal, both are OK.
| |
doc_23525267
|
I am a bit confused about what to use at which level and how to integrate them together.
My specific question is:
Which library (if there is any) should I use for sending data to a remote server and getting data back (which will be in JSON format)?
Or is there any sample demo app?
A: For creating a fully fledged application you need to maintain a server database and a local Android SQLite database with the same schema. A background service can capture the GPS location whenever the device moves and save it both locally and on the real server. If the device is in offline mode, it uses its local SQLite database and syncs to the server as soon as it comes back online.
A: You might try the FlexJSON library.
It is pretty nice if you need to deal with serialization/deserialization of data.
| |
doc_23525268
|
class MyForm(forms.Form):
def clean_f(self):
f = self.cleaned_data['f']
if f.count('%'):
f = f.replace('%', '')
return f
This doesn't change the form. I want the user to see the 'stripped' value, but it always shows the submitted value.
Is it possible to do this with a simple form clean_xxx method?
Otherwise I will use my AJAX form processor.
Thanks
| |
doc_23525269
|
import { IonContent } from "@ionic/angular";
export class ChatroomPage implements OnInit {
messageForm: FormGroup;
messages: any[];
messenger: any;
@ViewChild(IonContent) content: IonContent;
constructor(
private navExtras: NavExtrasService,
private api: RestApiService,
private httpNative: HTTP
) { }
ngOnInit() {
this.content.scrollToBottom(300);
}
}
In the html file:
<ion-header>
<ion-toolbar color="primary">
<ion-title>Chatroom</ion-title>
</ion-toolbar>
</ion-header>
<!-- display previous message -->
<ion-content padding id="content">
<ion-list>
<ion-item *ngFor="let message of messages">
{{ message.message }}
</ion-item>
</ion-list>
</ion-content>
<!-- chat message input -->
<ion-footer>
<form [formGroup]="messageForm" (submit)="sendMessage()" (keydown.enter)="sendMessage()">
<ion-input formControlName="message" type="text" placeholder="Enter your message"></ion-input>
<ion-button type="submit">Send</ion-button>
</form>
</ion-footer>
The error displayed is:
ng:///ChatroomPageModule/ChatroomPage_Host.ngfactory.js:5 ERROR TypeError: Cannot read property 'scrollToBottom' of undefined
Please enlighten me as to what I did wrong. Most tutorials I found use Ionic 3 with Content from ionic-angular instead of IonContent from @ionic/angular. I cannot seem to use Content in Ionic 4, as it doesn't have the scrollToBottom method.
A: Most of your code is fine. You just need to make 2 changes and that should work for you in Ionic 4. Here are the changes:
Change 1 (HTML FILE):
Replace:
<ion-content padding id="content">
with:
<ion-content padding #content>
Change 2 (TS FILE):
Replace:
scrollToBottomOnInit() {
this.content.scrollToBottom(300);
}
with:
scrollToBottomOnInit() {
setTimeout(() => {
if (this.content.scrollToBottom) {
this.content.scrollToBottom(400);
}
}, 500);
}
NOTE:
If you do not import IonContent (similar to the way you already did), the code will fail to compile and you will see console errors such as this:
ERROR Error: Uncaught (in promise): ReferenceError: Cannot access 'MessagesPageModule' before initialization
where MessagesPageModule is the Module associated with the page that you are trying to implement the feature in.
A: Tomas Vancoillie is right, but when you add new text to the list, it won't push it up above the input text. Therefore, to push text to the array and update the view back to the bottom, use NgZone.
1. Import:
import { Component, ViewChild, NgZone } from '@angular/core';
2. In the constructor, add:
public _zone: NgZone
3. Call your function:
this._zone.run(() => {
  setTimeout(() => {
    this.contentchat.scrollToBottom(300);
  });
});
A: This works for me on December 2019.
.html
<ion-content #content>
</ion-content>
.ts
@ViewChild('content', { static: false }) content: IonContent;
constructor(){}
ngOnInit(): void {
this.scrollToBottom();
}
scrollToBottom(): void {
this.content.scrollToBottom(300);
}
A: You can reach the bottom of the content with the method scrollToBottom()
scrollToBottom(duration?: number) => Promise<void>
Add an ID to the ion-content
<ion-content #content>
</ion-content>
Get the content ID in .ts and call the scrollToBottom method with a chosen duration
@ViewChild('content') private content: any;
ngOnInit() {
this.scrollToBottomOnInit();
}
scrollToBottomOnInit() {
this.content.scrollToBottom(300);
}
https://ionicframework.com/docs/api/content
EDIT:
ViewChild gets the correct data with the provided content ID
@ViewChild('content') private content: any;
ngOnInit vs ionViewDidEnter / ionViewWillEnter
ngOnInit doesn't trigger if you come back from a navigation stack, ionViewWillEnter / ionViewDidEnter will. So if you place the function in ngOnInit, the scrollToBottom won't work if you navigate back.
A: @ViewChild(IonContent) content: IonContent;
scrollToBottom() {
setTimeout(() => {
if (this.content.scrollToBottom) {
this.content.scrollToBottom();
}
}, 400);
}
Anywhere in function use:
this.scrollToBottom();
A: Due to recent changes in Ionic 4, I found the code in the suggested answer no longer works for me. Hope this helps all the newcomers.
import { IonContent } from '@ionic/angular';
export class IonicPage implements OnInit {
@ViewChild(IonContent, {read: IonContent, static: false}) myContent: IonContent;
constructor() {}
ScrollToBottom(){
setTimeout(() => {
this.myContent.scrollToBottom(300);
}, 1000);
}
}
No id is specified in the .html file for <ion-content>.
Official documentation refers to ion-content.
Ionic version used listed below at the time of this post.
Ionic CLI : 5.4.13
Ionic Framework : @ionic/angular 4.11.3
@angular/cli : 8.1.3
A: This finally worked for me. You can try it out.
.ts
import { Component, OnInit, ViewChild, NgZone } from '@angular/core';
/.. class declaration .../
@ViewChild('content') content : IonContent;
constructor(public _zone: NgZone){
}
ngOnInit(): void {
this.scrollToBottom();
}
scrollToBottom()
{
this._zone.run(() => {
const duration : number = 300;
setTimeout(() => {
this.content.scrollToBottom(duration).then(()=>{
setTimeout(()=>{
this.content.getScrollElement().then((element:any)=>{
if (element.scrollTopMax != element.scrollTop)
{
// trigger scroll again.
this.content.scrollToBottom(duration).then(()=>{
// loaded... do something
});
}
else
{
// loaded... do something
}
});
});
});
},20);
});
}
A: It worked for me using the AfterViewChecked implementation from the Angular lifecycle,
in Angular 9 as of 30/10/2020.
1. Import:
import { Component, OnInit, ViewChild, AfterViewChecked } from '@angular/core';
import { IonContent } from '@ionic/angular';
2. Implement AfterViewChecked:
export class PublicationsProductPage implements AfterViewChecked {
3. Create the scrollToBottom method:
scrollToBottom() { this.content.scrollToBottom(); }
4. Call the scrollToBottom method from the AfterViewChecked implementation:
ngAfterViewChecked(){ this.scrollToBottom(); }
With this code you ensure it always scrolls to the end of the ion-content.
| |
doc_23525270
|
java.lang.NoClassDefFoundError: com/sun/javafx/scene/control/skin/TableColumnHeader
at tornadofx.NodesKt.isInsideRow(Nodes.kt:492)
[...]
In the code below if I use onDoubleClick it works, but I'd like to be able to use onUserSelect or at least understand why this doesn't work.
package com.example.demo.app
import tornadofx.*
class MainView : View("listview demo") {
val things = SortedFilteredList<String>()
init {
things.add("aaa")
things.add("bbb")
}
override val root = listview(things) {
onUserSelect {
println("user select")
}
/*
onDoubleClick {
println("double click")
}
*/
}
}
class MyApp: App(MainView::class)
Running ubuntu 18.04.3. Building with gradle 5.6.3, kotlin 1.3.50, tornadofx 1.7.19. The gradle javafxplugin is getting the default javafx but I have also tried specifying versions 11-13 explicitly and get the same behavior. I also tried installing ubuntu openjfx package version 11.0.2+1-1~18.04.2.
A: Sounds like you're trying to run TornadoFX 1 with JDK/JavaFX newer than 8. Please either downgrade Java/JavaFX to 8, or run with TornadoFX 2.0.0-SNAPSHOT, which is available from oss.sonatype.org. This version supports Java/JavaFX 13.
| |
doc_23525271
|
My problem is that it uses comma as the decimal character and I can't get readHTMLTable to handle it correctly. The values end up as factor instead of numeric. This could be solved externally, but I would like to do it all in R.
I tried to pass dec="," in the hope that the ellipsis would pass it down the execution pipe, but it didn't work.
My next test was inspired by the help for readHTMLTable: I tried using elFun
library(XML)
tryAsNumeric <- function(node) {
val = xmlValue(node)
ans = as.numeric(gsub(",", ".", val))
if(is.numeric(ans))
ans
else
val
}
tmp_list <- readHTMLTable("teeChart.xls", elFun = tryAsNumeric)
And ended up with this message
There were 50 or more warnings (use warnings() to see the first 50)
> warnings()
Warning messages:
1: In (function (node) ... : NAs introduced by coercion
2: In (function (node) ... : NAs introduced by coercion
3: In (function (node) ... : NAs introduced by coercion
4: In (function (node) ... : NAs introduced by coercion
Truncated list for brevity.
Here is a reduced table for reproducibility. (teeChart.xls)
<table border="1">
<tr><td></td><td>Lägenhet 053</td><td></td><td>Lägenhet 054</td><td></td><td>Lägenhet 055</td><td></td></tr>
<tr><td>Index</td><td>X</td><td>Y</td><td>X</td><td>Y</td><td>X</td><td>Y</td></tr>
<tr><td>0</td><td>42309</td><td>20,8249988555908</td><td>42309</td><td>20,2000007629395</td><td>42309</td><td>22,2000007629395</td></tr>
<tr><td>1</td><td>42309,0416666667</td><td>20,7000007629395</td><td>42309,0416666667</td><td>20,2000007629395</td><td>42309,0416666667</td><td>22,125</td></tr>
<tr><td>2</td><td>42309,0833333333</td><td>20,6000003814697</td><td>42309,0833333333</td><td>20,2000007629395</td><td>42309,0833333333</td><td>22,0249996185303</td></tr>
</table>
A: Set colClasses? Also from the help ?readHTMLTable:
library(XML)
tryAsNumeric <- function(node) {
val = xmlValue(node)
ans = as.numeric(gsub(",", ".", val))
if(all(is.numeric(ans)))
ans
else
val
}
txt <- readLines(n=7)
<table border="1">
<tr><td></td><td>Lägenhet 053</td><td></td><td>Lägenhet 054</td><td></td><td>Lägenhet 055</td><td></td></tr>
<tr><td>Index</td><td>X</td><td>Y</td><td>X</td><td>Y</td><td>X</td><td>Y</td></tr>
<tr><td>0</td><td>42309</td><td>20,8249988555908</td><td>42309</td><td>20,2000007629395</td><td>42309</td><td>22,2000007629395</td></tr>
<tr><td>1</td><td>42309,0416666667</td><td>20,7000007629395</td><td>42309,0416666667</td><td>20,2000007629395</td><td>42309,0416666667</td><td>22,125</td></tr>
<tr><td>2</td><td>42309,0833333333</td><td>20,6000003814697</td><td>42309,0833333333</td><td>20,2000007629395</td><td>42309,0833333333</td><td>22,0249996185303</td></tr>
</table>
doc <- htmlParse(txt, asText=TRUE)
( res <- readHTMLTable(doc, elFun = tryAsNumeric, colClasses = rep("numeric", 7)) )
# $`NULL`
# NA NA NA NA NA NA NA
# 1 NA NA NA NA NA NA NA
# 2 0 42309.00 20.825 42309.00 20.2 42309.00 22.200
# 3 1 42309.04 20.700 42309.04 20.2 42309.04 22.125
# 4 2 42309.08 20.600 42309.08 20.2 42309.08 22.025
str(res)
# List of 1
# $ NULL:'data.frame': 4 obs. of 7 variables:
# ..$ NA: num [1:4] NA 0 1 2
# ..$ NA: num [1:4] NA 42309 42309 42309
# ..$ NA: num [1:4] NA 20.8 20.7 20.6
# ..$ NA: num [1:4] NA 42309 42309 42309
# ..$ NA: num [1:4] NA 20.2 20.2 20.2
# ..$ NA: num [1:4] NA 42309 42309 42309
# ..$ NA: num [1:4] NA 22.2 22.1 22
library(XML)
txt <- readLines(n=7)
<table border="1">
<tr><td></td><td>Lägenhet 053</td><td></td><td>Lägenhet 054</td><td></td><td>Lägenhet 055</td><td></td></tr>
<tr><td>Index</td><td>X</td><td>Y</td><td>X</td><td>Y</td><td>X</td><td>Y</td></tr>
<tr><td>0</td><td>42309</td><td>20,8249988555908</td><td>42309</td><td>20,2000007629395</td><td>42309</td><td>22,2000007629395</td></tr>
<tr><td>1</td><td>42309,0416666667</td><td>20,7000007629395</td><td>42309,0416666667</td><td>20,2000007629395</td><td>42309,0416666667</td><td>22,125</td></tr>
<tr><td>2</td><td>42309,0833333333</td><td>20,6000003814697</td><td>42309,0833333333</td><td>20,2000007629395</td><td>42309,0833333333</td><td>22,0249996185303</td></tr>
</table>
doc <- htmlParse(txt)
m <- as.matrix(readHTMLTable(doc, which=1))
colnames(m) <- m[1,]
m <- m[-1, ]
m <- gsub(",", ".", m)
as.data.frame(structure(as.numeric(m), .Dim=dim(m), .Dimnames = dimnames(m)))
# Index X Y X Y X Y
# 1 0 42309.00 20.825 42309.00 20.2 42309.00 22.200
# 2 1 42309.04 20.700 42309.04 20.2 42309.04 22.125
# 3 2 42309.08 20.600 42309.08 20.2 42309.08 22.025
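The convert-or-fall-back idea in `tryAsNumeric` (swap the decimal comma for a dot, attempt a numeric conversion, keep the original string on failure) can be sketched in Python; the helper name is illustrative, not part of any package:

```python
def try_as_numeric(val):
    """Parse a comma-decimal string as a float; return the input unchanged on failure."""
    try:
        return float(val.replace(",", "."))
    except ValueError:
        return val

print(try_as_numeric("20,825"))  # 20.825
print(try_as_numeric("Index"))   # Index
```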
| |
doc_23525272
|
if ( get_option( 'woocommerce_enable_review_rating' ) === 'yes' ) {
$comment_form['comment_field'] = '<div class="comment-form-rating"><label for="rating">' . esc_html__( 'Your rating', 'woocommerce' ) . '</label><select name="rating" id="rating" required>
<option value="">' . esc_html__( 'Rate…', 'woocommerce' ) . '</option>
<option value="5">' . esc_html__( 'Perfect', 'woocommerce' ) . '</option>
<option value="4">' . esc_html__( 'Good', 'woocommerce' ) . '</option>
<option value="3">' . esc_html__( 'Average', 'woocommerce' ) . '</option>
<option value="2">' . esc_html__( 'Not that bad', 'woocommerce' ) . '</option>
<option value="1">' . esc_html__( 'Very poor', 'woocommerce' ) . '</option>
</select></div>';
}
I have tried this code from functions.php, which is:
add_action('woocommerce_after_shop_loop_item', 'my_print_stars' );
function my_print_stars(){
global $wpdb;
global $post;
$count = $wpdb->get_var("
SELECT COUNT(meta_value) FROM $wpdb->commentmeta
LEFT JOIN $wpdb->comments ON $wpdb->commentmeta.comment_id = $wpdb->comments.comment_ID
WHERE meta_key = 'rating'
AND comment_post_ID = $post->ID
AND comment_approved = '1'
AND meta_value > 0
");
$rating = $wpdb->get_var("
SELECT SUM(meta_value) FROM $wpdb->commentmeta
LEFT JOIN $wpdb->comments ON $wpdb->commentmeta.comment_id = $wpdb->comments.comment_ID
WHERE meta_key = 'rating'
AND comment_post_ID = $post->ID
AND comment_approved = '1'
");
if ( $count > 0 ) {
$average = number_format($rating / $count, 2);
echo '<div class="starwrapper" itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">';
echo '<span class="star-rating" title="'.sprintf(__('Rated %s out of 5', 'woocommerce'), $average).'"><span style="width:'.($average*16).'px"><span itemprop="ratingValue" class="rating">'.$average.'</span> </span></span>';
echo '</div>';
}
}
But this does nothing. I actually want to show a star rating instead of that five-option rating dropdown. I am a noob in WooCommerce; please help me in this regard. Thank you.
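For reference, the arithmetic the snippet above performs (average = sum of ratings / count, formatted to two decimals, with a star-bar width of 16 px per rating point, matching the PHP) can be sketched in Python:

```python
def star_summary(ratings):
    """Return (formatted average, pixel width) for a list of 1-5 star ratings."""
    if not ratings:
        return None  # no approved ratings: render nothing, as the PHP count guard does
    average = sum(ratings) / len(ratings)
    return f"{average:.2f}", average * 16  # 16 px per star, as in the PHP width calc

print(star_summary([5, 4, 4]))
```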
| |
doc_23525273
|
"Oracle Database does not drop users whose schemas contain objects unless you specify CASCADE".
My understanding is that all schemas must have an associated user account (though that account can be restricted from use by restricting CREATE SESSION command, etc...). But what actually happens to previously owned objects when you use the DROP USER command? If the user no longer exists... then what account owns those objects?
For Context, I'm a developer not a DBA and don't have DCL rights on my databases so I can't test this out myself. I am working a migration project where this command may be necessary but I'd like to better understand the implications before passing a request along to my Enterprise DBA team.
A: If you attempt to drop a user (e.g. THE_USER) without specifying CASCADE you will get the following error:
ORA-01922: CASCADE must be specified to drop 'THE_USER'
Either use the CASCADE option with your DROP USER statement (which drops the user together with every object in that user's schema, so nothing is left ownerless) or manually remove all of the user's objects before dropping the user.
HTH
| |
doc_23525274
|
inline void func_1 (int a)
{
if(a==1)
{
other_func1();
}
else
{
other_func2();
}
}
and I use in the Main like this:
int main()
{
func_1(1);
func_1(42);
return 0;
}
I use GCC; I think the compiled code looks like this (at the source level):
int main()
{
other_func1();
other_func2();
return 0;
}
Is it true or am I wrong?
A: Yes, in general gcc will optimise away dead code in inline functions when it can evaluate branches at compile-time. I use this construct a lot to allow optimised code to be generated for different use cases - somewhat like template instantiation in C++.
| |
doc_23525275
|
My current problem is that certain modules do not have any content that is picked up by the descriptor, which then causes the assembly plugin to fail because the assembly is empty.
Is there any way to keep the assembly plugin from falling over when an assembly is empty? I looked at the parameters for the single goal & couldn't find anything. I don't want to have to manually enable/disable this assembly on my individual modules, I want to configure the assembly on the parent and the child modules that don't have any content to skip creation on the assembly instead of failing.
A: You could move your assembly plugin configuration into a <pluginManagement> section in the parent, then only specify the assembly plugin in the child modules that require it.
Parent:
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>${your.version}</version>
<configuration>
<!-- whatever you have here now -->
</configuration>
<executions>
<!-- whatever you have here now -->
</executions>
</plugin>
</plugins>
</pluginManagement>
Child that needs to create assembly:
<build>
<plugins>
<!-- Pull in config from the parent -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
</plugin>
.....
</plugins>
.....
</build>
| |
doc_23525276
|
Thanks,
A: Running the Grails command grails install-templates will copy into src/templates the template files used by all of the other Grails commands to generate files. You can then modify them however you wish.
Also, you may wish to look at some of the scaffolding plugins, such as Grails Twitter Bootstrap Scaffolding or Enhanced Scaffolding.
| |
doc_23525277
|
A: If you map to a type-erased container like boost::any, you can at least recover the type if you know what it is:
std::map<std::type_index, boost::any> m;
m[typeid(Foo)] = Foo(1, true, 'x');
Foo & x = boost::any_cast<Foo&>(m[typeid(Foo)]);
A: You could use a shared_ptr<void>:
std::map<std::type_index, std::shared_ptr<void>> m;
m[typeid(T)] = std::make_shared<T>(...);
auto pT = std::static_pointer_cast<T>(m[typeid(T)]); // pT is std::shared_ptr<T>
Of course you would add some wrapper to ensure that the two Ts per line match and you don't accidentally access an empty shared_ptr<void>.
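For comparison, the same pattern (a heterogeneous store keyed by runtime type) can be sketched in Python, where a plain dictionary plays the role of map<type_index, any> and a KeyError plays the role of a failed cast:

```python
class TypeMap:
    """Store at most one instance per concrete type."""
    def __init__(self):
        self._store = {}

    def put(self, obj):
        self._store[type(obj)] = obj   # type(obj) plays the role of typeid(Foo)

    def get(self, cls):
        return self._store[cls]        # raises KeyError if nothing of that type is stored

m = TypeMap()
m.put(42)
m.put("hello")
print(m.get(int))  # 42
```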
| |
doc_23525278
|
I use RadStudio XE5 with update 1. iOS is version 7.0.4 on new iPad.
A: This sounds like an out of memory event. In your simulator, from the menu, choose "Simulate memory warning". Do this 2 or 3 times in a row. If you have an issue handling memory warnings this will "crash" your app.
If this is the case, follow this guide to reduce the memory usage of your app: http://www.raywenderlich.com/2696/
You should also look at this page about memory and performance:
https://developer.apple.com/library/ios/documentation/iphone/conceptual/iphoneosprogrammingguide/PerformanceTuning/PerformanceTuning.html
A: OK... My application was built with many different forms. Each one was loaded and destroyed dynamically, but whatever I did, it didn't help.
Finally, I rebuilt the application to use tabs instead of forms. Now the application uses almost the same memory on my iPad (much less than other applications) and, fortunately, does not crash.
SO... if you have a problem with memory in your application and nothing helps, try using tabs instead of forms. Even if it takes the same memory, it seems that the management is different and it works better.
M.
| |
doc_23525279
|
I want to know if TLS1.2 is not enabled how can I enable the same on BizTalk 2016 server.
A: As BizTalk 2016 is on .NET 4.6, it tries to use TLS 1.2 first but falls back to TLS 1.1 and TLS 1.0 unless they are disabled.
MS16-065: Description of the TLS/SSL protocol information disclosure vulnerability (CVE-2016-0149): May 10, 2016
Note The .NET Framework 4.6 and later versions use TLS 1.2, TLS 1.1, and TLS 1.0 as the protocol defaults. This is discussed in the Microsoft Security Advisory 2960358 topic on the Microsoft TechNet website.
If you want to make BizTalk use TLS 1.2 exclusively, you need to make sure you have either Feature Pack 2 or Feature Pack 3 (I would always recommend the latest), or, if not installing feature packs, install CU5 for BizTalk 2016. You also need to ensure that any system BizTalk connects to, including the BizTalk database server, supports TLS 1.2.
Note there is a prerequisite:
SQL Server 2012 Native Client version 11 should be installed on all BizTalk Server systems before you apply this update. If the SQL Native Client is not installed before you apply cumulative update, the installation may not complete.
Both of those articles then link you to Support for TLS 1.2 protocol in BizTalk Server and also links leading to MS16-065: Description of the TLS/SSL protocol information disclosure vulnerability (CVE-2016-0149): May 10, 2016
| |
doc_23525280
|
func getArtistProfileCardData(artistName: String) -> UserModel {
var retData: UserModel = UserModel(isArtist: false, first: "", last: "", artistName: "something", occupation: "", profileUrl: "", followers: 0)
let usersRef = db.collection("users")
usersRef.whereField("artistName", isEqualTo: artistName)
.getDocuments { (querySnapshot, err) in
if let err = err {
print("Error getting documents: \(err)")
} else {
for document in querySnapshot!.documents {
print("\(document.documentID) => \(document.data())")
do {
let data = try document.data(as: UserModel.self)
retData = data
// printing retData.artistName gives me correct output.
} catch _ {
print("Error getting document from querySnapshot.")
}
}
}
}
// printing retData.artistName gives me "something" which I initialized right in the beginning of this function.
return retData
}
IIRC, the innermost closure should capture the retData variable, so I should be able to extract the data inside the data variable and return it out of the function. However, that isn't what's happening. Why is that so? And how would I go about achieving the desired effect?
Thank you.
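What the code above runs into (the function returns before the asynchronous completion handler has fired, so the assignment to retData inside the closure happens too late) can be reproduced in any language with async callbacks; a Python sketch with illustrative names:

```python
import threading

def get_artist_profile():
    ret = {"artistName": "something"}        # like retData's initial value

    def on_documents(value):                  # the completion handler
        ret["artistName"] = value             # runs later, after we have returned

    # simulate the query finishing 50 ms from now
    threading.Timer(0.05, on_documents, args=("fetched",)).start()

    return ret["artistName"]                  # executes before on_documents fires

print(get_artist_profile())  # something
```

The usual fix is the same in Swift: don't return the value; take a completion closure (or use async/await) and deliver the result from inside the handler.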
| |
doc_23525281
|
I set up a test Domain and the below code works properly so it is something screwy with my domain. I have tried removing all password complexity requirements as a test but no luck. Ideas on other things to check?
$Password="pa55Word!"
$Name="Tmp.User"
[securestring]$SecPass = ConvertTo-SecureString -String $Password -AsPlainText -Force
New-ADUser -Name $Name -AccountPassword $SecPass -Enabled $True -ChangePasswordAtLogon $False
$Credential = New-Object System.Management.Automation.PSCredential $Name, $SecPass
Get-ADUser -Filter 'Name -eq $Name' -Credential $Credential
A: I have tested in my environment
I ran the same script and I am able to get the user details with new user credentials
I am able to set the password using the Set-ADAccountPassword cmdlet
The issue might be with your domain or from the DC where you are running the script
| |
doc_23525282
|
Squid Cache: Version 3.1.14
configure options: '--build=i686-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3' '--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline' '--enable-ssl' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smb_lm,' '--enable-digest-auth-helpers=ldap,password' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' '--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--disable-translation' '--with-logdir=/var/log/squid3' '--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter' 'build_alias=i686-linux-gnu' 'CFLAGS=-g -O2 -g -O2 -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' 'CXXFLAGS=-g -O2 -g -O2 -Wall' --with-squid=/etc/squid3/squid3-3.1.14
And Here is my squid.conf:
http_port 3124
cache_mem 256 MB
maximum_object_size_in_memory 10 MB
maximum_object_size 100 MB
minimum_object_size 0 KB
cache_swap_low 90
cache_swap_high 95
cache_dir diskd /cache/squid1 5000 16 256
cache_dir diskd /cache/squid2 5000 16 256
cache_dir diskd /cache/squid3 5000 16 256
cache_dir diskd /cache/squid4 5000 16 256
cache_dir diskd /cache/squid5 5000 16 256
cache_dir diskd /cache/squid6 5000 16 256
cache_dir diskd /cache/squid7 5000 16 256
access_log /var/log/squid3/access.log squid
cache_peer x.x.x.x parent 3124 0 no-query login=PASS default no-digest
memory_replacement_policy lru
cache_replacement_policy lru
cache_store_log /var/log/squid3/store.log
emulate_httpd_log on
cache_log /var/log/squid3/cache.log
debug_options ALL,2
coredump_dir /var/spool/squid3
minimum_expiry_time 120 seconds
cache_mgr nutel.rn@dprf.gov.br
cache_effective_user squid
cache_effective_group squid
cachemgr_passwd 1234567890 all
refresh_pattern -i ([^.]+.|)jre-6u31-linux-i586\.bin 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i exe$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i com$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i br$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i [0-9]+$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i AutoDL?BundleId=59620$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i htm$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i php$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i html$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i asp$ 1440 50% 9999 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i zip$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i \.(mp3|mp4|m4a|ogg|mov|avi|wmv)$ 10080 90% 999999 ignore-no-cache override-expire ignore-private
refresh_pattern -i flv$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i swf$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i cab$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern -i rar$ 0 50% 999999 ignore-reload override-lastmod override-expire reload-into-ims
refresh_pattern ^http:// 30 40% 20160
refresh_pattern ^ftp:// 30 50% 20160
refresh_pattern ^gopher:// 30 40% 20160
refresh_pattern . 1440 100% 1440 ignore-reload override-lastmod override-expire reload-into-ims
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl SSL_ports port 443 563
acl cacic_ports port 20 21 22 3306 # cacic
acl Safe_ports port 80 23 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl purge method PURGE
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#Cache videos youtube
acl youtube dstdomain .youtube.com
cache allow youtube
# Aqui você irá definir o IP da sua rede interna
acl redelocal src x.x.x.x/24
cache allow redelocal
http_access allow redelocal
http_access allow localhost
http_access deny all
I've tried to access Gmail, Facebook, etc.; any site that uses HTTPS doesn't open, but any other site that doesn't use HTTPS opens perfectly.
What am I doing wrong?
Thanks for the help!!!
A: Everybody who has played with Squid on Ubuntu has probably encountered this problem.
Ubuntu's Squid packages were compiled without the SSL option, so it is not possible to proxy HTTPS connections with Squid on Ubuntu Server.
Refer to this.
| |
doc_23525283
|
But when I try to add a custom class, the class is never added to the disabled dates.
Here is my code anyways:
<input type="text" id="picker"/>
$(function(){
$("#picker").datepicker({
maxDate: 0,
beforeShow: function(input, inst){
$('.ui-datepicker-calendar > tbody > tr > td:has(span)').each(function (index) {
console.log($(this).closest("td"));
$(this).closest("td").addClass("red");
});
}
});
});
JSFIDDLE
A: Add this CSS to your code:
td.ui-datepicker-unselectable.ui-state-disabled span{
background: red;
}
Updated jsFiddle
No need for beforeShow if you are using this CSS.
A: You need to change your CSS for disabled dates.
.ui-datepicker-calendar td.ui-state-disabled span{
color: red;
}
JS Fiddle
| |
doc_23525284
|
Basic Block in function 'main' does not have terminator!
label %bb2
However, I'm inserting a branch and I'm even checking where it's being inserted:
LLVMPositionBuilderAtEnd(builder, t);
printf("inserting br at %s\n",
LLVMGetBasicBlockName(LLVMGetInsertBlock(builder)));
LLVMBuildBr(builder, merge_bb);
However, in my output, the basic block which is missing the terminator is one of the blocks printed:
inserting br at bb2
inserting br at bb4
I tried reading the bitcode generated with llvm-dis but it gives me another error:
llvm-dis: error: Invalid record (Producer: 'LLVM10.0.0' Reader: 'LLVM 10.0.0')
But this error only appears when the terminator error also occurs so I'm not sure if they're related.
| |
doc_23525285
|
<Grid >
<Grid.RowDefinitions>
<RowDefinition Height="40"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
<Grid
ManipulationMode="TranslateX,TranslateInertia,System"
Grid.Row="0" Background="White">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="50"/>
<ColumnDefinition Width="*"/>
<ColumnDefinition Width="50"/>
<ColumnDefinition Width="50"/>
<ColumnDefinition Width="50"/>
<ColumnDefinition Width="50"/>
<ColumnDefinition Width="240"/>
</Grid.ColumnDefinitions>
<FontIcon
Tapped="OnBackButtonClick"
Grid.Column="0"
Width="50"
Height="40"
FontFamily="Segoe MDL2 Assets"
Glyph=""/>
<FontIcon
Tapped="SaveButton_OnClick"
Grid.Column="2"
Width="50"
Height="40"
FontFamily="Segoe MDL2 Assets"
Glyph=""/>
<FontIcon
Tapped="RedoButton_OnClick"
Grid.Column="3"
Width="50"
Height="40"
FontFamily="Segoe MDL2 Assets"
Glyph=""/>
<FontIcon
Tapped="UndoButton_OnClick"
Grid.Column="4"
Width="50"
Height="40"
FontFamily="Segoe MDL2 Assets"
Glyph=""/>
<FontIcon
Grid.Column="5"
Width="50"
Tapped="OnButtonRotateClick"
Height="40"
FontFamily="Segoe MDL2 Assets"
Glyph=""/>
<InkToolbar
Grid.Column="7"
x:Name="inkToolbar"
HorizontalAlignment="Right"
TargetInkCanvas="{x:Bind ink}" />
</Grid>
<StackPanel Grid.Row="1" >
<Grid x:Name="Container">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="43*"/>
<ColumnDefinition Width="437*"/>
</Grid.ColumnDefinitions>
<Image x:Name="ImageControl"
Source="/Assets/sample.jpg"
Visibility="Visible"
Grid.ColumnSpan="2"
RenderTransformOrigin="0,0"
Margin="0"
Stretch="Fill"
/>
<InkCanvas x:Name="ink"
Visibility="Visible"
Grid.ColumnSpan="2"
HorizontalAlignment="Stretch"
VerticalAlignment="Stretch"/>
</Grid>
</StackPanel>
</Grid>
A:
Rotating image and canvas strokes 90 degrees in any direction
For your requirement, you could modify PointTransform property for InkStroke. Please refer the following code.
private void BtnSave_Click(object sender, RoutedEventArgs e)
{
IReadOnlyList<InkStroke> InkStrokeList = ink.InkPresenter.StrokeContainer.GetStrokes();
foreach (InkStroke temp in InkStrokeList)
{
temp.PointTransform = Matrix3x2.CreateRotation((float)(90 * Math.PI / 180), new Vector2(500, 500));
}
}
| |
doc_23525286
|
A: I am usually all gung-ho for LINQ, but this is a situation where I think that you should iterate through the Dictionary backward using a For/Next loop and remove the items that don't meet your conditions:
Public Class Foo
'Primary class member
Public Property Bar As Dictionary(Of DateTime, Foo)
'Remove method based on some condition (you may want to pass a parameter here too)
Public Sub Remove()
'Use a For/Next loop backwards
For index As Integer = Me.Bar.Keys.Count - 1 To 0 Step -1
'Check for if the condition is met and then remove the item by its index
'If ... Then
' Me.Bar.Remove(Me.Bar.ElementAt(index).Key)
'End If
Next
End Sub
Sub New()
Me.Bar = New Dictionary(Of DateTime, Foo)
End Sub
End Class
A: Yes, it's possible.
Just declare a method with a Func(Of...) argument, which represents the condition:
Public Sub RemoveElementIf(condition As Func(Of KeyValuePair(Of DateTime, Foo), Boolean))
If condition IsNot Nothing Then Bar = Bar.Where(Function(x) Not condition(x)).ToDictionary(Function(x) x.Key, Function(x) x.Value)
End Sub
Then you can use LINQ internally.
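The same predicate-driven removal reads like this in Python; note that, just as the For/Next answer iterates backwards, you must snapshot the matching keys first so you never mutate the dictionary while iterating over it:

```python
def remove_element_if(d, condition):
    """Remove every entry whose (key, value) pair satisfies condition."""
    doomed = [k for k, v in d.items() if condition(k, v)]  # snapshot before deleting
    for k in doomed:
        del d[k]

d = {1: "a", 2: "b", 3: "c", 4: "d"}
remove_element_if(d, lambda k, v: k % 2 == 0)
print(d)  # {1: 'a', 3: 'c'}
```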
| |
doc_23525287
|
*
*An RDS cluster is setup with two RDS instances (rdslab0 and rdslab1) where rdslab0 is used as a read replica of rdslab1.
*In addition to RDS backups, selective database dumps are executed from an EC2 instance via root’s crontab (crontab -l) using mysqldump statements.
*The shell script running these backups on a nightly basis is stored at /root/db-backup.sh. 'db' is a database on this mysql RDS instance.
*The script contains several mysqldump statements which, when run, save the MySQL .sql.gz backup dump files to an S3 bucket “backups”.
*The script ran well until about a week ago but has been saving empty files for the past few days. As of now in S3, the db-schema-backups and db-database-backups folders have the respective mysqldump .sql.gz files saved, but the files don’t contain any real backup content.
*I’m not sure if this is an issue with mysqldump statement, IAM permissions, or DB permissions or something else but the files are being saved empty. For example the backup files are being saved in Bytes instead of their actual sizes in KBs and MBs.
db-backup.sh
#!/bin/bash
# Dump the db database from the RDS readreplica and move it to S3.
# S3 Bucket has policy access to only allow db-backup IAM User PutObject
# mysqlbackup is used to backup files
bucketName="backups"
today=$(date +%d)
endPoint2="database-endpoint"
#All databases, routines, procedures, etc
mysqldump db --routines --triggers | gzip > /root/db-$(date +%Y%m%d).sql.gz
aws --profile db-backup s3 cp /root/db-$(date +%Y%m%d).sql.gz s3://dbaccount-backups/db-database-backups/
aws --profile db-backup s3api put-object-tagging --bucket $bucketName --key db-database-backups/db-$(date +%Y%m%d).sql.gz --tagging 'TagSet=[{Key=retention,Value=60}]'
rm /root/db-$(date +%Y%m%d).sql.gz
#Schema backups
mysqldump db --no-data | gzip > /root/db-schema-$(date +%Y%m%d).sql.gz
aws --profile db-backup s3 mv /root/db-schema-$(date +%Y%m%d).sql.gz s3://dbaccount-backups/db-schema-backups/
aws --profile db-backup s3api put-object-tagging --bucket $bucketName --key db-schema-backups/db-schema-$(date +%Y%m%d).sql.gz --tagging 'TagSet=[{Key=retention,Value=14}]'
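The script never checks what mysqldump actually produced, so a failed dump gets shipped to S3 silently. One way to catch that early (my suggestion, not part of the original script; the 1024-byte threshold is an assumption to tune per database) is a pre-upload sanity check, sketched here in Python:

```python
import gzip
import os

MIN_BYTES = 1024  # assumed threshold: an empty gzipped dump is only a few dozen bytes

def dump_looks_valid(path, min_bytes=MIN_BYTES):
    """Reject missing, suspiciously small, or corrupt .sql.gz files before upload."""
    if not os.path.exists(path) or os.path.getsize(path) < min_bytes:
        return False
    try:
        with gzip.open(path, "rb") as f:   # also catches a truncated gzip stream
            return bool(f.read(1))
    except OSError:                        # gzip.BadGzipFile subclasses OSError
        return False
```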
| |
doc_23525288
|
The sample XML I have is as below:
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
<xs:simpleType name="Test_Type">
<xs:annotation />
<xs:restriction base="xs:string">
<xs:minLength value="1"/>
<xs:maxLength value="16"/>
<xs:pattern value="[0-9a-zA-Z]{1,16}"/>
</xs:restriction>
</xs:simpleType>
</xs:schema>
And my xslt (1.0) is as following:
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<body>
<table border="1">
<tr>
<th>DataType</th>
<th>Pattern</th>
</tr>
<xsl:for-each select="//xs:simpleType">
<xsl:choose>
<xsl:when test=".//xs:pattern">
<tr>
<td>
<xsl:value-of select="@name"/>
</td>
<td>
<xsl:value-of select=".//xs:pattern[@value]"/>
</td>
</tr>
</xsl:when>
</xsl:choose>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
I am expecting a table output format like:
DataType Pattern
Test_Type [0-9a-zA-Z]{1,16}
But it doesn't work; please shed some light! Thanks,
A: Change the xsl:value-of from this...
<xsl:value-of select=".//xs:pattern[@value]"/>
To this
<xsl:value-of select=".//xs:pattern/@value"/>
The square brackets indicate a condition, so the first expression is selecting an xs:pattern which has a value attribute. It is not actually getting the value attribute itself.
| |
doc_23525289
|
Then what is the counterpart of [DataType(DataType.EmailAddress)]?
Or is there a site that has a list of if-you-can-do-it-in-data-annotations-you-can-do-it-in-fluent-api? Because I want to do the validation and mapping using the Fluent API. Thanks
A: This is a notoriously confusing area.
About your examples:
*
*A property that is annotated by [DataType(DataType.Currency)] is implemented as decimal(18,2) (in Sql Server). Close.
*The fluent mapping HasColumnType("Money") (not "Currency") creates the column as Money data type. A perfect match.
*A string property annotated by [DataType(DataType.EmailAddress)] will be created as nvarchar(max). Granted, that's enough for an email address. But it's nowhere near a data type that enforces a specific format.
Surely EF could do better than that, could it? Well, in the latter case, what should it do? There is no built-in email datatype and I think we can't expect EF to create a user-defined type on the fly with rules and all (not to mention that rules in Sql Server are deprecated).
The confusing part is that data annotations are used differently by different frameworks as is explained here.
I'm not sure whether the EF team made the right decision by implementing a subset of the annotations in code-first. Of course they can't implement all attributes in the extensive System.ComponentModel.DataAnnotations namespace. But the current implementation is half-hearted at best. The examples above are only a small demonstration - one annotation is implemented, another isn't. And, for that matter, EF happily allows you to annotate an int property as EmailAddress.
Therefore, to answer your question, there is no fluent counterpart of DataType.EmailAddress. There is nothing to counterpart.
On the other hand, to speak up for EF, not implementing the annotations at all would have forced us to do many things redundantly. If we use MVC and EF together the annotations can be applied once and both systems concur pretty well. It would have been a tedious job to make the annotations and the fluent configurations match.
Unfortunately, I can't find any source disclosing the full mapping between annotations and fluent API. Maybe that's the worst part: we have to find out by trial and error. Anybody out there to enlighten us?
A: I think your first example is a coincidence; the DataType annotation is for specifying a more appropriate CLR type. Nothing to do with databases at all, as it could also be used when serializing.
The HasColumnType obviously is for databases and I thought it was for when EF made an incorrect choice, space saving issues, or you added your own custom types to the database.
In my own work I've gone with the rule that I can use annotations as long as they're in ComponentModel.DataAnnotations namespace, anything else (that is likely EF specific) is being done via Fluent, as these are the bits more likely to change if a different ORM is used.
| |
doc_23525290
|
cat file.txt | tr -d "{\"" > output.txt
But it keeps erroring with:
tr: extra operand '>'
It seems like it's not interpreting the redirection character properly.
The same thing also happens when I try this:
tr -d "{\"" < input.txt > output.txt
tr: extra operand '<'
A: It looks like cmd is getting confused with "{\"". First, the backslash works correctly in escaping the quote. But two consecutive quotes are taken by cmd to mean an escaped quote. Then the rest of the line is taken as the same sentence. You can see the effect using printf:
C:\>printf "%s\n" "x\"" > nul
x"
>
nul
Thus, printf takes each word individually, but cmd sees them as all part of a quoted string, and therefore does not parse the > nul as anything other than normal words.
The solution? Use two consecutive quotes in your string:
cat file.txt | tr -d "{""" > output.txt
| |
doc_23525291
|
This is the max size and I don't know how to make it bigger.
Also, when I input a command (e.g. :q), characters that I didn't type come up, like :<-[1 q when I do :q.
Thank you in advance
A: Based on the screenshot (and missing menus), it looks like you're using Vim in the Windows console (cmd.exe), which cannot be maximized in the usual way. You have the following options:
*
*Use GVIM; it offers more (visual) features, and the biggest disadvantage, more clumsy shell integration, isn't that important on Windows, anyway.
*Use the Windows console menu (right mouse button on the top-left icon > Defaults > Layout > Window Size) to resize it.
*Inside Vim, you can influence the size via
:set lines=40 columns=120
and the console will resize accordingly.
| |
doc_23525292
|
var update_record = nlapiLoadRecord('invoice', invoice_id);
var itemcount = update_record.getLineItemCount('item');
for (var i = 0; itemcount != null && i < itemcount; i++) {
if (jsonobject.item[i].item) {
update_record.setLineItemValue('item', 'item', i + 1, jsonobject.item[i].item)
}
}
var id = nlapiSubmitRecord(update_record, true);
nlapiLogExecution('DEBUG', 'id = ', id)
return id;
A: Instead of setLineItemValue, try using the series of selectLineItem, setCurrentLineItemValue, and commitLineItem methods. setLineItemValue is not supported in all scenarios or on all fields.
See the NS Help article titled nlobjRecord for details on all of these methods.
| |
doc_23525293
|
func playSingleVideo(pauseAll: Bool = false,foreground:Bool = false) {
if let visibleCells = self.tableView.visibleCells as? [VideoCell], !visibleCells.isEmpty {
if pauseAll {
visibleCells.forEach { $0.playerLayer?.player?.pause() }
} else {
var maxHeightRequired: Int = 50
var cellToPlay: VideoCell?
visibleCells.reversed().forEach { (cell) in
let cellBounds = self.view.convert(cell.videoView.frame, from: cell.videoView)
let visibleCellHeight = Int(self.view.frame.intersection(cellBounds).height)
if visibleCellHeight >= maxHeightRequired {
maxHeightRequired = visibleCellHeight
cellToPlay = cell
}
}
visibleCells.forEach { (cell) in
if cell === cellToPlay {
cell.slider.minimumValue = 0.0
cell.playerLayer?.player?.play()
cell.btnPlay.setImage(UIImage(named: "pause-button") , for: .normal)
cell.btnPlay.setTitle("", for: .normal)
// cell.videoView.layer.insertSublayer(cell.playerLayer!, at: 0)
} else {
cell.playerLayer?.player?.pause()
cell.btnPlay.setTitle("", for: .normal)
// cell.videoView.layer.insertSublayer(cell.playerLayer!, at: 0)
cell.slider.minimumValue = 0.0
cell.btnPlay.setImage(UIImage(named: "play-button") , for: .normal)
}
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: cell.playerLayer?.player?.currentItem, queue: .main) { [weak self] _ in
cell.playerLayer?.player?.seek(to: kCMTimeZero)
cell.playerLayer?.player?.play()
}
let interval = CMTime(value: 1, timescale: 2)
cell.playerLayer?.player?.addPeriodicTimeObserver(forInterval: interval, queue: DispatchQueue.main, using: { (progressTime) in
let seconds = CMTimeGetSeconds(progressTime)
let secondString = String(format: "%02d", Int(seconds) % 60)
let minutString = String(format: "%02d", Int(seconds) / 60)
print( "\(minutString):\(secondString)")
cell.lblElapsed.text = "\(minutString):\(secondString)"
cell.playerLayer?.player?.currentItem?.preferredForwardBufferDuration = TimeInterval(exactly: 100)!
cell.videoView.layer.insertSublayer(cell.playerLayer!, at: 0)
guard let duration = cell.playerLayer?.player?.currentItem?.duration else { return }
let seconds2 = CMTimeGetSeconds(duration)
if !seconds2.isNaN{
let sec = String(format: "%02d", Int(seconds2) % 60)
let min = String(format: "%02d", Int(seconds2) / 60)
print("\(min):\(sec)")
cell.lblTotal.text = "\(min):\(sec)"
let totsec = seconds / seconds2
if !self.isended{
cell.slider.setValue(Float(totsec), animated: true)
}
}
})
}
}
}
}
| |
doc_23525294
|
Thank you very much
| |
doc_23525295
|
{
$(".error").hide();
$("#submit").click(function()
{
//form validate
var name= $("input#fullname").val();
if (name == "")
{
$("label#name_error").show();
$("input#fullname").focus();
return false;
}
var email= $("input#email").val();
if (email == "")
{
$("label#email_error").show();
$("input#email").focus();
return false;
}
var subject= $("input#subject").val();
if (subject == "")
{
$("label#subject_error").show();
$("input#subject").focus();
return false;
}
var textarea= $("textarea#textarea").val();
if (textarea == "")
{
$("label#textarea_error").show();
$("textarea#textarea").focus();
return false;
}
var dataString = 'fullname='+name+'&email='+email+'&subject='+subject+'&textarea='+textarea;
//alert (dataString);return false;
$.ajax({
type: "POST",
url: "form.php",
data: dataString,
success: function()
{
$("#form").html("<div id='message'></div>");
$("#message").html("<h2 id='success'>Query Submitted!</h2>").append("<p>You will be contacted shortly...</p><p>Reload the page to submit another query.</p>").hide().fadeIn(1500)
}
})
return false;
});
});
It's just a small question.
I have this website which has been working perfectly for the past 2-3 months, but today, when I accidentally wandered into the Resources section of Chrome's console window, there was an error saying
"uncaught reference error: $ is not defined"
I haven't found a satisfactory answer on the web anywhere... your thoughts?
Here is the head section; I have not included any scripts anywhere else on the page:
<head>
<meta charset="utf-8">
<link rel="stylesheet" href="cq.css" type="text/css">
<script type="text/javascript" src="form.js"></script>
<link rel="shortcut icon" href="images/favicon.ico" />
<script src="jquery.js"></script>
<script src="form.js"></script>
<!--share-->
<script type="text/javascript" src="http://w.sharethis.com/button/buttons.js"></script>
<script type="text/javascript">stLight.options({publisher: "ur-c6d56dfc-f929-5bbf-a456-178fc403ae45"});</script>
<!--share ends here-->
</head>
A: You're loading form.js before jQuery, so `$` is undefined when that first copy of the script runs. Make sure jQuery comes first:
<script src="jquery.js"></script>
<script type="text/javascript" src="form.js"></script>
You're also importing the same JavaScript file (form.js) twice; remove the duplicate and keep only the copy that comes after jQuery.
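The failure mode can be sketched without a browser: scripts execute top to bottom, so any file that touches `$` before jquery.js has run sees it undefined. The runScript helper and the script bodies below are illustrative stand-ins, not the real files:

```javascript
var loaded = [];   // records the order the fake <script> tags execute in
var $;             // stays undefined until the jQuery stand-in runs
var error = null;

function runScript(name, body) { loaded.push(name); body(); }

// Wrong order: form.js executes first and finds no `$`.
runScript('form.js', function () {
  if (typeof $ !== 'function') { error = '$ is not defined'; }
});
runScript('jquery.js', function () {
  $ = function (selector) { return selector; };  // trivial jQuery stand-in
});
```

Swapping the two runScript calls (the equivalent of fixing the tag order) leaves error null.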
| |
doc_23525296
|
EDIT: I am getting this exception too:
Additional information:
Zend\Mvc\Exception\InvalidControllerException
with Message
Controller of type Account\Controller\VoucherController is invalid; must implement Zend\Stdlib\DispatchableInterface
<?php
namespace Account;
return array(
'controllers' => array(
'invokables' => array(
'Account\Controller\Account' => 'Account\Controller\AccountController',
'Account\Controller\Voucher' => 'Account\Controller\VoucherController',
),
// --------- Doctrine Settings For the Module
'doctrine' => array(
'driver' => array(
'account_entities' => array(
'class' => 'Doctrine\ORM\Mapping\Driver\AnnotationDriver',
'cache' => 'array',
'paths' => array(__DIR__ . '/../src/Account/Entity')
),
'orm_default' => array(
'drivers' => array(
'Account\Entity' => 'account_entities'
)
)
)
),
// The following section is new and should be added to your file
'router' => array(
'routes' => array(
'account' => array(
'type' => 'segment',
'options' => array(
'route' => '/account[/][:action][/:id]',
'constraints' => array(
'action' => '[a-zA-Z][a-zA-Z0-9_-]*',
'id' => '[0-9]+',
),
'defaults' => array(
'controller' => 'Account\Controller\Account',
'action' => 'index',
),
),
),
'voucher' => array(
'type' => 'segment',
'options' => array(
'route' => '/account/voucher[/][:action][/:id]',
'constraints' => array(
'action' => '[a-zA-Z][a-zA-Z0-9_-]*',
'id' => '[0-9]+',
),
'defaults' => array(
'controller' => 'Account\Controller\Voucher',
'action' => 'index',
),
),
),
),
),
'view_manager' => array(
'template_path_stack' => array(
'account' => __DIR__ . '/../view',
),
),
),
);
Now the issue is that I get a 404 when I try to access MyHost/account/Voucher.
P.S.: I already have a controller under Account/Controller/Voucher and a view under Account/View/Voucher named index.phtml; now I don't know what I am missing here.
A: As Andrew and Timdev comment above, there is something not right in your controller. Check a few basic things: that you have the following code correct, especially watching for typos.
namespace Account\Controller;
use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;
class VoucherController extends AbstractActionController {
    // your actions
}
| |
doc_23525297
|
<TableView id="tableview2" onClick="rowWasClicked ; goEdit" >
gives me the following error message:
The event listener rowWasClicked ; goEdit is not defined.
More importantly, the function goEdit is never fired.
How do I get both functions fired?
A: If you want to fire two onClick events, a possible trick is:
<TableView id="tableview2" onClick="rowWasClicked" onClick="goEdit" >
With this code you can fire two onClick events at the same time.
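Duplicate attributes on one XML tag may not survive every parser, so an alternative that avoids the question entirely is a single wrapper handler in the controller that calls both functions in order. A sketch, where the stub bodies and the calls array are placeholders standing in for the real rowWasClicked/goEdit:

```javascript
var calls = [];  // only here to make the sketch observable

// Placeholders; in the real Alloy controller these functions already exist.
function rowWasClicked(e) { calls.push('rowWasClicked'); }
function goEdit(e) { calls.push('goEdit'); }

// Bind just this one handler in the view:
//   <TableView id="tableview2" onClick="handleRowClick">
function handleRowClick(e) {
  rowWasClicked(e);
  goEdit(e);
}
```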
| |
doc_23525298
|
Array
(
[jeans] => Array
(
[blue] => 3
[pink] => 1
[red] => 0
)
[shirts] => Array
(
[blue] => 5
[pink] => 0
[red] => 0
)
[pijama] => Array
(
[blue] => 0
[pink] => 0
[red] => 0
)
)
How can I print whether jeans or shirts have items of a given color?
For example:
"Jeans has 3 blue items."
"Jeans has 1 pink item."
"shirts has 5 blue items."
I'm not interested in obtaining the ones with 0. I was thinking of a foreach, but I don't know how.
A: A couple of foreach loops will do this for you:
$clothes = [
'jeans' => ['blue' => 3, 'pink' => 1, 'red' => 0],
'shirts' => ['blue' => 5, 'pink' => 0, 'red' => 0],
'pijama' => ['blue' => 0, 'pink' => 0, 'red' => 0],
];
foreach( $clothes as $item => $colours ){
foreach( $colours as $colour => $num ) {
if ( $num > 0 ){
printf("%s has %d %s items\n", $item, $num, $colour);
}
}
}
RESULT
jeans has 3 blue items
jeans has 1 pink items
shirts has 5 blue items
| |
doc_23525299
|
Given a binary tree containing digits from 0-9 only, each root-to-leaf
path could represent a number. Find the total sum of all root-to-leaf
numbers.
For example, for this tree:
1
/ \
2 3
/ \
4 5
the result should be equal to 12 + 13 + 24 + 25
The recursive solution for this problem is:
public int sum(TreeNode root) {
return helper(root, 0);
}
private int helper(TreeNode node, int sum){
if(node == null) return 0;
sum = sum * 10 + node.val;
if(node.left == null && node.right == null) return sum;
return helper(node.left, sum) + helper(node.right, sum);
}
I am trying to follow all recursive calls of method helper and figure out sum value at every step.
For example, on the first step variable sum equals to 0*10 + 1 = 1. Then we call helper(node.left, 1) and sum on that step equals to 1 * 10 + 2 = 12. Then we call helper(node.left, 12) and sum on that step equals to 12 * 10 + 4 = 124, which is obviously not correct, since the sum should be 24.
Could anybody explain what is wrong in my approach?
A: Try this (but I would also prefer 124 + 125 + 13 as the result):
public int sum(TreeNode root) {
if (root == null) return 0;
return sum(root, 0);
}
private int sum(TreeNode node, int parentValue){
if(node == null) return 0;
return sum(node.left, node.val) + sum(node.right, node.val) + parentValue * 10 + node.val;
}
A: The problem is that you are missing the intermediate nodes' data and the two-digit restriction.
Remove 'if(node.left == null && node.right == null) return sum;', send only the parent node's value to the recursive call, and add the current node's data.
DoctorLOL's solution is almost correct; you may want to add code to it that ignores the root node's own data. The following code might help.
public int sum(TreeNode root) {
return helper(root, -1);
}
private int helper(TreeNode node, int parentValue){
if(node == null) return 0;
if (parentValue != -1)
return helper(node.left, node.val) + helper(node.right, node.val) + parentValue * 10 + node.val;
else
        return helper(node.left, node.val) + helper(node.right, node.val);
}
Hope it helps!
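To sanity-check the -1 sentinel version against the question's expected total (12 + 13 + 24 + 25 = 74), here is the same recursion transcribed to plain JavaScript with a hand-built tree; the Node constructor is just for this sketch:

```javascript
// Minimal node constructor for the example tree in the question:
//        1
//       / \
//      2   3
//     / \
//    4   5
function Node(val, left, right) {
  this.val = val;
  this.left = left || null;
  this.right = right || null;
}

// Same logic as the Java answer: each node contributes parentValue*10 + val,
// except the root (signalled by parentValue === -1), which contributes
// nothing on its own and only seeds its children.
function helper(node, parentValue) {
  if (node === null) return 0;
  var children = helper(node.left, node.val) + helper(node.right, node.val);
  if (parentValue === -1) return children;
  return children + parentValue * 10 + node.val;
}

function sum(root) {
  return helper(root, -1);
}

var tree = new Node(1, new Node(2, new Node(4), new Node(5)), new Node(3));
// sum(tree) → 74, i.e. 12 + 13 + 24 + 25
```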
|