doc_1000
|
public String[] SentenceDetect(String qwe) throws IOException
{
POSModel model = new POSModelLoader().load(new File("/home/jebard/chabacano/Chabacano1/src/en-pos-maxent.bin"));
PerformanceMonitor perfMon = new PerformanceMonitor(System.err, "sent");
POSTaggerME tagger = new POSTaggerME(model);
String input = "Hi. How are you? This is Mike.";
ObjectStream<String> lineStream = new PlainTextByLineStream(
new StringReader(input));
perfMon.start();
String line;
while ((line = lineStream.read()) != null) {
String whitespaceTokenizerLine[] = WhitespaceTokenizer.INSTANCE
.tokenize(line);
String[] tags = tagger.tag(whitespaceTokenizerLine);
POSSample sample = new POSSample(whitespaceTokenizerLine, tags);
System.out.println(sample.toString());
perfMon.incrementCounter();
}
perfMon.stopAndPrintFinalResult();
Error at this line
.load(new File("/home/jebard/chabacano/Chabacano1/src/en-pos-maxent.bin")
The method load(java.io.File) in type ModelLoader is not applicable for the arguments(org.apache.tomcat.jni.File)
A: This is actually not a bug in OpenNLP. It's a bug in your code, as you load the class File from the package (aka namespace) org.apache.tomcat.jni.File.
Yet, the API of OpenNLP requests you to use the class File from the standard JDK package java.io, i.e. you should import java.io.File instead.
In general, this should fix your problem.
Important hint
You should migrate your code, as models should not be loaded via POSModelLoader
Loads a POS Tagger Model for the command line tools.
Note: Do not use this class, internal use only!
Instead you can use the constructor POSModel(InputStream in) to load your model file via an InputStream referencing the actual model file.
Moreover, the class POSModelLoader was only present in previous releases of OpenNLP (versions <= 1.5.x). In the latest OpenNLP version 1.6.0 it was removed completely. Instead you can and should now use the constructor of the POSModel class to load/initialize the model you need.
A: There is some problem with XML parsing. Try this, it worked for me.
System.setProperty("org.xml.sax.driver", "org.xmlpull.v1.sax2.Driver");
try {
AssetFileDescriptor fileDescriptor =
context.getAssets().openFd("en_pos_maxent.bin");
FileInputStream inputStream = fileDescriptor.createInputStream();
POSModel posModel = new POSModel(inputStream);
posTaggerME = new POSTaggerME(posModel);
} catch (Exception e) { e.printStackTrace(); }
| |
doc_1001
|
#include "stdafx.h"
#include <locale>
#include <memory>
#include <string>
#include <cstring>
#include <iostream>
int main(int argc, char* argv[])
{
typedef std::codecvt<wchar_t, char, mbstate_t> cvt;
// string to convert
const char cstr[] = "тест";
size_t sz = std::strlen(cstr);
mbstate_t state;
const char *cnext;
wchar_t *wnext;
// buffer to write
wchar_t *buffer = new wchar_t[sz + 1];
std::uninitialized_fill(buffer, buffer + sz + 1, 0);
// converting char* to wchar*, using locale
cvt::result res;
std::locale l(std::locale("Russian_Russia.1251"));
res = std::use_facet<cvt>(l).in(state,
cstr, cstr + sz, cnext,
buffer, buffer + sz + 1, wnext);
if(res == cvt::error)
std::wcout << L"failed" << std::endl;
else
std::wcout << buffer << std::endl;
delete [] buffer;
return 0;
}
I looked into sources and found out that function in() fails because function _Mbrtowc (wmbtowc.c) returns -1:
if (___mb_cur_max_l_func(locale) <= 1 ||
(MultiByteToWideChar(codepage,
MB_PRECOMPOSED|MB_ERR_INVALID_CHARS,
(char *)pst,
2,
pwc,
(pwc) ? 1 : 0) == 0))
{ /* translation failed */
*pst = 0;
errno = EILSEQ;
return -1;
}
because ___mb_cur_max_l_func() (initctyp.c) returns 1 for the Russian_Russia.1251 locale. What does that mean? I don't think it is normal that codecvt can't convert char* into wchar_t*.
A: mbstate_t state;
You forgot to initialize that. Fix:
mbstate_t state = { 0 };
| |
doc_1002
|
import sys
import getopt
import os
import shutil
def copy_files(source, target):
"""Copies all files from the source directory to the target directory."""
shutil.copy(source, target)
def main(argv):
"""Main function."""
source_dir = ''
target_dir = ''
try:
opts, args = getopt.getopt(argv,"hi:o:",["ifile=","ofile="])
except getopt.GetoptError:
print('test.py -i <inputfile> -o <outputfile>')
sys.exit(2)
for opt, arg in opts:
if opt == '-h':
print('test.py -i <inputfile> -o <outputfile>')
sys.exit()
elif opt in ("-i", "--ifile"):
source_dir = arg
elif opt in ("-o", "--ofile"):
target_dir = arg
print('Source directory is "', source_dir)
print('Backup directory is "', target_dir)
# Check if source_dir and target_dir are valid directories
if os.path.isdir(source_dir):
print('Source directory is valid.')
else:
print('Error: Source directory is NOT valid.')
exit()
if os.path.isdir(target_dir):
print('Target directory is valid.')
else:
print('Error: Target directory is NOT valid.')
exit()
if os.path.isdir(source_dir) and os.path.isdir(target_dir):
copy_files(source_dir, target_dir)
if __name__ == "__main__":
main(sys.argv[1:])
For the command input, I have tested both of these lines in the command terminal:
backup_critical_files.py -i ~ -o /run/media/lemons/[NAME OF DRIVE]/'
backup_critical_files.py -i ~ -o /dev/sda1
Is there a specific path that I am missing here? Do I need to create a folder and specify that folder on the external drive?
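One likely culprit worth noting as an aside: shutil.copy() copies a single file and raises an error when given a directory as its source, so the directory contents have to be iterated explicitly. A minimal sketch of copying every regular file from one directory to another (a flat source directory is assumed; shutil.copytree() would handle nested trees):

```python
import os
import shutil
import tempfile

def copy_files(source, target):
    """Copy every regular file from source into target.

    shutil.copy() handles one file at a time, so the directory
    listing is walked explicitly here.
    """
    for name in os.listdir(source):
        src_path = os.path.join(source, name)
        if os.path.isfile(src_path):
            shutil.copy(src_path, target)

# Quick self-contained check with temporary directories.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("hello")
copy_files(src, dst)
print(sorted(os.listdir(dst)))  # ['a.txt']
```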
| |
doc_1003
|
val disposable = someFunction(someParameter, delay, subject)
.flatMapCompletable { (parameter1, parameter2) ->
anotherFunction(parameter1, parameter2, subject)
}
.retryWhen { throwable ->
throwable.filter {
it.cause?.cause is ExampleException1
|| it.cause?.cause is ExampleException2
|| it.cause is ExampleException3
}
}
.andThen(someStuff())
.subscribe({
Timber.d("Finished!")
}, {
Timber.d("Failed!")
})
How to do it properly?
A: You may use zipWith with a range to achieve this.
.retryWhen { errors -> errors.zipWith(Observable.range(1, 3), { _, i -> i }) }
The retryWhen operator gives you the stream of all the errors from your source publisher. Here you zip these with numbers 1, 2, 3. Therefore the resulting stream will emit 3 next followed by the complete. Contrary to what you may think this resubscribes only twice, as the complete emitted immediately after the third next causes the whole stream to complete.
You may extend this further, by retrying only for some errors, while immediately failing for others. For example, if you want to retry only for IOException, you may extend the above solution to:
.retryWhen { errors -> errors
.zipWith(Observable.range(1, 3), { error, _ -> error })
.map { error -> when (error) {
is IOException -> error
else -> throw error
}}
}
Since map cannot throw a checked exception in Java, Java users may use flatMap for the same purpose.
A: I think what you're trying to do can be achieved with retry exclusively:
val observable = Observable.defer {
System.out.println("someMethod called")
val result1 = 2 // some value from someMethod()
Observable.just(result1)
}
observable.flatMap { result ->
// another method is called here but let's omit it for the sake of simplicity and throw some exception
System.out.println("someMethod2 called")
throw IllegalArgumentException("Exception someMethod2")
Observable.just("Something that won't be executed anyways")
}.retry { times, throwable ->
System.out.println("Attempt# " + times)
// if this condition is true then the retry will occur
times < 3 && throwable is IllegalArgumentException
}.subscribe(
{ result -> System.out.println(result) },
{ throwable -> System.out.println(throwable.localizedMessage) })
Output:
someMethod called
someMethod2 called
Attempt# 1
someMethod called
someMethod2 called
Attempt# 2
someMethod called
someMethod2 called
Attempt# 3
Exception someMethod2
As someMethod2 always throws an Exception, after 3 attempts Exception someMethod2 is printed in onError of the observer.
A: Code with exponential delay:
YourSingle()
.retryWhen { errors: Flowable<Throwable> ->
errors.zipWith(
Flowable.range(1, retryLimit + 1),
BiFunction<Throwable, Int, Int> { error: Throwable, retryCount: Int ->
if (error is RightTypeOfException && retryCount < retryLimit) {
retryCount
} else {
throw error
}
}
).flatMap { retryCount ->
//exponential 1, 2, 4
val delay = 2.toDouble().pow(retryCount.toDouble()).toLong() / 2
Flowable.timer(delay, TimeUnit.SECONDS)
}
}
| |
doc_1004
|
For example, the code below will render about 600 divs on my page, each with its own corresponding data.
buildRow() {
return (
this.state.posts.map(events =>
<div key={events.key} className='result_box'
onClick={ () => this.click(events.name, events.description,
events.time)} id={events._id}>
<p>{events.name}</p>
{events._id}
{events.place && events.place.location && <p>
{events.place.location.city}</p>}
</div>
)
)};
Now, I want to implement a search filter function. Where I only want to return divs with a specific data parameter. For example, if I type 'Austin' in the search box, only divs with the data 'Austin' at its location will be rendered.
The code for this is below:
buildRow() {
return (
this.state.posts.map(events =>
{events.place &&
events.place.location &&
this.props.searchVal === events.place.location.city ?
<div key={events.key} className='result_box'
onClick={ () => this.click(events.name, events.description,
events.time)}
id={events._id}>
<p>{events.name}</p>
{events._id}
{events.place && events.place.location && <p>
{events.place.location.city}</p>}
</div>
: console.log(this.props.searchVal)
}
)
)
}
What I am trying to do is filter and only render the divs that match the search criteria by using the ternary operator. However, this does NOT render the divs. Interestingly, the ternary operation itself works as expected. For example, if I type in 'Austin' and there are 5 matching results for 'Austin' out of, let's say, 600 objects, the console.log will only fire 595 times. I even tested it by putting a console.log inside the success condition, and those logs show! It appears that when it comes to rendering the divs, it's just not happening. Can anyone help me out?
A: It looks like the map callback isn't actually returning a value (because the expression has been put into braces). You may want to do something like this:
buildRow() {
return this.state.posts
.filter(events =>
events.place
&& events.place.location
&& events.place.location.city === this.props.searchVal)
.map(events =>
<div key={events.key} className='result_box'
onClick={() => this.click(events.name, events.description,
events.time)}
id={events._id}>
<p>{events.name}</p>
{events._id}
<p>{events.place.location.city}</p>
</div>);
}
| |
doc_1005
|
Can someone share some information on how to achieve this integration? Is it a manual integration, or are there any libraries that are available that already implement the Sync Framework protocol on these mobile platforms.
I basically want to leverage the Sync Framework from a mobile app.
| |
doc_1006
|
but when I print the data received from the server socket, only the first line of the file is printed; the rest is missing.
I noticed that sometimes there is still more data after the "\n", so I added the following two lines of code, but they show the error "substring not found".
But sometimes the client will receive data like "5000\n1000", so the client needs to keep the 1000 ...
remain = data[data.index("\n")+1:]
data = remain
this is the data I want to send to client
1000 2000 3000 4000 5000
1000 3000 5000 7000 9000
1111 2222 3333 4444 5555
server
import socket
Input = open("./Data","r")
data = Input.read()
Input.close()
# Construct the server_socket
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(('localhost',15000))
server_sock.listen(1)
(client_socket,address) = server_sock.accept()
# send data
for line in data:
client_socket.send(line)
client_socket.send("EOF")
client_socket.close()
server_sock.close()
client
import socket
client_sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
client_sock.connect(('localhost',15000))
data = ""
while True:
part = client_sock.recv(100)
data = data + part
if "\n" in data or "EOF" in data:
list = data[:data.index("\n")].split(" ")
print(list)
remain = data[data.index("\n")+1:]
data = remain
if "EOF" in data:
break
client_sock.close()
result.close()
A: I think you need to indent
client_socket.send(line)
A: I think what you actually want is:
Input.readlines()
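Beyond that, two things are worth noting for Python 3: sockets carry bytes (so strings must be encoded before sending), and recv() can return partial lines. A sketch of newline framing with an EOF sentinel, demonstrated over a local socket pair so no real server is needed (the helper names are illustrative):

```python
import socket

def send_lines(sock, lines, sentinel=b"EOF"):
    """Send each line newline-terminated, then a sentinel marker."""
    for line in lines:
        sock.sendall(line.encode() + b"\n")
    sock.sendall(sentinel)

def recv_lines(sock, sentinel=b"EOF"):
    """Accumulate bytes and split out only *complete* lines."""
    buf = b""
    lines = []
    while sentinel not in buf:
        part = sock.recv(100)
        if not part:
            break
        buf += part
        # A partial trailing line (e.g. b"5000\n1000") stays in buf
        # until the rest of it arrives.
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            lines.append(line.decode().split(" "))
    return lines

# Demonstration over a local socket pair.
a, b = socket.socketpair()
send_lines(a, ["1000 2000 3000", "1111 2222 3333"])
a.close()
result = recv_lines(b)
b.close()
print(result)  # [['1000', '2000', '3000'], ['1111', '2222', '3333']]
```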
| |
doc_1007
|
This code appears to only check whole numbers.
Here is my code in an attempt to check decimals / fractions.
$my_num = 0.38;
switch(true) {
case in_array($my_num, range(0, .20, 0.01)):
$my_num_result = "It looks like your number is between 0 - 0.20!";
break;
case in_array($my_num, range(.21, .40, 0.01)):
$my_num_result = "I am between .21 - 0.40!";
break;
}
//Result: I am between .21 - 0.40!
echo $my_num_result;
This question is a continuation of this question and answer but didn't address decimals.
A: I suggest a different approach you can use (if you don't have too many intervals to test). Let's say you need to know which interval contains a number:
$intervals = [[0,.20],[.21,.40],[.41,.60]/*...*/];
$num = .32;
$message = 'I am between %s - %s!';
foreach ($intervals as $inter) {
if ( $inter[0] <= $num && $num <= $inter[1] ) {
vprintf($message, $inter);
break;
}
}
| |
doc_1008
|
How can i get reference to playedCard's SpriteRenderer?
A: Considering Cards[selectedCard] holds a reference to a GameObject, you can go with
var sp = playedCard.GetComponent<SpriteRenderer>();
Or, if Cards is an array of SpriteRenderer itself, Instantiate should return a new instance via that SpriteRenderer reference (you may need to add an explicit cast before Instantiate in this case), as such:
var sp = (SpriteRenderer)Instantiate(Cards[selectedCard], spawnLoc[gameQueue - 1], Quaternion.identity);
| |
doc_1009
|
//THIS IS CALLED FROM viewDidLoad()
let task = urlSession.dataTask(with: url!) { (data, response, error) in
guard error == nil else {
print ("Error while fetching collections: \(String(describing: error))")
return
}
if let data = data, let string = String(data: data, encoding: .utf8) {
print (string)
URL_Request_Handler.parsingJSON(fromData: data, completion: {(result) in
if let result = result {
print ("JSON IS CONVERTED")
print (result)
//This method creates another session and fires it
self.getFinalCollectionFromResult(result)
}
})
}
}
task.resume()
And here is the getFinalCollectionFromResult method:
private func getFinalCollectionFromResult(_ result: Result_Collection) {
let task = URLSession.shared.dataTask(with: URL(string:result.cover_photo.url)!, completionHandler: { (data, response, error) in
if error != nil {
print("Error")
}
if let data = data, let image = UIImage(data: data) {
DispatchQueue.main.async {
self.collection = Collection(title: result.title, image: image)
self.collectionViewLayout.collectionView?.reloadData()
}
}
})
task.resume()
}
Is it ok to create another session right from the completion block of the first one?
A: Yes, it's perfectly fine.
But one suggestion: you should use a downloadTask for the image instead of a dataTask. Apple says the dataTask is meant for small bits of data, not large amounts of data like you'd get from an image, and downloadTask would give you the ability to pause/resume the download if you wanted to add that functionality down the road.
| |
doc_1010
|
A: This method works for any executable located in a folder which is defined in the Windows PATH variable:
private string LocateEXE(String filename)
{
String path = Environment.GetEnvironmentVariable("path");
String[] folders = path.Split(';');
foreach (String folder in folders)
{
if (File.Exists(folder + filename))
{
return folder + filename;
}
else if (File.Exists(folder + "\\" + filename))
{
return folder + "\\" + filename;
}
}
return String.Empty;
}
Then use it as follows:
string pathToExe = LocateEXE("example.exe");
Like Fredrik's method, it only finds paths for some executables.
A: I used the CurrentVersion\Installer\Folders registry key. Just pass in the product name.
private string GetAppPath(string productName)
{
const string foldersPath = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\Folders";
var baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64);
var subKey = baseKey.OpenSubKey(foldersPath);
if (subKey == null)
{
baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32);
subKey = baseKey.OpenSubKey(foldersPath);
}
return subKey != null ? subKey.GetValueNames().FirstOrDefault(kv => kv.Contains(productName)) : "ERROR";
}
A: None of the answers worked for me. After hours of searching online, I was able to successfully get the installation path. Here is the final code.
public static string checkInstalled(string findByName)
{
string displayName;
string InstallPath;
string registryKey = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall";
//64 bits computer
RegistryKey key64 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64);
RegistryKey key = key64.OpenSubKey(registryKey);
if (key != null)
{
foreach (RegistryKey subkey in key.GetSubKeyNames().Select(keyName => key.OpenSubKey(keyName)))
{
displayName = subkey.GetValue("DisplayName") as string;
if (displayName != null && displayName.Contains(findByName))
{
InstallPath = subkey.GetValue("InstallLocation").ToString();
return InstallPath; //or displayName
}
}
key.Close();
}
return null;
}
you can call this method like this
string JavaPath = Software.checkInstalled("Java(TM) SE Development Kit");
and boom. Cheers
A: Using C# code you can find the path for some excutables this way:
private const string keyBase = @"SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths";
private string GetPathForExe(string fileName)
{
RegistryKey localMachine = Registry.LocalMachine;
RegistryKey fileKey = localMachine.OpenSubKey(string.Format(@"{0}\{1}", keyBase, fileName));
object result = null;
if (fileKey != null)
{
result = fileKey.GetValue(string.Empty);
fileKey.Close();
}
return (string)result;
}
Use it like so:
string pathToExe = GetPathForExe("wmplayer.exe");
However, it may very well be that the application that you want does not have an App Paths key.
A: Have a look at MsiEnumProductsEx
A: This stackoverflow.com article describes how to get the application associated with a particular file extension.
Perhaps you could use this technique to get the application associated with certain extensions, such as avi or wmv - either Medial Player or in your case VLC player?
| |
doc_1011
|
I have add each of the variable into the dataTable like this
dt.Rows.Add(product_name, first_Column, third_Column, run_time, inspected, pass, reject, invalid, yield, start_time, stop_time);
And this is where I display the data from the DataTable. But how do I get the date from that?
foreach(DataRow row1 in dt.Rows)
{
//string product = string.Format("{0}",row1.ItemArray[0]); //row1.ItemArray[0];
var productName = row1.ItemArray[0];
var firstColumn = row1.ItemArray[1];
var thirdColumn = row1.ItemArray[2];
var runTime = row1.ItemArray[3];
var Inspected = row1.ItemArray[4];
var Pass = row1.ItemArray[5];
var Reject = row1.ItemArray[6];
var Invalid = row1.ItemArray[7];
var Yield = row1.ItemArray[8];
var startTime = row1.ItemArray[9];
var stopTime = row1.ItemArray[10];
//if (startTime != null || startTime < startDateTemp)
//string startDateTemp = startTime.ToLongDateString();
Console.WriteLine();
Console.WriteLine(string.Format("|{0,5}|{1,5}|{2,5}|{3,5}|{4,5}|{5,5}|{6,5}|{7,5}|{8,5}|{9,5}|{10,5}|"
, productName, firstColumn, thirdColumn, runTime, Inspected, Pass, Reject, Invalid, Yield, startTime, stopTime));
//DateTime maxDate = Convert.ToDateTime(((from DataRow dr in dt.Rows orderby Convert.ToDateTime(dr[9]) descending select dr).FirstOrDefault()[9]));
//Console.WriteLine(maxDate);
}
A: You can use the DataTable's Select(string filterExpression, string sort) method instead of DataTable.Rows. That is:
foreach (DataRow row1 in dt.Select("", "start_time ASC"))
{
...
}
Have a look at: https://learn.microsoft.com/tr-tr/dotnet/api/system.data.datatable.select?view=netframework-4.7.2#System_Data_DataTable_Select_System_String_System_String_
| |
doc_1012
|
How can I save the data from the tables into objects? I have already read that the JDBC driver is installed through the plugin.
In the following code, you can see how I used to save objects before getting IntelliJ Ultimate. Is there a way to avoid writing this extra code, since there is already a connection to the database?
public class DBController {
private static Statement stmt;
private static String query;
static private Connection con = null;
public void initialiseDB() {
String connectionUrl = "jdbc:sqlserver://localhost\\Me:1433;database=TestDatabase";
try {
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
con = DriverManager.getConnection(connectionUrl);
if (con != null) {
System.out.println("Connected");
}
} catch (ClassNotFoundException | SQLException e) {
e.printStackTrace();
}
}
public static void readPeople() {
query = "select * " + "from " + "TestDatabase.dbo.Test_Table";
try {
stmt = con.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
String name = rs.getString("Name");
String year = rs.getString("YEAR");
// supposedly there is a Person object
Person person = new Person(name,year);
peopleList.add(person);
}
} catch (SQLException e) {
e.printStackTrace();
}
}
}
A: I don't think you are making the connection through your plugin in IntelliJ; you are connecting to the DB through your Java code. I suggest looking into the Hibernate or Spring Data JPA frameworks. They will help you a great deal in developing your application with a modern way of connecting to a database.
| |
doc_1013
|
<app-menu2></app-menu2>
<router-outlet></router-outlet>
<app-footer></app-footer>
I need some logic: I have two menu components, <app-menu> and <app-menu2>.
I am developing an application that has a login. After login, I want to see only <app-menu2>.
Right now both menus are displayed. I want to display <app-menu> before login and <app-menu2> after login; please give me some logic for this.
I tried to hide the app menus, but that does not work. Please give me a suggestion as soon as possible.
A: You should use 2 different pages/templates, one for authed and one for not authed.
See: https://angular.io/docs/ts/latest/guide/router.html
like
{
  path: 'Login',
  component: LoginComponent,
},
{
  path: 'Authed',
  component: YourLoggedInComponent,
  children: [
    {
      path: '',
      component: ChildComponent,
    }
  ]
}
A: I think you should look at how authentication work, and try to create a service to handle that properly. When your service is done, you should have a method .logged() exposed by the service and then you can use it in your component.
Actually your service is very unsafe, you are actually loading all users and check password within the client, anyone with a little knowledge should be able to bypass the login.
I suggest you to look at:
https://auth0.com/blog/angular-2-authentication/
and at:
http://jasonwatmore.com/post/2016/08/16/angular-2-jwt-authentication-example-tutorial
| |
doc_1014
|
I was wondering if the Python pandas library knows about this format? If not, is there another format (better than CSV) for exchange between pandas and R?
A: I used to think for the longest time that you needed an R instance to deserialize R objects -- loading a saved R object, or set of objects, amounts to reading a (binary, likely compressed) data stream and deserializing it.
But Davor proved me wrong. An existence proof is provided in his CPAN module Statistics-R-IO, which does this in Perl. Presumably someone with enough motivation could abstract this into a C library which many other projects, including Python, could load. Or use it to save pandas data for R.
Having a better data exchange would be nice. Otherwise, you can of course use language-agnostic interchange formats such as Protocol Buffers.
(Note: CPAN.org seems to be down/slow right now. Use Google Cache if need be.)
| |
doc_1015
|
fn main() {
let cool_vec: Vec<Option<Box<dyn CoolTrait>>> = vec![];
let vec_mx = Mutex::new(cool_vec);
let mythread = thread::spawn(move || {
let mxguard = vec_mx.lock().unwrap();
do_an_action(*mxguard);
});
}
Compiler complains on that last line with:
cannot move out of dereference of MutexGuard<'_, Vec<Option<Box<dyn CoolTrait>>>
move occurs because value has type Vec<Option<Box<dyn CoolTrait>>>, which does not implement the Copy trait
Is there any way to do this?
A: From the comment:
I'm trying to pass [the vector] back and forth between the threads
To send a value to a thread, you don't need a Mutex, you can just move the value into the closure that the thread will execute:
trait CoolTrait: Send {}
fn main() {
let cool_vec: Vec<Option<Box<dyn CoolTrait>>> = vec![];
let mythread = thread::spawn(move || {
do_an_action(&cool_vec);
});
// do other things...
mythread.join().unwrap();
}
Playground
Note that CoolTrait must require Send as the super-trait (or dyn CoolTrait needs to change to dyn CoolTrait + Send) in order to send the trait objects between threads.
To send the object back to the main thread once mythread is done, you can use the return value of the closure passed to thread::spawn():
fn main() {
let cool_vec: Vec<Option<Box<dyn CoolTrait>>> = vec![];
let mythread = thread::spawn(move || {
do_an_action(&cool_vec);
cool_vec
});
// cool_vec is now owned by mythread, do other things here
// ...
let cool_vec = mythread.join().unwrap();
// mythread is done and we got cool_vec back!
// ...
}
To exchange values between threads without waiting for the thread to complete, you can use channels.
| |
doc_1016
|
This query works :
...[query] => Array
(
[filtered] => Array
(
[query] => Array
(
[multi_match] => Array
(
[query] => Baden-Powell
[fields] => Array
(
[0] => title
[1] => field_auteur
[2] => body:value
)
)
)
...
The field can be empty, so I want to replace this part [query] [multi_match] by a variable, like this:
'query' => [
'filtered' => [
$querytitle,
...
And $querytitle =
$querytitle= "'query' => [
'multi_match' => [
'query' => $SearchSimple,
'fields' => ['title', 'field_auteur', 'body:value']
]
],
";
or $querytitle="";
The generated code is :
[query] => Array
(
[filtered] => Array
(
[0] => 'query' => [
'multi_match' => [
'query' => Baden-Powell,
'fields' => ['title', 'field_auteur', 'body:value']
]
],
The problem is the "Array [0]" before 'query'.
How can I integrate my variable $querytitle to build a request that works?
Thanks for your help
A: The problem comes from the fact that the $querytitle variable contains a string instead of simply containing an associative array.
Try like this instead:
$querytitle = Array('query' => [
'multi_match' => [
'query' => $SearchSimple,
'fields' => ['title', 'field_auteur', 'body:value']
]
]
);
Then you need to write your new query like this (i.e. without the square bracket after filtered):
'query' => [
'filtered' => $querytitle,
...
| |
doc_1017
|
Now everything works fine and I don't notice any lag, even though my logcat says 123 frames were skipped during startup.
I read somewhere that this message can be ignored if it doesn't exceed 300+ skipped frames, but now I'm not so sure anymore, because I have since read that even 1 skipped frame is too much.
I also compared the RAM usage with an mp3 player from the store and noticed that most media players stay below 10 MB of memory usage.
But mine reaches almost 50 MB, and I have no idea why; in the details you can see that there are processes with the same name and a lot of 'sandboxed_processes'.
So my question is whether it's OK that my app consumes almost 50 MB of memory, and what those 'sandboxed_processes' mean.
A: Question 1 :
It depends on how many resources your application uses and needs. You should take memory seriously and use as little as possible. The garbage collector will recycle allocated memory for you, but you should think about the lifespan of every object instance you create, and about their design, so as to minimize the structures. Last but not least, Android Studio allows you to profile the memory allocation of your app, so use it :)
Question 2 :
Application Sandbox from the official documentation
The Android platform takes advantage of the Linux user-based
protection to identify and isolate app resources. This isolates apps
from each other and protects apps and the system from malicious apps.
To do this, Android assigns a unique user ID (UID) to each Android
application and runs it in its own process.
Android uses this UID to set up a kernel-level Application Sandbox.
The kernel enforces security between apps and the system at the
process level through standard Linux facilities, such as user and
group IDs that are assigned to apps. By default, apps can't interact
with each other and have limited access to the operating system. For
example, if application A tries to do something malicious, such as
read application B's data or dial the phone without permission (which
is a separate application), then the operating system protects against
this behavior because application A does not have the appropriate user
privileges. The sandbox is simple, auditable, and based on decades-old
UNIX-style user separation of processes and file permissions.
Because the Application Sandbox is in the kernel, this security model
extends to native code and to operating system applications. All of
the software above the kernel, such as operating system libraries,
application framework, application runtime, and all applications, run
within the Application Sandbox. On some platforms, developers are
constrained to a specific development framework, set of APIs, or
language in order to enforce security. On Android, there are no
restrictions on how an application can be written that are required to
enforce security; in this respect, native code is as sandboxed as
interpreted code.
| |
doc_1018
| ||
doc_1019
|
accepts_nested_attributes_for :questions, :reject_if => lambda { |a| a[:content].blank? }
But it's rejecting even when the textbox is not empty.
How can I make it to reject only if there is nothing entered into the textbox ?
A: Are you confusing reject_if with record validation? The reject_if merely tells the app to ignore that set of nested attributes if a condition is true. In your case, the question's attributes will be ignored if the question's content is blank. If you want to validate or otherwise ensure that the question record(s) have a non blank value for content, you'd put validation in your question model.
You also might consider changing lambda{} to proc{}.
A: reject_if will save the parent object and any number of child objects, rejecting only those that fail the reject_if condition. If this is what you want, then it is fine. I suggest debugging a little bit; put in a print statement or something, maybe
lambda { |a| puts a.inspect; a[:content].blank? }
If you want the whole nested object to save all at once, then use validations.
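The lambda's semantics can be exercised in plain Ruby, outside Rails; the check below approximates ActiveSupport's String#blank? with strip/empty?, and the attribute hashes are purely illustrative:

```ruby
# reject_if receives the attribute hash for one nested record and
# returns true when that record should be ignored (not saved).
reject_if = ->(a) { a[:content].to_s.strip.empty? }

submitted = [
  { content: "What is Ruby?" },  # kept
  { content: "" },               # rejected (empty)
  { content: "   " },            # rejected (whitespace only)
  { content: nil },              # rejected (nil -> "")
]

# This mirrors what accepts_nested_attributes_for does with the
# nested attribute sets before building child records.
kept = submitted.reject { |attrs| reject_if.call(attrs) }
puts kept.inspect  # only the first hash survives
```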
| |
doc_1020
|
The thread at (Query partial entities with JPA) provides a possible solution by tagging each attribute with a fetch type, although I'm not sure that would work in my OneToMany case.
Any assistance is greatly appreciated.
I am using a Dynamic entity graph like so:
Example slightly modified from (http://www.thoughts-on-java.org/jpa-21-entity-graph-part-2-define/)
EntityGraph<Order> graph = this.em.createEntityGraph(Order.class);
graph.addAttributeNodes("items");
Map<String, Object> hints = new HashMap<String, Object>();
hints.put("javax.persistence.loadgraph", graph);
this.em.find(Order.class, orderId, hints);
An example of the data-structure is as follows:
The Order entity:
@Entity
public class Order implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id", updatable = false, nullable = false)
private Long id = null;
@Version
@Column(name = "version")
private int version = 0;
@Column
private String orderNumber;
@OneToMany(mappedBy = "order")
private Set<OrderItem> items = new HashSet<OrderItem>();
...
The OrderItem entity:
@Entity
public class OrderItem implements Serializable
{
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id", updatable = false, nullable = false)
private Long id = null;
@Version
@Column(name = "version")
private int version = 0;
@Column
private int quantity;
@ManyToOne
private Order order;
@ManyToOne(fetch = FetchType.LAZY)
private Product product;
| |
doc_1021
|
{
"schoolConfig": [
{
"schoolTypeCode": "C1",
"schools": [
{
"schoolId": 456,
"config": [
{
"name": "Classes",
"value": [
{
"id": 1
},
{
"id": 2
}
]
}
]
},
{
"schoolId": 123,
"config": [
{
"name": "Classes",
"value": [
{
"id": 11
}
]
}
]
}
]
},
{
"schoolTypeCode": "C2",
"schools": [
{
"schoolId":50,
"config": [
{
"name": "Classes",
"value": [
{
"id": 12
}
]
}
]
},
{
"schoolId": 10,
"config": [
{
"name": "Classes",
"value": [
{
"id": 10
}
]
}
]
}
]
}
]
}
I want to append to the JSON file which will change the config values for any filtered result. So, for example, the output JSON will have:
"value": [
{
"id": 1
},
{
"id": 2
},
{
"id": 5
}
]
The C# code written to replace the existing JSON with the new one is:
string json = File.ReadAllText(jsonFilePath);
dynamic jsonObj = Newtonsoft.Json.JsonConvert.DeserializeObject(json);
JToken classes = jsonObj.SelectToken("$.schoolConfig[?(@.schoolTypeCode == 'C1')].schools[?(@.schoolId == 456)].config[?(@.name == 'Classes')]");
List<JToken> appList = classes["value"].ToList();
var itemToAdd = new JObject();
itemToAdd["id"] = 5;
appList.Add(itemToAdd);
classes["value"] = Newtonsoft.Json.JsonConvert.SerializeObject(appList).ToString();
string output = Newtonsoft.Json.JsonConvert.SerializeObject(jsonObj, Newtonsoft.Json.Formatting.Indented);
File.WriteAllText(jsonFilePath, output);
The file gets modified except that the value is shown in one line (with no indentation or formatting) as
"value": "[{\"id\":1},{\"id\":2},{\"id\":5}]"
How do I ensure that the JSON file does not show extra backslashes before the quotes and is rendered in proper format as written above. Please advise.
A: The problem is that you are extracting the value array from the classes object as a List<JToken> instead of as a JArray. Then you are serializing the list to a string before adding it back to classes. If you just cast it to JArray instead of converting it to List<JToken>, you can modify the JArray directly.
Change this line:
List<JToken> appList = classes["value"].ToList();
to this:
JArray appList = (JArray)classes["value"];
and remove this line:
classes["value"] = Newtonsoft.Json.JsonConvert.SerializeObject(appList).ToString();
Working demo: https://dotnetfiddle.net/TO2zqt
| |
doc_1022
|
In order to fix the alignment issue, I updated the card template to display each line item as a row, using a ColumnSet for each of the lines. Please find the template below:
{
"contentType": "application/vnd.microsoft.card.adaptive",
"content": {
"type": "AdaptiveCard",
"version": "1.0",
"body": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "medium",
"weight": "bolder",
"text": "August 2022 bill",
"wrap": true
}
]
},
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "medium",
"weight": "bolder",
"text": "£83.46",
"horizontalAlignment": "right"
}
]
}
]
},
{
"type": "TextBlock",
"size": "small",
"text": "From 28/07/2022 to 29/08/2022",
"spacing": "medium",
"separator": true
},
{
"type": "TextBlock",
"size": "small",
"text": "0123456789",
"spacing": "small"
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "Tariff charges",
"wrap": true
}
]
},
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "£20.00",
"horizontalAlignment": "right",
"wrap": true
}
]
}
]
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "Minutes",
"wrap": true
}
]
},
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "£0.55",
"horizontalAlignment": "right",
"wrap": true
}
]
}
]
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "Messages",
"wrap": true
}
]
},
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "£0.30",
"horizontalAlignment": "right",
"wrap": true
}
]
}
]
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "Charges when abroad",
"wrap": true
}
]
},
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "£2.30",
"horizontalAlignment": "right",
"wrap": true
}
]
}
]
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "Premium & Information services",
"wrap": true
}
]
},
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "£1.65",
"horizontalAlignment": "right",
"wrap": true
}
]
}
]
},
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "Balance brought forward",
"wrap": true
}
]
},
{
"type": "Column",
"width": "stretch",
"items": [
{
"type": "TextBlock",
"size": "small",
"text": "£58.66",
"horizontalAlignment": "right",
"wrap": true
}
]
}
]
}
]
}
}
After updating the template, the data gets truncated on iOS devices but displays well on Android. Below is the image for iOS:
I am unable to figure out the inconsistent behavior across devices. Is there any specific property that causes this behavior? (IMO it should behave the same across devices.)
Or is there any better way to do this to avoid inconsistencies across devices?
| |
doc_1023
|
[5;17H 0.029[5;40H 0.736[5;63H 9.557[7;17H 0.038[7;40H 0.001 [7;63H 0.008[9;17H-34.199[9;40H 25.800[9;63H 13.799[14;17H -4.623[14;40H 0.597[14;63H218.920[19;14H
This serial data actually has the escape character '\x1b' before each open bracket. How do I get rid of the escape sequences and the cursor-position codes (5;17H, ...) and just print the sensor data in x, y, z format line by line? Can somebody help me? Thank you.
I'm using python serial code:
import serial
ser = serial.Serial('COM9', 115200, bytesize=8, timeout=0)
while True:
data = ser.read(size=8).decode("utf-8")
s = str(data)
print(data)
ser.close()
A: Each sensor data record starts with \033, so split at this, for instance:
data_list = data.split('\033')
for v in data_list:
    print(v)
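If the terminal codes vary, a regex is another option. A sketch (the sample bytes are taken from the question's output) that strips CSI sequences like \x1b[5;17H and keeps only the numeric fields:

```python
import re

# Strip ANSI/VT100 escape sequences from a chunk of serial data and
# print the remaining numeric fields. The sample below mimics the
# question's output; replace it with bytes read from the serial port.
raw = "\x1b[5;17H 0.029\x1b[5;40H 0.736\x1b[5;63H 9.557\x1b[7;17H 0.038"

# CSI sequences: ESC '[' followed by parameter bytes and a final letter.
ansi_csi = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")
fields = ansi_csi.sub(" ", raw).split()
print(fields)  # ['0.029', '0.736', '9.557', '0.038']
```

The resulting fields could then be grouped in threes to print x, y, z per line.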
| |
doc_1024
|
INSERT INTO Products (code, cateogyr_code, product_category, product_name, description, price) VALUES ('0','0','null','null','null')'0.0')
The form is as follows:
<form action="controller" method="POST">
<input type="hidden" name="action" value="add">
<table>
<tr><td>Code</td><td><input name="code"></td></tr>
<tr><td>Name</td><td><input name="product_name"></td></tr>
<tr><td>Category Code</td><td><input name="category_code"></td></tr>
<tr><td>Category</td><td><input name="product_category"></td></tr>
<tr><td>Description</td><td><input name="description"></td></tr>
<tr><td>Price</td><td><input name="price" ></td></tr>
<tr><td colspan="2"><input type="submit" value="Save"></td></tr> </table> </form>
And this is my controller method:
else if (action.equals("add")) {
Product newProduct= new Product();
dao.addProduct(newProduct);
List<Product> products = dao.findAll();
address = "listproduct.jsp";
request.setAttribute("products", products);
and this is my sql method
public void addProduct(Product product) {
String sql = "INSERT INTO Products " +
"(code, category_code, product_category, product_name, description, price)" +
" VALUES (" +
"'" + product.getCode() + "'," +
"'" + product.getCategory_code() + "'," +
"'" + product.getCategory() + "'," +
"'" + product.getName() + "'," +
"'" + product.getDescription() + "')"+
"'" + product.getPrice() + "')";
System.out.println(sql);
}
Can someone please assist me with this?
A: This is not the proper way to do the insertion:
INSERT INTO Products (code, cateogyr_code, product_category, product_name, description, price) VALUES ('0','0','null','null','null')'0.0')
Assuming all fields are varchar use this way
INSERT INTO Products (code, cateogyr_code, product_category, product_name, description, price) VALUES ('0','0','null','null','null','0.0')
A: The first mistake is in your insert statement. Change it to:
String sql = "INSERT INTO Products " +
"(code, category_code, product_category, product_name, description, price)" +
" VALUES (" +
"'" + product.getCode() + "'," +
"'" + product.getCategory_code() + "'," +
"'" + product.getCategory() + "'," +
"'" + product.getName() + "'," +
"'" + product.getDescription() + "'," +
"'" + product.getPrice() + "')";
You are creating a new, empty Product object every time you click the Add button.
Product newProduct= new Product(); // This causes your problem.
This new Product() statement creates a new Product object in which all variables are set to their defaults. That means code = 0, category_code = 0, product_category = null, product_name = null, description = null and price = 0.
The solution: define a constructor with all parameters in the Product class (and populate it from the request parameters), or find some other way to set the Product object's fields before saving it.
A: When you build the INSERT statement as a string, single quotes inside values must be escaped (e.g. as \' in MySQL) instead of written as a bare '.
| |
doc_1025
|
$(document).ready(function(){
// browser window scroll position (in pixels) where the button will appear
var offset = 200,
// duration of the animation (in ms)
scroll_top_duration = 700,
// bind with the button
$back_to_top = $('.back-to-top');
// display and hide the button
$(window).scroll(function(){
( $(this).scrollTop() > offset ) ? $back_to_top.addClass('make-visible-btt') : $back_to_top.removeClass('make-visible-btt');
});
//smooth scroll to top
$back_to_top.on('click', function(event){
event.preventDefault();
$('body,html').animate({
scrollTop: 0 ,
}, scroll_top_duration
);
});
});
$(document).ready(function() {
// browser window scroll position (in pixels) where the button will appear
var offset = 200,
// duration of the animation (in ms)
scroll_top_duration = 700,
// bind with the button
$back_to_top = $('.back-to-top');
// display and hide the button
$(window).scroll(function() {
($(this).scrollTop() > offset) ? $back_to_top.addClass('make-visible-btt'): $back_to_top.removeClass('make-visible-btt');
});
//smooth scroll to top
$back_to_top.on('click', function(event) {
event.preventDefault();
$('body,html').animate({
scrollTop: 0,
}, scroll_top_duration);
});
});
.back-to-top {
position: fixed;
bottom: 20px;
right: 20px;
display: inline-block;
height: 40px;
width: 40px;
background: url(../images/back-to-top.png) no-repeat;
background-size: contain;
overflow: hidden;
text-indent: 100%;
white-space: nowrap;
border: 1px solid #aaa;
visibility: hidden;
opacity: 0;
transition: opacity .3s 0s, visibility 0s .3s;
}
.make-visible-btt {
visibility: visible;
opacity: 1;
transition: opacity 1s 0s, visibility 0s 0s;
}
.section {
border: 1px solid black;
background-color: #ededed;
height: 200px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<a href="#last">jump to last section</a>
<div class="section"></div>
<div class="section"></div>
<div class="section"></div>
<div class="section"></div>
<div class="section" id="last"></div>
<a href="/my-web-page/" class="back-to-top">Back to Top</a>
The code works fine. When users click the "back to top" button on the page, they're automatically smooth scrolled to the top. All good.
The code above, however, doesn't work for in-page links.
<a href="/my-web-page/#section-3">text text text</a>
So if a user clicks on a link like the one above, the pages instantly jumps to that section (which is the default behavior).
I added this code to fix that problem:
$(document).ready(function(){
$('a[href*="\\#"]').on('click', function(event){
var href = $(event.target).closest('a').attr('href'),
skip = false;
if (!skip) {
event.preventDefault();
$('html,body').animate({scrollTop:$(this.hash).offset().top}, 500);
}
});
});
$(document).ready(function() {
// browser window scroll position (in pixels) where the button will appear
var offset = 200,
// duration of the animation (in ms)
scroll_top_duration = 700,
// bind with the button
$back_to_top = $('.back-to-top');
// display and hide the button
$(window).scroll(function() {
($(this).scrollTop() > offset) ? $back_to_top.addClass('make-visible-btt'): $back_to_top.removeClass('make-visible-btt');
});
//smooth scroll to top
$back_to_top.on('click', function(event) {
event.preventDefault();
$('body,html').animate({
scrollTop: 0,
}, scroll_top_duration);
});
});
$(document).ready(function(){
$('a[href*="\\#"]').on('click', function(event){
var href = $(event.target).closest('a').attr('href'),
skip = false;
if (!skip) {
event.preventDefault();
$('html,body').animate({scrollTop:$(this.hash).offset().top}, 500);
}
});
});
.back-to-top {
position: fixed;
bottom: 20px;
right: 20px;
display: inline-block;
height: 40px;
width: 40px;
background: url(../images/back-to-top.png) no-repeat;
background-size: contain;
overflow: hidden;
text-indent: 100%;
white-space: nowrap;
border: 1px solid #aaa;
visibility: hidden;
opacity: 0;
transition: opacity .3s 0s, visibility 0s .3s;
}
.make-visible-btt {
visibility: visible;
opacity: 1;
transition: opacity 1s 0s, visibility 0s 0s;
}
.section {
border: 1px solid black;
background-color: #ededed;
height: 200px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<a href="#last">jump to last section</a>
<div class="section"></div>
<div class="section"></div>
<div class="section"></div>
<div class="section"></div>
<div class="section" id="last"></div>
<a href="/my-web-page/" class="back-to-top">Back to Top</a>
So now that's fixed.
But that second code block has created two new problems:
*
*Links with a fragment identifier (e.g., #section-of-page) no longer update in the browser address bar. For example, within a page, on click, the page does scroll smoothly to the target section (so it works), but the web address stays fixed at www.website.com/whatever, when it should update to www.website.com/whatever#section-of-page.
*Links with a fragment identifier don't work across pages. In other words, /this-web-page#section-of-page works fine. But /another-web-page#section-of-page and www.another-website.com/whatever#section-of-page both fail (click does nothing).
These problems didn't exist before adding that second code block.
Looking for some guidance on how to fix these problems.
Also if you can suggest a way to integrate all functions into one block of code, that would be great.
Lastly, I know about the CSS scroll-behavior property, but it's still very rudimentary (can't adjust any settings), so I'd rather stick with JS for now.
Thanks.
A: You can check if the href points to an internal location by creating a URL object from it and checking its host against window.location.host. Then call event.preventDefault and perform smooth scrolling only in that case.
The callback function (third argument) to $.animate can be used to set the hash properly after the scrolling effect.
if (new URL(href, window.location).host === window.location.host) {
event.preventDefault();
$('html,body').animate({
scrollTop: $(this.hash).offset().top
}, 500, function() {
window.location.hash = new URL(href, window.location).hash;
});
}
$(document).ready(function() {
// browser window scroll position (in pixels) where the button will appear
var offset = 200,
// duration of the animation (in ms)
scroll_top_duration = 700,
// bind with the button
$back_to_top = $('.back-to-top');
// display and hide the button
$(window).scroll(function() {
($(this).scrollTop() > offset) ? $back_to_top.addClass('make-visible-btt'): $back_to_top.removeClass('make-visible-btt');
});
//smooth scroll to top
$back_to_top.on('click', function(event) {
event.preventDefault();
$('body,html').animate({
scrollTop: 0,
}, scroll_top_duration);
});
});
$(document).ready(function(){
$('a[href*="\\#"]').on('click', function(event){
var href = $(event.target).closest('a').attr('href');
if (new URL(href, window.location).host === window.location.host) {
event.preventDefault();
$('html,body').animate({
scrollTop: $(this.hash).offset().top
}, 500, function() {
window.location.hash = new URL(href, window.location).hash;
});
}
});
});
.back-to-top {
position: fixed;
bottom: 20px;
right: 20px;
display: inline-block;
height: 40px;
width: 40px;
background: url(../images/back-to-top.png) no-repeat;
background-size: contain;
overflow: hidden;
text-indent: 100%;
white-space: nowrap;
border: 1px solid #aaa;
visibility: hidden;
opacity: 0;
transition: opacity .3s 0s, visibility 0s .3s;
}
.make-visible-btt {
visibility: visible;
opacity: 1;
transition: opacity 1s 0s, visibility 0s 0s;
}
.section {
border: 1px solid black;
background-color: #ededed;
height: 200px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<a href="#last">jump to last section</a>
<a href="https://example.com">External link</a>
<div class="section"></div>
<div class="section"></div>
<div class="section"></div>
<div class="section"></div>
<div class="section" id="last"></div>
<a href="/my-web-page/" class="back-to-top">Back to Top</a>
A: The 1st problem is caused by event.preventDefault();. If you remove this line, the browser URL will be updated accordingly.
If you see any problems, try setting the URL again after the animation completes.
For the syntax, refer to the docs:
$('html,body').animate({
scrollTop: $(href).offset().top
}, 500, function () {
window.location.hash = href;
});
For the 2nd problem, check whether the click callback is being hit by setting a debug point.
| |
doc_1026
|
Now I want this to close automatically when I click outside the pulldown.
Something like a lightbox or modal popup which closes if you click anywhere else on the page.
Currently I have to click the button again to close it. If I don't, and go elsewhere on the page, the dropdown stays visible (until I click it).
This is the code of the button:
$(function(){
$('#browse-btn').click(function(){
$('#browse-content').toggle('1000');
$(this).toggleClass('active');
if ($(this).hasClass('active')) $(this).find('span').html('▲')
else $(this).find('span').html('▼')
});
$(".scroll-top").scrollToTop();
$("#category_id").selectbox();
});
Is this possible?
thanks
A: I'm not exactly sure which elements you want to hide as you don't have a demo, so I cannot provide fully working code. However, you should do something like this:
$("body").click(function(event) {
if (event.target.id != "browse-btn") {
// Do something when there's a click outside of #browse-btn
// and the element you want to hide is currently visible
}
});
A: Using jquery this is the code I used for a similar case scenario sometime ago:
$(document).click(function(event) {
if(!$(event.target).closest('.pulldown').length) {
if($('.pulldown').is(":visible")) {
$('.pulldown').slideUp()
}
}
})
You can read more about this in the original post How to detect a click outside an element? submitted by Art.
A: You can attach a click event to all children of the body tag that removes the active class, but you would want to make sure to unbind that event so it doesn't run on every click that doesn't have some sort of preventDefault on it. Something like this:
$(function(){
var hidePulldown = function(){
$('#browse-btn').removeClass('active');
$('body *').unbind("click", hidePulldown);
}
$('#browse-btn').click(function(){
$('#browse-content').toggle('1000');
$(this).toggleClass('active');
if ($(this).hasClass('active')) $(this).find('span').html('▲')
else {
$(this).find('span').html('▼');
$(document).on('click', 'body *', hidePulldown);
}
});
$(".scroll-top").scrollToTop();
$("#category_id").selectbox();
});
Also, I believe
$(document).on('click', element, function(){ /* function body */ })
is the preferred way to attach click events: $(document).on('click', '#id', function() {}) vs $('#id').on('click', function(){})
A: This is what worked flawlessly for me after reading some of the answers here:
$(document).click(function(event) {
if(!$(event.target).closest('#menucontainer').length &&
!$(event.target).is('#menucontainer')) {
if($('#menucontainer').is(":visible")) {
$('#menucontainer').hide();
}
}
})
Thanks for pointing me in the right way!
| |
doc_1027
|
2nd file - It then includes the 3rd file, which is also a PHP file but has both a few lines of HTML and some JavaScript.
3rd file - In this php file I am including a js file as shown below:
<html><head>
<script type="text/javascript" src="abcd.js"></script>
</head></html>
But I am unable to include this abcd.js file. I am mainly including this file so that the 3rd PHP file can call a function that is defined in abcd.js.
| |
doc_1028
|
SELECT *
FROM `fclients` AS F
LEFT JOIN `fclients_sequens` AS FS ON F.category = FS.category
ORDER by FS.num
AND I get results:
Then, for a given id in the first table, I make this SQL query:
SELECT *, fclients.id as fclientsID
FROM `fclients` AS F
LEFT JOIN `fclients_sequens` AS FS ON F.category = FS.category
ORDER by FS.num
and I get an error:
#1054 - Unknown column 'fclients.id' in 'field list'
Please tell me how to get the id of the first table (e.g. id = 37)?
A: You need to use the alias F (not the original table name fclients) when you reference the column id:
SELECT *, F.id AS fclientsID ...
| |
doc_1029
|
<string name="id">Hi \u0026</string>
^This worked. and showed as: Hi &.
But this does not work with this emoji: 👈
<string name="id">Hi \u1F448</string>
https://www.compart.com/en/unicode/U+1F448
How can I make it work with 👈?
A: Using HTML encoding is not working for me, at least when using the HTML code &#2764; for the red heart emoji in the translation editor. I copied the Unicode number from https://emojipedia.org/emoji/%E2%9D%A4/
However, just pasting it as the emoji character from clipboard ❤️ direct into the translations editor worked just fine, and perhaps I needn't have worried at all.
A: Use the HTML entity (decimal), i.e. &#128072;, in strings.xml and use it in your app.
So your string will be:
<string name="emoji">Hi &#128072;</string>
Output:
For more information please check here
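As a side note on why \u1F448 fails: Java-style \u escapes take exactly four hex digits, so code points above U+FFFF must be written as a UTF-16 surrogate pair (\uD83D\uDC48 for U+1F448). A small Python sketch of that computation:

```python
# Compute the UTF-16 surrogate pair for a supplementary code point.
# Java/Android "\u" escapes take exactly 4 hex digits, so U+1F448 must
# be written as two escapes: the high and low surrogate.
def surrogate_pair(cp):
    cp -= 0x10000                  # offset into the supplementary range
    high = 0xD800 + (cp >> 10)     # top 10 bits -> high surrogate
    low = 0xDC00 + (cp & 0x3FF)    # bottom 10 bits -> low surrogate
    return high, low

high, low = surrogate_pair(0x1F448)
print(f"\\u{high:04X}\\u{low:04X}")  # \uD83D\uDC48
```

So an escaped form like <string name="id">Hi \uD83D\uDC48</string> should also work, while pasting the emoji character directly (as noted above) avoids the escapes entirely.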
| |
doc_1030
|
rule "two tickets purchased by same person"
when
$t1 : Ticket($o : owner)
$t2 : Ticket(owner == $o, this != $t1)
then
do something...
end
There are a multitude of ways to do this with non-Drools constructs (a member flag the rule flips, for example), but is there a way to do this type of check with a native Drools construct (and keep the facts in the knowledge base), rather than use a Java workaround?
Thanks
A: The standard design pattern is to use a key attribute to force an order on the pair. Tickets might have a serial number:
$t1 : Ticket($o : owner, $sno: serialNumber )
$t2 : Ticket(owner == $o, serialNumber > $sno )
This eliminates the need for the constraint forcing different objects.
But three or more tickets would still create a similar problem. Therefore, you might also keep track of the tickets of an owner:
rule "insert Owner"
when
$t: Ticket( $o: owner )
not TicketSet( owner == $o )
then
insert( new TicketSet( $t ) );
end
rule "more tickets of one Owner"
when
$t: Ticket( $o: owner )
$s: TicketSet( owner == $o, set not contains $t )
then
modify( $s ){ add( $t ) }
// $o buys n-th ticket
end
| |
doc_1031
|
+---------+---------+-------+
| 0 | 1 | Equal |
+---------+---------+-------+
| 3200 | 3200 | True |
+---------+---------+-------+
| 3200.01 | 3200.01 | True |
+---------+---------+-------+
| 8080.63 | 8080.63 | False |
+---------+---------+-------+
| 3408.81 | 3408.81 | True |
+---------+---------+-------+
| 7080.01 | 7080.01 | False |
+---------+---------+-------+
| 2400.00 | 2400.00 | True |
+---------+---------+-------+
The Equal column stores True/False values: True if columns 0 and 1 are equal and False if not. The problem is that it does not always work; sometimes it puts False in a random row even though it should be True and the numbers are equal.
Column dtypes:
0 float64
1 float64
Equal bool
dtype: object
I create Equal this way:
df['Equal'] = df[0].ge(df[1])
I also tried:
df['Equal'] = df[0].eq(df.iloc[:, -2])
Same problem all the time
It randomly returns False even though there shouldn't be False and now I am not sure if the values it returns as True are really correct or if it randomly gave True when there should be False, for example. How do I fix it?
A: First, for equality use Series.eq:
df['Equal'] = df[0].eq(df[1])
working like:
df['Equal'] = df[0] == df[1]
But because you are working with floats, there can be accuracy problems, so use numpy.isclose:
df['Equal'] = np.isclose(df[0], df[1])
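A minimal sketch of the difference, using the classic 0.1 + 0.2 example (the sum is not bit-identical to the literal 0.3, which is the same effect that makes rows like 8080.63 compare unequal):

```python
import numpy as np
import pandas as pd

# Column 1 holds literals; column 0 holds an arithmetically derived value.
# 0.1 + 0.2 == 0.30000000000000004 in float64, so exact equality fails.
df = pd.DataFrame({0: [3200.0, 0.1 + 0.2], 1: [3200.0, 0.3]})

exact = df[0].eq(df[1])           # bit-exact comparison
close = np.isclose(df[0], df[1])  # comparison within a tolerance

print(exact.tolist())  # [True, False]
print(close.tolist())  # [True, True]
```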
| |
doc_1032
|
We are using Struts2 as framework, Tomcat 6 as server and Openejb for the database connection.
I tried to find out why it's going out of memory with the Eclipse extension Memory Analyzer, but I have to say that it's not easy to understand.
Here is the report of Memory Analyser :
I'm not really sure what raises this error, but is it possible that the database connections are not closed, and that the map containing these connections becomes too big for the JVM?
I resolved the problem by giving more memory to the JVM, but I'm not sure that's the right way to solve it.
Can anyone help me?
Thanks
A: According to your snapshots, your problem comes from the field resourceEntries of the WebappClassLoader which is according to the javadoc:
The cache of ResourceEntry for classes and resources we have loaded,
keyed by resource name.
In other words, it is a cache that stores the meta information of all the resource files and classes loaded through the ClassLoader of your web application, in order to avoid scanning the whole classpath on each call, which is potentially very slow, especially when you have a lot of jar files.
As far as I can see from the source code, there is not much you can do to work around it except increasing your heap size, as you already did.
| |
doc_1033
|
A: The discriminator column is used to define the type of an entity in TPH inheritance, and EF can never change it. It is like inheritance in any object-oriented language: once you create an object of some type, you cannot make it a different type. You can only cast it to a parent type, but it will still be an instance of the original type. The only way to change it to a different type is to create a new instance of the new type and somehow copy the values from the first object to the new one.
So if you need to change the discriminator, you must do it without EF, i.e. by using plain ADO.NET and SQL. But if your change does not follow the other rules in your entity model, you will break EF functionality and your application will not work.
| |
doc_1034
|
For instance, if I have a dataset with two columns one of which is a type of fruit and the other is the cost of that one particular fruit (each fruit does not necessarily cost the same), then I would like to display a histogram of the fruit prices for all fruits. I want to change the color of the histogram whenever the cost value exceeds the third quartile. The tricky part is that I would like a pop up window to show up whenever a user either clicks or hovers over the bins that displays the frequency of each fruit in that particular bin.
I feel like the color situation could be fixed using ifelse(), but I'm not exactly sure how to go about doing the pop up window as described.
The kind of pop up that I would like to show up would ideally be something like this:
Apple: 3
Banana: 2
Here is a short sample code that will hopefully help:
Fruit <- c("Apple", "Apple", "Banana", "Grape", "Orange", "Grape", "Apple", "Banana", "Banana", "Banana")
Cost <- c(rep(sample(1:9), 1), 10)
Data <- as.data.frame(cbind(Fruit,Cost))
Data$Cost <- as.numeric(Data$Cost)
library(shiny)
ui <- fluidPage(
titlePanel("Example Code"),
sidebarLayout(
sidebarPanel(
sliderInput("bins",
"Number of bins:",
min = 1,
max = 50,
value = 30)
),
mainPanel(
plotOutput("distPlot")
)
)
)
# Define server logic required to draw a histogram
server <- function(input, output) {
output$distPlot <- renderPlot({
hist(Data$Cost, main = "Cost of Fruit", col = "skyblue", breaks = sqrt(nrow(Data)))
})
}
# Run the application
shinyApp(ui = ui, server = server)
| |
doc_1035
|
trigger_file_org = '''
10.792001 283292 30
11.286001 296136 9
11.792001 309292 130
17.898001 468048 23
18.390001 480840 9
18.896001 493996 123
24.988001 652388 73
25.482001 665232 9
25.988001 678388 173
34.026002 887376 10
34.518002 900168 9
35.024002 913324 110
40.676002 1060276 82
41.170002 1073120 9
41.676002 1086276 182
48.994002 1276544 43
49.488002 1289388 9
49.994002 1302544 143
56.032003 1459532 30
56.524003 1472324 9
57.032003 1485532 130
'''
trigger_file = trigger_file_org.splitlines()
new_scenario_org = '''
30 7503
23 6412
73 1307
10 3901
82 4118
43 7404
30 3403
'''
scenario = new_scenario_org.splitlines()
Now, the order of the two-digit codes in the first column of the scenario file is the same as the order of the two-digit codes in the last column of the trigger file (30 -> 23 -> 73 -> 10 -> 82 -> 43 -> 30), but in the trigger file there are other numbers in between, and the distance is not always the same.
Moreover, the two-digit codes will repeat eventually, so they do not identify a row uniquely.
What I want to do is compare the lines of the two files in descending order, and when the two-digit codes from the trigger file are found and matched, I want the four-digit codes from the scenario file be attached to that line, like this:
10.792001 283292 30 7503
11.286001 296136 9
11.792001 309292 130
17.898001 468048 23 6412
18.390001 480840 9
18.896001 493996 123
24.988001 652388 73 1307
25.482001 665232 9
25.988001 678388 173
34.026002 887376 10 3901
34.518002 900168 9
35.024002 913324 110
40.676002 1060276 82 4118
41.170002 1073120 9
41.676002 1086276 182
48.994002 1276544 43 7404
49.488002 1289388 9
49.994002 1302544 143
56.032003 1459532 30 3403
56.524003 1472324 9
57.032003 1485532 130
So far the code I have is:
iterations = 0
trig_item_count = 0
for trig_i in range(len(trigger_file)):
curr_trigger_line = str.split(trigger_file[trig_i])
#print(curr_trigger_line)
if re.match('^[1-9][0-9]$', curr_trigger_line[2]):
trig_item_count = trig_item_count + 1
for sce_i in range(len(scenario)):
iterations = iterations + 1 # this is 129 600 total in the end bc it iterates through the trigger file and then the scenario file
curr_sce_line = str.split(scenario[sce_i])
if curr_trigger_line[2] == curr_sce_line[0]:
line_where_match__was_found = trig_i
if trig_i > line_where_match__was_found:
print("Hurray")
This code finds all the occurrences of the two-digit code, but it iterates through the entire scenario file every time. I understand why this is wrong, but I don't know how to tell Python to do the search in a descending order and to ignore the occurrences that have already been matched.
Any help is greatly appreciated!
A: trigger = '''\
10.792001 283292 30
11.286001 296136 9
11.792001 309292 130
17.898001 468048 23
18.390001 480840 9
18.896001 493996 123
24.988001 652388 73
25.482001 665232 9
25.988001 678388 173
34.026002 887376 10
34.518002 900168 9
35.024002 913324 110
40.676002 1060276 82
41.170002 1073120 9
41.676002 1086276 182
48.994002 1276544 43
49.488002 1289388 9
49.994002 1302544 143
56.032003 1459532 30
56.524003 1472324 9
57.032003 1485532 130
'''.splitlines()
scenario = '''\
30 7503
23 6412
73 1307
10 3901
82 4118
43 7404
30 3403
'''.splitlines()
it = iter(scenario)
out = []
for line in trigger:
if line[-3] == ' ' and line[-2] != ' ':
key = line[-2:]
try:
key2, extra = next(it).split()
except StopIteration:
raise ValueError('scenario ended too soon')
if key2 != key:
raise ValueError('scenario key does not match')
line += ' ' + extra
out.append(line)
for line in out:
print(line)
# matches your desired output as given
| |
doc_1036
|
I write unit tests using mostly APITestCase from rest_framework.test, TestCase from django.test, and sometimes TestCase from test_plus.
When I execute my tests I typically use commands like
python manage.py test
python manage.py test somemodule.tests.some_test_file.TestClass.specific_test_case
Or the above but with the keepdb flag to shorten the testing time
python manage.py test --keepdb
python manage.py test somemodule.tests.some_test_file.TestClass.specific_test_case --keepdb
How do I shift to using pytest gradually?
By gradually, I mean that long term I move to pytest and pytest commands, but in the meantime the tests already written can still be run with a single command, because my CI/CD on CircleCI is still dependent on the tests passing.
I also use factory-boy if that's relevant.
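One common gradual path (a sketch, not from the original post; the settings module path is an assumption) is to install pytest and pytest-django and add a pytest.ini, leaving the existing TestCase classes unchanged, since pytest collects Django-style tests as-is:

```ini
# pytest.ini (project root). DJANGO_SETTINGS_MODULE is an assumed path.
[pytest]
DJANGO_SETTINGS_MODULE = myproject.settings
python_files = tests.py test_*.py *_tests.py
```

With this in place, pytest roughly replaces python manage.py test, pytest somemodule/tests/some_test_file.py::TestClass::specific_test_case targets a single test, and pytest --reuse-db corresponds to --keepdb, so the CircleCI job can keep running the old suite with a single command throughout the migration.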
| |
doc_1037
|
The code is,
Resources packageResources;
Context packageContext;
try
{
    packageResources = pm.getResourcesForApplication(packageName);
    packageContext = this.createPackageContext(packageName, Context.CONTEXT_INCLUDE_CODE + Context.CONTEXT_IGNORE_SECURITY);
}
catch(NameNotFoundException excep)
{
    // the package does not exist. move on to see if another exists.
}

Class layoutClass;
try
{
    // using reflection to get the layout class inside the R class of the package
    layoutClass = packageContext.getClassLoader().loadClass(packageName + ".R$layout");
}
catch (ClassNotFoundException excep1)
{
    // Less chances that class won't be there.
}

for( Field layoutID : layoutClass.getFields() )
{
    try
    {
        int id = layoutID.getInt(layoutClass);
        XmlResourceParser xmlResourceLayout = packageResources.getLayout(id);
        View v = new View(this, Xml.asAttributeSet(xmlResourceLayout));
        this.viewFlipper.addView(v);
    }
    catch (Exception excep)
    {
        continue;
    }
}
I get no errors, and I debugged and checked: the layout IDs are correct. However, my viewFlipper is just blank, and there are no warnings or errors that I can find.
A: Finally got it. It's actually simple!
Here is what I did:
In the target apk there are only resources and layouts, with no application or activity code. I created a class,
public final class ViewExtractor
{
    private static final int NUMBER_OF_LAYOUTS = 5;

    public static View[] getAllViews(Context context)
    {
        View[] result = new View[ViewExtractor.NUMBER_OF_LAYOUTS];
        result[0] = View.inflate(context, R.layout.layout_0, null);
        result[1] = View.inflate(context, R.layout.layout_1, null);
        result[2] = View.inflate(context, R.layout.layout_2, null);
        result[3] = View.inflate(context, R.layout.layout_3, null);
        result[4] = View.inflate(context, R.layout.layout_4, null);
        return result;
    }
}
Then in my current application I modified my earlier code. The modification takes place once the package has been verified to exist.
// If the package exists then get the resources within it.
// Use the method in the class to get the views.
Class<?> viewExtractor;
try
{
    viewExtractor = packageContext.getClassLoader().loadClass(packageName + ".ViewExtractor");
}
catch (ClassNotFoundException excep)
{
    continue;
}

View[] resultViews;
try
{
    Method m = viewExtractor.getDeclaredMethod("getAllViews", Context.class);
    resultViews = (View[])m.invoke(null, new Object[]{packageContext});
    for( View v : resultViews)
    {
        this.viewFlipper.addView(v);
    }
}
catch (Exception excep)
{
    excep.printStackTrace();
}
A: You are not inflating a layout. You are creating an empty View and adding it to your ViewFlipper.
A: I am currently doing this. It only works if I know the package name of the activity or fragment from the .apk that should provide the layout hierarchy (I call this the foreign context). You also need to know the name of the layout you want to inflate (e.g. R.layout.mylayout -> "mylayout")
Context c = createPackageContext(foreignPackageName,
Context.CONTEXT_INCLUDE_CODE|Context.CONTEXT_IGNORE_SECURITY); //create foreign context
int resId = c.getResources().getIdentifier(mylayoutName, "layout", foreignPackageName);
LayoutInflater myInflater = LayoutInflater.from(c); //Inflater for foreign context
View myLayout = myInflater.inflate(resId,null,false); //do not attach to a root view
| |
doc_1038
|
Code:
import whois
domains = ['http://www.example.com']
for dom in domains:
    domain = whois.Domain(dom)
    print domain.registrar
Error:
domain = whois.Domain(dom)
File "C:\Python27\lib\site-packages\whois\_3_adjust.py", line 12, in __init__
self.name = data['domain_name'][0].strip().lower()
TypeError: string indices must be integers, not str
Have you any idea what could be wrong? Or can you give me a better solution?
EDIT: I tried the pythonwhois module but it returns an error too.
EDIT2: According to one solution here, on SO, I've tried to use pywhois, this code raises an error too.
import pywhois
w = pywhois.whois('google.com')
w.expiration_date
ERROR:
w = pywhois.whois('google.com')
AttributeError: 'module' object has no attribute 'whois'
A: I've had issues with python-whois in Python 3, but Python 2 works fine for me using the following code.
First, I would recommend uninstalling any whois module(s) you might have installed. Both python-whois (0.6.1) and whois (0.7) use the same import whois, which created some additional confusion for me.
Next, install python-whois through Command Prompt, Terminal, etc.
pip install python-whois
Once installed, enter the following code in your preferred python IDE.
"""
Python = 2.79
OS = Windows 10
IDE = PyCharm 4.5
PyPIPackage = python-whois 0.6.1
"""
import whois
url = 'example.com'
w = whois.whois(url)
print w
The result is a dictionary.
{
"updated_date": "2015-08-14 00:00:00",
"status": [
"clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited",
"clientTransferProhibited https://icann.org/epp#clientTransferProhibited",
"clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited"
],
"name": null,
"dnssec": null,
"city": null,
"expiration_date": "2016-08-13 00:00:00",
...
...
...
"address": null,
"name_servers": [
"A.IANA-SERVERS.NET",
"B.IANA-SERVERS.NET"
],
"org": null,
"creation_date": "1995-08-14 00:00:00",
"emails": null
}
A: Installation
pip install pythonwhois
You might need to execute pip3 install pythonwhois --user or something similar.
Code
import pythonwhois
def is_registered(site):
    """Check if a domain has an WHOIS record."""
    details = pythonwhois.get_whois(site)
    return not details['raw'][0].startswith('No match for')

names = ['google', 'af35foobar90']
for name in names:
    site = '{}.com'.format(name)
    print('{}: {}'.format(site, is_registered(site)))
A: The whois project has been moved to github, you can install it using pip install python-whois:
domains = ['http://www.example.com']
from whois import whois
print(whois(domains[0]))
{'updated_date': datetime.datetime(2014, 8, 14, 0, 0), 'status': ['clientDeleteProhibited http://www.icann.org/epp#clientDeleteProhibited', 'clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited', 'clientUpdateProhibited http://www.icann.org/epp#clientUpdateProhibited'], 'name': None, 'dnssec': None, 'city': None, 'expiration_date': datetime.datetime(2015, 8, 13, 0, 0), 'zipcode': None, 'domain_name': 'EXAMPLE.COM', 'country': None, 'whois_server': ['whois.enetica.com.au', 'whois.godaddy.com', 'whois.domain.com', 'whois.iana.org'], 'state': None, 'registrar': ['ENETICA PTY LTD', 'GODADDY.COM, LLC', 'DOMAIN.COM, LLC', 'RESERVED-INTERNET ASSIGNED NUMBERS AUTHORITY'], 'referral_url': ['http://www.enetica.com.au', 'http://registrar.godaddy.com', 'http://www.domain.com', 'http://res-dom.iana.org'], 'address': None, 'name_servers': ['A.IANA-SERVERS.NET', 'B.IANA-SERVERS.NET'], 'org': None, 'creation_date': datetime.datetime(1995, 8, 14, 0, 0), 'emails': None}
A: With pythonwhois, if you prefer, it could be:
>>> import pythonwhois # i'm using this http://cryto.net/pythonwhois
>>> domains = ['google.com', 'stackoverflow.com']
>>> for dom in domains:
... details = pythonwhois.get_whois(dom)
... print details['contacts']['registrant']
which returns a dictionary
{'city': u'Mountain View',
'fax': u'+1.6506188571',
'name': u'Dns Admin',
'state': u'CA',
'phone': u'+1.6502530000',
'street': u'Please contact contact- admin@google.com, 1600 Amphitheatre Parkway',
'country': u'US',
'postalcode': u'94043',
'organization': u'Google Inc.',
'email': u'dns-admin@google.com'}
{'city': u'New York',
'name': u'Sysadmin Team',
'state': u'NY',
'phone': u'+1.2122328280',
'street': u'1 Exchange Plaza , Floor 26',
'country': u'US',
'postalcode': u'10006',
'organization': u'Stack Exchange, Inc.',
'email': u'sysadmin-team@stackoverflow.com'}
Edit: I checked your whois.
This code worked for me.
>>> import whois
>>> domains = ['google.com', 'stackoverflow.com']
>>> for dom in domains:
... domain = whois.query(dom)
... print domain.name, domain.registrar
...
google.com MARKMONITOR INC.
stackoverflow.com NAME.COM, INC.
This API uses the Unix/Linux whois shell command and, as shown here, you shouldn't add http:// before the domain name. If you have a Unix/Linux machine, test this:
$ whois google.com
Whois Server Version 2.0
Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information ...
but it fails with http (maybe because http(s) is just a protocol type and has no relation to the domain name itself)
$ whois http://google.com
No whois server is known for this kind of object.
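Several answers point at the same root cause: the whois lookup wants a bare domain, not a URL. Stripping the scheme first can be sketched like this (a generic helper, not part of any whois package; Python 3 syntax):

```python
from urllib.parse import urlparse

def to_domain(url):
    """Return a bare domain ('example.com') from a URL or domain string."""
    # urlparse only fills netloc when a scheme separator is present,
    # so prepend '//' for bare domain strings.
    parsed = urlparse(url if '//' in url else '//' + url)
    host = parsed.netloc
    return host[4:] if host.startswith('www.') else host

print(to_domain('http://www.example.com'))  # example.com
```

The cleaned string can then be handed to whichever whois function you settled on.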
A: I've checked the documentation and it worked for me. The domain name should be like mysite.com (not http://www.example.com)
>>> import whois
>>> domains = ['google.com']
>>>
>>> for dom in domains:
... domain = whois.query(dom)
... print domain.registrar
...
MARKMONITOR INC.
EDIT:1 I don't know why it is not working for others and getting errors.
I'm posting a screenshot of my terminal
| |
doc_1039
|
* I am the only developer in my team (stated to make it clear that there will be no changes from other team members to consider)
* I have a main branch, stable and deployed on the production server
* I created a feature branch and added some other features and changes to the existing business flow
* Everything looks fine; now I want to merge all these new features into the main branch
* I committed and synced to the server; all new changes are going to the feature branch
* I created a pull request and approved it; it deleted the feature branch from the server, but it still exists on my local machine
* The question is: how do I merge all changes into the main branch and delete the feature branch on my local machine as well?
A: We can delete the remote-tracking branches in VS with the command git config remote.origin.prune true, or by setting the "Prune remote branches during fetch" option (Team Explorer -> Settings -> Git Global Settings) to true.
The various prune options (git remote update --prune, git remote prune, git fetch --prune) only delete remote-tracking branches.
If we want to delete the local branches, we can only delete them manually.
You'll need to manually delete local branches you no longer want, or change or remove their upstream setting if the remote-tracking branch no longer exists.
For more details, you can read here: git fetch origin --prune doesn't delete local branches?
A: After completing the PR and having the remote feature branch deleted, you'll need to do a fetch into your local clone. By default, the remote tracking branches in the local clone aren't deleted. You can call "git fetch --prune" to do that cleanup.
If you have a local main branch, you'll need to pull it from the remote main branch to get it up to date.
If you would like to have fetch always prune, you can set a config option to force this behavior. Team Explorer includes the ability to set this in the UI. Team Explorer->Settings->Git Global Settings, and look for the "Prune remote branches during fetch" combo.
Hope this helps.
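Putting the answers together, the local cleanup after a completed PR is: update main, prune the stale remote-tracking ref, then delete the local branch. A self-contained sketch (branch names are placeholders, and a local bare repo stands in for the server):

```shell
set -e
# Stand-in setup: a local bare repo plays the server; in real life this is
# your hosted remote. Branch names ("main", "feature") are placeholders.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare server.git
git clone -q server.git work && cd work
git config user.email dev@example.com
git config user.name dev
echo v1 > file.txt && git add file.txt && git commit -qm "init"
git branch -M main
git push -q -u origin main

# Feature work, pushed, then "merged via PR" and deleted server-side.
git checkout -qb feature
echo v2 >> file.txt && git commit -qam "feature work"
git push -q -u origin feature
git push -q origin feature:main       # the PR merge (a fast-forward here)
git push -q origin --delete feature   # completing the PR removed the branch

# --- the actual local cleanup ---
git checkout -q main
git pull -q origin main     # bring local main up to date with the merge
git fetch -q --prune        # drop the stale origin/feature tracking ref
git branch -d feature       # delete the local branch (-d: only if merged)
```

`git branch -d` refuses to delete an unmerged branch, which makes it a safe default for this cleanup.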
| |
doc_1040
|
My code in the view:
@foreach (var currentFeature in Model.FacilityFeatures)
{
<li class="features-list-enclosure">
<span title="@currentFeature.CategoryName" class="features-list-enclosure__item">
@RenderImage(x => currentFeature.CategoryICON, new { title = @currentFeature.CategoryName }, isEditable: true)
</span>
</li>
}
My Model:
public class FacilitiesPage : BasePage
{
[SitecoreField(FieldName = "Facility features")]
public virtual IEnumerable<ContentCategory> FacilityFeatures { get; set; }
}
When I am running the debugger it is showing that currentFeature.CategoryICON item has got different image in each iteration of the loop. But it is showing the same image on the UI but the title on the image is different.
If I don't use the Glassmapper render image than it works and shows different images:
@foreach (var currentFeature in Model.FacilityFeatures)
{
<li class="features-list-enclosure">
<span title="@currentFeature.CategoryName" class="features-list-enclosure__item">
<img title="@currentFeature.CategoryName" src="@currentFeature.CategoryICON.Src" />
</span>
</li>
}
A: This happens because Glass caches the output when the expression is the same. Try using this to render the image:
@RenderImage(currentFeature, x => x.CategoryICON, new { title = @currentFeature.CategoryName }, isEditable: true)
This issue related to this: https://github.com/mikeedwards83/Glass.Mapper/issues/95
| |
doc_1041
|
See this non geojson fiddle recreating the effect i'm after.
Is there an easy Google Map API 3 function to do this for geojson data?
See my code below and fiddle here
var map;

window.initMap = function() {
    var mapProp = {
        center: new google.maps.LatLng(51.8948201,-0.7333298),
        zoom: 17,
        mapTypeId: 'satellite'
    };

    map = new google.maps.Map(document.getElementById("map"), mapProp);
    map.data.loadGeoJson('https://api.myjson.com/bins/g0tzw');
    map.data.setStyle({
        strokeColor: '#FF0000',
        strokeOpacity: 0.8,
        strokeWeight: 2,
        fillColor: '#FF0000',
        fillOpacity: 0.35
    });

    var bounds = new google.maps.LatLngBounds();
    map.fitBounds(bounds);
    map.setCenter(bounds.getCenter());
}
I need expert pointers on the cleanest and best way to approach this.
See working demo of my code above in fiddle.
http://jsfiddle.net/joshmoto/fe2vworc/
I've included my geojson inline so you can see the polygons on the map.
A: Here is a quick example of how you can get your features bounds. This will just get each feature bounds, extend a LatLngBounds object and then fit the map with these bounds.
var map;

function initialize() {
    map = new google.maps.Map(document.getElementById('map-canvas'), {
        zoom: 10,
        center: {
            lat: 0,
            lng: 0
        }
    });

    var permits = {
        type: "FeatureCollection",
        id: "permits",
        features: [{
            type: "Feature",
            properties: {
                name: "Alpha Field"
            },
            geometry: {
                type: "Polygon",
                coordinates: [
                    [
                        [-0.72863, 51.895995],
                        [-0.730022, 51.896766],
                        [-0.730754, 51.896524],
                        [-0.731234, 51.896401],
                        [-0.731832, 51.896294],
                        [-0.732345, 51.896219],
                        [-0.732945, 51.896102],
                        [-0.732691, 51.895774],
                        [-0.732618, 51.895531],
                        [-0.732543, 51.895359],
                        [-0.73152, 51.894751],
                        [-0.731037, 51.894488],
                        [-0.730708, 51.894324],
                        [-0.72863, 51.895995]
                    ]
                ]
            }
        },
        {
            type: "Feature",
            properties: {
                name: "Beta Field"
            },
            geometry: {
                type: "Polygon",
                coordinates: [
                    [
                        [-0.728004, 51.895658],
                        [-0.72863, 51.895995],
                        [-0.730708, 51.894324],
                        [-0.731217, 51.893784],
                        [-0.730992, 51.893709],
                        [-0.730793, 51.893567],
                        [-0.730734, 51.893435],
                        [-0.730761, 51.89333],
                        [-0.729696, 51.893244],
                        [-0.729391, 51.89314],
                        [-0.729249, 51.893586],
                        [-0.728991, 51.894152],
                        [-0.728525, 51.894983],
                        [-0.728004, 51.895658]
                    ]
                ]
            }
        }]
    };

    google.maps.event.addListenerOnce(map, 'idle', function() {
        // Load GeoJSON.
        map.data.addGeoJson(permits);

        // Create empty bounds object
        var bounds = new google.maps.LatLngBounds();

        // Loop through features
        map.data.forEach(function(feature) {
            var geo = feature.getGeometry();
            geo.forEachLatLng(function(LatLng) {
                bounds.extend(LatLng);
            });
        });

        map.fitBounds(bounds);
    });
}

initialize();
#map-canvas {
    height: 150px;
}
<div id="map-canvas"></div>
<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyCkUOdZ5y7hMm0yrcCQoCvLwzdM6M8s5qk"></script>
A: Props to @MrUpsidown for providing the working method to fitBounds.
I'm posting this answer to show my final solution based on @MrUpsidown answer using GeoJson data via loadGeoJson()
Here is my readable GeoJson here http://myjson.com/g0tzw
// initiate map
window.initMap = function() {

    // permits json
    var permits = 'https://api.myjson.com/bins/g0tzw';

    // map properties
    var mapProp = {
        zoom: 17,
        mapTypeId: 'satellite'
    };

    // google map object
    var map = new google.maps.Map(document.getElementById("map"), mapProp);

    // load GeoJSON.
    map.data.loadGeoJson(permits, null, function () {

        // create empty bounds object
        var bounds = new google.maps.LatLngBounds();

        // loop through features
        map.data.forEach(function(feature) {
            var geo = feature.getGeometry();
            geo.forEachLatLng(function(LatLng) {
                bounds.extend(LatLng);
            });
        });

        // fit data to bounds
        map.fitBounds(bounds);
    });

    // map data styles
    map.data.setStyle({
        strokeColor: '#FF0000',
        strokeOpacity: 0.8,
        strokeWeight: 2,
        fillColor: '#FF0000',
        fillOpacity: 0.35
    });
}
I'm calling initMap via...
<script async defer src="https://maps.googleapis.com/maps/api/js?key=<?=$gmap_api?>&callback=initMap"></script>
See working demo here.
http://jsfiddle.net/joshmoto/eg3vj17m/
| |
doc_1042
|
For that I created custom scopes. API Gateway checks those scopes and proxies these requests to my Elastic Beanstalk API. This works fine.
But another part of my authorization is groups. Based on an assigned group, some actions have restricted access. I need to use groups because I want to be able to add or remove them during the user lifecycle. The group will be checked in my Elastic Beanstalk API.
Problem
The documentation states that Access Tokens contain the cognito:groups claim. But a setup like in the Image below does not include this claim in my token.
The following decoded JWT is produced after a login via the hosted UI. As you can see, the claim is missing. ID tokens (with the openid scope) do include the group. I am also sure that I tested Cognito earlier with the Amplify JS SDK, which included the group, but there I was unable to include my custom scopes.
{
"sub": "xxxxxxxxxxxxxxxxxxxxxx",
"token_use": "access",
"scope": "api.example.com/item.read api.example.com/item.write",
"auth_time": 1615325374,
"iss": "https://cognito-idp.eu-central-1.amazonaws.com/eu-central-1_xxxxxxx",
"exp": 1615328974,
"iat": 1615325374,
"version": 2,
"jti": "f37219a5-c8b0-411b-bdb3-ab7d9201b491",
"client_id": "xxxxxxxxxxxxxxx",
"username": "xxxxxxxxxxxxxxxxxxxxxxxxxx"
}
Do I miss about a restriction or configuration issue? Why is the group missing inside my Access Tokens?
Thanks for your help!
A: I had the same issue. The cognito:groups value appeared after I added the openid scope:
and the access token is still supplied, as per earlier comments.
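As a side note, you can check which claims a given token actually carries by decoding its payload locally. A minimal sketch for debugging only; it does not verify the signature:

```python
import base64
import json

def jwt_claims(token):
    """Decode a JWT payload without verifying the signature (debugging only)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example: claims = jwt_claims(access_token)
#          print("cognito:groups" in claims)
```

This makes it easy to compare the claims in the ID token and the access token for the same login.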
| |
doc_1043
|
curl https://releases.rancher.com/install-docker/20.10.sh | sh
https://rancher.com/docs/rancher/v2.5/en/installation/requirements/installing-docker/
on a Google Compute Engine instance and getting the following error:
ERROR: '20.10.7' not found amongst apt-cache madison results
GCP instance is a 2 vcpu 2 gb e2-small
A: In the GCP console, if you have not created a new Compute Engine instance as per your tutorial, you can choose Ubuntu after naming it, select a zone, enable HTTP, and click 'Create'.
Enable and create an SSH key for the Compute Engine instance and use the command sudo -s (superuser login).
sudo apt-get install docker.io (Docker will be installed and ready to go).
Please refer to this installation documentation.
A: Version 20.10.7 is not available in this repository:
$ apt-cache madison docker-ce
docker-ce | 5:20.10.14~3-0~ubuntu-impish | https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
docker-ce | 5:20.10.13~3-0~ubuntu-impish | https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
docker-ce | 5:20.10.12~3-0~ubuntu-impish | https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
docker-ce | 5:20.10.11~3-0~ubuntu-impish | https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
docker-ce | 5:20.10.10~3-0~ubuntu-impish | https://download.docker.com/linux/ubuntu impish/stable amd64 Packages
I fixed the issue by downloading the 20.10.sh file and changing the version line to another version (e.g. VERSION=20.10.14). It should work.
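That pinning step can be sketched as follows; install-docker.sh here is a locally created stand-in for the downloaded script (the real one comes from releases.rancher.com via curl), and 20.10.14 is just an example version taken from the madison output:

```shell
set -e
cd "$(mktemp -d)"
# Stand-in for the downloaded 20.10.sh; fetch the real one with
# curl -O https://releases.rancher.com/install-docker/20.10.sh
printf '#!/bin/sh\nVERSION=20.10.7\necho "would install docker $VERSION"\n' > install-docker.sh

# Pin VERSION= to a release that "apt-cache madison docker-ce" actually
# lists for your distro.
sed -i 's/^VERSION=.*/VERSION=20.10.14/' install-docker.sh
sh install-docker.sh
```

With the real script, the final `sh install-docker.sh` performs the actual install.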
| |
doc_1044
|
What I am looking for is an excel like function, though I use MS Access.
How am I able to automatically get a concatenated Week & SowOrder to appear in EN?
0101
0102
I use 2010 Access but 2003 format.
A: You can concatenate those fields in a query.
SELECT [Week], SowOrder, [Week] & SowOrder AS EN
FROM YourTable;
Then you can use the query anytime you need to see EN.
If you need to store those concatenated values in your table, you can use an UPDATE query.
UPDATE YourTable
SET EN = [Week] & SowOrder;
However, storing the values means you need to remember to execute the UPDATE again any time the Week or SowOrder values change.
Note you could use + instead of & for concatenation. The difference between those two operators is how they behave with Null:
* "foo" & Null yields "foo"
* "foo" + Null yields Null
A: I'm not sure how you'd get the leading zeros from each field to display if they are of number types, or if your data would change to '10', '11', '12'...'41','42','43.. and so on.
However, if Fields 1 and 2 are text, Field three would be a Calculated field as
=[week]&[SowOrder]
A: Check this, use + sign to concatenate in your case
Calculated field
A: Building on HansUp's answer, you probably want
SELECT [Week], [SowOrder], Format([Week],"00") & Format([SowOrder],"00") AS EN
The Format functions will force the inclusion of leading zeroes on any single-digit numbers.
| |
doc_1045
|
A: Solution 1 - View#bringToFront()
View.bringToFront() should work if used properly. Maybe you forgot this (source: http://developer.android.com/reference/android/view/View.html#bringToFront()):
Prior to KITKAT this method should be followed by calls to requestLayout() and invalidate()
Solution 2 - reattach view
If View.bringToFront() will not work try to remove and add the view again. The advantage is that it has the same code on pre-KITKAT versions too.
I am doing it this way in my code:
ViewGroup parentView = (ViewGroup) (listView.getParent());
parentView.removeView(addFab);
parentView.addView(addFab);
A: Use FrameLayout; a FrameLayout shows the last added view on top.
A: Try this :
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <ListView
        android:id="@+id/list"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <YourCustomView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true" />

</RelativeLayout>
A: Views get placed in the order you add them to their parent views. The view added last will be on top. You might also want to try
View.bringToFront() http://developer.android.com/reference/android/view/View.html#bringToFront()
A: Please try adding android:elevation attribute to your view in the XML. It will always place your custom view over the ListView.
A: Again: views get placed in the layout in the order you add them. The question is: is it really hidden, or does it not render? Check LogCat and the Console for errors. If you only add this single custom view to your layout, does it get rendered without any problems? If so, ensure you really add your custom view to the same parent view (group) as your ListView. If you don't get any further, provide the respective code sample.
A: Try to use: View.bringToFront();
A: Give an elevation to the view which you want to bring up.
android:elevation="2dp"
A: This works for me :
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="220dp"
    android:gravity="bottom"
    android:orientation="vertical"
    android:theme="@style/ThemeOverlay.AppCompat.Dark">

    <ImageView
        android:id="@+id/details_wrapper"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:orientation="vertical"
        android:paddingBottom="16dip"
        android:scaleType="centerCrop"
        android:src="@drawable/temp_nav_image">
    </ImageView>

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:paddingBottom="@dimen/activity_vertical_margin"
        android:paddingLeft="@dimen/activity_horizontal_margin"
        android:paddingRight="@dimen/activity_horizontal_margin"
        android:paddingTop="@dimen/activity_vertical_margin"
        android:orientation="vertical">

        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:paddingTop="@dimen/nav_header_vertical_spacing"
            android:text="Kapil Rajput"
            android:textStyle="bold"
            android:textColor="@android:color/black"
            android:textAppearance="@style/TextAppearance.AppCompat.Body1"/>

        <TextView
            android:id="@+id/textView"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="@android:color/black"
            android:textStyle="bold"
            android:text="kapil@gnxtsystems.com"/>

    </LinearLayout>

</RelativeLayout>
A: I used relative layout and the order of the views somehow determines which one is on top.
<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_weight="1">

    <SeekBar
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:splitTrack="false"
        android:layout_alignParentTop="true" />

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentTop="true"
        android:layout_centerHorizontal="true"/>

</RelativeLayout>
In this layout, the TextView will be on top of the SeekBar.
A: Make the parent view a RelativeLayout and add both of your views in XML. It won't work for LinearLayout.
Then call FirstView.bringToFront() programmatically.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:background="@mipmap/bg"
    android:orientation="vertical"
    android:padding="15dp">

    <ImageView
        android:id="@+id/imgIcon"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_gravity="center"
        android:layout_marginBottom="-30dp"
        android:layout_marginTop="30dp"
        android:src="@mipmap/login_logo" />

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_below="@id/imgIcon"
        android:layout_gravity="bottom"
        android:background="#B3FFFFFF"
        android:gravity="center"
        android:orientation="vertical"
        android:paddingLeft="10dp"
        android:paddingRight="10dp">

        <View 1/>
        <View 2/>

    </LinearLayout>

</RelativeLayout>
Java file
imgIcon.bringToFront();
A: What you should do is use android:layout_below or android:layout_above programmatically. So you should do this:
RelativeLayout.LayoutParams params= new RelativeLayout.LayoutParams(ViewGroup.LayoutParams.WRAP_CONTENT,ViewGroup.LayoutParams.WRAP_CONTENT);
params.addRule(RelativeLayout.BELOW, R.id.below_id);
viewToLayout.setLayoutParams(params);
I suggest you take a look at this answer.
Hope this helps.
| |
doc_1046
|
Given I am logged in
When I visit home page
Then the request and all of my user information should get logged into CouchDB
This is basically it, the middleware itself isn't that complicated, but I'm having trouble with the workflow.
First thing is, that I have no idea how to test something like this. The feature itself is pretty clear, but how would I go about implementing it? Probably the highest level test that I can do is send a request via curl and then check if it got saved into CouchDB.
The problem is, I'm not really sure at what level should I test this and what types of tests will be most helpful. At the moment I basically hit F5 and take a look in the db if there is a new record.
The app is running on Rails 3.2.2 and I'm using couchrest gem to do the logging.
A: This seems like the place for integration tests. If the stock Rails integration tests bypass your middleware, treat it like a Rack app and use rack-test [ https://github.com/brynary/rack-test ]
A: This Sinatra tutorial ought to help:
http://www.sinatrarb.com/testing.html
You probably also want to redirect logging for your middleware to the Rails logger in an initializer so you can see what's going on while your tests are running:
My::Awesome::Middleware.logger = Rails.logger
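The middleware can also be unit-tested without booting Rails at all, by driving it as a plain Rack app. A sketch with made-up class and field names (your CouchDB write via couchrest would replace the array append):

```ruby
# A stand-in Rack middleware that records each request into an in-memory
# "db" array; the CouchDB write (via couchrest) would go where the << is.
class RequestLogger
  def initialize(app, db)
    @app = app
    @db = db
  end

  def call(env)
    @db << { path: env['PATH_INFO'], user: env['current_user'] }
    @app.call(env)
  end
end

# Drive the stack directly, no server needed:
db = []
inner = ->(env) { [200, { 'content-type' => 'text/plain' }, ['ok']] }
stack = RequestLogger.new(inner, db)
status, _headers, _body = stack.call('PATH_INFO' => '/', 'current_user' => 'mike')
```

Asserting on the `db` array in a test gives you the Given/When/Then behaviour from the feature without hitting a real CouchDB; the full-stack curl check then only needs to run once as a smoke test.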
| |
doc_1047
|
parameter file:
#zpar.ini: parameter file for configparser
[my pars]
my_zpar = 2.
parser:
#zippy_parser
import configparser
def read(_rundir):
    global rundir
    rundir = _rundir
    cp = configparser.ConfigParser()
    cp.read(rundir + '/zpar.ini')
    #[my pars]
    global my_zpar
    my_zpar = cp['my pars'].getfloat('my_zpar')
and the main python file:
# dask test with configparser
import dask
from dask.distributed import Client
import zippy_parser as zpar
def my_func(x, y):
    # print stuff
    print("parameter from main is: {}".format(main_par))
    print("parameter from configparser is: {}".format(zpar.my_zpar))
    # do stuff
    return x + y

if __name__ == '__main__':
    client = Client(n_workers = 4)
    #read parameters from input file
    rundir = '/path/to/parameter/file'
    zpar.read(rundir)
    #test zpar
    print("zpar is {}".format(zpar.my_zpar))
    #define parameter and call my_func
    main_par = 5.
    z = dask.delayed(my_func)(1., 2.)
    z.compute()
    client.close()
The first print statement in my_func() executes just fine, but the second print statement raises an exception. The output is:
zpar is 2.0
parameter from main is: 5.0
distributed.worker - WARNING - Compute Failed
Function: my_func
args: (1.0, 2.0)
kwargs: {}
Exception: AttributeError("module 'zippy_parser' has no attribute 'my_zpar'",)
I am new to dask. I suppose this has something to do with the serialization, which I do not understand. Can someone enlighten me and/or point to relevant documentation? Thanks!
A: I will try to keep this brief.
When a function is serialised in order to be sent to workers, python also sends local variables and functions needed by the function (its "closure"). However, it stores the modules it references by name, it does not try to serialise your whole runtime.
This means that zippy_parser is imported in the worker, not deserialised. Since the function read has never been called
in the worker, the global variable is never initialised.
So, you could call read in the workers, as part of your function or otherwise, but the pattern of setting module-global variables from within a function probably isn't great. Dask's delayed mechanism prefers functional purity: the result you get should not depend on the current state of the runtime.
(note that if you had created the client after calling read in the main script, the workers might have got the in-memory version, depending on how subprocesses are configured to be created on your system)
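The "stored by name" behaviour described above can be seen with the standard pickle module, which underlies dask's serialisation:

```python
import pickle
import json

# Functions are pickled by reference (module name + qualified name), not
# by value: unpickling re-imports the module on the receiving side instead
# of restoring whatever state your main process had set up.
payload = pickle.dumps(json.dumps)
restored = pickle.loads(payload)
assert restored is json.dumps  # same object, obtained by re-importing json
```

This is why module-level state mutated in the main process (like `zpar.my_zpar`) never travels with the function to the worker.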
A: I encourage you to pass in all parameters to your dask delayed functions explicitly, rather than relying on the global namespace.
| |
doc_1048
|
A: Check the resources section of your project to see if they are actually there... sometimes you move the .alp somewhere else and lose the location of the objects... or sometimes AnyLogic just makes our life difficult.
If you have issues with the elements, you will see a red or grey circle instead of a green one.
| |
doc_1049
|
raspivid -t 0 -h 720 -w 1080 -fps 25 -hf -b 2000000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.0.249 port=5000
What pipeline shall I create on Android device to recieve the video?
data->pipeline = gst_parse_launch("???", &error);
I used this tutorial with some changes: http://docs.gstreamer.com/display/GstSDK/Android+tutorial+4%3A+A+basic+media+player
IP address can be hardcoded, so I removed the code:
void gst_native_set_uri (JNIEnv* env, jobject thiz, jstring uri) {
/*
CustomData *data = GET_CUSTOM_DATA (env, thiz, custom_data_field_id);
if (!data || !data->pipeline) return;
const jbyte *char_uri = (*env)->GetStringUTFChars (env, uri, NULL);
GST_DEBUG ("Setting URI to %s", char_uri);
if (data->target_state >= GST_STATE_READY)
gst_element_set_state (data->pipeline, GST_STATE_READY);
g_object_set(data->pipeline, "uri", char_uri, NULL);
(*env)->ReleaseStringUTFChars (env, uri, char_uri);
data->duration = GST_CLOCK_TIME_NONE;
data->is_live = (gst_element_set_state (data->pipeline, data- >target_state) == GST_STATE_CHANGE_NO_PREROLL);
*/
}
| |
doc_1050
|
I would like to find similar names like this examples:
John F. Kennedy or John Fitzgerald Kennedy
I tried this code:
df2[df2['nombre'].duplicated() == True]
This returns only the exact duplicated values, but I need to find any match on the first name.
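A: One stdlib way to flag similar names is a ratio-based comparison (a hedged sketch; the 0.7 threshold is an assumption you can tune, and you would feed it the values of df2['nombre'] instead of the hard-coded list):

```python
from difflib import SequenceMatcher

# Flag pairs of names whose similarity ratio clears a threshold.
def similar(a, b, threshold=0.7):
    return SequenceMatcher(None, a, b).ratio() >= threshold

names = ["John F. Kennedy", "John Fitzgerald Kennedy", "Jane Doe"]
pairs = [(a, b) for i, a in enumerate(names)
         for b in names[i + 1:] if similar(a, b)]
```

Here `pairs` contains the two Kennedy variants but not "Jane Doe"; for large frames a dedicated fuzzy-matching library would scale better, but the idea is the same.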
| |
doc_1051
|
ElasticSearch Java API to get distinct values from the Query Builders
Is there any way to achieve distinct emails using org.elasticsearch.client.RestHighLevelClient?
Could you please help me with this? I have tried many approaches but have not been able to solve it. I am, however, able to achieve the same thing in SQL Workbench, and the equivalent JSON query, produced by the Kibana translator, is given below.
SELECT DISTINCT email_client.keyword
FROM email_reference;
Equivalent Elasticsearch query given below:
{
"from": 0,
"size": 0,
"_source": {
"includes": ["email_client.keyword"],
"excludes": []
},
"stored_fields": "email_client.keyword",
"aggregations": {
"email_client.keyword": {
"terms": {
"field": "email_client.keyword",
"size": 200,
"min_doc_count": 1,
"shard_min_doc_count": 0,
"show_term_doc_count_error": false,
"order": [
{
"_count": "desc"
},
{
"_key": "asc"
}
]
}
}
}
}
So now I want to form this JSON query using RestHighLevelClient. I have tried, but the problem is that prepareSearch() is not available on RestHighLevelClient. Is there any other way to achieve this using RestHighLevelClient?
A: This should work with the RestHighLevelClient:
MultiSearchRequest multiRequest = new MultiSearchRequest();
SearchRequest searchRequest = new SearchRequest();
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.size(0); // return only aggregation results
searchSourceBuilder.aggregation(AggregationBuilders.terms("label_agg").field("email_client.keyword").size(200));
searchRequest.indices("email_reference"); // index name
searchRequest.source(searchSourceBuilder);
multiRequest.add(searchRequest);
MultiSearchResponse response = restHighLevelClient.msearch(multiRequest, RequestOptions.DEFAULT);
A: For elasticsearch-rest-high-level-client version 7.11.2, the example below worked for me. You need to set a terms aggregation on the field's keyword sub-field.
private List<String> queryForDistinctMetadata(String aggregationKey, String field) throws IOException {
var aggregationBuilder = AggregationBuilders
.terms(aggregationKey)
.field(field);
var searchSourceBuilder = new SearchSourceBuilder()
.aggregation(aggregationBuilder)
.size(0);
var searchRequest = new SearchRequest()
.indices("<your index here>")
.source(searchSourceBuilder);
var response = client.search(searchRequest, RequestOptions.DEFAULT);
var aggregation = (ParsedStringTerms) response.getAggregations().get(aggregationKey);
return aggregation.getBuckets()
.parallelStream()
.map(Terms.Bucket::getKeyAsString)
.collect(Collectors.toList());
}
| |
doc_1052
|
#upload training data
upload_response = openai.File.create(
file=open(file_name, "rb"),
purpose='fine-tune'
)
file_id = upload_response.id
print(f'\nupload training data respond:\n\n {upload_response}')
OpenAI respond with data
{
"bytes": 380,
"created_at": 1675789714,
"filename": "file",
"id": "file-lKSQushd8eABcfiBVwhxBMOJ",
"object": "file",
"purpose": "fine-tune",
"status": "uploaded",
"status_details": null
}
My training file has been uploaded so I am checking for fine-tune response with code
fine_tune_response = openai.FineTune.create(training_file=file_id)
print(f'\nfine-tune respond:\n\n {fine_tune_response}')
I am getting
{
"created_at": 1675789714,
"events": [
{
"created_at": 1675789715,
"level": "info",
"message": "Created fine-tune: ft-IqBdk4WJETm4KakIzfZeCHgS",
"object": "fine-tune-event"
}
],
"fine_tuned_model": null,
"hyperparams": {
"batch_size": null,
"learning_rate_multiplier": null,
"n_epochs": 4,
"prompt_loss_weight": 0.01
},
"id": "ft-IqBdk4WJETm4KakIzfZeCHgS",
"model": "curie",
"object": "fine-tune",
"organization_id": "org-R6DqvjTNimKtBzWWgae6VmAy",
"result_files": [],
"status": "pending",
"training_files": [
{
"bytes": 380,
"created_at": 1675789714,
"filename": "file",
"id": "file-lKSQushd8eABcfiBVwhxBMOJ",
"object": "file",
"purpose": "fine-tune",
"status": "uploaded",
"status_details": null
}
],
"updated_at": 1675789714,
"validation_files": []
}
As you can see, fine_tuned_model is null, so I can't use it for Completion yet.
My question is: how can I check, for example in a while loop, whether my fine-tune is complete, using the ft id?
A: Here is data from the OpenAI documentation on fine-tuning:
After you've started a fine-tune job, it may take some time to complete. Your job may be queued behind other jobs on our system, and training our model can take minutes or hours depending on the model and dataset size.
Ref: https://platform.openai.com/docs/guides/fine-tuning
The OpenAI guide uses the CLI tool to create the fine-tuning and then accesses the model programmatically once the training has completed.
Therefore, you can't run the code in Python exactly as you have laid it out, since you need to wait for the training to complete first. In other words, you can't train the model on the fly and use it instantly.
| |
doc_1053
|
using System.Threading.Tasks;
public abstract class MigrationBase
{
public abstract string Name { get; }
public abstract string Description { get; }
public abstract int FromVersion { get; }
public abstract int ToVersion { get; }
public abstract void Apply();
public abstract Task ApplyAsync();
}
Now, depending on the underlying database system a concrete migration will be written for, it may make more sense to implement the void Apply or the Task ApplyAsync method. I see the following options:
*
*If I decide just having one of both abstract methods, I force the developer implementing the concrete migration to do it wrong either in the one or the other way "in 50% of the cases".
*If I decide having both abstract methods, I force him doing it wrong whenever the database system doesn't offer both possibilities.
*Having a MigrationBase, SyncMigrationBase and AsyncMigrationBase and using typecasts everywhere doesn't seem reasonable to me.
*Is there a better option I'm currently missing?
Now you might say I could just choose option two because ADO.Net offers sync and async methods and in most cases the database adapters will offer both variants. Is there a better solution if you look at the more general problem without ADO.Net in mind?
If I chose option two, should I provide a preimplemented version of the Task ApplyAsync similar to what Microsoft did with System.IO.Stream.ReadAsync while considering what Stephen Toub has written? If yes, what else should I pay attention to?
A: Use ApplyAsync alone, and delete the other method.
If I decide just having one of both abstract methods, I force the developer implementing the concrete migration to do it wrong either in the one or the other way
If the user needs to implement an async method from an interface or an abstract class, but must do everything synchronously, there is absolutely no problem implementing the Task-returning method with a synchronous body that returns an already-completed task (e.g. via Task.FromResult or Task.CompletedTask).
Your code, on the other hand, always performs calls as if they were asynchronous.
If I decide having both abstract methods, I force him doing it wrong whenever the database system doesn't offer both possibilities.
That's right, exposing both methods is a worse alternative.
Having a MigrationBase, SyncMigrationBase and AsyncMigrationBase and using typecasts everywhere doesn't seem reasonable to me.
I think you are right, adding a subclass for what can be "folded" into the base class does look like more effort than is necessary.
| |
doc_1054
|
Example of existing file:
"12,345.67",12.34,"123,456.78",1.00,"123,456,789.12"
Example of desired file (thousands separators removed):
"12345.67",12.34,"123456.78",1.00,"123456789.12"
I found a regex for matching the numbers with separators that works great, but I'm having trouble with the -replace operator. The replacement value is confusing me. I read about $& and I'm wondering if I should use that here. I tried $_, but that removes ALL my commas. Do I have to use $matches somehow?
Here's my code:
$Files = Get-ChildItem *input.csv
foreach ($file in $Files)
{
$file |
Get-Content | #assume that I can't use -raw
% {$_ -replace '"[\d]{1,3}(,[\d]{3})*(\.[\d]+)?"', ("$&" -replace ',','')} | #this is my problem
out-file output.csv -append -encoding ascii
}
A: Tony Hinkle's comment is the answer: don't use regex for this (at least not directly on the CSV file).
Your CSV is valid, so you should parse it as such, work on the objects (change the text if you want), then write a new CSV.
Import-Csv -Path .\my.csv | ForEach-Object {
$_ | ForEach-Object {
$_ -replace ',',''
}
} | Export-Csv -Path .\my_new.csv
(this code needs work, specifically the middle as the row will have each column as a property, not an array, but a more complete version of your CSV would make that easier to demonstrate)
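For what it's worth, the parse-then-strip idea is language-agnostic. Here is a hedged sketch of the same round-trip using Python's stdlib csv module, purely to illustrate how a real CSV parser handles the quoted fields for you:

```python
import csv
import io

# Parse the CSV properly, strip thousands separators inside each field,
# then write the rows back out. The quoted fields are handled by the
# parser, so a plain string replace per cell is safe.
raw = '"12,345.67",12.34,"123,456.78",1.00,"123,456,789.12"\n'

out = io.StringIO()
writer = csv.writer(out)
for row in csv.reader(io.StringIO(raw)):
    writer.writerow(cell.replace(",", "") for cell in row)

cleaned = out.getvalue()
```

After the rewrite, none of the fields contain commas, so the writer no longer needs to quote them; the values round-trip unchanged otherwise.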
A: I would use a simpler regex, and use capture groups instead of the entire capture.
I have tested the follow regular expression with your input and found no issues.
% {$_ -replace '([\d]),([\d])','$1$2' }
i.e. find all commas with a digit before and after (so that the weird mixed quoting doesn't matter) and remove the comma entirely.
This would have problems if your input has a scenario without that odd mixing of quotes and no quotes.
A: You can try with this regex:
,(?=(\d{3},?)+(?:\.\d{1,3})?")
See Live Demo or in powershell:
% {$_ -replace ',(?=(\d{3},?)+(?:\.\d{1,3})?")','' }
But it's more about the challenge that regex can bring. For proper work, use @briantist answer which is the clean way to do this.
| |
doc_1055
|
I have also checked PHP versions: local and live have the same version.
A: I took a look at MPDF and it requires mbstring (less important) but also the GD extension for PHP. Please check that your GD extension has the same version locally and on your live server.
You can check by running php -a and execute var_dump(gd_info());. It will return something like:
php > var_dump(gd_info());
php shell code:1:
array(12) {
'GD Version' =>
string(26) "bundled (2.1.0 compatible)"
Maybe some older versions of GD couldn't handle base64 encoded images?
| |
doc_1056
|
tc='(107, 189)'
and I need it to be a tuple, so I can call each number one at a time.
print(tc[0]) #needs to output 107
Thank you in advance!
A: All you need is ast.literal_eval:
>>> from ast import literal_eval
>>> tc = '(107, 189)'
>>> tc = literal_eval(tc)
>>> tc
(107, 189)
>>> type(tc)
<class 'tuple'>
>>> tc[0]
107
>>> type(tc[0])
<class 'int'>
>>>
From the docs:
ast.literal_eval(node_or_string)
Safely evaluate an expression node or a Unicode or Latin-1 encoded string containing a
Python expression. The string or node provided may
only consist of the following Python literal structures: strings,
numbers, tuples, lists, dicts, booleans, and None.
A: Use ast.literal_eval():
>>> import ast
>>> tc='(107, 189)'
>>> tc_tuple = ast.literal_eval(tc)
>>> tc_tuple
(107, 189)
>>> tc_tuple[0]
107
A: You can use the builtin eval, which evaluates a Python expression (note that eval executes arbitrary code, so only use it on trusted input; prefer ast.literal_eval otherwise):
>>> tc = '(107, 189)'
>>> tc = eval(tc)
>>> tc
(107, 189)
>>> tc[0]
107
| |
doc_1057
|
blabla (lo-g) (kk-jj)
i want to make it like this
blabla (lo-g)
but when it's already like this in VB.NET
blabla (lo-g) with one set of parentheses
just leave it as it is
thanks
A: Regex is a powerful, flexible, and configurable tool for doing things like this. It's not entirely clear from your question exactly what rules need to be followed in other variations for the input, but here's an example which works for the inputs you specified:
Dim input As String = "blabla (lo-g) (kk-jj)"
Dim pattern As String = "(?<=^[^(]*\([^)]*\)\s*)\([^)]*\)(?=\s*$)"
Dim output As String = Regex.Replace(input, pattern, "")
However, that will not remove the second set of parenthesis for inputs like these:
blabla (lo-g) blabla (kk-jj) blabla
blabla (lo-g) (kk-jj) blabla
To handle variations like those, you could use a pattern like this:
Dim input As String = "blabla (lo-g) with (kk-jj) trailing"
Dim pattern As String = "(?<=^[^(]*\([^)]*\)[^(]*)\([^)]*\)(?=[^(]*$)"
Dim output As String = Regex.Replace(input, pattern, "")
A: I want to thank Steven for his answer, but for those who can't handle regex (like me),
here's a simple method:
Dim res As String = String.Empty
Dim check As Boolean = False
For Each letter In teamname
If letter = "(" Then
If check = True Then
Exit For
End If
check = True
End If
res &= letter
Next
Return res
good luck
A: Since you want to avoid regex, this is a simpler, easier-to-read version of what you're trying to accomplish with your loop through each character. Basically you split the string into an array using the ( character as the divider. Split removes the delimiters, so they need to be re-added, but that's trivial since only the first one needs to be kept (or not).
Essentially splitting your string into this:
"blabla (lo-g) (kk-jj)"
Array(0) = "blabla "
Array(1) = "lo-g) "
So, Array(0) & "(" & Array(1) = "blabla (lo-g)", or simply returns the whole string if it counts <= 1 ( character
Dim SplitText = text.Split("(")
If SplitText.Length > 1 Then
Return String.Format("{0}({1}", text.Split("(")(0), text.Split("(")(1))
'This may be easier for you to read, though:
'Return text.Split("(")(0) & "(" & text.Split("(")(1)
End If
Return text
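The same split-and-rejoin logic, sketched in Python for an easy sanity check (the rstrip of the trailing space is an extra assumption on top of the VB.NET version, which keeps it):

```python
# Keep everything up to and including the first "(...)" group;
# return the input unchanged if it has no "(" at all.
def keep_first_parenthetical(text):
    parts = text.split("(")
    if len(parts) > 1:
        return (parts[0] + "(" + parts[1]).rstrip()
    return text
```

So "blabla (lo-g) (kk-jj)" becomes "blabla (lo-g)", and a string with one or zero parenthesised groups is left as it is.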
| |
doc_1058
|
{
"request": {
"method": "PUT",
"url": "/aayush&]"
},
"response": {
"status": 200
}
}
When I hit http://localhost/aayush&]
I get an IllegalArgumentException.
A: Unfortunately Java's URI class takes a stricter view of what's valid than many other libraries.
There's already one open issue about this on Github, so it's on my to-do list.
In the meantime if you can, I'd suggest escaping the ] character.
| |
doc_1059
|
In my class:
class Connection
{
SerialPort Port = new SerialPort();
public void ConfigurePort( string name, int speed)
{
Port.PortName = name;
Port.BaudRate = speed;
Port.DataBits = 8;
Port.Parity = Parity.None;
Port.StopBits = StopBits.One;
Port.Encoding = Encoding.BigEndianUnicode;
Port.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandler);
Port.RtsEnable = true;
}
public void Write()
{
Port.Write("TAD->TIC,LIST");
}
public void OpenPort()
{
try
{
Port.Open();
}
catch(Exception ex)
{
MessageBox.Show(ex.ToString(),"Error", MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
}
}
public void ClosePort()
{
try
{
Port.Close();
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
}
}
public static void DataReceivedHandler( object sender, SerialDataReceivedEventArgs e)
{
SerialPort sp = (SerialPort)sender;
string indata = sp.ReadExisting();
}
}
In my Form1:
private void XXX_Load(object sender, EventArgs e)
{
port.ConfigurePort("COM5", 115200);
}
private void button2_Click(object sender, EventArgs e)
{
port.OpenPort();
port.Write();
port.ClosePort();
}
Why does DataReceivedHandler not fire after the button click event?
When the code is in the main file, everything works.
| |
doc_1060
|
img.myImage {
display: block;
position: relative;
border: 6px solid #0090ff;
width: 85%;
margin: 0 auto;
-webkit-filter: grayscale(100%);
filter: grayscale(100%);
}
However, it affects the colored border, which also becomes grayish. Is there a "painless" workaround here?
A: Since img does not accept :after/:before, and since filter seems to apply on children elements, the only solution I can think of is the basic:
div {
display: inline-block;
background-color: #0090ff;
padding: 6px;
width: 200px;
}
img {
display: block;
-webkit-filter: grayscale(100%);
filter: grayscale(100%);
width: 100%;
}
<div>
<img src="https://pbs.twimg.com/profile_images/562466745340817408/_nIu8KHX.jpeg" alt="" />
</div>
| |
doc_1061
|
char name[10];
strcpy(name,"james"); //valid statement
char name[10];
name[10]="james"; //invalid statement
*name="james"; // invalid statement
For both of the invalid statements above, the compiler says:
"error: assignment makes integer from pointer without a cast"
The error message is not clear. What is the integer here? Which pointer is getting converted to an integer?
char name[10];
name="james"; //invalid statement
error: incompatible types when assigning to type char[10] from type char
Please explain the error messages to me. What exactly do they mean?
A: The problem is here:
name[10]="james";
name[10] in this context is a char (which is a type of integer), while "james" is a pointer (to char). So you're trying to convert a pointer to a char, which is an invalid conversion.
Note that when you write:
char name[10]; you're defining a char array of size 10.
When you write just:
name[10]
you're referring to element index 10 of name, i.e. the 11th char in name (which is actually out of bounds - the valid char indices in name are name[0]..name[9]).
| |
doc_1062
|
Suppose I have data in this format:
(x_val, y_val)
I would like to do something like:
x <- [x_val_1, x_val_2, ... x_val_n]
y <- [y_val_1, y_val_2, ... y_val_n]
plot x y
Is it possible? How can I do that?
A: If I understand you correctly, you can use gnuplot's "-" pseudo-datafile:
#for documentation, refer to
# `help plot datafile special-filenames`
# at the gnuplot prompt
plot '-' w points
1 2
3 4
5 6
7 8
9 10
11 12
e
If you're trying to use gnuplot to calculate the points, then this won't work. Depending on how you're calculating the points, you might be able to plot it as a parametric curve.
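If the points come from another program, one option is to generate the inline-data script programmatically and pipe it to gnuplot. A hedged Python sketch (it only builds the script string; running it assumes gnuplot is on your PATH):

```python
# Build a gnuplot script that embeds the data points inline via the
# '-' pseudo-datafile, terminated by the "e" sentinel line.
points = [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)]

script = "plot '-' w points\n"
script += "\n".join(f"{x} {y}" for x, y in points)
script += "\ne\n"
```

You could then run it with something like `subprocess.run(["gnuplot", "-persist"], input=script, text=True)`.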
| |
doc_1063
|
<div id="main">
<div class="row sort-container">
<span class="sort-by brandon-grotesque-regular">
Lihat berdasarkan:
</span>
<ul class="arvo-regular clearfix">
<li><a class="" href="?sort=popular">Barang Terpopuler</a></li>
<li><a class="" href="?sort=terbaru">Barang Terbaru</a></li>
</ul>
</div>
</div>
A: You can set vertical-align to middle on the ul element.
However, why do you need a layout so complex? It would be better if every element was inline, with no floated or inline-block box.
A: This is due to browser defaults: ul elements have a default margin and padding, just as the body element does, so we often use something like normalize.css to reset all of those to 0 in all browsers.
ul{
margin:0;
padding:0;
}
And remove margin-left from #main .sort-container li {...
That's it.
| |
doc_1064
|
I have defined route as follows
resources :tutorialcategories do
end
following is my model definition
class Tutorial < ActiveRecord::Base
has_many :tutorialcategories
A: If you can add categories but not edit them then you must load a @tutorial_category variable in your edit action, something like this:
def edit
@tutorial_category = TutorialCategory.find params[:id]
end
| |
doc_1065
|
Thank You in advance.
A: Have you tried The Microsoft Keyboard Layout Creator? I think this should translate the keys at a sufficiently low level for your purposes.
| |
doc_1066
|
Once the user is logged in I create a variable that creates a path on their local machine.
c:\tempfolder\date\username
The problem is that some usernames are throwing "illegal chars" exceptions. For example, if my username was mas|fenix it would throw an exception.
Path.Combine( _
Environment.GetFolderPath(System.Environment.SpecialFolder.CommonApplicationData), _
DateTime.Now.ToString("ddMMyyhhmm") + "-" + form1.username)
I don't want to remove it from the string, but a folder with their username is created through FTP on a server. And this leads to my second question. If I am creating a folder on the server can I leave the "illegal chars" in? I only ask this because the server is Linux based, and I am not sure if Linux accepts it or not.
EDIT: It seems that URL encode is NOT what I want.. Here's what I want to do:
old username = mas|fenix
new username = mas%xxfenix
Where %xx is the ASCII value or any other value that would easily identify the character.
A: Url Encoding is easy in .NET. Use:
System.Web.HttpUtility.UrlEncode(string url)
If that'll be decoded to get the folder name, you'll still need to exclude characters that can't be used in folder names (*, ?, /, etc.)
A: I've been experimenting with the various methods .NET provide for URL encoding. Perhaps the following table will be useful (as output from a test app I wrote):
Unencoded UrlEncoded UrlEncodedUnicode UrlPathEncoded EscapedDataString EscapedUriString HtmlEncoded HtmlAttributeEncoded HexEscaped
A A A A A A A A %41
B B B B B B B B %42
a a a a a a a a %61
b b b b b b b b %62
0 0 0 0 0 0 0 0 %30
1 1 1 1 1 1 1 1 %31
[space] + + %20 %20 %20 [space] [space] %20
! ! ! ! ! ! ! ! %21
" %22 %22 " %22 %22 " " %22
# %23 %23 # %23 # # # %23
$ %24 %24 $ %24 $ $ $ %24
% %25 %25 % %25 %25 % % %25
& %26 %26 & %26 & & & %26
' %27 %27 ' ' ' ' ' %27
( ( ( ( ( ( ( ( %28
) ) ) ) ) ) ) ) %29
* * * * %2A * * * %2A
+ %2b %2b + %2B + + + %2B
, %2c %2c , %2C , , , %2C
- - - - - - - - %2D
. . . . . . . . %2E
/ %2f %2f / %2F / / / %2F
: %3a %3a : %3A : : : %3A
; %3b %3b ; %3B ; ; ; %3B
< %3c %3c < %3C %3C < < %3C
= %3d %3d = %3D = = = %3D
> %3e %3e > %3E %3E > > %3E
? %3f %3f ? %3F ? ? ? %3F
@ %40 %40 @ %40 @ @ @ %40
[ %5b %5b [ %5B %5B [ [ %5B
\ %5c %5c \ %5C %5C \ \ %5C
] %5d %5d ] %5D %5D ] ] %5D
^ %5e %5e ^ %5E %5E ^ ^ %5E
_ _ _ _ _ _ _ _ %5F
` %60 %60 ` %60 %60 ` ` %60
{ %7b %7b { %7B %7B { { %7B
| %7c %7c | %7C %7C | | %7C
} %7d %7d } %7D %7D } } %7D
~ %7e %7e ~ ~ ~ ~ ~ %7E
Ā %c4%80 %u0100 %c4%80 %C4%80 %C4%80 Ā Ā [OoR]
ā %c4%81 %u0101 %c4%81 %C4%81 %C4%81 ā ā [OoR]
Ē %c4%92 %u0112 %c4%92 %C4%92 %C4%92 Ē Ē [OoR]
ē %c4%93 %u0113 %c4%93 %C4%93 %C4%93 ē ē [OoR]
Ī %c4%aa %u012a %c4%aa %C4%AA %C4%AA Ī Ī [OoR]
ī %c4%ab %u012b %c4%ab %C4%AB %C4%AB ī ī [OoR]
Ō %c5%8c %u014c %c5%8c %C5%8C %C5%8C Ō Ō [OoR]
ō %c5%8d %u014d %c5%8d %C5%8D %C5%8D ō ō [OoR]
Ū %c5%aa %u016a %c5%aa %C5%AA %C5%AA Ū Ū [OoR]
ū %c5%ab %u016b %c5%ab %C5%AB %C5%AB ū ū [OoR]
The columns represent encodings as follows:
*
*UrlEncoded: HttpUtility.UrlEncode
*UrlEncodedUnicode: HttpUtility.UrlEncodeUnicode
*UrlPathEncoded: HttpUtility.UrlPathEncode
*EscapedDataString: Uri.EscapeDataString
*EscapedUriString: Uri.EscapeUriString
*HtmlEncoded: HttpUtility.HtmlEncode
*HtmlAttributeEncoded: HttpUtility.HtmlAttributeEncode
*HexEscaped: Uri.HexEscape
NOTES:
*
*HexEscape can only handle the first 255 characters. Therefore it throws an ArgumentOutOfRange exception for the Latin A-Extended characters (eg Ā).
*This table was generated in .NET 4.0 (see Levi Botelho's comment below that says the encoding in .NET 4.5 is slightly different).
EDIT:
I've added a second table with the encodings for .NET 4.5. See this answer: https://stackoverflow.com/a/21771206/216440
EDIT 2:
Since people seem to appreciate these tables, I thought you might like the source code that generates the table, so you can play around yourselves. It's a simple C# console application, which can target either .NET 4.0 or 4.5:
using System;
using System.Collections.Generic;
using System.Text;
// Need to add a Reference to the System.Web assembly.
using System.Web;
namespace UriEncodingDEMO2
{
class Program
{
static void Main(string[] args)
{
EncodeStrings();
Console.WriteLine();
Console.WriteLine("Press any key to continue...");
Console.Read();
}
public static void EncodeStrings()
{
string stringToEncode = "ABCD" + "abcd"
+ "0123" + " !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~" + "ĀāĒēĪīŌōŪū";
// Need to set the console encoding to display non-ASCII characters correctly (eg the
// Latin A-Extended characters such as ĀāĒē...).
Console.OutputEncoding = Encoding.UTF8;
// Will also need to set the console font (in the console Properties dialog) to a font
// that displays the extended character set correctly.
// The following fonts all display the extended characters correctly:
// Consolas
            // DejaVu Sans Mono
// Lucida Console
// Also, in the console Properties, set the Screen Buffer Size and the Window Size
// Width properties to at least 140 characters, to display the full width of the
// table that is generated.
Dictionary<string, Func<string, string>> columnDetails =
new Dictionary<string, Func<string, string>>();
columnDetails.Add("Unencoded", (unencodedString => unencodedString));
columnDetails.Add("UrlEncoded",
(unencodedString => HttpUtility.UrlEncode(unencodedString)));
columnDetails.Add("UrlEncodedUnicode",
(unencodedString => HttpUtility.UrlEncodeUnicode(unencodedString)));
columnDetails.Add("UrlPathEncoded",
(unencodedString => HttpUtility.UrlPathEncode(unencodedString)));
columnDetails.Add("EscapedDataString",
(unencodedString => Uri.EscapeDataString(unencodedString)));
columnDetails.Add("EscapedUriString",
(unencodedString => Uri.EscapeUriString(unencodedString)));
columnDetails.Add("HtmlEncoded",
(unencodedString => HttpUtility.HtmlEncode(unencodedString)));
columnDetails.Add("HtmlAttributeEncoded",
(unencodedString => HttpUtility.HtmlAttributeEncode(unencodedString)));
columnDetails.Add("HexEscaped",
(unencodedString
=>
{
// Uri.HexEscape can only handle the first 255 characters so for the
// Latin A-Extended characters, such as A, it will throw an
// ArgumentOutOfRange exception.
try
{
return Uri.HexEscape(unencodedString.ToCharArray()[0]);
}
catch
{
return "[OoR]";
}
}));
char[] charactersToEncode = stringToEncode.ToCharArray();
string[] stringCharactersToEncode = Array.ConvertAll<char, string>(charactersToEncode,
(character => character.ToString()));
DisplayCharacterTable<string>(stringCharactersToEncode, columnDetails);
}
private static void DisplayCharacterTable<TUnencoded>(TUnencoded[] unencodedArray,
Dictionary<string, Func<TUnencoded, string>> mappings)
{
foreach (string key in mappings.Keys)
{
Console.Write(key.Replace(" ", "[space]") + " ");
}
Console.WriteLine();
foreach (TUnencoded unencodedObject in unencodedArray)
{
string stringCharToEncode = unencodedObject.ToString();
foreach (string columnHeader in mappings.Keys)
{
int columnWidth = columnHeader.Length + 1;
Func<TUnencoded, string> encoder = mappings[columnHeader];
string encodedString = encoder(unencodedObject);
// ASSUMPTION: Column header will always be wider than encoded string.
Console.Write(encodedString.Replace(" ", "[space]").PadRight(columnWidth));
}
Console.WriteLine();
}
}
}
}
Click here to run code on dotnetfiddle.net
A: I think people here got sidetracked by the UrlEncode message. URLEncoding is not what you want -- you want to encode stuff that won't work as a filename on the target system.
Assuming that you want some generality -- feel free to find the illegal characters on several systems (MacOS, Windows, Linux and Unix), union them to form a set of characters to escape.
As for the escape, a HexEscape should be fine (replacing the characters with %XX). Convert each character to UTF-8 bytes and encode everything >128 if you want to support systems that don't do Unicode. But there are other ways, such as using backslashes ("\") or HTML entity encoding ("&quot;"). You can create your own. All any system has to do is 'encode' the incompatible character away. The schemes above allow you to recreate the original name, but something like replacing the bad chars with spaces works also.
On the same tangent as above, the only one to use is
Uri.EscapeDataString
-- It encodes everything that is needed for OAuth, it doesn't encode the things that OAuth forbids encoding, and it encodes the space as %20 and not + (also per the OAuth spec). See: RFC 3986. AFAIK, this is the latest URI spec.
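For a concrete cross-check of that RFC 3986 behaviour from another ecosystem, Python's urllib.parse.quote with safe="" produces the same escapes as Uri.EscapeDataString for these characters (a hedged comparison, Python 3.7+ where "~" is treated as unreserved):

```python
from urllib.parse import quote

# Only the RFC 3986 unreserved set (letters, digits, "-._~") is left
# alone; everything else, including the space, is percent-encoded,
# and the space becomes %20 rather than +.
encoded = quote(" !*'()~", safe="")
```

This yields "%20%21%2A%27%28%29~", matching the EscapedDataString column in the tables above for those characters.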
A: I have written a C# method that url-encodes ALL symbols:
/// <summary>
/// !#$345Hf} → %21%23%24%33%34%35%48%66%7D
/// </summary>
public static string UrlEncodeExtended( string value )
{
char[] chars = value.ToCharArray();
StringBuilder encodedValue = new StringBuilder();
foreach (char c in chars)
{
encodedValue.Append( "%" + ( (int)c ).ToString( "X2" ) );
}
return encodedValue.ToString();
}
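For comparison, a hedged Python analogue of the method above; like the original it assumes code points below 256, so each character maps to a single %XX escape:

```python
# Percent-encode every character unconditionally, mirroring the C#
# UrlEncodeExtended method's %XX output.
def url_encode_extended(value):
    return "".join("%{:02X}".format(ord(c)) for c in value)

encoded = url_encode_extended("!#$345Hf}")
```

This reproduces the example from the method's doc comment: "!#$345Hf}" becomes "%21%23%24%33%34%35%48%66%7D".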
A: You should encode only the user name or other part of the URL that could be invalid. URL encoding a URL can lead to problems since something like this:
string url = HttpUtility.UrlEncode("http://www.google.com/search?q=Example");
Will yield
http%3a%2f%2fwww.google.com%2fsearch%3fq%3dExample
This is obviously not going to work well. Instead, you should encode ONLY the value of the key/value pair in the query string, like this:
string url = "http://www.google.com/search?q=" + HttpUtility.UrlEncode("Example");
Hopefully that helps. Also, as teedyay mentioned, you'll still need to make sure illegal file-name characters are removed or else the file system won't like the path.
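The same encode-only-the-value pattern, sketched in Python as a hedged illustration: urlencode escapes each query-string value while leaving the URL structure intact.

```python
from urllib.parse import urlencode

# Only the value of the key/value pair is escaped; the scheme, host,
# path, "?" and "=" separators are untouched. Note urlencode uses
# quote_plus, so the space becomes "+" and "&" becomes "%26".
url = "http://www.google.com/search?" + urlencode({"q": "Example & more"})
```

This produces http://www.google.com/search?q=Example+%26+more, rather than mangling the whole URL the way encoding the full string would.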
A: A better way is to use
Uri.EscapeUriString
so as not to reference the full profile of .NET 4.
[Update]
Based on what the OP is asking for, the recommended API should be
Uri.EscapeDataString
(Thank you @ykadaru)
A: Edit: Note that this answer is now out of date. See Siarhei Kuchuk's answer below for a better fix
UrlEncoding will do what you are suggesting here. With C#, you simply use HttpUtility, as mentioned.
You can also Regex the illegal characters and then replace, but this gets far more complex, as you will have to have some form of state machine (switch ... case, for example) to replace with the correct characters. Since UrlEncode does this up front, it is rather easy.
As for Linux versus windows, there are some characters that are acceptable in Linux that are not in Windows, but I would not worry about that, as the folder name can be returned by decoding the Url string, using UrlDecode, so you can round trip the changes.
A: Since .NET Framework 4.5 and .NET Standard 1.0 you should use WebUtility.UrlEncode. Advantages over alternatives:
*
*It is part of .NET Framework 4.5+, .NET Core 1.0+, .NET Standard 1.0+, UWP 10.0+ and all Xamarin platforms as well. HttpUtility, while being available in .NET Framework earlier (.NET Framework 1.1+), became available on other platforms much later (.NET Core 2.0+, .NET Standard 2.0+) and is still unavailable in UWP (see related question).
*In .NET Framework, it resides in System.dll, so it does not require any additional references, unlike HttpUtility.
*It properly escapes characters for URLs, unlike Uri.EscapeUriString (see comments to drweb86's answer).
*It does not have any limits on the length of the string, unlike Uri.EscapeDataString (see related question), so it can be used for POST requests, for example.
A: If you can't see System.Web, change your project settings. The target framework should be ".NET Framework 4" instead of ".NET Framework 4 Client Profile"
A: Levi Botelho commented that the table of encodings that was previously generated is no longer accurate for .NET 4.5, since the encodings changed slightly between .NET 4.0 and 4.5. So I've regenerated the table for .NET 4.5:
Unencoded UrlEncoded UrlEncodedUnicode UrlPathEncoded WebUtilityUrlEncoded EscapedDataString EscapedUriString HtmlEncoded HtmlAttributeEncoded WebUtilityHtmlEncoded HexEscaped
A A A A A A A A A A %41
B B B B B B B B B B %42
a a a a a a a a a a %61
b b b b b b b b b b %62
0 0 0 0 0 0 0 0 0 0 %30
1 1 1 1 1 1 1 1 1 1 %31
[space] + + %20 + %20 %20 [space] [space] [space] %20
! ! ! ! ! %21 ! ! ! ! %21
" %22 %22 " %22 %22 %22 " " " %22
# %23 %23 # %23 %23 # # # # %23
$ %24 %24 $ %24 %24 $ $ $ $ %24
% %25 %25 % %25 %25 %25 % % % %25
& %26 %26 & %26 %26 & & & & %26
' %27 %27 ' %27 %27 ' ' ' ' %27
( ( ( ( ( %28 ( ( ( ( %28
) ) ) ) ) %29 ) ) ) ) %29
* * * * * %2A * * * * %2A
+ %2b %2b + %2B %2B + + + + %2B
, %2c %2c , %2C %2C , , , , %2C
- - - - - - - - - - %2D
. . . . . . . . . . %2E
/ %2f %2f / %2F %2F / / / / %2F
: %3a %3a : %3A %3A : : : : %3A
; %3b %3b ; %3B %3B ; ; ; ; %3B
< %3c %3c < %3C %3C %3C < < < %3C
= %3d %3d = %3D %3D = = = = %3D
> %3e %3e > %3E %3E %3E > > > %3E
? %3f %3f ? %3F %3F ? ? ? ? %3F
@ %40 %40 @ %40 %40 @ @ @ @ %40
[ %5b %5b [ %5B %5B [ [ [ [ %5B
\ %5c %5c \ %5C %5C %5C \ \ \ %5C
] %5d %5d ] %5D %5D ] ] ] ] %5D
^ %5e %5e ^ %5E %5E %5E ^ ^ ^ %5E
_ _ _ _ _ _ _ _ _ _ %5F
` %60 %60 ` %60 %60 %60 ` ` ` %60
{ %7b %7b { %7B %7B %7B { { { %7B
| %7c %7c | %7C %7C %7C | | | %7C
} %7d %7d } %7D %7D %7D } } } %7D
~ %7e %7e ~ %7E ~ ~ ~ ~ ~ %7E
Ā %c4%80 %u0100 %c4%80 %C4%80 %C4%80 %C4%80 Ā Ā Ā [OoR]
ā %c4%81 %u0101 %c4%81 %C4%81 %C4%81 %C4%81 ā ā ā [OoR]
Ē %c4%92 %u0112 %c4%92 %C4%92 %C4%92 %C4%92 Ē Ē Ē [OoR]
ē %c4%93 %u0113 %c4%93 %C4%93 %C4%93 %C4%93 ē ē ē [OoR]
Ī %c4%aa %u012a %c4%aa %C4%AA %C4%AA %C4%AA Ī Ī Ī [OoR]
ī %c4%ab %u012b %c4%ab %C4%AB %C4%AB %C4%AB ī ī ī [OoR]
Ō %c5%8c %u014c %c5%8c %C5%8C %C5%8C %C5%8C Ō Ō Ō [OoR]
ō %c5%8d %u014d %c5%8d %C5%8D %C5%8D %C5%8D ō ō ō [OoR]
Ū %c5%aa %u016a %c5%aa %C5%AA %C5%AA %C5%AA Ū Ū Ū [OoR]
ū %c5%ab %u016b %c5%ab %C5%AB %C5%AB %C5%AB ū ū ū [OoR]
The columns represent encodings as follows:
*
*UrlEncoded: HttpUtility.UrlEncode
*UrlEncodedUnicode: HttpUtility.UrlEncodeUnicode
*UrlPathEncoded: HttpUtility.UrlPathEncode
*WebUtilityUrlEncoded: WebUtility.UrlEncode
*EscapedDataString: Uri.EscapeDataString
*EscapedUriString: Uri.EscapeUriString
*HtmlEncoded: HttpUtility.HtmlEncode
*HtmlAttributeEncoded: HttpUtility.HtmlAttributeEncode
*WebUtilityHtmlEncoded: WebUtility.HtmlEncode
*HexEscaped: Uri.HexEscape
NOTES:
*
*HexEscape can only handle the first 255 characters. Therefore it throws an ArgumentOutOfRange exception for the Latin A-Extended characters (eg Ā).
*This table was generated in .NET 4.5 (see answer https://stackoverflow.com/a/11236038/216440 for the encodings relevant to .NET 4.0 and below).
EDIT:
*
*As a result of Discord's answer I added the new WebUtility UrlEncode and HtmlEncode methods, which were introduced in .NET 4.5.
A: The .NET implementation of UrlEncode does not comply with RFC 3986.
*
*Some characters are not encoded but should be. The !()* characters are listed in the RFC's section 2.2 as reserved characters that must be encoded, yet .NET fails to encode them.
*Some characters are encoded but should not be. The .-_ characters are not listed in the RFC's section 2.2 as reserved characters that should be encoded, yet .NET erroneously encodes them.
*The RFC specifies that, to be consistent, implementations should use upper-case HEXDIG, whereas .NET produces lower-case HEXDIG.
A: Ideally these would go in a class called "FileNaming" or maybe just rename Encode to "FileNameEncode". Note: these are not designed to handle Full Paths, just the folder and/or file names. Ideally you would Split("/") your full path first and then check the pieces.
And obviously instead of a union, you could just add the "%" character to the list of chars not allowed in Windows, but I think it's more helpful/readable/factual this way.
Decode() is exactly the same, but with the arguments of Replace(Uri.HexEscape(s[0]), s) swapped, replacing each escaped sequence with its character.
public static List<string> urlEncodedCharacters = new List<string>
{
"/", "\\", "<", ">", ":", "\"", "|", "?", "%" //and others, but not *
};
//Since this is a superset of urlEncodedCharacters, we won't be able to only use UrlEncode() - instead we'll use HexEncode
public static List<string> specialCharactersNotAllowedInWindows = new List<string>
{
"/", "\\", "<", ">", ":", "\"", "|", "?", "*" //windows disallowed character set
};
public static string Encode(string fileName)
{
//CheckForFullPath(fileName); // optional: make sure it's not a path?
List<string> charactersToChange = new List<string>(specialCharactersNotAllowedInWindows);
charactersToChange.AddRange(urlEncodedCharacters.
Where(x => !specialCharactersNotAllowedInWindows.Contains(x))); // add any non-duplicates (%)
charactersToChange.ForEach(s => fileName = fileName.Replace(s, Uri.HexEscape(s[0]))); // "?" => "%3f"
return fileName;
}
Thanks @simon-tewsi for the very useful table above!
A: In addition to @Dan Herbert's answer,
you should generally encode just the values.
Split takes a params array, so Split('&','=') splits first by '&' and then by '='; the odd-indexed elements are therefore all values to be encoded, as shown below.
public static void EncodeQueryString(ref string queryString)
{
var array=queryString.Split('&','=');
for (int i = 0; i < array.Length; i++) {
string part=array[i];
if(i%2==1)
{
part=System.Web.HttpUtility.UrlEncode(array[i]);
queryString=queryString.Replace(array[i],part);
}
}
}
A: For .net core users, use this
Microsoft.AspNetCore.Http.Extensions.UriHelper.Encode(Uri uri)
| |
doc_1067
|
My question is: How do we create these variable defaults?
One could say 'just set the value after creating the struct'. However, when working with virtual fields, this is not possible. When you use e.g. Repo.all(MyModel) or any other querying command, virtual fields are set to their fixed default value.
How can we create variable schema field defaults?
A: It is not possible. Ecto simply defines a struct and Elixir structs are expanded at compile time.
You can get around this by explicitly having a function to produce the struct with default values or do this in the changeset function via put_change and similar.
| |
doc_1068
|
I am unable to calculate the HCF of a very large number and a small number.
A: A correctly implemented algorithm uses at most log(number) steps and thus does not cause a stack overflow. I suppose you use the following algorithm:
gcd(a, 0) = a
gcd(a, b) = gcd(a-b, b)
which looks like this in C++:
int gcd(int a, int b) {
if (b == 0) {
return a;
} else {
return gcd(std::max(a, b) - std::min(a, b), std::min(a, b));
}
}
This is not optimal. Instead you shall use the following relation
gcd(a, 0) = a
gcd(a, b) = gcd(b, a mod b)
which looks like this in C++:
int gcd(int a, int b) {
if (b == 0) {
return a;
} else {
return gcd(b, a % b);
}
}
This code will actually take only log(ab) steps, and thus not cause a stack overflow.
Also you may try to enable optimisation: it should allow the compiler to collapse both functions into non-recursive versions (as this is tail recursion). Note that it is not certain this will increase speed.
As a matter of caution: be careful with negative numbers; in C++ the % operator can return negative remainders for them.
A: I believe you're writing a function like this:
int hcf(int a, int b){
if (a == 0){
return b;
}
else if (b == 0){
return a;
}
else if (a > b){
return hcf(b, a - b); // this is subtraction
}
else if (a < b){
return hcf(a, b - a); // this is subtraction
}
}
...and you're calling it with something like
int q = hcf(100000000, 1);
Well... Without optimisation that will create 100 million recursive calls, so your program will certainly run out of stack space.
My personally preferred solution is to give up on recursive methods and use an iterative one. The code can then be simplified to a single loop:
int hcf(int a, int b){
while(a != 0 && b != 0){
if (a > b){
a = a - b;
}
else{
b = b - a;
}
}
if (a == 0){
return b;
}
else{
return a;
}
}
If you insist on using recursive methods, replace subtraction with modulus.
else if (a > b){
-> return hcf(b, a % b); // this is modulus
}
else if (a < b){
-> return hcf(a, b % a); // this is modulus (note b % a, since a < b)
}
| |
doc_1069
|
you can test this problem in this demo page froala editor demo.
How to solve this?
A: There is a new integration with Aviary which you can now use for advanced image editing. It provides rich editing features such as crop, rotate and even filters.
| |
doc_1070
|
Array
(
[userid] => 1
[alias] => rahul
[firstname] => rahul
[lastname] => Khan2
[password] => Ý2jr™``¢(E]_Ø=^
[email] => salman@gmail.com
[url] => 4cfe07dbf35d6.jpg
[avatar_url] => 4cfe07efd2e1c.jpg
[thumb] => 4cfe07ebc8955.jpg
[crop_url] => 4cfe07dbf35d6.jpg
[crop_position] => [100,100,200,200]
[updatedon] => 0000-00-00 00:00:00
[createdon] => 0000-00-00 00:00:00
)
I want to remove the elements url and crop_url. How can I remove these from the array?
A: unset($array['url'],$array['crop_url']);
A: use unset
unset($arrayname['url']);
| |
doc_1071
|
brew install gettext --32-bit
But the result is still an x86_64 version:
file /usr/local/Cellar/gettext/0.18.2/lib/libgettextlib-0.18.2.dylib
/usr/local/Cellar/gettext/0.18.2/lib/libgettextlib-0.18.2.dylib: Mach-O 64-bit dynamically linked shared library x86_64
A: You probably want to use brew install gettext --universal, which will build a universal ("fat") binary containing both 32-bit and 64-bit code. There's no --32-bit option; see brew info gettext to see what options are supported.
$ brew install gettext --universal
[...snip...]
$ file /usr/local/Cellar/gettext/0.18.2/lib/libgettextlib-0.18.2.dylib
/usr/local/Cellar/gettext/0.18.2/lib/libgettextlib-0.18.2.dylib: Mach-O universal binary with 2 architectures
/usr/local/Cellar/gettext/0.18.2/lib/libgettextlib-0.18.2.dylib (for architecture i386): Mach-O dynamically linked shared library i386
/usr/local/Cellar/gettext/0.18.2/lib/libgettextlib-0.18.2.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
| |
doc_1072
|
Here is the error.
Traceback (most recent call last):
File "/opt/python/current/app/foo/boo.py", line 25, in create_file
input_file = open(input_filename, "w")
IOError: [Errno 13] Permission denied: 'testing.txt'
Thanks in advance!
A: You can create files on Elastic Beanstalk, but you shouldn't; you should use the Amazon S3 service for that, with boto3.
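A minimal sketch of that S3-based approach (the bucket and key names are placeholders; in real code you would pass boto3.client("s3") rather than the stand-in used here for local testing):

```python
# Sketch: write uploads to S3 instead of the local filesystem.
# The client is injected so the function can be exercised without
# AWS credentials; in production pass boto3.client("s3") instead.
def upload_text(s3_client, bucket, key, body):
    s3_client.put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))


class FakeS3:
    """Stand-in that records calls, for local testing only."""
    def __init__(self):
        self.calls = []

    def put_object(self, **kwargs):
        self.calls.append(kwargs)


fake = FakeS3()
upload_text(fake, "my-app-bucket", "testing.txt", "hello")  # placeholder names
print(fake.calls[0]["Key"])  # → testing.txt
```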
But if it's just for a test, you can add permissions with the .ebextensions file:
.ebextensions/instance.config
container_commands:
# Permission on deploy command
0.0.0.files.chmod.ondeck:
command: "chmod u+xwr -R /opt/python/ondeck/app"
# Permission on run dir
0.0.1.files.chmod.run:
command: "chmod u+xwr -R /opt/python/current/app"
A: I suggest you create a folder in your app just for that. Then you can add an XX_permissions.config in your .ebextensions folder.
container_commands:
01_change_my_folder_permissions:
command: "mkdir -p /opt/python/current/app/my_folder; chmod 777 -R /opt/python/current/app/my_folder"
The command creates the folder if it doesn't exist and sets the permissions. Just verify that your instance has the right permissions by connecting directly over ssh. Run eb ssh [name-of-your-env] and check that the permissions are OK:
ls -l /opt/python/current/app/
You should see your folder with a permission like drwxrwxrwx in the list.
| |
doc_1073
|
How can i do it? Here is my code
<html>
<div>
<form>
<input type='submit' name='radiotest' value='1crn' />
<input type='submit' name='radiotest' value='1FFZ' />
</form>
</div>
<div>
<head>
<script src="jmol/Jmol.js"></script>
</head>
<script>
jmolInitialize("jmol");
</script>
<script>
jmolSetAppletColor("gray");
jmolApplet(450, "load <?php echo "/jmol/".$_REQUEST['radiotest'].".pdb"; ?>; select all; cartoon on; wireframe off; spacefill off; color chain; ");
</script>
</div>
</html>
Any suggestions?
A: This should not be done using PHP (javascript would be a better candidate), but to answer the question:
Make the form so it has an action and method attributes. Action will be the current page (you don't need to specify this attribute if it's the same page) and method will be GET or POST.
Something like this:
<form action="" method="GET">
Now, when you click the button, the form gets submitted and the page reloads but the URL is different:
http://site.com/page.php?radiotest=1crn
You can now check to see if the $_GET['radiotest'] is set using isset() function. If it is set, do whatever you want it to do.
isset($_GET['radiotest']) - tells you if radiotest=something is in the URL
$_GET['radiotest'] - gives you the value of "something" from radiotest=something in the URL.
Because PHP is a server-side language, all code gets executed on the server. By the time the page is sent to the browser, it has already executed and no further computations can be made. If you must have a PHP solution, the request must be sent to the server and the page must reload.
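As a sketch of the JavaScript alternative mentioned at the start of this answer (the selector and the assumption that Jmol.js exposes a jmolScript() helper are illustrative, not tested code from the original post):

```javascript
// Build the Jmol load script for a given PDB id without a page reload.
function buildLoadScript(pdbId) {
  return "load /jmol/" + pdbId + ".pdb; select all; cartoon on; " +
         "wireframe off; spacefill off; color chain;";
}

// Wire each button to re-run the Jmol script in place. The guard keeps
// this snippet runnable outside a browser.
if (typeof document !== "undefined" && typeof jmolScript === "function") {
  document.querySelectorAll("input[name='radiotest']").forEach(function (btn) {
    btn.addEventListener("click", function (event) {
      event.preventDefault(); // stop the form submission
      jmolScript(buildLoadScript(btn.value));
    });
  });
}
```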
A: This is one of the ways to display the output on the same page...
<html>
<head>
</head>
<body>
<form name='test' method='GET' action="">
<input type='radio' name='radiotest' value='1crn' />1crn
<input type='radio' name='radiotest' value='1FFZ' />1FFz
<input type='submit' value='submit' name='submit' />
</form>
<?php
if(isset($_GET["submit"]))
{
echo $_GET["radiotest"];
}
?>
</body>
</html>
| |
doc_1074
|
[first.dc1]
...
[second.dc1]
...
[first.dc2]
...
[second.dc2]
...
I want to define a child group containing all groups with the suffix dc1
[dc1:children]
*.dc1
Is it possible in Ansible? I've tried *, all, ranges but it doesn't work
A: Unfortunately this seems not to be possible. Ranges in an Ansible inventory are defined as [1:99], but this is only expanded in hostnames, not in host group names. Also there are no wildcards.
You could help yourself with an inventory script, which dynamically generates the group dependencies.
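A minimal sketch of such a dynamic inventory script (the static group data is a placeholder standing in for your real inventory source; a production script would also need to handle Ansible's --list/--host protocol):

```python
#!/usr/bin/env python
# Sketch: emit an inventory that adds a parent group per ".<dc>" suffix,
# so "dc1" gets first.dc1 and second.dc1 as children.
import json

STATIC_GROUPS = {  # placeholder for the real inventory source
    "first.dc1": ["host1"],
    "second.dc1": ["host2"],
    "first.dc2": ["host3"],
    "second.dc2": ["host4"],
}


def build_inventory(groups):
    inventory = {name: {"hosts": hosts} for name, hosts in groups.items()}
    for name in groups:
        if "." in name:
            suffix = name.rsplit(".", 1)[1]          # "first.dc1" -> "dc1"
            inventory.setdefault(suffix, {})
            inventory[suffix].setdefault("children", []).append(name)
    return inventory


if __name__ == "__main__":
    print(json.dumps(build_inventory(STATIC_GROUPS), indent=2))
```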
| |
doc_1075
|
*
*The asset pipeline is fairly complex (coming from ASP.NET MVC, this
is much more complex):
How do I associate a CoffeeScript file with a view?
*All the js files will be loaded on each page and every "on ready"
jquery method in each file will be executed
Does it make more sense to break the javascript for an application into "areas" -- one js file for the user/public area of a site, one file for the admin area of a site and so on. Scripts coming from external sites (jquery, other sites in the org) wouldn't really apply.
The upsides I see are
*
*Performance: loading one 80kb file is much faster than loading 4 20kb files because of TCP/IP overhead (source). In production, all of the public js will be compiled into one file and served on every request, but I see downsides to including "admin" code in that file.
*Security: there might be some things in script files that I don't want to expose to unauthorized users. Granted, if it's in script it's not secure, but it would be nice if I could minimize the exposure of things like paths to controller actions that perform database maintenance.
*Simplicity in development: if the on-ready event will be firing off from each file, it makes sense to me to just put it in one file, that way I won't have to load each file to see what on-readys are in there. The js for sub-areas (like admin) would naturally be in a separate file that wouldn't come from assets
Related: Put javascript in one .js file or break it out into multiple .js files?
A: Splitting admin javascript from other javascript sounds like a good idea.
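As a sketch of what that split can look like with the Rails asset pipeline (the file names are illustrative, not from the original post): keep two Sprockets manifests, reference only the relevant one from each layout, and add the admin manifest to config.assets.precompile so it is compiled separately.

```javascript
// app/assets/javascripts/application.js -- served to every visitor
//= require jquery
//= require user_info

// app/assets/javascripts/admin.js -- referenced only from the admin layout,
// so paths used by admin tooling never ship to public pages
//= require jquery
//= require admin/maintenance
```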
That performance article is definitely right. I haven't seen a situation where caching as much JavaScript as possible in one file wasn't the performance winner. Even if the areas don't share JavaScript, downloading one larger file all at once tends to be faster than multiple downloads.
If you have a lot of libraries and 1-off page specific javascript files then maybe caching all the libraries makes sense, just make sure you measure for cache hits for a typical user when testing.
There are patterns you can use to minimize the onload problem. Splitting things into modules and only initializing those modules based on feature detection has been an effective method for me.
For example, if I have a UserInfo module:
!function(ns) {
ns.init = function() {
ns.setup_login()
ns.show_user_info()
}
ns.setup_login = function() { /* blah-blah */ }
// ... etc
}.call(this, this.UserInfo={})
And if I have some html on the page related to a login:
<div id="user_login">
<div class="user_info"></div>
<div class="login_links"></div>
</div>
I can write an initializer like:
$(function() {
$("#user_login").each(function() { UserInfo.init() })
})
Without this pattern I would've written loading calls to setup_login and show_user_info separately. Usually this lets me initialize a few different modules based on what aspects I detect on the page, and if you group these modules by their dependencies that usually cuts it down even more. (I might've done User.init -> UserInfo.init since maybe I could assume UserInfo depends on User.)
| |
doc_1076
|
Screen Shot:
Image
Original Code
if (typeof navigator.cookieEnabled == 'undefined' && !cookieEnabled) {
document.cookie = 'testcookie';
cookieEnabled = (document.cookie.indexOf('testcookie') != -1) ? true : false;
}
Please help.
A: Please mention the editor you are using along with your code, so that I can give you a definite answer.
| |
doc_1077
|
Public Function PostTestStatus(dto As DTOStatusUpdate) As HttpResponseMessage
and the "dto" parameter is defined as this:
Public Class DTOStatusUpdate
Property QueryByNameDTO As QueryByNameDTO
Property Tests As ICollection(Of Test)
Public Sub New()
Me.Tests = New Collection(Of Test)
End Sub
End Class
Test is just a simple class (strings/integers only)
and I am sending the following JSON (which validates as correct):
{"QueryByNameDTO":{"email":"atest@here.co.uk","notes":"blahblahblah","Name":"bob"},"Tests":[{"Name":"gsdf","Status":"idle"},{"Name":"gsdf","Status":"idle"}]}
My method is never reached because the Collection(Of Test) is not valid/correct.
I've tried List(Of Test), IEnumerable(Of Test) and various other combinations; none work.
The part I don't get is that if I send the JSON without the "Tests" list, my method's breakpoint is reached and the first property (QueryByNameDTO) is all present and correct.
It also works if I change the DTOStatusUpdate class to use just a singular Test rather than a List(Of Test).
The problem seems to be in the use of a List/Collection/Enumeration(Of Test).
What am I doing wrong? I've been at this for hours now and it's doing my head in. I've googled and searched and gone round and round in circles trying to find an explanation. The closest I got was a question about a parameter that does reach the method and breakpoint, but with all of its properties Nothing.
I can return an IEnumerable(Of whatever) from a Web API method, so what am I missing to send one to a Web API method?
EDIT 1: Having stripped the Test class down to one field, Id, where it worked OK, and building it back up, I've narrowed this down to one attribute on one field in that class.
Here are two fields from the Test class: Status and DNNUserId.
Status works fine with or without the Required attribute!
However, adding the Required attribute to the DNNUserId property causes the method not to be called/reached. Taking the Required attribute off, it works!
Perhaps I've been at this too long, because other than string vs. integer there is no difference. Should the JSON be formatted differently if an integer is required? Can anyone shed any light on this?
Private _Status As String
<Required()>
Public Property Status As String
Get
Return _Status
End Get
Set(value As String)
If _Status <> value Then
_Status = value
OnPropertyChanged("Status")
End If
End Set
End Property
Private _DNNUserId As Integer
Public Property DNNUserId As Integer
Get
Return _DNNUserId
End Get
Set(value As Integer)
If _DNNUserId <> value Then
_DNNUserId = value
OnPropertyChanged("DNNUserId")
End If
End Set
End Property
A: Can you try this and see if it works.
Public Class DTOStatusUpdate
Public Property QueryByNameDTO As QueryByNameDTO
Public Property Tests As ICollection(Of Test)
Public Sub New()
Me.Tests = New Collection(Of Test)
End Sub
End Class
| |
doc_1078
|
function addArrowToGraph(src, dst) {
s.kill();
g.edges.push({
id: 'e' + g.edges.length,
source: 'n' + src,
target: 'n' + dst,
size: 100,
color: '#ccc',
minArrowSize: 100,
type: 'arrow'
});
s = new sigma({
graph: g,
container: 'graph-container'
});
}
No matter what number I set `minArrowSize` to, it stays the same. I have to zoom in really close to even see the arrows at all.
A: If someone wonders how to change the arrow size in sigma.js v2 without making edges thicker: I managed to do it by implementing a custom edge arrow head program like this:
In your index.js:
import ArrowEdgeProgram from "./edge.arrow";
// [...]
const renderer = new Sigma(graph, container, {
edgeProgramClasses: {
arrow: ArrowEdgeProgram
}
});
Create these two files in the same directory as your index.js: (taken from the sigma.js code, located in src/rendering/webgl/programs, only converted from TypeScript to JS and with correct import paths)
edge.arrow.js:
import { createEdgeCompoundProgram } from "sigma/rendering/webgl/programs/common/edge";
import EdgeArrowHeadProgram from "./edge.arrowHead";
import EdgeClampedProgram from "sigma/rendering/webgl/programs/edge.clamped";
const EdgeArrowProgram = createEdgeCompoundProgram([EdgeClampedProgram, EdgeArrowHeadProgram]);
export default EdgeArrowProgram;
edge.arrowHead.js:
import { floatColor } from "sigma/utils";
import vertexShaderSource from "sigma/rendering/webgl/shaders/edge.arrowHead.vert.glsl";
import fragmentShaderSource from "sigma/rendering/webgl/shaders/edge.arrowHead.frag.glsl";
import { AbstractEdgeProgram } from "sigma/rendering/webgl/programs/common/edge";
const POINTS = 3,
ATTRIBUTES = 9,
STRIDE = POINTS * ATTRIBUTES;
export default class EdgeArrowHeadProgram extends AbstractEdgeProgram {
// Locations
positionLocation;
colorLocation;
normalLocation;
radiusLocation;
barycentricLocation;
matrixLocation;
sqrtZoomRatioLocation;
correctionRatioLocation;
constructor(gl) {
super(gl, vertexShaderSource, fragmentShaderSource, POINTS, ATTRIBUTES);
// Locations
this.positionLocation = gl.getAttribLocation(this.program, "a_position");
this.colorLocation = gl.getAttribLocation(this.program, "a_color");
this.normalLocation = gl.getAttribLocation(this.program, "a_normal");
this.radiusLocation = gl.getAttribLocation(this.program, "a_radius");
this.barycentricLocation = gl.getAttribLocation(this.program, "a_barycentric");
// Uniform locations
const matrixLocation = gl.getUniformLocation(this.program, "u_matrix");
if (matrixLocation === null) throw new Error("EdgeArrowHeadProgram: error while getting matrixLocation");
this.matrixLocation = matrixLocation;
const sqrtZoomRatioLocation = gl.getUniformLocation(this.program, "u_sqrtZoomRatio");
if (sqrtZoomRatioLocation === null)
throw new Error("EdgeArrowHeadProgram: error while getting sqrtZoomRatioLocation");
this.sqrtZoomRatioLocation = sqrtZoomRatioLocation;
const correctionRatioLocation = gl.getUniformLocation(this.program, "u_correctionRatio");
if (correctionRatioLocation === null)
throw new Error("EdgeArrowHeadProgram: error while getting correctionRatioLocation");
this.correctionRatioLocation = correctionRatioLocation;
this.bind();
}
bind() {
const gl = this.gl;
// Bindings
gl.enableVertexAttribArray(this.positionLocation);
gl.enableVertexAttribArray(this.normalLocation);
gl.enableVertexAttribArray(this.radiusLocation);
gl.enableVertexAttribArray(this.colorLocation);
gl.enableVertexAttribArray(this.barycentricLocation);
gl.vertexAttribPointer(this.positionLocation, 2, gl.FLOAT, false, ATTRIBUTES * Float32Array.BYTES_PER_ELEMENT, 0);
gl.vertexAttribPointer(this.normalLocation, 2, gl.FLOAT, false, ATTRIBUTES * Float32Array.BYTES_PER_ELEMENT, 8);
gl.vertexAttribPointer(this.radiusLocation, 1, gl.FLOAT, false, ATTRIBUTES * Float32Array.BYTES_PER_ELEMENT, 16);
gl.vertexAttribPointer(
this.colorLocation,
4,
gl.UNSIGNED_BYTE,
true,
ATTRIBUTES * Float32Array.BYTES_PER_ELEMENT,
20,
);
// TODO: maybe we can optimize here by packing this in a bit mask
gl.vertexAttribPointer(
this.barycentricLocation,
3,
gl.FLOAT,
false,
ATTRIBUTES * Float32Array.BYTES_PER_ELEMENT,
24,
);
}
computeIndices() {
// nothing to do
}
process(
sourceData,
targetData,
data,
hidden,
offset,
) {
if (hidden) {
for (let i = offset * STRIDE, l = i + STRIDE; i < l; i++) this.array[i] = 0;
return;
}
const thickness = data.size || 1,
radius = targetData.size || 1,
x1 = sourceData.x,
y1 = sourceData.y,
x2 = targetData.x,
y2 = targetData.y,
color = floatColor(data.color);
// Computing normals
const dx = x2 - x1,
dy = y2 - y1;
let len = dx * dx + dy * dy,
n1 = 0,
n2 = 0;
if (len) {
len = 1 / Math.sqrt(len);
n1 = -dy * len * thickness;
n2 = dx * len * thickness;
}
let i = POINTS * ATTRIBUTES * offset;
const array = this.array;
// First point
array[i++] = x2;
array[i++] = y2;
array[i++] = -n1;
array[i++] = -n2;
array[i++] = radius;
array[i++] = color;
array[i++] = 1;
array[i++] = 0;
array[i++] = 0;
// Second point
array[i++] = x2;
array[i++] = y2;
array[i++] = -n1;
array[i++] = -n2;
array[i++] = radius;
array[i++] = color;
array[i++] = 0;
array[i++] = 1;
array[i++] = 0;
// Third point
array[i++] = x2;
array[i++] = y2;
array[i++] = -n1;
array[i++] = -n2;
array[i++] = radius;
array[i++] = color;
array[i++] = 0;
array[i++] = 0;
array[i] = 1;
}
render(params) {
if (this.hasNothingToRender()) return;
const gl = this.gl;
const program = this.program;
gl.useProgram(program);
// Binding uniforms
gl.uniformMatrix3fv(this.matrixLocation, false, params.matrix);
gl.uniform1f(this.sqrtZoomRatioLocation, Math.sqrt(params.ratio));
gl.uniform1f(this.correctionRatioLocation, params.correctionRatio);
// Drawing:
gl.drawArrays(gl.TRIANGLES, 0, this.array.length / ATTRIBUTES);
}
}
Now, to increase arrow size, modify the line const thickness = data.size || 1, in edge.arrowHead.js, e.g.:
// [...]
const thickness = data.size * 2.5 || 1,
// [...]
A: I was able to change the arrow size by setting the renderer type to canvas and defining minArrowSize in the settings of the graph.
let s = new sigma({
renderer: {
container: document.getElementById('graph-container'),
type: 'canvas'
},
settings: {
edgeLabelSize: 'proportional',
minArrowSize: 15
}
});
| |
doc_1079
|
I attempted to get this component to fail in the Aurelia skeleton-navigation app ("skeleton-navigation\skeleton-typescript" on GitHub) as it does in our application; however, it works consistently there, that is, no characters are lost.
I then went back to our app. If I reduce the barcode component to just a simple input field as below it also fails. If I take out the value.bind or value.two-way the input field has no loss of characters.
<input type="text" value.bind="barcodeValue1"/>
<input type="text" value.two-way="barcodeValue2"/>
There are many differences in the package.json files; for example, our app is using:
"aurelia-framework": "npm:aurelia-framework@^1.0.0-rc.1.0.2".
aurelia-skeleton is using:
"aurelia-framework": "npm:aurelia-framework@^1.0.0"
There is one solution we can see, and that is to introduce a delay on the scanner between characters; however, we would like the binding to work, and we also think it may be a bug in Aurelia. We are currently re-writing the component NOT to use the binding ability. The problem occurs in Internet Explorer; our application works fine in Google Chrome.
A: This is probably a bug in IE. I believe this problem can be solved by changing the updateTrigger to 'change'
<input type="text" value.bind="barcodeValue1 & updateTrigger:'change'"/>
You can also try debounce
<input type="text" value.bind="barcodeValue1 & debounce">
http://aurelia.io/hub.html#/doc/article/aurelia/binding/latest/binding-binding-behaviors/1
Make sure your scanner is pressing "enter" or "tab" after typing the code.
| |
doc_1080
|
AudioManager Audio = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
int mode = Audio.getMode();
this code is working fine on API level 11 and above as mentioned in Android|Developers
http://developer.android.com/reference/android/media/AudioManager.html#MODE_IN_COMMUNICATION
Is there any way to detect the MODE_IN_COMMUNICATION mode in older APIs?
A: Yes, there is a way to detect the MODE_IN_COMMUNICATION mode in older APIs. Look at the SIP-based VoIP support: http://developer.android.com/about/versions/android-2.3.html
http://developer.android.com/guide/topics/connectivity/sip.html
Hope this helps.
| |
doc_1081
|
I am writing a music application that plays songs from YouTube on a server. I am downloading the thumbnail and storing it in a cfs:collection (because I need a Gaussian-blurred version of it later).
My publish method looks like the following:
Meteor.publish('currentSong', function() {
return [Playlist.find({'position': 0}), Thumbnails.find()];
});
I had a version which published only the Thumbnail of the current song, but that caused even more problems.
In my Templates onCreated method I subscribe to that (among various other things). I've tried both within a this.autorun() method and outside of one.
Template.controlpanel.onCreated(function() {
// subscribe to the publications
Meteor.subscribe('currentSong');
Meteor.subscribe('status');
});
And then I have a Template helper retrieving the URL of the Thumbnail to actually display it within a <img src="<url>" /> context:
getThumbnail: function() {
if(Template.instance().subscriptionsReady()){
var thumbnail = this.thumbnail.getFileRecord();
if(!$.isEmptyObject(thumbnail)){
return thumbnail.url({'store': 'Thumbnail'});
}
}
}
Since I was asked for some more Code, here is a snippet from the actual Template
{{#with currentSong}}
<div id="ThumbnailDisplay">
<img src="{{getThumbnail}}" alt="{{title}} Thumbnail" id="thumbnail">
</div>
{{/with}}
Where currentSong only returns one database entry, listening to the following schema:
PlaylistSchema = new SimpleSchema({
title: {
type: String,
label: "titlename"
},
url: {
type: String,
label: "titleurl"
},
duration: {
type: Number,
label: "duration"
},
file: {
type: String,
label: "filepath",
optional: true
},
position: {
type: Number,
label: "postion"
},
// id to the Thumbnail CFS Collection object
thumbnail: {
// type: String,
type: FS.File,
label: "thumbnail"
}
});
Here you can see that this.thumbnail, in the context of {{#with currentSong}}, refers to an FS.File object storing both the thumbnail and its Gaussian-blurred version.
The problem is that when the Template is already loaded and it switches from one song to another, everything works without any problem. But when there is no song in my playlist, the display part is invisible (the Template is technically rendered, but all the information is inside a {{#with currentSong}} block). And when I insert a song, the display "pops up" and everything is displayed (song title, duration slider and so on) except for the thumbnail. Once I reload the page, it is there.
I am subscribing to the collection. I check whether the subscriptions are ready, and still it is not working.
I have some other parts in my application where I subscribe in an onCreated context and still have to use setTimeout(function() { ... }, 100), because otherwise the data is not available yet.
I am pretty sure that the problem lies in front of the screen, that I am missing something or don't fully understand Meteor's subscriptions and which parts are reactive. But I just don't get it. Your help would be much appreciated.
One thing to note is that I am aware of iron-router and its capabilities. I have used it in another project of mine. But this app is a single-page application with no need for routes at all, so I'd like to refrain from using it.
A: So after playing around a little more, the problem basically is that the subscriptions are ready, but the CollectionFS files are not.
My solution now is to have my helper reactively set the correct URL, which always works except when the Template is first rendered.
To tackle that problem I have an autorun function which changes the image whenever the song changes.
this.autorun(function() {
if(this.subscriptionsReady()){
var currentSong = Playlist.findOne({'position': 0});
if(currentSong ){
var thumbnail = currentSong.thumbnail.getFileRecord();
$('#thumbnail').attr('src', thumbnail.url());
}
}
}.bind(this));
It probably is not the ideal solution, but it works. For some reason subscriptionsReady and CollectionFS do not seem to work together very well.
| |
doc_1082
|
Why can I select and run multiple statements like this:
drop trigger T_MyTab1;
drop trigger T_MyTab2;
But If I select these 2 statements it fails:
create trigger T_MyTab1 after insert on MyTab1 referencing new as n for each row mode db2sql insert into MyAuditTab1 values (n.col1);
create trigger T_MyTab2 after insert on MyTab2 referencing new as n for each row mode db2sql insert into MyAuditTab2 values (n.col1);
The error is:
DB2 SQL Error: SQLCODE=-104, SQLSTATE=42601, SQLERRMC=create trigger T_MyTab1 after inse;BEGIN-OF-STATEMENT;<space>, DRIVER=3.69.24
The same SQL works fine in SquirrelSQL, so I think it's to do with the line delimiter in SQL Developer...
| |
doc_1083
|
The Jquery code:
$(document).ready(function() {
    $('.teacher').hide();
    $('.switch').click(function() {
        $('.student').hide();
        $('.teacher').show();
    });
});
The HTML code:
<label>Student </label>
<label class="switch">
<input type="checkbox" id="switchVal" value="0">
<span class="slider"></span>
</label>
<label> Teacher</label>
A:
$('.teacher').hide();
$('.switch').click(function() {
    $('.student').toggle();
    $('.teacher').toggle();
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class='student'>Student</div>
<div class="switch">
<input type="checkbox" id="switchVal" value="0">
<span class="slider"></span>
</div>
<div class='teacher'>Teacher</div>
A: Use $(".teacher, .student").toggle();
Or, if needed, for more granular control you could always get the current checkbox state using
const isChecked = this.checked; // boolean
Example:
jQuery($ => {
    $(".teacher").hide();
    $("#switchVal").on("input", function() {
        $(".teacher, .student").toggle();
    });
});
.toggler {
display: inline-flex;
gap: 0 5px;
cursor: pointer;
}
.toggler-checkbox {
display: inline-block;
width: 35px;
height: 15px;
background: #444;
border-radius: 1.2em;
}
.toggler-checkbox::before {
content: "";
position: relative;
display: inline-block;
width: 15px;
height: 15px;
background: #0bf;
border-radius: 1em;
transition: transform 0.3s;
}
.toggler input:checked ~ .toggler-checkbox::before {
transform: translateX(20px);
}
.toggler-label {
user-select: none;
}
.toggler-label:nth-of-type(1) {
order: -1;
color: #0bf;
}
.toggler input:checked ~ .toggler-label:nth-of-type(1) {
color: inherit;
}
.toggler input:checked ~ .toggler-label:nth-of-type(2) {
color: #0bf;
}
<label class="toggler">
<input type="checkbox" id="switchVal" value="0" hidden>
<span class="toggler-checkbox"></span>
<b class="toggler-label">Student</b>
<b class="toggler-label">Teacher</b>
</label>
<ul>
<li class="student">Student: Anna</li>
<li class="student">Student: John</li>
<li class="teacher">Teacher: Mark</li>
<li class="student">Student: Tara</li>
<li class="teacher">Teacher: Zack</li>
</ul>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
| |
doc_1084
|
I have a "like" button app for my website.
But my database is not safe, because everyone can write to my database.
What I want is: "allow only incoming data from my website, and block incoming data from other sites".
For example:
{
"rules": {
".read": true,
".write": allow only incoming data from "www.example.com" and block incoming from other sites
}
}
How can I do this? Or how can I set this rule on Firebase console?
A: If I understand correctly what you are trying to do, I believe you can do it with the service account linked with your Firebase account. You can manually create a whitelist of
URLs allowed to use your API key. The trick is that it is found in the Google Cloud Platform, not the Firebase Console. However, there is a nifty link in the Firebase Console that will take you to where you need to be.
(Also, the direct link of where to go is https://console.cloud.google.com/apis/credentials but make sure you are logged into an "Owner" or "Editor" account listed on the "Users and permissions" tab found at step two below.)
Here are the steps:
*
*Log into your Firebase Console and go to the gear icon next to "project overview" in the top left of the Firebase console.
*Then navigate to the "Users and permissions" tab
*Then click the small blue link underneath the main table on the screen that says "Advanced permission settings".
*It should take you to Google Cloud Console. (Make sure you are logged into an "Owner" or "Editor" account listed on the "Users and permissions" tab you were just looking at from the Firebase Console.) Click the menu in the top left of the Google Cloud Console, and go to "APIs & Services"
*Then the click sub-menu item "credentials"
*Click the desired API key you want to restrict.
*And set the websites you want to allow access by clicking the radio button "HTTP Referers" under "Application Restrictions", adding an item, entering the web address, and hitting done to save the changes.
| |
doc_1085
|
We are adding a "More" button at the end of the table.
Here is the code for it:
UIView *footerView = [[UIView alloc]initWithFrame:CGRectMake(0, 0, 768, 40)];
footerView.backgroundColor = [UIColor grayColor];
UIButton *btnLoadMore = [UIButton buttonWithType:UIButtonTypeCustom];
btnLoadMore.frame = CGRectMake(-10, 0, 768, 30);
btnLoadMore.autoresizingMask = UIViewAutoresizingFlexibleBottomMargin | UIViewAutoresizingFlexibleWidth;
[btnLoadMore setTitle:@"Sample" forState:UIControlStateNormal];
[btnLoadMore setBackgroundColor:[UIColor redColor]];
[btnLoadMore setTitleColor:[UIColor blackColor] forState:UIControlStateNormal];
[btnLoadMore addTarget:self action:@selector(loadMore) forControlEvents:UIControlEventTouchUpInside];
btnLoadMore.userInteractionEnabled=YES;
[btnLoadMore.titleLabel setFont:[UIFont fontWithName:@"Helvetica" size:17.0]];
[footerView addSubview:btnLoadMore];
[footerView setHidden:YES];
[footerView setTag:999];
[cell addSubview: footerView];
for (id subView in [cell subviews]) {
[subView setHidden:NO];
}
UIView *lastRow = [cell viewWithTag:999];
[lastRow setHidden:YES];
if(indexPath.section == [arSearch count] && isLoadMore){
for (id subView in [cell subviews]) {
[subView setHidden:YES];
}
cell.backgroundView = nil;
[lastRow setHidden:NO];
A: Check your code. Why have you written the following line:
[footerView setHidden:YES];
I think you are hiding the view inside the cell.
A: Instead of
[cell addSubview: footerView];
use
[cell.contentView addSubview: footerView];
Also remove
[footerView setHidden:YES];
For load more, I always do the following:
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 65)];
v.backgroundColor = [UIColor clearColor];
int mySiz = 0;
// keep a counter of how many times load more is pressed; the initial value is 0 (this is like an index)
mySiz = [startNumberLabel.text intValue]+1;
// I use 15 because my page size is 15.
if ([feeds count]>=(15*mySiz)) {
NSLog(@"showing button...");
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
[button setFrame:CGRectMake(10, 10, 296, 45)];
[button setBackgroundImage:[UIImage imageNamed:localize(@"loadmore")] forState:UIControlStateNormal];
[button addTarget:self action:@selector(loadMoreData:) forControlEvents:UIControlEventTouchUpInside];
[v addSubview:button];
mainTableView.tableFooterView = v;
} else {
mainTableView.tableFooterView = nil;
}
}
[mainTableView reloadData];
| |
doc_1086
|
[
["elem1","elem2"],
["elem1","elem3"],
["elem4","elem7"],
...
]
And I want to create a nested dictionary that then looks something like this:
[{"elem1":["elem2","elem3"]},{"elem4":"elem7"}]
So the higher the index in one of the initial sublists, the higher the hierarchical position in the generated tree. How would you go about this in Python? What do you call that, "treeification"? I feel like there has to be a package out there that does exactly that.
A: Here is code which can help you get your required output:
data = [
["elem1","elem2"],
["elem1","elem3"],
["elem4","elem7"],
]
maplist = {}
for a in data:
    if a[0] in maplist:
        maplist[a[0]].append(a[1])
    else:
        maplist[a[0]] = [a[1]]
print(maplist)
To sort the result based on the number of children, you can use the code below:
sorted_items = sorted(maplist.items(), key = lambda item : len(item[1]), reverse=True)
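For the sample data above, a quick sanity check (the same grouping logic, condensed with setdefault) shows what both structures look like:

```python
data = [
    ["elem1", "elem2"],
    ["elem1", "elem3"],
    ["elem4", "elem7"],
]

maplist = {}
for a in data:
    # setdefault is a condensed form of the if/else grouping above
    maplist.setdefault(a[0], []).append(a[1])

print(maplist)       # {'elem1': ['elem2', 'elem3'], 'elem4': ['elem7']}

# parents with the most children come first
sorted_items = sorted(maplist.items(), key=lambda item: len(item[1]), reverse=True)
print(sorted_items)  # [('elem1', ['elem2', 'elem3']), ('elem4', ['elem7'])]
```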
A: I don't imagine there is something in a library for this considering it is fairly simple and not that useful for most people. It is better to write the code manually.
First of all, the output format in the question cannot fully represent a tree: for example the data
[
["elem1", "elem2"],
["elem1", "elem3"],
["elem4", "elem7"],
["elem3", "elem5"],
]
...would need to be similar to [{"elem1":["elem2","elem3"]},{"elem4":"elem7"}] but add elem5 as a child of elem3; however, elem3 is a string, with no place for children to be stored. Thus, I suggest the following output format:
{'elem4': {'elem7': {}}, 'elem1': {'elem2': {}, 'elem3': {'elem5': {}}}}
Here every node is represented as a dictionary from child node names to child node values, so a tree containing only a root node looks like {}, and a tree with 3 nodes (the root + 2 children) looks like {'child1': {}, 'child2': {}}.
To turn a list of parent-child associations into such a tree you can use this code:
def treeify(data):
    # result dictionary
    map_list = {}
    # initially all nodes with a child; entries will be removed later
    root_nodes = {parent for parent, child in data}
    for parent, child in data:
        # get the dictionary that this node maps to (empty dictionary by default)
        children = map_list.setdefault(parent, {})
        # add this connection
        children[child] = map_list.setdefault(child, {})
        # remove nodes that have a parent from the set of root_nodes
        if child in root_nodes:
            root_nodes.remove(child)
    # return the dictionary with only root nodes at the top level
    return dict((root_node, map_list[root_node]) for root_node in root_nodes)

print(treeify([
    ["elem1", "elem2"],
    ["elem1", "elem3"],
    ["elem4", "elem7"],
    ["elem3", "elem5"],
]))
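To illustrate the nested-dict format, here is a small hypothetical helper (not part of the answer above) that walks the resulting tree and yields each node together with its depth:

```python
def flatten(tree, depth=0):
    # Yield (name, depth) pairs for every node in a nested-dict tree,
    # visiting siblings in sorted order for a deterministic result.
    for name, children in sorted(tree.items()):
        yield name, depth
        yield from flatten(children, depth + 1)

tree = {'elem1': {'elem2': {}, 'elem3': {'elem5': {}}}, 'elem4': {'elem7': {}}}
print(list(flatten(tree)))
# [('elem1', 0), ('elem2', 1), ('elem3', 1), ('elem5', 2), ('elem4', 0), ('elem7', 1)]
```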
| |
doc_1087
|
I have a simple axios interceptor coded as below.
// Add a response interceptor
axios.interceptors.response.use((response: AxiosResponse<any>) => {
    // Do something with response data
    return response;
}, (error: any) => {
    // Do something with response error
    // Here "error.response" is undefined.
    return Promise.reject(error);
});
What I am trying to do: I need to redirect to another location (an IdentityServer SSO page, but that is not relevant here I guess) when not authenticated.
So when not authenticated, calling my API returns a 302 status code with the proper Location header,
but axios is not redirecting to that location automatically.
That is still OK if I have to redirect manually,
but I am getting "error.response" as undefined.
So now how do I redirect? Because I cannot detect the status code while "error.response" is undefined.
In the Network tab it displays as follows.
The Response tab shows the following: no response data available!
What am I doing wrong?
A: As I understand it, the 302 status code is handled by the browser. After a 302 answer, the browser receives the page referenced by the Location header and passes it to axios. So you can find the page's HTML in response.data. Also, in response.request.responseURL you can find the URL you need to redirect to.
So I made something like this:
axios.post('.', Data)
    .then(response => {
        if (response.status === 200) {
            window.location.href = response.request.responseURL;
        }
    })
    .catch(error => { console.log(error) });
It works for me.
| |
doc_1088
|
I am trying to manually download the sources file with ZnClient. The directory my image is located in is /mnt/upload/upload.140605183221.
This is the code I have
| aFileStream |
aFileStream := '/mnt/universe/upload/upload.140605183221/PharoV30.sources' asFileName writeStream.
aFileStream write: (ZnClient new get: 'http://files.pharo.org/sources/PharoV30.sources.zip').
aFileStream close.
I'm brand new to ZnClient; I don't know how to use it. What's wrong with my code?
A: Nearly right. You need to replace the message #asFileName with #asFileReference, since #asFileName will answer a string object (so you actually get a WriteStream on the string).
fileReference := '/mnt/universe/upload/upload.140605183221/PharoV30.sources' asFileReference.
fileReference writeStreamDo: [ :stream |
| url|
url := 'http://files.pharo.org/sources/PharoV30.sources.zip'.
stream write: (ZnClient new get: url) ]
A: You can do this:
'./PharoV30.sources' asFileReference
writeStreamDo: [ :stream |
stream write: (ZnClient new get: 'http://files.pharo.org/sources/PharoV30.sources') contents ].
| |
doc_1089
|
<html>
<body>
<script>
    var d = 1;
    try {
        if (d == 2) {
            console.log('fd');
        }
    } catch (e) {
        console.log('catch');
    }
</script>
</body>
</html>
When I give d the value 2, the code inside try works, but when the value is 1, the code inside catch doesn't work.
Can you tell me why it's not working? Any help would be great. Thanks!
A: try...catch is for catching errors, not for handling conditional statements. if (d == 2) is perfectly valid and doesn't throw any errors, nor does the code within your conditional statement.
A catch clause contains statements that specify what to do if an exception is thrown in the try block. That is, you want the try block to succeed, and if it does not succeed, you want control to pass to the catch block. If any statement within the try block (or in a function called from within the try block) throws an exception, control immediately shifts to the catch clause. If no exception is thrown in the try block, the catch clause is skipped.
— MDN's Notes on try...catch
If you want to do something if d isn't equal to 2, you can use else:
if (d == 2) {
...
}
else {
...
}
If you really want to use a try...catch statement here then you're going to have to throw an error. You can do this with JavaScript's throw statement:
try {
if (d != 2) {
throw "d is not equal to 2!";
}
}
catch (e) {
...
}
The catch block here will catch the error, and the e argument will be equal to our error string: "d is not equal to 2!".
A: try/catch is for handling errors. You're not generating an error, since your comparison is a valid comparison. An error is not the same as an if statement that returns false.
A: That's not the way a catch is intended to be used. The catch block will be visited once an exception is thrown. As an example, try to add the following else block to your if:
else { throw new Error; }
That said, it is not a good idea to control your flow by means of exceptions and I strongly discourage the use of such a solution in a production environment.
A: You need to throw the exception. Use throw "Exception" inside the code like this:
try {
    if (d == 2) {
        console.log('fd');
    } else {
        throw "Exception";
    }
} catch (e) {
    console.log('catch');
}
| |
doc_1090
|
My question is, is there an elegant way to merge all the ids in a file?
My options are:
*
*Make a JavaScript after using JMeter to put all together.
*A JSON post-processor to get the id and then append to a file
Any nicer solution?
A: Extract the id from the response; you can use either a Regular Expression Extractor or a JSON post-processor.
Then use a Beanshell PostProcessor to append these ids to a file. That should be the easiest way.
| |
doc_1091
|
For some reason it's adding extra slashes to the path: "\\\\\\Drive\\folder\\folder\\lib\\bingads\\v12\\proxies\\campaign_management_service.xml". I am using Python 3.6 on Windows 7 and this is my first time working with Python. Bingads is already included in the packages setup. The error code that shows up is [WinError 161].
I added all the packages I need to 'packages' setup. I'm also using CX_Freeze 6.0. Using 6.1 caused the tkinter folders tcl and tk to not copy to the build directory.
OS:Win 7 64 bit
Python ver: 3.6
Error Message
setup.py
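One thing worth checking (an assumption on my part, since the full traceback isn't shown above): doubled backslashes often appear only in the repr of a Windows path string, not in the path itself, so the "extra slashes" may just be Python's escaping:

```python
# A Windows-style path literal: each "\\" in source code is a single backslash
p = "C:\\Drive\\folder\\lib\\file.xml"
print(p)        # C:\Drive\folder\lib\file.xml
print(repr(p))  # 'C:\\Drive\\folder\\lib\\file.xml' -- repr re-escapes each backslash
```

If the path printed without repr really does contain doubled or leading extra backslashes, then the path being handed to the loader is genuinely malformed, which would match [WinError 161] ("the specified path is invalid").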
| |
doc_1092
|
Hi, I'm just playing around with the What3Words grid on Mapbox code from the tutorial. (https://developer.what3words.com/tutorial/displaying-the-what3words-grid-on-a-mapbox-map)
I'm trying to make the tiles from the grid interactive, kind of like on the w3w website (clickable, hover effect, getting data from them, etc.), but the grid doesn't seem to work when the data source is loaded as a 'fill' layer on Mapbox; it only works as a 'line' layer type. Every single example I find online uses Polygons (or MultiPolygons) from a fill layer type, but I can't see anything around that uses bounding boxes.
(Basically trying to achieve something like this, but with every tile instead of the states: https://docs.mapbox.com/mapbox-gl-js/example/hover-styles/)
I don't really know what's going on, why can't I add the source data as a fill layer? Is there a way to load the data as Polygons instead of bounding boxes?
Thanks.
Code (from the tutorial):
<html>
<head>
<script src="https://assets.what3words.com/sdk/v3.1/what3words.js?key=YOUR_API_KEY"></script>
<script src="https://api.tiles.mapbox.com/mapbox-gl-js/v0.53.0/mapbox-gl.js"></script>
<link href="https://api.tiles.mapbox.com/mapbox-gl-js/v0.53.0/mapbox-gl.css" rel="stylesheet" />
<style>
#map {
height: 100%;
}
html, body {
height: 100%;
margin: 0;
padding: 0;
}
</style>
</head>
<body>
<div id="map"></div>
<script>
// Create the Mapbox
mapboxgl.accessToken = "YOUR_MAPBOX_TOKEN";
let map = new mapboxgl.Map({
container: "map", // container id
style: "mapbox://styles/mapbox/streets-v9", // stylesheet location
center: [-0.195499, 51.52086], // starting position [lng, lat]
zoom: 18 // starting zoom
});
map.addControl(new mapboxgl.NavigationControl());
</script>
<script>
function drawGrid() {
const zoom = map.getZoom();
const loadFeatures = zoom > 17;
if (loadFeatures) { // Zoom level is high enough
var ne = map.getBounds().getNorthEast();
var sw = map.getBounds().getSouthWest();
// Call the what3words Grid API to obtain the grid squares within the current visible bounding box
what3words.api
.gridSectionGeoJson({
southwest: {
lat: sw.lat, lng: sw.lng
},
northeast: {
lat: ne.lat, lng: ne.lng
}
}).then(function(data) {
// Get the grid source from the map (it won't exist initally)
var grid = map.getSource('grid');
if (grid === undefined) {
// Create a source of type 'geojson' which loads the GeoJSON returned from the what3words API
map.addSource('grid', {
type: 'geojson',
data: data
});
// Create a new layer, which loads data from the newly created data source
map.addLayer({
id: 'grid_layer',
type: "line",
source: 'grid',
layout: {
"line-join": "round",
"line-cap": "round"
},
paint: {
"line-color": '#777',
"line-width": .5
}
});
} else {
// The source and map layer already exist, so just update the source data to be the new
// GeoJSON returned from the what3words API
map.getSource('grid').setData(data);
}
}).catch(console.error);
}
// If we have reached the required zoom level, set the 'grid_layer' to be visible
var grid_layer = map.getLayer('grid_layer');
if (typeof grid_layer !== 'undefined') {
map.setLayoutProperty('grid_layer', 'visibility', loadFeatures ? 'visible' : 'none');
}
}
// When the map is either loaded or moved, check to see if the grid should be draw
// if the appropriate zoom level has been met, and if so draw it on.
map
.on('load', drawGrid)
.on('move', drawGrid);
</script>
</body>
</html>
| |
doc_1093
|
The image is saved in the media folder, and when I double-click on that image it opens. When I upload an image from the admin site, the image is displayed in a template.
On the development server, uploading and displaying images works fine. After deploying to the production server I have this issue. Please help me solve it.
settings.py
# managing media
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
Project urls.py
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
Thanks
A: Django doesn't serve static and media files in production; the static() URL helper you are using only adds those URL patterns when DEBUG=True.
You must configure your web server to serve them.
It depends on your production environment but here's a Ubuntu + Django + Nginx tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-20-04
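As a minimal sketch (a hypothetical urls.py, assuming a standard Django layout, not the asker's actual file), a common pattern is to guard the media route so Django only serves it in development, while the web server handles /media/ in production:

```python
# urls.py sketch -- hypothetical project layout
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your normal routes ...
]

if settings.DEBUG:
    # static() already returns [] when DEBUG is False,
    # but the explicit guard makes the intent clear
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```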
| |
doc_1094
|
I already created everything, using subplots() and twinx(). Here's my code:
originalFuncForSingleID = originalFunc[(originalFunc['ID'])==IDVal]
originalFuncForSingleIDResampled = originalFuncForSingleID.set_index('Date').resample('5T').mean().reset_index()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(x=originalFuncForSingleID['Date'], y=originalFuncForSingleID['Value'], use_index=True)
ax2.plot(x=originalFuncForSingleIDResampled['Date'], y=originalFuncForSingleIDResampled['Value'], use_index=True)
ax1.set_xlabel('Date')
ax1.set_ylabel('Value original', color='g')
ax2.set_ylabel('Value resampled', color='b')
plt.rcParams['figure.figsize'] = 12, 5
plt.show()
My result should be the original function's line with the resampled function overlapped, showing the changes and the newly created function. How can I do that? Where am I wrong?
A: Turns out I solved it myself without the multiple axes. Here's the code:
f, ax = plt.subplots(1)
plt.title('Title')
originalFuncForSingleID.plot(kind='line', x='Date', y='Value', color='brown', label='Original', ax=ax)
originalFuncForSingleIDResampled.plot(kind ='line', x='Date', y='Value', color='green', label='Resampled', ax=ax)
plt.legend()
plt.show()
Using this I get the resampled function overlapped on the original one, showing the differences in a different color.
A: I tried your solution and it worked.
The main thing is to specify ax=ax in the plot command:
f, ax = plt.subplots(1)
df1.plot(ax=ax)
df2.plot(ax=ax)
plt.show()
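As a side note, the 5-minute resampling step from the question can be sanity-checked in isolation with hypothetical data (assuming pandas; "5min" is equivalent to the question's "5T"):

```python
import pandas as pd

# Hypothetical data: one value per minute for 10 minutes
idx = pd.date_range("2021-01-01", periods=10, freq="min")
df = pd.DataFrame({"Date": idx, "Value": range(10)})

# Same resampling step as in the question: 5-minute mean
resampled = df.set_index("Date").resample("5min").mean().reset_index()
print(resampled["Value"].tolist())  # [2.0, 7.0]
```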
| |
doc_1095
|
I have made a database (in Access for ease of work) and I'm now looking for a way to compare the date in the DataGridView to today's date. If the date of the last payment is more than 30 days behind, it has to turn red. If it's less than 30 days, it should stay green.
Now I'm wondering how to do that, because I can't seem to get it to work.
Since I'm fairly new to the VB.NET language, I didn't get anything to work. I figured posting any code would be useless since it's all underlined in red and my program won't even run.
I figured it would be something in the style of
If Me.dgv.Columns("1") > 30 Then
    Me.dgv.Row.DefaultCellStyle.Color = "red"
End If
The dates are sorted/displayed in an ascending fashion, not sure if that might help?
Any help would be of great value and I thank you in advance!
A: *
*to check 30 days later you use Date.Now.AddDays(30)
*to check 30 days before you use Date.Now.AddDays(-30)
*to color the cell's forecolor you use something like this:
For Each dr As DataGridViewRow In DataGridView1.Rows
    If CDate(dr.Cells(0).Value) > Date.Now.AddDays(30) Then
        dr.Cells(0).Style.ForeColor = Color.Red
    End If
Next
you can also loop the cells to check all DGV cells for dates and compare them
A: From my experience it is best to handle this in the RowPrePaint event.
Example:
Public Class Test
    Private Shared Rnd As New Random

    Private Sub FormControls_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        'Start Sample
        DataGridView1.Columns.Add(New DataGridViewTextBoxColumn With {
            .Name = "MyDate",
            .ValueType = GetType(DateTime)})
        For i As Integer = 0 To 1000
            'create random dates in the past
            DataGridView1.Rows.Add(Now.AddDays(-Rnd.Next(0, 25)))
        Next
        'End Sample
    End Sub

    Private Sub DataGridView1_RowPrePaint(sender As Object, e As DataGridViewRowPrePaintEventArgs) Handles DataGridView1.RowPrePaint
        Dim DgvRow As DataGridViewRow = DataGridView1.Rows(e.RowIndex)
        With DgvRow
            If DgvRow IsNot Nothing AndAlso DgvRow.Cells(0).Value IsNot Nothing AndAlso DgvRow.Cells(0).Value IsNot DBNull.Value Then
                If Now.Subtract(CDate(DgvRow.Cells(0).Value)).TotalDays > 20 Then
                    DgvRow.DefaultCellStyle.BackColor = Color.Yellow
                Else
                    DgvRow.DefaultCellStyle.BackColor = Color.Empty
                End If
            End If
        End With
    End Sub
End Class
| |
doc_1096
|
I've got a huge directory of folders, and some of their names contain 3 dashes ('---') which I need to find and replace with just one dash.
Is there an easy way to do a find and replace right on my server? I really don't want to have to download all the folders, do it on my desktop, and re-upload them.
A: You could do something like this:
find ./ -name '*' -exec rename 's/---/-/g' {} \;
* HAS NOT BEEN TESTED
src
| |
doc_1097
|
f :: [a] -> [a] -- length (f xs) == length xs
Similarly, I might have a function like g, which accepts two lists that should be of equal length:
g :: [a] -> [a] -> ...
If f and g are typed as above, then run-time errors may result if their length-related constraints are not satisfied. I would therefore like to encode these constraints in the type system. How might I do this?
Please note that I'm looking for a practical framework that may be used in everyday situations, adding as little intuitive overhead to the code as possible. I am particularly interested to know how you would deal with f and g yourself; that is, would you attempt to add the length-related constraints to their types, as asked here, or would you leave them with the types as given above for simplicity of code?
A: The following code is adapted from Gabriel Gonzalez's blog in combination with some information supplied in the comments:
{-# LANGUAGE GADTs, DataKinds #-}
data Nat = Z | S Nat
-- A List of length 'n' holding values of type 'a'
data List n a where
Nil :: List Z a
Cons :: a -> List m a -> List (S m) a
-- Just for visualization (a better instance would work with read)
instance Show a => Show (List n a) where
show Nil = "Nil"
show (Cons x xs) = show x ++ "-" ++ show xs
g :: (a -> b -> c) -> List n a -> List n b -> List n c
g f (Cons x xs) (Cons y ys) = Cons (f x y) $ g f xs ys
g f Nil Nil = Nil
l1 = Cons 1 ( Cons 2 ( Nil ) ) :: List (S (S Z)) Int
l2 = Cons 3 ( Cons 4 ( Nil ) ) :: List (S (S Z)) Int
l3 = Cons 5 (Nil) :: List (S Z) Int
main :: IO ()
main = print $ g (+) l1 l2
-- This causes GHC to throw an error:
-- print $ g (+) l1 l3
This alternative list definition (using GADTs and DataKinds) encodes the length of a list in its type. If you then define your function g :: List n a -> List n a -> ... the type system will complain if the lists are not of the same length.
In case this would (understandably) be too much extra complication for you, I'm not sure using the type system would be the way to go. The easiest is to define g using a monad/applicative (e.g. Maybe or Either), let g add elements to your list depending on both inputs, and sequence the result. I.e.
g :: (a -> b -> c) -> [a] -> [b] -> Maybe [c]
g f l1 l2 = sequence $ g' f l1 l2
where g' f (x:xs) (y:ys) = (Just $ f x y) : g' f xs ys
g' f [] [] = []
g' f _ _ = [Nothing]
main :: IO ()
main = do print $ g (+) [1,2] [2,3,4] -- Nothing
print $ g (+) [1,2,3] [2,3,4] -- Just [3,5,7]
A: The limitation you observe exists because the length information is not part of the list's type; since the type checker is meant to reason about types, you can't specify invariants in your functions unless the invariants are in the types of the arguments themselves, or in typeclass constraints or type-family-based equalities. (There are some Haskell pre-processors, though, like Liquid Haskell, that allow you to annotate functions with invariants like this that will be checked at compilation.)
There are many Haskell libraries that offer list-like data structures with the length encoded in the type. Some notable ones are linear (with V) and fixed-vector.
The interface for V goes something like this:
f :: V n a -> V n a -> V n a
g :: V n a -> V n a -> [a]
-- or --
g :: V n a -> V n a -> V ?? a -- if ?? can be determined at compile-time
Note the particular pattern of our first type signature for g: We take two types where we care about the lengths, and return a type that doesn't care about the length, losing information.
In the second case, if we do care about the length of the result, the length has to be known at compile-time for this to make sense.
Note that V from linear actually doesn't wrap a list, but a Vector from the vectors library. It also requires lens (the linear library, that is), which is admittedly a huge dependency to pull in if all you want is length-encoded vectors. I think the vector type from fixed-vectors does use something more equivalent to a normal haskell list...but I'm not totally sure. In any case, it has a Foldable instance, so you can convert it to a list.
Do remember of course that if you plan to encode lengths in your functions like this...Haskell/GHC has to be able to see that your implementation typechecks! For most of these libraries, Haskell will be able to typecheck things like this (if you stick with things like zipping and fmapping, binding, ap-ping). For most useful cases this is true...however, sometimes your implementation just can't be "proven" by the compiler, so you'll have to "prove" it to yourself in your head, and use some sort of unsafe coercion.
| |
doc_1098
|
Test A, Test B, Prod A, and Prod B
Test A, Test B, and Prod A were set up in SAS Management Console by a different Admin (no longer with the org). All three connect and return data without issue.
I just created Prod B in Server Manager, and our Windows Server Admin created the DSN (based on the other 3 DSNs). I used the Prod A server configuration as a guide for creating Prod B, and made sure the Datasrc points to the newly created DSN.
When running PROC SQL ; SELECT * FROM ____ ; QUIT ; I get "Error: File ____ does not exist" for Prod B. When I run the same script in Test B, I get the expected results. (Similar scripts for Test A and Prod A return results as well.)
I'm not sure where the error is coming from (SAS, Windows, or somewhere else). The table with data exists in both Test B and Prod B Azure DBs, so it is not a missing DB (been asked this already).
Any suggestions are appreciated.
A: The issue has been resolved:
The Default database in the DSN was set to master instead of the PROD_DB. Once this was changed, I was able to connect to the table.
| |
doc_1099
|
And I have their WingtipToys project downloaded.
This led me to do the following (Visual Studio 2013, .NET 4.5):
*
*New Project - ASP.NET Web Application (I named it EmptyWebApp).
a. On the next screen of this Wizard, I selected Empty, and for Add folders and core references for, I ensured that Web Forms, MVC, Web API were all unchecked.
*Added a New Web Form (named WebForm1).
The code in the .cs looks like this:
public partial class WebForm1 : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
throw new Exception("foo");
}
}
*Added a New Web Form (named ErrorPage); I left the default HTML code other than adding "I am an error page!" between the div tags.
The code in the .cs looks like this:
public partial class ErrorPage : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
Server.ClearError();
}
}
*Added a New Global Application Class
The only function with code is this:
protected void Application_Error(object sender, EventArgs e)
{
Exception exc = Server.GetLastError();
Server.Transfer("ErrorPage.aspx");
}
(Yes, exc does nothing right now, but I was planning on using it).
When I compile and run (with breakpoints on Server.Transfer and Server.ClearError in the ErrorPage), I notice that the breakpoint is hit twice.
It does the following:
a. Server.Transfer (in Application_Error)
b. Server.ClearError (in Page_Load of ErrorPage)
c. Server.Transfer (in Application_Error)
d. Server.ClearError (in Page_Load of ErrorPage)
So, it's transfered twice, and the Page Loads twice.
First time I get to the Server.Transfer breakpoint (a.), exc has a value.
Second time I get to the Server.Transfer breakpoint (c.), exc is null.
Commenting out the Server.ClearError, it doesn't load twice.
Placing Server.ClearError right after the Server.Transfer, again, we are back to loading twice.
Removing the Server.Transfer (in Application_Error) and just keeping the Server.ClearError(), and it executes once.
So, some combination of Sever.Transfer and Server.ClearError is causing this.
But, when I run the WingTipToys project, they have a Server.ClearError in their Page_Load, and it doesn't cause Application_Error to get executed twice. I mean, if it's executing twice, is there some error happening (that I can't explain)?
I suppose it's not a big deal, but it gives me pause that my simple project exhibits different run-time behaviors.
Thanks!
|