Form Validation: If data in form contains "VP-", error out
I'm trying to do code that errors out a form validate if the form contains "VP-".
My code is:
// quick order form validation
function validateQuickOrder(form) {
if ((form.ProductNumber.value == "")|| (form.ProductNumber.value == "VP")){
alert("Please enter an item number.");
form.ProductNumber.focus();
return false;
}
return true;
}
What's not working? What's expected and what are you getting?
Should it error if the value IS "VP" or if it contains "VP"? Because you're only checking whether it is literally "VP" there.
I'm expecting to receive the alert, but not receiving anything; the form still submits with no problem using my code above. I want it to display the alert if I enter a product number that contains "VP".
form.ProductNumber.value.indexOf('VP') > -1
@Aidanc - If it contains anywhere in the string.
== does a full string comparison. You'll want to use indexOf to check if it contains that string:
if ( ~form.ProductNumber.value.indexOf('VP') ) {
// ProductNumber.value has "VP" somewhere in the string
}
The tilde is a neat trick, but you can be more verbose if you want:
if ( form.ProductNumber.value.indexOf('VP') != -1 ) {
// ProductNumber.value has "VP" somewhere in the string
}
Just to provide an alternative to the other answer, regex can be used as well:
if ( /VP/.test( form.ProductNumber.value ) ) {
// value contains "VP"
}
Bad input shape () in multi-class classification
I'm performing a multi-class classification task using scikit-learn. In the setup I created, I want to compare different classification algorithms.
I use a pipeline, where text is inserted as X and Y is the class (multi-class, N = 5). Textual features are extracted in the pipeline using TfidfVectorizer().
KNN does the job, but other classifiers give this: ValueError: bad input shape (670, 5)
Full traceback:
"/Users/Robbert/pipeline.py", line 62, in <module>
train_pipeline.fit(X_train, Y_train)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/pipeline.py", line 130, in fit
self.steps[-1][-1].fit(Xt, y, **fit_params)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/svm/base.py", line 138, in fit
y = self._validate_targets(y)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/svm/base.py", line 441, in _validate_targets
y_ = column_or_1d(y, warn=True)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/utils/validation.py", line 319, in column_or_1d
raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape (670, 5)
The code I use:
def read_data(f):
data = []
for row in csv.reader(open(f), delimiter=';'):
if row:
plottext = row[8]
target = { 'Age': row[4] }
data.append((plottext, target))
(X, Ycat) = zip(*data)
Y = DictVectorizer().fit_transform(Ycat)
Y = preprocessing.LabelBinarizer().fit_transform(Y)
return (X, Y)
X, Y = read_data('development2.csv')
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
###KNN Pipeline
#train_pipeline = Pipeline([
# ('vect', TfidfVectorizer(ngram_range=(1, 3), min_df=1)),
# ('clf', KNeighborsClassifier(n_neighbors=350, weights='uniform'))])
###Logistic regression Pipeline
#train_pipeline = Pipeline([
# ('vect', TfidfVectorizer(ngram_range=(1, 3), min_df=1)),
# ('clf', LogisticRegression())])
##SVC
train_pipeline = Pipeline([
('vect', TfidfVectorizer(ngram_range=(1, 3), min_df=1)),
('clf', SVC(C=1, kernel='rbf', gamma=0.001, probability=True))])
##Decision tree
#train_pipeline = Pipeline([
# ('vect', TfidfVectorizer(ngram_range=(1, 3), min_df=1)),
# ('clf', DecisionTreeClassifier(random_state=0))])
train_pipeline.fit(X_train, Y_train)
predicted = train_pipeline.predict(X_test)
print accuracy_score(Y_test, predicted)
How is it possible that KNN accepts the shape of the array and other classifiers don't? And how can I change this shape?
If you compare the documentation for the fit(X, y) method in KNeighborsClassifier and SVC, you will see that only the former accepts y in the form [n_samples, n_outputs].
Possible solution: why do you need LabelBinarizer at all? Just don't use it.
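If you already have the binarized (n_samples, n_classes) matrix and each row marks exactly one class, you can also recover the 1-D label vector that SVC expects with argmax. A minimal sketch (the data here is made up for illustration):

```python
import numpy as np

# One-hot label matrix as produced by LabelBinarizer:
# each row marks exactly one of 3 classes.
Y_onehot = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 1, 0],
])

# Collapse back to a 1-D vector of class indices, shape (n_samples,),
# which is what SVC's fit() validates against column_or_1d.
y = Y_onehot.argmax(axis=1)
```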
I tried to input the labels without binarizing, and got: ValueError: You appear to be using a legacy multi-label data representation. Sequence of sequences are no longer supported; use a binary array or sparse matrix instead - the MultiLabelBinarizer transformer can convert to this format.
If your Y matrix is of size (n_samples, n_classes) and contains at least a single row with more than one non-zero element, then you are solving a multi-label classification problem. If that is the case, the multiclass and multilabel algorithms page in the scikit-learn docs lists KNN as one of the classifiers that supports multi-label classification. You might want to try out other classifiers from that list:
* sklearn.tree.DecisionTreeClassifier
* sklearn.tree.ExtraTreeClassifier
* sklearn.ensemble.ExtraTreesClassifier
* sklearn.neural_network.MLPClassifier
* sklearn.neighbors.RadiusNeighborsClassifier
* sklearn.ensemble.RandomForestClassifier
* sklearn.linear_model.RidgeClassifierCV
Do I need to factory reset to root galaxy note 4 SM-N910V?
I have the Samsung Galaxy note 4 for Verizon. I believe it is the developer edition because the model # is SM-N910V. I am trying to either install CFs auto root or install TWRP recovery and root from there. So far both fail in Odin. here's the log for Odin for CF auto root,
<ID:0/003> Added!!
<OSM> Enter CS for MD5..
<OSM> Check MD5.. Do not unplug the cable..
<OSM> Please wait..
<OSM> CF-Auto-Root-trltevzw-trltevzw-smn910v.tar.md5 is valid.
<OSM> Checking MD5 finished Sucessfully..
<OSM> Leave CS..
<ID:0/003> Odin v.3 engine (ID:3)..
<ID:0/003> File analysis..
<ID:0/003> SetupConnection..
<ID:0/003> Initialzation..
<ID:0/003> Get PIT for mapping..
<ID:0/003> Firmware update start..
<ID:0/003> SingleDownload.
<ID:0/003> recovery.img
<ID:0/003> NAND Write Start!!
<ID:0/003> cache.img.ext4
<ID:0/003> FAIL! (Auth)
<ID:0/003>
<ID:0/003> Complete(Write) operation failed.
<OSM> All threads completed. (succeed 0 / failed 1)
I did a Samsung update the other day and went from NI1 to NJ5; I guess that is the update version. Does updating remove the unlocked bootloader? Do I need to factory reset to be able to install CF-Auto-Root and TWRP?
I'm pretty sure the base model number for both the "developer edition" and the regular version are the same. Assuming you got it new, you would know if you had the developer edition, because you have to buy it full retail price from Samsung (Verizon won't sell it to you). Where did you purchase your device?
I purchased it from Verizon. I was able to tap on the build number seven times under About Phone and unlock the developer menu, so are you sure about that?
If you got it from Verizon it's not the developer edition. You have a locked bootloader. There is currently no known way to root your device. The presence of the development menu is not actually relevant for your purposes - you can use ADB for debugging and such without having a "developer edition" device with an unlockable bootloader. Supporting ADB is a requirement of Google's Android Compatibility Test Suite. Compliance with that is needed in order to ship the device with Google apps.
That link is only for the retail version. The model SM-N910V is supposed to be indicative of the developer model.
No, what I'm saying is that the base model number for both is the same. The one sold by Verizon is the SM-N910VZKEVZW. The developer edition is the SM-N910VMKEVZW. Notice that both have a base model number of SM-N910V. The difference is that the dev edition has the "MKEVZW" part number and the Verizon-sold version is "ZKEVZW". Verizon does not sell the developer edition.
Samsung devices usually have no locked bootloader, so you won't remove that. Official updates do, however, remove any trace of a working root. Your device is nowhere near a developer device, but more a custom device served with a custom firmware of Verizon, noticeable by the V in your model number.
Because of that custom Verizon model it is possible that the bootloader is locked, as you can see in the log where it says <ID:0/003> FAIL! (Auth).
This means you are not authorized to flash anything on your device, which leads me to the assumption that your bootloader is Verizon-specific and therefore locked.
Verizon does indeed lock the bootloader on this device. I'm pretty sure they lock them on all their devices these days, unfortunately.
I got the same problem on a Galaxy S7 edge, and this is how it was solved:
Developer Options - on - OEM unlock - on => run Odin again. Done.
What is the effect of irrelevant reading after study on memory consolidation?
I heard and experienced a little that unconscious mental processes involved in storing memories work really well when you are not thinking about your study subject during rest. However, I'm not sure mental effort such as reading a novel is as effective as physical effort for consolidation of studied information. What is the effect of irrelevant mental exercise on memory consolidation?
Welcome to cogsci.SE! We encourage preliminary research, both to help the asker formulate the question well and to help the reader understand the asker's intentions, so anything that you can cite or more information you can give will improve the answers that you get.
@ucha, I don't think your question is answerable because it builds on a false premise of sorts. Your use of the term "subconscious mind" suggests that your ideas stem from Freudian theory (or a derivative thereof), which is neither current nor scientific. The false premise is therefore that there exists a unitary subconscious mind that can be rested or made to work particularly well. Furthermore, this question borders on self-help, which is considered off-topic on these forums. Are you perhaps asking if there are ways to efficiently study something in particular?
Thanks for the replies. I couldn't explain my problem well because I'm not used to writing and speaking in English, and you're right, I have made a lot of mistakes with scientific terms, but I'm here to learn.
I was wondering about both studying and features of the brain; maybe I can state my problem better next time.
Good day everyone.
@blz Why do you say that the subconscious is not current or scientific? I would say that if there is a Freudian concept that IS scientific, it is probably exactly this notion that we are unaware of most mental processes. See for instance http://ccrg.cs.memphis.edu/assets/papers/TICSarticle2003.pdf.
By the way, @ucha, your question is understandable. I have suggested some edits to make it clearer.
@StrangeLoop, sorry if I was unclear! I meant that Freudian psychology is not current or scientific, not non-conscious processing. "The subconscious" in the Freudian sense is about as inscrutable as it gets (no clear predictions). To be clear, we owe much of modern psychology to Freud's work, but Freudian psych is generally accepted to be unfalsifiable. I just wanted to make sure that the OP wasn't basing his question on id, ego and whatnot.
@StrangeLoop, Just to further clarify my above comment, the issue remains that the OP is referring to a unitary subconscious mind, which strongly insinuates a Freudian interpretation (i.e. "The Subconscious"). Your citation doesn't deal with a unitary subconscious with its own internal goals and behaviours, but with non-conscious processing -- a very important distinction. We can talk about learning in the absence of conscious access, but we can't engage in a scientific discussion about "the subconscious working really well".
@blz I totally agree with you, except that OP's question didn't strike me as necessarily referring to the Freudian conception of "The Subconscious". I guess only OP can tell us that!
First off, a citation on the neurochemical effects of exercise beneficial to memory consolidation. Certain activities (post-study) that modulate neurotransmitter systems have generally been shown to affect how well the brain stores memories, such as caffeine consumption. Regarding the topic of your question, i.e. the effect of mental exercise on memory consolidation, it seems that post-study mental exertion has a negative effect on consolidation. This work cites a number of studies which generally show something along the lines of: "wakeful rest provided a more favorable condition for memory consolidation than the non-verbal cognitive task." Generally speaking, memory consolidation seems to work best under the influence of specific neurotransmitter release (induced by physical activity, for instance) or rest. After all, one main theory of why we sleep is that it is for the purpose of storing memories.
How to send a simple command to a Windows program on another machine
I have a program on a Windows machine that I VNC into. It is a normal Windows program with a GUI written in Delphi. After I VNC into the machine, I can select the program and hit Alt+S and then F to execute the command I want.
So, is there a simple way to send a command to this machine?
If you can VNC to the machine, you can look at Sysinternals' PsExec. It's a command-line exe that allows you to send and capture the output of remotely executed commands.
Yeah, it's great how you can capture the output of programs to your remote workstation.
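For illustration, a typical PsExec invocation looks something like this (the host name, credentials, program path, and argument are placeholders, not taken from the question):

```
:: Run a program on a remote Windows machine and capture its console output.
psexec \\REMOTEPC -u DOMAIN\user -p password "C:\MyApp\app.exe" /mycommand
```

Note that PsExec drives console programs well; clicking through a GUI (like the Alt+S, F sequence in the question) would still need a UI-automation tool on the remote side.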
Angular 2: How to make a setter wait for an input setter to complete
Hi, I have a problem with that. In my component I have an input setter like this:
@Input() set editStatus(status: boolean) {
this.savedEditStatus = status;
if (this.savedEditStatus === true && this.getTrigg === true) {
this.getDataFromFlex("SHOWS_L").then((data: any) => {
this.getTrigg = false;
this.lengthDataOptions = data;
})
} else {
this.getTrigg = true;
this.lengthDataOptions = undefined;
}
};
My problem is that I get two edit-status changes with the value true, and they arrive at almost the same time. So right now, in this case, the getDataFromFlex function will be called twice. I don't want the second call, so I thought the getTrigg boolean would be a solution, but it doesn't work. So I need a bit of help from you guys.
getTrigg is set to true by default on component initiation.
I recommend you to use the ngOnChanges interface for this kind of communication with external components. This ensures that you have the expected value to do what you need after it changes:
import { Component, OnInit, Input, OnChanges, SimpleChange } from '@angular/core';
@Input() savedEditStatus: boolean; // change to field not getter / setter
ngOnChanges(changes: { [propKey: string]: SimpleChange }) {
if (changes['savedEditStatus'] && changes['savedEditStatus'].currentValue === true &&
this.getTrigg === true) {
this.getDataFromFlex("SHOWS_L").then((data: any) => {
this.getTrigg = false;
this.lengthDataOptions = data;
})
} else {
this.getTrigg = true;
this.lengthDataOptions = undefined;
}
}
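The underlying fix is simply "react only when the value actually changes". A framework-free sketch of that idea (the names are illustrative, not Angular API):

```javascript
// Wrap a callback so that consecutive duplicate values are ignored.
function makeChangeHandler(onChange) {
  let last; // undefined until the first call, so the first value always fires
  return function (value) {
    if (value === last) return false; // duplicate: skip the expensive work
    last = value;
    onChange(value);
    return true;
  };
}
```

In ngOnChanges the same guard is available for free by comparing SimpleChange.previousValue with currentValue.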
Why did you use === instead of simply "changes.savedEditStatus.currentValue" and "this.getTrigg"? And, of course, in some cases (one single input) you could do something like:
ngOnChanges ({ firstChange }: SimpleChanges){
if(!firstChange) // instead of use currentValue
}
And I recommend you to use ES8 async/await instead of ugly promises.
@DenKerny this question is from 2017, angular and javascript have changed a lot in 3 years. === means type check in javascript
I took that into account; it's ES6 (destructuring) / ES7 (async functions) syntax. === is redundant, I just asked.
How can we read a blurred image using pytesseract?
I am using pytesseract OCR to read the text of the image, but I am getting a false output for blurry images like the one shown below.
How can I make the above image, or such images, OCR-ready?
I'd recommend retaking the photographs if possible, but I doubt that's possible if you're asking this... lol
Haha, what I mean is: how can I sharpen a blurry image like the one above (changed the image) so it is readable for pytesseract?
I think this helps. Explained very clearly.. https://stackoverflow.com/a/54465855/11049657
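The linked answer boils down to standard preprocessing (grayscale, sharpening, thresholding) before handing the image to pytesseract. Here is a NumPy-only sketch of the sharpening step; in practice OpenCV's cv2.filter2D applies the same kernel much faster:

```python
import numpy as np

# Classic 3x3 sharpening kernel: boosts the center pixel relative to its
# 4-neighbours. The kernel sums to 1, so flat regions are left unchanged.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)

def sharpen(gray):
    """Sharpen a 2-D uint8 grayscale image; returns uint8 of the same shape.

    Border pixels see zero padding, so they are boosted against the dark
    border and may clip to 255.
    """
    padded = np.pad(gray.astype(float), 1)
    out = np.empty(gray.shape, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * KERNEL).sum()
    return np.clip(out, 0, 255).astype(np.uint8)
```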
An unhandled exception occurred: ENOENT: no such file or directory, lstat 'E\CoffeShop\node_modules' See "C:\aangular-errors.log" for further details
An unhandled exception occurred: ENOENT: no such file or directory, lstat 'E:\Angular_April2021\Angular*node_modules*'See "C:\Users\Srisylam\AppData\Local\Temp\ng-ul0h3k\angular-errors.log" for further details.
You need to remove the leading slash before node_modules, i.e. use:
"node_modules/bootstrap/dist/css/bootstrap.min.css".
For npm install bootstrap@3
This does not work: "./node_module/bootstrap/dist/css/bootstrap.min.css" (note the missing "s" in node_modules).
This works:
"./node_modules/bootstrap/dist/css/bootstrap.min.css",
Also, running the command above, npm install bootstrap@<your version>, will work.
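In context, these paths live in the styles array of angular.json. A minimal sketch of that section (the second entry is an illustrative default, not from the question):

```json
"styles": [
  "node_modules/bootstrap/dist/css/bootstrap.min.css",
  "src/styles.css"
]
```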
How to get output from class after input all values
from tkinter import *
app = Tk()
app.title("First Software")
app.minsize(500, 500)
#For Input Declaration
UserName = StringVar()
pass1 = IntVar()
class Functions:
def username(self, UserName):
self.UserName = Username
print(UserName.get())
def password(self, pass1):
self.pass1 = pass1
print(pass1.get())
#---Label---
label = Label(text='Enter any Number between 1 and 100...').pack()
#---Entry---
entry = Entry(app, textvariable=UserName).pack()
entry1 = Entry(app, textvariable=pass1).pack()
#---Buttton---
button = Button(text='Submit', command=Functions).pack()
app.mainloop()
After running this code, how can I get separate input of the username and password using this method? I'm a little bit confused right now!
You will have to create two input fields for them
@ThatBird The OP has created 2 Entry widgets to receive the inputs.
Oh yes OP has indeed created them
command=Functions tells Tkinter to create an instance of your Functions class when the Submit button is pressed. But it doesn't tell it to call the functions defined inside Functions. However, that class definition is a bit odd, and you don't really need a class to contain those functions. Instead, we can write a simple function that gets the values of UserName and pass1 and prints them.
I've made a few other minor changes to your code.
import tkinter as tk
app = tk.Tk()
app.title("First Software")
#app.minsize(500, 500)
#For Input Declaration
UserName = tk.StringVar()
pass1 = tk.IntVar()
def print_entries():
print('User name:', UserName.get())
print('Password:', pass1.get())
tk.Label(app, text='User name.').pack()
entry = tk.Entry(app, textvariable=UserName)
entry.pack()
tk.Label(app, text='Password\nEnter any Number between 1 and 100').pack()
entry1 = tk.Entry(app, textvariable=pass1)
entry1.pack()
tk.Button(app, text='Submit', command=print_entries).pack()
app.mainloop()
This code doesn't actually need the names entry and entry1, but I kept them in case you want to add code that does need to refer to those widgets by name.
Note that you should not do
entry = Entry(app, textvariable=UserName).pack()
The .pack method returns None, so the above statement assigns the value of None to entry. Instead, we do
entry = tk.Entry(app, textvariable=UserName)
entry.pack()
That assigns the new Entry widget to the name entry and then calls its .pack method to pack it into the window.
If you like, you can wrap that code up in a class, but I wouldn't bother with a simple GUI like this. But this is one way to do it:
import tkinter as tk
class App:
def __init__(self):
root = tk.Tk()
root.title("First Software")
#root.minsize(500, 500)
#For Input Declaration
self.UserName = tk.StringVar()
self.pass1 = tk.IntVar()
tk.Label(root, text='User name.').pack()
entry = tk.Entry(root, textvariable=self.UserName)
entry.pack()
tk.Label(root, text='Password\nEnter any Number between 1 and 100').pack()
entry1 = tk.Entry(root, textvariable=self.pass1)
entry1.pack()
tk.Button(root, text='Submit', command=self.print_entries).pack()
root.mainloop()
def print_entries(self):
print('User name:', self.UserName.get())
print('Password:', self.pass1.get())
App()
A more efficient way of getting the nlargest values of a Pyspark Dataframe
I am trying to get the top 5 values of a column of my dataframe.
A sample of the dataframe is given below. In fact the original dataframe has thousands of rows.
Row(item_id=u'2712821', similarity=5.0)
Row(item_id=u'1728166', similarity=6.0)
Row(item_id=u'1054467', similarity=9.0)
Row(item_id=u'2788825', similarity=5.0)
Row(item_id=u'1128169', similarity=1.0)
Row(item_id=u'1053461', similarity=3.0)
The solution I came up with is to sort all of the dataframe and then to take the first 5 values. (the code below does that)
items_of_common_users.sort(items_of_common_users.similarity.desc()).take(5)
I am wondering if there is a faster way of achieving this.
Thanks
You can use RDD.top method with key:
from operator import attrgetter
df.rdd.top(5, attrgetter("similarity"))
There is a significant overhead in DataFrame-to-RDD conversion, but it should be worth it.
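Conceptually, RDD.top keeps a bounded heap of the k largest elements per partition and merges the partial results, which is why it avoids sorting the whole dataset. Locally, the same idea is Python's heapq.nlargest; a sketch using the sample rows from the question:

```python
import heapq

# (item_id, similarity) pairs, matching the sample rows above.
rows = [
    ("2712821", 5.0),
    ("1728166", 6.0),
    ("1054467", 9.0),
    ("2788825", 5.0),
    ("1128169", 1.0),
    ("1053461", 3.0),
]

# Top 5 by similarity without sorting the whole list.
top5 = heapq.nlargest(5, rows, key=lambda r: r[1])
```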
How to convert a BasicDBObject to a Mongo Document with the Java Mongo DB driver version 3?
In the Java MongoDB driver version 3 the API has changed compared to version 2, so code like this no longer compiles:
BasicDBObject personObj = new BasicDBObject();
collection.insert(personObj)
A collection insert now works only with a Mongo Document.
Dealing with the old code, I need to ask:
What is the best way to convert a BasicDBObject to a Document?
You may want to see this
We can convert a BasicDBObject to a Document in the following way:
public static Document getDocument(DBObject doc)
{
if(doc == null) return null;
return new Document(doc.toMap());
}
since Document itself is an implementation of Map<String, Object>,
and a BasicDBObject can be passed in as a DBObject, because BasicDBObject is an implementation of DBObject.
@Black_Rider for you too
I think the easiest thing to do is just change your code to use a Document as opposed to BasicDBObject.
So change
BasicDBObject doc = new BasicDBObject("name", "john")
.append("age", 35)
.append("kids", kids)
.append("info", new BasicDBObject("email", "<EMAIL_ADDRESS>").append("phone", "876-134-667"));
To
import org.bson.Document;
...
Document doc = new Document("name", "john")
.append("age", 35)
.append("kids", kids)
.append("info", new BasicDBObject("email", "<EMAIL_ADDRESS>").append("phone", "876-134-667"));
and then insert into the collection
coll.insertOne(doc);
You'll need to change other bits of code to work with MongoDB 3+
MongoDatabase vs. DB
MongoCollection vs DBCollection
The Document is very similar to the BasicDBObject. I am not quite sure what you are referring to as a way to convert BasicDBObjects to Documents, but the Document object provides some very useful methods.
Document.parse(string) will return a Document if you feed it in a JSON string.
Document.append("key", Value) will add to the fields of a Document.
As for the code in your post, this should compile with version 3:
Document personObj = new Document();
collection.insertOne(personObj)
See
Java Driver 3.0 Guide
and
MongoDB Java Driver 3.0 Documentation
What if I receive a BasicDBObject from an old method and then want to insert it?
You could do Document doc = Document.parse(YourBasicDBObject.toString());
@SuppressWarnings("unchecked")
public static Document getDocument(BasicDBObject doc)
{
if(doc == null) return null;
Map<String, Object> originalMap = doc.toMap();
Map<String, Object> resultMap = new HashMap<>(doc.size());
for(Entry<String, Object> entry : originalMap.entrySet())
{
String key = entry.getKey();
Object value = entry.getValue();
if(value == null)
{
continue;
}
if(value instanceof BasicDBObject)
{
value = getDocument((BasicDBObject)value);
}
if(value instanceof List<?>)
{
List<?> list = (List<?>) value;
if(list.size() > 0)
{
// check instance of first element
Object firstElement = list.get(0);
if(firstElement instanceof BasicDBObject)
{
List<Document> resultList = new ArrayList<>(list.size());
for(Object listElement : list)
{
resultList.add(getDocument((BasicDBObject)listElement));
}
value = resultList;
}
else
{
value = list;
}
}
}
resultMap.put(key, value);
}
Document result = new Document(resultMap);
return result;
}
Let $A$ be a PID, $M$ an injective finitely generated module. Prove that $M = 0$.
Help!
Let $A$ be a PID, $M$ an injective finitely generated module.
Prove that:
$$ M = 0.$$
People will be much more motivated to help if you indicate what you have tried. In particular when it is phrased like this (imparative is usually not well-received).
Also the statement looks wrong to me. Take $A$ a field, then any finitely generated vector space is injective, if I am not mistaken.
To avoid close votes, take a look here for tips on how to ask a good question, and more particularly here for providing extra content such as what you understand and have tried, and here for avoiding "I haven't got a clue!" questions.
As $A$ is a PID and $M$ is finitely generated, by the structure theorem of finitely generated modules over a PID we have that
$$M\cong A^r\oplus\bigoplus_{i=1}^k A/(p_i^{\alpha_i})$$
for some $r,\alpha_1,\dots,\alpha_k\in \mathbb{Z}_{\geq 0}$ and some prime elements $p_i\in A$. So you may assume that $M$ is equal to the module on the right.
Now, an injective module must be divisible: for the proof see here and just change $\mathbb{Z}$ to $A$ accordingly (in fact, over a PID a module is injective iff it is divisible, but this is not used here).
Hence, for each nonzero $a\in A$ the map
$$\begin{align*} a\cdot:M&\rightarrow M \\ f&\mapsto a\cdot f \end{align*}$$
is surjective.
This implies that $\alpha_i=0$ for each $i$: otherwise the map $p_i\cdot$ wouldn't be surjective, since every element in its image has $i$-th coordinate lying in $p_i\left(A/(p_i^{\alpha_i})\right)$, which is a proper submodule of $A/(p_i^{\alpha_i})$.
Hence, $M=A^r$. If $r=0$ we are done, so assume $r\geq 1$. But if multiplication by a nonzero $a$ is surjective, then $a$ is invertible (look at the first coordinate of a preimage of $(1,0,\cdots,0)$). Hence every nonzero element of $A$ is invertible and $A$ is a field.
So your result is true only if you assume that the ring is not a field, as noticed by Severin Schraven in the comments.
What is the difference between "could not get to" and "did not get to"?
Take, for example, this sentence:
Away from Vatican City, Rome was quiet as authorities ordered all public offices and schools to close (close), and banned (ban) cars from the roads. Millions of Catholics who ______________ (not get) to Italy, bid (bid) him farewell in myriad services round the globe.
What's the grammatically correct form of the concept "to not get" in this context, to fill in the blank and complete the sentence?
The word must be derived from "get", or the phrase must include "get", in order to be consistent with the pattern that the words/phrases preceding the bracketed words are derivations of those bracketed words.
Probably could not, but could easily be did not. Either would be grammatically correct, and the difference would come down to the particular shade of meaning the author desired to express.
"Millions of Catholics who were unable to get to Italy" also fits.
Thanks for clearing up my doubt, Robusto and Josh61. Mari-Lou A, I have a question for you. Upon discussing the answers of Robusto and Josh61 with my friend, he agreed that using "could not" is more appropriate in this scenario for the reasons they both mentioned, but he told me not to use "unable to" as it is not a modal verb. So, could you please shed light on this? When I asked him the reason behind it, he explained it in a manner I did not understand!
"Could not" means that there was some reason why the activity was not possible. "Did not" means that either the people "could not", or for some other reason they chose to not undertake the activity.
Thanks Hot Licks,Mari Lou A and Josh61 for clearing my concept!
Frankly, the question is not entirely clear. I see there are unrelated ideas in the question(s).
Just replace 'get to' by 'reach': 'could/did not reach'. What is the problem? This is very basic.
@EdwinAshworth the exercise requires that the verb "get" is used. To answer Ranajoy Saha's question; if I say I can get to work, it can mean one of two things 1) It is "possible" for me to get there 2) I am "capable" of getting there (I have the ability). The verb to able to expresses ability. So, I am able to get to work is the same meaning as 2).
For the benefit of @Mari-Lou A: Just see how replacing 'get to' with 'reach' makes the question simpler. Answer this question, and the corresponding question becomes clear. But thank you for pointing out that this is homework. Oh, and if we're being pedantic, unable to get doesn't fulfil the homework's "to not get" requirement. Or are there different rules for different posters?
@EdwinAshworth I didn't say it did, I said it also fitted, as an alternative answer. I never said your solution was incorrect. But you asked "What is the problem?" and I gave you an answer. I didn't mean to sound rude.
And I'm pointing out that (1) the basic question here seems to be differentiating between 'could not' and 'did not' and (2) as such this isn't ELU material.
The difference is similar to that of can and may. Saying to someone "Sorry, I could not get to that" is like saying "I can't get to it because I am physically or mentally unable to." If you "Did not get to that" it means you most likely can, just were busy in the mean time with no time to do said task. "I'm sorry, I did not get to that. I'll do it right now."
I hope this answers your question!
Comments by Josh61 notwithstanding, to get to X normally implies to reach a target [location, situation, condition]. This implication is even stronger in negated constructions, where to not get to X strongly implies there was a failed attempt to reach X. Consider...
1: I did not get to the party last night
2: I did not make it to the party last night
3: I did not go to the party last night
...where #1 and #2 are largely equivalent (I wanted and/or made some attempt to attend the party, but for some reason I wasn't able to be there). #3 carries no such implications, which illustrates a clear distinction between get and go in such constructions.
Therefore, although in general there's a difference between did not ("neutral", implying nothing about desirability or possibility) and could not ("loaded", implying being prevented from doing something, usually desirable), in OP's specific context there's really little to choose between the two, because of the way to get to X is normally understood.
I see significant difference. Maybe it's just me.
@Kris: Are you seriously saying you don't see much difference between "We didn't get to the funeral" (we'd have liked to, but it wasn't possible) and "We didn't go to the funeral" (perhaps because we never wanted to be there anyway)?
I do see a difference, did I say I don't? :)
@Kris: I assume you meant you do see a difference between did not and could not. My entire argument here is based on the premise that to get to X (as opposed to to go to X) already implies a "desire" to end up at X, particularly in negated contexts. Therefore to my mind it makes no real difference whether OP explicitly reinforces the "frustrated desire" element using could, since it's effectively just repeating the implication already made by choosing to use get to rather than go to.
Creating triggers without SUPER privilege
I am trying to create a trigger in a MySQL database. However, I'm not allowed to:
#1419 - You do not have the SUPER privilege and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)
However, I can't enable said variable either, because I need the SUPER privilege for that too...
I've been looking for solutions for hours, but I cannot get it to work. I have tried the following to add the super privilege:
GRANT SUPER ON *.* TO <EMAIL_ADDRESS> IDENTIFIED BY '...';
Result:
#1045 - Access denied for user '...'@'%' (using password: YES)
In terms of configuration files: It seems like I can only view the config.inc.php file. I cannot edit it either:
Cannot load or save configuration
Please create web server writable folder config in phpMyAdmin top level directory as described in documentation. Otherwise you will be only able to download or display it.
What am I expected to do?
You should use the root user to grant SUPER privilege.
@Barmar is right; if you don't have root access, contact your DBA.
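For reference: once you do have access to an account with SUPER (e.g. root, or via your hosting provider's DBA), the variable named in the error message can be enabled like this (note that it relaxes a safety check related to binary logging, as the error itself warns):

```sql
-- must be run as root or another account with the SUPER privilege
SET GLOBAL log_bin_trust_function_creators = 1;
```

After that, the non-privileged account should be able to create the trigger without SUPER.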
| common-pile/stackexchange_filtered |
JNI global reference to PacketWriter in the smack library: how to identify the cause of the JNI global reference?
I am using the smack library to connect to the Facebook XMPP server.
In my local environment, in debug mode, a memory leak sometimes happens. I have:
checked the heap dump,
found that the JNI global reference always referred to a PacketWriter object in the smack library,
PacketWriter contains one thread to do output work
Question:
What caused the JNI global reference? (Of course, no JNI is used on the server.)
Is the JNI global reference caused by the debug mode of the server?
I already checked the question: How to identify the cause of a JNI global reference memory leak?.
This is smack source code view:
Update
The heap dump is very big, so I post a screenshot of the VisualVM heap view:
http://sphotos-a.ak.fbcdn.net/hphotos-ak-ash3/21927_384581804957834_1241962037_n.jpg
Can you post the heap dump somewhere?
possible duplicate of How to identify the cause of a JNI global reference memory leak?
| common-pile/stackexchange_filtered |
wp_enqueue_script and wp_register_script in theme not working
I am trying to use a self written script for the jQuery library in my self-made theme, but it's not working. When I simply use script-tags, everything is alright. So it has to be something with wp_enqueue_script and wp_register_script.
In the beginning of my code in the index.php I have
<?php
wp_enqueue_script('jquery');
wp_register_script( 'my_name', 'path/to/my/script/script.js' );
wp_enqueue_script('my_name');
?>
I know that normally that has to be written BEFORE wp_head(); but because I am still developing everything, I haven't split my code into various files, I am working straight in the index.php. Do I have to do any other steps before enqueueing the scripts?
You want to use wp_enqueue_scripts.
add_action( 'wp_enqueue_scripts', 'wpse_3810' );
function wpse_3810() {
wp_enqueue_script( 'jquery' );
wp_register_script( 'my_name', 'path/to/my/script/script.js' );
wp_enqueue_script( 'my_name' );
}
Yes, but technically, no earlier than init. Using wp_enqueue_scripts is probably more semantically correct.
Doing enqueue on init is wrong and myth, long propagated by incorrect info in Codex. wp_enqueue_scripts is correct hook to use for that.
Oh, good to know. I've changed the code appropriately.
There is a 'wp_enqueue_scripts' hook for this, 'init' is usually not the right choice.
I would recommend you put your code in functions.php as that would be loaded before your index.php
You should also define dependencies so that they are loaded in the correct order.
<?php
//wp_register_script( $handle, $src, $deps, $ver, $in_footer );
wp_register_script( 'my_name', 'path/to/my/script/script.js', array('jquery') );
?>
See: wp register script
| common-pile/stackexchange_filtered |
Controlling different clocks with switches in VHDL
How can I control 2 different clocks? I wrote clk1Hz<=newclock or newclock2; so that I would be able to control it by choosing which one of them is '1'. However, it gives me an "unexpected identifier" error on that line (clk1Hz<=newclock or newclock2;). I have no idea why I get such an error there.
entity Top_module is
port( clk : in std_logic;
input: in std_logic_vector(2 downto 0);
reset: in std_logic;
output: out std_logic_vector(7 downto 0)
);
end Top_module;
architecture Behavioral of Top_module is
component Clockdivider
port( clock : in std_logic;
newclock : out std_logic;--for 1/4sec
newclock2: out std_logic--for 1/20 sec
);
end component;
signal state :std_logic_vector(3 downto 0):= "0000";
signal clk1Hz : std_logic;
clk1Hz<=newclock or newclock2;
begin
comp1: Clockdivider port map(clk, clk1Hz);
process(clk1Hz,reset)
begin
if(rising_edge(clk1Hz)) then
if newclock='1' then --this is for 1/4 sec option
...
elsif newclock2='1' then
...
Your mapping of Clockdivider's ports does not correspond to its declaration. It has one input and two outputs. The port mapping should go like this:
signal newclock : std_logic;
signal newclock2 : std_logic;
.......
comp1: Clockdivider port map(clk, newclock, newclock2);
Your clk1Hz signal is assigned inside the architecture head; move it after begin (into the architecture body).
But your design has more drastic problems:
An OR gate is not a selection circuit; that would be a multiplexer.
But you cannot multiplex two clock signals with normal logic.
Which test platform are you using?
ASIC/FPGA, Xilinx/Altera/..., Spartan/Virtex/.../Cyclon/Stratix/... ?
For example: If you are using a Xilinx FPGA device, then you should use a BUFGCTRL to switch two clock signals. This is a very advanced technique, so I would suggest to review your design and use a solution without clock-multiplexing :)
| common-pile/stackexchange_filtered |
Maven site in gradle
Is there a plugin for Gradle which can generate something like the Maven site? It would be great if it were compatible with the current Maven site APT format files.
It seems that there are two plugins, this and this. The first one was committed four years ago, I know nothing about the second. So it seems that these plugins will not be helpful.
Thanks, it is not that helpful, but I don't find any answer also in the google.
Unfortunately it won't be that helpful since there's no helpful answer here ;]
Just a sidenote: I'm the author of the 1st one mentioned here. It was originally driven by a presentation about writing Gradle plugins, and I personally never used it in production.
I just wrote one as part of Gradle Fury. The primary plugins (it's really just a collection of scripts) in Gradle Fury enhance/fix many of Gradle's shortcomings around publishing, Android stuff, POM stuff, etc. Since there's pretty much no standard way to do most things in Gradle, we jam most of those configurations into the gradle.properties file. That said, the site plugin does depend on those settings to correctly stylize the site.
In short, to apply to your project...
put this in your root build.gradle file
apply from: 'https://raw.githubusercontent.com/gradle-fury/gradle-fury/master/gradle/site.gradle'
Next edit your gradle.properties file and use this link as a template for your pom settings....
https://github.com/gradle-fury/gradle-fury/blob/master/gradle.properties
Create a src/site/ directory.
Make a src/site/index.md file as your home page
Copy/clone following files/folders from https://github.com/gradle-fury/gradle-fury/tree/master/src/site
css
images
img
js
template.html
Finally, build the site with gradlew site. Default output is to rootDir/build/site
Don't like the way it looks? (it looks like the Apache Fluido theme from the maven site plugin). Just edit template.html to meet your needs.
I'm currently working on a mechanism to bootstrap the site plugin which will remove a few of these steps, but that's what it is right now. More info and complete feature list is at the wiki
One last note: you should run gradlew site after all of your check tasks, but it's not wired up to depend on them. Basically, anything that produces reports for your modules should be run before site, since it's aggregated into the site package, including javadocs and much more. The rest of the fury scripts help automate many of the painful configuration steps. Worth checking out (see the quality and maven-support plugins).
Disclaimer: I'm one of the authors.
Edit: site preview: http://gradle-fury.github.io/gradle-fury/
Edit: We just cut an updated version that makes manual creation of src/site and all the copy/clone tasks from the master repo unnecessary. Just run gradlew site while internet connected and it'll do the rest for you.
It looks like you put a lot of effort into this plugin, but that it "died" after Gradle 4.1. What is the current status, and if it has been superseded by something else, then what is that?
well i just stopped using gradle for a number of reasons and i haven't had a whole lot of people interested in it nor in helping maintain it. The biggest issue was that newer versions of gradle broke artifact publishing. Regardless, the gradle site stuff should still work with newer versions
| common-pile/stackexchange_filtered |
Prove the triangle inequality only using the inverse triangle inequality.
I know the easy way to show the inverse triangle inequality using the triangle inequality. As far as I'm aware, the two are equivalent, so I was wondering how to prove the triangle inequality using only the inverse triangle inequality. I tried using cases and some substitutions but I didn't get far :/
Thanks for your help<3
$|x| = |(x+y) + (-y)| \geq |x+y| - |y|$.
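Spelling out that hint, with the inverse triangle inequality taken in the form $|u| - |v| \le |u - v|$:

```latex
Apply the inverse triangle inequality with $u = x + y$ and $v = y$:
\[
  |x+y| - |y| \;\le\; |(x+y) - y| \;=\; |x| ,
\]
and rearranging gives the triangle inequality
\[
  |x+y| \;\le\; |x| + |y| .
\]
```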
@JitendraSingh: That question is the opposite of what's being asked here.
@JitendraSingh: Apparently not, since they already said “I know the easy way to show the inverse triangle inequality using the triangle inequality”, which is what that question was about. But never mind...
| common-pile/stackexchange_filtered |
CMFCRibbonStatusBar and menu prompts
I have a VS2015 application based around a ribbon bar with numerous context menus. The issue is how to get the legacy menu item prompts to route to the status bar derived from CMFCRibbonStatusBar.
Pre ribbon bar, these prompts were transparently mapped to CMFCStatusBar. Advice on how to activate/re-establish this facility under the ribbon bar environment would be appreciated.
I don't know, but I can advise you to put a breakpoint on CFrameWnd::GetMessageString and try to follow the code.
CFrameWnd::GetMessageString() provided good advice and showed the messages being picked up within MFC but not displayed. Consequently, I reverted CMFCRibbonStatusBar to CMFCStatusBar and all worked fine again.
I think you would need to implement the SetWindowText method on your "status bar derived from CMFCRibbonStatusBar" class.
See my answer on http://stackoverflow.com/a/42957422/383779
| common-pile/stackexchange_filtered |
Migrating only selected accounts to new machine
I am working on upgrading the hardware used for our mail/ftp server from an old 32 bit platform to a much newer 64 bit platform. Both are running Ubuntu 16.04.03. Getting all of the appropriate packages installed on the new system has been accomplished. Now I need to transfer the USER accounts/groups ONLY to the new machine. I don't want to just copy the old system's passwd, shadow, group and gshadow files because many of the uid/gid numbers are different on the new system. Once I get the user accounts migrated, I will start on moving all package settings (not looking forward to that!)
My question has several parts:
Is it "safe" to copy individual records from the old files into the new ones?
Is there a better way to do what I need than manually copying each individual record?
Are the four files I listed the only ones I need to modify, or are there others?
What purpose do the files named like 'gshadow-' serve?
EDIT: Perhaps I should add that I am currently transferring the entire /home folder tree from the old system to the new, which is why I want to keep all of the existing uid/gid values. Luckily, they are all well above those created by installing packages so the user's uid/gid values don't conflict with anything on the new machine. I only have about a dozen, but they are virtually computer illiterate, so I am not allowed to either change their passwords nor tell them to provide me with new ones. That's why I need to transfer their existing records across.
I would have just cloned the drive and moved the image, but I wanted to use the additional memory that moving to 64 bit allowed.
EDIT2: It appears that there is a set of vi tools (vipw and vigr) that may be used to manually edit the files -- if I can only figure out how to use vi enough to do that. Sigh. The "vi way" has always been utterly, totally alien to me, to the point that it is even difficult to comprehend the documentation and tutorials. Hopefully, I can just use an editor I understand and then have vi delete everything and paste in the entire updated file content.
I would myself use the generic migration methods (example: https://www.cyberciti.biz/faq/howto-move-migrate-user-accounts-old-to-new-server/ ) and renew the user id on the new server if you want to (example script to change user id: https://pastebin.com/2Hfm4VgK )
Thanks for the link to the faq, it is similar to what I tried but much more automated.
Is it "safe" to copy individual records from the old files into the
new ones?
Apparently, since my system is still working and all migrated accounts are now accessible.
Is there a better way to do what I need than manually copying each
individual record?
Probably, the link provided by @Rinzwind (Move or migrate user accounts from old Linux server to a new Linux server | nixCraft) shows how to use command line tools to automate the user account transfer ... mostly. (smile)
Are the four files I listed the only ones I need to modify, or are
there others?
It seems that group, gshadow, passwd and shadow are the only files that need modifying, though any other user-specific things such as home and mail folders would also need to be transferred.
What purpose do the files named like 'gshadow-' serve?
I did not figure this one out, but I believe they are backups of the previous version. I asked because I was concerned that they might, somehow, be used to ensure the integrity of the shadow files to protect them from manual modifications, but they are not used for that.
To make my changes, I used the WinMerge tool on my Windows desktop to compare the old/new files and selectively move only the lines I needed across from old to new. Then I used the sudo vipw/vigr commands to edit the files on the new system. Surprisingly, I was asked what editor I wanted to use when I started the first one, so I picked nano, which I understand enough to delete the old and paste the entire modified content into. I rebooted after changing all four and the migrated user accounts are working.
| common-pile/stackexchange_filtered |
memory error, but only on device running ios 8
My app running on an iPhone 4 iOS 8.3 gets this error:
2016-06-26 19:09:22.587 Skyline Flora[4498:949043] *** Terminating app
due to uncaught exception 'RLMException', reason: 'mmap() failed:
Cannot allocate memory size: 671088640 offset: 0'
*** First throw call stack:
(0x29cd5d67 0x37534c77 0x2318db 0x211d03 0x21231b 0x212c41 0x2113fd
0x15354f 0x153133 0x152fbf 0xbea6b 0x2d17a705 0x2d2245a5 0x2d2244cd
0x2d223a51 0x2d22378b 0x2d2234f1 0x2d223489 0x2d177c1f 0x2cba2f65
0x2cb9e951 0x2cb9e7d9 0x2cb9e1c7 0x2cb9dfd1 0x2d3dba5d 0x2d3dc7f5
0x2d3e6c39 0x2d3dac2b 0x304470e1 0x29c9c60d 0x29c9b8d1 0x29c9a06f
0x29be7981 0x29be7793 0x2d1deb87 0x2d1d9981 0xbfa1b 0x37ad0aaf)
libc++abi.dylib: terminating with uncaught exception of type
NSException
It only happens on the device, it's fine in the simulator (Xcode 7.3).
The phone has 18GB free when the app is run.
This error has shown up in the past, as can be found easily with a search, but on writes; this app only reads the database, never writes.
There's no problem on iOS 9 devices.
What's the next thing to check?
Is the phone in question an iPhone 4 or iPhone 4S? The iPhone 4 doesn't run anything past iOS 7, as far as I know.
My typo - it's a 4S
The Realm file is mmaped whether it is opened as read-only or read-write. Unfortunately, this is probably an issue inherent to the resource constraints of the iPhone 4/4S. We've seen issues with mmap ranging from files as small as 300 MB, depending on the device.
You can check out this Github issue for some potential workarounds.
OK, thanks. Given that it only happens on iOS 8 devices, and iOS 10 being imminent, it doesn't make sense to go through a whole lot of dev and testing for a small set of devices.
| common-pile/stackexchange_filtered |
Google Column Chart blank page
I am working on a project in which I need to make a Google Chart (column chart) to visualize the data in my database. I checked the IP and the database (the data comes from a database); everything works fine. But when I try to see the output on my computer, the page is blank. I thought the problem came from google.load, so I changed it as shown below, but I still get a blank page. Please help me get through this. Thanks!
//
google.load('visualization', '1.0', {packages:['corechart'], callback: drawChart});
//
Here is the whole page.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>R1 Google Chart</title>
<!-- Load jQuery -->
<script language="javascript" type="text/javascript"
src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js">
</script>
<!--Load the Ajax API-->
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
// Load the Visualization API and the column chart package.
// Set a callback to run when the Google Visualization API is loaded.
google.load('visualization', '1.0', {packages:['corechart'], callback: drawChart});
function drawChart() {
var jsonData = $.ajax({
url: "chart.php",
dataType: "json",
async: false
}).responseText;
var obj = jQuery.parseJSON(jsonData);
var data = google.visualization.arrayToDataTable(obj);
var options = {
title: 'Solar Panel Data',
width: 800,
height: 600,
hAxis: {title: 'time', titleTextStyle: {color: 'red'}}
};
var chart = new google.visualization.ColumnChart(document.getElementById('chart_div'));
chart.draw(data, options);
}
</script>
</head>
<body>
<!--this is the div that will hold the column chart-->
<div id="chart_div" style="width: 900px; height: 500px;">
</div>
</body>
</html>
PHP page
<?php
$con=mysql_connect("131.xxx.xxx.xx","xx","xxxx") or die("Failed to connect with database!!!!");
mysql_select_db("r1array", $con);
/** This example will display a column chart. If you need other charts such as a Bar chart, you will need to modify the code a little
to make it work with bar chart and other charts **/
$sth = mysql_query("SELECT UNIX_TIMESTAMP(TimeStamp), Pac FROM SolarData");
/*
---------------------------
example data: Table (Chart)
--------------------------
TimeStamp Pac
2013-08-16 06:45:01 0
2013-08-16 06:50:01 0
2013-08-16 06:55:01 12
2013-08-16 07:00:00 39
2013-08-16 07:05:01 64
2013-08-16 07:10:00 84
*/
$rows = array();
//flag is not needed
$flag = true;
$table = array();
$table['cols'] = array(
// Labels for your chart, these represent the column titles
array('label' => 'TimeStamp', 'type' => 'TIMESTAMP DEFAULT NOW()'),
array('label' => 'Pac', 'type' => 'INT')
);
$rows = array();
while($r = mysql_fetch_assoc($sth)) {
$temp = array();
//
$temp[] = array('v' => (string) $r['TimeStamp']);
// Values of each slice
$temp[] = array('v' => (int) $r['Pac']);
$rows[] = array('c' => $temp);
}
$table['rows'] = $rows;
$jsonTable = json_encode($table);
echo $jsonTable;
mysql_close($con);
?>
Use Firebug to track the issue. Check your Firebug console and make sure you are getting no errors in it.
I used Firebug. I can see all my code. But it seems like Firebug doesn't give me any errors, even when I use it the right way...
Go to chart.php in your browser, and update your post with what is output by your PHP.
It seems that you prepare the data in the wrong way in your PHP file. With your HTML file and the following PHP file, which fakes your data, I got a column chart.
<?php
/*
---------------------------
example data: Table (Chart)
--------------------------
TimeStamp Pac
2013-08-16 06:45:01 0
2013-08-16 06:50:01 0
2013-08-16 06:55:01 12
2013-08-16 07:00:00 39
2013-08-16 07:05:01 64
2013-08-16 07:10:00 84
*/
$table = array();
$table[0] = array('TimeStamp', 'Pac');
$table[1] = array('2013-08-16 06:45:01', 0);
$table[2] = array('2013-08-16 06:50:01', 0);
$table[3] = array('2013-08-16 06:55:01', 12);
$table[4] = array('2013-08-16 07:00:00', 39);
$table[5] = array('2013-08-16 07:05:01', 64);
$table[6] = array('2013-08-16 07:10:00', 84);
$jsonTable = json_encode($table);
echo $jsonTable;
?>
There is a comment in your PHP file: This example will display a pie chart.... Which chart do you want to create?
sorry, I am trying to display a column data, that's for sure.
Off the bat, I can see a problem here:
$table['cols'] = array(
// Labels for your chart, these represent the column titles
array('label' => 'TimeStamp', 'type' => 'TIMESTAMP DEFAULT NOW()'),
array('label' => 'Pac', 'type' => 'INT')
);
as 'TIMESTAMP DEFAULT NOW()' and 'INT' are not valid data types for a DataTable. If you want to create Date objects from your timestamps, you need to use the 'date' or 'datetime' data type, and format your timestamps as strings with the format 'Date(year, month, day, hour, minute, second, millisecond)', where month is zero-indexed (so January is 0 not 1). The 'INT' type should be 'number'.
Update your code with these fixes. If your chart still doesn't work, view chart.php in a browser - you should see a JSON string output of your data (if not, there is a problem in your PHP that you will have to debug) - and update your post with this JSON string.
| common-pile/stackexchange_filtered |
How can I have one object that can be any class with the same methods?
I have a basic C# app to process some SQL files. I have a class for each type of object, like proc, function, view, etc.
What I want to do in the main is determine what kind of script file I'm processing and then create an instance of that class.
Then, using that object (call it objSqlType), I want to call a method in the underlying class. Each class will have the same method names implemented, like getHeader(), getPermissions(), etc.
Is this possible? I looked at interfaces, which force me to create the methods, but I can't seem to use a single object,
for example:
object objSqlType;
switch (type)
{
case ObjectType.PROC:
objSqlType = new SqlProc();
break;
case ObjectType.FUNCTION:
objSqlType = new SqlFunction();
break;
case ObjectType.VIEW:
objSqlType = new SqlView();
break;
default:
break;
}
string header = objSqlType.getHeader(); // call getHeader for whichever object I created
Possible duplicate of How to use reflection to call method by name, that should do it if you know method name and all types have it (you can check if method exists otherwise). From design point of view you should use interfaces however (or even base class if objects are similar).
instead of object objSqlType; use yourInterface objSqlType;. Maybe consider using base class instead of interface
I would create a generic object MainObject where T can be any of the classes you desire func, proc, view. Those classes would all implement the same interface IMyInterface. class MainObject<T> where T: IMyInterface {public T myObject {get;set;} ...}.
Create an interface
Create classes that implement that interface
Declare your object as that interface
Instantiate your object as one of the classes that implement the interface
Like this:
public interface ISqlType
{
string getHeader();
}
public class SqlProc : ISqlType
{
public string getHeader()
{
return "I'm a SqlProc!";
}
}
public class SqlFunction : ISqlType
{
public string getHeader()
{
return "I'm a SqlFunction!";
}
}
public class SqlView : ISqlType
{
public string getHeader()
{
return "I'm a SqlView!";
}
}
ISqlType objSqlType;
switch (type)
{
case ObjectType.PROC:
objSqlType = new SqlProc();
break;
case ObjectType.FUNCTION:
objSqlType = new SqlFunction();
break;
case ObjectType.VIEW:
objSqlType = new SqlView();
break;
default:
break;
}
string header = objSqlType.getHeader();
Thanks! this exactly what I was after. I just didn't define my generic type as the interface! thanks.
Create an interface and make every object implement it. Or alternatively, create an abstract class and make every object inherit from it.
use dynamic
dynamic objSqlType;
// rest of code
string header = objSqlType.getHeader();
Also, it seems that you are using an uninitialized variable here, because if you fall into the default case then objSqlType is uninitialized, and you cannot use an uninitialized local variable.
And I quote: Therefore, no compiler error is reported. However, the error does not escape notice indefinitely. It is caught at run time and causes a run-time exception. From: https://msdn.microsoft.com/en-us/library/dd264736.aspx Why would you want that in your code?
Maybe the code below helps you. I am not sure whether you want to remove object creation from the client and centralise it.
You can think of implementing your requirement in a decoupled way.
Like..
interface IDBObject
{
string GetDBType();
}
then have your required classes and implement respect interface
public class SQLStoreProc : IDBObject
{
    public string GetDBType()
    {
        return "stored procedure"; // example return value
    }
}
public class SQLFunction : IDBObject
{
    public string GetDBType()
    {
        return "function"; // example return value
    }
}
public class SQLMisc : IDBObject
{
    public string GetDBType()
    {
        return "misc"; // example return value
    }
}
Now create your wrapper or factory class which will be
class clsFactory
{
public static IDBObject CreateObject(string type)
{
IDBObject ObjSelector = null;
switch (type)
{
case "SP":
ObjSelector = new SQLStoreProc();
break;
case "FUNC":
ObjSelector = new SQLFucntion();
break;
default:
ObjSelector = new SQLMisc();
break;
}
return ObjSelector;
}
}
Now in main method you can have something like below
public static void main (args a)
{
IDBObject ObjIntrface = null;
ObjIntrface = clsFactory.CreateObject(your object type);
string res = ObjIntrface.GetDBType("First", "Second");
}
| common-pile/stackexchange_filtered |
Homology of nice planar sets
Is there a quick and simple proof of the fact that the homology group of a nice (say with piecewise smooth boundary) planar domain is free abelian with a basis corresponding to the holes in the domain? Can you direct me to literature that might be relevant, i.e., with explicit examples similar to this? In particular, I would like to know an exact condition under which the above fact is true. For instance, it seems that one cannot take simply an open subset of the plane, as it appears that at least the homotopy of general planar sets is an active research field today.
If you already are at the point where you can talk about the "holes" in the domain, then you can apply Mayer-Vietoris to the union of the planar domain and discs capping off the holes.
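The suggested Mayer-Vietoris computation, as a sketch (granting that each of the $n$ holes can be capped by a disc, which is where Schönflies enters):

```latex
Let $X = \Omega \cup D_1 \cup \dots \cup D_n$ be the contractible set
obtained by capping each hole of the domain $\Omega$ with a closed
disc $D_i$. Take $A = \Omega$ and $B$ a small open thickening of
$\bigsqcup_i D_i$, so that $A \cap B \simeq \bigsqcup_{i=1}^{n} S^1$.
Mayer--Vietoris gives the exact sequence
\[
  0 = H_2(X) \to H_1(A \cap B) \to H_1(A) \oplus H_1(B) \to H_1(X) = 0,
\]
and since $H_1(B) = 0$ (discs), this yields
\[
  H_1(\Omega) \;\cong\; H_1\Bigl(\bigsqcup_{i=1}^{n} S^1\Bigr)
  \;\cong\; \mathbb{Z}^{n},
\]
free abelian with one generator per hole.
```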
Thanks a lot! The holes are defined using the ambient plane simply as the bounded connected components of the complement (I am not sure this is the definition you had in mind). I was thinking of an argument that can be given in a complex analysis class. There is a complex analysis argument to this but another topological argument would be nice if it can be presented in say 20 minutes.
I suppose I was assuming the Schönflies theorem (which implies that each of those bounded components is homeomorphic to a disc). The homological corollary of the Schönflies theorem is an immediate corollary of Alexander duality, but given that I don't have any idea of the background of your students I can't recommend an easy way of teaching the special case you need.
Thanks! Can you direct me to literature that might be relevant, i.e., with explicit examples similar to this? In particular, I would like to know an exact condition under which the above fact is true. For instance, it seems that one cannot take simply an open subset of the plane, as it appears that at least the homotopy of general planar sets is an active research field today.
| common-pile/stackexchange_filtered |
Default case in conflict between linter and code coverage in Scala
My linters always complain when I don't have a default case in pattern matching in Scala. However, often the default case is artificial and my programs can never actually reach that case.
As an example, consider the following program:
scala> val x = 1
x: Int = 1
scala> x match {
| case 1 => println("yay")
| case _ => println("nay")
| }
yay
It is clear that the bottom case is in fact dead code here; however, my linter will still complain about it. On the other hand, I very much understand the gut feeling that matching on integers without covering all cases feels dangerous, but in this case it's clearly irrational.
Should I simply delete the default case here and suppress the linter warning to get some peace of mind?
edit:
Please see https://www.codacy.com/app/hejfelix/Frase/issues?bid=2858415&filters=W3siaWQiOiJDYXRlZ29yeSIsInZhbHVlcyI6WyJFcnJvciBQcm9uZSJdfV0= for a more detailed view on the number of cases where Codacy asks for default cases.
in the example case, an if-else would solve the problem. I don't know your real cases, but maybe if-elses are also applicable
In some other cases, it's probably just that the linter is not powerful enough to understand that the default case is not needed. (BTW, which one are you using?)
I am actually doing an open source project, so Codacy is a free option for me: https://www.codacy.com/app/hejfelix/Frase/dashboard
Very happy with the stuff it does. At work I'm also using ScalaStyle.
edit Also, codacy and travis seem to play quite nicely with each other, especially when you throw in scoverage.
To comment on your suggestion, I almost exclusively use pattern matching on case classes to distinguish between different shapes of e.g. Abstract Syntax Trees. There, I would feel extremely handicapped using if/else because I would have to inspect the types explicitly + manually extracting fields etc. Furthermore, I would hope that the compiler does a better job of optimizing my match cases than I would do using if/else myself.
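Since the matches are on case classes modelling an AST, a sealed trait removes the dilemma entirely: the compiler itself proves exhaustiveness, so no artificial default case is needed and the linter has nothing to flag. A minimal sketch (the Expr type here is hypothetical, not from the question's project):

```scala
sealed trait Expr
case class Num(value: Int)       extends Expr
case class Add(l: Expr, r: Expr) extends Expr

def eval(e: Expr): Int = e match {
  case Num(v)    => v
  case Add(l, r) => eval(l) + eval(r)
  // No default needed: Expr is sealed, so the compiler warns
  // (or fails under -Xfatal-warnings) if a subtype is ever missed.
}
```

For a plain Int like x there is no such compile-time guarantee, which is why the linter insists on a default; if the invariant truly holds, suppressing the warning locally is arguably more honest than keeping dead code.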
| common-pile/stackexchange_filtered |
detect suspect
How do I detect a suspect database in SQL Server 2000 with a script or program?
Thats some blasphemous ms sql slang, but, honestly, it got me thinking of FBI and some kind of "database of suspects" the moment I've seen the question.
The status column in master.dbo.sysdatabases will have the suspect bit (256) set for the database, in which case this will return a result:
select * from sysdatabases where status&256 = 256
| common-pile/stackexchange_filtered |
why <> is empty in adjacency_list<>
Can I ask what the <> is for in adjacency_list<>? I am new to the STL. I know I can define a container like this: vector<int> vec, but why is it empty in <> here? Thank you.
#include <boost/graph/adjacency_list.hpp>
using namespace boost;
adjacency_list<> g;
// adds four vertices to the graph
adjacency_list<>::vertex_descriptor v1 = add_vertex(g);
adjacency_list<>::vertex_descriptor v2 = add_vertex(g);
adjacency_list<>::vertex_descriptor v3 = add_vertex(g);
adjacency_list<>::vertex_descriptor v4 = add_vertex(g);
possible duplicate of Template default arguments
It's because adjacency_list is a templated type. You must specify <> when using C++ templates.
The full definition for the type is:
template <class OutEdgeListS = vecS,
class VertexListS = vecS,
class DirectedS = directedS,
class VertexProperty = no_property,
class EdgeProperty = no_property,
class GraphProperty = no_property,
class EdgeListS = listS>
class adjacency_list
{
...
}
Notice that each template parameter has a default: vecS, vecS, directedS, no_property, no_property, no_property, listS, respectively.
The empty <> means you want the default classes for the template parameters. By not specifying the specific values for the template parameters, you get the defaults.
The reason <> is needed, and can't be left out (which would be nice, yes) is because of the way the C++ language has been defined. You can avoid it by using a typedef, but ultimately the angle brackets are required for using template types.
| common-pile/stackexchange_filtered |
Merge left outer join column
I have a query that uses a left outer join.
Something like this:
select issues.id, journal.notes from issues left outer join journal on (issues.id = journal.issue_id and journal.notes != '')
So far it works.
But in fact in the journal table there can be multiple rows that reference the same issues row.
So for each row of journal, it duplicates the result of issues.
What I would like to do is merge the notes rows for the same issue.
Example :
issues :
id
1
2
journal :
issue_id, notes
1, note 1
2, note 1 for issue 2
2, note 2 for issue 2
Actually it returns this result :
1, note 1
2, note 1 for issue 2
2, note 2 for issue 2
And I would like it to be :
1, note 1
2, note 1 for issue 2\nnote 2 for issue 2
I am using postgres.
How can I do this kind of merge?
Thanks.
Perhaps you are looking for this:
select issues.id, array_agg(journal.notes) from issues
left outer join journal on (issues.id = journal.issue_id and journal.notes != '')
group by issues.id
Please check this http://sqlfiddle.com/#!1/24db9/2
| common-pile/stackexchange_filtered |
Snake Game -- Can't steer snake
Actual question on the bottom of the post!
At first, I want to explain my problem.
I'm writing a basic Snake game and I got the snake to move automatically. It moves automatically to the right of the window when you execute the code, just like intended. However, I can't steer my snake the way I want, it doesn't change its direction at all.
To avoid confusion, player is an instance of the class Snake.
To explain the movement of the snake:
The Snake object has a coordinates property which is an ArrayList holding SnakePart objects. Each SnakePart object has the property x and y. Using this ArrayList, the snake is moving by drawing little rectangles on a canvas by using the x and y properties on the y- and x-axis of the canvas.
The Snake object also has a dx and a dy property that gets added (or subtracted -- depending on the direction of the snake) to the x and y property of the SnakePart object to move the snake in a direction.
To update the ArrayList in Snake.java:
public void move() {
SnakePart head = new SnakePart(this.coordinates.get(0).x + this.dx, this.coordinates.get(0).y + this.dy);
this.coordinates.add(0, head);
this.coordinates.remove(this.coordinates.size() - 1);
}
To draw the snake on the canvas in Board.java (partly, rest of the method is not necessary for now):
@Override
public void paintComponent(Graphics g) {
this.player.coordinates.forEach(snakePart -> {
g.setColor(Color.BLUE);
g.fillRect(snakePart.x, snakePart.y, 10, 10);
});
}
To steer the snake, I want to use the arrow keys. Depending on which arrow key is pressed, the snake's x and y coordinates/properties get modified (Board.java):
@Override
public void keyPressed(KeyEvent event) {
int keyCode = event.getKeyCode();
if (keyCode == 37) {
this.player.dx = -10;
this.player.dy = 0;
} else if (keyCode == 38) {
this.player.dx = 0;
this.player.dy = -10;
} else if (keyCode == 39) {
this.player.dx = 10;
this.player.dy = 0;
} else if (keyCode == 40) {
this.player.dx = 0;
this.player.dy = 10;
}
}
Whole code:
Snake.java:
package com.codef0x.snake;
import java.util.ArrayList;
public class Snake {
ArrayList < SnakePart > coordinates;
int dx = 10;
int dy = 0;
public Snake(ArrayList < SnakePart > coords) {
this.coordinates = coords;
}
public void move() {
SnakePart head = new SnakePart(this.coordinates.get(0).x + this.dx, this.coordinates.get(0).y + this.dy);
this.coordinates.add(0, head);
this.coordinates.remove(this.coordinates.size() - 1);
}
public void grow() {
SnakePart newPart = new SnakePart(0, 0);
newPart.x = this.coordinates.get(this.coordinates.size() - 1).x - 10;
newPart.y = this.coordinates.get(this.coordinates.size() - 1).y;
this.coordinates.add(this.coordinates.size() - 1, newPart);
}
}
Board.java (showing only relevant parts, otherwise it would be too much code)
package com.codef0x.snake;
import javax.swing.*;
import java.awt.*;
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import java.util.ArrayList;
import java.util.Timer;
import java.util.TimerTask;
public class Board extends JPanel implements KeyListener {
Snake player;
ArrayList<SnakePart> snakeCoordinates;
public Board() {
this.snakeCoordinates = new ArrayList<>();
snakeCoordinates.add(new SnakePart(150, 150));
snakeCoordinates.add(new SnakePart(140, 150));
snakeCoordinates.add(new SnakePart(130, 150));
snakeCoordinates.add(new SnakePart(120, 150));
this.player = new Snake(snakeCoordinates);
this.food = new Food();
}
@Override
public void paintComponent(Graphics g) {
clear(g);
this.player.coordinates.forEach(snakePart -> {
g.setColor(Color.BLUE);
g.fillRect(snakePart.x, snakePart.y, 10, 10);
});
}
public void clear(Graphics g) {
g.clearRect(0, 0, getHeight(), getWidth());
}
@Override
public void update(Graphics g) {
paintComponent(g);
}
@Override
public void keyPressed(KeyEvent event) {
int keyCode = event.getKeyCode();
if (keyCode == 37) {
this.player.dx = -10;
this.player.dy = 0;
} else if (keyCode == 38) {
this.player.dx = 0;
this.player.dy = -10;
} else if (keyCode == 39) {
this.player.dx = 10;
this.player.dy = 0;
} else if (keyCode == 40) {
this.player.dx = 0;
this.player.dy = 10;
}
}
@Override
public void keyTyped(KeyEvent event) {
return;
}
@Override
public void keyReleased(KeyEvent event) {
return;
}
public void run(Board board) {
Timer game = new Timer();
game.schedule(new TimerTask() {
boolean initiallySpawned = false;
@Override
public void run() {
Graphics g = board.getGraphics();
if (hitSomething()) { // removed method hitSomething, not relevant
game.cancel();
return;
}
player.move();
update(g);
}
}, 0, 500);
}
}
SnakePart.java:
package com.codef0x.snake;
public class SnakePart {
int x;
int y;
public SnakePart(int x, int y) {
this.x = x;
this.y = y;
}
}
What am I doing wrong and what do I need to change to steer the snake properly?
In case you still want / need to see all files as a whole, you can have a look at them here:
Snake.java
Board.java
SnakePart.java
Food.java <- Not related, but may prevent confusion about the Food object
What behavior are you seeing? Does the snake respond at all? Have you checked to make sure that the keyPressed event is firing like you think it is? I would put System.out.println() statements in your movement code and validate that as you press keys it's flowing through the codepaths that you expect.
I would do key-typed or released rather than just key-pressed, but have you tried just printing or otherwise debugging the grow method to see why dx is always moving "to the right"?
@Jazzepi I've used System.out.println(this.player.dx + " " + this.player.dx) to debug, but only in the keyPressed method where everything looked fine. Now I did it again, but this time I also added a System.out.println(this.dx + " " + this.dy) to the move() method in Snake.java. In keyPressed the outputs look like expected (after pressing key up dx went from 10 to 0 and dy from 0 to -10). But in move() dx is still 10 and dy still 0...
@cricket_007 I'm sure the grow() method is not related to the problem because dx is supposed to be moving to the right all the time, until changed via the arrow keys, which doesn't work.
That's my point... It's not changing so it's always 10 within move and/or grow. If you set a breakpoint in each method and your key handler, what happens?
@CodeF0x Call System.identityHashCode(player) inside of the keyPressed method and the move method. I wonder if they're the same object. I'm hazy on threading semantics at the moment, but I feel like you might have two separate objects. One inside the thread and one outside.
@cricket_007 I've set a breakpoint in grow(), but because it doesn't get called anywhere the debugger doesn't jump to grow(). In keyPressed() everything looks fine. But in move() I still have the same values as they would have never been modified by keyPressed(). The values somehow don't get updated.
Don't use a KeyListener. See Motion Using the Keyboard
The problem is in the main.
You create one board to host your game state, and create a different one to listen to the keyboard.
public static void main(String[] args) {
JFrame frame = new JFrame("Snake");
frame.setDefaultCloseOperation(3);
Board board = new Board();
frame.add(board);
frame.setSize(500, 500);
frame.addKeyListener(new Board());
frame.setVisible(true);
board.run(board);
}
it should be:
public static void main(String[] args) {
JFrame frame = new JFrame("Snake");
frame.setDefaultCloseOperation(3);
Board board = new Board();
frame.add(board);
frame.setSize(500, 500);
frame.addKeyListener(board);
frame.setVisible(true);
board.run(board);
}
Also, board.run(board) makes little sense: in the scope of the run method, board can be replaced by this (and so omitted)...
@minus Nice catch!
| common-pile/stackexchange_filtered |
Explicit Loading of DLL
I'm trying to explicitly link with a DLL. No other resources is available except the DLL file itself and some documentation about the classes and its member functions.
From the documentation, each class comes with its own
member typedef
example: typedef std::map<std::string,std::string> Server::KeyValueMap, typedef std::vector<std::string> Server::StringArray
member enumeration
example: enum Server::Role {NONE,HIGH,LOW}
member function
example: void Server::connect(const StringArray,const KeyValueMap), void Server::disconnect()
Implementing code found through a Google search, I managed to load the DLL and call the disconnect function.
dir.h
LPCSTR disconnect = "_Java_mas_com_oa_rollings_as_apiJNI_Server_1disconnect@20";
LPCSTR connect =
"_Java_mas_com_oa_rollings_as_apiJNI_Server_1connect@20";
I got the function name above from depends.exe. Is this what is called decorated/mangled function names in C++?
main.cpp
#include <iostream>
#include <windows.h>
#include <tchar.h>
#include "dir.h"
typedef void (*pdisconnect)();
int main()
{
HMODULE DLL = LoadLibrary(_T("server.dll"));
pdisconnect _disconnect;
if(DLL)
{
std::cout<< "DLL loaded!" << std::endl;
_disconnect = (pdisconnect)GetProcAddress(DLL,disconnect);
if(_disconnect)
{
std::cout << "Successful link to function in DLL!" << std::endl;
}
else
{
std::cout<< "Unable to link to function in DLL!" << std::endl;
}
}
else
{
std::cout<< "DLL failed to load!" << std::endl;
}
FreeLibrary (DLL);
return 0;
}
How do I call (for example) the connect member function, which has parameter datatypes declared in the DLL itself?
Edit
more info:
The DLL comes with an example implementation using Java. The Java example contains a Java wrapper generated using SWIG and a source code.
The documentation lists all the class, their member functions and also their datatypes. According to the doc, the list was generated from the C++ source codes.(??)
No other info was given (no info on what compiler was used to generate the DLL)
My colleague is implementing the interface using Java based on the Java example given, while I was asked to implement using C++. The DLL is from a third party company.
I'll ask them about the compiler. Any other info that i should get from them?
I had a quick read about JNI but I don't understand how it's implemented in this case.
Update
i'm a little confused... (ok, ok... very confused)
Do I call (GetProcAddress) each public member function separately, only when I want to use it?
Do I create a dummy class that imitates the class in the DLL, and then inside the class definition call the equivalent function from the DLL? (Am I making sense here?) fnieto, is this what you're showing me at the end of your post?
Is it possible to instantiate the whole class from the DLL?
I was trying to use the connect function described in my first post. From the Depends.exe DLL output,
std::map // KeyValueMap has the following member functions: del, empty, get, has_1key,set
std::vector // StringArray has the following member functions: add, capacity, clear, get, isEMPTY, reserve, set, size
which is different from the member functions of map and vector in my compiler (VS 2005)...
Any idea? or am i getting the wrong picture here...
Unless you use a disassembler and try to figure out the parameter types from assembly code, you can't. This kind of information is not stored in the DLL but in a header file that comes with the DLL. If you don't have it, the DLL is probably not meant to be used by you.
I would be very careful if I were you: the STL library was not designed to be used across compilation boundaries like that.
Not that it cannot be done, but you need to know what you are getting into.
This means that using STL classes across DLL boundaries can safely work only if you compile your EXE with the same exact compiler and version, and the same settings (especially DEBUG vs. RELEASE) as the original DLL. And I do mean "exact" match.
The C++ standard STL library is a specification of behavior, not implementation. Different compilers and even different revisions of the same compiler can, and will, differ on the code and data implementations. When your library returns you an std::map, it's giving you back the bits that work with the DLL's version of the STL, not necessarily the STL code compiled in your EXE.
(and I'm not even touching on the fact that name mangling can also differ from compiler to compiler)
Without more details on your circumstances, I can't be sure; but this can be a can of worms.
Thanks for your reply... What other details do you need? I'm not sure what details I should be looking for.
Well -- essentially -- where did the DLL come from. I would be very concerned if you said it was a DLL handed down to you and you didn't know where it came from, or what compiler was used. I would be much less concerned if it had been developed by someone else in your organization and you knew exactly what compiler they were using and you could coordinate your compiler upgrades with the other person. Just as @fnieto, I didn't notice the Java Native Interface connection; I have no experience with JNI.
In order to link with a DLL, you need:
an import library (.LIB file), this describes the relation between C/C++ names and DLL exports.
the C/C++ signatures of the exported items (usually functions), describing the calling convention, arguments and return value. This usually comes in a header file (.H).
From your question it looks like you can guess the signatures (#2), but you really need the LIB file (#1).
The linker can help you generate a LIB from a DLL using an intermediate DEF.
Refer to this question for more details: How to generate an import library from a DLL?
Then you need to pass the .lib as an "additional library" to the linker. The DLL must be available on the PATH or in the target folder.
| common-pile/stackexchange_filtered |
list of object to single selected object on submit form button razor mvc c#
I have a razor view with a List, but on the submit button I want to send the selected Employee[0] (or any other index, e.g. Employee[1]) to the respective controller.
Let's suppose:
@model List<Employee>
@for(int i=0;i<Model.Count;i++)
{
@using (Html.BeginForm("A", "B", FormMethod.Post,
new { enctype = "multipart/form-data" }))
{
@html.TextBoxFor(x=>x[i].Name);
<input id="abc" type=submit>
}
}
public Acontroller()
{
public ActionResult B(Employee employee)
{
employee.Name.....
}
}
i<Model.Count? Right?
You code would not even compile! The textbox would need to be @Html.TextBoxFor(m => m[i].Name) because the model is a collection. but this makes no sense. You can only submit one form so why are you generating all that extra pointless html. And then what happens when a user enters a value in one textbox and then clicks another submit button
You're going about this the wrong way (and it will not work anyway). Have an index view to display all your objects, with associated Edit links that redirect to an Edit page to edit that Employee. Or, if you want to edit one or more employees in a view, then you have one form and a for loop inside it to generate the html for each item, and you post back the whole collection.
| common-pile/stackexchange_filtered |
OBD2: Reading VIN number using CAN Reader and custom Python Code
I am trying to read a vehicle's VIN using a CAN reader and my custom Python code, following the OBD2 specification, and I cannot obtain the whole VIN as I receive only one response back.
I noticed that after sending this:
7DF#0209005555555555
I get the response:
7E8#1014490201574241
When I translate 1014490201574241 into a bytearray, I get the following:
>>> Response = "1014490201574241"
>>> ResponseByteArray = bytearray.fromhex(Response)
>>> ResponseByteArray
bytearray(b'\x10\x14I\x02\x01WBA')
Now, "WBA" are only the first 3 out of 17 characters of the VIN number, and I have not received any other messages from ECU.
I found this post: Flow control message while receiving CAN message with ELM327, which explains the concept of "First Frame" and "Flow Control". The post is saying that we have to send a "Flow Control" request to get the rest of the information. It also says that this needs to be sent directly to the address of the main ECU and not on the broadcast address of 7DF. In that example, they stated that the address of the main ECU is 7E0.
Questions:
How did they know that the address of the main ECU is 7E0? Can that be determined somehow by sending some OBD2 requests?
Under which standard are things like "First Frame" and "Flow Control" defined? I have not seen them in any of the OBD2 documentation I came across.
I was trying to obtain the VIN number using custom Python Code and CAN Reader connected to OBD2 port. I was expecting to receive the whole VIN number in multiple messages, but I received only the first one.
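For what it's worth, the First Frame / Flow Control machinery comes from ISO 15765-2 (ISO-TP), and the 7E0/7E8 pairing from ISO 15765-4: physically addressed requests use IDs 0x7E0-0x7E7 and the matching responses 0x7E8-0x7EF (response ID = request ID + 8), which is how a reply on 7E8 implies a request address of 7E0. Once the flow-control frame has been sent and the consecutive frames collected, reassembling the VIN is plain byte bookkeeping. A minimal Python sketch; the consecutive-frame contents after "WBA" are invented for illustration:

```python
def reassemble_isotp(frames):
    """Reassemble an ISO-TP multi-frame response from raw 8-byte CAN payloads.

    frames: hex strings, starting with the First Frame (PCI nibble 0x1),
    followed by the Consecutive Frames (PCI nibble 0x2) in sequence order.
    """
    first = bytes.fromhex(frames[0])
    # First Frame PCI: the low 12 bits of the first two bytes hold the
    # total payload length (here 0x014 = 20 bytes).
    total_len = ((first[0] & 0x0F) << 8) | first[1]
    payload = bytearray(first[2:])           # 6 payload bytes in the FF
    for raw in frames[1:]:
        cf = bytes.fromhex(raw)
        assert cf[0] >> 4 == 0x2             # Consecutive Frame marker
        payload += cf[1:]                    # 7 payload bytes per CF
    payload = payload[:total_len]
    # Mode 0x09 PID 0x02 response: 49 02 <record count>, then the VIN
    assert payload[:2] == b"\x49\x02"
    return payload[3:].decode("ascii")

frames = [
    "1014490201574241",  # the First Frame actually observed above
    "2141424344454647",  # hypothetical consecutive frame 1 ("ABCDEFG")
    "2248313233343536",  # hypothetical consecutive frame 2 ("H123456")
]
print(reassemble_isotp(frames))  # WBAABCDEFGH123456
```

With real hardware the two consecutive frames would only arrive after a flow-control frame (e.g. 30 00 00) is sent to the physical request ID; this sketch covers just the reassembly step.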
| common-pile/stackexchange_filtered |
Core Deposits when modelling Non-Maturity Deposits according to IRRBB
When modeling Non-maturity deposits (NMDs) the Basel Committee suggests the following
(see 31.109 of the guidelines):
Banks should distinguish between the stable and the non-stable parts
of each NMD category using observed volume changes over the past 10
years. The stable NMD portion is the portion that is found to remain
undrawn with a high degree of likelihood. Core deposits are the
proportion of stable NMDs which are unlikely to reprice even under
significant changes in the interest rate environment. The remainder
constitutes non-core NMDs.
I found a bit of literature on how to distinguish between stable and non-stable portions.
However, I found nothing on modelling core and non-core deposits. And given the definition of core deposits and the current interest-rate environment in central Europe, I think there are not many analytical methods to distinguish them.
Does anyone have experience in modeling core-deposits and/or can point me towards literature?
How can core deposits be identified in the bank?
This is more of a practical answer, but I've seen an approach by a medium-sized bank (balance sheet about 60 billion), which is already ECB-proof and which might answer your question:
In a nutshell, you can apply a dynamic replication approach, which tries to find the optimal portfolio of fixed assets to invest in so as to achieve a margin with minimal standard deviation. You apply this to your whole deposit base. Then you define non-core deposits as the proportion whose modelled maturity is at the overnight rate (or less than 1 month, depending on how you define it).
Example:
You have retail deposits, which technically have a maturity of 1 day. After applying your optimizing algo, the result is: invest 10% at the overnight interest rate, 30% at the 1-year rate and 60% at the 10-year rate to achieve a margin with the lowest standard deviation.
You can go one step further and apply a CAPM approach, which tries to find the optimal mixture with respect to an efficient frontier: not the minimum-standard-deviation margin, but the one with the optimal Sharpe ratio, i.e. the best risk-return trade-off for you as a bank.
So in the end, you end up with 10% non-core deposits and 90% core deposits out of your overall retail deposits (which are NMDs).
Hope this helps a bit.
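The minimum-standard-deviation replication described above can be sketched numerically. The rate histories, the three tenors and the 10%-step weight grid below are all invented for illustration; a real implementation would use a proper optimizer and the bank's own data:

```python
from itertools import product
from statistics import pstdev

# Hypothetical monthly rates for three replication tenors, plus the
# (constant) deposit rate paid to customers; all numbers are made up.
overnight = [0.010, 0.002, 0.015, 0.001, 0.012, 0.003]
one_year  = [0.012, 0.011, 0.013, 0.010, 0.012, 0.011]
ten_year  = [0.025, 0.024, 0.026, 0.024, 0.025, 0.025]
deposit   = [0.005] * 6

def margin_stdev(w_on, w_1y, w_10y):
    # Standard deviation of the margin earned by the replicating portfolio
    margins = [w_on * a + w_1y * b + w_10y * c - d
               for a, b, c, d in zip(overnight, one_year, ten_year, deposit)]
    return pstdev(margins)

# Grid search over weight mixes in 10% steps that sum to one
candidates = ((i / 10, j / 10, (10 - i - j) / 10)
              for i, j in product(range(11), repeat=2) if i + j <= 10)
best = min(candidates, key=lambda w: margin_stdev(*w))
# best[0], the overnight weight, then plays the role of the
# non-core proportion; the remainder is core.
print(best)
```

In practice the grid search is replaced by a constrained optimizer (or by the CAPM/Sharpe-ratio variant mentioned above), but the structure is the same: pick replication weights, score the resulting margin series, keep the best mix.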
| common-pile/stackexchange_filtered |
How to add text to an adjacent cell based on string values
I am currently trying to make a CSV import macro system that imports, formats and simplifies the enclosed data. I have, up to this point, gotten the file imported and formatted for the most part. At this point, I am trying to add a column adjacent to A:A and, based on a file suffix, put the related term in the new column.
I've tried using .Find functions, and am currently trying to work with a For loop, with an If InStr function enclosed.
Sub FileExtentionAddition()
ActiveSheet.Range("B:B").Select 'Adding the adjacent column
Selection.Insert Shift:=xlToRight, CopyOrigin:=xlFormatFromLeftOrAbove
Dim SearchRange As Range, Cell As Range
Dim i As Integer
i = 2
Dim LastRow As String 'Creates Variable for Last Row
LastRow = ActiveSheet.UsedRange.Rows.Count 'Defines Last Row Number
Set SearchRange = Range("A2:A" & LastRow) 'Restricts For Loop to used cells
For Each Cell In SearchRange
If InStr(1, ActiveCell.Value, ".SLDPRT") > 0 Then
ActiveCell.Offset(0, 1).Value = "Part"
ElseIf InStr(1, ActiveCell.Value, ".SLDASM") Then
ActiveCell.Cell.Offset(0, 1).Value = "Assembly"
ElseIf InStr(1, ActiveCell.Value, ".SLDDRW") Then
ActiveCell.Cell.Offset(0, 1).Value = "Drawing"
Else
ActiveCell.Offset(0, 1).Value = "Other"
End If
Next
End Sub
Currently running this with a file gives me 0 added text in column B, with no errors at all. I have tried using Debug.Print to read ActiveCell.Value but from what I can tell, there isn't anything being read. I'm fairly new to VBA so hopefully I am just missing something.
Use Cell, not ActiveCell.
Switching it back gives the same issues, doesn't change much, sorry
Using cell as a variable is a bad idea, cell already has a built in meaning.
dim lastrow as long
dim i as long
dim cellval as string
with activesheet ' You should make this an explicit reference
lastrow = .cells(.rows.count, "A").end(xlup).row
for i = 2 to lastrow
cellval = .cells(i, "A").value
if instr(1, cellval, ".SLDPRT") then
.cells(i, "B").value = "Part"
elseif instr(1, cellval, ".SLDASM") then
.cells(i, "B").value = "ASSEMBLY"
elseif InStr(1, cellval, ".SLDDRW") Then
.cells(i, "B").Value = "Drawing"
else
.cells(i, "B").value = "Other"
end if
next i
end with
Hey Warcupine, That's fair, I didn't really realize that it had a set meaning by itself.
I tried out your code, and the for loop doesn't seem to even activate, just gets skipped over. Do I need to add anything to it?
I didn't put the top of your code, lastrow isn't actually being set if you copy pasted
I'm running it with all of the variables defined, but am still getting a 1004 Runtime Error. Any guesses?
is lastrow = .cells(.rows.count, "A").end(xlup).row within the with statement? I'm guessing it is a range issue, on what line does it fail?
It is failing on cellval = .Cells(iter, "A").Value right now. I've been trying a bunch of other things, and no matter what method that I use it's always defining the string to be searched in inStr that doesn't seem to work
Aha, I declared i and then used iter; that will probably fix it. Assuming you are using option explicit (which you should be).
That did it! Thank you so much!
| common-pile/stackexchange_filtered |
Plot LINESTRING Z from GeoDataFrame using pydeck's PathLayer (or TripLayer)
I have a geodataframe with LINESTRING Z geometries:
| | TimeUTC | Latitude | Longitude | AGL | geometry |
|---|---|---|---|---|---|
| 0 | 2021-06-16 00:34:04+00:00 | 42.8354 | -70.9196 | 82.2 | LINESTRING Z (42.83541343273769 -70.91961015378617 82.2, 42.83541343273769 -70.91961015378617 82.2) |
| 1 | 2021-06-14 13:32:18+00:00 | 42.8467 | -70.8192 | 66.3 | LINESTRING Z (42.84674080836037 -70.81919357049679 66.3, 42.84674080836037 -70.81919357049679 66.3) |
| 2 | 2021-06-18 23:56:05+00:00 | 43.0788 | -70.7541 | 0.9 | LINESTRING Z (43.07882882269921 -70.75414567194126 0.9, 43.07884601143309 -70.75416286067514 0, 43.07885174101104 -70.75416286067514 0, 43.07884028185512 -70.75415713109717 0, 43.07884601143309 -70.75414567194126 0, 43.07884601143309 -70.75414567194126 0) |
I can plot the component points using pydeck's ScatterplotLayer with the raw (not geo) dataframe, but I also need to plot the full, smooth track.
I've tried this:
layers = [
pdk.Layer(
type = "PathLayer",
data=tracks,
get_path="geometry",
width_scale=20,
width_min_pixels=5,
get_width=5,
get_color=[180, 0, 200, 140],
pickable=True,
),
]
view_state = pdk.ViewState(
latitude=gdf_polygon.centroid.x,
longitude=gdf_polygon.centroid.y,
zoom=6,
min_zoom=5,
max_zoom=15,
pitch=40.5,
bearing=-27.36)
r = pdk.Deck(layers=[layers], initial_view_state=view_state)
return(r)
Which silently fails. Try as I might, I cannot find a way to convert the
LINESTRING Z's (and I can do without the Z component if need be) to an object
that pydeck will accept.
I found a way to extract the info needed from GeoPandas and make it work in pydeck. You just need to apply a function that extracts the coordinates from the shapely geometries as a list. Here is a fully reproducible example:
import shapely
import numpy as np
import pandas as pd
import pydeck as pdk
import geopandas as gpd
linestring_a = shapely.geometry.LineString([[0,1,2],
[3,4,5],
[6,7,8]])
linestring_b = shapely.geometry.LineString([[7,15,1],
[8,14,2],
[9,13,3]])
multilinestring = shapely.geometry.MultiLineString([[[10,11,2],
[13,14,5],
[16,17,8]],
[[19,10,11],
[12,15,4],
[10,13,0]]])
gdf = gpd.GeoDataFrame({'id':[1,2,3],
'geometry':[linestring_a,
linestring_b,
multilinestring],
'color_hex':['#ed1c24',
'#faa61a',
'#ffe800']})
# Function that transforms a hex string into an RGB tuple.
def hex_to_rgb(h):
h = h.lstrip("#")
return tuple(int(h[i : i + 2], 16) for i in (0, 2, 4))
# Applying the HEX-to-RGB function above
gdf['color_rgb'] = gdf['color_hex'].apply(hex_to_rgb)
# Function that extracts the 2d list of coordinates from an input geometry
def my_geom_coord_extractor(input_geom):
if (input_geom is None) or (input_geom is np.nan):
return []
else:
if input_geom.type[:len('multi')].lower() == 'multi':
full_coord_list = []
for geom_part in input_geom.geoms:
geom_part_2d_coords = [[coord[0],coord[1]] for coord in list(geom_part.coords)]
full_coord_list.append(geom_part_2d_coords)
else:
full_coord_list = [[coord[0],coord[1]] for coord in list(input_geom.coords)]
return full_coord_list
# Applying the coordinate list extractor to the dataframe
gdf['coord_list'] = gdf['geometry'].apply(my_geom_coord_extractor)
gdf_polygon = gdf.unary_union.convex_hull
# Establishing the default view for the pydeck output
view_state = pdk.ViewState(latitude=gdf_polygon.centroid.coords[0][1],
longitude=gdf_polygon.centroid.coords[0][0],
zoom=4)
# Creating the pydeck layer
layer = pdk.Layer(
type="PathLayer",
data=gdf,
pickable=True,
get_color='color_rgb',
width_scale=20,
width_min_pixels=2,
get_path="coord_list",
get_width=5,
)
# Finalizing the pydeck output
r = pdk.Deck(layers=[layer], initial_view_state=view_state, tooltip={"text": "{id}"})
r.to_html("path_layer.html")
Here's the output it yields:
Big caveat
It seems like pydeck isn't able to deal with MultiLineString geometries. Notice how, in the example above, my original dataframe had 3 geometries, but only 2 lines were drawn in the screenshot.
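One way around this caveat is to split each multi-part row into one row per part before building the layer (with GeoPandas itself, gdf.explode() achieves much the same before the coordinate extraction). A dependency-free sketch of the split, operating on plain dicts shaped like the dataframe rows above:

```python
def explode_multiparts(records):
    """Give each part of a multi-part coord_list its own record.

    A single-part coord_list looks like [[x, y], ...]; a multi-part one
    looks like [[[x, y], ...], [[x, y], ...]], the nesting produced by
    my_geom_coord_extractor above.
    """
    out = []
    for rec in records:
        coords = rec["coord_list"]
        if coords and isinstance(coords[0][0], list):   # multi-part row
            for part in coords:
                out.append({**rec, "coord_list": part})
        else:                                           # single part or empty
            out.append(rec)
    return out

rows = [
    {"id": 2, "coord_list": [[7, 15], [8, 14], [9, 13]]},
    {"id": 3, "coord_list": [[[10, 11], [13, 14]], [[19, 10], [12, 15]]]},
]
print(len(explode_multiparts(rows)))  # 3
```

Feeding the exploded rows to the PathLayer should then draw every part, at the cost of duplicating the non-geometry columns once per part.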
This neatly solves half of the problem, thank you! Unfortunately, pydeck doesn't seem to like LINESTRING any more than it likes LINESTRING Z.
There! I've updated the code to make it work with pydeck's PathLayer method. Let me know if you have any issues converting my example above to the specific case you need.
Thank you, very much. I spent the afternoon trying all sorts of conversions and they either failed silently or threw a JSON related parsing error.
This works, works well, and is clear.
Supposedly PathLayer takes 3D coordinates but I'll worry about that another day.
Thank you very much.
Awesome! Happy to hear that =)
Creating a polygon from all of the points and then using its centroid may be faster than the view_state lambdas you use:
gdf_polygon = gdf.unary_union.convex_hull
Well noted! I've updated the code above to incorporate the suggestion =) Thanks for the tip!
| common-pile/stackexchange_filtered |
How do I set the aspect ratio for a plot in Python with Spyder?
I'm brand new to Python, I just switched from Matlab. The distro is Anaconda 2.1.0 and I'm using the Spyder IDE that came with it.
I'm trying to make a scatter plot with equal ratios on the x and y axes, so that this code prints a square figure with the vertices of a regular hexagon plotted inside.
import numpy
import cmath
import matplotlib
coeff = [1,0,0,0,0,0,-1]
x = numpy.roots(coeff)
zeroplot = plot(real(x),imag(x), 'ro')
plt.gca(aspect='equal')
plt.show()
But plt.gca(aspect='equal') returns a blank figure with axes [0,1,0,1], and plt.show() returns nothing.
I think the main problem is that plt.gca(aspect='equal') doesn't just grab the current axis and set its aspect ratio. From the documentation, (help(plt.gca)) it appears to create a new axis if the current one doesn't have the correct aspect ratio, so the immediate fix for this should be to replace plt.gca(aspect='equal') with:
ax = plt.gca()
ax.set_aspect('equal')
I should also mention that I had a little bit of trouble getting your code running because you're using pylab to automatically load numpy and matplotlib functions: I had to change my version to:
import numpy
import cmath
from matplotlib import pyplot as plt
coeff = [1,0,0,0,0,0,-1]
x = numpy.roots(coeff)
zeroplot = plt.plot(numpy.real(x), numpy.imag(x), 'ro')
ax = plt.gca()
ax.set_aspect('equal')
plt.show()
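As a sanity check on what the corrected script should display: the roots of z^6 - 1 = 0 are the sixth roots of unity, which all sit on the unit circle with every side of the hexagon exactly length 1, so the figure only looks like a regular hexagon when both axes use the same scale. The standard-library cmath module is enough to verify this:

```python
import cmath

# The six roots of z**6 - 1: exp(2*pi*i*k/6) for k = 0..5
roots = [cmath.exp(2j * cmath.pi * k / 6) for k in range(6)]

# Every vertex lies on the unit circle...
assert all(abs(abs(z) - 1) < 1e-12 for z in roots)
# ...and every side has the same length (exactly 1 for a hexagon
# inscribed in the unit circle), hence the need for equal aspect.
sides = [abs(roots[(k + 1) % 6] - roots[k]) for k in range(6)]
assert all(abs(s - 1) < 1e-12 for s in sides)
```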
People who are already comfortable with Python don't generally use Pylab, from my experience. In future you might find it hard to get help on things if people don't realise that you're using Pylab or aren't familiar with how it works. I'd recommend disabling it and trying to get used to accessing the functions you need through their respective modules (e.g. using numpy.real instead of just real)
Perfecto! Thanks a lot. I'm not used to having to call functions through modules, but if it's better then, well, I guess I'll get used to it. I was avoiding it, but now that I'm a few days in and more familiar with the syntax it's not a problem. Thanks a lot.
Yeah, I understand why Pylab exists, and why it seems so attractive to people moving over from MATLAB, but over the long term I feel like it's a net negative because it makes it harder to share code with and get help from everyone who isn't using it.
| common-pile/stackexchange_filtered |
Powershell: Convert combobox.SelectedItem to time value to use at trigger for task scheduler
My question is how to convert an hour and minute chosen from a combobox into a time value that I can use in my script for creating a scheduled task. I have a method to pick the date through a MonthCalendar; the only problem is the hour and minutes.
write-host 'datum: ' $dateTimePickerDatum.SelectionRange.Start.ToShortDateString()
write-host 'Hour: ' $DropDownUur.SelectedItem.ToString()
write-host 'Min: ' $DropDownMin.SelectedItem
$action = New-ScheduledTaskAction -Execute 'c:\Users\plc\Desktop\script.bat'
$trigger = New-ScheduledTaskTrigger -Once $dateTimePickerDatum.SelectionRange.Start.ToShortDateString() -At ($DropDownUur.SelectedItem.ToString()):($DropDownMin.SelectedItem.ToString())
Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "Bestand verplaatser" -Description "Het verplaatsen van bestand op gewenste tijdstip"
What is an example of what $DropDownUur.SelectedItem.ToString() and $DropDownMin.SelectedItem come across as? Are the hours digits, i.e. "04"? Is it 24-hour format, i.e. "14"?
You need to get it into the System.DateTime or just [datetime]format that the -At parameter takes.
Examples:
"4:45" = Today at 4:45 AM
"4:45 pm" = Today at 4:45 PM
"16:45" = Today at 4:45 PM
Based upon the info I have, it will probably depend on how the Hour format is given.
The value following -At could work as ($DropDownUur.SelectedItem.ToString()+":"+$DropDownMin.SelectedItem.ToString()), guessing that both are digits. If the hour is 12-hour formatted, then you'll just need to add the AM/PM to the string concat: ($DropDownUur.SelectedItem.ToString()+":"+$DropDownMin.SelectedItem.ToString()+" PM")
Whole line:
$trigger = New-ScheduledTaskTrigger -Once $dateTimePickerDatum.SelectionRange.Start.ToShortDateString() -At ($DropDownUur.SelectedItem.ToString()+":"+$DropDownMin.SelectedItem.ToString())
Update
Also, I should have reorganized the date too. The date and time both need to go after the -At switch; you're only using the -Once and -At switches here. Look at the docs for New-ScheduledTaskTrigger to see where things need to go. The date needs to be passed after the -At switch together with the time, like so:
Corrected:
$trigger = New-ScheduledTaskTrigger -Once -At ($dateTimePickerDatum.SelectionRange.Start.ToShortDateString() + " " + $DropDownUur.SelectedItem.ToString()+":"+$DropDownMin.SelectedItem.ToString())
The effective string being made now is "11/02/18 4:45" which is in [datetime] format/structure to be read in by -At
Thank you so much, it works, but now I have this problem: "New-ScheduledTaskTrigger : A positional parameter cannot be found that accepts argument '11/02/2018'." It says that the dateTimePickerDatum isn't a good argument.
See the answer edit; I should have caught that when I first looked at it. The date needs to be after the -At switch.
Error importing built-in module "_subprocess" using Google Cloud Platform's Local Development Server
Does anyone know how I can fix the following error? Error message: "ImportError: No module named _subprocess"
C:\Users\MicroSilicon\Desktop\hello_world>python2 "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\bin\dev_appserver.py" app.yaml
INFO 2019-12-16 09:23:23,341 devappserver2.py:285] Skipping SDK update check.
INFO 2019-12-16 09:23:23,506 api_server.py:282] Starting API server at: http://localhost:60054
INFO 2019-12-16 09:23:23,509 dispatcher.py:263] Starting module "default" running at: http://localhost:8080
INFO 2019-12-16 09:23:23,512 admin_server.py:150] Starting admin server at: http://localhost:8000
INFO 2019-12-16 09:23:25,522 instance.py:294] Instance PID: 7284
INFO 2019-12-16 09:23:37,250 module.py:434] [default] Detected file changes:
main.pyc
WARNING 2019-12-16 15:23:37,354 sandbox.py:1104] The module msvcrt is whitelisted for local dev only. If your application relies on msvcrt, it is likely that it will not function properly in production.
ERROR 2019-12-16 15:23:37,355 wsgi.py:269]
Traceback (most recent call last):
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 311, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "C:\Users\MicroSilicon\Desktop\hello_world\main.py", line 16, in <module>
import subprocess
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\python\runtime\sandbox.py", line 1043, in load_module
return self.import_stub_module(fullname)
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\python\runtime\sandbox.py", line 1049, in import_stub_module
__import__(fullname, {}, {})
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\dist27\subprocess.py", line 8, in <module>
from python_std_lib import subprocess
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\dist27\python_std_lib\subprocess.py", line 417, in <module>
import _subprocess
File "C:\Users\MicroSilicon\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\devappserver2\python\runtime\sandbox.py", line 1113, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named _subprocess
I have installed the Google Cloud SDK for Windows and checked the box to get the bundled python installation (version 2.7.13) with it.
Python Installation Check
Basically followed the instructions in the link below to get the Hello World application working in a local environment (up to step "Make a change").
https://cloud.google.com/appengine/docs/standard/python/quickstart
Now, the problem occurred when I added the statement import subprocess to the main.py file.
Note that the exact problem is with the line import _subprocess in the module "subprocess.py". This is strange to me because if I try to run any basic Python script (not using dev_appserver.py app.yaml to deploy the Google Cloud environment), or if I just use the Python interpreter directly from the console (Windows Command Prompt), I get no errors when trying to import subprocess, nor if I try to import _subprocess directly.
Import Statement from Console Python.
Here is the Hello World code (with added subprocess import):
# Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import webapp2
import subprocess
class MainPage(webapp2.RequestHandler):
def get(self):
self.response.headers['Content-Type'] = 'text/plain'
self.response.write('Hello, World!')
app = webapp2.WSGIApplication([
('/', MainPage),
], debug=True)
Finally, my acquaintance had installed the same software earlier and gets no such error when running any application that uses this import statement.
Note: I am working on a Windows 10 machine.
Understood, I've now included errors in text format as well.
[Update]
I was not able to resolve this issue in my installed version of Google Cloud SDK.
However, installing an older version (specifically version 220.0.0 with bundled python) resolved the import subprocess error. For my purposes this is acceptable.
Here are the details of my working installation:
C:\Users\MicroSilicon>gcloud version
Google Cloud SDK 220.0.0
app-engine-python 1.9.77
app-engine-python-extras 1.9.74
bq 2.0.34
cloud-datastore-emulator 2.0.2
core 2018.10.08
gsutil 4.34
Also ran into this same issue and could only solve it by rolling back to 220:
gcloud components update --version 220.0.0
I ran into this as well and found out GAE was using a modified version of subprocess residing in the dist27 folder. Looking at the file subprocess.py, I found out it listens to an environment variable, GAE_USE_SUBPROCESS.
GAE version (gcloud version):
>gcloud version
Google Cloud SDK 308.0.0
app-engine-python 1.9.91
app-engine-python-extras 1.9.90
beta 2020.08.28
bq 2.0.60
cloud-datastore-emulator 2.1.0
core 2020.08.28
gsutil 4.53
kubectl 1.15.11
Solution
Disable the usage of subprocess by adding the following to your app.yaml file:
env_variables:
GAE_USE_SUBPROCESS: 0
This worked for me at least (running in win64/windows 10)
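The effect of that switch can be sketched in plain Python. This is a hypothetical illustration of the gating idea only (the real logic lives in the SDK's dist27/subprocess.py), assuming "0" means disabled:

```python
import os

# Hypothetical sketch: gate the subprocess import on an env var, the way
# GAE_USE_SUBPROCESS toggles the dev server's patched subprocess module.
def use_subprocess():
    # Treat "0" as disabled; anything else (or unset) as enabled.
    return os.environ.get("GAE_USE_SUBPROCESS", "1") != "0"

if use_subprocess():
    import subprocess  # the import that fails inside the Windows sandbox
```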
I was thinking that, as far as we can tell, it could be something wrong with your installation, either your SDK installation or your Python installation. Could you reinstall them and check again?
It's just that I tried this with no issues, and as you specified, someone else did the same and had no issues, so maybe the problem is within the installation of either of those products on your PC.
Yes, I've tried reinstalling but I seem to always end up with the same issue. My next attempt will be to try an older version (the same one my acquaintance installed) and see if that helps. Thanks for your help, seeing that it worked for you too I'll keep an eye on the location of the installation and see if I notice anything that could cause problems.
API trying to use code on the same server - require path issue
Goal: to require files from an old monolithic code base that is still an active website, and use those files as logic for an API running on the same server in its own instance of PHP, hoping to leverage the old code's deeply buried business logic. The old code base must keep running as we transition away from it.
Problem: The old code base uses $_SERVER['DOCUMENT_ROOT'] in its require paths, so the new API can't include those files because it resolves the document root to its own web root. The old code files contain MANY requires, and those required pages include many more requires, so going through and replacing the document root variable with relative paths would be a mammoth job. For every new route I write, I would need to go through all of the old code's requires and change the paths to relative.
Using curl from the new API works fine, but I’m trying to avoid the overhead.
I thought of using an $_ENV var but again it’s the server environment and it has the same problem.
Is there a way to circumvent this problem?
Why not overwrite $_SERVER['DOCUMENT_ROOT'] before you require the old files, and restore it after?
The easiest by far would be to write a script that automatically updates all the require paths. This is probably far better than any workaround you can come up with.
Wouldn't this be better as a comment?
@Adam given the question, I feel that this is the most optimal solution. It would certainly solve OP's problem. Do you feel that there's something missing from my answer?
XMLHttpRequest java javascript
I am trying to communicate between JavaScript and Java. My JavaScript sends a message to Java, and Java sends a response.
javascript part:
var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4)
{
alert(xmlhttp.responseText);
}
}
var s = "LIGNE \n 2 \n il fait beau \nEND\n";
xmlhttp.open("POST","http://localhost:6020",true);
xmlhttp.send(s);
java part:
try {
serverSocket = new ServerSocket(6020);
} catch (IOException e) {
System.err.println("Could not listen on port: 6020.");
System.exit(-1);
}
Socket socket = serverSocket.accept();
BufferedReader br = new BufferedReader(
        new InputStreamReader(socket.getInputStream()));
BufferedWriter bw = new BufferedWriter(
        new OutputStreamWriter(socket.getOutputStream()));
String ligne = "";
while (!(ligne = br.readLine()).equals("END")) {
    System.out.println(ligne);
}
bw.write("Il fait beau\n");
bw.flush();
bw.close();
br.close();
socket.close();
output java :
POST / HTTP/1.1
Host: localhost:6020
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:13.0) Gecko/20100101 Firefox/13.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Connection: keep-alive
Referer: http://localhost:8080/test.html
Content-Length: 30
Content-Type: text/plain; charset=UTF-8
Origin: http://localhost:8080
Pragma: no-cache
Cache-Control: no-cache
LIGNE
2
il fait beau
So, I correctly receive the message sent by JavaScript, but the alert is always empty. How do I respond to this message?
I tried a lot of possibilities but they don't work. And I don't want to use a servlet; it's too heavy for this.
Thanks.
Edit:
I did this :
bw.write("HTTP/1.1 200 OK\r\n"+
"Content-Type: text/html; charset=utf-8\r\n"+
"Content-Length: 13\r\n\r\n" +
"il fait beau\n");
and this:
String data = "il fait beau \n";
StringBuilder builder = new StringBuilder();
builder.append("HTTP/1.1 200 OK\r\n");
builder.append("Content-Type: text/html; charset=utf-8\r\n");
builder.append("Content-Length:" + data.length() + "\r\n\r\n");
builder.append(data);
bw.write(builder.toString());
But the alert remain empty. Maybe it's a problem in the javascript.
Update the output java : section of your question to show what happens when you send the new request.
@DarkXphenomenon the output didn't change.
Try opening http://localhost:6020 in your browser, to debug this. You should see the text displayed in the browser, if your response is correct. You can also debug the response, using LiveHttpHeaders addon in firefox Also, note the edit to my answer. Change the content-type to text/plain.
The content-type shouldn't be the issue...
@DarkXphenomenon I just open the http://localhost:6020 in a browser and "il fait beau" appeared.
I know why it doesn't work xD Will update my answer...
That means your response is fine. It should work, try again with your request.. To simplify XMLHttp, you should use jquery in future.
@DarkXphenomenon I joined this with the all project and it works. I don't know why it doesn't work before. Thanks.
The JavaScript needs to see a full HTTP response. Merely sending back characters makes it discard the reply as an invalid HTTP response.
In your java code, send back something like this
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: <length of data>
---data here---
Reference
Something like:
StringBuilder builder = new StringBuilder();
builder.append("HTTP/1.1 200 OK\r\n");
builder.append("Content-Type: text/plain; charset=utf-8\r\n");
builder.append("Content-Length: " + data.length() + "\r\n\r\n");
builder.append(data);
bw.write(builder.toString());
How do I build a complete HTTP response?
You need to add at least HTTP/1.1 200 OK because only then, readyState==4 in JavaScript and a Content-Length for your response. (Or you send a chunked response but I guess this is too complicated in your case.)
@Birk what's a chunked response?
If you send Content-Length, you have to know the length of your response beforehand. If you do not know it in advance, you can make use of Transfer-Encoding: chunked (http://en.wikipedia.org/wiki/Chunked_transfer_encoding)
Try:
bw.write("HTTP/1.1 200 OK\r\n"+
"Content-Type: text/html; charset=utf-8\r\n"+
"Content-Length: 13\r\n\r\n" +
"il fait beau\n");
HTTP headers are separated by \r\n (CRLF). Headers and body are separated by \r\n\r\n.
Note that you set the length to 13 because you also have to count the \n at the end of your string.
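A related gotcha, not raised in this thread: Content-Length must count bytes, not characters, so String.length() undercounts for non-ASCII UTF-8 text. A minimal sketch (hypothetical class name) that encodes the body first and derives the header from the byte count:

```java
import java.nio.charset.StandardCharsets;

// Sketch: build the response from the encoded bytes so Content-Length is
// always the byte count, even for non-ASCII text such as "é" (2 bytes in UTF-8).
public class HttpResponseDemo {
    static String build(String body) {
        byte[] payload = body.getBytes(StandardCharsets.UTF_8);
        return "HTTP/1.1 200 OK\r\n"
                + "Content-Type: text/plain; charset=utf-8\r\n"
                + "Content-Length: " + payload.length + "\r\n\r\n"
                + body;
    }

    public static void main(String[] args) {
        // "il fait beau\n" is pure ASCII, so its byte count is 13, matching
        // the hand-counted value used in the answer above.
        System.out.print(build("il fait beau\n"));
    }
}
```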
EDIT: It does not work because of the cross-domain-policy. http://localhost:6020 is not the same port as the website which executes your JavaScript and so the xmlhttprequest might not be delivered.
Thanks, but always nothing in the alert.
Sorry, I have currently Content-Length: 13\r\n\r\n+ in my code. Remove the +. This is a typing mistake. I have changed it in my example. Then it should work.
It's not the problem, I remove it but always nothing.
+1 :D That didn't occur to me!
@AnthonyMaia, is the site from which you are executing the XMLHttp request also on localhost?
Even if it was, it cannot be the same port, because his Java program is already using it. I also cannot find another reason why it should not work, but I didn't think of it at first either^^
This would be difficult... You could write a simple proxy using curl, if you have PHP installed on your primary server. You could also take a look there: http://stackoverflow.com/questions/787067/is-there-a-xdomainrequest-equivalent-in-firefox
Try adding a Access-Control-Allow-Origin: *\r\n to the response headers of your Java application and see if it works then.
load partial view into a modal popup
I am working with MVC 3 and C# 4.0.
I have created a partial view.
I have a button on my page; when I click this button I want to load the partial view into a modal popup. I presume the best way to do this is via JavaScript; I am already using jQuery in the application.
Any pointers as to how I could do this?
You could use jQuery UI dialog.
So you start by writing a controller:
public class HomeController : Controller
{
public ActionResult Index()
{
return View();
}
public ActionResult Modal()
{
return PartialView();
}
}
then a Modal.cshtml partial that will contain the partial markup that you want to display in the modal:
<div>This is the partial view</div>
and an Index.cshtml view containing the button which will show the partial into a modal when clicked:
<script src="@Url.Content("~/Scripts/jquery-ui-1.8.11.js")" type="text/javascript"></script>
<script type="text/javascript">
$(function () {
$('.modal').click(function () {
$('<div/>').appendTo('body').dialog({
close: function (event, ui) {
$(this).remove();
},
modal: true
}).load(this.href, {});
return false;
});
});
</script>
@Html.ActionLink("show modal", "modal", null, new { @class = "modal" })
Obviously, in this example I have put the scripts directly into the Index view, but in a real application those scripts would go into a separate JavaScript file.
Thank you very much for this example, it helps a lot. One further question: I have integrated MVC3 into my web forms application, so the control that triggers the modal is on the web forms page. I don't think I can use @Html.ActionLink in web forms; is there an alternative?
Of course there is an alternative. All that the Html.ActionLink helper generates is an <a> tag, so you could generate this markup in your web forms application.
Hello Darin, i have tried using this like http://stackoverflow.com/questions/15876993/display-partialview-as-popup but i still just get a new page. Any idea?
I hate css & now this answer too..killed a lot of my time. If anyone else is having trouble please use https://www.codeproject.com/Tips/826002/Bootstrap-Modal-Dialog-Loading-Content-from-MVC-Pa to understand things better
Compensation Capacitor and Resistor Values
I'm currently working on a power electronics project in which I will convert a 10-28 V DC input voltage to a 12 V DC output voltage. I've decided to use the LM3481/3488 configured as a SEPIC. I've worked out which values to use for the sense resistor, feedback resistors, frequency-setting resistor, inductors, capacitors, etc.
Since I am a newbie in power electronics: what purpose do the compensation capacitor and resistor serve, and how should I choose values for them?
Please share your schematic so we're all talking about the same thing.
I have no access to my work computer right now. But I'll add a schematic to demonstrate what I am talking about.
What purpose do the compensation capacitor and resistor serve, and how should I choose values for them?
Inside those chips (and many others of this type) is an error amplifier and that amplifier will have some degree of "phase margin". Having some phase margin means that by the time the gain has dropped to 0 dB (at high frequencies of course), the phase has not reversed. This means it cannot become an oscillator.
However, between the switching drive output and the error amplifier feedback point is an LC circuit and this will, progressively, add a phase error that becomes -90 degrees at LC resonance. Above resonance, the output amplitude falls rapidly but the phase can also change rapidly to -180 degrees. Well above resonance, the output amplitude is too small to meet the criteria for oscillation.
So, the problem usually lies at a range of frequencies around the LC filter resonance point. This is where instability is most likely to occur if measures are not taken.
What the error amp does is use components that prevent the combined phase shift of the error amp AND LC filter creating a situation where the loop gain is greater than unity at a phase shift of exactly 180 degrees. If it didn't do that, the circuit would oscillate at or around the LC frequency.
These components stop the error amplifier's phase shift lowering too much around the LC resonant frequency i.e. they keep the margin reasonably high: -
Hopefully you should be able to see that the added peak in the phase margin prevents a phase margin of zero occurring while the loop gain is still greater than unity.
The picture above is taken from document AN1286 and this is for the LM3481 so it might also prove to be useful to read.
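As a quick sanity check when picking compensation values, the frequency region to watch is set by the output LC filter's resonance; here is a small sketch using illustrative, made-up component values (not taken from the LM3481 datasheet):

```python
import math

def lc_resonance_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC filter: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values only: a 10 uH inductor with a 100 uF output capacitor
# puts the resonance, and hence the likely instability region, near 5 kHz.
print(round(lc_resonance_hz(10e-6, 100e-6)))  # ~5033 Hz
```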
How to import APEX application using SQLPLUS
I have built an application in APEX 18 in my development environment. Now I want to deploy the whole application to the test environment. I have already exported the application from the Application Builder, so I have the export SQL with me.
My question is: is there any way that I can import this file into my test environment using SQL*Plus, and not via the Import functionality in the Application Builder?
Can you please let me know how to import using SQL*Plus?
Thanks,
Abha..
Definitely. It's well documented here:
https://docs.oracle.com/database/apex-18.2/AEAPI/Import-Script-Examples.htm
Here's what I'd typically have for an install script:
declare
c_workspace constant apex_workspaces.workspace%type := 'DEMO';
c_app_id constant apex_applications.application_id%type := 20000;
c_app_alias constant apex_applications.alias%type := 'MYDEMO';
l_workspace_id apex_workspaces.workspace_id%type;
begin
apex_application_install.clear_all;
select workspace_id
into l_workspace_id
from apex_workspaces
where workspace = c_workspace;
apex_application_install.set_workspace_id(l_workspace_id);
apex_application_install.set_application_id(c_app_id);
apex_application_install.set_application_alias(c_app_alias);
apex_application_install.generate_offset;
end;
/
@f10000.sql
Thanks. I tried to follow all the steps for "Import Application with Specified Application ID" in the same DEV environment, but I am getting the following error:
ERROR at line 1:
ORA-20001: Package variable g_security_group_id must be set.
ORA-06512: at "APEX_180100.WWV_FLOW_API", line 1808
ORA-06512: at "APEX_180100.WWV_FLOW_API", line 1843
ORA-06512: at "APEX_180100.WWV_FLOW_API", line 3276
ORA-06512: at line 2
Here, I am just trying to import the application into the same workspace and the same schema, but with a different application ID.
Using the AWS CLI, how do I get a figure showing how many requests my API Gateway(s) have served?
I've tried a few googled suggestions but they all seem to come back with a Count value of 1, which is obviously incorrect. For clarity, I'm looking to see how many requests (success and errors) the API has served between given dates.
Thanks
Edit: A good suggestion, to include failed commands:
aws cloudwatch get-metric-statistics --namespace AWS/ApiGateway --metric-name Count --start-time 2021-06-02T09:00:00 --end-time 2021-06-03T09:00:00 --period 3600 --region eu-west-1 --statistics Average --unit Count
Others were very similar, tweaking the period in most cases as I recall (several were on a VM that has since been destroyed).
Do you have any of these incorrect commands to show that you tried?
@Marcin Good point, thank you. Example added.
I noticed the Statistic here is Average instead of Sum to get the total count. I got here because I am having issues finding the right way to filter this by api gateway.
User generated content and security
I'm gonna create a new site, and I want to make it a user-generated-content site.
Basically, this is my site's function: users sign up on my WordPress site and submit content from the wp-admin panel; points are awarded if the post is approved by an admin, and later those points are converted into cash.
So I need some advice. I've heard WordPress has low security. If I let users create content from wp-admin, could they hack my site? Is it good or bad to let them use the wp-admin area? Is it possible to build a user content submission form on the frontend?
I'm not interested in plugins like TDO Mini Forms, because I need a points system.
Some tips needed. Thanks in advance
PS: Can I use BuddyPress to achieve this?
You need to stop listening to people who say PHP and WordPress are not secure; it's about how you do it. You can do this in WordPress without using BuddyPress; in fact, you don't need it for anything.
All you need is to make default users contributors, plus a small plugin which takes care of their points when their posts are approved.
Need help formatting Tshark command string from bash script
I'm attempting to run multiple parallel instances of tshark to comb through a large number of pcap files in a directory and copy the filtered contents to a new file. I'm running into an issue where tshark is throwing an error on the command I'm feeding it.
It must have something to do with the way the command string is interpreted by tshark as I can copy / paste the formatted command string to the console and it runs just fine. I've tried formatting the command several ways and read threads from others who had similar issues. I believe I'm formatting correctly... but still get the error.
Here's what I'm working with:
Script #1: - filter
#Takes user arguments <directory> and <filter> and runs a filter on all captures for a given directory.
#
#TO DO:
#Add user prompts and data sanitization to avoid running bogus job.
#Add concatenation via mergecap /w .pcap suffix
#Delete filtered, unmerged files
#Add mtime filter for x days of logs
starttime=$(date)
if [ -z "$1" ]; then echo "no directory specified, you must specify a directory (VLAN)"
elif [ -z "$2" ]; then echo "no filter specified, you must specify a valid tshark filter expression"
else
    echo "$2" > /home/captures-user/filtered/filter-reference
    find /home/captures-user/Captures/"$1" -type f | xargs -P 5 -L 1 /home/captures-user/tshark-worker
    rm /home/captures-user/filtered/filter-reference
fi
echo Start time is $starttime
echo End time is $(date)
Script #2: - tshark-worker
# $1 = path and file name
#takes the output from the 'filter' command stored in a file and loads a local variable with it
filter=$(cat /home/captures-user/filtered/filter-reference)
#strips the directory off the current working file
file=$(sed 's/.*\///' <<< $1 )
echo $1 'is the file to run' $filter 'on.'
#runs the filter and places the filtered results in the /filtered directory
command=$"tshark -r $1 -Y '$filter' -w /home/captures-user/filtered/$file-filtered"
echo $command
$command
When I run ./filter ICE 'ip.addr == <IP_ADDRESS>' I get the following output for each file. Note that the inclusion of == in the filter expression is not the issue; I've tried substituting 'or' and get the same output. Also, tshark is not aliased to anything, and there's no script with that name; it's the raw tshark executable in /usr/sbin.
Output:
/home/captures-user/Captures/ICE/ICE-2019-05-26_00:00:01 is the file to run ip.addr == <IP_ADDRESS> on.
tshark -r /home/captures-user/Captures/ICE/ICE-2019-05-26_00:00:01 -Y 'ip.addr == <IP_ADDRESS>' -w /home/captures-user/filtered/ICE-2019-05-26_00:00:01-filtered
tshark: Display filters were specified both with "-d" and with additional command-line arguments.
Welcome to StackOverflow. Consider adding the output of what happens when $command is run. Right now it looks like we only have the output of the two echo commands (which are echoing as one would expect)
Thanks for the welcome.The third line is the output from $command.
"tshark: Display filters were specified both with "-d" and with additional command-line arguments."
It is odd because -d is not in the command string. I've seen posts by others reporting similar (but not exact) errors. None of their solutions have worked for me.
Colons (:) in filenames are probably the problem; try to remove them if possible, or escape them (\:)
No luck. I made a dummy directory with files absent the colons. Same result when running the script. I can copy / paste the same formatted command strings that the script creates (with colon-containing file name) and it works. The only time it's a problem is when launching the command from a script.
However, I agree colons in filenames are not good practice. Unfortunately I didn't set up the environment and the script that formats the capture names. I'll ask the offending party to amend it.
Is there any chance someone else on the system aliased tshark to something with tshark -d......?
Good question, but alas no.
I updated the post to include that tidbit.
I believe the problem is one of quoting. Try running your script with "set -x" to get more debug output and you'll see where the filter becomes problematic due to the spaces in it. By the way, if the path and/or filename could have spaces in them, then you could run into trouble there too.
As I mentioned in the comments, I think this is a problem with quoting and how your command is constructed due to spaces in the filter (and possibly in the file name and/or path).
You could try changing your tshark-worker script to something like the following:
# $1 = path and file name
#takes the output from the 'filter' command stored in a file and loads a local variable with it
filter="$(cat /home/captures-user/filtered/filter-reference)"
#strips the directory off the current working file
file="$(sed 's/.*\///' <<< $1 )"
echo $1 'is the file to run' $filter 'on.'
#runs the filter and places the filtered results in the /filtered directory
tshark -r "${1}" -Y "${filter}" -w "${file}"-filtered
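To see why the unquoted version misbehaved, here is a standalone sketch (made-up filename and address) showing how the shell word-splits an unquoted filter into several arguments, while the quoted form keeps it as one:

```shell
filter='ip.addr == 10.0.0.1'

# Unquoted: the shell splits $filter on whitespace, producing extra arguments
# that tshark then misinterprets as additional filter expressions.
set -- tshark -r capture.pcap -Y $filter
echo "unquoted argc: $#"    # 7 arguments

# Quoted: the filter survives as a single argument, as tshark expects.
set -- tshark -r capture.pcap -Y "$filter"
echo "quoted argc: $#"      # 5 arguments
```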
Thanks, Chris! It was a whitespace issue although I still don't fully understand it.
I am able to successfully run the generated command as long as I put the full filter string in parentheses. As an argument to the 'filter' script it must additionally be enclosed in single quotes so bash doesn't try to evaluate the filter before putting it in the temporary file.
The really odd thing to me is that I can use the same filter with only single quotes when pasting it to the command line. I'm not sure why parentheses are needed when running from a script.
The answer to the following question might help explain things a bit? https://unix.stackexchange.com/questions/131766/why-does-my-shell-script-choke-on-whitespace-or-other-special-characters
BTW, what version of tshark were you using? Because the filter was incorrectly spit previously such that only "ip.addr" was interpreted as the display filter, "== <IP_ADDRESS>" was then the "leftover" argument being interpreted as the capture filter, but it's a bit strange that the error message would indicate "-d" instead of "-f", since "-f" is the option for the capture filter. Could be a tshark bug.
This was Tshark version 1.10.14
Oh, well in that case, once the bug is fixed, it's not going to be back-ported to such an old release, after all, 1.10 went EOL 4 years ago: https://wiki.wireshark.org/Development/LifeCycle
WPF mvvm wizard
I am implementing a WPF MVVM wizard and I am wondering about the right approach for performing a DoOperation when a new wizard page (UserControl) is loaded.
DoOperation is implemented in the MyWizard.ViewModel class, while the UserControl load happens in the MyWizard.View namespace.
How can I connect between the UserControl loaded event to the DoOperation api?
I tried the following:
xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
<i:Interaction.Triggers>
    <i:EventTrigger EventName="Loaded">
        <i:InvokeCommandAction Command="{Binding Path=RunOperation}"/>
    </i:EventTrigger>
</i:Interaction.Triggers>
RunOperation calls DoOperation.
It doesn't work; RunOperation is not being called.
Is this the right approach, or is there a better way to perform an operation in the MyWizard.ViewModel class?
Your approach should work. Have you checked your output console for binding errors? Is RunOperation a command? Is the DataContext of the UserControl already set when the Loaded event is raised? Have you implemented the triggers like this in your UserControl?
<UserControl x:Class="..."
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity">
<i:Interaction.Triggers>
<i:EventTrigger EventName="Loaded">
<i:InvokeCommandAction Command="{Binding Path=RunOperation}"/>
</i:EventTrigger>
</i:Interaction.Triggers>
<Grid>
...
</Grid>
</UserControl>
SLAM possibilities,where does it end?
Hello,
the goal : something that looks like a droid, able to do SLAM without falling down the stairs
the problem : the more I dig, the less I think it's feasible, even by spending money and hours around ROS configs/packages and hardware
at first I wanted to use sonars, then distance sensors on a servo, then an RPLIDAR
then I realized they have limitations (I don't want a rotating widget on top of my robot that only detects obstacles in a single plane of sight a couple of feet above the ground)
so I thought doing SLAM based on an OpenCV stereo-camera feature approach would be better; I found SLAM articles/videos about using Intel D435 cameras; then even more limitations (needed better odometry/IMU/offloaded SLAM...)
then there is the D435i camera; then I found the Intel T265 camera, optimized for SLAM with an integrated IMU and such (yet grayscale fisheye, no depth available... map memory size?)
now I am really lost and confused...
I really don't want to spend a lot of money on devices that won't work or don't do what I need, or, worse, will end up unsupported or super complicated to interface/tune within ROS
then finding out a new/better device is now out there (a couple of days after receiving what I ordered)
SLAM is one problem among many, I want to focus on other parts of the robot (TTS, voice recognition, actuators...)
any idea/opinion on this ?
am I going the right way ?
is slam possible for a hobbyist or is it only possible inside money loaded universities ?
thanks
Originally posted by phil123456 on ROS Answers with karma: 51 on 2020-02-19
Post score: 0
is slam possible for a hobbyist or is
it only possible inside money loaded
universities ?
I think that sums up a lot of the other comments.
SLAM is hard. There are a lot of 2D packages out there because they're reliable and robust. Most of the time, people that have a 2D lidar also have another form of 3D sensing for obstacle avoidance. The lidar is for positioning and longer-range obstacle detection than a depth camera or a sonar can handle.
Once you leave the planar-land of SLAM, you are entering an active research field. Within ROS, it's great that anyone can publish a package and have it available to the world, but that also creates implementations of varying quality and maintenance. This is especially true in edge research fields where every couple of years someone's new approach deems their prior work no longer state of the art. None of the visual or dense SLAM packages that I know of are going to be trivial for a hobbyist to configure or use (they need high-quality, highly calibrated cameras/extrinsics). Many times these are use-case specific and may not be as general as lidar SLAM. ORB-SLAM is the one I usually point people to, but there are also about 5 different popular implementations with their own special quirks on GitHub.
My recommendation: as a hobbyist, unless your interest is SLAM itself, don't leave planar-land. Use the lidar SLAM packages and use that D435i you bought as an obstacle-avoidance camera to give you 3D sensing. Or maybe add in some visual odometry to improve your positioning.
Originally posted by stevemacenski with karma: 8272 on 2020-02-19
Post score: 1
Comment by phil123456 on 2020-02-20:
ok thanks....
Comment by stevemacenski on 2020-02-20:
Can you mark this as correct to get it off the unanswered questions queue?
Comment by phil123456 on 2020-02-20:
I was waiting for someone else to give another opinion before purchasing a lidar :-)
I would agree with what @stevemacenski states. Start with an RPLIDAR or similar and get gmapping to work. This should be fairly easy, as it has been done countless times before.
Typically the stairs/hole problem is solved differently and depends on your robot (is it holonomic, i.e. can it generate a velocity in any direction, or is it differential drive?). You can put a second lidar on the robot at an angle and manually detect any sudden terrain changes, or similarly use a depth sensor to do the same.
These things you learn by experience. I would recommend you go ahead and get started, as opposed to spending countless hours researching what to do. A lot of problems will become clear just by trying something. Happy tinkering!
Originally posted by achille with karma: 464 on 2020-02-21
This answer was NOT ACCEPTED on the original site
Post score: 1
Comment by phil123456 on 2020-02-22:
fair enough
Depending on what you do, it might even be enough to localize and navigate; this can be done to some degree from a floor-plan map and scan matching (LIDAR) in AMCL. Nice video and software link for creating the pgm and yaml map from "pictures"/plans. SLAM might not be needed in your case.
https://www.youtube.com/watch?v=ySlU5CIXUKE
Regarding the stairs problem: to be safe you need additional sensors, because light-based sensors have problems with reflective surfaces and transparency. Depending on the floor, they might run into problems. But if that isn't the issue, RTABMAP (from depth images) has nice features, like an obstacle map, clear-space map, etc. I haven't tested negative-height obstacles yet, but that should certainly be possible. RTABMAP also provides visual odometry and SLAM. It needs some computation power, though.
Originally posted by Dragonslayer with karma: 574 on 2020-02-22
This answer was NOT ACCEPTED on the original site
Post score: 1
Comment by phil123456 on 2020-02-22:
interesting, I'll have a look
Does each child add reactive getters and setters to props?
Imagine you have a huge list of things to do:
[
{
id: 1,
name: 'Pet a cat',
priority: 'extreme'
},
...
]
The app has three components: grandparent -> parent -> child
The todo list is declared in the grandparent's data function. This is where the list first becomes reactive. The list is then passed from grandparent to parent and from parent to child as a prop. From my limited understanding, each component in the chain adds its own getters and setters. My concern is that this way of passing props is not optimal for performance. I mean, should I restructure my code in a way that minimizes such prop passing or not?
There are several caveats that can bite you when passing props down through multiple levels this way if multiple levels intend to manipulate that data. Generally speaking, if multiple components need to act on the same data, I prefer to use a vuex store instead of passing props that way they all have a single "source of truth"
@WesleySmith, I plan on emitting events so that only the grandparent decides what kind of data manipulation to perform. What I wonder is how much computational load this approach adds
It will add load, there will be multiple copies, though it's doubtful you'd ever notice it.
@WesleySmith, If you had to guess, how much more load compared to the vuex approach?
Hard to say; it would depend on how the code is written, what it does with the data, the size of the data, how often the data is changed, etc. The general rule of thumb, though, is to not try to optimizing your application until it's clear that it needs it. I.e., if it works and is fast enough for your needs, it's fine.
@WesleySmith, thank you, sir! Have a good day or night!
First of all, props in Vue are readonly. You'd get a runtime error if you ever tried to update one. So actually there are no setters for the parent and child components, only for the grandparent.
If your child components want to update it, you'll have to send events up until some component can actually update the data.
Second, you won't have any performance issues with it; it's the way Vue works and it's good at it. Actually, it's the proper and most straightforward way to achieve what you want. Obviously, if the parent/child chain extends even more, it's going to be a pain to use only props + events, but no performance issue I think.
The other solution, to avoid data being passed through each component descendant, is to use a "Vuex store". This is not super easy to set up and understand for beginners, though. You may give it a try if your app is becoming more complex.
I'd suggest you to stick with your current solution as it has nothing wrong.
Happy coding :)
unable to call Azure Service Bus from Python
Trying to run the hello world example to put something on the queue, create a queue ... anytime I call azure I get an error.
Here is the code:
from azure.servicebus import *
bus_service = ServiceBusService(service_namespace='testtest', account_key='my_access_token', issuer='my_issuer')
bus_service.create_topic('mytopic')
Here is the error I get back:
$ /c/Python27/python pythontest.py
Traceback (most recent call last):
  File "pythontest.py", line 4, in <module>
    bus_service.create_topic('mytopic')
  File "c:\Python27\lib\site-packages\azure\servicebus\servicebusservice.py", line 142, in create_topic
    request.headers = _update_service_bus_header(request, self.account_key, self.issuer)
  File "c:\Python27\lib\site-packages\azure\servicebus\__init__.py", line 185, in _update_service_bus_header
    request.headers.append(('Authorization', _sign_service_bus_request(request, account_key, issuer)))
  File "c:\Python27\lib\site-packages\azure\servicebus\__init__.py", line 192, in _sign_service_bus_request
    return 'WRAP access_token="' + _get_token(request, account_key, issuer) + '"'
  File "c:\Python27\lib\site-packages\azure\servicebus\__init__.py", line 233, in _get_token
    connection.send(request_body)
  File "c:\Python27\lib\site-packages\azure\http\winhttp.py", line 313, in send
    self._httprequest.send(request_body)
  File "c:\Python27\lib\site-packages\azure\http\winhttp.py", line 198, in send
    ctypes.memmove(safearray.pvdata, request, len(request))
WindowsError: exception: access violation writing 0x0000000000000000
It always gives me the same error whether I put something on a queue or create a queue, create a topic, send message to a topic, etc.
any ideas?
lol, miniBSOD. I think no one will help you except microsoft tech support.
Is this the exact code you are using? At least the account_key is not right, it should be in base64 format
bus_service = ServiceBusService(service_namespace='testtest', account_key='my_access_token', issuer='my_issuer')
no, I stripped out the account key and the issuer. I've created multiple namespaces and created several topics and tried to send messages to them using the different account keys, and none of them work. I think the issue is with permissions on the Azure side.
Could you try account_name='weidongtestservicebus2' and account_key='LNkJyAIfeYlGyr3A8Wa7bwQovZ9b1/ZdOEtDr5bhxo0=' and issuer='owner'? If it works for you, then it maybe the storage account issue. Which python version are you using?
This is an issue with the azure library when used from 64-bit Python.
The changes to make it work are small, so I have listed them here for you. The fix will be pushed to GitHub + PyPI shortly as well.
Make the following changes to azure/http/winhttp.py:
Add c_size_t to the import statement
from ctypes import c_void_p, c_long, c_ulong, c_longlong, c_ulonglong, c_short, c_ushort, c_wchar_p, c_byte, c_size_t
Replace CoInitialize(0) with
CoInitialize(None)
Replace the Com related APIs section with this
_ole32 = oledll.ole32
_oleaut32 = WinDLL('oleaut32')
_CLSIDFromString = _ole32.CLSIDFromString
_CoInitialize = _ole32.CoInitialize
_CoInitialize.argtypes = [c_void_p]
_CoCreateInstance = _ole32.CoCreateInstance
_SysAllocString = _oleaut32.SysAllocString
_SysAllocString.restype = c_void_p
_SysAllocString.argtypes = [c_wchar_p]
_SysFreeString = _oleaut32.SysFreeString
_SysFreeString.argtypes = [c_void_p]
_SafeArrayDestroy = _oleaut32.SafeArrayDestroy
_SafeArrayDestroy.argtypes = [c_void_p]
_CoTaskMemAlloc = _ole32.CoTaskMemAlloc
_CoTaskMemAlloc.restype = c_void_p
_CoTaskMemAlloc.argtypes = [c_size_t]
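Before patching, it may help to confirm you are actually on 64-bit Python, since the crash comes from pointer truncation that 32-bit builds never hit. A minimal sketch, nothing azure-specific:

```python
import struct

# The access violation is a 64-bit-only bug: with no explicit restype,
# ctypes assumes a 32-bit C int, so the pointer returned by functions
# like CoTaskMemAlloc gets truncated before memmove uses it.
bits = struct.calcsize("P") * 8  # pointer size of this interpreter, in bits
print("Running %d-bit Python" % bits)

if bits == 64:
    print("Apply the winhttp.py changes above (or use 32-bit Python).")
```

That is also why the patch above adds `restype = c_void_p` and pointer-sized `argtypes`: it tells ctypes to keep the full 64-bit value.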
Handle Enter Key for search and login page
I have an ASP.NET 3.5 page in VB.NET that uses a master page and several content pages to display content. On the master page there is a search text box for the site, so it shows up on all the pages. There is also a login screen to get to a member's account.
The issue is: when a user is on a content page, puts something into the search box, and presses Enter, it should process the search request.
But....
If the user is on the login page, I would like to disable the Enter key for the search, and if the user presses Enter on the login page, process the login request.
Is there an easy way to accomplish this?
Create your form in an asp:Panel and set the DefaultButton to the button you want to be submitted when Enter is pressed:
<asp:Panel DefaultButton="btnSubmit" runat="server">
<asp:TextBox ID="UserName" runat="server"/>
<asp:TextBox ID="Password" TextMode="Password" runat="server"/>
<asp:Button ID="btnSubmit" OnClick="btnSubmit_Click" Text="Sign In" runat="server"/>
</asp:Panel>
Note: asp:panel creates a div
Python plot forced sorting dates alphabetically instead of chronologically
I am plotting my dataset (Mortality in England and Wales against region) and the dates on the X-axis keep sorting alphabetically. It goes Apr 06, Apr 07,..., Feb 06, Feb 07,..., Sep-13,Sep-14.
I want them to be in chronological order (Like it is in my data set) Is there any way to turn off forced sorting? I am using matplot lib and seaborn for this plot.
Also if anyone knows a way to write out this code without repeating the code 13 times I would be happy to hear it.
My code is as follows
plt.figure(figsize=(48,12))
sns.lineplot(data=Regional,x='Date',y='England and Wales')
sns.lineplot(data=Regional,x='Date',y='England')
sns.lineplot(data=Regional,x='Date',y='North East')
sns.lineplot(data=Regional,x='Date',y='North West')
sns.lineplot(data=Regional,x='Date',y='Yorkshire and the Humber')
sns.lineplot(data=Regional,x='Date',y='East Midlands')
sns.lineplot(data=Regional,x='Date',y='West Midlands')
sns.lineplot(data=Regional,x='Date',y='East of England')
sns.lineplot(data=Regional,x='Date',y='Greater London')
sns.lineplot(data=Regional,x='Date',y='South East')
sns.lineplot(data=Regional,x='Date',y='South West')
sns.lineplot(data=Regional,x='Date',y='Wales')
sns.lineplot(data=Regional,x='Date',y='Non Residents')
plt.legend(['England and Wales','England','North East','North West','Yorkshire and the Humber','East Midlands','West Midlands','East of England','Greater London','South East','South West','Wales','Non Residents'])
Turn your date into datetime type before plotting.
your dates probably aren't actually dates, but instead strings. Also, you should melt your data and pass the resulting region column as your "hue" to a single call to lineplot
Why not use Matplotlib? No sorting happens behind the scenes unless you force it to
If you put your dates into ISO 8601 format they will sort properly.
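To illustrate the point: zero-padded ISO 8601 strings sort the same way lexicographically as chronologically, while 'Mon-YY' labels do not.

```python
# 'Mon-YY' labels sort alphabetically: Apr < Feb < Sep as strings,
# even though Sep-05 is chronologically first.
labels = ["Apr-06", "Feb-06", "Sep-05"]
print(sorted(labels))  # ['Apr-06', 'Feb-06', 'Sep-05']

# The same dates in ISO 8601 sort correctly as plain strings.
iso = ["2006-04-01", "2006-02-01", "2005-09-01"]
print(sorted(iso))     # ['2005-09-01', '2006-02-01', '2006-04-01']
```

Converting to a real datetime type (as in the other answers) works for the same reason, and additionally gives matplotlib proper date axes.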
As mentioned, using pd.melt and datetime format will likely solve your issues. You can use pd.to_datetime to convert your dates to datetime format. Assuming your strings are 'Jul-06' format, you can specify your format is '%b-%y'. Otherwise, you can check this table for the correct format specifier.
pd.melt can reformat your dataframe to plot using a single line of code. Assuming your dataframe contains columns only for date and regions, you can use the following code to pull everything together:
Regional['Date'] = pd.to_datetime(Regional['Date'], format='%b-%y')
Regional = pd.melt(Regional, id_vars=['Date'], var_name='Region', value_name='Mortality')
sns.lineplot(data=Regional, x='Date', y='Mortality', hue='Region')
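As a quick sanity check on the '%b-%y' specifier (note that %b is locale-dependent; this assumes an English locale):

```python
from datetime import datetime

# %b = abbreviated month name, %y = two-digit year; the day defaults to 1.
d = datetime.strptime("Jan-06", "%b-%y")
print(d)  # 2006-01-01 00:00:00
```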
Thank you for this. Usually I would use hue, but I cannot wrap my head around how to organise my CSV to have region as a header. I currently have it organised with date as the row and regions as the columns, but there is no header for region.
It looks roughly like this
Date, England, North East, ..., Wales
Jan-06, 300, 345, 654
Feb-06, ... etc
So all my headers are separate regions
I see pandas.melt repeated the date 14 times, once for each region. I did initially consider this but thought there was a more efficient way; I suppose not. Thank you.
Instead of melting you can also create a pd.MultiIndex to automatically plot as desired with matplotlib:
fig, ax = plt.subplots()  # create the axes referenced below
Regional['Date'] = pd.to_datetime(Regional['Date'])
Regional = Regional.set_index('Date')
Regional.columns = pd.MultiIndex.from_tuples([('Region', col) for col in Regional.columns])
Regional.plot(ax=ax, title='Daily Mortality Rate by Region', ylabel='Mortality')
plt.legend(title='Regions', labels=[col[1] for col in Regional.columns])
The seaborn way (See other answer) is a bit cleaner, but this is just a matplotlib solution.
Create Instance from assembly and its dependencies in C#
I have an app (let's just call it MyApp) that dynamically creates source code for a class and then compiles it. When it compiles the source code I also reference another DLL (that is the base class for this newly created class) that already exists in another folder. I do the following to compile and output the DLL:
//Create a C# code provider
CodeDomProvider provider = CodeDomProvider.CreateProvider("CSharp");
//Set the complier parameters
CompilerParameters cp = new CompilerParameters();
cp.GenerateExecutable = false;
cp.GenerateInMemory = false;
cp.TreatWarningsAsErrors = false;
cp.WarningLevel = 3;
cp.OutputAssembly = "SomeOutputPathForDLL";
// Include referenced assemblies
cp.ReferencedAssemblies.Add("mscorlib.dll");
cp.ReferencedAssemblies.Add("System.dll");
cp.ReferencedAssemblies.Add("System.Core.dll");
cp.ReferencedAssemblies.Add("System.Data.dll");
cp.ReferencedAssemblies.Add("System.Data.DataSetExtensions.dll");
cp.ReferencedAssemblies.Add("System.Xml.dll");
cp.ReferencedAssemblies.Add("System.Xml.Linq.dll");
cp.ReferencedAssemblies.Add("MyApp.exe");
cp.ReferencedAssemblies.Add(@"SomeFolder\SomeAdditionalReferencedDLL.dll");
// Set the compiler options
cp.CompilerOptions = "/target:library /optimize";
CompilerResults cr = provider.CompileAssemblyFromFile(cp, "PathToSourceCodeFile");
Later on in my app (or next time the app runs) I try to create an instance of the class. I know where both the DLL for the newly created class (let's call it Blah) and the base class is. I use the following code to try to create an instance of the new class:
Assembly assembly = Assembly.LoadFile("PathToNewClassDLL");
Blah newBlah = assembly.CreateInstance("MyApp.BlahNamespace.Blah") as Blah;
When I call Assembly.CreateInstance like I do above I get an error saying it cannot create the instance. When I check assembly.GetReferencedAssemblies() it has the standard references and the reference for my app (MyApp.exe) but it doesn't have the reference for the dependent base class that I used when compiling the class originally (SomeAdditionalReferencedDLL.dll).
I know that I have to add the base class reference somehow in order to create the instance, but I am not sure how to do this. How do I create an instance of a class from an assembly when I have the assembly and all of its dependencies?
Thanks
Just curious, but did you try loading your dependency into your AppDomain first, then loading the custom generated one?
If you manually load an external DLL (Assembly), IT WILL NOT AUTOMATICALLY LOAD WHAT YOU REFERENCED.
So you will have to create an AssemblyLoader: code that checks the referenced assemblies of your assembly and loads them yourself.
For complications where the referenced assemblies reside in odd folders on your computer, not together with your compiled DLL, check out the AppDomain.CurrentDomain.AssemblyResolve event. (You use it to make .NET accept your assembly being loaded even though it's not in the GAC or next to your compiled DLL.)
After you have loaded your referenced DLL manually with code, the CreateInstance will work.
Are you sure that when it was compiling, it referenced the instance of the DLL you think it is referencing? It might be resolving the path to somewhere other than where you think such that when instantiating your class, it can't find that DLL anymore. I recommend getting a Fusion log of both the compilation process and the type creation to see how the types are being resolved.
I think .Net is trying to find the "SomeAdditionalReferencedDLL.dll" in your bin or GAC. Have you tried to do an Assembly.Load for the "SomeAdditionalReferencedDLL.dll" before creating a new Blah?
This sounds like the assembly you're referencing in the dynamically generated assembly isn't in one of the standard probing paths.
Where does it reside relative to everything else?
You should fire up the fusion log viewer to see where things are going awry.
First of all, I think you have a circular dependency; your last paragraph sums it up.
You need to rethink your application and decide if you have responsibilities set up correctly.
Why circular dependency:
For the new dll to be generated it requires MyApp.exe;
MyApp.exe cannot be used without the new dll.
Maybe post what your goal is and we can help structure your app correctly.
With proper responsibility delegated, MyApp.exe should be making the newly generated assembly do work, not requiring MyApp.exe to use objects from the new dll.
For example, you should only have Execute on the newly generated assembly:
public static void RenderTemplate(String templatepath, System.IO.Stream outstream, XElement xml, Dictionary<String, object> otherdata)
{
var templateFile = System.IO.File.ReadAllText(templatepath);
var interpreter = new Interpreter();
interpreter.ReferencedAssemblies.Add("System.Core.dll"); // linq extensions
interpreter.ReferencedAssemblies.Add("System.Xml.dll");
interpreter.ReferencedAssemblies.Add("System.Xml.Linq.dll");
interpreter.UsingReferences.Add("System.Linq");
interpreter.UsingReferences.Add("System.Xml");
interpreter.UsingReferences.Add("System.Xml.Linq");
interpreter.UsingReferences.Add("System.Collections.Generic");
interpreter.Execute(templateFile, outstream, xml, otherdata);
}
//Constructor
static MyClass()
{
//Hook the event raised when .NET cannot resolve a referenced assembly
AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);
}
/// <summary>
/// Remembers assemblies referenced by others that are not in the main EDV directory
/// </summary>
static Dictionary<string, string> _AssembliesPath;
/// <summary>
/// Called when .NET cannot resolve a referenced assembly.
/// Looks the assembly up among the referenced assemblies recorded outside the main EDV directory and loads it
/// </summary>
/// <param name="sender"></param>
/// <param name="args"></param>
/// <returns></returns>
static Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
if (_AssembliesPath != null && _AssembliesPath.ContainsKey(args.Name))
{
Assembly lAssembly = Assembly.LoadFile(_AssembliesPath[args.Name]);
AddAssemblyReferencedAssemblies(lAssembly, System.IO.Path.GetDirectoryName(lAssembly.Location));
return lAssembly;
}
Error = string.Format("Assembly {0} could not be loaded", args.Name);
return null;
}
/// <summary>
/// Remembers assemblies referenced by others that are not in the main EDV directory
/// </summary>
/// <param name="pAssembly"></param>
/// <param name="psRootPath"></param>
static void AddAssemblyReferencedAssemblies(Assembly pAssembly, string psRootPath)
{
if (_AssembliesPath == null) _AssembliesPath = new Dictionary<string, string>();
foreach (AssemblyName lRefedAss in pAssembly.GetReferencedAssemblies())
if (!_AssembliesPath.ContainsKey(lRefedAss.FullName))
{
string lsRoot = psRootPath + "\\" + lRefedAss.Name + ".";
string lsExt;
if (System.IO.File.Exists(lsRoot + (lsExt = "dll")) || System.IO.File.Exists(lsRoot + (lsExt = "exe")))
{
_AssembliesPath.Add(lRefedAss.FullName, lsRoot + lsExt);
}
}
}
//Function call
Assembly lAssembly = Assembly.LoadFile(lsExternalAssemblyPath);
AddAssemblyReferencedAssemblies(lAssembly, System.IO.Path.GetDirectoryName(lsExternalAssemblyPath));
How to Convert UIView to PDF within iOS?
There are a lot of resources about how to display a PDF in an app's UIView. What I am working on now is creating a PDF from UIViews.
For example, I have a UIView with subviews like text views, UILabels, and UIImages; how can I convert a big UIView as a whole, including all its subviews and sub-subviews, to a PDF?
I have checked Apple's iOS reference. However, it only talks about writing pieces of text/images to a PDF file.
The problem I am facing is that there is a lot of content I want to write to a file as PDF. If I wrote it to the PDF piece by piece, it would be a huge amount of work.
That's why I am looking for a way to write UIViews to PDFs or even bitmaps.
I have tried the source code I copied from other Q/A within Stack Overflow. But it only gives me a blank PDF with the UIView bounds size.
-(void)createPDFfromUIView:(UIView*)aView saveToDocumentsWithFileName:(NSString*)aFilename
{
// Creates a mutable data object for updating with binary data, like a byte array
NSMutableData *pdfData = [NSMutableData data];
// Points the pdf converter to the mutable data object and to the UIView to be converted
UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
UIGraphicsBeginPDFPage();
// draws rect to the view and thus this is captured by UIGraphicsBeginPDFContextToData
[aView drawRect:aView.bounds];
// remove PDF rendering context
UIGraphicsEndPDFContext();
// Retrieves the document directories from the iOS device
NSArray* documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString* documentDirectory = [documentDirectories objectAtIndex:0];
NSString* documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];
// instructs the mutable data object to write its context to a file on disk
[pdfData writeToFile:documentDirectoryFilename atomically:YES];
NSLog(@"documentDirectoryFileName: %@",documentDirectoryFilename);
}
Note that the following method creates just a bitmap of the view; it does not create actual typography.
-(void)createPDFfromUIView:(UIView*)aView saveToDocumentsWithFileName:(NSString*)aFilename
{
// Creates a mutable data object for updating with binary data, like a byte array
NSMutableData *pdfData = [NSMutableData data];
// Points the pdf converter to the mutable data object and to the UIView to be converted
UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
// draws rect to the view and thus this is captured by UIGraphicsBeginPDFContextToData
[aView.layer renderInContext:pdfContext];
// remove PDF rendering context
UIGraphicsEndPDFContext();
// Retrieves the document directories from the iOS device
NSArray* documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString* documentDirectory = [documentDirectories objectAtIndex:0];
NSString* documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];
// instructs the mutable data object to write its context to a file on disk
[pdfData writeToFile:documentDirectoryFilename atomically:YES];
NSLog(@"documentDirectoryFileName: %@",documentDirectoryFilename);
}
Also make sure you import:
#import <QuartzCore/QuartzCore.h>
+1 I went through several posts on pdf generation before finding this simple solution.
I wanted to do the same and your method seems to work fine, but its quality is very low. Did I miss anything?
I suspect the quality is pretty low because it's taking the UIView and converting it to a raster, whereas the other methods of rendering text and images directly preserve them as vectors in the PDF file.
I am following this method, but I get a blank PDF generated. Can anyone help me?
It works awesome, cheers! The only problem I have is that it generates the PDF as just one page. How can I separate pages instead of having one long PDF file?
Gives me a blank screen on iOS6.1
Hi, this works fine but it doesn't create a PDF of the whole UITableView. Can you please suggest how to capture the invisible cells as well?
Seems this works OK, except if the view scrolls, in which case it only renders visible content. I imagine aView.bounds is controlling this?
I found that I was getting a blank screen when I ran the code in the viewDidLoad method. Maybe try to run the method when a button is pressed or something, and it should be fine. The quality is fantastic, thanks Casillic!
On iOS 5.1 I get half a page; is that normal?
This draws my view such that it fills the page. If I use drawInContext it's the right size but in the top left. How do I tell it where to draw the view?
Hey Casillic, that's a great answer! But I have a doubt: if I have a form which has 40-50 text fields and 10-20 text views, and the user has entered 1000 words in a text view, how can I show all the text view data?
@amit soni - This method is really for on-screen info only. You should probably generate a report screen from the user's info and have that report converted to PDF. Best of luck.
Hey, see this html2pdf cordova plugin https://github.com/moderna/cordova-plugin-html2pdf/blob/master/src/ios/Html2pdf.h; it can print a scrollable webview into a multipage pdf
This crashes on iOS 8 at UIGraphicsEndPDFContext
Can I somehow convert this to PNG or JPG? Because I want to be able to save it to the camera roll as well as use it in my application. Very nice solution though, +1
@casillic your code works fine for a UIView, but I need it for a UIScrollView. I am using a UIScrollView instead of a UIView as the parameter, and the UIScrollView's contentSize instead of the UIView's bounds. This generates a PDF, but only of the visible portion. Do you have a proper solution for UIScrollView?
Additionally, if anyone is interested, here is the Swift 3 code:
func createPdfFromView(aView: UIView, saveToDocumentsWithFileName fileName: String)
{
let pdfData = NSMutableData()
UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil)
UIGraphicsBeginPDFPage()
guard let pdfContext = UIGraphicsGetCurrentContext() else { return }
aView.layer.render(in: pdfContext)
UIGraphicsEndPDFContext()
if let documentDirectories = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first {
let documentsFileName = documentDirectories + "/" + fileName
debugPrint(documentsFileName)
pdfData.write(toFile: documentsFileName, atomically: true)
}
}
This creates a PDF of only the first page! What about a scroll view?
Great question! I'm not the one to ask, however. Perhaps start another question?
I have the same problem as @SaurabhPrajapati, so I created a question
If someone is interested, here's Swift 2.1 code:
func createPdfFromView(aView: UIView, saveToDocumentsWithFileName fileName: String)
{
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil)
    UIGraphicsBeginPDFPage()

    guard let pdfContext = UIGraphicsGetCurrentContext() else { return }

    aView.layer.renderInContext(pdfContext)
    UIGraphicsEndPDFContext()

    if let documentDirectories = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true).first {
        let documentsFileName = documentDirectories + "/" + fileName
        debugPrint(documentsFileName)
        pdfData.writeToFile(documentsFileName, atomically: true)
    }
}
Your guard statement means the UIGraphicsEndPDFContext() isn't called - maybe add a defer earlier?
@DavidH Thanks, David, good idea! Also, I think it's a good idea to add a completion block like completion: (success: Bool) -> () for the guard return cases.
Yesterday I posted a Q&A on how to produce a high-resolution image by rendering the view into a large image, then drawing the image into a PDF, if interested: http://stackoverflow.com/a/35442187/1633251
A super easy way to create a PDF from a UIView is to use a UIView extension.
Swift 4.2
extension UIView {

    // Export the view as a pdf, save it in the documents directory, and return the pdf file path
    func exportAsPdfFromView() -> String {
        let pdfPageFrame = self.bounds
        let pdfData = NSMutableData()
        UIGraphicsBeginPDFContextToData(pdfData, pdfPageFrame, nil)
        UIGraphicsBeginPDFPageWithInfo(pdfPageFrame, nil)
        guard let pdfContext = UIGraphicsGetCurrentContext() else { return "" }
        self.layer.render(in: pdfContext)
        UIGraphicsEndPDFContext()
        return self.saveViewPdf(data: pdfData)
    }

    // Save the pdf file in the document directory
    func saveViewPdf(data: NSMutableData) -> String {
        let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
        let docDirectoryPath = paths[0]
        let pdfPath = docDirectoryPath.appendingPathComponent("viewPdf.pdf")
        if data.write(to: pdfPath, atomically: true) {
            return pdfPath.path
        } else {
            return ""
        }
    }
}
Credit: http://www.swiftdevcenter.com/create-pdf-from-uiview-wkwebview-and-uitableview/
Thanks, it worked. One question: I have a long scroll view, but the PDF file only shows a portion of it. Is there a way to tweak your code, for example to give it a height?
@HusseinElbeheiry Just use a contentView to generate the pdf. When I create a scrollView (UIScrollView) I will definitely create a contentView (UIView), put the contentView in the scrollView, and add all subsequent elements to the contentView. In this case, it is enough to use the contentView to create a PDF document: contentView.exportAsPdfFromView
Where can I find the saved PDF file?
@MidhunNarayan inside your app document directory. Just print the pdfPath on console and access it.
@AshishChauhan I have printed the path, but when I open my Files app it is not showing. Do I need anything extra to see the converted pdf in the Files app?
With Swift 5 / iOS 12, you can combine CALayer's render(in:) method with UIGraphicsPDFRenderer's writePDF(to:withActions:) method in order to create a PDF file from a UIView instance.
The following Playground sample code shows how to use render(in:) and writePDF(to:withActions:):
import UIKit
import PlaygroundSupport

let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
view.backgroundColor = .orange

let subView = UIView(frame: CGRect(x: 20, y: 20, width: 40, height: 60))
subView.backgroundColor = .magenta
view.addSubview(subView)

let outputFileURL = PlaygroundSupport.playgroundSharedDataDirectory.appendingPathComponent("MyPDF.pdf")
let pdfRenderer = UIGraphicsPDFRenderer(bounds: view.bounds)

do {
    try pdfRenderer.writePDF(to: outputFileURL, withActions: { context in
        context.beginPage()
        view.layer.render(in: context.cgContext)
    })
} catch {
    print("Could not create PDF file: \(error)")
}
Note: in order to use playgroundSharedDataDirectory in your Playground, you first need to create a folder called "Shared Playground Data" in your macOS "Documents" folder.
The UIViewController subclass complete implementation below shows a possible way to refactor the previous example for an iOS app:
import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        view.backgroundColor = .orange

        let subView = UIView(frame: CGRect(x: 20, y: 20, width: 40, height: 60))
        subView.backgroundColor = .magenta
        view.addSubview(subView)

        createPDF(from: view)
    }

    func createPDF(from view: UIView) {
        let documentDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let outputFileURL = documentDirectory.appendingPathComponent("MyPDF.pdf")
        print("URL:", outputFileURL) // When running on simulator, use the given path to retrieve the PDF file

        let pdfRenderer = UIGraphicsPDFRenderer(bounds: view.bounds)
        do {
            try pdfRenderer.writePDF(to: outputFileURL, withActions: { context in
                context.beginPage()
                view.layer.render(in: context.cgContext)
            })
        } catch {
            print("Could not create PDF file: \(error)")
        }
    }

}
On an iPhone, where can I find this file? Using the Files application, it is not showing.
I don't know why, but casilic's answer gives me a blank screen on iOS 6.1. The code below works.
- (NSMutableData *)createPDFDatafromUIView:(UIView *)aView
{
    // Creates a mutable data object for updating with binary data, like a byte array
    NSMutableData *pdfData = [NSMutableData data];

    // Points the pdf converter to the mutable data object and to the UIView to be converted
    UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();

    // draws rect to the view and thus this is captured by UIGraphicsBeginPDFContextToData
    [aView.layer renderInContext:pdfContext];

    // remove PDF rendering context
    UIGraphicsEndPDFContext();

    return pdfData;
}

- (NSString *)createPDFfromUIView:(UIView *)aView saveToDocumentsWithFileName:(NSString *)aFilename
{
    // Creates a mutable data object for updating with binary data, like a byte array
    NSMutableData *pdfData = [self createPDFDatafromUIView:aView];

    // Retrieves the document directories from the iOS device
    NSArray *documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentDirectory = [documentDirectories objectAtIndex:0];
    NSString *documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];

    // instructs the mutable data object to write its context to a file on disk
    [pdfData writeToFile:documentDirectoryFilename atomically:YES];
    NSLog(@"documentDirectoryFileName: %@", documentDirectoryFilename);

    return documentDirectoryFilename;
}
This is the exact same code as my answer, just broken out into two separate methods. How do you think this fixed your blank-screen problem when it is the same code?
I had the same experience. I got a blank PDF from the first code. Splitting it in two as Alex did made it work. Can't explain why.
Javascript variable changes value itself
So, I am writing code for my webpage so that if one of two boxes is open (registration, login), you can't open the other one. I'm stuck with the following problem: once you close the box by clicking outside of it, the value changes to 1 and you should be able to open the box again, but all of a sudden it sets itself to 0 and you can't open the box anymore.
If you close the box with the CLOSE button, you CAN open the box again, but not when you close it by clicking outside the box.
At the start of the JS file I declare var openable and set its value to 1.
var openable = 1;
Code for closing with CLOSE button:
$("#closeReg").click(function(){
    openable = 1;
    console.log(openable);
    $('#register').css('display','none');
    $("#register").siblings().css("opacity", "1");
});
Code for clicking outside the box:
var subject = $("#register");
if (e.target.id == subject.attr('id')) {
    subject.fadeOut();
    subject.siblings().css("opacity", "1");
    openable = 1;
    console.log(openable);
}
Code for opening the box:
$("#regButton").click(function(){
    console.log(openable);
    if (openable == 1) {
        $("#register").fadeIn();
        $("#register").siblings().css("opacity", "0.5");
        openable = 0;
        console.log(openable);
    }
});
Whole code:
https://pastebin.com/vC461VGy
The cause of the problem you've described isn't illustrated in the code you've posted. Are you certain the value of openable is 0? Or are you just assuming this on the basis that the condition (or some part of it) doesn't fire?
https://pastebin.com/vC461VGy This is the whole code.
I am logging the value of openable in the regButton click function before the if, and it says it's ZERO.
Better to put it in a Fiddle/Codepen so it can be run, rather than merely viewed.
Or even better, post it in a SO code snippet...
https://codepen.io/anon/pen/xWrvrp
var openable = 1;

$("#closeReg").click(function() {
    openable = 1;
    console.log(openable);
    $('#register').css('display', 'none');
    $("#register").siblings().css("opacity", "1");
});

$("#regButton").click(function() {
    console.log(openable);
    if (openable == 1) {
        $("#register").fadeIn();
        $("#register").siblings().css("opacity", "0.5");
        openable = 0;
        console.log(openable);
    }
});

$('body').click(function(e) {
    if ($(e.target).closest('#register').length <= 0 && !$(e.target).is('#closeReg') && !$(e.target).is('#regButton')) {
        $('#closeReg').trigger('click');
    }
});
#register {
    background-color: #FFAAAA;
}

body {
    height: 200px
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>

<button id="closeReg">Close</button>
<button id="regButton">Open</button>

<div id="register">
    register box
</div>
Here, try this sample.
Your code for hiding when clicking outside was not working correctly, mainly because e is undefined; and even if we consider it to be a click event, your test was wrong.
The main difficulty is to detect the click outside the box without detecting the click that opens it, or you will end up opening and closing at the same time. Hence why I added the two tests to make sure the targets aren't buttons that handle specific behaviors.
Also, instead of doubling the code to close the modal, I have made it so when you click anywhere, it triggers a click on the close button, that way only one event is to be kept up to date. Note that this could also be do-able by creating your own event instead but meh...
This doesn't work: the close button is not triggered when clicking outside the register box, and the Register button does not get clicked easily, only after several attempts.
k-linear category
In the tasks I have it says
"Let $k$ be a field. Show that the structure of a $k$-linear category on a category $\mathcal{C}$ is equivalent to $\mathcal{C}$ being a module category (see Module category in nLab) over $\mathrm{vect_k}$, the vector spaces over $k$."
I have a few questions to that:
1) I googled and found that "$k$-linear" means that the category is enriched over $\mathrm{vect_k}$. I found the definition of enriched, but that's really abstract, could somebody maybe explain what enriched over $\mathrm{vect_k}$ explicitly means?
2) What exactly does $k$-linear category ON A CATEGORY $\mathcal{C}$ mean? I have never seen it phrased like that.
3) For the proof he says as a hint "define $v.c$ for $v \in \mathrm{vect_k}$ and $c \in \mathcal{C}$ as the object representing the functor $v \otimes_k \mathrm{Hom}( - , c)$.
Why do I have to identify an object like $v.c$ as a functor?
Any further hints on how to prove this are of course always good to see, but I would be glad if someone can help me understand the question :D
Thanks in advance!
Btw: it could be useful if you could provide the references you are using.
I assume you have found this definition of enriched category and we are talking of enrichment over a monoidal category.
Personally I believe that the best way to think about enriched categories is to consider them as categories whose $\hom$-functor, composition mappings, and identities can be lifted respectively to a $\mathcal V$-valued functor and $\mathcal V$-valued mappings.
I could make this a little more formal but that could be long, so if you need additional details feel free to ask.
A little more concretely, for the case of the monoidal category $\mathbf{Vect}_k$ of $k$-vector spaces (having the tensor product $(\otimes)$ as monoidal product, and $k$, the field, i.e. the unique up to isomorphism vector space of dimension 1, as monoidal unit), an enriched category amounts to a category $\mathbf C$ whose $\hom$-sets $\mathbf C[a,b]$ carry an additional (enriching) structure of $k$-vector spaces, in which the compositions $\circ_{a,b,c} \colon \mathbf C[b,c] \times \mathbf C[a,b] \to \mathbf C[a,c]$ are $k$-bilinear maps.
If you adopt this point of view, and see enriched categories as categories with additional structure (on the $\hom$-sets and on the compositions) then it should be pretty natural to call this additional structure the $k$-linear category structure over the (underlying) category considered (as we talk of the $k$-vector space structure over a set).
Since in the formula $v \otimes_k \text{Hom}(-,c)$ you are using the tensor product of $\mathbf{Vect}_k$ I am assuming you are proving that a $k$-linear category is a $\mathbf{Vect}_k$-module category.
Actually the key to understanding your hint is in the phrase "the object representing the functor $v \otimes_k \text{Hom}(-,c)$"; this implies that $v.c$ is not a functor but an object such that you have a natural isomorphism
$$\text{Hom}(-,v.c) \cong v \otimes_k \text{Hom}(-,c)$$
of $\mathbf{Vect}_k$-valued functors.
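For what it's worth, granting that these representing objects exist, the module-category axioms then follow formally from the uniqueness of representing objects. This is only a sketch: for the associativity of the action one has, naturally in $d$,

$$\operatorname{Hom}\bigl(d,(v\otimes_k w).c\bigr)\;\cong\;(v\otimes_k w)\otimes_k\operatorname{Hom}(d,c)\;\cong\;v\otimes_k\bigl(w\otimes_k\operatorname{Hom}(d,c)\bigr)\;\cong\;v\otimes_k\operatorname{Hom}(d,w.c)\;\cong\;\operatorname{Hom}\bigl(d,v.(w.c)\bigr),$$

so the Yoneda lemma gives a canonical isomorphism $(v\otimes_k w).c\cong v.(w.c)$; the unit axiom $k.c\cong c$ is the same argument with $k\otimes_k\operatorname{Hom}(d,c)\cong\operatorname{Hom}(d,c)$. Whether the representing objects exist is a separate question.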
I hope this helps, if not feel free to ask for additional details.
Hey, thanks for the answer! I understood 1) and 2) - thanks for that! In 3) I understood what you wrote; the idea is that there is a $c' \in \mathcal{C}$ such that $\mathrm{Hom}(-, c') \cong v \otimes \mathrm{Hom}(-, c)$, and then I can define $v.c$ to be $c'$. Right? I have given that a lot of thought over the last hours, but unfortunately I have absolutely no idea how to find such a $c'$. Do you have any idea there?
Unfortunately no. It is not clear to me why such a functor should be representable. Is there any additional hypothesis you forgot in the question? Alternatively could you provide the references you have on the subject? (where does the question come from? from which book or article are you getting the definitions?)
I can start the same way you did: Unfortunately no. Or at least not all of them. The questions are from a handwritten collection of exercises I got from my professor. Some of the definitions in there are taken from https://arxiv.org/abs/math/0111139; "k-linear" and "representable" aren't explained at all, so it seems like part of the exercise was to do some research on those terms. The only thing I can add is the next question (as they sometimes correlate), where it says:
Show: a k-linear functor $F: \mathcal{C} \rightarrow \mathcal{C'}$ with $\mathcal{C}, \mathcal{C'}$ $k$-linear amounts to a $\mathbb{C}$-module functor $\mathcal{C} \rightarrow \mathcal{C'}$ ... but I don't think this is really helpful.
A possible way to proceed could be to find a universal element for the $\mathcal V$-functor $v \otimes_k \text{Hom}(-,c)$ but I am not sure what such an element could be.
@Giorgia Mossa, since I asked this question I have come a little further. I know want to show the other way around, that for a module category I can construct that the $\mathrm{Hom}$-spaces are k-linear. I solved the problem of the scalar multiplication, but I am really confused with the vector addition. Since my basis is a module category I have no addition anywhere near, and the composition is not commutative. Can you help?
a litte mistake there, the second sentence should be "I now want to show..." sorry
@P.Schulze Sorry I have read your comment just now, could it be better to talk about this in a private chat?
no Problem! I think I actually already fixed it - thanks a lot!
Is the monoidal unit really the $0$ space? Wouldn't it be $K$ if we're talking about vector spaces over $K$?
IsAdisplayName you're right, I didn't notice at the time. I'll fix in a moment, thanks for pointing out.
"Let $k$ be a field. Show that the structure of a $k$-linear category on a category $\mathcal{C}$ is equivalent to $\mathcal{C}$ being a module category (see Module category in nLab) over $\mathrm{vect_k}$, the vector spaces over $k$."
I don't think this is true. Consider the category where the objects are 7-dimensional vector spaces over the field $k$ and the morphisms are linear maps. This is a $k$-linear category: that is, it's enriched over $\mathrm{vect_k}$ - or equivalently, in simpler terms, the hom-sets are vector spaces over $k$ and composition is bilinear. But I don't think it's a module category over $\mathrm{vect_k}$, since you can't tensor a 7-dimensional vector space by an arbitrary vector space and get another 7-dimensional vector space in some way that obeys the axioms of a module category.
Database connection with error in CodeIgniter 2.1.4
I get this kind of error when I try to install a PHP script made with CodeIgniter 2.1.4.
This is the error:
A PHP Error was encountered
Severity: 8192
Message: Methods with the same name as their class will not be constructors in a future version of PHP; CI_DB_driver has a deprecated constructor
Filename: database/DB_driver.php
Line Number: 31
This is the code:
class CI_DB_driver {

    var $username;
    var $password;
    var $hostname;
    var $database;
    var $dbdriver = 'mysql';
    var $dbprefix = '';
    var $char_set = 'utf8';
    var $dbcollat = 'utf8_general_ci';
    var $autoinit = TRUE; // Whether to automatically initialize the DB
    var $swap_pre = '';
    var $port = '';
    var $pconnect = FALSE;
    var $conn_id = FALSE;
    var $result_id = FALSE;
    var $db_debug = FALSE;
    var $benchmark = 0;
    var $query_count = 0;
    var $bind_marker = '?';
    var $save_queries = TRUE;
    var $queries = array();
    var $query_times = array();
    var $data_cache = array();
    var $trans_enabled = TRUE;
    var $trans_strict = TRUE;
    var $_trans_depth = 0;
    var $_trans_status = TRUE; // Used with transactions to determine if a rollback should occur
    var $cache_on = FALSE;
    var $cachedir = '';
    var $cache_autodel = FALSE;
    var $CACHE; // The cache class object
What is on line 31?
I think the issue is that you are running a very old CodeIgniter on a new version of PHP.
Class constructors have been called __construct() for a long time now (many versions of PHP).
And as this code does not show an old or new version of a constructor, it is probably NOT the script that is actually generating this error.
It's farther down the file: there is an old-style constructor method named CI_DB_driver, and PHP hates that.
Then you will have to install an older version of PHP. What version are you using?
How can I make this skeleton a little more realistic?
So I've made this open type hourglass, which I am happy with in terms of graphical style and realism, but the key feature is the skeleton inside of it. I have failed to make the skeleton match (in any sense of the word) the rest of the hourglass.
Without Skeleton:
With Skeleton:
As you can see, the skeleton looks quite bad. What could I do to improve the overall look?
Thanks
What kind of image is the skelton? vector grafic or bitmap? Size etc?
I'd personally try to sharpen up the skeleton and add more realism through the use of highlights and shadows. Your hourglass has highlights and shadows, and therefore so should your skeleton inside it.
also be careful when sharpening the skeleton. it is already very grainy and sharpening it will only accentuate it even further. try to smooth it out.
Another point I wanted to mention is that, in comparison to the base of your hourglass, the top is at a wrong angle: the front should be raised up more. Also, the circle at the top is a bit too wide, judging by the side tips of the hourglass.
Thanks a lot for your feedback. Sadly, I do not have enough time (as this is for a project due soon) to fix the perspective of the hourglass, but I will definitely attempt to add highlights/shadows to the skeleton.
Realistic? You might find a stock image of a 3D skeleton, as opposed to a blurry and somewhat pixelated line drawing. You don't use lines in your hourglass, so in keeping with the style something without lines in the skeleton is ideal.
Also, what makes the skeleton stand like that? Gravity should pull him to flop against the glass, bend his knees, twist his body a bit. At the very least, repositioning the arms to a more relaxed state might help; right now he almost looks like he's standing at attention, like a cowboy ready to draw.
(Kudos on the shadows and texture on the wooden base!)
VBA - Need to append to the end of a line in a large text file
I have a system where I can only read 50 column's worth of data from a table at a time. The table has 150 columns that I need to read. The table has 1 million lines.
What I have been doing so far is to read the first 50 columns and write all 1 million lines to a text file.
What I need to do next is read the next block of 50 columns and append lines 1 to 1 million to the existing text file that I have already written.
This is where I become stuck. I'm not sure how I can write to a specific line in the text file. I was hoping not to have to loop each line in the existing text file to get to the row I need to append to. Is there a way to specify the line I need to append to without having to loop the text file to find the correct row?
why don't you just use the export functionalities of ms office? you could just export your table to csv file for example
Because Excel has a limit on the number of lines.
What exactly is the "system" you're reading from?
A bespoke mainframe application we had built.
maybe this discussion will help you - http://bytes.com/topic/access/answers/819484-access-table-text-file-csv-tab-etc-there-easy-way-do
If each line is unique: read the first 50 columns and 1 million records, and write them to a file. Read the next 50 columns and 1 million records, then loop through the file (a million lines) and append to it. Repeat one more time. If you are familiar with SQLite, you can create prepared statements and write to that database. It might be faster with a primary key indexed.
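The append step described in the comment above (gluing a new block of columns onto each existing line) does not require seeking to specific lines: streaming both files in parallel and writing a third is enough, so the million lines never sit in memory at once. A minimal sketch in Python (the file names are hypothetical), since the pattern is the same in any language:

```python
def append_columns(base_path, new_cols_path, out_path, sep=";"):
    # Stream both files in parallel: row i of new_cols_path is glued
    # onto the end of row i of base_path. Assumes identical row order
    # and equal line counts in both files.
    with open(base_path) as base, open(new_cols_path) as extra, \
            open(out_path, "w") as out:
        for left, right in zip(base, extra):
            out.write(left.rstrip("\n") + sep + right)

# Hypothetical usage: columns 1-50 were exported to block1.txt,
# columns 51-100 to block2.txt; the merge produces merged.txt.
```

Repeating this once more with the third 50-column block gives all 150 columns, at the cost of one extra pass over the data per block.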
@pony2deer i'm using excel not access
since you want to do it in vba, it will be the same strategy
@pony2deer it's not because access can hold many more lines
so how are you writing the first 50 lines into a text file, supply the code and we can work on it
I read and then write as I get each line. The link you provided suggests loading into a table before exporting it. Excel would not let me store all the data before I export it.
Can you show how you are reading the data? How do you import?
Are you able to query each set of fifty columns in the exact same row order?
I would use SQL OLEDB and simply output the data using a RecordSet. It might take some time but should work. Try using LIMIT and FETCH in a loop in case the result won't load in a single go. Alternatively, add a loop to the SQL statement and fetch records above or below a specific row ID (if your record ID is numeric, not a hash), e.g. WHERE ID > [OLD_ID] AND ID <= [OLD_ID] + 100
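The chunked WHERE ID > [OLD_ID] idea from the comment above can be sketched with Python's built-in sqlite3 module (the table and column names here are made up); the same keyset-pagination pattern carries over to the OLEDB query:

```python
import sqlite3

def fetch_in_chunks(conn, chunk_size=100):
    # Keyset pagination: remember the highest id seen so far and ask
    # only for rows above it, so each query walks the primary-key
    # index instead of rescanning the rows already skipped.
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, payload FROM t WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, chunk_size),
        ).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]
```

Each iteration issues one small query, so no chunk ever has to fit the whole million-row result in memory.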
Using SQL OLEDB and RecordSet
Dim fileName As String, textData As String, textRow As String, fileNo As Integer

fileNo = FreeFile
' fileName must be assigned a full output path before the Open statement
Open fileName For Output As #fileNo

Set rs = CreateObject("ADODB.Recordset")
sConn = "OLEDB;Provider=SQLOLEDB;Data Source=" & _
        serverInstance & ";Initial Catalog=" & database & _
        ";User ID=" & userId & ";Password=" & password & ";"

strSQL = "SELECT * FROM [table_name] "

rs.Open strSQL, sConn, 3, 3
rs.MoveFirst

Do
    textData = ""
    textData = textData & ";" & rs("Col1")
    textData = textData & ";" & rs("Col2")
    textData = textData & ";" & rs("Col3") & vbNewLine
    Print #fileNo, textData
    rs.MoveNext
Loop Until rs.EOF

Close #fileNo
What do American people call the classes that students go to after school for SATs?
What do American people call the classes that students go to after school for SATs? In Taiwan, we call it a cram school, but there is no such phrase in any American dictionary. Could Americans please tell me what you call the classes or schools? Thank you.
Note that such classes aren't as common in the US as they are in many Asian countries. In particular, while SATs might be given throughout mandatory education, colleges generally only care about the last set of exams... which you can take multiple times, and chose which one to submit after you have the scores. Cram school in general isn't as much of a thing.
In my experience (growing up in California, but I think this holds anywhere with a large Asian population), there is a whole industry of after-school SAT tutoring aimed specifically at Asian students. See e.g. https://www.nytimes.com/2017/10/25/magazine/asian-test-prep-centers-offer-parents-exactly-what-they-want-results.html . So I would imagine that a lot of them are mainly talked about in non-English languages, even in the US.
@Clockwork-Muse That is simply not true in this day and age in the US. When I was in HS they were not common, today they are.
It's probably true that they're not as common as in many Asian countries, but they are indeed pretty common. As you start looking at higher income brackets, unsurprisingly they get much more common.
SAT prep courses or classes
That's the bottom line for a term.
We don't say cram school, though we do cram for exams.
For completeness, cramming for exams usually means waiting until the night before the exam to study for it. The metaphor is that you're squeezing all your studying into a short period of time instead of spreading it out.
Create dynamic updated graph with Python
I need to write a script in Python that will take dynamically changing data (the source of the data does not matter here) and display a graph on the screen.
I know how to use matplotlib, but the problem with matplotlib is that I can display the graph only once, at the end of the script. I need to be able not only to display the graph once, but also to update it on the fly, each time the data changes.
I found that it is possible to use wxPython with matplotlib to do this, but it is a little bit complicated for me, because I am not familiar with wxPython at all.
So I will be very happy if someone shows me a simple example of how to use wxPython with matplotlib to show and update a simple graph. Or, if there is some other way to do this, that would be good too.
Update
PS: since no one answered, I looked at the matplotlib help noted by @janislaw and wrote some code. This is a dummy example:
import time
import matplotlib.pyplot as plt
def data_gen():
    a = data_gen.a
    if a > 10:
        data_gen.a = 1
    data_gen.a = data_gen.a + 1
    return range(a, a + 10)

def run(*args):
    background = fig.canvas.copy_from_bbox(ax.bbox)
    while 1:
        time.sleep(0.1)
        # restore the clean slate background
        fig.canvas.restore_region(background)
        # update the data
        ydata = data_gen()
        xdata = range(len(ydata))
        line.set_data(xdata, ydata)
        # just draw the animated artist
        ax.draw_artist(line)
        # just redraw the axes rectangle
        fig.canvas.blit(ax.bbox)

data_gen.a = 1
fig = plt.figure()
ax = fig.add_subplot(111)
line, = ax.plot([], [], animated=True)
ax.set_ylim(0, 20)
ax.set_xlim(0, 10)
ax.grid()
manager = plt.get_current_fig_manager()
manager.window.after(100, run)
plt.show()
This implementation has problems; for example, the script stops if you try to move the window. But it can basically be used.
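One thing that makes such live plots easier to manage is keeping the data source separate from the GUI loop, so the redraw callback only polls it. A minimal sketch (pure standard library; the class name is invented) of a rolling window whose snapshot can be handed straight to line.set_data():

```python
from collections import deque

class RollingSeries:
    """Fixed-length window of the most recent samples for a live plot."""

    def __init__(self, maxlen=100):
        self._ys = deque(maxlen=maxlen)  # old samples fall off the left

    def push(self, value):
        self._ys.append(value)

    def snapshot(self):
        # x is just the sample index; y is a stable copy, safe to hand
        # to the plotting code while new samples keep arriving
        ys = list(self._ys)
        return list(range(len(ys))), ys

series = RollingSeries(maxlen=3)
for v in (10, 20, 30, 40):
    series.push(v)
xs, ys = series.snapshot()  # -> ([0, 1, 2], [20, 30, 40])
```

With this split, the timer callback becomes a few lines: take a snapshot, call set_data, blit.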
I was just trying to do this today and gave up on matplotlib. I just settled on sending all the data over a socket to a Processing script that does all the drawing, but that's probably not the answer you were hoping for.
matplotlib is easily embeddable inside any GUI you like, and does not need to be static. There are examples in the docs - see the User interfaces section. There are also traits/traitsui/chaco, maybe more suited to this kind of job, but they require a paradigm shift (link).
As an alternative to matplotlib, the Chaco library provides nice graphing capabilities and is in some ways better-suited for live plotting.
See some screenshots here, and in particular, see these examples:
data_stream.py
spectrum.py
Chaco has backends for qt and wx, so it handles the underlying details for you rather nicely most of the time.
Updated. That said, there are many new libraries in the Python ecosystem since this answer: Bokeh, Altair, Holoviews, and others.
Here is a class I wrote that handles this issue. It takes a matplotlib figure that you pass to it and places it in a GUI window. It's in its own thread so that it stays responsive even when your program is busy.
import Tkinter
import threading
import matplotlib
import matplotlib.backends.backend_tkagg

class Plotter():
    def __init__(self, fig):
        self.root = Tkinter.Tk()
        self.root.state("zoomed")
        self.fig = fig
        t = threading.Thread(target=self.PlottingThread, args=(fig,))
        t.start()

    def PlottingThread(self, fig):
        canvas = matplotlib.backends.backend_tkagg.FigureCanvasTkAgg(fig, master=self.root)
        canvas.show()
        canvas.get_tk_widget().pack(side=Tkinter.TOP, fill=Tkinter.BOTH, expand=1)

        toolbar = matplotlib.backends.backend_tkagg.NavigationToolbar2TkAgg(canvas, self.root)
        toolbar.update()
        canvas._tkcanvas.pack(side=Tkinter.TOP, fill=Tkinter.BOTH, expand=1)

        self.root.mainloop()
In your code, you need to initialize the plotter like this:
import matplotlib.pyplot
fig = matplotlib.pyplot.figure()
Plotter(fig)
Then you can plot to it like this:
fig.gca().clear()
fig.gca().plot([1,2,3],[4,5,6])
fig.canvas.draw()
I couldn't get your solution to work; since I am new to Tkinter I am not sure what is wrong. However, what I have figured out so far is that the mainloop cannot be in a thread.
Instead of matplotlib.pyplot.show() you can just use matplotlib.pyplot.show(block=False). This call will not block the program from executing further.
Example of a dynamic plot. The secret is to do a pause while plotting; here I use networkx:
G.add_node(i,)
G.add_edge(vertic[0], vertic[1], weight=0.2)
print "ok"
#pos = nx.random_layout(G)
#pos = nx.spring_layout(G)
#pos = nx.circular_layout(G)
pos = nx.fruchterman_reingold_layout(G)
nx.draw_networkx_nodes(G, pos, node_size=40)
nx.draw_networkx_edges(G, pos, width=1.0)
plt.axis('off')  # remove the axes
plt.pause(0.0001)
plt.show()  # display
I had the need to create a graph that updates with time. The most convenient solution I came up with was to create a new graph each time. The issue was that the script won't execute further after the first graph is created, unless the window is closed manually.
That issue was avoided by turning the interactive mode on as shown below
for i in range(0, 100):
    fig1 = plt.figure(num=1, clear=True)  # a figure is created with the id of 1
    createFigure(fig=fig1, id=1)  # calls a function built by me which populates the figure as a 3D scatter plot
    plt.ion()     # this turns the interactive mode on
    plt.show()    # create the graph
    plt.pause(2)  # pause the script for 2 seconds; the number of seconds determines how often the graph refreshes
There are two important points to note here:

1. The id of the figure: if the id of the figure is changed, a new graph will be created every time, but if it stays the same, the relevant graph is updated.

2. The pause function: this stops the code from executing for the specified time period. If it is not applied, the graph will refresh almost immediately.
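As a side note, plt.pause(2) ties the refresh cadence to however long the drawing itself takes on top of the pause. The timing loop can be isolated from the plotting and made drift-free with the standard library alone (redraw here is a placeholder for the figure-drawing code above):

```python
import time

def run_at_interval(redraw, interval_s, iterations):
    # Schedule against absolute deadlines so a slow redraw() does not
    # push every later refresh back the way a bare sleep(interval) would.
    next_deadline = time.monotonic()
    for _ in range(iterations):
        redraw()
        next_deadline += interval_s
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

# Hypothetical usage: each tick would recreate or update the figure.
ticks = []
run_at_interval(lambda: ticks.append(time.monotonic()), 0.01, 5)
```

In a real plot loop, redraw would be the body of the for loop above (create the figure, plt.pause(0.001) to let the GUI process events), with the sleeping handled here instead of by a long pause.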
I have created a class that draws a tkinter widget with a matplotlib plot. The plot is updated dynamically (more or less in real time).
Tested in python 3.10, matplotlib 3.6.0 and tkinter 8.6.
from matplotlib import pyplot as plt
from matplotlib import animation
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from tkinter import *
class MatplotlibPlot:
def __init__(
self, master, datas: list[dict], update_interval_ms: int = 200, padding: int = 5,
fig_config: callable = None, axes_config: callable = None
):
"""
Creates a Matplotlib plot in a Tkinter environment. The plot is dynamic, i.e., the plot data is periodically
drawn and the canvas updates.
@param master: The master widget where the pot will be rendered.
@param datas: A list containing dictionaries of data. Each dictionary must have a `x` key, which holds the xx
data, and `y` key, which holds the yy data. The other keys are optional and are used as kwargs of
`Axes.plot()` function. Each list entry, i.e., each dict, is drawn as a separate line.
@param fig_config: A function that is called after the figure creation. This function can be used to configure
            the figure. The function signature is `fig_config(fig: pyplot.Figure) -> None`. The example below allows
the configuration of the figure title and Dots Per Inch (DPI).
``` python
my_vars = [{"x": [], "y": [], "label": "Label"}, ]
window = Tk()
def my_fig_config(fig: pyplot.Figure) -> None:
fig.suptitle("Superior Title")
fig.set_dpi(200)
MatplotlibPlot(master=window, datas=my_vars, fig_config=my_fig_config)
window.mainloop()
```
@param axes_config: A function that is called after the axes creation. This function can be used to configure
            the axes. The function signature is `axes_config(axes: pyplot.Axes) -> None`. The example below allows
the configuration of the axes xx and yy label, the axes title and also enables the axes legend.
``` python
my_vars = [{"x": [], "y": [], "label": "Label"}, ]
window = Tk()
def my_axes_config(axes: pyplot.Axes) -> None:
axes.set_xlabel("XX Axis")
axes.set_ylabel("YY Axis")
axes.set_title("Axes Title")
axes.legend()
MatplotlibPlot(master=window, datas=my_vars, axes_config=my_axes_config)
window.mainloop()
```
@param update_interval_ms: The plot update interval in milliseconds (ms). Defaults to 200 ms.
@param padding: The padding, in pixels (px), to be used between widgets. Defaults to 5 px.
"""
# Creates the figure
fig = plt.Figure()
# Calls the config function if passed
if fig_config:
fig_config(fig)
        # Creates a Tk canvas
canvas = FigureCanvasTkAgg(figure=fig, master=master)
# Allocates the canvas
canvas.get_tk_widget().pack(side=TOP, fill=BOTH, expand=True, padx=padding, pady=padding)
# Creates the toolbar
NavigationToolbar2Tk(canvas=canvas, window=master, pack_toolbar=True)
# Creates an axes
axes = fig.add_subplot(1, 1, 1)
# For each data entry populate the axes with the initial data values. Also, configures the lines with the
# extra key-word arguments.
for data in datas:
axes.plot(data["x"], data["y"])
_kwargs = data.copy()
_kwargs.pop("x")
_kwargs.pop("y")
axes.lines[-1].set(**_kwargs)
# Calls the config function if passed
if axes_config:
axes_config(axes)
# Creates a function animation which calls self.update_plot function.
self.animation = animation.FuncAnimation(
fig=fig,
func=self.update_plot,
fargs=(canvas, axes, datas),
interval=update_interval_ms,
repeat=False,
blit=True
)
# noinspection PyMethodMayBeStatic
def update_plot(self, _, canvas, axes, datas):
# Variables used to update xx and yy axes limits.
update_canvas = False
xx_max, xx_min = axes.get_xlim()
yy_max, yy_min = axes.get_ylim()
# For each data entry update its correspondent axes line
for line, data in zip(axes.lines, datas):
line.set_data(data["x"], data["y"])
_kwargs = data.copy()
_kwargs.pop("x")
_kwargs.pop("y")
line.set(**_kwargs)
# If there are more than two points in the data then update xx and yy limits.
if len(data["x"]) > 1:
if min(data["x"]) < xx_min:
xx_min = min(data["x"])
update_canvas = True
if max(data["x"]) > xx_max:
xx_max = max(data["x"])
update_canvas = True
if min(data["y"]) < yy_min:
yy_min = min(data["y"])
update_canvas = True
if max(data["y"]) > yy_max:
yy_max = max(data["y"])
update_canvas = True
        # If limits need to be updated, redraw the canvas
if update_canvas:
axes.set_xlim(xx_min, xx_max)
axes.set_ylim(yy_min, yy_max)
canvas.draw()
# return the lines
return axes.lines
Below is an example of a custom tkinter scale used to update data which is drawn in the tkinter plot.
from matplotlib import pyplot as plt
from matplotlib import animation
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from tkinter import *
# The MatplotlibPlot class is the same as defined above and is omitted here.
class CustomScaler:
def __init__(self, master, init: int = None, start: int = 0, stop: int = 100,
padding: int = 5, callback: callable = None):
"""
Creates a scaler with an increment and decrement button and a text entry.
@param master: The master Tkinter widget.
@param init: The scaler initial value.
@param start: The scaler minimum value.
@param stop: The scaler maximum value.
@param padding: The widget padding.
@param callback: A callback function that is called each time that the scaler changes its value. The function
signature is `callback(var_name: str, var_index: int, var_mode: str) -> None`.
"""
self.start = start
self.stop = stop
if init:
self.value = IntVar(master=master, value=init, name="scaler_value")
else:
self.value = IntVar(master=master, value=(self.stop - self.start) // 2, name="scaler_value")
if callback:
self.value.trace_add("write", callback=callback)
Scale(master=master, from_=self.start, to=self.stop, orient=HORIZONTAL, variable=self.value) \
.pack(side=TOP, expand=True, fill=BOTH, padx=padding, pady=padding)
Button(master=master, text="◀", command=self.decrement, repeatdelay=500, repeatinterval=5) \
.pack(side=LEFT, fill=Y, padx=padding, pady=padding)
Button(master=master, text="▶", command=self.increment, repeatdelay=500, repeatinterval=5) \
.pack(side=RIGHT, fill=Y, padx=padding, pady=padding)
Entry(master=master, justify=CENTER, textvariable=self.value) \
.pack(fill=X, expand=False, padx=padding, pady=padding)
def decrement(self):
_value = self.value.get()
if _value <= self.start:
return
self.value.set(_value - 1)
def increment(self):
_value = self.value.get()
if _value >= self.stop:
return
self.value.set(_value + 1)
def scaler_changed(my_vars: list[dict], scaler: CustomScaler) -> None:
my_vars[0]["x"].append(len(my_vars[0]["x"]))
my_vars[0]["y"].append(scaler.value.get())
def my_axes_config(axes: plt.Axes) -> None:
axes.set_xlabel("Sample")
axes.set_ylabel("Value")
axes.set_title("Scaler Values")
def main():
my_vars = [{"x": [], "y": []}, ]
window = Tk()
window.rowconfigure(0, weight=10)
window.rowconfigure(1, weight=90)
frame_scaler = Frame(master=window)
frame_scaler.grid(row=0, column=0)
scaler = CustomScaler(
master=frame_scaler, start=0, stop=100, callback=lambda n, i, m: scaler_changed(my_vars, scaler)
)
frame_plot = Frame(master=window)
frame_plot.grid(row=1, column=0)
MatplotlibPlot(master=frame_plot, datas=my_vars, axes_config=my_axes_config, update_interval_ms=10)
window.mainloop()
if __name__ == "__main__":
main()
The example above produces the following window.
| common-pile/stackexchange_filtered |
How to derive this formula: $\int_a^bf(c-x)dx = \int_{c-b}^{c-a}f(x)dx$?
I'm stuck in this exercise:
$$\int_a^bf(c-x)dx = \int_{c-b}^{c-a}f(x)dx$$
My attempt is this:
$$
\begin{align*}
\int_a^bf(c-x)dx
&= - \int_{-a}^{-b}f(x-c)dx\\
&= \int_{-b}^{-a}f(x-c)dx
\end{align*}
$$
But at this point I'm not sure what to do. To my understanding if one wants to integrate $f(x-c)$, which is shifted to the right, it would be the same as integrating $f(x)$ with the interval of integration shifted to the left:
$$= \int_{-b-c}^{-a-c}f(x)dx$$
But that does not seem to be the right answer.
Make the substitution $y=c-x$. The upper bound of your integral, $x=b$, changes to $y=c-b$. Moreover, the lower bound $x=a$ transforms to $y=c-a$. The differential $dx$ also changes: differentiating $y$ with respect to $x$ for constant $c$, you obtain $dy=-dx$.
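Carrying that substitution through gives the identity directly:

```latex
\begin{align*}
y = c - x \;&\implies\; dy = -\,dx, \qquad x=a \mapsto y=c-a, \quad x=b \mapsto y=c-b,\\
\int_a^b f(c-x)\,dx &= \int_{c-a}^{c-b} f(y)\,(-dy)
= \int_{c-b}^{c-a} f(y)\,dy
= \int_{c-b}^{c-a} f(x)\,dx.
\end{align*}
```

The last step just renames the dummy variable of integration from $y$ back to $x$.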
I am able to create a .csv file using Talend job and I want to convert .csv to .parquet file using tSystem component?
I have a Talend job that creates a .csv file, and now I want to convert it to .parquet format using Talend v6.5.1. The only option I can think of is the tSystem component, to call a Python script from the local directory where the .csv lands temporarily. I know I can convert this easily using pandas or pyspark, but I am not sure the same code will work from tSystem in Talend. Can you please provide suggestions or instructions?
Code:
import pandas as pd
DF = pd.read_csv("Path")
DF.to_parquet("Path.parquet")  # to_parquet is a DataFrame method; it writes the file
If you have an external script on your file system, you can try
"python \"myscript.py\" "
Here is a link on talend forum regarding this problem :
https://community.talend.com/t5/Design-and-Development/how-to-execute-a-python-script-file-with-an-argument-using/m-p/23975#M3722
I was able to resolve the problem by following the steps below:
import pandas as pd
import pyarrow as pa
import numpy as np
import sys
filename = sys.argv[1]
test = pd.read_csv(r"C:\\Users\\your desktop\\Downloads\\TestXML\\" + filename + ".csv")
test.to_parquet(r"C:\\Users\\your desktop\\Downloads\\TestXML\\" + filename + ".parquet")
How to split the results of formulas in one cell over multiple columns in Google Spreadsheets?
Below is actually everything that is needed to answer this question.
You can use the following formula instead:
=ArrayFormula(SPLIT(FILTER(A2:A4&","&SUM(1,3),C2:C4=TRUE),","))
Functions used:
ArrayFormula
SPLIT
FILTER
SUM
Welcome. Please remember that as per site guidelines when an answer addresses your question, accept it and even upvote it so others can benefit as well.
You are my hero..
Glad I could help :)
Another approach:
=FILTER({A2:A4,SUM(1,3)*ROW(A2:A4)^0},C2:C4=TRUE)
EXPLANATION:
FILTER can take an array as its first argument. The tricky part is that every element of that array must have the same dimensions. While A2:A4 is a 3x1 range, SUM(1,3) is a 1x1 range. But this can be solved by multiplying SUM(1,3) by a sort of placeholder 3x1 array: ROW(A2:A4)^0. Since every element in A2:A4 has a row number, and any number to the zero power (^0) is 1, this just signals Google Sheets to give you SUM(1,3) multiplied by 1 for three rows. And that makes the two elements of that curly-bracket array match up with everything else.
Really cool to see. How do you know all this? Could you recommend a course?
ROW(A2:A4)^0 Why can't I just fill in 1? This is hard to understand.
Again, every part of an array must have the same dimensions. So you can't multiply a range of values by a single number, like A2:A4 x 1, because A2:A4 is three rows by one column, while 1 is just 1 row by 1 column. You need a way to apply the x1 to every element of A2:A4 (like A2:A4 x {1;1;1}. I could have used something else, like IF(LEN(A2:A4), SUM(1,3),"") or IF(ROW(A2:A4), SUM(1,3),""). You just have to have some way of applying SUM(1,3) to every part of A2:A4 so they take up the same number of rows and columns.
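The same-dimensions requirement can be mimicked in plain Python as an analogy (the names below are illustrative, not Sheets syntax): a single number has to be expanded to the column's length before it can sit next to a column inside one array, which is exactly what `SUM(1,3)*ROW(A2:A4)^0` does.

```python
col = ["x1", "x2", "x3"]   # stands in for the 3x1 range A2:A4
scalar = 1 + 3             # stands in for SUM(1,3)

# ROW(A2:A4) yields [2, 3, 4]; raising each to the power 0 gives [1, 1, 1],
# so multiplying by the scalar expands it to a matching 3x1 column.
expanded = [scalar * (row ** 0) for row in range(2, 5)]

# Now both pieces have the same height and can form the {A2:A4, ...} array.
paired = list(zip(col, expanded))
```

Once both columns are the same height, pairing them row by row is well defined, which is why the curly-bracket array in the formula works.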
Thank you for your reaction. I'm gonna take another good look at it. I want to understand.
I get it now.. . This is so nice to know.
imbalanced learning: precision vs recall trade-off
Working on a multi-class problem (five classes) for which the dataset is highly imbalanced (two classes with less than 2% of the samples).
Which metric between precision and recall should I pay more attention to?
print(classification_report)
precision recall f1-score support
Class 0 0.24 0.01 0.02 12826
Class 1 0.00 0.00 0.00 1380
Class 2 0.00 0.00 0.00 6543
Class 3 0.51 0.98 0.67 22856
Class 4 0.00 0.00 0.00 1561
accuracy 0.50 45166
macro avg 0.15 0.20 0.14 45166
weighted avg 0.33 0.50 0.34 45166
Neither. Both of those are for binary problems.
I understand, but one can compute these metrics in a multi-class problem as one-against-others; for example, see the classification_report from my current work in the question edit above.
Neither one. You should aim for well-calibrated and sharp probabilistic predictions of class membership. (Note that "unbalanced classes" cease to be a problem in this setting.) Once you have these predictions, you can choose actions to apply to each instance based on your predictions and the costs of wrong actions. This may involve a threshold, but note that the threshold pertains to the decision aspect, not the statistical part of the exercise, and requires a notion of cost or utility.
More here. And here.
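To make the cost-based decision idea concrete, here is a minimal sketch (all numbers are made up) of choosing an action from calibrated class probabilities and a cost matrix, rather than optimizing precision or recall directly:

```python
# Hypothetical probabilities for one instance over the five classes,
# e.g. from a well-calibrated classifier's predict_proba.
probs = [0.10, 0.05, 0.15, 0.60, 0.10]

# cost[k][a]: cost of taking action a when the true class is k (made up).
# Here the "actions" are simply "treat as class a", with zero cost when correct.
cost = [
    [0, 1, 1, 1, 1],
    [9, 0, 9, 9, 9],   # mistakes on rare class 1 are expensive
    [2, 2, 0, 2, 2],
    [1, 1, 1, 0, 1],
    [9, 9, 9, 9, 0],   # mistakes on rare class 4 are expensive
]

# Expected cost of each action, weighted by the class probabilities.
expected = [sum(p * cost[k][a] for k, p in enumerate(probs))
            for a in range(len(probs))]
best_action = min(range(len(expected)), key=expected.__getitem__)
```

Note that the chosen action here is class 4 even though class 3 is by far the most probable: the high cost of missing a rare class drives the decision. Changing the costs, not the classifier, is what changes the decisions.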
HTML table with vertical headers & cells with equal width
I'm trying to create a table where the headers are on the left and the content sits on the same line with equal widths.
The result of my attempt is in the code snippet below. The cell content is not breaking to a reasonable cell size; it renders as one very long horizontal line (instead of breaking into a few lines).
.table-demo {
text-align: center;
border-collapse: collapse;
table-layout: fixed;
overflow: visible;
display: table;
}
.table-demo tr {
display: inline-block;
}
.table-demo th, td {
display: block;
/*width:100%;*/
border: 1px solid;
}
.wrapper {
/*overflow-x: auto;*/
white-space: nowrap;
max-width:600px;
}
<div class=" col-md-10 wrapper " >
<table class="table table-responsive table-demo" width="100%;">
<tr>
<th>
Spec
</th>
<th>
Spec 2
</th>
<th>
Tect de
</th>
<th>
Price
</th>
<th>
Terms
</th>
</tr>
<tr>
<td>
with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
</td>
<td>
Offertext 2
</td>
<td > Lorem Ipsum is simply dummy text of the printing and t versions of Lorem Ipsum.
</td>
<td>
<p>789.00</p>
</td>
<td>
+40
</td>
</tr>
<tr>
<td>
with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
</td>
<td>
Offertext 2
</td>
<td > Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
</td>
<td>
<p >999.00</p>
</td>
<td>
30
</td>
</tr>
<tr>
<td>
Lorem Ipsum issoftware like Aldus PageMaker including versions of Lorem Ipsum.
</td>
<td>
Offertext 2
</td>
<td > Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s,
when an unknown printer took a galley of type and scrambled it to make a type
specimen book. It has survived not only five centuries, but also the leap into
electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release
of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like
Aldus PageMaker including versions of Lorem Ipsum.
</td>
<td>
811.00
</td>
<td>
30
</td>
</tr>
</table>
</div>
I would like:
a. The table to fit into a Bootstrap 10-column layout.
b. All cells to have the same width.
c. Long text to break into multiple lines.
Hi,
to achieve vertical headers in a table I suggest simply putting a th cell into a table row instead of attempting to do it with CSS.
I prepared a sample which meets your requirements from the question.
table, th, td {
border: 1px solid tomato;
border-collapse: collapse;
}
/*Point b: all cells have the same width*/
th, td {
width: 50%;
}
<table>
<tr>
<th>Short</th>
<td>Mallard is a kind of duck.</td>
</tr>
<tr>
<th>Empty row</th>
<td></td>
</tr>
<tr>
<th>What is Lorem Ipsum?</th>
<td>Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.</td>
</tr>
</table>
Cheers
NMSSH does not connect with SWIFT
I just tried to use NMSSH for the first time but it simply does not connect for me
import UIKit
import NMSSH
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
let session = NMSSHSession(host: "<IP_ADDRESS>", andUsername: "vnc")
if session.connected == true{
session.authenticateByPassword("1234")
if session.authorized == true {
print("works")
}
}
}
}
I realize you've already figured out the answer. But this is a pretty vague question. "it simply does not connect" isn't very informative. You ought to describe exactly what happens when you run the code, including any error messages or exceptions that you get.
I forgot the session.connect() call to start the connection:
let session = NMSSHSession(host: "<IP_ADDRESS>", andUsername: "inc")
session.connect()
if session.connected == true{
session.authenticateByPassword("1234")
if session.authorized == true {
print("works")
}
}
I added a bridging header file and followed the instructions, but I'm getting the error "Use of unresolved identifier 'NMSSHSession'" at the line let session = NMSSHSession(host: "<IP_ADDRESS>", andUsername: "inc"). Any idea why this is happening?
Unix ksh syntax errors
I have a script that checks the records in a file ($file) and checks if the file exists and stores it either to the existing or missing array.
existing[0]=0
missing[0]=0
i=0
j=0
sourcedir=/mydir
filelist=/mydir/filelisting.txt
fileextension=csv
while IFS='' read -r line || [ -n "$line" ]; do
f="${line}.${fileextension}"
if [ -s ${sourcedir}/${f} ]
then
existing[$i]=${line}.${fileextension}
echo "${existing[$i]} exists"
((i=i + 1))
else
missing[$j]=${line}.${fileextension}
echo "${missing[$j]} does not exist"
((j=j + 1))
fi
done < "$filelist"
Apparently when this gets executed on the client's server, I get the following errors:
myscript.sh: existing[0]=0: not found
myscript.sh: missing[0]=0: not found
myscript.sh: -r: is not an identifier
myscript.sh: bad substitution
Are there portions of my script that are not compliant with ksh? If so, I would greatly appreciate help with the proper syntax. Also, is there a way to make the errors show which line causes the problem?
Thanks!
Are you sure you're using a proper KornShell? More specifically ksh93?
@user3694537 : The lines existing[0]=0 & missing[0]=0 works fine for me in ksh.
does your script have a shebang at the top and if so, what is it? [wondering if script is missing shebang and client could be running the script under some other shell]
I guess the shell becomes different when it's executed through autosys. I've added the shebang line for ksh and finally got it to work
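If the root cause was indeed the script being run by a plain Bourne/POSIX shell (which has no arrays), there are two fixes: add the ksh shebang as done above, or make the script array-free so any POSIX sh can run it. A minimal array-free sketch of the loop (the demo directory and file names are made up):

```shell
#!/bin/sh
# Array-free rewrite: count existing/missing files instead of storing them,
# so the loop works under plain POSIX sh as well as ksh.
sourcedir=$(mktemp -d)                    # made-up demo directory
filelist="$sourcedir/filelisting.txt"
fileextension=csv

# Demo fixtures: one file that exists, one that doesn't.
printf 'present\nabsent\n' > "$filelist"
echo data > "$sourcedir/present.csv"

existing=0
missing=0
while IFS= read -r line || [ -n "$line" ]; do
    f="${line}.${fileextension}"
    if [ -s "${sourcedir}/${f}" ]; then
        echo "${f} exists"
        existing=$((existing + 1))
    else
        echo "${f} does not exist"
        missing=$((missing + 1))
    fi
done < "$filelist"
echo "existing=$existing missing=$missing"
```

If you still need the file names themselves rather than counts, append them to two temporary files instead of arrays.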
changing culture(language) when button click
I have a website using .NET Framework 4. I implemented language switching with global resources. In the button click code-behind I use this code:
protected void Button2_Click(object sender, EventArgs e)
{
dil = "en-US";
var ci = new CultureInfo(dil); //TO_DO Route culture
Thread.CurrentThread.CurrentUICulture = ci;
Thread.CurrentThread.CurrentCulture = ci;
Session["culture"] = ci;
}
and also my resx files:
-PB.resx
-PB.en-US.resx
-PB.ru-RU.resx
The default language works fine, but how can I change to English and Russian? Where is my mistake?
I solved it after a long search. This is the answer, with all the code you need. I did this for the master page in Visual Studio 2010.
You can use IsPostBack in Page_Load.
protected void Page_Load(object sender, EventArgs e)
{
//only does it on non-postback because otherwise the selected
//value will not reach event handler correctly
if (!Page.IsPostBack)
{
dil = Thread.CurrentThread.CurrentCulture.Name;
}
}
Then we can add the button click handler and cookies:
protected void Button2_Click(object sender, EventArgs e)
{
dil = "en-US";
//var ci = new CultureInfo(dil); //TO_DO Route culture
//Thread.CurrentThread.CurrentUICulture = ci;
//Thread.CurrentThread.CurrentCulture = ci;
//Session["culture"] = ci;
//Sets the cookie that is to be used by Global.asax
HttpCookie cookie = new HttpCookie("CultureInfo");
cookie.Value = dil;
Response.Cookies.Add(cookie);
//Set the culture and reload the page for immediate effect.
//Future effects are handled by Global.asax
Thread.CurrentThread.CurrentCulture =
new CultureInfo(dil);
Thread.CurrentThread.CurrentUICulture =
new CultureInfo(dil);
Server.Transfer(Request.Path);
}
And lastly, the Global.asax file helps solve this problem:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
// Runs at the start of each request
HttpCookie cookie = HttpContext.Current.Request.Cookies["CultureInfo"];
if (cookie != null && cookie.Value != null)
{
System.Threading.Thread.CurrentThread.CurrentUICulture = new
System.Globalization.CultureInfo(cookie.Value);
System.Threading.Thread.CurrentThread.CurrentCulture = new
System.Globalization.CultureInfo(cookie.Value);
}
else
{
System.Threading.Thread.CurrentThread.CurrentUICulture = new
System.Globalization.CultureInfo("tr-TR");
System.Threading.Thread.CurrentThread.CurrentCulture = new
System.Globalization.CultureInfo("tr-TR");
}
}
If you are using HTML tags instead of .NET server controls, you can use this to add a text control:
<a><asp:Literal ID="Literal1" runat="server" Text="<%$Resources: PB, Home %>" /></a>
First, you should store the language in a cookie. To set the page language, override the InitializeCulture method.
protected override void InitializeCulture()
{
var currentLanguage= HttpContext.Current.Request.Cookies["dil"];
string defaultLanguage="tr";
if(currentLanguage==null)
{
//set cookie to defaultLanguage
}
else{
Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(currentLanguage.Value);
Thread.CurrentThread.CurrentUICulture = Thread.CurrentThread.CurrentCulture;
}
}
To change language by clicking a button
protected void Button2_Click(object sender, EventArgs e)
{
dil = "en-US";
Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo(dil);
Thread.CurrentThread.CurrentUICulture = Thread.CurrentThread.CurrentCulture;
HttpCookie hc = new HttpCookie("dil");
hc.Expires=DateTime.Now.AddDays(30);
hc.Value=dil;
HttpContext.Current.Response.Cookies.Add(hc);
}
How can I set the default language in a cookie? I don't know how to use cookies!
//set cookie to defaultLanguage
HttpCookie hc = new HttpCookie("dil");
hc.Expires=DateTime.Now.AddDays(30);
hc.Value="tr";
HttpContext.Current.Response.Cookies.Add(hc);
Unfortunately my C# doesn't support that. I use C# version 4 =(
Also, I want to do this in the master page; I can't use InitializeCulture(). It only works in a page.
You can use the Application_BeginRequest method in the Global.asax file instead of InitializeCulture.
Transform a list of dict from a column in a column of list
I have a column in my dataframe with list of dict inside.
Here is the first element of the 'reviews' column:
I would like to extract the 'item_id' and 'recommend' keys to put them in new columns of the previous dataframe, like this:
What libraries are you using? What have you tried?
You should give us some code to understand how you are trying to achieve that and with what tools.
Assuming you are trying this on a Pandas dataframe, you can use a custom function or a lambda function to extract key values from a list of dicts:
import pandas as pd
df = pd.DataFrame({'A': 'x',
'B': [[{'Fruits': 'Apple', 'Vegetables': 'Tomato'},
{ 'Fruits': 'Orange', 'Vegetables': 'Onion'},
{'Fruits': 'Mango', 'Vegetables': 'Potato'}]]})
print(df)
key = 'Fruits'
df[key] = df['B'].apply(lambda ls: [d[key] for d in ls])
print(df)
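Applied to the question's own column and keys, the same pattern gives one new column per key. The sample rows below are made up; only the 'reviews', 'item_id' and 'recommend' names come from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "user_id": ["u1", "u2"],   # hypothetical id column
    "reviews": [
        [{"item_id": "a", "recommend": True}, {"item_id": "b", "recommend": False}],
        [{"item_id": "c", "recommend": True}],
    ],
})

# One new column per key, each holding the list of values pulled from the dicts.
for key in ("item_id", "recommend"):
    df[key] = df["reviews"].apply(lambda ls, k=key: [d[k] for d in ls])
```

The default argument `k=key` pins down the key for each column; each new column then holds a list with one value per review.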
You should rather add the actual code in your answer instead of an image ;)
How do I increase my chances to win as black in the Ruy Lopez?
The Ruy Lopez is one of the most difficult lines for me to play against. I just can't hold my position when I play as black. If I move 3... Nd4, I always get a draw at best, and usually take a loss. How can I improve my chances of winning in this situation?
EDIT :
After I looked up the answers which were given to me, I found one solid win for black with the Cordel Defence that I found in Shredder Computer Chess if you insert this FEN to it.
[FEN ""]
1. e4 e5 2. Nf3 Nc6 3. Bb5 Bc5 4. c3 Bb6 5. d4 exd4 6. cxd4 Nce7 7. d5 Nf6 8. Nc3 O-O
Can someone show me at which point Black takes the initiative from White?
Thanks,
Ahmad
I used to prefer a6 and then continue. There are some good GM games from which you can learn how to tackle this. I don't have them with me right now.
3...f5 can be a fun line to play.
@sammath I always mess up when I play a6
@akavali I not a good gambler.. ^^
@anyone I found an interesting opening in the Ruy Lopez, the Cordel Defence, since it fits my view that a bishop is worth more to capture than a knight, and the white bishop on b5 is not a very dangerous bishop. You can all see it under my EDIT label.
Are you sure that the losses are caused by the opening itself?
The Ruy Lopez is probably one of the most analyzed openings in chess. You can find plenty of resources on the Internet. (Wikipedia's article on the Ruy Lopez is a good point to start).
I would recommend that you not go searching for "winning lines". You first need to understand the principles that lie behind each move of this opening. Once you have that understanding it is easier to examine variations and choose the one that best suits your needs.
For example, if you are an aggressive player and your chess level is average, you might wish to consider the Open Defense and the Marshall Attack.
A "chance to win" with Black against the Ruy Lopez isn't necessarily dependent on the variation. Yes, there are aggressive lines, the most notable being the Archangel Variation, but having a better understanding of the opening line that you play will result in improved results. I would research other lines by using the Shredder Opening Database and see what other people are playing. Then, I would go online and play a few games in each variation to get a feel for the resulting middlegame positions and see which one is the best fit for me.
thanks for the links, I will compared with the other variation first..
Frankly, I don't like my chances of winning with Ruy Lopez as Black. When faced with 1. e4, I "never" reply with 1... e5 (except as a "surprise" - then I play 2... Nf6, the Petrov Defense, but mainly to draw, rather than to win.)
I prefer 1... c5 (Sicilian Defense) or 1... e6 (French Defense). These are "double-edged" openings that allow both Black and White to win OR lose.
I like your variation better than most. After your 8... O-O, White will play 9. O-O, and you can play 9... Ng6. That leads to a position somewhat like my French defenses (with White's center pawns on "reversed" colors). It's a tough, but by no means unplayable game that gives White fewer attacking chances than a conventional Ruy.
I've lost a few games with the Ruy Lopez playing White. But that was basically due to my ignorance of the opening more than anything else.
I hope you meant 2. ... Nf6 instead of 2. ... f6, since that would be the Damiano defense, and be a very dubious response on your part.
Thanks to everyone who gave me an answer; I realized that the Classical Defence variation of the Ruy Lopez fits me better than the other variations.
From the latest FEN that I edited (and based on @Tom Au's answer), White will normally choose to castle his king, which gives Black the opportunity to neutralize the centre with 9. O-O d6.
After I analyzed it, 7. d5 is just a bad idea from White that gives Black an advantage. It means the Black knight in 7. ... Nf6 is in the right place at the right time.
The classical/cordel is a very good choice against the Ruy Lopez. I switched to the Sicilian for exactly this reason, but this is what I would play if I hadn't. Black avoids the main lines but gets an active position. I think 4...Nf6 is the most natural choice here. Black develops a piece, prepares to castle and attacks the e4 pawn. 4...Bb6 doesn't really accomplish anything.
"Can someone show me, at which point does black take the initiative from white??"
It's difficult to take the initiative if white doesn't make a mistake. This line though avoids theory and puts pieces on active squares. You are positioning yourself to pounce on the first white mistake. That's the best you can hope for as black.
After 3. Bb5, black plays
a6
Nf6
a6
a6 puts the question to the bishop. If he takes your knight, recapture with the d pawn. Black has a very strong game at this point. Normally you capture toward the center, but in this case, you can play c6 to c5 and control the d4 square. d4 is very important to control in the Ruy Lopez.
What you see most often after a6 is the bishop retreating to a4. White may play c3 later in the game to give his bishop a square (c2) to back up into. You see the same thing happening on the other side of the board with f3 in the Queen's Indian, giving white's bishop on g5 the option to back up to Bh4 and then retreat to Bf2.
(White playing c3 also prepares for the d4 push, common for white in the Ruy Lopez. c3 protects the d4 pawn.)
Nf6
You're probably going to see Nf6 eventually. After Bb5 you might see it right away. Or you'll see it after 3. Bb5 a6 4. Ba4 Nf6. What does Nf6 do for black? It attacks the unguarded e4 pawn with the knight. But white doesn't have to worry here. Instead, white castles O-O. Why?
Take a moment to play this out on a board:
[FEN ""]
1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Nxe4 6. Re1
Black is screwed after Re1. Notice that the rook is staring down the king. The pawn/knight are pinned. If black retreats the knight back to Nf6, now white takes the knight. Black recaptures the bishop. Now the pawn on e5 is unguarded. Nxe5 and you have a deadly discovered check on black's king when the knight moves.
[FEN ""]
1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Nxe4 6. Re1 Nf6 7. Bxc6 dxc6 8. Nxe5
it's all over for black.
That's how Capablanca describes the Ruy Lopez in his book Chess Fundamentals (if I remember correctly).
No. Black is not even remotely screwed in the line you give, unless he just sits there like a stuffed dummy ignoring the discovery threats. 8. Nxe5 Be6 is entirely safe, and Black has easy development and two bishops to compensate for his doubled pawn. That's why 6. Re1 is generally not played when Black eats the e4-pawn; the main line is 6. d4, and White will get the pawn back but must still work to secure an advantage. Taking the pawn with 5. ... Nxe4 is a completely sound move played many times by grandmasters in serious matches, and it's flat wrong to claim that it loses by force.
As Evan said, 5...Nxe4 is a perfectly good move that is frequently played at the highest level. The variation is called the Open Ruy Lopez. 6.Re1 is not even that good (although it's not terrible); Black plays 6...Nc5 and has a fine game. A better try for White is 6.d4 b5 7.Bb3 d5 8.dxe5 Be6, and we have reached the main tabiya (standard position) of the Open Ruy.
VPN Masquerade and NAT
I'm trying to set up a VPS to forward traffic from a openvpn client to the internet, while forwarding incoming port 80 traffic back to the client. I've followed this guide to configure the server and create a client config. On the VPS I have the iptables rules:
-t nat -A POSTROUTING -s <IP_ADDRESS>/8 -o eth0 -j MASQUERADE
and
-t nat -A PREROUTING -p tcp -m tcp --dport 80 -j DNAT --to-destination <IP_ADDRESS>
The first rule is from the guide, and works well. The second rule allows me to connect to the VPN client on port 80 from the internet, but http requests from the client to the internet fail (https stills works, and goes through the VPN). Can anyone recommend a working configuration for this problem, or explain why this one doesn't work?
Edit: VPS configuration
# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- <IP_ADDRESS>/0 <IP_ADDRESS>/0 tcp dpt:80 to:<IP_ADDRESS>
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- <IP_ADDRESS>/8 <IP_ADDRESS>/0
# cat /proc/sys/net/ipv4/ip_forward
1
I guess the Linux box on which you are executing these rules is the gateway to the internet, right? The first rule is OK because you are behind NAT. And with the second one, if I haven't misunderstood, you want to reach your internal server from the internet, is that correct? Anyway, we need to see all your iptables rules... use iptables -L and iptables -t nat -L to show them. Is the FORWARD chain set to the ACCEPT policy?
Yes, the VPS is the gateway. I edited my post to show it's config.
I saw you edited post... nice. We need more data... what do you have if you execute cat /proc/sys/net/ipv4/ip_forward a 0 or a 1 ? because I think you need a 1 here.
What ports are needed to connect to openvpn from an internet client? Only 80 tcp? I saw in the manual you linked that they use udp 1194. The mapping seems correct for the 80 tcp port. Be sure that the openvpn host has the right gateway (the Linux machine with the iptables rules).
I have openvpn set up to connect on udp 1194. http wget's initiated from the vpn client don't connect to the internet.
The prerouting rule didn't specify an interface or destination, so http requests coming from the VPN client over tun0 were being sent back to itself.
A working config is
iptables -t nat -A POSTROUTING -s <IP_ADDRESS>/8 -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -p tcp -i eth0 -d $eth0_addr --dport 80 -j DNAT --to <IP_ADDRESS>
nice! so you can mark your own answer as resolved. I hope any of my comments helped you. Cheers!
Access variable from FormType in types twig template
I created a custom form type like this:
class PositioningFlashType extends AbstractType
{
public function setDefaultOptions(OptionsResolverInterface $resolver)
{
$resolver->setDefaults(array(
'game' => new Game()
));
}
public function getParent()
{
return 'form';
}
/**
* Returns the name of this type.
*
* @return string The name of this type
*/
public function getName()
{
return 'positioning_flash';
}
}
And in another form (GameType) type I use it like this:
$builder
->add('flash', new PositioningFlashType(), array(
'mapped' => false,
'game' => $options['game']
))
And inside the controller I want to create the whole form:
private function createEditForm(Game $entity)
{
$form = $this->createForm(new GameType(), $entity, array(
'action' => $this->generateUrl('game_update', array('id' => $entity->getId())),
'method' => 'PUT',
'edit' => true,
'game' => $entity
));
$form->add('submit', 'submit', array('label' => 'Speichern'));
return $form;
}
So basically, all I want to do is to pass-through the Game instance to the PositioningFlashType and inside it's template I want to access this game instance like this:
value="{{ asset('data/swf/backendtool.swf') }}?game={{ game.id }}
But symfony throws an error, saying that the variable game is not defined.
What is the correct way to pass-through a variable from the controller to a nested FormType?
Have you tried using {{ form.game.id }} ?
You can add custom view variables by wrapping buildView().
/* ... */
use Symfony\Component\Form\FormView;
use Symfony\Component\Form\FormInterface;
/* ... */
class PositioningFlashType extends AbstractType
{
/* ... */
public function buildView(FormView $view, FormInterface $form, array $options)
{
parent::buildView($view, $form, $options);
$view->vars = array_merge($view->vars, array(
'game' => $options['game']
));
}
/* ... */
}
You'll now have game under form.vars. Simply override your custom form widget to do whatever you need to do with it.
{% block positioning_flash_widget %}
{% spaceless %}
{# ... #}
<input value="{{ form.vars.game.id }}" />
{% endspaceless %}
{% endblock %}
If you don't need to merge arrays and just want to add a key, you can also do $view->vars['some_key'] = 'Some value';
How does os.path.join() work?
Please help me understand how the builtin os.path.join() function works. For example:
import os
print os.path.join('cat','dog') # 'cat/dog' no surprise here
print os.path.join('cat','dog').join('fish') # 'fcat/dogicat/dogscat/dogh'
On Mac (and I guess Linux too) os.path is an alias for the posixpath module. So looking into the posixpath.py module, the join() function looks like this:
def join(a, *p):
"""Join two or more pathname components, inserting '/' as needed.
If any component is an absolute path, all previous path components
will be discarded. An empty last part will result in a path that
ends with a separator."""
path = a
for b in p:
if b.startswith('/'):
path = b
elif path == '' or path.endswith('/'):
path += b
else:
path += '/' + b
return path
So join() returns a string. Why does os.path.join('something').join('something else') even work? Shouldn't it raise something like 'str' object has no attribute 'join'? I mean if I copy the function some other place and call it like renamed_join('foo','bar') it works as expected but if i do renamed_join('foo','bar').renamed_join('foobar') will raise an AttributeError as expected. Hopefully this is not a very stupid question. It struck me just when I thought I was starting to understand python...
For more understanding check the answers here
You can say: os.path.join('cat','dog','fish') and get reasonable behavior.
"Shouldn't it raise something like 'str' object has no attribute 'join'?" No. Python is not a liar...
You can't chain os.path.join like that. os.path.join returns a string; calling the join method of that calls the regular string join method, which is entirely unrelated.
So it was all a fortunate names coincidence. Thank you for clearing this for me!
Your second join call is not os.path.join, it is str.join. What this one does is that it joins the argument (as an iterable, meaning it can be seen as f, i, s, h) with self as the separator (in your case, cat/dog)
So basically, is puts cat/dog between every letter of fish.
Because str has a join attribute.
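To make the name coincidence concrete, here is a small demonstration of the two unrelated join operations (using posixpath explicitly so the separator is '/' on any platform):

```python
import posixpath  # what os.path aliases on Linux/Mac; used directly so '/' is the separator everywhere

path = posixpath.join('cat', 'dog')
print(path)  # cat/dog

# Chaining .join() on the result calls str.join, which uses the string
# as a separator between the characters of its iterable argument:
print(path.join('fish'))  # fcat/dogicat/dogscat/dogh

# The intended three-component path is a single call:
print(posixpath.join('cat', 'dog', 'fish'))  # cat/dog/fish
```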
bpy remove duplicate images (.001 .002 .003 etc.)
I have this script, but its not working as expected
import bpy
for img in bpy.data.images:
print("%s" % img.name)
if( ".0" in img.name):
img.name=img.name[:-4]
print("%s" % img.name)
I want to have one texture on different materials, now i have something like this
Edit: Solved.
for mat in bpy.data.materials:
if mat.node_tree:
for n in mat.node_tree.nodes:
if n.type == 'TEX_IMAGE':
if n.image is None:
print(mat.name,'has an image node with no image')
elif n.image.name[-3:].isdigit():
n.image = bpy.data.images[n.image.name[:-4]]
for imgs in bpy.data.images:
if imgs.name[-3:].isdigit():
print(imgs.name)
imgs.user_clear()
Thanks to all who helped.
What are you expecting? You are just renaming the images (they won't keep the same name; Blender will add the numbers back).
I'm fairly new to python, but you should probably not try to rename the textures but to reload Tex.Generic_Spec.png if Image Texture File has a number in the materials.
Probably this related topic gives you some idea how to do it: http://blender.stackexchange.com/questions/38052/how-can-i-make-a-script-to-make-cycles-load-a-new-texture-each-frame
img.name=img.name[:-4] is renaming the image to match the original image name, as this name already exists a numeric suffix is added to the other image with that name, which just undoes the change.
You need to go through your material nodes and get each image node to use the same image.
for mat in bpy.data.materials:
if mat.node_tree:
for n in mat.node_tree.nodes:
if n.type == 'TEX_IMAGE':
if n.image is None:
print(mat.name,'has an image node with no image')
elif n.image.name[-3:].isdigit():
n.image = bpy.data.images[n.image.name[:-4]]
Note that the last line doesn't just change the name used, it changes the image data block that the image node is linked to.
Once you have removed the users of the duplicate images they will be removed after you save and re-open the file.
Is 'if n.type == 'TEX_IMAGE' and '.0' in n.image.name:' sufficient to find only the image textures mentioned?
That will find all image texture nodes that use an image with '.0' in the name, using the same example test from your code. if n.type == 'TEX_IMAGE' and n.image.name[-3:].isdigit() should be a more accurate test. It does assume that bpy.data.images[n.image.name[:-4]] actually exists.
It works! on the test scene i had, but not on my real scene, n.image.name[-3:] returns "NoneType"
if n.image.name[-3:] returns NoneType then you don't have an image on that node.
Suggest adding a filepath test too, for the same image filename, different folder side case. (and for generated images?). Making a lookup / replacement table from bpy.data.images to start with could be the go.
The solution posted above bugs out in several situations. always have to check if data exists before pulling it, also if the original image has digits then remove it.
# for material check for texture nodes
# if it has a digit check if the original exists
# if original has also digits remove
for mat in bpy.data.materials:
if mat.node_tree:
for n in mat.node_tree.nodes:
if n.type == 'TEX_IMAGE':
if n.image is None:
print(mat.name,'has an image node with no image')
elif n.image.name[-3:].isdigit():
name = n.image.name[:-4]
exists = False
for img in bpy.data.images:
if img.name in name:
exists = True
if exists:
n.image = bpy.data.images[name]
else:
n.image.name = name
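The suffix test used in the snippets above can be checked on its own, without bpy. This is a hypothetical helper, not part of the Blender API, that mirrors the name[-3:].isdigit() logic while also requiring the dot so plain names ending in digits are untouched:

```python
def base_name(name):
    """Strip Blender's duplicate suffix: 'Tex.001' -> 'Tex'.

    Mirrors the name[-3:].isdigit() test used above, additionally
    requiring the '.' so names like 'image7' are left unchanged.
    """
    if len(name) > 4 and name[-4] == '.' and name[-3:].isdigit():
        return name[:-4]
    return name

print(base_name('Tex.Generic_Spec.png.001'))  # Tex.Generic_Spec.png
print(base_name('image7'))                    # image7
```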
Graphlab Sframe, retrieve multiple rows
I am trying to access multiple rows from a graphlab SFrame and convert them into a numpy array.
I have a database fd of 96000 rows and 4096 columns and need to retrieve the row numbers that are stored in a numpy array. The method that I have come up with is very slow. I suspect that it is because I keep increasing the size of the SFrame at every iteration, but I don't know of any method for pre-allocating the values. I need to grab 20000 rows and the current method does not finish.
fd=fd.add_row_number()
print(indexes)
xs=fd[fd['id'] == indexes[0]] #create the first entry
t=time.time()
for i in indexes[1:]: #Parse through and get indices
t=time.time()
xtemp=fd[fd['id'] == i]
xs=xs.append(xtemp) #append the new row to the existing sframe
print(time.time()-t)
xs.remove_column('id') #remove the ID Column
print(time.time()-t)
x_sub=xs.to_numpy() #Convert the sframe to numpy
You can convert your SFrame to pandas.DataFrame, find rows with ids from indexes, remove DataFrame's column 'id' and convert this DataFrame to numpy.ndarray.
For example:
import numpy as np
fd=fd.add_row_number()
df = fd.to_dataframe()
df_indexes = df[df.id.isin(indexes)]
df_indexes = df_indexes.drop(labels='id', axis=1)
x_sub = np.array(df_indexes)
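A toy version of this approach with made-up data (the column names are illustrative; after fd.to_dataframe() whatever columns the SFrame had come along unchanged):

```python
import numpy as np
import pandas as pd

# Stand-in for the converted SFrame: an 'id' column plus feature columns.
df = pd.DataFrame({'id': [0, 1, 2, 3, 4],
                   'f0': [10, 11, 12, 13, 14],
                   'f1': [20, 21, 22, 23, 24]})

indexes = np.array([1, 3])                 # row numbers to fetch
df_sub = df[df['id'].isin(indexes)]        # one vectorised filter, no per-row appends
df_sub = df_sub.drop(labels='id', axis=1)
x_sub = np.array(df_sub)
print(x_sub)
# [[11 21]
#  [13 23]]
```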
How to transform Columns to rows in R?
I kind of have the same problem. I have data in this kind of order (';' marks the column separator):
D1 ;hurs
1 ;0.12
1 ;0.23
1 ;0.34
1 ;0.01
2 ;0.24
2 ;0.67
2 ;0.78
2 ;0.98
and I like to have it like this:
D1; X; X; X; X
1;0.12; 0.23; 0.34; 0.01;
2;0.24; 0.67; 0.78; 0.98;
I would like to sort it with respect to D1 and like to reshape it? Does anyone have an idea? I need to do this for 7603 values of D1.
Do you need an output file with that format? Is the list of factors (D1) a sequence?
Maybe I'm missing something, but why not just transpose the matrix? Then, use order to sort it. I may provide an example if you need.
I would look into Hadley's reshape package. It does all sorts of great stuff. The code below will work with your toy example, but there is probably a more elegant way of doing this. Simply, your data already appear to be in the ?melt form, so you can simply ?cast it.
Also, check out these links
http://www.statmethods.net/management/reshape.html
http://had.co.nz/reshape/
library(reshape)
help(package=reshape)
?melt
D1 <- c(1,1,1,1,2,2,2,2)
hurs <- c(.12, .23, .34, .01, .24, .67, .78, .98)
var <- rep(paste("X", 1:4, sep=""), 2)
foo <- data.frame(D1, var, hurs)
foo
cast(foo, D1~var)
Digging up skeletons not likely to ever be claimed, why not use aggregate()?
dat = read.table(header = TRUE, sep = ";", text = "D1 ;hurs
1 ;0.12
1 ;0.23
1 ;0.34
1 ;0.01
2 ;0.24
2 ;0.67
2 ;0.78
2 ;0.98")
aggregate(hurs ~ D1, dat, c)
# D1 hurs.1 hurs.2 hurs.3 hurs.4
# 1 1 0.12 0.23 0.34 0.01
# 2 2 0.24 0.67 0.78 0.98
If the lengths of each id in D1 are not the same, you can also use base R reshape() after first creating a "time" variable:
dat2 <- dat[-8, ]
dat2$timeSeq <- ave(dat2$D1, dat2$D1, FUN = seq_along)
reshape(dat2, direction="wide", idvar="D1", timevar="timeSeq")
# D1 hurs.1 hurs.2 hurs.3 hurs.4
# 1 1 0.12 0.23 0.34 0.01
# 5 2 0.24 0.67 0.78 NA
You might check Hadley Wickham's reshape package and its cast() function
http://had.co.nz/reshape/
I have assumed that there is an unequal number of hurs values per D1 (7603 values)
txt = 'D1 ;hurs
1 ;0.12
1 ;0.23
1 ;0.34
1 ;0.01
2 ;0.24
2 ;0.67
2 ;0.78
2 ;0.98'
dat <- read.table(textConnection(txt),header=T,sep=";")
dat$Lp <- 1:nrow(dat)
dat <- dat[order(dat$D1,dat$Lp),]
out <- split(dat$hurs,dat$D1)
out <- sapply(names(out),function(x) paste(paste(c(x,out[[x]]),collapse=";"),";",sep="",collapse=""))
reshape2 is actually better than reshape. Using reshape uses significantly more memory and time than reshape2 (at least for my specific example using something like 9million rows).
Not sure why you're criticizing answers that are nearly 3 months old, but yes it does count as a real answer. The newness of reshape2 makes it less likely that everyone knew about it.
How to (chrome.tabs) refresh Popup with background script
I am opening the chrome store in a fullscreen popup.
onclick="window.open('https://chrome.google.com/webstore/detail...', 'targetWindow', 'width=' + window.outerWidth + ',height=' + window.outerHeight + ',top=' + window.screenTop + ',left=' + window.screenLeft);"
After the user has installed the extension, the focus comes back to the main tab and my Chrome extension background script runs the following code:
chrome.runtime.onInstalled.addListener(function (details) {
if (details.reason == "install") {
thankyou();
}
});
function thankyou(){
chrome.tabs.update({
url: 'https://mythankyoupage.com',
active: true
});
}
The problem is the Thankyou page is loaded in the main tab but NOT in the original popup, which is lost in the background.
How can I make sure the thank you page is loaded in the popup that was used to install from the chrome store ?
I have read https://developer.chrome.com/extensions/tabs but could not understand how to target my popup
Thank you for your guidance
UPDATE 1 :
following good advices from comments, I tried this :
(I tried without "windowType":"popup" too)
function thankyou(){
    chrome.tabs.query({url: "https://chrome.google.com/webstore/detail/...", "windowType": "popup"}, function(results) {
        chrome.tabs.update({
            url: 'https://thankyou.com'
        });
    });
}
tried this as well :
function(results) { chrome.tabs.update(tabs[0].id, { url: 'thankyou.com', active: true }) }
but all of them resulted in refreshing the main window and not the popup.
I do uninstall each time the extension (not just refresh).
Super simple, send a message from background to popup in your thankyou function that tells popup to display the thankyou screen.
Use chrome.tabs.query({url: 'your-webstore-url'}, tabs => chrome.tabs.update(tabs[0].id, { your-update-params }))
@TomShaw : could you point me to the direction on how to do that ?
Check out https://stackoverflow.com/questions/12265403/passing-message-from-background-js-to-popup-js @wOxxOm solution is also an excellent idea. The bottom line is popup needs to execute some code to display your thank you screen.
@wOxxOm That seems like a very interesting approach. I have tried this: chrome.tabs.query({url: "https://chrome.google.com/webstore/detail/...*"}, function(results) { chrome.tabs.update(tabs[0].id, { url: 'https://thankyou.com', active: true }) }) but it still results in the main window, not the popup, being refreshed.
@TomShaw The difference with the link you provided, which I had read, is my popup was not generated by my extension and I have no control on it since its hosted by the chrome store. But that may be just me lacking proper knowledge
Make sure you've reloaded the extension on chrome://extensions page after editing. I can't help further without seeing more of the new code (MCVE).
How to properly get a message out of a channel in Telegram?
I'm trying to retrieve a message from a channel in Telegram. I have its id (the id with which it is displayed in the channel), the token of the bot that is added to this channel as an administrator (I retrieve it by email from the database), and the name of the channel, from which I retrieve the channel id of type Long.
Since I couldn't find a suitable method in the lib I'm using, I turned to this.
I tried writing this kind of code, but the response comes back 404.
fun getMessageFromChannel(email: String, channelUsername: String, messageId: Long): Message? {
return runBlocking {
val botToken = getTelegram(email) ?: return@runBlocking null
val chatId = getChannelId(botToken, channelUsername) ?: return@runBlocking null
val response = client.get("https://api.telegram.org/bot$botToken/getMessages") {
parameter("channel", chatId)
parameter("id", messageId)
}
if (response.status.value == 200) {
val getMessageResponse: GetMessageResponse = response.body()
if (getMessageResponse.ok) {
return@runBlocking getMessageResponse.result
}
}
null
}
}
I guess I don't understand something about the Telegram API; unfortunately I couldn't find how to send such requests. So I would appreciate any help.
Zuul not routing to service
Zuul not routing to student-service which is registered in Eureka Server.
using Greenwich.SR1
bootstrap.yml
server:
port: 17005
# Eureka server details and its refresh time
eureka:
instance:
leaseRenewalIntervalInSeconds: 1
leaseExpirationDurationInSeconds: 2
client:
registry-fetch-interval-seconds: 30
serviceUrl:
defaultZone: http://localhost:8761/eureka/
healthcheck:
enabled: true
lease:
duration: 5
instance:
lease-expiration-duration-in-seconds: 5
lease-renewal-interval-in-seconds: 30
# Current service name to be used by the eureka server
spring:
application:
name: app-gateway
# Microservices routing configuration
zuul:
routes:
students:
path: /students/**
serviceId: student-service
host:
socket-timeout-millis: 30000
hystrix:
command:
default:
execution:
isolation:
thread:
timeoutInMilliseconds: 30000
I've added a PreFilter to log the request from the UI. Whenever the request from the UI hits Zuul, I observe the below in the logs but nothing after that - the request is not routed to the student-service.
Request Method : GET Request URL : http://localhost:17005/students/School2
2019-04-26 21:45:54.314 INFO 18196 --- [o-17005-exec-10] c.netflix.config.ChainedDynamicProperty : Flipping property: student-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit =<PHONE_NUMBER>
2019-04-26 21:45:54.387 INFO 18196 --- [o-17005-exec-10] c.n.u.concurrent.ShutdownEnabledTimer : Shutdown hook installed for: NFLoadBalancer-PingTimer-student-service
2019-04-26 21:45:54.387 INFO 18196 --- [o-17005-exec-10] c.netflix.loadbalancer.BaseLoadBalancer : Client: student-service instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=student-service,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2019-04-26 21:45:54.682 INFO 18196 --- [o-17005-exec-10] c.n.l.DynamicServerListLoadBalancer : Using serverListUpdater PollingServerListUpdater
2019-04-26 21:45:54.720 INFO 18196 --- [o-17005-exec-10] c.netflix.config.ChainedDynamicProperty : Flipping property: student-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit =<PHONE_NUMBER>
2019-04-26 21:45:54.723 INFO 18196 --- [o-17005-exec-10] c.n.l.DynamicServerListLoadBalancer : DynamicServerListLoadBalancer for client student-service initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=student-service,current list of Servers=[<IP_ADDRESS>:56567],Load balancer stats=Zone stats: {defaultzone=[Zone:defaultzone; Instance count:1; Active connections count: 0; Circuit breaker tripped count: 0; Active connections per server: 0.0;]
},Server stats: [[Server:<IP_ADDRESS>:56567; Zone:defaultZone; Total Requests:0; Successive connection failure:0; Total blackout seconds:0; Last connection made:Thu Jan 01 05:30:00 IST 1970; First connection made: Thu Jan 01 05:30:00 IST 1970; Active Connections:0; total failure count in last (1000) msecs:0; average resp time:0.0; 90 percentile resp time:0.0; 95 percentile resp time:0.0; min resp time:0.0; max resp time:0.0; stddev resp time:0.0]
]}ServerList:org.springframework.cloud.netflix.ribbon.eureka.DomainExtractingServerList@42abe3b4
2019-04-26 21:45:55.742 INFO 18196 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty : Flipping property: student-service.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit =<PHONE_NUMBER>
There is no exception trace in the Microservices.
The response to UI:
type=Not Found, status=404
Please help on establishing this routing.
Could it be that the downstream service is responding with a 404?
Thank you for the clue - My service method mapping is @GetMapping(value = "/students/{school}"). It works with the url http://localhost:17005/students/students/School2 (Two /students in the url)
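An alternative to doubling the path segment: by default Zuul strips the matched prefix before forwarding, so /students/School2 reaches student-service as /School2, and a controller mapped to @GetMapping("/students/{school}") answers 404. Keeping the prefix on that route avoids changing either URL (a sketch; the rest of the configuration stays as above):

```yaml
zuul:
  routes:
    students:
      path: /students/**
      serviceId: student-service
      strip-prefix: false   # forward the /students prefix to the service
```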
Loops in 8051 assembly? (at89s52)
I am trying to program in assembly for an AT89S52 microcontroller. I have found a couple of very basic tutorials on YouTube that have not helped me much, since I am programming in Keil and most of them are in C; that is why I ask for help here.
I would like to do an insertion sort that arranges the numbers I have, which are: 05H, 01H, 04H, 02H and 08H, but as much as I have tried I have not been able to write the loop for it. I wanted to ask if someone could tell me how I could start, since I can't think of a way to do my insertion sort. This is my code at the moment:
ORG 0000H
AJMP MAIN
ORG 0040H
MAIN:
MOV DPTR, #70H
MOV A, #05H
MOVX @DPTR, A
INC DPTR
MOV A, #01H
MOVX @DPTR, A
INC DPTR
MOV A, #04H
MOVX @DPTR, A
INC DPTR
MOV A, #02H
MOVX @DPTR, A
INC DPTR
MOV A, #08H
MOVX @DPTR, A
MOV R0, #1H
CJNE R0, #5H, CICLO
CICLO: ;loop
MOV R1, R0
END
Write the algorithm first in C or pseudocode or draw a flowchart. Translate into assembly step by step. [Edit] your question if you get stuck, describe what specific issue you have.
Do you mean "circles" or "cycles" in the title? (Round shapes on a screen, or clock cycles, or something else cyclic like a data structure?)
Oh, I think you mean loops, since you're talking about doing an Insertion Sort.
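Following the advice above, here is insertion sort in plain Python on the question's five bytes; getting this loop structure clear first makes the translation into 8051 registers and CJNE/DJNZ loops much easier:

```python
def insertion_sort(a):
    # Outer loop: element i is inserted into the sorted prefix a[0..i-1].
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Inner loop: shift larger elements one slot to the right.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [0x05, 0x01, 0x04, 0x02, 0x08]
print(insertion_sort(data))  # [1, 2, 4, 5, 8]
```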
MOV DPTR, #70H
L1:
movx a, @dptr
mov r0, a
inc dptr
movx a, @dptr
mov r1, a
clr c
subb a, r0
jc less
;more
mov a, r1
movx @dptr, a
mov a, dpl
clr c
subb a, #1
jnc L2
dec dph
L2:
mov dpl, a
mov a, r0
movx @dptr, a
jmp L4
less:
mov a, r0
movx @dptr, a
mov a, dpl
clr c
subb a, #1
jnc L3
dec dph
L3:
mov dpl, a
mov a, r1
movx @dptr, a
L4:
inc dptr
mov r0, dpl
mov r1, dph
cjne r0, #70h+5-1, L1 ;low byte: your base address is 70h, 5 elements
cjne r1, #00h, L1 ;high byte of your base address is 00h
mov dptr, #70h
jmp L1
Please use triple backticks to format your code as code. I did that for you this time, but didn't undo the double-spacing so it still looks like a mess. Also, even more importantly, answers (especially assembly) should have some text that explain the answer, like how it works, and/or what was wrong with the code in the question.
I'm using WinSQL Lite and trying to use DATEPART but am not sure how
I need to group this SQL string by the hour for one USER_ID but I am unsure how to use the DATEPART function I've been reading about.
Basically, I want to get the performance data for every hour of this user from the time they start work till the time they go home. If the user comes in at 6am I want to see what their performance data is at 7 am,8am,9am...etc grouped vertically so that I can import the data into excel and make a chart.
-------EmpPerf--------
select u.user_id, SUM(tl.elapsed_time) AS ELAPSED, SUM(CASE WHEN
(tl.standard_time = 0) THEN 0 ELSE (tl.elapsed_time) END)AS PERFTIME,
SUM(tl.standard_time)as Standard_time
from perfplus.tale tl, perfplus.task tk, perfplus.users u
WHERE tl.task_id = tk.ID
AND tl.facility_id = tk.facility_id
AND tk.start_time BETWEEN {ts '2017-05-04 04:00:00'} and {ts '2017-05-05 03:59:59'}
AND tk.facility_id = '130'
AND tk.user_id = u.user_id
AND u.shift_name NOT IN ('9WMS','9','T2016','T2017')
and tk.status like ('TS_PROCESSED')
and u.user_id = 'LBRSHALL'
group by u.user_id
Got it figured out:
-------EmpPerf--------
select hour(tk.start_time), SUM(tl.elapsed_time) AS ELAPSED, SUM(CASE WHEN
(tl.standard_time = 0) THEN 0 ELSE (tl.elapsed_time) END)AS PERFTIME,
SUM(tl.standard_time)as Standard_time
from perfplus.tale tl, perfplus.task tk, perfplus.users u
WHERE tl.task_id = tk.ID
AND tl.facility_id = tk.facility_id
AND tk.start_time BETWEEN {ts '2017-05-04 04:00:00'} and {ts '2017-05-04
23:00:00'}
AND tk.facility_id = '130'
AND tk.user_id = u.user_id
AND u.shift_name NOT IN ('9WMS','9','T2016','T2017')
and tk.status like ('TS_PROCESSED')
and u.user_id = 'LBRSHALL'
group by hour(tk.start_time)
order by hour(tk.start_time)
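The same group-by-hour idea can be tried outside WinSQL; here is a miniature with SQLite (hypothetical table and data; SQLite has no hour() function, so strftime('%H', ...) plays that role):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE task (start_time TEXT, elapsed_time REAL)")
con.executemany("INSERT INTO task VALUES (?, ?)", [
    ('2017-05-04 06:15:00', 10.0),   # made-up performance rows
    ('2017-05-04 06:45:00', 5.0),
    ('2017-05-04 07:10:00', 7.5),
])
rows = con.execute(
    "SELECT strftime('%H', start_time) AS hr, SUM(elapsed_time) "
    "FROM task GROUP BY hr ORDER BY hr"
).fetchall()
print(rows)  # [('06', 15.0), ('07', 7.5)]
```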
The new flood mapping system keeps crashing every time we try to run vulnerability assessments across the entire county.
That's because you're probably still using a traditional relational database. Geographic queries don't work the same way as regular data lookups.
What do you mean? We're just storing coordinates and elevation data like any other numbers.
Think about it this way - when you want to find all properties within a flood zone, a regular database has to check every single record against your boundary coordinates. That's millions of calculations for something that should be simple.
But isn't that how databases work? You query the data and it searches through everything?
Not with spatial databases. They use something called R-tree indexing that organizes geographic data into hierarchical rectangles. Instead of checking every property individually, the system can eliminate entire regions at once if they don't intersect with your flood zone.
That sounds like it would speed things up, but how much of a difference could it really make?
Consider this - your current system probably takes hours to identify at-risk properties during an emergency. With proper spatial indexing, that same analysis runs in minutes. The filter-and-refine technique first uses the spatial index to eliminate obvious non-matches, then only does detailed geometric calculations on the remaining candidates.
So the database actually understands geography, not just numbers?
Exactly. It can handle complex queries like finding the shortest evacuation route that accounts for road capacity, or determining which emergency shelters serve overlapping coverage areas. Your traditional database would choke on a Voronoi diagram calculation to assign each neighborhood to its closest emergency facility.
We've been trying to solve our response time problems by adding more servers, but maybe we need different architecture entirely.
Right. Spatial databases integrate location-based data with attribute data in ways that make complex geographic analysis possible. When you're dealing with real-time sensor data from flood gauges, weather stations, and traffic monitors, you need systems designed specifically for geospatial relationships.
The emergency management office has been pushing us to incorporate more predictive modeling - tracking where floodwaters will spread based on current conditions and terrain.
That's exactly what spatial databases excel at. They can process geographic data alongside temporal data, factoring in variables like wind direction for wildfire spread or soil saturation levels for flood prediction. The key insight is that location isn't just another data field - it's a fundamental organizing principle that requires specialized storage and query methods.
This explains why our current system struggles with anything more complex than simple point lookups.
Traditional databases were never designed to understand that two points could be spatially related even if they're stored in completely different records.
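The filter-and-refine technique described in the conversation can be sketched in a few lines of Python. This is purely illustrative: a real spatial database would drive the filter step with a hierarchical R-tree rather than a flat scan, but the shape of the computation (cheap bounding-box test first, exact geometry only on the survivors) is the same. All function names here are made up for the sketch.

```python
def bbox(poly):
    """Axis-aligned bounding box (min_x, min_y, max_x, max_y) of a polygon."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (min(xs), min(ys), max(xs), max(ys))

def boxes_overlap(a, b):
    """Cheap filter test: do two axis-aligned boxes intersect?"""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def point_in_poly(pt, poly):
    """Exact (refine) step: ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def properties_in_zone(properties, zone_poly):
    """Filter with bounding boxes, then refine with exact geometry."""
    zone_box = bbox(zone_poly)
    candidates = [p for p in properties
                  if boxes_overlap((p[0], p[1], p[0], p[1]), zone_box)]
    return [p for p in candidates if point_in_poly(p, zone_poly)]
```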
How to get powershell IIS Instance and associated certificate thumbprints
I am trying to list all the IIS sites and their associated certificate thumbprints.
I really just want it like
IIS Site, Thumbprint.
Seems simple enough, but I can't for the life of me get this to work. (Note: I want the actual IIS site, not the friendly name or certificate subject.)
I can get them individually, but how do I join them to show both?
$sites = Get-Website | ? { $_.State -eq "Started" } | % { $_.Name }
$certs = Get-ChildItem IIS:SSLBindings | ? {
$sites -contains $_.Sites.Value
} | % { $_.Thumbprint }
Try this:
# Collect all SSL bindings once, then pair each started site with its thumbprint
$certs = Get-ChildItem IIS:SSLBindings
Get-Website |
    Where-Object State -eq "Started" |
    ForEach-Object {
        $siteName = $_.Name
        [PsCustomObject]@{
            Site      = $siteName
            CertThumb = ($certs | Where-Object { $_.Sites.Value -contains $siteName }).Thumbprint
        }
    }
Can I omit the verb in the second part of this sentence, after the colons?
Is this sentence grammatically correct? I mean, can I omit the verb in the second part after the colon?
Differently from other works, they consider all information usually available in social media collections: both user-provided information such as textual content (titles, descriptions and tags) and automatic generated content (creation time and geo-coordinates).
You mean "generated"?
The part after the colon is a subordinate clause in this case, so it's fine for it not to have a verb. There are a couple of other problems, however.
"Differently from other works" - as others have noted, this is awkward at the beginning of a sentence (and unnecessarily wordy elsewhere in a sentence). Try "Unlike" instead.
"they consider" - I assume the previous text gives a referent for "they"?
"automatic generated content" - since automatic modifies generated, not content, it needs to be an adverb, not an adjective; and because the phrase "automatically generated" is being used as a single adjective modifying "content", it needs a hyphen: automatically-generated content.
Rewrite:
Unlike other works, they consider all information usually available in social media collections: both user-provided information such as textual content (titles, descriptions, and tags) and automatically-generated content (creation time and geo-coordinates).
The sentence is not correct, but this has nothing to do with missing a verb after the colon. The colon introduces a list, so naturally needs no verb.
However, the first part of the sentence is, if not actually grammatically incorrect, at least clumsy. You probably want something like:
Unlike other works...
Or perhaps "In contrast to other works...". I don't think "differently from" is strictly wrong, but I definitely agree it is unidiomatic and sounds clumsy.
Ah yes, "in contrast" is definitely better.
Or: "Differing from other works..."
The sentence is grammatically incorrect. In addition, there are several non-grammar-related edits that you should consider.
Differently from other works,
This sounds awkward. You should move the adverb closer to the verb "consider".
You could change "differently from other works" to "unlike other works", but don't put it in the front of the sentence. If you write "Unlike other works" in front, then it would be a dangling modifier, in which "they" would mean a work "unlike other works".
What "other works"? This phrase is vague.
they consider
Who are "they"?
Why would "they" consider?
If this is an excerpt from a paragraph, then the paragraph should answer these questions. Alternatively, it is better and clearer to include a noun instead of a pronoun.
both / such as
You can use either. Do not use both words at the same time. It unnecessarily makes your sentence more complex.
titles, descriptions and tags
I must have serial commas in my sentences, but that is your choice. Titles, descriptions, and tags.
automatic generated content
automatically generated content
The revised sentence would be:
[They] consider, differently from
other works, all information usually
available in social-media collections:
user-provided information, such as
textual content (titles, descriptions,
and tags), and automatically generated
content (creation time and
geo-coordinates).
You offer some good advice. But plural "information"? I'd say "both types of information". And "they" might be clear from context. And "differently" still sounds a bit off. I also think a hyphen is needed in "social media".
Although I did not mention it, I agree that the "differently" phrase still sounds weird after the revision, but I included it anyway if the OP really insists on keeping it. Hmm, I did not notice the hyphen and plural "information". Thanks. How about "in both social-media collections of user-provided information"? Unfortunately, this changes the meaning, and your "types of" suggestion adds a double "of", perhaps a stylistic issue. A simple Google search reveals that the unhyphenated "social media collection" is the most common, however.
@Rhodri, thanks. The original text should have included commas: "user-provided information, such as textual content (titles, descriptions and tags), and automatically generated content (creation time and geo-coordinates)". The "content" and parentheses parallelism are misleading.
How do I append text to the name of the file from a fullpath (string) in C?
For example:
"/home/erikas/myfile.txt" would become "/home/erikas/myfile-generated.txt"
"/home/erikas/myfile" would become "/home/erikas/myfile-generated"
"/my.directory/my.super.file.txt" would become "/my.directory/my.super.file-generated.txt"
It just feels like I am trying to re-invent a wheel. Is there any simple solutions to this problem? A function?
Maybe this will help https://stackoverflow.com/questions/18970103/split-a-file-name-from-a-path-in-c
You have a classic path-splitting case where you need to first determine if the filename contains a path component at all (e.g. is there a '/' character). If so, you can use strrchr and '/' to get a pointer to the last '/' and isolate the filename. Then the problem switches to "Do I have an extension?". Same thing regarding the '.' (with sanity checks to protect 'dot/hidden' files (e.g. .bashrc)). If there is a '.' and it is separating name.ext, you have a pointer to '.' which you can isolate and save .ext, append your string and put things back together.
or https://stackoverflow.com/questions/7180293/how-to-extract-filename-from-path
So, what's the rule for what needs changing? Add -generated before the last dot of the last component of the pathname, or append it if there is no dot in the filename component? What should happen with /home/erikas/old.file.name? Presumably /home/erikas/old.file-generated.name?
I've just managed to create my own solution for this.
Note that this code snippet will fail on full path like "myfile.txt" or "/home/my.user/", but in my case I used GTK to select file, so no issues to me:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libgen.h>
#ifdef _WIN32
#define SEPARATOR '\\'
#else
#define SEPARATOR '/'
#endif
#define GENERATED_APPEND_TEXT "-generated"
char *generate_output_fpath(char* fpath) {
    // strrchr returns NULL when the character is absent; subtracting NULL
    // from fpath would be undefined behaviour, so guard it first.
    char *sep = strrchr(fpath, SEPARATOR);
    char *dot = strrchr(fpath, '.');
    int last_separator_index = sep ? (int) (sep - fpath) : 0;
    int last_dot_index = dot ? (int) (dot - fpath) : 0;
char *new_fpath = malloc(strlen(fpath) + strlen(GENERATED_APPEND_TEXT) + 1); // +1 for \0
if(new_fpath == NULL){
return NULL; // malloc failed to allocate memory
}
// if dot does not exist or dot is before the last separator - file has no extension:
if ( !last_dot_index || last_dot_index < last_separator_index) {
new_fpath[0] = '\0'; //ensure it is empty
strcat(new_fpath, fpath);
strcat(new_fpath, GENERATED_APPEND_TEXT);
return new_fpath;
}
int fpath_length = strlen(fpath);
int append_text_length = strlen(GENERATED_APPEND_TEXT);
int i = 0;
int ii = 0;
for (; i < last_dot_index; i++) {
new_fpath[i] = fpath[i];
}
// We copied everything until dot. Now append:
for (; ii < append_text_length; ii++) {
new_fpath[i + ii] = GENERATED_APPEND_TEXT[ii];
}
// Now append extension with dot:
    for (; i < fpath_length; i++) {
        new_fpath[i + ii] = fpath[i];
    }
    new_fpath[i + ii] = '\0'; // malloc does not zero the buffer; terminate explicitly
    return new_fpath;
}
Result:
"/home/erikas/myfile.txt" would become "/home/erikas/myfile-generated.txt"
"/home/erikas/myfile" would become "/home/erikas/myfile-generated"
Note that full example can be seen/tested here: onlinegdb.com/HyPyfTvw7
Any tips regarding code optimization are welcome!
You should check that the memory allocation succeeds before using the allocated memory. You should probably simply return NULL if the allocation fails, and the calling code should check that NULL was not returned. Alternatively, you can print an appropriate error message and exit. Whether that's a good idea depends on the context in which it is used. If you're writing a command line program, it maybe OK; it's what I'd do, usually. If you're writing a GUI program, it probably isn't appropriate for multiple reasons and simply returning NULL is better.
Also, I don't think your code yet handles the case of /home/some.user/plain_file; it will add -generated after the . in the path, which is not what you said you want. The fix for that is easy; use int last_dot_index = (int) (strrchr(&fpath[last_separator_index], '.') - fpath);. You might also need to be careful about trailing slashes (/home/erikas/) and plain file names erikas_file. Both could give your code problems.
Updated to check if malloc did not fail. Also - this works perfectly fine - https://onlinegdb.com/HyPyfTvw7
Also I am using GTK to select file, so I am fool-proof that it won't be like "myfile.txt" or "/home/some.user/". Thanks anyway - I will mention this.
I misread (didn't read) the || last_dot_index < last_separator_index part of the test — you're right, that handles the "last dot before last slash" condition (unless there is a trailing slash, but that's an exceptional problem anyway). My apologies for that mistake. There was no way for me to know that you're using GTK to get the original file name and hence can only get absolute names that don't have a trailing slash — though worrying about the trailing slash is something of a stretch at best. (Even saying "I'm using GTK" wouldn't have told me that you'd get clean names.)
I would suggest two handy functions: basename and dirname instead.
Date formatting in Javascript Template literals
I have the following:
$('#search').on('keyup',function(){
var value = $(this).val();
$.ajax({
type : 'POST',
url : '{{ route("search") }}',
dataType: 'json',
data: {
'_token' : '{{ csrf_token() }}',
value : value,
company_id : '{{ request()->company_id }}'
},
success: function(res){
var tableRow = '';
$('#table-content').html('');
$.each(res, function(index, value){
tableRow = `<tr>
<td hidden> ${value.id} </td>
<td> ${value.employee_number} </td>
<td> ${value.name} </td>
<td> ${value.code } - ${value.description } </td>
<td> ${value.quantity } </td>
<td> ${value.novelty.unit } </td>
<td> ${value.date} </td>
<td> ${value.informed}</td>
</tr>`
$('#table-content').append(tableRow);
});
}
    });
});
It's working OK but the date is printing like this:
Image
And I want the date to print in d-m-Y format. I've been playing a lot with different options but none of them worked for me.
EDIT: I don't know if it changes something, but that template literal is inside the success function of an ajax request.
See https://stackoverflow.com/questions/13459866/javascript-change-date-into-format-of-dd-mm-yyyy for a helper function you could pass your "value.date"
Thanks!! @Tonton-Blax the first answer on that question worked like a charm. I've only had to add +1 to the date because for some reason it was always giving me the day before. I imagine it's something to do with timezones.
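One simple alternative, assuming the backend sends the date as an ISO-style string (`YYYY-MM-DD` or `YYYY-MM-DDTHH:MM:SS`): reorder the pieces by string slicing without ever constructing a `Date` object, which also sidesteps the UTC-vs-local off-by-one mentioned in the comments. The helper name `formatDMY` is made up for illustration:

```javascript
// Hypothetical helper: turn "YYYY-MM-DD" (or "YYYY-MM-DDTHH:MM:SS") into "DD-MM-YYYY".
// Pure string slicing avoids the timezone shift that `new Date(...)` can introduce.
function formatDMY(dateStr) {
  const [y, m, d] = String(dateStr).slice(0, 10).split('-');
  return `${d}-${m}-${y}`;
}
```

The table cell would then become `<td> ${formatDMY(value.date)} </td>`.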
Required: Method of moments fitting routine for the two-parameter generalized Pareto
I am currently using the evd package which fits a two-parameter GPD by maximum likelihood.
Since in small samples the MOM is superior to the ML estimation I'd like to give it a go. However, the POT package - which could do the job - is offline due to memory access errors.
There are many extreme value packages around. However I am ONLY interested in the two-parameter GPD given by
$G(y)= \begin{cases} 1-\left(1+ \frac{\xi y}{\beta} \right)^{-\frac{1}{\xi}} & \xi \neq 0 \\
1-\exp\left(-\frac{y}{\beta}\right) & \xi=0 \end{cases}$
or alternatively
$g(y)= \begin{cases} \frac{1}{\beta} \left( 1+\frac{\xi y}{\beta} \right)^{-1-\frac{1}{\xi}} & \xi \neq 0 \\
\frac{1}{\beta} \exp\left(-\frac{y}{\beta} \right) & \xi=0
\end{cases}$
Is there any package that can fit such a distribution using a method of moments approach?
Do you have any reference or source for the statement "in small samples the MOM is superior to the ML estimation" in the case of the GPD? (It relates to another question here - one of mine, as it happens.)
See here: Hosking(1987) -- Parameter and Quantile Estimation for the Generalized Pareto Distribution. There seem to be quite a few typos in this paper though.
Thank you very much. This is the Hosking and Wallis paper in Technometrics?
Jep, that's the one.
@Joz, can I ask what is the function that can be used for MLE estimation of GPD parameters in evd package?
As the case when $\xi = 0$ simply corresponds to an exponential distribution with scale parameter $\beta$, it is trivial to compute the method of moments estimator given $\overline y$, the first sample raw moment (the second is not needed since there is only one parameter to estimate in this case). For the case $\xi < 1/2$ with $\xi \ne 0$, we can easily calculate $${\rm E}[Y] = \frac{\beta}{1-\xi}, \quad {\rm E}[Y^2] = \frac{2\beta^2}{(1-\xi)(1-2\xi)},$$ which shows that the first and second raw moments of $Y$ are defined only if $\xi < 1/2$. Consequently, setting these to their respective sample moments and solving for the parameters easily yields the closed form solution $$\widehat{\beta} = \frac{\overline y \overline{y^2}}{2(\overline{y^2} - (\overline y)^2)}, \quad \widehat{\xi} = \frac{1}{2} - \frac{(\overline y)^2}{2(\overline{y^2} - (\overline y)^2)},$$ where $\overline{y^2} = \frac{1}{n} \sum_{i=1}^n y_i^2$ is the second sample raw moment.
(+1) The raw moments are defined even for negative values of $\xi$, not just $0\lt \xi\lt 1/2$, with the same solution.
Indeed, I was wondering why I had put that restriction in there when the equations didn't seem to need it.
It can be confusing because the PDF and CDF as given in the question are not quite correct: for $\xi\gt 0$ both are zero for $y\lt 0$ and for $\xi\lt 0$ they are supported only on the interval $[0, -\beta/\xi)$.
Unless $n$ is very small the results should be similar, since the usual (i.e. Bessel corrected) sample variance and $\overline{y^2} - (\overline y)^2$ are the same up to a factor of $\frac{n-1}{n}$.
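The closed-form estimators above translate directly into code. Here is a minimal sketch (the function name `gpd_mom` is made up) that computes $\widehat{\beta}$ and $\widehat{\xi}$ from the two sample raw moments:

```python
def gpd_mom(y):
    """Method-of-moments estimates (beta_hat, xi_hat) for the two-parameter
    GPD, using the closed-form solution in terms of the sample raw moments.
    Requires m2 - m1^2 > 0, i.e. a non-degenerate sample."""
    n = len(y)
    m1 = sum(v for v in y) / n          # first sample raw moment
    m2 = sum(v * v for v in y) / n      # second sample raw moment
    denom = 2.0 * (m2 - m1 * m1)
    beta_hat = m1 * m2 / denom
    xi_hat = 0.5 - m1 * m1 / denom
    return beta_hat, xi_hat
```

By construction, plugging the estimates back into $\mathrm{E}[Y]=\beta/(1-\xi)$ and $\mathrm{E}[Y^2]=2\beta^2/((1-\xi)(1-2\xi))$ reproduces the two sample moments exactly.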
java.nio.file package not detecting directory changes
I want to detect files being written to a directory and so thought the Java NIO package would be suitable. However, I've ran their tester code (https://docs.oracle.com/javase/tutorial/essential/io/examples/WatchDir.java)
and it will only detect file changes made by the machine running the script.
i.e. we have multiple servers running which share a number of mounted drives. If I log into one of these servers, run the test code, log into the same machine again via another terminal and make changes to the directory I'm watching, these changes will be detected. However, if I log into a different server, these changes are not. Is this a fundamental problem with NIO, in which case is there something else I should use, or is there a workaround?
sorry- didn't mean to be rude, both answers were useful.
Thanks for the quick comeback!
It kinda tries to warn you: WatchService
Platform dependencies
[...]
If a watched file is not located on a local storage device then it is
implementation specific if changes to the file can be detected. In
particular, it is not required that changes to files carried out on
remote systems be detected.
I'm afraid you'll need to periodically poll manually.
Ah, not mentioned in the tutorial! ok, thanks I'll do it manually then. Cheers
That isn't necessarily a problem of Java NIO.
It very much depends on the overall stack, as in: which file systems are used, and which operating systems are running on your servers.
Example: when you have one server X create a new file on a shared file system, such as AFS for example, then another server looking at the shared directory might only see that there is a new file. But as long as the file is written to, you need to log into X to see the latest writes. Only when the writing process is done, and closes the file, the written content becomes visible to other servers.
So, as said, the real answer is to research the characteristics of the file system / server OS(es) you are using. Which nicely aligns with/explains what the other answer is stating: Java NIO can't give you generic guarantees, because the exact behavior is implementation specific.
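Since WatchService may never see changes made on remote systems, one hedged sketch of the polling fallback is to snapshot the directory on a timer (for example, building a name-to-last-modified map with Files.list and Files.getLastModifiedTime) and diff consecutive snapshots. The class and method names below are invented for illustration; only the diff step is shown, since the snapshot step depends on your file system:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: given two snapshots of (file name -> last-modified millis),
// report every file that was created, modified, or deleted between them.
public class PollDiff {
    public static Set<String> changed(Map<String, Long> before, Map<String, Long> after) {
        Set<String> result = new HashSet<>();
        for (Map.Entry<String, Long> e : after.entrySet()) {
            Long previous = before.get(e.getKey());
            if (previous == null || !previous.equals(e.getValue())) {
                result.add(e.getKey()); // created or modified
            }
        }
        for (String name : before.keySet()) {
            if (!after.containsKey(name)) {
                result.add(name); // deleted
            }
        }
        return result;
    }
}
```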
Why do we need the unit object to be the terminal object in a cartesian monoidal category?
I know almost for all the examples such as ${\mathrm{Set}}$, $\mathrm{Cat}$, e.t.c., with the categorical product being the monoidal tensor product, the unit object is the terminal object. Is this an assumption, or it can be proved that if $(\mathcal{C},\times,I)$ is a monoidal category where $\mathcal{C}$ contains a terminal object, then $I$ has to be terminal? (I tried to prove this but failded.)
If this is only an assumption, is there any other "counterexample"?
Generalization: https://mathoverflow.net/questions/437241/proof-that-the-unit-of-a-cartesian-monoidal-category-is-terminal
Let $T$ be a terminal object for $\mathcal{C}$. Then $T\times I\cong T$ since $I$ is the unit of your monoidal structure, but also $T\times I\cong I$ since $T$ is terminal. So, $I\cong T$ and $I$ is also terminal.
(This is just the usual argument that the unit of a monoid is unique, applied to the monoid of objects of $\mathcal{C}$ up to isomorphism with $\times$ as the operation.)
Why does terminality of $T$ imply $T \times I \cong I$?
See https://math.stackexchange.com/questions/542911/proving-basic-lemmas-about-categories-with-finite-products-and-terminal-initial.
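To spell out the step questioned in the comments (assuming, as in the question, that $\times$ is the categorical product): for every object $X$,

```latex
\begin{align*}
\operatorname{Hom}(X,\, T \times I)
  &\cong \operatorname{Hom}(X, T) \times \operatorname{Hom}(X, I)
     && \text{universal property of the product} \\
  &\cong \{*\} \times \operatorname{Hom}(X, I)
     && \text{$T$ terminal: exactly one map $X \to T$} \\
  &\cong \operatorname{Hom}(X, I).
\end{align*}
```

These bijections are natural in $X$, so $T \times I \cong I$ by the Yoneda lemma.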
How to deal with rat and mouse, and fox, wolf, and jackal in story books?
I don't know the difference between rat and mouse, and fox, wolf, and jackal in real life. Means that I know they are different but I can't make out who's who on seeing them.
The story books have hand painted pictures of all these animals, and somewhere they call them mouse and other place they call them rat. Same is the case with fox, wolf, and jackal.
The toddler is 22 months old. I read her the stories and she listens attentively. I point out to rat and say this is rat and then ask her to point out to the rat again, which she does.
Question: Should I stick to rat and fox irrespective of what the story book writes? Should I use rat, mouse, and fox, wolf, jackal whenever story book mentions them?
Shouldn't that be confusing to the child? Till what age should I continue doing the same?
I forgot turtle and tortoise too! :(
Do you mean that sometimes they depict them similarly or that you do not know the difference personally? I'd stick with what the book says. At 22 months your daughter can soak in the idea that a tortoise looks like a turtle but lives on land and a turtle looks like a tortoise but lives in water. Same idea with a rat and a mouse. And I personally don't see a relation between a fox, wolf and jackal. Those are pretty far apart. Fun thing with kids is you can explain it however you feel it makes sense to you.
@KaiQing Books depict them similarly. As far as "fox, wolf and jackal" are concerned - seeing the hand-painted pictures, I am sure most people won't be able to notice the difference. Nor can anyone notice the difference between rat and mouse. The story books don't use photographs, they use paintings. Also, the animals wore clothes sometimes.
To me, even in hand drawn paintings, I would expect some difference, mostly in coloring (rat is brown, mouse is grey, wolf is grey and big, fox is red with white tail tip and small, jackal is brown with black... ) If it doesn't show this, I think it is a bit odd. Do they wear different clothing? If the point of the story is that they look the same/are confused for each other, maybe the book is meant for older kids?
Did anyone else immediately think of "What Does the Fox Say" when they read this question? (Search on youtube for that text if you don't know what I mean, and then be prepared for your children being utterly lost in laughter for a good five to ten minutes.)
If I'm understanding your question correctly (and please correct me if I'm wrong), the artwork in the book doesn't sufficiently differentiate the characters enough that you (as an adult) can tell them apart.
I think your concern about confusion is valid.
Is there a particular reason you really want to keep using that book? I would be a bit concerned that your child might not be getting the visual learning she can benefit from with such a confusing book.
There's a reason young children's books have easily identifiable creatures and objects. It's to help them to learn shapes, patterns, colors, relative sizes, etc. A picture book that fails to do this fails to give a child the visual cues they need.
How important is it? That's up to you. I doubt your daughter will mistake the animals as an adult in real life. But she is missing something now.
My eldest child's first word was "Moo." He was only seven months old at the time. (A paper I link to included a study that concluded my child should not have been able to do this for another two months!) I would read to him often, and one of my favorite authors had pictures that were plain but colorful and easy to distinguish. One day, as I had done many times, when I turned to a page with a cow, he said "moo". I was a bit stunned (I had not thought he would have been able to do so this early.) But he did this consistently. (I included a link at the bottom to the book. If you flip through the pages, you'll see what I mean about the simple yet colorful visual style.)
My child was (is) not a genius. I thought I was just entertaining both of us at the same time. However, he was clearly learning more than I realized.
My advice is to use books where the child receives better visual cues. You may enjoy it as much as your daughter, and she is learning more than you might think.
I was sad the day I donated all the children's books that my kids didn't want to keep to a children's hospital. I have many happy memories of reading these books to my children. I felt like they were old friends.
Tomie dePaola's Mother Goose
The Origins of Joint Visual Attention in Infants
+1 for the suggestion to use a different book and the links at the bottom. Good stuff per the norm :)
Over time kids get better at distinguishing animals... for a while, my toddler called most four legged mammals (cat, dog, cow, horse) all "cat" because we had one at home, and the other animals were close enough. It's basically a matter of whether they've been shown the difference, either in-person or in pictures.
Provide as much range of information as possible so her vocabulary (and imagination) have room to expand -- we patiently provided the correct name for dogs, cows, etc. and eventually she distinguished different species accurately. Looking at photographs can be a much better way to accurately explore differences (e.g., "Look, the fox is small and orange, the wolf is big and gray!") that hand-painted pictures may not show as well.
Dogs and cows, and cats can be seen daily in the streets here. She has seen a rat too. And I don't know whether that was a rat or a mouse. I called it rat. The major problem is with the animals which can't be seen on streets in daily life like - fox, wolf, jackal, turtle, tortoise.
Mice are much, much smaller than rats (if you see it on the street, it's probably a rat) :) The internet may be the best resource for "not local" animals -- while I've seen turtles and tortoises in a zoo, I have only once seen a wolf (in a wolf preserve) and never a fox or jackal, and I have no doubt that zoo stock varies widely from place to place!
There are some good answers, but I was puzzled over one part of your question.
You write:
I don't know the difference between rat and mouse, and fox, wolf, and jackal in real life. Means that I know they are different but I can't make out who's who on seeing them.
Do you mean that you do not know the difference between those animals, and would not be able to tell them apart? If this is a book your child enjoys, my advice would be to take some time to learn them apart.
A good place for you to start would be Wikipedia:
Fox
Jackal
Wolf
Mouse
Rat
Turtle
Tortoise
(note that use of turtle/tortoise actually differs a bit depending on American, British or other English usages.)
In addition, you (and your child) can maybe go to a zoo and look at these animals, and I second getting a book with good clear pictures of animals, in addition to the more art- and story-driven book.
Then you write:
The story books have hand painted pictures of all these animals, and somewhere they call them mouse and other place they call them rat. Same is the case with fox, wolf, and jackal.
So my question is: does the book actually mix these characters up? Or are they distinct characters with similar pictures?
if the intent of the book is to mix them up, (which it could be to tell a certain type of story), the point of the story might be more suitable for older children.
if the characters are different, but you think they look the same, I would say to go with what the book says and treat them differently. Even if they don't look different to you, the idea that there are different animals and that they look similar is ok.
I suspect the point of this book is not to teach how the animals look, but to have a story involving some different characters. Treat the book as such, and worry about teaching how the animals look from a different source.
At 22 months, I wouldn't worry terribly about accurately identifying the animals in a drawn picture book. Drawn pictures won't ever be all that accurate, and honestly reading to her is more important than accuracy. Keeping her involved whether it's a rat or a mouse is what's important.
What I would do is get a photo book of animals which features the various animals separately, and use that to teach your child the difference between the similar animals. In the US for example, our kids like Priddy Books, which print "100 First Animals" and similar books.
At around 2, these are perfect, because you can start by introducing the animal names; then in a few months he/she will start learning which is which, and you can start discussing the differences. Then you can move back to the drawn picture books and have long, complex discussions about what specific animals are in them; as they're drawn pictures, it may not be possible to identify them perfectly, but the child will undoubtedly make an attempt, and have strong reasons behind that.
You can also point them out in real life, and make connections to the books (both drawn and photo). This will help a lot in developing an understanding of what the differences are, as there are things you can see in 3D that you can't in a photo.
With our older child, when he was around two we were reading the photo cars/trucks/trains/construction vehicles books, and by 2.5 he was able to identify the real ones and to correct me when I was wrong about something being a Bulldozer versus a Front Loader. Even when he was unable to identify something correctly, we would have good discussions about what they were, both in picture books and in real life, and we challenge him still to explain what reasons he has for calling something a particular kind of vehicle.
So, if I'm reading your answer correctly, at 22 months, you would have read your son a book about cars/trucks/trains/construction vehicles, where they all looked alike. Not to worry terribly about accuracy. At 2.5 years, he would have been confused, and that would be ok. Because that is the question she's asking.
I think there is some confusion about the original question: Is the book confusing, and/or is the parent confused, and/or is the toddler confused? I think Joe is going on the assumption that it is the toddler that is confused.
@anongoodnurse I was differentiating between story books (which I assume is what the OP is talking about - drawn pictures, etc.) and photo books. I would not have particularly worried about being "correct" in the drawn picture books - go with what they say in the book, or what my guess is. I would (did) read in parallel photo books, which show actual photographs of the vehicles, and in those books pay attention to the differences and point them out. He was able by 2.5 to point out to me what the differences are better than I was.
@Ida No, I think the OP was talking about the OP being unsure of what a drawn picture is (is it a rat or a mouse? Who knows?).
I've added a couple of clarifications as it may have been a bit confusing exactly what I meant, particularly with negations in places that are unclear.
I used to worry similarly about "alligator" vs "crocodile." The truth is --it's nothing to stress about, one way or the other. This isn't your child's only chance to learn the distinction, and he or she won't hold it against you if you get it wrong.
How to override default search in Magento?
I am working on a Magento site, and i would like to edit the default search result.
By editing the search results, I mean changing the actual results generated, not just their appearance.
The idea is, if the Magento search does not return any value then i need to do a search in my custom table to fetch some relative products.
Could anyone help me to edit the default Magento search??
Thanks a lot in advance.
You will need to modify some of the models in catalogsearch module, this link has a good explanation of how to override Core classes. In your case you will need to add your logic somewhere in the Query model or its resource or collection models.
This CMS Search extension will provide an excellent example of how to extend the default search. It adds extra content into the search index and allows you to control how those results are presented to the searcher.
How to pass default keyword for table valued function call in ADO.NET
So here's the deal. In our database, we wrap most of our reads (i.e. select statements) in table valued functions for purposes of security and modularity. So I've got a TVF which defines one or more optional parameters.
I believe having a TVF with defaulted parameters mandates the use of the keyword default when calling the TVF like so:
select * from fn_SampleTVF(123, DEFAULT, DEFAULT)
That's fine, everything works in the query analyzer, but when it comes time to actually make this request from ADO.NET, I'm not sure how to create a sql parameter that actually puts the word default into the rendered sql.
I have something roughly like this now:
String qry = "select * from fn_SampleTVF(@requiredParam, @optionalParam)";
DbCommand command = this.CreateStoreCommand(qry, CommandType.Text);
SqlParameter someRequiredParam = new SqlParameter("@requiredParam", SqlDbType.Int);
someRequiredParam.Value = 123;
command.Parameters.Add(someRequiredParam);
SqlParameter optionalParam = new SqlParameter("@optionalParam", SqlDbType.Int);
optionalParam.Value = >>>> WTF? <<<<
command.Parameters.Add(optionalParam);
So, anybody got any ideas how to pass default to the TVF?
The best solution I've seen so far is to avoid using the default clauses in the TVF parameter definitions, pass in nulls as Kevin suggested, and manually apply defaults in your TVF body. Here's an article I found (it's about sprocs, but the problem is the same).
http://social.msdn.microsoft.com/Forums/en/sqldataaccess/thread/3825fe1c-7b7e-4642-826b-ea024804f807
As with all other things associated with Microsoft, getting this to work will involve a half-baked workaround.
SqlParameter optionalParam = new SqlParameter("@optionalParam", SqlDbType.Int);
optionalParam.Value = >>>> WTF? <<<<
command.Parameters.Add(optionalParam);
You don't have to add the above code (the optional parameter) at all to get the default; SQL Server will use the default as defined in your UDF. However, if you would like to pass a different value then you can pass:
SqlParameter optionalParam = new SqlParameter("@optionalParam", SqlDbType.Int);
optionalParam.Value = newValue;
command.Parameters.Add(optionalParam);
Additionally, it seems (from my testing so far) you don't even have to specify the datatype; so you can say:
command.Parameters.Add(new SqlParameter("@OptionalParam", null));
...and the [null] will be interpreted as [default]
EDIT: my prior comment is incorrect: [null] is not interpreted as [default]
Ok, after even more testing, at least with the current framework version I am using, adding the parameter without specifying a value does NOT result in the default value being used. Calling from SSMS with [default] does not yield the same result as passing the parameter with no value, even though that is the implied behavior here: https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparameter.value%28v=vs.110%29.aspx
Or in other words... I think this answer ("You don't have to add above code (The optional parameter) for default.") is not correct - or at least not always.
I would have done so:
public void YourMethod(int rparam, int? oparam = null)
{
    String qry = string.Format("select * from fn_SampleTVF(@requiredParam, {0})",
        !oparam.HasValue ? "default" : "@optionalParam");
    DbCommand command = this.CreateStoreCommand(qry, CommandType.Text);

    SqlParameter someRequiredParam = new SqlParameter("@requiredParam", SqlDbType.Int);
    someRequiredParam.Value = rparam;
    command.Parameters.Add(someRequiredParam);

    if (oparam.HasValue)
    {
        SqlParameter optionalParam = new SqlParameter("@optionalParam", SqlDbType.Int);
        optionalParam.Value = oparam.Value;
        command.Parameters.Add(optionalParam);
    }
}
You can pass Null as the parameter value.
This article shows examples.
Thanks I'll try that, but I'm fairly certain that passing an explicit null will just override the defaults defined in the tvf definition, making the defaults useless.
I'd love to hear your findings. I'm amazed that I've never needed to know this before seeing your question. I would still expect Null to work and that DBNull would override the paramater's definition.
Yeah I just checked, that's no good. And the reason is because there's a difference between:
select * from fn_SampleTVF(123, DEFAULT)
and
select * from fn_SampleTVF(123, null)
Thanks though.
P.S. The problem is that you're actually right. DBNull does indeed override the TVF's parameter definition. What I'm looking to do is USE the TVF's parameter definition.
| common-pile/stackexchange_filtered |
Can I modify ASP.NET session object this way?
Imagine that I have an instance (oEmp) of the "Employee" class and I would like to store it in the session.
Session["CurrentEmp"] = oEmp;
If I modify a property in oEmp as follows:
oEmp.Ename = "Scott";
Am I referring to the session item through the above statement, or only to "oEmp"?
Session["CurrentEmp"] = oEmp; // Do we still need this after a property is modified?
Is it the same if I opt for SQL Server session state (instead of InProc)?
thanks
Please, be aware of this: http://www.hpenterprisesecurity.com/vulncat/en/vulncat/dotnet/asp_dotnet_bad_practices_non_serializable_object_stored_in_session.html . It's a bad practice to store non-serializable objects in session variables.
Asp.net Session will hold the reference, so you shouldn't need to do the following:
Session["CurrentEmp"] = oEmp;
after modifying oEmp;
Joe, your answer contradicts with ARS. I am confused now :(
I am updating my response, as my understanding of session data serialisation was not correct. I am not going to delete this answer as it might help others understand how session works. Thanks to @Guru for pointing this out.
Irrespective of session mode, session data is written back to the session store only when the request completes successfully. So if you have assigned a reference object to the session and then update the object in the same request, the session will hold the updated information.
Refer: Underpinnings of the Session State Implementation in ASP.NET for more information
ARS, your answer contradicts with Joe. I am confused now :(
@ARS Can you please provide some link or resource explaining this? I am quite surprised by this actually; the way Session behaves should not be affected by the underlying place where it is stored.
@Guru: I have responded to the query as per my understanding. However you can refer to http://msdn.microsoft.com/en-us/library/aa479041.aspx and check Table 1, "State client providers", where it mentions the serialization of data in the Out-Of-Proc case. Also, I believe once you have serialised an object you won't be able to change its state unless you update the session data.
MSDN: "The binding between the session state values and the session object visible to developers lasts until the end of the request. If the request completes successfully, all state values are serialized back into the state provider and made available to other requests." So the data is serialized back after the request is complete. So no need to update your session variables.
@ARS - I agree with this - "once you have serialised an object you wont be able to change the state of it..." - but I don't think it is actually serialized until it is saved.
@Guru: Aha, this makes sense. I missed that point. If the request fails, the session object is not set. It's a kind of fail-safe measure :) I will update my answer. Thanks for pointing this out.
Session variables hold reference types by reference, so there is no need to reassign the value every time.
For the object instance that you store, only the reference to that object is kept in the session variable.
Here are some links to help you find more details:
http://bytes.com/topic/asp-net/answers/447055-reference-types-session
http://forums.asp.net/t/350036.aspx/1
Do asp.net application variables pass by reference or value?
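The in-proc reference semantics can be illustrated with a short language-agnostic sketch (Python here, with a plain dict standing in for the session store; this does not model the out-of-proc serialization behaviour discussed above):

```python
# A plain dict stands in for an in-proc session store: it keeps a
# reference to the object, not a copy of it.
class Employee:
    def __init__(self, ename):
        self.ename = ename

session = {}
emp = Employee("Adams")
session["CurrentEmp"] = emp

# Mutating the object through either name affects the same instance,
# so no re-assignment into the store is needed.
emp.ename = "Scott"
assert session["CurrentEmp"].ename == "Scott"
assert session["CurrentEmp"] is emp
```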
| common-pile/stackexchange_filtered |
expected ';' after top level declarator, error in VS Code
I'm trying to run this code on my Mac:
#include <iostream>
#include <array>
using namespace std;
const size_t rows{2};
const size_t columns{3};
void printArray(const array<array<int, columns>, rows>& a); // forward declaration

int main() {
array<array<int, columns>, rows> array1{{1, 2, 3, 4, 5, 6}};
array<array<int, columns>, rows> array2{{1, 2, 3, 4, 5}};
cout << "Values in array1 by row are:" << endl;
printArray(array1);
cout << "\nValues in array2 by row are:" << endl;
printArray(array2);
}
void printArray(const array<array<int, columns>, rows>& a) {
for (auto const& row : a) {
for (auto const& element : row) {
cout << element << ' ';
}
cout << endl;
}
}
There are errors on the lines that use array. For example:
const size_t rows{2};
said "expected ';' after top level declarator" even though I already put a semicolon.
array<array<int, columns>, rows> array1{{1, 2, 3, 4, 5, 6}};
said "non-type template argument of type 'size_t' (aka 'unsigned long') is not an integral constant expression" at 'columns' and 'expected ';' at end of declaration' at array1
I tried to run the same program on another computer and it worked normally, but mine always gets stuck on this error.
What causes the problem, and how do I resolve it?
Thank you in advance.
Add -std=c++11 or newer to the compiler options.
By default, Macs compile to the 1998 C++ standard. There's probably a reason for that default, but, in general, wow. A nasty surprise for a lot of people learning C++ from modern resources.
You'll likely need to update tasks.json AND ... . I can't remember the name of the other json file. The first tells the compiler what to do. The second, the one I can't remember, tells the code analysis tool what to do so it can put the red squiggles in the right spots. Only changing one of them is a common source of frustration.
@user4581301 c_cpp_properties.json. I've recently decided to give VSCode another go, God it is awful.
@john it depends how you use it. If you use it to compile C++ code without a build system then yes, it's fiddly to configure, but so would any other text editor be where you compile your code with bash scripts. Used with a build system, especially one like CMake that has a plugin that automatically configures IntelliSense, it's a reasonably good IDE.
@AlanBirtles Using the cmake extension is what persuaded me to give VSCode another go. But it seems like every possible setting has a default different from what I would prefer. Plus the GUI for changing settings is clunky. I'm hoping I'll get it set up in a way I can live with eventually but it's taking a while.
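For reference, the language standard is set in the `args` of the build task in tasks.json (a sketch of a typical clang++ build task on macOS; the label and paths are illustrative and will differ in your generated file):

```json
{
    "label": "C/C++: clang++ build active file",
    "type": "cppbuild",
    "command": "/usr/bin/clang++",
    "args": [
        "-std=c++11",
        "-g",
        "${file}",
        "-o",
        "${fileDirname}/${fileBasenameNoExtension}"
    ]
}
```

The IntelliSense squiggles are governed separately by the `cppStandard` setting in c_cpp_properties.json, so as noted in the comments above, both files usually need updating.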
| common-pile/stackexchange_filtered |
Parent-Child Tree in Java
I was trying to implement a class Node to build a tree of Nodes. Basically, each Node can have children, so if I specify multiple nodes I can build a tree out of it.
As an example:
node1 (the root) has node2 and node3 as children
node2 has node4 and node5 as children
The problem I am trying to solve is building this tree and finding all descendants of a given element (in this case node1 has 4 descendants: node2 and node3 directly, plus node2's two children, so 4 in total).
Does anyone have any suggestion?
EDIT:
package ex1;
import java.sql.Array;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
public class Node {
private String name;
private String description;
private ArrayList<Node> children = new ArrayList<>();
Node(String name, String description){
this.name = name;
this.description = description;
}
private void setName(String name){
this.name = name;
}
private void setDescription(String description) {
this.description = description;
}
public void addChildren(Node child) {
this.children.add(child);
}
public String getName() {
return this.name;
}
public String getDescription() {
return this.description;
}
public boolean hasDescription() {
return !description.isEmpty();
}
public Collection<Node> getChildren() {
return this.children;
}
/*
public Node findNodeByName(String name, Node t) {
if (t.getName().equals(name))
return t;
t.getChildren().forEach(node -> node.findNodeByName(name,node));
return null;
}*/
public Node findNodeByName(String name, Node t){
if(t.getName().equals(name)){
return t;
}
else if (t.getChildren().size() != 0){
for(Node c: children){
Node ret = c.findNodeByName(name,c);
if(ret != null){
return ret;
}
}
}
return null;
}
// IMPORTANT: Don't change this method!
private String toString(int indentNo) {
String indent = "\t".repeat(indentNo);
StringBuffer b = new StringBuffer();
b.append(indent);
b.append(getClass().getSimpleName() + " '" + getName() + "' ");
if (hasDescription()) {
b.append("(description: " + getDescription() + ")");
}
b.append("\n");
for (Node node : getChildren()) {
b.append(node.toString(indentNo + 1));
}
return b.toString();
}
@Override
public String toString() {
return toString(0);
}
}
Method where I make use of the class:
Path path = Path.of(pathname);
String fileContent = null;
try {
fileContent = Files.readString(path);
} catch (IOException e) {
throw new RuntimeException(e);
}
List<String> lines = new ArrayList<>(Arrays.asList(fileContent.split("\n")));
String[] firstLine = lines.get(0).split(",");
Node parentNode = new Node(firstLine[0], firstLine[1]);
lines.remove(0);
/* This was just to test findNodeByName
for(String line: lines) {
String[] params = line.split(",");
System.out.println(params[2] + ": " + (parentNode.findNodeByName(params[2], parentNode) != null));
}*/
//Now we read all remaining data
Node tmpNode;
for(String line: lines) {
String[] params = line.split(",");
if (parentNode.findNodeByName(params[2])==null || parentNode.findNodeByName(params[0])!=null) //if parent was not found or name already exists
throw new IOException();
tmpNode = parentNode.findNodeByName(params[2]);
tmpNode.addChildren(new Node(params[0],params[1]));
}
CSV file I am getting the data from:
uni,"This is my university folder",
firstyear,,uni
secondyear,,uni
analysis,"folder for the analysis course",firstyear
ai,"folder for the artificial intelligence course",secondyear
db,"folder for the database course",firstyear
Do you want to find the children just to print their names? Do you need to count them? It is not clear what you need.
Not entirely sure what you want to do with these children. Put them in a List? Just print them? In either case, you can easily find all descendants of a given Node using recursion. It would help if you elaborated on what exactly you're trying to do.
There are many Java tutorials on the subject of trees.
Here is some sample code that could help (explanation below):
Main class:
class Main {
public static void main(String[] args) {
Node v1 = new Node(1);
Node v2 = new Node(2);
Node v3 = new Node(3);
Node v4 = new Node(4);
Node v5 = new Node(5);
v1.addChild(v2);
v1.addChild(v3);
v2.addChild(v4);
v2.addChild(v5);
v1.printChildren();
}
}
Node class:
import java.util.*;
class Node{
private int val;
private ArrayList<Node> children = new ArrayList<Node>();
public Node(int v){
val = v;
}
public void addChild (Node c){
children.add(c);
}
public void printChildren(){
if (children.size() != 0){
System.out.print("Children of Node " + getValue() + ": ");
for(Node c: children){
System.out.print("Node " + c.getValue() + " ");
}
System.out.println();
for(Node c: children){
c.printChildren();
}
}
}
public int getValue(){
return val;
}
}
Output:
Children of Node 1: Node 2 Node 3
Children of Node 2: Node 4 Node 5
Ok so in our node class, let's say each node will have an integer value, val. That is our first private instance variable. Second, each node will have a list of children nodes, children.
When we first declare our nodes, they will have integer values, as shown in our constructor.
After we define our nodes, we can add some nodes as children to other nodes (v2 and v3 are children to v1, and v4 and v5 are children to v2).
Now we need to print them. We will use a recursive approach for this. If the node we are printing the children of has children (the length of our children ArrayList is nonzero), then we will first iterate through that list, and print out the children of our current node. Afterwards, we again iterate through each child and use the same method (recursion) to print out the children of that node.
I hope this helped! Please let me know if you need any further help or clarification :)
EDIT:
Added a getName() method:
public String getName(){
return "Node" + getValue();
}
Added the requested method:
public Node findChildNodeByValue(int v){
if(getValue() == v){
System.out.println(getName() + " has the value");
return new Node(getValue());
}
else if (children.size() != 0){
for(Node c: children){
Node ret = c.findChildNodeByValue(v);
if(ret != null){
return ret;
}
}
}
return null;
}
Quick explanation: Very similar to the original method, we use a recursive approach to iterate through each node's children. Once we reach a node with no more children, we return null. Once we reach the node with the given value, we return a copy of that node, which is sent back to wherever the function was called.
Also edited main method:
Node v1 = new Node(1);
Node v2 = new Node(2);
Node v3 = new Node(3);
Node v4 = new Node(4);
Node v5 = new Node(5);
v1.addChild(v2);
v1.addChild(v3);
v2.addChild(v4);
v2.addChild(v5);
// v1.printChildren();
Node valNode = v1.findChildNodeByValue(5);
System.out.println(valNode.getName());
Output:
Node5 has the value
Node5
SECOND EDIT:
Change the method to look like this:
public Node findNodeByName(String name){
if(getName().equals(name)){
Node t = new Node(getName(), getDescription());
return t;
}
else if (getChildren().size() != 0){
for(Node c: children){
Node ret = c.findNodeByName(name);
if(ret != null){
return ret;
}
}
}
return null;
}
The main method should look like this:
Node v1 = new Node("a","aa");
Node v2 = new Node("b","bb");
Node v3 = new Node("c","cc");
Node v4 = new Node("d","dd");
Node v5 = new Node("e","ee");
v1.addChildren(v2);
v1.addChildren(v3);
v2.addChildren(v4);
v2.addChildren(v5);
System.out.println(v1.findNodeByName("e"));
Output:
Node 'e' (description: ee)
THIRD EDIT:
Added a new method:
public void setChildren(ArrayList<Node> c){
children = c;
}
Edited method:
public Node findNodeByName(String name){
if(getName().equals(name)){
Node t = new Node(getName(), getDescription());
t.setChildren(getChildren());
return t;
}
else if (getChildren().size() != 0){
for(Node c: children){
Node ret = c.findNodeByName(name);
if(ret != null){
return ret;
}
}
}
return null;
}
Main Method:
Node v1 = new Node("a","aa");
Node v2 = new Node("b","bb");
Node v3 = new Node("c","cc");
Node v4 = new Node("d","dd");
Node v5 = new Node("e","ee");
Node v6 = new Node("f","ff");
v1.addChildren(v2);
v1.addChildren(v3);
v2.addChildren(v4);
v2.addChildren(v5);
v4.addChildren(v6);
Node vNew = v1.findNodeByName("d");
System.out.println(vNew);
System.out.println(vNew.getChildren());
Output:
Node 'd' (description: dd)
Node 'f' (description: ff)
[Node 'f' (description: ff)
]
As per your requirement, you are looking for all descendants / children-of-children of a particular node. Breadth-first search (BFS) or depth-first search (DFS) fits this use case well. There are already tons of discussions around these algorithms. For instance:
Breadth First Search and Depth First Search
You are already thinking in the right direction with the data structure. One thing I would suggest: use Java generics so that the node can support multiple data types as needed.
class Node<T> {
    T value;
    List<Node<T>> children;

    Node(T t) {
        this.value = t;
        children = new ArrayList<>();
    }

    void addChild(Node<T> child) {
        children.add(child);
    }
}
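Since the question actually asks for the total number of descendants (node1 should report 4), here is a minimal non-generic sketch of that count using the same children-list structure (the class and method names are illustrative, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

class TreeNode {
    private final List<TreeNode> children = new ArrayList<>();

    void addChild(TreeNode c) {
        children.add(c);
    }

    // Counts the direct children plus, recursively, the descendants
    // of each child.
    int countDescendants() {
        int count = children.size();
        for (TreeNode c : children) {
            count += c.countDescendants();
        }
        return count;
    }
}
```

With node1 -> {node2, node3} and node2 -> {node4, node5}, calling countDescendants() on node1 returns 4.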
The return value of the recursive method call is discarded.
The line
t.getChildren().forEach( node -> findNodeByName(name, node));
induces the recursive invocation, but the return value is not used to form the return value of the enclosing method.
Instead we need something like
for (Node node : t.getChildren()) {
Node result = findNodeByName(name, node);
if (null != result) {
return result;
}
}
or with streams
return t.getChildren().stream()
        .map(node -> findNodeByName(name, node))
        .filter(Objects::nonNull)
        .findAny()
        .orElse(null);
| common-pile/stackexchange_filtered |
Calculating date time from start time and elapsed seconds
I have a dataframe where my index is an elapsed seconds series.
Depth_m | Temperature_degC | Salinity_PSU | OBS S9604_mV | OBS highsens S9604_mV | OBS S9602_mV | OBS S9603_mV | Time elapsed_sec
0.00 | 35.687 | 28.9931 | 36.7530 | 0.0082 | 0.0024 | 0.0059 | 0.0120
0.25 | 35.684 | 28.9932 | 36.7531 | 0.0083 | 0.0026 | 0.0060 | 0.0106
0.50 | 35.687 | 28.9931 | 36.7532 | 0.0079 | 0.0021 | 0.0055 | 0.0099
0.75 | 35.687 | 28.9931 | 36.7532 | 0.0305 | 0.0075 | 0.0056 | 0.0101
I would like to create a new datetime series from a start time and the elapsed seconds.
I am using python v 2.7 with pandas.
Do any of you know how to obtain that?
Thanks
That should do the trick
from __future__ import print_function, division
import pandas as pd
start_time = 14
data = pd.read_csv('data.txt', sep="|", header=0, skip_blank_lines=True)
data['Time'] = pd.Series(data[' Time elapsed_sec'] + start_time, index=data.index)
print(data)
What's still missing is the conversion to datetime, as in Convert Pandas Column to DateTime.
Thanks a lot Lageos, I tried your solution at first and then realised I first needed to use the time elapsed column as a series and not as my index.
Now I get a TypeError: unsupported operand type(s) for +: 'float' and 'datetime.datetime'. It might be because the series are fractions of a second?
Perhaps it is necessary to convert with ... pd.Series(float(data[' Time elapsed_sec'] ) + start_time....
the float conversion didn't do the trick although I've managed to convert to a datetime format (%S) but another error occurs, it says "can only operate on a datetimes for subtraction, but the operator [add] was passed" ... any other ideas?
Something like this?
import datetime as dt
import pandas as pd

start_time = pd.Timestamp('2016-1-1 00:00')
df = pd.DataFrame({'seconds': [1, 2, 3]})
df['new_time'] = [start_time + dt.timedelta(seconds=s) for s in df.seconds]
>>> df
seconds new_time
0 1 2016-01-01 00:00:01
1 2 2016-01-01 00:00:02
2 3 2016-01-01 00:00:03
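A vectorized alternative that avoids the Python-level loop and handles fractional seconds (a sketch assuming a reasonably recent pandas; the elapsed values here are illustrative):

```python
import pandas as pd

start_time = pd.Timestamp("2016-01-01 00:00")
elapsed = pd.Series([0.5, 1.0, 2.25])  # elapsed seconds; fractions are fine

# pd.to_timedelta converts the whole series at once, so adding it to the
# start Timestamp yields a datetime series in a single vectorized step.
timestamps = start_time + pd.to_timedelta(elapsed, unit="s")
```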
| common-pile/stackexchange_filtered |
Getting Interface 'Response<ResBody>' incorrectly extends interface 'Response'
I'm using TypeScript and Express in Node.js, and when I compile I get this error:
node_modules/@types/express-serve-static-core/index.d.ts:505:18 - error TS2430: Interface 'Response<ResBody>' incorrectly extends interface 'Response'.
Types of property 'locals' are incompatible.
Type 'Record<string, any>' is missing the following properties from type 'i18nAPI': locale, __, __n, __mf, and 5 more.
505 export interface Response<ResBody = any> extends http.ServerResponse, Express.Response {
~~~~~~~~
from my package.json
"dependencies": {
"@types/i18n": "^0.8.6",
"@types/jest": "^26.0.7",
"@types/moment": "^2.13.0",
"@types/node": "^14.0.26",
"@types/node-schedule": "^1.3.0",
"@types/ws": "^7.2.6",
"@typescript-eslint/eslint-plugin": "^3.7.0",
"@typescript-eslint/parser": "^3.7.0",
"@types/express": "^4.17.7",
"chokidar": "^3.4.1",
"dblapi.js": "^2.4.0",
"discord.js": "^12.2.0",
"eslint": "^7.5.0",
"eslint-plugin-jest": "^23.18.2",
"express": "^4.17.1",
"findit2": "^2.2.3",
"i18n": "^0.10.0",
"jest": "^26.1.0",
"moment": "^2.27.0",
"node-schedule": "^1.3.2",
"npm-check-updates": "^7.0.3",
"tree-kill": "^1.2.2",
"ts-jest": "^26.1.4",
"typescript": "^3.9.7"
}
Anyone know what's wrong?
I was able to fix this error. I first tried to re-create it as a minimal reproduction (in a subfolder of my main project) and everything worked fine. Then I started copying all my main files into that minimal example and it still worked. Pretty soon I had my whole project in that subfolder and it worked there. So then I just deleted my main code and moved the subfolder contents up a level, and it still worked there... I'm not sure what the difference was. I have a hunch the ordering of dependencies listed in package.json might have changed, which made it work...
Ref: https://github.com/DefinitelyTyped/DefinitelyTyped/issues/46639
Try npm i -D @types/express-serve-static-core; enabling skipLibCheck is not a good idea.
Add "skipLibCheck": true in tsconfig.json. It worked for me.
If you're going to do this you should understand the consequences of doing so. 9 times out of 10, this is not the right thing to do
skipLibCheck is rather a workaround, and you should not enable it without knowing what you are doing.
The issue is fixed in the latest versions, so if you have @types/express installed, update it to the latest with npm i -D @types/express@latest, or install @types/express-serve-static-core explicitly with npm i -D @types/express-serve-static-core.
You can check which version the @types/express-serve-static-core has with:
$ npm ls @types/express-serve-static-core
<EMAIL_ADDRESS> /path/redacted
└─┬ <EMAIL_ADDRESS>
  └── <EMAIL_ADDRESS>
From: https://github.com/DefinitelyTyped/DefinitelyTyped/issues/46639
I'm getting exactly the same thing. I downgraded to the previous version (4.0.53) from the current version (4.17.9). Seems to be an issue in the latest release.
Could be. Also I was able to fix it, see here https://github.com/DefinitelyTyped/DefinitelyTyped/issues/46639
| common-pile/stackexchange_filtered |
Android Content Provider Test the REAL content Provider
hope you can help me ...
tl:dr
How can I write JUnit tests which will NOT use the classes IsolatedContext and MockContentResolver? I want to affect the REAL content provider, not the mock database.
General
I have to write JUnit tests for a special ContentProvider at work.
This content provider is connected to some hardware and sets some values there. I must check the hardware values AND the values in the content provider's database.
Construct
-> ContentProvider -> Hardware Interface -> Hardware -> HardwareInterface-> ContentProvider
Code
public class DataLayerTests extends ProviderTestCase2<DataLayer> {
private static final String TAG = DataLayerTests.class.getSimpleName();
MockContentResolver mMockResolver;
public DataLayerTests() {
super(DataLayer.class, Constants.DATA_LAYER_AUTHORITY);
}
@Override
protected void setUp() throws Exception {
super.setUp();
Log.d(TAG, "setUp: ");
mMockResolver = getMockContentResolver();
}
@Override
protected void tearDown() throws Exception {
super.tearDown();
Log.d(TAG, "tearDown:");
}
public void testActiveUserInsert__inserts_a_valid_record() {
Uri uri = mMockResolver.insert(ActiveUserContract.CONTENT_URI, getFullActiveUserContentValues());
assertEquals(1L, ContentUris.parseId(uri));
}}
The real database should be affected, and the real ContentResolver should be used.
How could I achieve this?
You can use Robolectric to test the real content provider, affecting a real sqlite database.
Robolectric is an implementation of the Android framework that can be run in any JVM, and thus can be used for tests.
Please note that the sqlite database will live in a temp folder on your computer and not on a phone or emulator.
If you want the tests to happen inside a real phone, you should look into Instrumented tests
Thanks for the reply.
I've already made a real application with a button.
That app affects the real database.
I will check out Robolectric next time.
| common-pile/stackexchange_filtered |