How would I dynamically generate a page title based on the active navigation tab?
I'm using php includes to call in different sections to pages but some of these need to have some different content.
I am trying to set the text that is within an h tag to be the same as the currently clicked navigation item.
I'm currently just setting the current page in the navigation section and then again at the top of the page I want to name, defining a variable and then calling it within the title.
Navigation Include:
<div>
<ul class="menu">
<li><a href="about.php"><?php ($currentPage =='about')?>About</a> .
</li>
</ul>
</div>
About us page:
<?php ($currentPage =='about');
$title ='About Us'
?>
which has the
<?php include("includes/titles/page-title.php"); ?> within it, this is also called on different pages and within this include I have
<h2 class="title-center"><?php echo $title ?></h2>
I also have a contact page where I have the same include but need a different title:
<?php ($currentPage =='contact');
$title ='Contact'
?>
So, it's just one page and different tabs?
What is ($currentPage =='about'); supposed to do?
@AJ no sorry didn't make that clear, I have two pages, about.php and then contact.php but they both use the same header include (include/header.php) and (include/page-title.php). But I want the h tag text that is in the page-title include to be the same as the page that is active, that make sense?
Your requirement:
Need respective page headers for every page
Solution:
1) Take an array of page headers
2) Key should be $currentPage from url.
3) Value should be required Page Title.
4) Check if page title is set for $currentPage, if set, print it.
Simple...
And Code:
<?php
$pageHeaders = [];
$pageHeaders['about'] = 'About Us';
$pageHeaders['contact'] = 'Contact Us';
...
?>
Now, print your page header:
<h2 class="title-center">
<?php
$title = 'Your Default Page Title';
if (isset($pageHeaders[$currentPage])) {
echo $pageHeaders[$currentPage];
}
else {
echo $title;
}
?>
</h2>
Note: You can also, create dynamic page navigation with the same kind of array.
The key should be url and the value should be the text.
Loop over it and print navigation menu.
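A minimal sketch of that navigation idea (the file names and labels here are my own examples, not from the question):

```php
<?php
// Hypothetical navigation array: key = URL, value = link text
$navItems = [
    'about.php'   => 'About',
    'contact.php' => 'Contact',
];

$html = '<ul class="menu">' . "\n";
foreach ($navItems as $url => $label) {
    $html .= '<li><a href="' . htmlspecialchars($url) . '">' . htmlspecialchars($label) . '</a></li>' . "\n";
}
$html .= '</ul>';
echo $html;
```

The same keys could also double as the `$currentPage` values used for the page headers, so both the menu and the titles come from one place.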
---
Does any known substance ignite on cooling?
As the title says, I'm interested in knowing if there is any substance — or combination of substances — that ignites (or even increases its chance of spontaneous ignition) when cooled.
I've never heard of such a thing, nor can I find it in Atkins' Physical Chemistry, but I might easily have overlooked it, since I don't know what to call it. Googling gives me information on substances that lower the freezing point or ignition point of explosives/fuels.
It seems that this should obviously be forbidden on thermodynamic grounds, but I can't quite rule out a phase change allowing such behaviour, for instance.
Ignition is about kinetics, not thermodynamics. You can't cheat thermodynamics, but there may be a chance with kinetics. True, most reactions slow down upon cooling, but this is not an absolute law.
I think a lot of substances would ignite if you cool them down from a temperature initially so hot that molecular bonds can't form, but I'm not entirely sure the process would be classified as ignition.
Actually... yes! Iron(II) oxide is thermodynamically unstable below $848~\mathrm K$. As it cools down to room temperature (it has to do it slowly) it disproportionates to iron(II,III) oxide and iron:
$$
\ce{4FeO -> Fe + Fe3O4}.
$$
The iron is in the form of a fine powder, which is pyrophoric (it may catch fire when exposed to air). You can see it in action here.
For those of us coming from the hot questions list, 848 K is 574.85 C or 1066.73 F. =)
When actually cooled all the way down to room temperature and left alone, $\ce{FeO}$ will stay like that till kingdom come. I guess the disproportionation is possible in a relatively narrow range of temperatures.
@IvanNeretin That's true, you have to cool it slowly. Like all chemical reactions, the rate of the disproportionation drops at lower temperatures.
I doubt ignition could ever be favored by cooling, since for any combustion you need heat. Also, the compound must emit enough vapor for ignition to take place, and cooling, of course, works in the opposite direction.
Some explosives, however, could well detonate (not ignite) if the temperature causes them to crystallize. Diazonium chloride salts, for example, are usually not isolated in their crystalline and dry form because they can cause explosions in such a state, while they are safe in aqueous solution.
Phosphorus vapor
Red phosphorus heated at atmospheric pressure to approx. $\pu{400 °C}$ sublimes, and on cooling the vapor deposits as white phosphorus:
$$\ce{P_n ->[Δ] n/4 P4}$$
which, when deposited as a thin film on a surface, spontaneously ignites in air at room temperature (a bulk sample ignites at $\pu{50 °C}$):
$$\ce{P4 + 5 O2 -> P4O10}$$
which is sometimes observed as a glow (chemiluminescence), as seen, for example, in the famous painting "The Alchemist Discovering Phosphorus" by J. Wright:
A published procedure from reference [1]:
PREPARATION
Obtain two dry Pyrex test tubes, one 6-inch and one 8-inch. To the 8-inch test tube add 0.25 g red phosphorus. Fill the 6-inch test tube 1/2 full of cold water, dry the outside, and insert it into the larger tube. The hot tip of the smaller test tube will be supported by the
neck of the larger tube.
DEMONSTRATION
Heat the red phosphorus in the larger tube until a deposit appears on the cold surface of the inner tube. Allow to cool. Upon removal of the smaller tube, the white phosphorus on the bottom will ignite.
Plutonium
Bulk plutonium ignites only above $\pu{400 °C}$. However, when cooled down, it undergoes a series of phase transitions and its density is increased by approx. $11\%$:
This causes a serious hazard: the plutonium sample cracks as it cools and develops a large surface area that swiftly reacts with traces of moisture in the air, forming plutonium hydrides $\ce{PuH_{2...3}}$ and plutonium(III) oxide $\ce{Pu2O3}$, which are both pyrophoric and ignite spontaneously at ambient temperatures.
References
Brodkin, J. Preparation of White Phosphorus from Red Phosphorus. Journal of Chemical Education 1960, 37 (2), A93. https://doi.org/10.1021/ed037pA93.1.
It depends on how you cool it. For example, if you try to cool an alkali metal by immersion in water, you can get a rapid exothermic reaction.
But that's more than just cooling, it's a completely new reagent.
@fafl That's true, and it's the new reagent rather than just the temperature drop which causes the reaction. However, immersion in water is one cooling technique that a lot of people are familiar with, and that specific act of cooling the right/wrong substance would generally cause ignition.
There is also the property of some salts to be cryoluminescent, which means they produce light when they are either precipitated upon cooling down a solution or when the filtered compounds crack upon cooling down. In this case, this is also an electronic transition which produces light, but it is reported to slightly warm up in this process as well. So perhaps you can have an excited state high enough to cause a reaction? Would be the only guess I have.
---
Hovertemplate does not recognize the text vector I pass to it
I create the bar chart below using plotly, but although I pass values to text, %{text} is not substituted in my hovertemplate.
library(plotly)
"Country(s)"<-c("United Kingdom", "Brazil", "United States Of America", "India",
"Russian Federation")
"Number of cases"<-c(1032990, 1264637, 5905072, 618694, 735681)
"lab"<-c("1.0M", "1.3M", "5.9M", "618.7K", "735.7K")
cm4d<-data.frame(`Country(s)`,`Number of cases`,lab)
fig1 <- plot_ly(cm4d, x = ~`Country(s)`, y = ~`Number of cases`,text=~lab,
type = 'bar',
hovertemplate = paste('%{x}', '<br>Number of cases: %{text}<br><extra></extra>'),
colors = c("#60ab3d","#6bbabf","#c4d436","#3e5b84","#028c75"),
color = ~`Country(s)`)
fig1
while this works
library(plotly)
library(dplyr)
full_data<-data.frame("Name"=c("Q1","Q2","Q3","Q1","Q2","Q3"),"Values"=c(245645,866556,26440,65046,641131,463265),
"Week"=c("a","b","c","d","e","f"))
desc <- full_data %>%
group_by(Name,Week) %>%
summarise(values = sum(Values)) %>%
mutate(lab = scales::label_number_si(accuracy = 0.1)(values))
fig1 <- plot_ly(desc, x = ~`Name`, y = ~values,text = ~lab,
type = 'bar',
hovertemplate = paste('%{x}', '<br>Number of cases: %{text}<br><extra></extra>'),
colors = c("#60ab3d","#6bbabf","#c4d436"),
color = ~Name)
fig1
Not sure that this is an "answer". I think you ran into a bug! It seems like {plotly} expects a "stacked" bar chart.
You can quickly make your working solution "fail" when you limit the desc dataframe to single rows only.
fig2 <- plot_ly(desc[c(1,3,5), ]
, x = ~`Name`, y = ~values,text = ~lab
, type = 'bar'
, hovertemplate = paste('%{x}', '<br>Number of cases: %{text}<br><extra></extra>'),
colors = c("#60ab3d","#6bbabf","#c4d436"),
color = ~Name)
fig2
With such a "single key element" dataframe, fig2 shows the raw %{text} placeholder in the hovertext!
I have not yet found a way to circumvent this.
Thanks, this may be an issue that the plotly devs should look at.
I think so. If you file a bug report, provide a link to this post and/or your story/experience
---
Evaporation rate
Right now, when brewing, I have one step that is never the same : the boil. Actually, the boil phase is going alright, my problem is with the evaporation rate of my wort. I have a few questions...
What is the "standard" boil rate? Beersmith gives me 12% per hour by default. This is based on a 23 liters batch I think. That would mean an evaporation of 2.73 liters per hour, but normally I lose at least twice that amount.
What if I make a 40 liters batch? will evaporation be 12%, or the same liters/hour? (In this case it would mean an evaporation rate of 6.8%)
What are the advantages of a higher evaporation rate ?
What are the advantages of a lower evaporation rate ?
Is my evaporation rate way too high at 6 liters / hour?
And one big question here:
How do you standardize your evaporation rate? I never evaporate the same amount, but I would like that part to become a constant, not a variable.
I am using a propane outside burner. Should I get something to regularize the psi of the propane?
Thanks!
Related: Does batch size affect evaporation rate?
First, expressing evaporation rate as a % is completely the wrong way to do it. Boil off is a constant gal./hr. and does not change due to batch size. You don't boil off twice as much for a 10 gal. batch as a 5 gal. batch. 6 liters/hr. is about what my boil off is using a converted keg kettle and a propane burner. I try to set the flame to the same level every time and I get a pretty consistent rate. There are theoretical downsides to too high or low a rate, but the reality is that consistency is much more important than amount.
I agree with @Denny that stating evaporation as a fixed percent in your assumptions is somewhat meaningless for homebrewing because boiloff is constant based primarily on your kettle geometry and BTUs of heat. In other words, the percent being evaporated constantly goes up as the remaining volume shrinks (with 100% evaporation when the last drop is evaporating).
I also agree that "there are theoretical downsides to too high or low a rate, but the reality is that consistency is much more important than amount."
That being said, the biggest downside to too high of a boiloff rate is waste of energy. Brewing water is not necessarily free either because you may have bought RO water or spent time dechlorinating, collecting, or adjusting your water. Commercial brewers shoot for an evaporation rate of 6-8% in the first hour, and that is sufficient to release any DMS and SMM. Because a boil does not get hotter the more vigorous it is, and hop alpha acids will isomerize at any boil, we should shoot for the gentlest boil that we can. So while using percent evaporation may not make sense for homebrewers, we can learn something from commercial brewers about not having an excessive boil.
You should calibrate your kettle with 1/2 gallon or quarter gallon graduations (either etched on kettle or marked on an unvarnished/unpainted wooden dowel you can immerse in the wort). Then calculate your boiloff rate with those graduations.
You can standardize your evaporation rate by doing the same thing every time. Same amount of flame, same kettle, same burner, and same vigor of boil. Then check it at intervals to see if you are spot-on. Boiloff is constant (linear). Even if you do this, you may get different rates -- my rate in Winter is higher than in Summer due to the extremely low humidity here in Winter, and high humidity in Summer.
Somewhat confused that you state "evaporation as a percent is meaningless" then go on to state "Commercial brewers shoot for an evaporation rate of 6-8%".
@uSlackr Thanks for the helpful feedback! I made some changes to clarify the inconsistency in my comment: "In other words, the percent being evaporated constantly goes up as the remaining volume shrinks (with 100% evaporation when the last drop is evaporating)...So while using percent evaporation may not make sense for homebrewers, we can learn something from commercial brewers about not having an excessive boil."
---
I want to get a sum of the list except for the iterative value
columnData.tolist() outputs the following
[19.51 15.45 16.67 0. 12.06 5.97 15.56 0. 12.8 17.58]
I want it so that each position in the list is converted to the sum of all the other values.
Below is the code I am trying.
templ = [ sum( columnData.tolist().pop(i) )
for i,l in enumerate(columnData.tolist()) ]
And the output is:
'''
TypeError: 'float' object is not iterable
'''
Simple implementation using list comprehension.
lst = [] #your_list
s = sum(lst)
new_lst = [s-i for i in lst]
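A quick check of this against the sample values from the question (the rounding is only for display):

```python
lst = [19.51, 15.45, 16.67, 0.0, 12.06, 5.97, 15.56, 0.0, 12.8, 17.58]
s = sum(lst)                            # compute the total once
new_lst = [round(s - i, 2) for i in lst]
print(new_lst)
```

Computing the total once also keeps this O(n); re-summing a popped copy of the list on every iteration would be O(n²).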
Nice answer, and clean solution.
@S3DEV that's the best part about Python
Thanks a lot for this. I did think I was doing something unnecessary and the solution would be simple.
---
Transferring a string from one .cpp file to another
Good evening, I'm working on a project that requires me to use a string variable in one .cpp file and the same string variable in another .cpp file. This is my patientdatabase.cpp:
void patientDatabase::on_tableView_view_activated(const QModelIndex &index)
{
QString value = ui->tableView_view->model()->data(index).toString();
patientR.connOpen();
QSqlQuery qry;
qry.prepare("select * from imageTable where PatientId = '"+value+"'");
if (qry.exec()){
IVDB *x = new IVDB;
x->show();
}
}
This function switches windows to IVDB whenever I click on a PatientId in the table provided in my patientDatabase, and now I need the string 'value' in my IVDB.cpp:
QSqlQuery *qry = new QSqlQuery(registration.db);
qry->prepare("select * from imageTable where PatientId ='"+value+"'"); // this string 'value; is from the last .cpp file
I don't want to use global variables, but I'm stuck here and not sure how to approach it.
The best solution I can think of would be using signals and slots. In on_tableView_view_activated you should emit a signal and catch it in IVDB.cpp.
Using global variables doesn't sound good, though it works!
If you don't want to use a global variable, it would be better to share your classes so we can work within your current design. Otherwise, there is a global approach below.
You will declare a global variable named value in a header file using the extern keyword. This means there is a single global value variable, created once, and every .cpp file that includes that header uses the same variable. Let me write an example.
Say you have a common.h file declaring your string value:
extern std::string value;
And you have two different .cpp files that share that common value. In file1.cpp, define that value.
file1.cpp:
#include "common.h"
std::string value;
void fooInFile1() {
value = "Changed from fooInFile1!";
}
file2.cpp
#include "common.h"
void fooInFile2() {
value = "Changed from fooInFile2!";
}
In this example file1.cpp and file2.cpp are affecting the same value variable.
---
Kotlin Iterate over list of lists
Let's say we have a List<User> and each user has a List<Movies> of all movies that the users watched.
What if we want to get a combination of user id and all watched movies under "drama" genre types, how could we do it without creating a temp mutable List? Is there an operator to iterate over a list of lists and get this data?
Do you mean get the "drama" movies for a specific id or the "drama" movies for all ids?
I mean "drama" movies for all existing users.
As far as I understand your solution should look like this:
users.map { user ->
user.id to user.movies
.filter { movie -> movie.genre == Genre.DRAMA }
}
.forEach { (userId, dramas) ->
//do whatever you want with combinations
}
This results in combinations of user id and all dramas that the user has watched.
For a more precise answer, please add your User and Movies classes.
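With hypothetical User and Movie classes (my own stand-ins, since the question doesn't show them), the whole thing might look like:

```kotlin
data class Movie(val title: String, val genre: String)
data class User(val id: Int, val movies: List<Movie>)

val users = listOf(
    User(1, listOf(Movie("A", "DRAMA"), Movie("B", "COMEDY"))),
    User(2, listOf(Movie("C", "DRAMA")))
)

// Pair each user id with that user's drama titles -- no temporary mutable list
val pairs = users.map { u -> u.id to u.movies.filter { it.genre == "DRAMA" }.map { it.title } }

fun main() {
    pairs.forEach { (id, dramas) -> println("user $id watched dramas $dramas") }
}
```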
Thank you, I talked with a friend and he showed an example using .flatMapIterable, since I am using RxKotlin. but your answer works too.
---
Can XOR be expressed using SKI combinators?
I have question about SKI-Combinators.
Can XOR (exclusive or) be expressed using S and K combinators only?
I have
True = Cancel
False = (Swap Cancel)
where
Cancel x y = K x y = x
Swap: ff x y = S ff x y = ff y x
The I combinator can be represented as S K K. So, if you can express NOR using SKI, you can do it with just S and K.
Would it be infix? And how about bracketing? (P.S. I have tried quite a few ways, not successful yet.) Thanks!
Assuming you enclose your logical connectives in parentheses, i.e. NOR = (... some expression ...): (1) NOR cannot be expressed as a postfix operator (as AND) -- to see this observe that
T T NOR = T, no matter what NOR is (but should be F);
(2) NOR cannot be expressed as an infix operator --
F NOR F = F (but should be T)
Ah yes of course! actually I have made a huge mistake of asking the wrong question! I meant to ask XOR not NOR! Very sorry!
Booleans
Your question is a bit unclear on the details, but it seems that what you mean is that you have the following representation of booleans:
T := K
F := S K
This works because it means the following reductions hold:
T t e => t
F t e => e
in other words, b t e can be interpreted as IF b THEN t ELSE e.
XOR in terms of IF _ THEN _ ELSE _
So given this framework, how do we implement XOR? We can formulate XOR as an IF expression:
xor x y := IF x THEN (not y) ELSE y = (IF x THEN not ELSE id) y
which can be eta-reduced to
XOR x := IF x THEN not ELSE id = x not id
Some function combinators
We have id = SKK as standard, and not can be expressed as flip, since flip b t e = b e t = IF b THEN e ELSE t = IF (not b) THEN t ELSE e. flip itself is quite involved but doable as
flip := S (S (K (S (KS) K)) S) (KK)
Now we just need to figure out a way to write a function that takes x and applies it on the two terms NOT and ID. To get there, we first note that if we set
app := id
then
app f x = (id f) x = f x
and so,
(flip app) x f = f x
We are almost there, since everything so far shows that
((flip app) id) ((flip app) not x) = ((flip app) not x) id = (x not) id = x not id
The last step is to make that last line point-free on x. We can do that with a function composition operator:
((flip app) id) ((flip app) not x) = compose ((flip app) id) ((flip app) not) x
where the requirement on compose is that
compose f g x = f (g x)
which we can get by setting
compose f g := S (K f) g
Putting it all together
To summarize, we got
xor := compose ((flip app) id) ((flip app) not)
or, fully expanded:
xor = S (K ((flip app) id)) ((flip app) not)
= S (K ((flip app) (SKK))) ((flip app) flip)
= S (K ((flip SKK) (SKK))) ((flip SKK) flip)
= S (K (((S (S (K (S (KS) K)) S) (KK)) SKK) (SKK))) (((S (S (K (S (KS) K)) S) (KK)) SKK) (S (S (K (S (KS) K)) S) (KK)))
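As a sanity check (my own addition, not part of the answer): modelling S and K as curried Python functions under strict evaluation, the derived term does satisfy the XOR truth table:

```python
# S and K as curried functions; everything else is built from them
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x
I = S(K)(K)

T = K       # true:  T t e -> t
F = S(K)    # false: F t e -> e

# flip = S (S (K (S (KS) K)) S) (KK), i.e. the C combinator: flip f x y = f y x
flip = S(S(K(S(K(S))(K)))(S))(K(K))

app = I
NOT = flip  # as a boolean transformer: (flip b) t e = b e t

# xor = S (K ((flip app) id)) ((flip app) not)
xor = S(K(flip(app)(I)))(flip(app)(NOT))

class Tag:  # opaque marker that tolerates stray applications during reduction
    def __init__(self, name): self.name = name
    def __call__(self, _): return self

def as_bool(b):
    return b(Tag("T"))(Tag("F")).name

for x, xv in ((T, True), (F, False)):
    for y, yv in ((T, True), (F, False)):
        assert (as_bool(xor(x)(y)) == "T") == (xv != yv)
print("xor truth table verified")
```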
Yes! In 2020, Stephen Wolfram constructed combinators for all 16 two-input Boolean functions, including XOR. It turns out XOR can be expressed as s[s[s[s[s]][s[s[s[k]]]][s]]][k] where k represents the value True and s[k] represents the value False.
For example, here is the combinator form of XOR operating on true and false in the Wolfram Language.
true = k;
false = s[k];
ResourceFunction["CombinatorFixedPoint"][s[s[s[s[s]][s[s[s[k]]]][s]]][k][true][false], SKGlyphs -> {s, k}]
It returns the expected output – True:
k
---
Weyl's Branching Rule for $SU(N)$-Setting
On the Wikipedia page for restricted representations
https://en.wikipedia.org/wiki/Restricted_representation
there is presented a number of explicit "branching rules". In particular, there is the Weyl's branching rule from U(N) to U(N-1) given in terms of signatures $f_1 \geq \cdots \geq f_N$, for $f_i \in \mathbb{N}$, labelling irreps of U(N). I would guess that this generalises directly to the case of branching from $SU(N)$ to $SU(N-1)$ but cannot find a reference. Can someone suggest a reference?
The question is answered on page 385 of the classical Zhelobenko book
Compact Lie groups and their representations
for the more general case of $SU(n+m)/SU(n) \times SU(m)$.
Every irrep of $SU(n)$ extends to irreps of $U(n)$, and conversely, the restriction of any irrep of $U(n)$ to $SU(n)$ remains irreducible. If your dominant weight of $SU(n)$ is $(a_1,\ldots,a_{n-1})$ then extend it to $(\sum_{i=1}^{n-1} a_i, \sum_{i=2}^{n-1} a_i, \ldots, a_{n-1}, 0)$, apply the $U(n)$ restriction, take differences $f_i-f_{i+1}$ of the resulting signatures.
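To make the recipe concrete, here is a small worked example of my own (not from the answer), for the adjoint representation:

```latex
% Take the adjoint of SU(3): dominant weight (a_1, a_2) = (1, 1).
% Extend to the U(3) signature (a_1 + a_2, a_2, 0) = (2, 1, 0).
% Weyl's U(3) -> U(2) rule keeps every interlacing signature (g_1, g_2) with
%     2 \geq g_1 \geq 1 \geq g_2 \geq 0,
% i.e. (2,1), (2,0), (1,1), (1,0).
% The differences g_1 - g_2 give the SU(2) labels 1, 2, 0, 1, so
%     8 -> 2 + 3 + 1 + 2   (dimensions: 2 + 3 + 1 + 2 = 8, as expected).
```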
Maybe the following paper might prove helpful to your question:
Masatoshi Yamazaki, Branching Diagram for Special Unitary Group SU(n), J. Phys. Soc. Jpn. 21, pp. 1829-1832 (1966)
---
Back button referrer doesn't work - Javascript nor asp.net
I have a site that has the following framework: Page A = highest level, Page B = next level, Page C = lowest level.
Page A contains 10 links the user can select that take it to a certain area of Page B (done by anchor hashes)
When you're taken to the correct anchor hash area on page B, the links that are shown are links that are part of theme Page B. Any link here takes you to a specific, unique Page C.
What I want to do is when the user clicks back (the one built in to browser, not a button on the page) or a nav tab to return to the Page B, the user is taken back to that anchored position.
The Problem:
I've tried using javascript and razor, but am getting the same issue:
If the user clicks the navigation tab to go to Page B, the referrer works and it takes the user back to the proper section of Page B.
If the user clicks the back button, the referrer does not return the Page C link and I cannot redirect the user to the proper area on Page B.
Here's the JS script:
var oldURL = document.referrer;
if (oldURL.indexOf("page") !== -1 ) {
window.location = //link with proper anchor tab
}
That didn't work, so I tried Razor (not the redirect logic, just to see what it would return):
string referer = Request.UrlReferrer.ToString();
@referer
Does the back button mean 'go back one page' or does it mean 'go up one level'? If it's the former you can use history.go(-1); which is what the browser back button does. If it's the latter you can always make a link to the parent level. Trying to mix these is not a good idea, it's confuses users.
Both. The only time this instance needs to occur is when the user goes from page C to page B
---
Error creating simple minikube: `Error updating cluster: generating kubeadm cfg: parsing kubernetes version`
I have a Homebrew installed kubernetes-cli 1.12.0 and minikube 0.30.0:
~ ls -l $(which kubectl)
/usr/local/bin/kubectl -> ../Cellar/kubernetes-cli/1.12.0/bin/kubectl
~ ls -l $(which minikube)
/usr/local/bin/minikube -> /usr/local/Caskroom/minikube/0.30.0/minikube-darwin-amd64
~ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
~ rm -rf ~/.kube ~/.minikube
~ minikube start --memory 8000 --kubernetes-version 1.12.0
Starting local Kubernetes 1.12.0 cluster...
Starting VM...
Downloading Minikube ISO
170.78 MB / 170.78 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
E1022 10:08:41.271328 44100 start.go:254] Error updating cluster: generating kubeadm cfg: parsing kubernetes version: parsing kubernetes version: strconv.ParseUint: parsing "": invalid syntax
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
parsing kubernetes version: strconv.ParseUint: parsing "": invalid syntax
Try to use different version notation. Here is an example from the Kubernetes documentation:
minikube start --kubernetes-version v1.7.3
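Applied to the command from the question, the only change needed is the v prefix (this is just the question's own invocation, restated):

```shell
minikube start --memory 8000 --kubernetes-version v1.12.0
```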
wow! That actually worked. I just had to put the letter v in the version. Thanks so much!!
By default it starts with v1.10.0. I've tested minikube 0.30.0 with kubernetes v1.11.2, v1.12.1 and v1.13.0-alpha.1. It works pretty well. All available Kubernetes releases can be found here: https://github.com/kubernetes/kubernetes/releases
---
How to implement AsyncTask from the following example?
I tried to fix it, but the list data doesn't show up. Also, there is a red line in my code: in onPostExecute it reports that The constructor SimpleAdapter(EventsActivity.syncEvent, ArrayList<HashMap<String,String>>, int, String[], int[]) is undefined.
Another thing I would like to do is to show progress when we refresh the events, like when we refresh notifications in Facebook: we see a refreshing animation and the last date and time we refreshed. How can I do that? Do you have any ideas? I am quite new to this kind of implementation. Thanks for your advice.
public class EventsActivity extends ListActivity {
EventDataSet sitesList = null;
ArrayList<HashMap<String, String>> mylist = new ArrayList<HashMap<String, String>>();
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.listplaceholder);
}
private class syncEvent extends AsyncTask<String, Integer, String> {
@Override
protected String doInBackground(String... arg0) {
try {
/** Handling XML */
SAXParserFactory spf = SAXParserFactory.newInstance();
SAXParser sp = spf.newSAXParser();
XMLReader xr = sp.getXMLReader();
/** Send URL to parse XML Tags */
URL sourceUrl = new URL(
"http://xxxx.heroku.com/xxxxxxxxxxx.xml");
/** Create handler to handle XML Tags ( extends DefaultHandler ) */
EventXMLHandler myXMLHandler = new EventXMLHandler();
xr.setContentHandler(myXMLHandler);
xr.parse(new InputSource(sourceUrl.openStream()));
} catch (Exception e) {
}
/** Get result from MyXMLHandler SitlesList Object */
sitesList = EventXMLHandler.sitesList;
/** Set the result text in textview and add it to layout */
for (int i = 0; i < sitesList.getName().size(); i++) {
HashMap<String, String> map = new HashMap<String, String>();
map.put("name", "Name: " + sitesList.getName().get(i));
map.put("createat", "Create-At: " + Convertor.getDateTimeDDMMYY(sitesList.getCreateat().get(i)));
mylist.add(map);
}
return null;
}
@Override
protected void onPostExecute(String result) {
ListAdapter adapter = new SimpleAdapter(this, mylist, R.layout.eventitem,
new String[]{"name", "createat"},
new int[]{R.id.item_title, R.id.item_subtitle});
setListAdapter(adapter);
//super.onPostExecute(result);
}
@Override
protected void onProgressUpdate(Integer... values) {
// TODO Auto-generated method stub
super.onProgressUpdate(values);
}
}
}
The problem is with the this reference. When you're inside the AsyncTask, this refers to your AsyncTask subclass, and SimpleAdapter requires a Context as the first parameter. You should replace the call with an explicit reference to the parent class:
new SimpleAdapter(EventsActivity.this, mylist , R.layout.eventitem,
new String[] { "name", "createat" },
new int[] { R.id.item_title, R.id.item_subtitle });
Now you can call it like this: new syncEvent().execute();
For example:
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.listplaceholder);
new syncEvent().execute();
}
Probably one way to do it. I have tried it, but it doesn't work. I think the problem still occurs because I see the red underline in my code: at onPostExecute it reports that The constructor SimpleAdapter(EventsActivity.syncEvent, ArrayList<HashMap<String,String>>, int, String[], int[]) is undefined.
---
Akka / Futures - pipe different messages depending on success or failure?
I've got the following piece of code on an Actor where I ask someone else for an action (persist something in an external DB)
If that is successful:
Then I send a message to myself to reflect the result of the action in my local state and then return that to the original sender.
In case of a failure on the persistence to the DB:
Then I want to reply with Status.Failure (as returned to me) directly to the current sender.
The code looks like this:
case Event(SomeAction(name), _) =>
val origin = sender()
ask(someOtherActor, SomeAction(name)).mapTo[ActionResult]
.map(ActionCompleted)
.onComplete {
case Success(value) => self.tell(value, origin)
case Failure(e) => origin ! Status.Failure(e)
}
stay()
case Event(ActionCompleted(result), state) =>
stay using state.update(result) replying result
The code above works, but I need to rely on copying the sender into a local variable to avoid closing over it.
I was wondering if there is any better way to do this with pipeTo?
You can build your own pipeTo that does what you need. I think routeTo would be a good name for it.
implicit class RouteTo[A](f: Future[A]) {
import akka.pattern.pipe
def routeTo(successDestination: ActorRef,
failureDestination: ActorRef)(
implicit sender: ActorRef,
ec: ExecutionContext): Unit =
f onComplete {
case s @ Success(_) => successDestination.tell(s, sender)
case f @ Failure(_) => failureDestination.tell(f, sender)
}
}
import RouteTo._
Future("hello").routeTo(successRecipient, failureRecipient)
---
search for occurrences of an integer in a list in R
this is a really basic question, and I am probably not seeing something obvious but I am currently stuck with this problem:
In R, I generated a list of integers using the sample() function. Then I want to find an exact pattern.
Should be obvious, but grep does the following:
1)
grep('03230', hugeListofNumbers)
>integer(0)
2)
pattern<-toString(03230)
x<-toString(hugeListofNumbers)
grep(pattern, x)
>[1] 1
3) And using matchPattern from the Biostrings Package:
matchPattern(pattern, x)
start end width
[1] 5146 5158 13 [0, 3, 2, 3, 2]
....
No result helps me find the occurrences of the pattern. And although the last one using matchPattern seems OK, it finds some weird 13-character-long string that does not match in any way the 5-character-long pattern...
What am I not seeing here? How can I just perform a normal grep search as in the shell?
Edit:
To generate the list with the properties I needed I used:
hugeListofNumbers<-sample(c(0,1,2,3), 10^5, replace=TRUE, prob=NULL)
pattern<-sample(c(0,1,2,3), 5 , replace=TRUE, prob=NULL)
Can you add some code to generate a specific list of numbers for us to help you with? Something like HugeListOfNumbers <- c(1,2,23452435, 245) ?
I think it will never be the case that an R integer will ever have a leading 0 digit when coerced by a regex function to a character vector. Notice that your pattern value is "3230", not "03230" My close vote is because I think this is isomorphic to a typographical error.
@BondedDust I am aware that the pattern value is different from the actual pattern. That is not a typographical error, but I am looking for the actual pattern regardless of its numeric significance. The same task could be performed with letters. The reason why I use integers is for following steps where numeric operations are needed. That is why I tried converting it to a string temporarily for the search.
@waternova Edit has been made and the code is there.
OK, I found the solution, which was, as expected, because I was overlooking a very basic problem: the sample() function of R returns a vector, so I could not match any pattern longer than 1 before collapsing it into a single string.
How to find/get class object from a class method, static method or instance method
For example.
class Klass:
def f(self):
pass
f = Klass().f()
klass = # i want to get Klass object.
I searched for a method in the inspect module but could not find one. Actually, I found a private function named _findclass, but it does not work correctly for local classes, for example:
def test():
class K:
def f(self):
pass
class_object = _findclass(K().f())
Don't call the method: use f = Klass().f
Once you get the output of a function, you can't get any information about it. In your example, Klass().f() returns None. Once this value is returned, it is indistinguishable from any other None. But if you instead do f = Klass().f, you can get the instance with f.__self__.
For example:
class Klass:
def f(self):
pass
obj = Klass()
f = obj.f
print(f.__self__ is obj) # prints True
obj.foo = 4
print(f.__self__.foo) # prints 4
If you want to get the class, you can use __class__:
print(f.__self__.__class__ is Klass)  # prints True
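As a runnable sketch of the __self__ approach for the different method kinds (the class and method names here are just illustrative):

```python
class Klass:
    def f(self):
        pass

    @classmethod
    def g(cls):
        pass

bound = Klass().f
# A bound instance method remembers its instance, so the class is reachable:
print(bound.__self__.__class__)  # <class '__main__.Klass'>

# A bound classmethod's __self__ is the class itself:
print(Klass.g.__self__)  # <class '__main__.Klass'>
```

A staticmethod, by contrast, is a plain function with no __self__ attribute, so the defining class cannot be recovered from it this way.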
If you really need the class from the output, you could try having every method return its class:
class Klass:
def f(self):
normal_result = "whatever"
return normal_result, Klass
result, cls = Klass().f()
But I want to find the obj with an inside decorator.
NA/NaN/Inf error when fitting HMM using depmixS4 in R
I'm trying to fit simple hidden markov models in R using depmix. But I sometimes get obscure errors (Na/NaN/Inf in foreign function call). For instance
require(depmixS4)
t = data.frame(v=c(0.0622031327669583,-0.12564002739468,-0.117354660120178,0.0115062213361335,0.122992418345013,-0.0177816909620965,0.0164821157439354,0.161981367176501,-0.174367935386872,0.00429417498601576,0.00870091566593177,-0.00324734222267713,-0.0609817740148078,0.0840679943325736,-0.0722982123741866,0.00309386232501072,0.0136237132601905,-0.0569072400881981,0.102323872007477,-0.0390675463642003,0.0373248728294635,-0.0839484669503484,0.0514620475651086,-0.0306598076180909,-0.0664992242224042,0.826857872461293,-0.172970803143762,-0.071091459861684,-0.0128631184461384,-0.0439382422065227,-0.0552809574423446,0.0596321725192134,-0.06043926984848,0.0398700063815422))
mod = depmix(response=v~1, data=t, nstates=2)
fit(mod)
...
NA/NaN/Inf in foreign function call (arg 10)
And I can have input of almost identical size and complexity work fine...Is there a preferred tool to depmixS4 here?
Were you able to figure this one out?
There is no guarantee that the EM algorithm can find a fit for every dataset given an arbitrary number of states. For example, were you to try to fit a 2-state Gaussian model to data generated from a Poisson distribution with lambda = 1, you would receive the same error.
set.seed(3)
ydf <- data.frame(y=rpois(100,1))
m1 <- depmix(y~1,ns=2,family=gaussian(),data=ydf)
fit(m1)
iteration 0 logLik: -135.6268
iteration 5 logLik: -134.2392
iteration 10 logLik: -128.7834
iteration 15 logLik: -111.5922
Error in fb(init = init, A = trDens, B = dens, ntimes = ntimes(object), :
NA/NaN/Inf in foreign function call (arg 10)
With regard to your data, you can fit a model just fine with 1 state. With 2 states the algorithm cannot find a solution (even with 10000 random starts). For 3 states, the issue seems to be with the initialization of the starting states of the model. If one attempts to run the same model 100 times with the data you provided, you get convergence in some of the 100 runs. Example below:
>require(depmixS4)
>t = data.frame(v=c(0.0622031327669583,-0.12564002739468,-0.117354660120178,0.0115062213361335,0.122992418345013,-0.0177816909620965,0.0164821157439354,0.161981367176501,-0.174367935386872,0.00429417498601576,0.00870091566593177,-0.00324734222267713,-0.0609817740148078,0.0840679943325736,-0.0722982123741866,0.00309386232501072,0.0136237132601905,-0.0569072400881981,0.102323872007477,-0.0390675463642003,0.0373248728294635,-0.0839484669503484,0.0514620475651086,-0.0306598076180909,-0.0664992242224042,0.826857872461293,-0.172970803143762,-0.071091459861684,-0.0128631184461384,-0.0439382422065227,-0.0552809574423446,0.0596321725192134,-0.06043926984848,0.0398700063815422))
>mod = depmix(response=v~1, data=t, nstates=2)
>fit(mod)
...
NA/NaN/Inf in foreign function call (arg 10)
>replicate(100, try(fit(mod, verbose = F)))
[[1]]
[1] "Error in fb(init = init, A = trDens, B = dens, ntimes = ntimes(object), : \n NA/NaN/Inf in foreign function call (arg 10)\n"
[[2]]
[1] "Error in fb(init = init, A = trDens, B = dens, ntimes = ntimes(object), : \n NA/NaN/Inf in foreign function call (arg 10)\n"
[[3]]
Convergence info: Log likelihood converged to within tol. (relative change)
'log Lik.' 34.0344 (df=14)
AIC: -40.0688
BIC: -18.69975
... output truncated
In CLI, what multiple high-level languages means
I read a lot on .Net Framework on the Internet and I am not fully clear. One question I have is regarding the CLI (Common Language Infrastructure).
According to Wikipedia:
The Common Language Infrastructure (CLI) is an open specification developed by Microsoft and standardized by ISO[1] and ECMA[2] that describes executable code and a runtime environment that allow multiple high-level languages to be used on different computer platforms without being rewritten for specific architectures. The .NET Framework and the free and open source Mono and Portable.NET are implementations of the CLI.
What does multiple high-level languages refer to? Is it only .NET languages?
Just to make two examples from Microsoft: VB.NET and F#
I think multiple high-level languages refer to the languages in this list. All these languages allow compilation into Common Intermediate Language.
C# is the most prominent member of the family and a remarkable member is C# .NET, but there are many non .NET languages that implement CLI (mentioned in the list from the first link).
Those others have to conform to the CLS and CTS, right? Do they use the BCL?
"a remarkable member is C# .NET" Did you mean VB.NET?
Screw positions in newer 3.5" hard drives do not match older case cage screw positions. Solution?
I want to replace the old 3.5" hdd drive with a new drive but the mounting screw holes on all (?) of the newer drives are in different positions that don't mount correctly. Is there any sort of adapter that will let me mount the new 3.5" hdd in an older case with different mounting holes?
on all (?) of the newer drives are in different positions .. please provide specific model information. I personally doubt that this is true but I am also often wrong.
What sort of drive was in this tray/case previously, and what sort of drive are you trying to install? Are they both SATA or PATA, or SCSI, or do/did they have different connector types? Are you perhaps just turned around (just checking the simple, most likely things)?
Please clarify your specific problem or provide additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking.
There's a whole lot of incredulity in comments here. How about the adage, "Just because you don't understand it does not make it unclear."
This started with "larger than 4TB" drives, where manufacturers changed the mount points. This was in 2014, just as everybody else was moving to smaller formats - 2.5", NVMe etc. - which I guess is why people don't know about it.
Some are made with a 6-point mount which fits either system, others just 4 holes that won't fit anything you already own.
Toshiba have the simplest pictorial explanation
but others such as WD have more in-depth info for the mechanically inclined. https://support.wdc.com/images/kb/2579-771970-A02.pdf
So - either you find a drive with 6 holes, or you see if someone makes an adapter or alternate sled mount.
OWC make them for the old Mac Pros, but I don't know about any other sellers.
One saving factor may be if your drive bays use the side screws - those haven't changed.
Additionally, if your mounts are the other way up to the ones on the Mac Pro, ie 'screw side down' then you could probably get away with just two screws on a desktop, not subject to a great deal of physical handling. I have smaller 2.5" SSDs that literally just rest in place, because gravity does the work & the machines don't do a lot of travelling.
I've been confounded recently by this - being a late adopter and frugal bastard, I'm just now buying a hard disk >6TB and trying to fit it into my decade-old Cooler Master N400 case, which unfortunately only has holes for the middle and bottom screws on the old HDD design, so I can only get 1 screw into each large-capacity HDD. I have scoured the interwebs for a solution and can't find one. So I think I'm going to be tapping some holes into sheet metal soon.. :(
Remove duplicates and get the count based on 2 sub criteria in VBA
I'd appreciate it if you could take a look at the code below and help me further, as I'm stuck badly. The code below gets me the unique list of names from the Selection and their respective counts, using a "Scripting.Dictionary". However, there is 1 more header, the "Date"; the desired output would be based on the Date as well, as shown in the picture. I have tried everything I could, but failed.
My Current Output based on the code
Sub vbacodechecker()
    Dim d As Object, rng As Range, tmp As String
    Dim k As Variant, i As Long
    Set d = CreateObject("Scripting.Dictionary")
    For Each rng In Selection
        tmp = Trim(rng.Value)
        If Len(tmp) > 0 Then d(tmp) = d(tmp) + 1
    Next rng
    Range("B1").Select
    For Each k In d.Keys
        ActiveCell.Offset(i + 1, 1).Value = k
        ActiveCell.Offset(i + 1, 2).Value = d(k)
        i = i + 1
    Next k
End Sub
Desired Output, names unique date wise
Create a composite dictionary key by concatenating the name and the date with (eg) | and then split the key when populating to the sheet
Thanks for the suggestion. I was able to do this by concatenating both "date" and "selection", and then further crack them into pieces using the delimiter and instr function for an end result.
Good to hear you worked it out.
Another approach would be to use Power Query, available in Excel 2010+.
Just select to do Group By \ Selection & Date \ Count rows
M-Code
let
Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Date", type date}}),
#"Grouped Rows" = Table.Group(#"Changed Type", {"Selection", "Date"}, {{"Count", each Table.RowCount(_), type number}})
in
#"Grouped Rows"
Source Data
Results
Sorry, my project was prepared on a VBA-only basis, hence pivot tables and Power Query weren't the first option to achieve this. Appreciate your help on this.
Is there a way to find out why a python program closed?
Is there a way to programmatically find out why a Python program closed?
I'm making a game in python, and I've been using the built in open() function to create a log in a .txt file. A major problem I've come across is that when it occasionally crashes, the log doesn't realise it's crashed.
I've managed to record if the user closes the game through pressing an exit button, but I was wondering if there is a way to check how the program closed. For instance if the user presses exit, if it crashes or if it is forcefully closed(through the task manager for instance)
Have a look at the signal module. You should be able to trap the most common signals...
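To illustrate the comment above, a minimal sketch of trapping a termination signal and logging it before exiting (the log filename is made up; note that a hard kill such as SIGKILL cannot be trapped):

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Record that the process was terminated externally, then exit cleanly
    with open("game_log.txt", "a") as log:
        log.write("received signal %d, shutting down\n" % signum)
    sys.exit(0)

# Register the handler for SIGTERM (sent by e.g. `kill <pid>`)
signal.signal(signal.SIGTERM, handle_sigterm)
```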
You can supply your own exception handler:
import sys, logging
def excepthook_logger(extype, value, traceback):
    logging.error("Oh no! An uncaught exception happened!",
                  exc_info=(extype, value, traceback))
    # Uncomment to also print the exception to stderr as usual
    #sys.__excepthook__(extype, value, traceback)
sys.excepthook = excepthook_logger
Then every uncaught exception is logged in your log file.
A few tips:
use try/except wherever possible.
Even if it crashes, stack trace will tell which line was last executed.
But how can I get the information from the stack trace to appear in the .txt file? Is there a way to convert that information into a string? I don't entirely understand what a stack trace is D:
usually the last line of the stack trace will show you where you went wrong in your code. If you want the stack trace in a .txt file, you can append it to a file (using '>>' redirection).
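To get the trace into the .txt log from inside Python, traceback.format_exc() returns the current exception's stack trace as a string (the log filename here is just an example):

```python
import traceback

try:
    1 / 0  # stand-in for the game code that crashes
except Exception:
    # format_exc() gives the same text the interpreter would print
    with open("game_log.txt", "a") as log:
        log.write(traceback.format_exc())
```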
You can also use the Python debugger (pdb) that is installed by default.
On the command line, for example, you can inspect your program:
>>> import pdb
>>> import yourprogram
and then test it by typing
>>> yourprogram.test()
Traceback (most recent call last):
File "", line 1, in ?
File "./mymodule.py", line 4, in test
test2()
File "./mymodule.py", line 3, in test2
print spam
To take a closer look, you should consider setting a debug trace to stop the execution at critical points using
pdb.set_trace()
which stops the execution so you can inspect the current state.
For more details I recommend you to see the documentation page
http://docs.python.org/2/library/pdb.html
Correct way to transform python request parameters
I'm using Pyramid web framework to build a web app. There are many times that I find myself doing this:
result = request.params.get('abc', None)
if result:
result = simplejson.loads(result)
else:
result = {}
The thing is, sometimes, 'abc' request parameter is not present and the value of "result" would be None. Hence I always have to check if it's None before I perform a simplejson.loads operation or else I would get a TypeError: expected string or buffer exception.
Is there a better/more "pythonic" way of doing this?
Try this:
result = simplejson.loads(request.params.get('abc', '{}'))
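If this pattern comes up for several parameters, it could be wrapped in a small helper (the helper name is made up; the stdlib json module is used here in place of simplejson, which has the same interface):

```python
import json

def json_param(params, key, default=None):
    """Parse a JSON-encoded request parameter, falling back to a default."""
    raw = params.get(key)
    if raw is None:
        return {} if default is None else default
    return json.loads(raw)

# Plain dicts stand in for request.params here:
print(json_param({'abc': '{"x": 1}'}, 'abc'))  # {'x': 1}
print(json_param({}, 'abc'))                   # {}
```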
Brilliant! Thanks. Sorry I can only accept the answer in another 7 minutes.
forEach loop syntax in JavaScript
Consider myArray as an array of objects.
Are these two pieces of code equivalent ?
myArray.forEach((item) => {
return doAnAction(item);
});
myArray.forEach((item) => {
doAnAction(item);
});
Is there any of them that is better in terms of syntax? If yes, why ?
Yes, they're equivalent. .forEach() does not use the return value of its callback. For more information, you can always check MDN. It's a great resource.
They're syntactically identical but one of them has a pointless return. If you're asking about code style then 1) opinion-based questions are off-topic on SO 2) Personally, I prefer the latter because adding return implies you're using the return value, when there is none.
The better question is why you do not use the function directly as the callback, like
myArray.forEach(doAnAction);
The callback must follow the API of Array#forEach, which is
Function to execute for each element, taking three arguments:
currentValue: The current element being processed in the array.
index: The index of the current element being processed in the array.
array: The array that forEach() is being applied to.
If you need only the element, then you could use the given callback directly without any mapping.
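One caveat worth knowing about passing a function directly: because forEach hands over all three arguments, a callback that accepts optional extra parameters can misbehave. A sketch of both cases (using map for the second, since it has the same callback signature):

```javascript
const logged = [];
const doAnAction = (item) => logged.push(item);

// Passing the function directly works when it only uses its first argument:
[1, 2, 3].forEach(doAnAction); // logged is now [1, 2, 3]

// But a callback that takes optional extra parameters also receives the
// index (and the array). parseInt(value, index) treats the index as the
// radix, so this yields 1, NaN, NaN instead of 1, 2, 3:
const parsed = ["1", "2", "3"].map(parseInt);
```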
This is not specific to the forEach loop. It is just ES6 syntax.
With ES5, here is the right syntax:
myArray.forEach(function (item) {
doAnAction(item);
});
With ES6, you have more possibilities:
// Solution #1
myArray.forEach((item) => {
doAnAction(item);
});
// Solution #2
myArray.forEach(item => {
doAnAction(item);
});
// Solution #3
myArray.forEach(item => doAnAction(item));
They weren't asking about syntax. They're asking about if the return changes this at all and, if so, how.
actually I was asking about the syntax :) its in the title of the post, clarified my post as well for other viewers! Thanks
@Rose Yeah, that is why I strictly answered about the syntax... ;) I do not know how your doAnAction function looks like, but be aware that it is generally awkward to use return in a forEach loop. It is much more frequent to use return with other core array methods like map, reduce, filter, some, every, etc.
As I understand it, there's no difference between these two, because you are providing an anonymous function to be applied to each element in the array; the first one returns the result of doAnAction(item) (which is ignored anyway) and the second one does not.
So they are really equivalent, the second just makes more sense to me for the intended meaning.
How do I print multiple DOCX files alphabetically?
I have a ZIP file that my PHP web application outputs. It contains merged DOCX files with user names for printing. The file structure is just like this:
Adam Gray.docx
Amanda Black.docx
Benjamin Franklin.docx
Zane.docx
When we select all the files in Windows explorer and click the print button, the print spooler doesn't care about order and wants to spit them out as fast as possible.
Any way to make them spool up alphabetically? This is needed because they have corresponding envelopes that need to be printed.
I had this exact need of yours in around 2007 to 2008. The only way I was able to get the files to be printed in order was to use a program. I just looked up that program again, installed it, and used it and it does print the files in order.
The program is called Print Conductor, which you may find here: http://www.print-conductor.com/ Once there, click on the download tab at the top and then on the link below. The program is 100% free.
Once you have installed the program, open it, drag the files you wish to print into it, click on the file name header to get them sorted in order, and then click the Start button in the bottom right.
That's it!
Have fun with it.
Works PERFECTLY. Also, it is really cool that you can run more than one instance of the application to print to different printers. You are a lifesaver @rolo
Django - QuerySet filter - combining 2 conditions
I have a model (Delivery) with 2 fields called name and to_date. I just need to get the object with the specific name and its maximum to_date.
Delivery.objects.filter(name__exact = 'name1').aggregate(Max('valid_to'))
The above query will return the maximum date. Is it possible to fetch the complete object?
To get a single object ordered by valid_to:
obj = Delivery.objects.filter(name='name1', to_date=my_date).order_by('-valid_to')[0]
I thought it was an error as valid_to was not mentioned as a field. Could you post the model as well? Updated code to reflect what I think you are trying to achieve
Try this:
maximum_to_date = Delivery.objects.filter(name__exact='name1').aggregate(max_date=Max('valid_to'))['max_date']
result = Delivery.objects.filter(valid_to=maximum_to_date)
Note that you need filter() in the second line, because two or more Deliveries might have the same valid_to value. In such case you can either accept them all, or e.g. take the one with the smallest ID, depending on what you need.
How to recognize DateTime fields as such when assigning a Json.net JArray to a Grid?
My winform client requests a DataTable from server to assign to Grid on forms.
The server will return DataTable as JSON using JSON.net.
We use JArray.Parse to read the returned string and assign this JArray to Grid.
The data is displayed well except for DateTime fields.
All of fields with DateTime type have names that contain "DATE" or "TIME".
I wonder if there is any way to parse JTokens that belong to these fields to DateTime format?
I am using C# with VS2013 .Net 4.0
My returned json string is:
[
{ "ORDER_ID":10, "CREATED_DATE":"20160617181008", "NOTE":"Hello" },
{ "ORDER_ID":20, "CREATED_DATE":"20160616140302", "NOTE":"Ciao" }
]
I parse this JSON string as follows:
JArray table = JArray.Parse(jsonString);
And I assign the variable "table" to DataSource of Grid Control.
I am using the following C# date format: "yyyyMMddHHmmss".
The grid will display the CREATED_DATE column as: 20160617181008 and 20160616140302.
1) What language are you using? C#? VB.NET? 2) Please [edit] your question to show an example of the JSON you are receiving - as embedded text, not as an image, so we could paste it into Visual Studio for testing. 3) Also, please [edit] your question to show what you have tried so far that is not working. See how to ask.
Please see my edit
It would be easiest if you could output dates and times from the server to JSON in ISO 8601 format. If a date string is formatted in this style, LINQ to JSON will automatically recognize it as a DateTime and deserialize it as such. See Deserializing from JSON with LINQ and Serializing Dates in JSON for details.
That being said, you can post-process your JArray to convert the value of any property with "DATE" or "TIME" in its name to a DateTime as follows:
var table = JArray.Parse(jsonString);
var format = "yyyyMMddHHmmss";
var culture = CultureInfo.InvariantCulture; // Change if necessary
var style = DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal; // Change if necessary.
foreach (var property in table.Descendants().OfType<JProperty>().Where(p => (p.Name.Contains("DATE") || p.Name.Contains("TIME")) && p.Value.Type == JTokenType.String))
{
var value = (string)property.Value;
DateTime date;
if (DateTime.TryParseExact(value, format, culture, style, out date))
property.Value = date;
}
Powershell SQL Server SMO - Failed to Enumerate Collection error
I have a weird problem with PowerShell and Invoke-Sqlcmd when used in a ForEach block. In a nutshell, I've done this before many times and not really seen this.
I have a list of SQL Server instances in a database table, let's call it inst_list, and lets say it contains 3 instances: Inst1, Inst2 and Inst3. I'm populating an array called $instances, like this:
$instances = @(Invoke-Sqlcmd -serverinstance $myserver `
-database $mydatabase -query "select name from inst_list") |
select-object -expand name
Then in a ForEach block, I'm enumerating over the list and running Invoke-Sqlcmd, like this:
foreach ($instance in $instances)
{
Invoke-Sqlcmd -serverinstance $instance -database master -query "SELECT 1"
}
Simple enough right? However, I'm getting a Failed to Enumerate Collection error output, and alongside that another error which is Can't Connect to Server Inst1 Inst2 Inst3.
It's as if it's seeing my variable $instances as one large, long string than as a collection of instances to loop over.
When you say your table (inst_list) has 3 instances, you mean there are 3 rows - one for inst1, one for inst2 and one for inst3 - correct?
I ran your command as is (of course change servername, table name. It runs perfect for me. I am pulling from SQL server 10.0.2430.0 to pull servernames. Powershell 4.0.
Scott, that's correct
@Molenpad - I also ran your command (with simple changes) and it ran fine for me as well - what's in $instances (before the foreach) - is it a string or an array?
OK - it seems to be a permissions thing related to SMO on 3/70 servers. I should have mentioned earlier that SMO is used there as well as Invoke-Sqlcmd. If I remove the SMO bit it works OK. Does use of SMO require specific permissions?
Replace Invoke-Sqlcmd with a print statement: Write-Host $instance. That will tell you if you have the right value or if it is a permission issue connecting to the servers.
@Molenpad this might help. Caution: it is an old post. http://stackoverflow.com/questions/898423/does-sql-server-smo-require-special-permissions-to-be-used
TypeORM create connection through proxy address
So I wanted to make a small project with a DB, Backend and mobile frontend. I have a mariadb database on a raspberry pi and have everything setup for connections from anywhere. I created my backend server with TypeORM and hosted it on heroku. The problem is that my heroku server has a dynamic ip and I want to only have a small amount of whitelisted IP's. So I added quotaguard to my heroku app. The problem is the only way to setup that proxy connection (from the quotaguard documentation) is through socksjs (once again from the documentation on quotaguard) which creates a SockConnection object. I know if I use mysql.createConnection() there's a stream option that allows me to pass in that object, but I don't see it in the createConnection function from TypeORM. I have a variable called sockConn and I have verified that the connection is made on the quotaguard dashboard, but I don't know how to add it as an option to the TypeORM createConnection function.
Here is the index.ts file from my project:
import "reflect-metadata";
import {createConnection, getConnectionManager} from "typeorm";
import express from "express";
import {Request, Response} from "express";
import {Routes} from "./routes";
import { DB_CONNECTION } from "./database";
import { MysqlConnectionOptions } from "typeorm/driver/mysql/MysqlConnectionOptions";
import { config } from 'dotenv';
import { parse } from 'url';
var SocksConnection = require('socksjs');
config().parsed
const setupConnection = () => {
DB_CONNECTION.username = process.env.DB_USER;
DB_CONNECTION.password = process.env.DB_PASS;
DB_CONNECTION.host = process.env.DB_HOST;
DB_CONNECTION.port = parseInt(process.env.DB_PORT);
DB_CONNECTION.debug = true;
// ---- this section from quotaguard documentation ---- //
var proxy = parse(process.env.QUOTAGUARDSTATIC_URL),
auth = proxy.auth,
username = auth.split(':')[0],
pass = auth.split(':')[1];
var sock_options = {
host: proxy.hostname,
port: 1080,
user: username,
pass: pass
};
var sockConn = new SocksConnection({host: DB_CONNECTION.host, port: DB_CONNECTION.port}, sock_options);
// ---- this section above from quotaguard documentation ---- //
}
setupConnection();
createConnection(DB_CONNECTION as MysqlConnectionOptions).then(async connection => {
// create express app
const app = express();
app.use(express.json());
// register express routes from defined application routes
Routes.forEach(route => {
(app as any)[route.method](route.route, (req: Request, res: Response, next: Function) => {
const result = (new (route.controller as any)())[route.action](req, res, next);
if (result instanceof Promise) {
result.then(result => result !== null && result !== undefined ? res.send(result) : undefined);
} else if (result !== null && result !== undefined) {
res.json(result);
}
});
});
// setup express app here
// start express server
app.listen(3000);
console.log("Express server has started on port 3000. Open http://localhost:3000 to see results");
}).catch(error => console.log(error));
There might also just be a better package, but as I've never worked with this type of thing, I only went off of what the documentation had on it.
So I reached out to QuotaGuard and they gave me an answer that works. The answer is below:
However we usually recommend that you use our QGTunnel software for database connections.
The QGTunnel software is a wrapper program that presents a socket to your application from the localhost. Then you connect to that socket as if it were your database. Below are some setup instructions for the QGTunnel.
Download QGTunnel into the root of your project
Log in to our dashboard and setup the tunnel
Using the Heroku CLI you can log into our dashboard with the following command:
heroku addons:open quotaguardstatic
Or if you prefer, you can login from the Heroku dashboard by clicking on QuotaGuard Static on the resources tab of your application.
Once you are logged into our dashboard, in the top right menu, go to Setup. On the left, click Tunnel, then Create Tunnel.
Remote Destination: tcp://hostname.for.your.server.com:3306
Local Port: 3306
Transparent: true
Encrypted: false
This setup assumes that the remote database server is located at hostname.for.your.server.com and is listening on port 3306. This is usually the default port.
The Local Port is the port number that QGTunnel will listen on. In this example we set it to 3306, but if you have another process using 3306, you may have to change it (ie: 3307).
Transparent mode allows QGTunnel to override the DNS for hostname.for.your.server.com to <IP_ADDRESS>, which redirects traffic to the QGTunnel software. This means you can connect to either hostname.for.your.server.com or <IP_ADDRESS> to connect through the tunnel.
Encrypted mode can be used to encrypt data end-to-end, but if your protocol is already encrypted then you don't need to spend time setting it up.
Change your code to connect through the tunnel
With transparent mode and matching Local and Remote ports you should not need to change your code. You can also connect to <IP_ADDRESS>:3306.
Without transparent mode, you will want to connect to <IP_ADDRESS>:3306.
Change your startup code.
Change the code that starts up your application. In heroku this is done with a Procfile. Basically you just need to prepend your startup code with "bin/qgtunnel".
So for a Procfile that was previously:
web: your-application your arguments
you would now want:
web: bin/qgtunnel your-application your arguments
If you do not have a Procfile, then heroku is using a default setup in place of the Procfile based on the framework or language you are using. You can usually find this information on the Overview tab of the application in Heroku's dashboard. It is usually under the heading "Dyno information".
Commit and push your code.
Be sure that the file bin/qgtunnel is added to your repository.
If you are using transparent mode, be sure that vendor/nss_wrapper/libnss_wrapper.so is also added to your repository.
If you have problems, enable the environment variable QGTUNNEL_DEBUG=true and then restart your application while watching the logs. Send me any information in the logs. Please redact any sensitive information, including your QuotaGuard connection URL.
VERY IMPORTANT
7. After you get everything working, I suggest you download your QGTunnel configuration from our dashboard as a .qgtunnel file and put that in the root of your project. This keeps your project from not relying on our website during startup.
This did work for me, and I was able to make a connection to my database.
Movie about hostile robots disguised as footballs
I saw this on TV in mid 90s I think.
One or more people travel to another celestial body (might or might not be the/a moon or mars), they find an American football. The football transforms into a robot that is hostile. They later find LOTS of these footballs/disguised robots.
Since there are lots of robotic projects with "football robots" or "hand-egg robots", it is very hard to filter search results using Google etc.
I think you mean "hand-egg"
@Richard yes, thank you
I suggest you undo that change - @Richard was making a joke.
@Wikis i know this, as i am german and for me, a "football" is the black&white/spherical thing. i think "hand-egg" is clear to everyone, even though it's actually a joke - right? i will reverse the question title though
Sure..........!
Fixed the question to mention hand-egg, but to indicate it's an "American football". There's a fellow asking about a robot building itself up from a football-shaped head at http://www.onlygoodmovies.com/blog/movie-megalists/top-85-robot-movies/, "There is this robot, which build it self from his ‘football head’ up to an deadly machine. In one scene you see this robot taking swatches with a little buzz saw and putting them into a bowl. It would be great if someone could help." No answer to him, but in case that's an additional detail...
@SeanDuggan thank you for the improvement. I do not remember this scene, but I guess it is the same movie - the head might still have had the football surface in robot form, but I am not sure
It could be entirely unrelated, but whenever I do a Google search and come upon tenuous ties, I like to mention them in case it spurs a "Oh hey... I do remember that" moment. :D
Maybe Moontrap (1989)?
On July 20, 1969, during the last phase of the Apollo 11 mission to
the Moon, a robotic eye emerges from the lunar soil and takes notice
of the landing module as it takes off. The eye buries itself again.
Decades later the Space Shuttle Camelot encounters a derelict
spaceship in orbit around Earth. Mission commander Colonel Jason Grant
(Walter Koenig) leaves the shuttle to investigate. He discovers a
reddish-brown pod and a mummified human corpse. Both things are
brought back to Earth, where it is found that they originated on the
Moon some fourteen thousand years ago. Shortly thereafter, while being
unattended, the pod comes to life. It builds itself a cybernetic body
with parts from the lab and pieces of the ancient corpse. The cyborg
kills a lab technician and exchanges fire with security guards before
Grant destroys it with a shotgun blast to the head.
Using the last completed Apollo rocket, Grant and fellow astronaut Ray
Tanner (Bruce Campbell) go to the Moon on a search-and-destroy
mission. They discover the ruins of an ancient human civilization.
Inside, they find a woman in suspended animation who identifies
herself in a rudimentary fashion as Mera (Leigh Lombardi). Mera later
reveals the name of the killer cyborgs — the Kaalium. They survive the
attack of a spider Kaalium and return to the landing module, with Mera
wearing her own spacesuit but it turns out that the Kaalium have
stolen the module. The Kaalium also shoot down the command module,
leaving the astronauts stranded on the Moon. In subsequent attacks by
the Kaalium, Tanner is killed, Grant and Mera are taken prisoner, and
the Kaalium head to Earth.
Grant frees himself and rescues Mera from certain death at the hands
of a cyborg. In the meantime, the Space Shuttle Intrepid is launched
to intercept the approaching alien ship. Grant and Mera look for the
control room and find the landing module, which has been adapted into
the alien machinery. Grant supposes the module was the last piece of
equipment that the Kaalium needed to complete their ship. He starts
the module's self-destruct sequence and as they are attacked by a
Kaalium crew member, discovers that he can use his gun as a rocket to
get away. He and Mera exit through a breach in the hull. The ship
explodes after they have reached safe distance.
Some time later, Grant and Mera are shown as a couple living on Earth.
Mera, having learned to speak English, explains that she was put in
stasis to warn others about the Kaalium. Grant tells her that she does
not have to worry anymore, that it is over and hugs her. One of the
pods survived the explosion and is now in a junkyard preparing to
build itself a new body....
Football-like pod pics:
Cyborg alien:
Trailer:
Definitely matches that other movie that one guy was looking for I mentioned in the comments above.
yep, that's it! thank you very much! I could have sworn they were footballs, looks a bit different now..
Moontrap was actually a pretty good flick! Nice effects, pretty female lead, and of course... CHEKOV vs. ASH!!!
Is there any documentation for working with Core Animation and Core Graphics in the iPhone SDK?
I am new to Core Graphics on iPhone. Can anyone suggest how to work with it and the process to go through? Also, can anyone provide me a document with sample code?
Thanks in advance.
Monish.
See Core Graphics section in iOS Reference Library - you can find Quartz 2D Programming guide, class references, and some samples there.
P.S. Same applies for CoreAnimation as well - there's whole relevant section in Apple's docs
A good place to start would be:
Introduction to Core Animation Programming Guide
There are a few examples there too.
What do you plan to use Core Animation for? If you are going to take a stab at making games, Core Animation is a valid choice as long as performance is not critical. There is a good article about it here.
Hope it helps
I am learning CoreData myself right now, and I found the most useful document to be the Locations App tutorial.
In this tutorial, you create an app called Locations which utilizes CoreData utilities and also teaches you how to do TableViewControllers.
Good luck!
Intuition of the Bhattacharyya Coefficient and the Bhattacharyya distance?
The Bhattacharyya distance is defined as $D_B(p,q) = -\ln \left( BC(p,q) \right)$, where $BC(p,q) = \sum_{x\in X} \sqrt{p(x) q(x)}$ for discrete variables and similarly for continuous random variables. I'm trying to gain some intuition as to what this metric tells you about the 2 probability distributions and when it might be a better choice than KL-divergence, or Wasserstein distance. (Note: I am aware that KL-divergence is not a distance).
The Bhattacharyya coefficient is
$$
BC(h,g)= \int \sqrt{h(x) g(x)}\; dx
$$
in the continuous case. There is a good wikipedia article https://en.wikipedia.org/wiki/Bhattacharyya_distance. How to understand this (and the related distance)? Let us start with the multivariate normal case, which is instructive and can be found at the link above. When the two multivariate normal distributions have the same covariance matrix, the Bhattacharyya distance coincides with the Mahalanobis distance, while in the case of two different covariance matrices it does have a second term, and so generalizes the Mahalanobis distance. This perhaps underlies claims that in some cases the Bhattacharyya distance works better than the Mahalanobis. The Bhattacharyya distance is also closely related to the Hellinger distance https://en.wikipedia.org/wiki/Hellinger_distance.
Working with the formula above, we can find some stochastic interpretation. Write
$$ \DeclareMathOperator{\E}{\mathbb{E}}
BC(h,g) = \int \sqrt{h(x) g(x)}\; dx = \\
\int h(x) \cdot \sqrt{\frac{g(x)}{h(x)}}\; dx = \E_h \sqrt{\frac{g(X)}{h(X)}}
$$
so it is the expected value of the square root of the likelihood ratio statistic, calculated under the distribution $h$ (the null distribution of $X$). That makes for comparisons with Intuition on the Kullback-Leibler (KL) Divergence, which interprets Kullback-Leibler divergence as expectation of the loglikelihood ratio statistic (but calculated under the alternative $g$). Such a viewpoint might be interesting in some applications.
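For concreteness, these quantities can be evaluated numerically in the discrete case. A small sketch (the two distributions below are made up for illustration; any probability vectors over a common support would do):

```python
import math

# Two toy discrete distributions over the same support (illustrative values only).
p = [0.36, 0.48, 0.16]
q = [0.30, 0.40, 0.30]

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_x sqrt(p(x) q(x)); equals 1 exactly when p == q."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def bhattacharyya_distance(p, q):
    """D_B(p, q) = -ln BC(p, q); zero when p == q, and symmetric in p and q."""
    return -math.log(bhattacharyya_coefficient(p, q))

def kl_divergence(p, q):
    """KL(p || q) = sum_x p(x) ln(p(x)/q(x)); asymmetric, shown for comparison."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(f"BC      = {bhattacharyya_coefficient(p, q):.4f}")
print(f"D_B     = {bhattacharyya_distance(p, q):.4f}")
print(f"KL(p,q) = {kl_divergence(p, q):.4f}")
print(f"KL(q,p) = {kl_divergence(q, p):.4f}")  # differs from KL(p,q): KL is not symmetric
```

Note that swapping p and q leaves BC and D_B unchanged but changes the KL divergence, which is the symmetry contrast discussed above.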
Still another viewpoint: compare with the general family of f-divergences, defined as (see Rényi entropy)
$$
D_f(h,g) = \int h(x) f\left( \frac{g(x)}{h(x)}\right)\; dx
$$
If we choose $f(t)= 4( \frac{1+t}{2}-\sqrt{t} )$ the resulting f-divergence is the Hellinger divergence, from which we can calculate the Bhattacharyya coefficient. This can also be seen as an example of a Renyi divergence, obtained from a Renyi entropy, see link above.
For two multivariate normal distributions, the Bhattacharyya distance is also given in closed form by
$$
D_B = \frac{1}{8}(\mu_1-\mu_2)^T \Sigma^{-1} (\mu_1-\mu_2) + \frac{1}{2}\ln\left(\frac{\det \Sigma}{\sqrt{\det \Sigma_1 \det \Sigma_2}}\right), \qquad \Sigma = \frac{\Sigma_1+\Sigma_2}{2},
$$
where $\mu_i$ and $\Sigma_i$ refer to the mean and covariance of the $i^{th}$ cluster.
interesting, is this a general result e.g. for any 2 distribution means and covariances or does this refer to a specific distribution?
The main difference between the two is that Bhattacharyya is a metric and KL is not, so you must consider what information you want to extract about your data points.
In the context of control theory and the study of the problem of signal selection, the Bhattacharyya distance is superior to the Kullback-Leibler distance.
+1 The BC(p,q) measures an angle (in radians) on a geodesic (great circle) connecting p and q, which is symmetric. The KL(p,q) measures the information difference (in bits), when going from q to p (assuming discrete distributions). These are clearly two different quantities. Could you expand on your answer?
Could you explain your affirmations in the second paragraph?
Looping through array in Angular 2
Component:
export class PersonalRecordsComponent implements OnInit {
currentUser = [];
userRecords = [];
movements = [
"Back Squat",
"Bench Press",
"Clean",
"Clean & Jerk",
"Custom Movement",
"Deadlift",
"Front Squat",
"Jerk",
"Power Clean",
"Power Snatch",
"Push Press",
"Snatch",
"Strict Press"
];
constructor(private afService: AF) {
// Get current user details.
afService.getCurrentUserInfo().then(currentUserDetails => {
this.currentUser.push(currentUserDetails);
}).then(() => {
for(let movement of this.movements) {
this.afService.getRecords(movement, this.currentUser[0].userID).subscribe((data) => {
this.userRecords.push(data);
});
}
}).then(()=>{
console.log(this.userRecords)
})
}
HTML:
<ng-container *ngFor="let record of userRecords">
<div class="list-athletes">
<div class="list-athletes-details">
<p>{{ record.movement }} - {{ record.weight }}</p>
</div>
<div class="list-athletes-actions">
<div class="btn-group">
</div>
</div>
</div>
</ng-container>
The above code outputs 13 <div>'s, which is correct but they are empty due to *ngFor="let record of userRecords". If I instead write *ngFor="let record of userRecords[0]" in the *ngFor loop, it outputs the correct data, but only for the first array, obviously.
My question is: How do I output the correct data for each of the 13 arrays without writing 13 *ngFor loops, such as:
*ngFor="let record of userRecords[0]"
*ngFor="let record of userRecords[1]"
*ngFor="let record of userRecords[2]"
etc.
Each one of these arrays can contain multiple objects.
[
[
{
"comments": "Back squat comment alpha",
"movement": "Back Squat",
"userID": "wDHZv3OL55SIymHkhMUejNleNkx1",
"weight": "365"
},
{
"comments": "Back squat comment bravo",
"movement": "Back Squat",
"userID": "wDHZv3OL55SIymHkhMUejNleNkx1",
"weight": "325"
}
],
[],
[],
[],
[],
[],
[
{
"comments": "Front squat comment alpha",
"movement": "Front Squat",
"userID": "wDHZv3OL55SIymHkhMUejNleNkx1",
"weight": "315"
}
],
[],
[],
[],
[],
[],
[]
]
show us how your userRecords array look like
@Sajeetharan added it to the post
post your array! not a screenshot! its hard
@Sajeetharan better?
just do a console.log and post the array
@Sajeetharan pasted it
As an idea, why not run nested *ngFor loops with an *ngIf to ensure the array isn't empty? So your HTML would look like...
<ng-container *ngFor="let lift of userRecords">
<div *ngIf="records(lift)">
<div *ngFor="let record of lift">
<div class="list-athletes">
<div class="list-athletes-details">
<p>{{ record.movement }} - {{ record.weight }}</p>
</div>
<div class="list-athletes-actions">
<div class="btn-group"></div>
</div>
</div>
</div>
</div>
</ng-container>
And you'd add a method to your component called records that would look like this:
public records(lifts: any[]): boolean {
return lifts.length > 0;
}
This function will just check to see if there are records to display.
P.S. Your front squat is pretty legit. I lose stability of the weight on my front squat so there's like always a big gap between my front squat and back squat. Do you crossfit?
I think it can just be written in template *ngIf="lifts.length > 0".
It's just to skip the loop if the array is empty. Is it necessary? Not at all.
@MichaelSolati thanks, it never occured to me that you could nest *ngFor loops. Definitely fixed this issue.
@MichaelSolati and yeah I did CrossFit for two years.
The ngIf is completely unnecessary; the nested ngFor wouldn't make any iterations in case the array is empty
@Jota.Toledo you're right, I did acknowledge that though.
You need two *ngFor loops, since each record is itself an array:
<ng-container *ngFor="let record of userRecords">
<div *ngFor="let reco of record">
also make sure you have CommonModule imported inside the app.module.ts
dude! why did you remove the answer?
You can nest the loops:
<div *ngFor="let record of userRecords">
<div *ngFor="let item of record">...
Sample
Unable to describe Kafka Streams Consumer Group
What I want to achieve is to be sure that my Kafka streams consumer does not have lag.
I have a simple Kafka Streams application that materializes one topic as a store in the form of a GlobalKTable.
When I try to describe consumer on Kafka by command:
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-application-id
I can't see any results. And there is no error either. When I list all consumers by:
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --all-groups
my application consumer is listed correctly.
Any idea where to find additional information what is happening that I can't describe consumer?
(Any other Kafka streams consumers that write to topics can be described correctly.)
If your application only materializes a topic into a GlobalKTable, no consumer group is formed. Internally, the "global consumer" does not use subscribe() but assign(), there is no consumer group.id configured (as you can verify from the logs), and no offsets are committed.
The reason is that all application instances need to consume all topic partitions (ie, broadcast pattern). However, a consumer group is designed such that different instances read different partitions of the same topic. Also, per consumer group, only one offset can be committed per partition -- however, if multiple instances read the same partition and would commit offsets using the same group.id, the commits would overwrite each other.
Hence, using a consumer group while "broadcasting" data does not work.
However, all consumers should expose a "lag" metrics records-lag-max and records-lag (cf https://kafka.apache.org/documentation/#consumer_fetch_monitoring). Hence, you should be able to hook in via JMX to monitor the lag. Kafka Streams includes client metrics via KafkaStreams#metrics(), too.
How can I show newest posts and comments on the top in my blogging website?
I have successfully made the models for both comments and posts and they are showing up properly on the webpage, but I want the newest post to show up first, and the same for comments as well.
views.py:
from django.shortcuts import render, HttpResponse, redirect
from blog.models import Post, BlogComment
from django.contrib import messages
# Create your views here.
def blogHome(request):
allpost = Post.objects.all()
context = {'allposts': allpost}
return render(request, 'blog/blogHome.html', context)
def blogPost(request, slug):
post = Post.objects.filter(slug=slug).first()
comments = BlogComment.objects.filter(post=post)
context = {'post': post, 'comments': comments}
return render(request, 'blog/blogPost.html', context)
def postComment(request):
if request.method == 'POST':
comment = request.POST.get('comment')
user = request.user
postSno = request.POST.get("postSno")
post = Post.objects.get(sno=postSno)
comment = BlogComment(comment=comment, user=user, post=post)
comment.save()
messages.success(request, 'your comment has been added')
return redirect(f"/blog/{post.slug}")
this is the blog home page where I want the newest post to be displayed first
blogHome.html:
{% extends 'base.html' %}
{% block title %} blogs {% endblock title %}
{% block blogactive %}active {% endblock blogactive %}
{% block body %}
<h2 class="text-center my-4">blogs by everythingcs</h2>
<div class="container">
{% for post in allposts %}
<div class="col-md-6">
<div class="row no-gutters border rounded overflow-hidden flex-md-row mb-4 shadow-sm h-md-250 position-relative">
<div class="col p-4 d-flex flex-column position-static">
<strong class="d-inline-block mb-2 text-primary">by-{{post.author}}</strong>
<h3 class="mb-0">{{post.title}}</h3>
<div class="mb-1 text-muted">Nov 12</div>
<p class="card-text mb-auto">{{post.content|truncatechars:200}}</p>
<div class="my-2">
<a href="/blog/{{post.slug}}" role="button" class="btn btn-primary">More..</a>
</div>
</div>
</div>
</div>
{% endfor %}
</div>
{% endblock body %}
finally the models.py for further reference:
from django.db import models
from django.contrib.auth.models import User
from django.utils.timezone import now
# Create your models here.
class Post(models.Model):
sno = models.AutoField(primary_key=True)
title = models.CharField(max_length=50)
content = models.TextField()
author = models.CharField(max_length=50)
slug = models.SlugField(max_length=200)
timeStamp = models.DateTimeField(blank=True)
def __str__(self):
return self.title + " by " + self.author
class BlogComment(models.Model):
sno = models.AutoField(primary_key=True)
comment = models.TextField()
user = models.ForeignKey(User, on_delete=models.CASCADE)
post = models.ForeignKey(Post, on_delete=models.CASCADE)
parent = models.ForeignKey('self', on_delete=models.CASCADE, null=True)
timestamp = models.DateTimeField(default=now)
def __str__(self):
return self.comment[0:13] + "..." + "by " + self.user.username
in short, I want to sort my blog post and blog comments by their time and then display them accordingly.
you can keep a DB field like timeStamp that you can update every time a post is edited and order_by that field when showing all the posts.
Take a look at this article. https://stackoverflow.com/questions/9834038/django-order-by-query-set-ascending-and-descending. I hope it solves your problem
You can specify a default ordering [Django-doc] in the Meta [Django-doc] of the objects:
class Post(models.Model):
sno = models.AutoField(primary_key=True)
title = models.CharField(max_length=50)
content = models.TextField()
author = models.CharField(max_length=50)
slug = models.SlugField(max_length=200)
timestamp = models.DateTimeField(auto_now_add=True)
class Meta:
ordering = ['-timestamp']
def __str__(self):
return f'{self.title} by {self.author}'
class BlogComment(models.Model):
sno = models.AutoField(primary_key=True)
comment = models.TextField()
user = models.ForeignKey(User, on_delete=models.CASCADE)
post = models.ForeignKey(Post, on_delete=models.CASCADE)
parent = models.ForeignKey('self', on_delete=models.CASCADE, null=True)
timestamp = models.DateTimeField(auto_now_add=True)
class Meta:
ordering = ['-timestamp']
def __str__(self):
return f'{self.comment[0:13]}... by {self.user.username}'
or you can order explicitly:
def blogHome(request):
allpost = Post.objects.order_by('-timestamp')
context = {'allposts': allpost}
return render(request, 'blog/blogHome.html', context)
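The effect of ordering = ['-timestamp'] or .order_by('-timestamp') is simply a newest-first sort. The same idea in plain Python terms (illustration only; the records below are hypothetical and no Django is involved):

```python
from datetime import datetime

# Hypothetical records standing in for Post objects (illustrative data only).
posts = [
    {"title": "first post",  "timestamp": datetime(2021, 1, 5)},
    {"title": "newest post", "timestamp": datetime(2021, 3, 1)},
    {"title": "older post",  "timestamp": datetime(2021, 2, 10)},
]

# Equivalent of order_by('-timestamp'): sort descending by timestamp.
newest_first = sorted(posts, key=lambda post: post["timestamp"], reverse=True)
print([post["title"] for post in newest_first])
# → ['newest post', 'older post', 'first post']
```

In the database, Django translates the ordering into an ORDER BY clause, so the sorting happens server-side rather than in Python.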
Note: normally the names of the fields in a Django model are written in snake_case, not camelCase, so it should be timestamp instead of timeStamp.
Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.
Note: Django's DateTimeField [Django-doc]
has an auto_now_add=… parameter [Django-doc]
to work with timestamps. This will automatically assign the current datetime
when creating the object, and mark it as non-editable (editable=False), such
that it does not appear in ModelForms by default.
Identifying individual values in a text box using Flash
I want to identify specific strings in a text box from user input to add to a score variable, like so -
if (userWords.text == firstWord) {
score = score + 1;
}
The example given adds 1 to the score, but if a user adds a space then a second word the text box views it as a whole and not individual words, resulting in no values added to the score variable.
The problem lies with the whole text box being viewed as one entire string. Instead, I want to split it up so word1 will add 1 to the score, word2 will add 1 to the score, etc.
I am ultra confused with this problem, so thank you to anyone that may help.
You can use the trim() method of the StringHelper class. This removes all characters that match the char parameter before and after the specified string. You can find the class in the example at the bottom of the String class page on Adobe livedocs. The url is http://www.adobe.com/livedocs/flash/9.0/ActionScriptLangRefV3/String.html but it's also as follows:
class StringHelper {
public function StringHelper() {
}
public function replace(str:String, oldSubStr:String, newSubStr:String):String {
return str.split(oldSubStr).join(newSubStr);
}
public function trim(str:String, char:String):String {
return trimBack(trimFront(str, char), char);
}
public function trimFront(str:String, char:String):String {
char = stringToCharacter(char);
if (str.charAt(0) == char) {
str = trimFront(str.substring(1), char);
}
return str;
}
public function trimBack(str:String, char:String):String {
char = stringToCharacter(char);
if (str.charAt(str.length - 1) == char) {
str = trimBack(str.substring(0, str.length - 1), char);
}
return str;
}
public function stringToCharacter(str:String):String {
if (str.length == 1) {
return str;
}
return str.slice(0, 1);
}
}
Then you can implement it as follows:
var strHelper:StringHelper = new StringHelper();
if (strHelper.trim(userWords.text, " ") == firstWord) { score = score + 1; }
To make life easier (especially if you're using the timeline), you can simply extract the required methods from the StringHelper class and add them to your code. This way you can call the functions without needing to instantiate the StringHelper class and call them from its instance. The following is an example of this:
function trim(str:String, char:String):String {
return trimBack(trimFront(str, char), char);
}
function trimFront(str:String, char:String):String {
char = stringToCharacter(char);
if (str.charAt(0) == char) {
str = trimFront(str.substring(1), char);
}
return str;
}
function trimBack(str:String, char:String):String {
char = stringToCharacter(char);
if (str.charAt(str.length - 1) == char) {
str = trimBack(str.substring(0, str.length - 1), char);
}
return str;
}
function stringToCharacter(str:String):String {
if (str.length == 1) {
return str;
}
return str.slice(0, 1);
}
if (trim(userWords.text, " ") == firstWord) { score = score + 1; };
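The underlying logic, split the free-form input on whitespace and score each recognized word, is language-agnostic. A minimal sketch (shown in Python purely for brevity; the thread's code is ActionScript 3, and the target-word set below is hypothetical):

```python
def score_words(user_input, known_words):
    """Split free-form input into words and count how many are recognized."""
    words = user_input.split()  # split() with no argument handles any run of whitespace
    return sum(1 for word in words if word in known_words)

known = {"apple", "banana", "cherry"}            # hypothetical target words
print(score_words("  apple   banana ", known))   # → 2
print(score_words("apple grape", known))         # → 1
```

This sidesteps the trimming problem entirely: splitting on whitespace ignores leading, trailing, and repeated spaces, so each word is compared individually instead of matching the whole text box against one string.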
Import-Module for custom dll fails
I followed the demo here for creating a custom powershell cmdlet. When I attempt to import the module, I get the following error:
C:\PS> Import-Module DemoPS.dll
Import-Module : The specified module 'DemoPS.dll' was not loaded because no valid module file was found in any module directory.
If anymore info is needed, let me know.
The error is shown because it can't find your dll file. You need to specify the full path for your module's DLL file (e.g. Import-Module c:\users\mj\desktop\DemoPS.dll).
As an alternative solution, you can save it in your "module" folder. This is a folder called "Modules" that you have to create in your profile directory. Your profile directory can be found using $profile. It is normally in C:\Users\<username>\Documents\WindowsPowerShell\. So to use this, place your dll in following path:
C:\Users\<username>\Documents\WindowsPowerShell\Modules\DemoPS\DemoPS.dll
Sending a Checkbox Form to Specific URL
<form action="" method="get" name="combination" target="_self">
<input type="checkbox" name="tag[]" value="pop" />pop<br /><br />
<input type="checkbox" name="tag[]" value="rock" />rock<br /><br />
<input type="checkbox" name="tag[]" value="indie" />indie<br /><br />
<input type="checkbox" name="tag[]" value="electronic" />electronic<br />
<input name="" type="submit">
</form>
If a user clicks the pop and rock boxes I want to send the form to www.mysite.com/?view=listen&tag[]=pop&tag[]=rock How can I do it?
User can select one, two or three tags. So element number of tag array is variable.
Maybe I'm not getting the exact problem here, but why not just make a few small changes to the form like this:
<form action="http://www.mysite.com/">
<input type="hidden" name="view" value="listen" />
<input type="checkbox" name="tag[]" value="pop" />pop<br /><br />
<input type="checkbox" name="tag[]" value="rock" />rock<br /><br />
<input type="checkbox" name="tag[]" value="indie" />indie<br /><br />
<input type="checkbox" name="tag[]" value="electronic" />electronic<br />
<button type="submit">submit</button>
</form>
You got the exact problem. You are great :)
You can't by using plain HTML. You can only specify a single URL to the action attribute.
But you could use JavaScript (which will render your form useless without JS enabled). Just check the checked state of the check-boxes and exchange the action URL accordingly.
@LauraSmit This has to be done in the browser. Since PHP is server-side only, it has no access to the client's browser. Only JavaScript/jQuery can do this. I also posted an answer with a possible jQuery code.
@Laura Smit In fact you choose a strange approach. I would always send the form to a single page and if required redirect from the PHP script, not from the HTML page.
+1 for feeela's comment - I updated my answer on how to do it just in php after form submission
Actually you don't want the form to call the action, so you must cancel the submit.
It was missing the return false. Also the "location.href" was wrong; the right one is "parent.window.location.href"
tried and worked now.
<form action="" method="get" name="combination" target="_self">
<input type="checkbox" name="tag[]" value="pop" />pop<br /><br />
<input type="checkbox" name="tag[]" value="rock" />rock<br /><br />
<input type="checkbox" name="tag[]" value="indie" />indie<br /><br />
<input type="checkbox" name="tag[]" value="electronic" />electronic<br />
<input name="" type="submit">
</form>
<script>
$("form").submit(function(){
var url = "http://www.mysite.com/?view=listen&";
var i = 0;
$("input:checked").each(function(){
url += "tag[" + i + "]=" + $(this).val() + "&";
i++;
});
parent.window.location.href = url;
return false;
});
</script>
when I alert(url) before location.href it alerts what I want (e.g. www.mysite.com/?view=listen&tag[]=rock). But after alerting it goes again to www.mysite.com/?tag[]=rock. I think there is still a problem with form action=""
This is what the form action is for - here you will specify where the form goes.
Using jQuery you can edit the form action on specific events. Give your form an id such as my_form and your checkboxes descriptive ids:
<form id="my_form" action="" method="get" name="combination" target="_self">
<input id="pop_check" type="checkbox" name="tag[]" value="pop" />pop<br /><br />
<input id="rock_check" type="checkbox" name="tag[]" value="rock" />rock<br /><br />
<input type="checkbox" name="tag[]" value="indie" />indie<br /><br />
<input type="checkbox" name="tag[]" value="electronic" />electronic<br />
<input name="" type="submit">
</form>
Then, below the form you can write the following jQuery code:
<script type="text/javascript">
$("input[name='tag[]']").on("click", function(){ //when any of the checkboxes is clicked the following code runs
if($("#pop_check").is(":checked") || $("#rock_check").is(":checked")){ //if one of the two target check boxes is checked, the form action will be changed as desired
$("#my_form").attr("action","www.mysite.com/?view=listen");
}else if(!$("#pop_check").is(":checked") && !$("#rock_check").is(":checked")){ //since the user can also toggle off all checkboxes again, we need to check for that too and change the form action back to "" if required
$("#my_form").attr("action","");
}
});
</script>
If you are (about to be) learning jQuery, I can highly recommend it. Here are some very good tutorials you might find helpful, they taught me pretty much everything I know about jQuery: http://thenewboston.org/list.php?cat=32
EDIT
There is also another way to do it. In your php code that runs after the form is submitted, you can simply check whether any of the two checkboxes was selected and depending on that use php's header function to redirect or not:
<?php
//After form submission with form action="" (i.e. at the top of your php file)
if(in_array("pop", $_GET['tag']) || in_array("rock", $_GET['tag'])){
header('Location:www.mysite.com/?view=listen&tag=pop&tag=rock');
exit;
}
?>
In fact, then you should use post as form method instead of get, because there is no need for the form result to be bookmarkable. Read more about php's built in header() and in_array() functions.
Thank you Charles but it won't help. There are hundreds of tags; I can not handle it with if/else
Custom subscribe err angular
I've been learning Angular 6 and I had a question about the subscribe method and error handling.
So a basic use of the subscribe function on an observable would look something like this:
this.myService.myFunction(this.myArg).subscribe(
// Getting the data from the service
data => {
this.doSomethingWith(data);
},
err => console.log('Received an error: ', err),
() => console.log('All done!')
);
So an error in this case might be a 500 error or something, but I'm wondering if there's a way to send my data from my back end (I'm using PHP) such that the subscribe function will recognise it as an error. For example, my app allows users to create a list of items from a specific finite set (the set of all Pokemon in my case) so if he/she tries to add a pokemon that doesn't exist, I want my back end to return an error. Currently I'm returning a JSON object like this: { "error" => "you did something wrong" } and I handle it in the first argument of my subscribe function. I suppose that's fine, but if there's something in place for proper error handling, I would think it would be best to utilise that.
Cheers!
Try something like this:
import {
HttpEvent,
HttpHeaders,
HttpInterceptor,
HttpResponse,
HttpErrorResponse,
HttpHandler,
HttpRequest
} from '@angular/common/http';
this.myService.myFunction(this.myArg).subscribe(
// Getting the data from the service
data => {
this.doSomethingWith(data);
},
(err: any) => {
if (err instanceof HttpErrorResponse) {
switch (err.status) {
case 400:
console.log("400 Error")
break;
case 401:
console.log("401 Error")
break;
case 500:
console.log("500 Error")
break;
}
}
}
);
If the quadratic equation $x^2+px+q=0$ and $x^2+qx+p=0$ have a common root
If quadratic equations $x^2+px+q=0$ and $x^2+qx+p=0$ have a common root, prove that: either $p=q$ or $p+q+1=0$.
My attempt:
Let $\alpha $ be the common root of these equations. Since one root is common, we know:
$$(q-p)(p^2-q^2)=(q-p)^2.$$
How do I get to the proof from here?
Virtually a duplicate of https://math.stackexchange.com/q/2366474/265466.
If you can obtain
$$(q-p)(p^2-q^2)=(q-p)^2$$
Since $p^2-q^2=(q-p)(-p-q)$, we have
$$(q-p)^2(-p-q)=(q-p)^2$$
Hence $(q-p)^2(1+p+q)=0$.
Hence $q=p$ or $1+p+q =0$
Let $x$ be the common root.
Thus, $$px+q=qx+p$$ or
$$(x-1)(p-q)=0,$$
which gives $p=q$ or $x=1$ and from this we obtain $p+q+1=0$.
Where does that $x$ come from?
@blue_eyed_... $x$ is the common root. Hence, $x^2+px+q=x^2+qx+p$.
" from here" mean from where?
@blue_eyed_... It means from $x=1$ we obtain $1^2+p\cdot1+q=0$ or $p+q+1=0$.
Let $\alpha$ be the common root.
\begin{align}
\alpha^2 + p \alpha + q &= 0 \\
\alpha^2 + q \alpha + p &= 0 \\
\hline
(p-q)\alpha + (q-p) &= 0 \\
(p-q)(\alpha-1) &= 0
\end{align}
$\alpha = 1$ or $p=q$.
If $\alpha = 1$, then $\alpha^2 + p \alpha + q = 0$ becomes
$1+p+q = 0$
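The result is easy to sanity-check numerically. A quick sketch (the values p = 2, q = -3 are arbitrary illustrative choices, picked so that p + q + 1 = 0):

```python
# With p + q + 1 = 0, both quadratics x^2 + p x + q and x^2 + q x + p
# should share the root x = 1.
p, q = 2, -3
assert p + q + 1 == 0

def roots_of_monic_quadratic(b, c):
    """Roots of x^2 + b x + c = 0 via the quadratic formula (real roots assumed)."""
    r = (b * b - 4 * c) ** 0.5
    return {(-b + r) / 2, (-b - r) / 2}

common = roots_of_monic_quadratic(p, q) & roots_of_monic_quadratic(q, p)
print(common)  # → {1.0}
```

Here x^2 + 2x - 3 = (x + 3)(x - 1) and x^2 - 3x + 2 = (x - 1)(x - 2), so the shared root is indeed x = 1, matching the p + q + 1 = 0 case of the proof.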
Determinant is the same for $A$ and $B$?
Let $A=(a_{ij})$ be an arbitrary matrix, and let $B=(b_{ij})$ be the matrix with $b_{ij}=(-1)^{i+j}a_{ij}$.
Prove that $\det A = \det B$.
I tried with some examples, and the determinant is the same, but how should I prove it in general? I tried to use the general formula (Leibniz formula), but it didn't really seem to help me out. Any ideas?
Thanks!
I think the best approach is to prove it by induction on the size of the matrix.
That seems to be an interesting method, thanks. :)
Let the size of the matrices be $n$. Then
$$ \det A = \sum_\sigma \operatorname{sign}(\sigma) \prod_{i=1}^n a_{i, \sigma(i)}. $$
Also,
$$ \det B = \sum_\sigma \operatorname{sign}(\sigma) \prod_{i=1}^n b_{i, \sigma(i)}
= \sum_\sigma \operatorname{sign}(\sigma) \prod_{i=1}^n (-1)^{i+\sigma(i)} a_{i, \sigma(i)}.$$
The point is to now notice that $$\prod_{i=1}^n (-1)^{i+\sigma(i)} = (-1)^{1+2+\dots+n+1+2+\dots+n} = (-1)^{2k} = 1$$ where $k=1+\dots+n$. Hence each term in the sum matches up exactly.
Wow, very elegant :)
Hint. Note that $(-1)^{i+j}a_{ij}=(-1)^ia_{ij}(-1)^j$. Therefore $B=DAD$, where $D=\operatorname{diag}(-1,1,-1,1,\ldots)$.
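A quick numeric check of the Leibniz-formula argument (a sketch; the 3×3 matrix is an arbitrary example):

```python
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (fine for small matrices)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        # sign(sigma) = (-1)^(number of inversions)
        inv = sum(sigma[i] > sigma[j] for i in range(n) for j in range(i + 1, n))
        prod = 1
        for i, s in enumerate(sigma):
            prod *= M[i][s]
        total += (-1) ** inv * prod
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
B = [[(-1) ** (i + j) * A[i][j] for j in range(3)] for i in range(3)]
assert det(A) == det(B) == -3
```

The hint $B=DAD$ explains the same fact structurally: $\det(DAD)=\det(D)^2\det(A)=\det(A)$, since $\det D=\pm1$.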
Cakephp 3.0 form element radio button selected issue
I am facing a problem in edit mode: the radio button is not showing as selected.
This is my code
<div class="control-group required"><label class="control-label"><?php echo __('status') ?></label><div class="controls">
<?php
$attributes = array(
'selected' => 1
);
echo $this->Form->radio(
'status', $statusData,$attributes
);
?>
<span id="error" class="error"></span>
</div>
</div>
use val or default instead of selected
Thanks, but none of your options worked.
See http://book.cakephp.org/3.0/en/views/helpers/form.html#creating-radio-buttons; I made a mistake reading the docs, taking it for select instead of radio.
try this please:
$attributes = array(
'value' => '1'
);
$statusData = array(
'1' => 'Yes', '0' => 'No'
);
echo $this->Form->radio(
'status', $statusData,$attributes
);
Only the last option is checked for radio inputs. It is not working for me either.
How to stop my WCF server readline thread?
Recently I implemented a pipe server that can be connected to by multiple clients. There is a ReadLine() call in the server thread to keep reading messages sent from the connected client. When the application closes, the pipe server thread should exit safely. However, things are not happening as I expected: the server's ReadLine() cannot be aborted by Thread.Abort().
Here is my implemention:
private string TryReadLine(CancellationToken ct)
{
int TimeOutCount = 0;
var thread = new Thread(readerThread);
thread.Start();
Get.Set();
Got.Reset();
while (!Got.WaitOne(ServerWaitReadMillisecs))//for just 1s
{
//check if the task has been cancelled
if (ct.IsCancellationRequested)
{
thread.Abort();//stop the pipe reading thread
Log.Info($"Pipe{ID} task will be cancelled");
throw new PipeServerTaskCancelled();
}
TimeOutCount++;
if (TimeOutCount > MaxTimeout)
{
thread.Abort();//stop the pipe reading thread
throw new TimeoutException();
}
Log.Info($"this is the {TimeOutCount} time pipe read timeout for Pipe{ID}");
}
return inputContext;
}
private void readerThread()
{
Get.WaitOne();
inputContext = sr.ReadLine();//read message from the pipe(originally sent from the wcf service)
Got.Set();
}
Sorry for the confusion: this is not a WCF ReadLine, it is a named pipe ReadLine. The sr in the code is initialized with the pipe connected, like sr = new StreamReader(Server);
Stream.ReadTimeout not supported for NamedPipeServerStream
Abort() and Interrupt() don't work on the thread.
NamedPipeServerStream.Disconnect() doesn't work either.
context.savechanges loses value and errors out
I am trying to add and save a user's settings for my application by the following code:
SettingsPreference order = _context.SettingsPreference.SingleOrDefault(x => x.employeeNumber == settingsOrder.employeeNumber);
if (order != null)
{
_context.Entry(order).CurrentValues.SetValues(settingsOrder);
}
else
{
_context.SettingsPreference.Add(settingsOrder);
}
_context.SaveChanges();
My SettingsPreference object contains an int employeeNumber and a string viewOrder. As I step through the method while debugging, it holds the values employeeNumber = 9999 and viewOrder = "randomString" throughout the whole method until it gets to SaveChanges. When it tries to execute the save after adding to the database, it errors out with an exception about inserting a null int into employeeNumber, when I know it's not null going in. SaveChanges works when updating a value in the database, but when I try to add a new one, SaveChanges() loses my employeeNumber.
Here is my object class:
public partial class SettingsPreference
{
[Key]
[Column(Order = 0)]
public int employeeNumber { get; set; }
public string viewOrder { get; set; }
}
Why is it losing the value, and how can I fix this?
Unless told otherwise, EF will consider int primary keys to be identity columns. That means it doesn't insert a value for the key; on the contrary, it reads the generated value from the database after the insert.
You must tell EF explicitly that the key is not database-generated:
using System.ComponentModel.DataAnnotations.Schema;
...
public partial class SettingsPreference
{
[Key]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public int employeeNumber { get; set; }
public string viewOrder { get; set; }
}
Thank you, that worked. Any idea why very similar code works in another area of my code? Did I just get lucky?
Hard to tell without seeing it. Perhaps the column actually was an identity column there.
How to show an ideal isn't radical
The ideal in question is $(wy-x^2,w^2z-x^3)$ in the ring $k[w,x,y,z]$ for algebraically closed $k$.
In fact, the radical of the given ideal is $(x^2-yw, xy-zw)$.
First, you need to find a simple polynomial that vanishes on the zero-set $V(I)$ of the ideal $I$. If $(a,b,c,d)$ is in $V(I)$ (writing the point as $x=a$, $y=b$, $z=c$, $w=d$, so the relations read $a^2=db$ and $a^3=d^2c$), then
$abd=a^3=d^2c$, so either $d=0$ or $ab=cd$. But if $d=0$, then $a^2=0$, so $ab=cd$ again. Hence the polynomial $xy-zw$ vanishes on all of $V(I)$. Therefore $xy-zw$ is in the radical of $I$. (One can also show that $(xy-zw)^3$ is in $I$, but you don't need this.)
Now you only need to show that $xy-zw$ is not in $I$.
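A quick numerical sanity check of the vanishing argument (a sketch; the random sampling is my own illustration):

```python
import random

# Points of V(I) satisfy wy = x^2 and w^2*z = x^3.  With the lettering
# x=a, y=b, z=c, w=d used above, the relations read a^2 = d*b and a^3 = d^2*c.
random.seed(0)
for _ in range(100):
    d = random.uniform(0.5, 2.0)      # w
    b = random.uniform(0.5, 2.0)      # y
    a = (d * b) ** 0.5                # x, chosen so that wy = x^2
    c = a ** 3 / d ** 2               # z, chosen so that w^2*z = x^3
    # xy - zw (= ab - cd) should vanish on every such point
    assert abs(a * b - c * d) < 1e-9
```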
Split a paragraph containing words in different languages
Given input
let sentence = `browser's
emoji
rød
continuïteit a-b c+d
D-er går en
المسجد الحرام
٠١٢٣٤٥٦٧٨٩
তার মধ্যে আশ্চর্য`;
Needed output
I want every word and spacing wrapped in <span>s indicating it's a word or space
Each <span> has type attribute with values:
w for word
t for space or non-word
Examples
<span type="w">D</span><span type="t">-</span>
<span type="w">er</span><span type="t"> </span>
<span type="w">går</span>
<span type="t"> </span><span type="w">en</span>
<span type="w">المسجد</span>
<span type="t"> </span><span type="w">الحرام</span>
<span type="t"> </span>
<span type="w">তার</span><span type="t"> </span>
<span type="w">মধ্যে</span><span type="t"> </span>
<span type="w">আশ্চর্য</span>
Ideas investigated
Search stack exchange
The question "Unicode string with diacritics split by chars" led me to an answer using the Unicode property Grapheme_Base.
Using split(/\w/) and split(/\W/) word boundaries.
That splits only on ASCII, as MDN reports for RegExp \w and \W:
\w and \W only matches ASCII based characters; for example, a to z, A to Z, 0 to 9, and _.
Using split("")
Using sentence.split("") splits the emoji into its unicode bytes.
Unicode codepoint properties Grapheme_Base and Grapheme_Extend
const matchGrapheme =
/\p{Grapheme_Base}\p{Grapheme_Extend}|\p{Grapheme_Base}/gu;
let result = sentence.match(matchGrapheme);
console.log("Grapheme_Base (+Grapheme_Extend)", result);
splits each word but still keeps all the content.
Unicode properties Punctuation and White_Space
const matchPunctuation = /[\p{Punctuation}|\p{White_Space}]+/ug;
let punctuationAndWhiteSpace = sentence.match(matchPunctuation);
console.log("Punctuation/White_Space", punctuationAndWhiteSpace);
seems to fetch the non-words.
Testing my own answer: 'a-b' gets split, but 'c+d' is not (+ is classified as a math symbol, not Unicode Punctuation).
By combining Grapheme_Base/Grapheme_Extend and Punctuation/White_Space results we can loop over the whole Grapheme split content and consume the Punctuations list
let sentence = `browser's
emoji
rød
continuïteit a-b c+d
D-er går en
المسجد الحرام
٠١٢٣٤٥٦٧٨٩
তার মধ্যে আশ্চর্য`;
const matchGrapheme = /\p{Grapheme_Base}\p{Grapheme_Extend}|\p{Grapheme_Base}/gu;
const matchPunctuation = /\p{Punctuation}|\p{White_Space}/ug;
sentence.split(/\n|\r\n/).forEach((v, i) => {
console.log(`Line ${i} ${v}`);
const graphs = v.match(matchGrapheme);
const puncts = v.match(matchPunctuation) || [];
console.log(graphs, puncts);
const words = [];
let word = "";
const items = [];
graphs.forEach((v, i, a) => {
const char = v;
if (puncts.length > 0 && char === puncts[0]) {
words.push(word);
items.push({ type: "w", value: "" + word });
word = "";
items.push({ type: "t", value: "" + v });
puncts.shift();
} else {
word += char;
}
});
if (word) {
words.push(word);
items.push({ type: "w", value: "" + word });
}
console.log("Words", words.join(" || "));
console.log("Items", items[0]);
// Rejoin wrapped in '<span>'
const l = items.map((v) => `<span type="${v.type}">${v.value}</span>`).join(
"",
);
console.log(l);
});
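As a cross-check in another language (an aside, not part of the JavaScript answer): Python's re module treats \w as Unicode-aware by default, so a word/non-word split over mixed scripts can be sketched like this. Note that combining marks (such as Bengali vowel signs) do not count as \w, so scripts that rely on them still need extra handling.

```python
import re

sentence = "D-er går en المسجد الحرام"

# \w+ matches runs of Unicode word characters; \W+ matches everything else.
tokens = re.findall(r"\w+|\W+", sentence)
spans = "".join(
    '<span type="{}">{}</span>'.format("w" if re.match(r"\w", t) else "t", t)
    for t in tokens
)
print(tokens[:5])   # → ['D', '-', 'er', ' ', 'går']
```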
You could also use a combination of replace(), split() and join().
const sentence = `browser's
emoji
rød
continuïteit a-b c+d
D-er går en
المسجد الحرام
٠١٢٣٤٥٦٧٨٩
তার মধ্যে আশ্চর্য`;
const splitP = (sentence) => {
const oneLine = sentence.replace(/[\r\n]/g, " "); // replace all \r\ns by spaces
const splitted = oneLine.split(" ").filter(x => x); // split & filter out falsy values
return `<span>${splitted.join("</span><span>")}</span>`; // join with span tags
}
console.log(splitP(sentence));
If you like a one-line solution:
const sentence = `browser's
emoji
rød
continuïteit a-b c+d
D-er går en
المسجد الحرام
٠١٢٣٤٥٦٧٨٩
তার মধ্যে আশ্চর্য`;
const result = `<span>${sentence.replace(/[\r\n]/g, " ").split(" ").filter(x => x).join("</span><span>")}</span>`;
console.log(result);
Your answer does not produce my wished-for <span> output (examples are in the question); I forgot to say that I need the distinction between words and non-words, as I was so busy writing. I'll edit my question to emphasize this distinction.
Native Target Data
In the Doxygen reference manual for LLVM, you can create a TargetData instance from a Module object or an execution engine.
How do I get the target data for the current/native platform?
Well... usually the information to be added to TargetData can be extracted from the platform ABI document. This is where all natural sizes, alignments, etc. are specified. Sometimes if you have a compiler for your platform you can try to match all entries with your compiler.
I believe in the latter case it is possible to write some binary which will generate the TargetData info for you, but no one has done this before, IIRC.
It sounds like what you are describing is a per-platform solution. I would like something that will port to any platform. I noticed that calling the TargetData constructor with an empty string works. Can anyone confirm that this is correct?
@sean, While empty / no TargetData will certainly work, it will inhibit many important optimizations. Also, in order to generate the binary code the TargetData should be non-empty. The backend will "fix" this problem for you :)
An algorithm for efficiently replacing array elements with groups
There are N elements, each element has its own cost. And there are M groups. Each group includes several indices of elements from the array and has its own cost.
Example input:
6
100 5
200 5
300 5
400 5
500 5
600 3
2
4 6
100 200 300 700
3 5
300 400 500
The first number N is the number of elements. The next N lines contain the index and cost of each item. Then comes the number M (the number of groups), followed by 2*M lines: for each group, one line with the number of elements in the group and the cost of the group itself, and one line with the indices of its elements.
I want to find the minimum cost for which I can purchase all N items.
In the example, it is most advantageous to take both groups and purchase the element with index 600 separately. The answer is 14 (6+5+3).
Here is my solution
from queue import PriorityQueue
N = int(input())
dct = {}
groups = PriorityQueue()
for i in range(N):
a,c = [int(j) for j in input().split()]
dct[a] = c
M = int(input())
for i in range(M):
k,c = [int(j) for j in input().split()]
s = 0
tmp = []
for j in input().split():
j_=int(j)
if j_ in dct:
s+=dct[j_]
tmp.append(j_)
d = c-s
if d<0:
groups.put([d, c, tmp])
s = 0
while not groups.empty():
#print(dct)
#for i in groups.queue:
# print(i)
g = groups.get()
if g[0]>0:
break
#print('G',g)
#print('-------')
for i in g[2]:
if i in dct:
del(dct[i])
s += g[1]
groups_ = PriorityQueue()
for i in range(len(groups.queue)):
g_ = groups.get()
s_ = 0
tmp_ = []
for i in g_[2]:
if i in dct:
s_+=dct[i]
tmp_.append(i)
d = g_[1]-s_
groups_.put([d, g_[1], tmp_])
groups = groups_
for i in dct:
s+=dct[i]
print(s)
But it is not completely correct.
For example, for the following test it gives an answer of 162, but the correct answer is 160: it is most beneficial to take only the first and second groups and buy the element with index 0 separately.
20
0 24
1 32
2 33
3 57
4 57
5 50
6 50
7 41
8 2
9 73
10 81
11 73
12 55
13 3
14 54
15 43
16 98
17 8
18 41
19 97
5
17 61
17 9 11 15 1 13 14 7 20 2 3 16 12 5 8 4 6
13 75
20 15 5 9 10 11 7 8 18 2 4 19 16
10 96
3 9 4 18 11 6 8 5 2 14
9 92
18 1 6 9 19 8 4 16 10
19 77
14 17 18 3 2 4 7 6 8 9 10 20 13 12 15 19 1 16 5
I also tried brute-force search, but such a solution would be too slow
from itertools import combinations
N = int(input())
dct = {}
s = 0
for i in range(N):
a,c = [int(j) for j in input().split()]
dct[a] = c
s += c
m = s
M = int(input())
groups = []
for i in range(M):
k,c = [int(j) for j in input().split()]
s = 0
tmp = []
for j in input().split():
j_=int(j)
if j_ in dct:
s+=dct[j_]
tmp.append(j_)
groups.append( [c, tmp] )
for u in range(1,M+1):
for i in list(combinations(groups, u)):
s = 0
tmp = dct.copy()
for j in i:
s += j[0]
for t in j[1]:
if t in tmp:
del(tmp[t])
for j in tmp:
s += tmp[j]
#print(i,s)
if s < m:
m = s
print(m)
I think that this problem is solved with the help of dynamic programming. Perhaps this is some variation of the typical Knapsack problem. Tell me which algorithm is better to use.
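Since the question suggests dynamic programming: for small N (both examples here have N ≤ 20) an exact DP over subsets of elements works. This is a sketch under that size assumption only, because the general problem contains set cover and is NP-hard, so no polynomial algorithm is known:

```python
def min_cost(costs, groups):
    """costs: {element_id: cost}; groups: list of (group_cost, [element_ids]).
    Returns the minimum total cost to cover every element."""
    ids = sorted(costs)
    bit = {e: 1 << i for i, e in enumerate(ids)}
    full = (1 << len(ids)) - 1

    # Treat each single element as a one-element "group" as well.
    offers = [(c, bit[e]) for e, c in costs.items()]
    for gcost, members in groups:
        mask = 0
        for e in members:
            if e in bit:          # ignore ids not in the element list (e.g. 700)
                mask |= bit[e]
        offers.append((gcost, mask))

    INF = float("inf")
    dp = [INF] * (full + 1)       # dp[mask] = min cost whose union of offers is mask
    dp[0] = 0
    for mask in range(full + 1):  # ascending order: unions only grow
        if dp[mask] == INF:
            continue
        for gcost, gmask in offers:
            new = mask | gmask
            if dp[mask] + gcost < dp[new]:
                dp[new] = dp[mask] + gcost
    return dp[full]

# First example from the question: expected answer 14.
costs = {100: 5, 200: 5, 300: 5, 400: 5, 500: 5, 600: 3}
groups = [(6, [100, 200, 300, 700]), (5, [300, 400, 500])]
print(min_cost(costs, groups))    # → 14
```

At N = 20 this is 2^20 states times the number of offers, which is exact but slow in pure Python; it is meant as a correctness baseline, not a fast solver.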
I'd say dynamic programming has a good shot of working, but probably the numbers N and M have to be bounded for an efficient solution.
The so-called set cover problem(which is NP-Hard) seems like a special case of your problem. Therefore, I am afraid there is no efficient algorithm that solves it.
I am using this particular algorithm; you can see my code. But why does it give the wrong answer for the test I have indicated?
As already stated, this is a hard problem for which no "efficient" algorithm exists.
You can approach this as a graph problem, where the nodes of the graph are all possible combinations of groups (where each element on its own is also a group). Two nodes u and v are connected with a directed edge when there is a group g such that the union of the keys in u and in g, corresponds to the set of keys in v.
Then perform a Dijkstra search in this graph, starting from the node that represents the state where no groups are selected at all (cost 0, no keys). This search will minimise the cost, and you can use the extra optimisation that a group g is never considered twice in the same path. As soon as a state (node) is visited that covers all the keys, you can exit the algorithm -- typical for the Dijkstra algorithm -- as this represents the minimal cost to cover all the keys.
Such an algorithm is still quite costly, as each addition of an edge to a path requires computing a union of keys. And quite some memory is needed to keep all states in the heap.
Here is a potential implementation:
from collections import namedtuple
import heapq
# Some named tuple types, to make the code more readable
Group = namedtuple("Group", "cost numtodo keys")
Node = namedtuple("Node", "cost numtodo keys nextgroupid")
def collectinput():
inputNumbers = lambda: [int(j) for j in input().split()]
groups = []
keys = []
N, = inputNumbers()
for i in range(N):
key, cost = inputNumbers()
keys.append(key)
# Consider these atomic keys also as groups (with one key)
# The middle element of this tuple may seem superficial, but it improves sorting
groups.append(Group(cost, N-1, [key]))
keys = set(keys)
M, = inputNumbers()
for i in range(M):
cost = inputNumbers()[-1]
groupkeys = [key for key in inputNumbers() if key in keys]
groups.append(Group(cost, N-len(groupkeys), groupkeys))
return keys, groups
def solve(keys, groups):
N = len(keys)
groups.sort() # sort by cost, if equal, by number of keys left
# The starting node of the graph search
heap = [Node(0, N, [], 0)]
while len(heap):
node = heapq.heappop(heap)
if node.numtodo == 0:
return node.cost
for i in range(node.nextgroupid, len(groups)):
group = groups[i]
unionkeys = list(set(node.keys + group.keys))
if len(unionkeys) > len(node.keys):
heapq.heappush(heap, Node(node.cost + group.cost, N-len(unionkeys), unionkeys, i+1))
# Main
keys, groups = collectinput()
cost = solve(keys, groups)
print("solution: {}".format(cost))
This outputs 160 for the second problem you posted.
Such a solution is not much better than brute force search.
I think that the problem is solved by a greedy algorithm.
BeautifulSoup or urllib.request in python, returns different on different machines
So I have written this simple piece of script, but it only works on my Linux machine and not on Windows 8.1.
The code is::
BASE_URL = "http://www.betfair.com/exchange/football/event?id="+ str(matchId)
html = urlopen(BASE_URL).read()
soup = BeautifulSoup(html)
homeScore = soup.find_all("span", {"class": "home-score"})[0].text
On my Windows 8 machine it returns this from the urlopen:
html bytes: b'\\n\\n \\n <!DOCTYPE html>\\n\\n <!--[if IE]><![endif]-->\\n\\n <!--[if IE 9]>\\n <html class="ie9" lang="da-DK"><![endif]--><!--[if IE 8]>\\n <html class="ie8" lang="da-DK"><![endif]--><!--[if IE 7]>\\n <html class="ie7" lang="da-DK"><![endif]--><!--[if lt IE 7]>\\n <html class="ie6" lang="da-DK"><![endif]-->\\n <!--[if (gt IE 9)|!(IE)]><!-->\\n <html lang="da-DK">\\n <!--<![endif]-->\\n <head>\\n <meta name="description" content="San Luis de Quillota v Deportes Temuco ma 15 dec 2014 11:00PM - betting odds. Find markedets bedste spil, samt links til andre ressourcer.">\\n <meta charset="utf-8">\\n <meta name="viewport" content="width=device-width,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no"/>\\n <base href="http://www.betfair.com/exchange/"/>\\n <title> San Luis de Quillota v Deportes Temuco betting odds | Chilean Primera B | betfair.com\\n</title>\\n\\n <link rel="shortcut icon" href="//sn4.cdnbf.net/exchange/favicon_13031_....
The dots are the actual end of the output. How can I make the same code work on both systems?
Edit: My Windows 8 machine runs Python 3.4 and the Linux machine Python 3.2.
What do you get from the code on Linux? What is the difference in the HTML data between Windows 8.1 and Linux?
My Linux machine gets the full HTML of the site, and it is able to find the expected span with the right class.
If you have the opportunity, you should consider using Requests instead of urllib.
import requests
from bs4 import BeautifulSoup
base_url = 'http://www.betfair.com/exchange/football/event'
params = { 'id': str(matchId) }
r = requests.get(base_url, params=params)
html = r.content.decode('utf-8', 'ignore')
soup = BeautifulSoup(html, "lxml")
Provided that requests is built to handle several formats seamlessly, it should work on each of your platforms. If that's not the case, test different parameters for r.content.decode(); either way, it will be much easier than using urllib.
Cancelling coroutine job after a certain amount of time
What I am trying to do is load a webpage. If the page takes too long to load, I want to write a log message; if it loads within a predefined amount of time, I want to cancel the job so that the log message does not get written.
Here is what I have
override fun onStart() {
super.onStart()
loadTimer = launch {
delay(TimeUnit.SECONDS.toMillis(5))
ensureActive()
logUtil.w(TAG, "Loading took longer than expected")
}
webView.loadUrl(url)
}
Cancel when page is loaded
override fun onPageFinished(view: WebView?, url: String?) {
super.onPageFinished(view, url)
logUtil.i(TAG, "Finished loading $url")
launch{
loadTimer?.cancelAndJoin()
}
}
The problem I am having is that even after canceling, the log message still gets written. How can I cancel this properly so that it does not get written?
You mean "Loading took...." log gets written after "Finished loading" log? Also why do you use cancelAndJoin instead of just cancel?
Yes it is written after the Finished loading
I'm not sure which CoroutineScope you're launching those coroutines from. Since they are called on implicit this, I suppose you created your own, or you made your Activity implement CoroutineScope directly? A lot of old tutorials showed that pattern, but it is obsolete now that Android Jetpack provides lifecycleScope, which is already set up to use Dispatchers.Main(.immediate) and cancels its jobs automatically in onDestroy().
Anyway, I think maybe the problem is that you're cancelling your Job inside another launched coroutine. If your CoroutineScope is not using Dispatchers.Main.immediate, it will take a few ms until it makes the call to cancel. And since your initial Job was not on the main thread, there's a second race condition there. You don't need to use cancelAndJoin(), so you can eliminate the first race by calling join() synchronously.
Here's how to fix both of those race conditions:
override fun onStart() {
super.onStart()
loadTimer = lifecycleScope.launch { // or with your current scope, use launch(Dispatchers.Main)
delay(TimeUnit.SECONDS.toMillis(5))
logUtil.w(TAG, "Loading took longer than expected")
}
webView.loadUrl(url)
}
override fun onPageFinished(view: WebView?, url: String?) {
super.onPageFinished(view, url)
logUtil.i(TAG, "Finished loading $url")
loadTimer?.cancel()
}
This theory would only give the two race conditions a window of a few ms where the page could load but the five second timer just runs out at almost the same time, so I'm not totally confident it is the issue.
I do create the coroutine by having my fragment implement CoroutineScope, using a SupervisorJob to create the context: override val coroutineContext: CoroutineContext get() = Dispatchers.Main + job
Any reason you don't use lifecycleScope or viewLifecycleOwner.lifecycleScope? Those are already set up to make your code leak-proof.
Honestly, it's just older code. I will give it a try with lifecycleScope and report back.
You don't need to launch anything. Just use withTimeout to handle the cancellation and timeout correctly directly.
I don't see how I would be able to use withTimeout, since I have to coordinate two different methods: one for starting and one for knowing when it's loaded.
Understanding Outputs of a Neural Network
I have a problem with the outputs of a neural network.
I have 3 layers, the activation in the last layer is softsign, and its accuracy is 97%, but I don't understand its output.
How can I interpret it?
array([ 2.7876117e-04, -1.1861416e-04, -1.4989915e-04, 1.0406967e-04,
3.3736855e-04, 2.3964542e-04, -5.1546731e-04, -1.9980034e-05,
-8.2800347e-05, 9.0804613e-01, -3.4179697e-03, 5.5045313e-03,
1.9953583e-05, 2.4235848e-04, -1.0185772e-05, 8.0279395e-04,
-2.2013453e-04, -1.3151007e-03, -7.8655517e-04, 2.5021945e-05,
3.0023622e-04, -1.2777583e-05, 2.2269458e-04], dtype=float32)
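The question gives no class labels, but a common reading (an assumption on my part, since the output layer is not described further) is that each of the 23 values scores one class and the prediction is the index of the largest activation, here index 9:

```python
outputs = [2.7876117e-04, -1.1861416e-04, -1.4989915e-04, 1.0406967e-04,
           3.3736855e-04, 2.3964542e-04, -5.1546731e-04, -1.9980034e-05,
           -8.2800347e-05, 9.0804613e-01, -3.4179697e-03, 5.5045313e-03,
           1.9953583e-05, 2.4235848e-04, -1.0185772e-05, 8.0279395e-04,
           -2.2013453e-04, -1.3151007e-03, -7.8655517e-04, 2.5021945e-05,
           3.0023622e-04, -1.2777583e-05, 2.2269458e-04]

# argmax without numpy: index of the largest activation
predicted = max(range(len(outputs)), key=outputs.__getitem__)
print(predicted, outputs[predicted])   # → 9 0.90804613
```

Note that softsign outputs lie in (-1, 1) and are not probabilities; if a probabilistic interpretation is wanted, a softmax output layer is the usual choice.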
converting tif to jpg using imagemagick
I am trying to convert a TIFF to a JPG file. It converts using ImageMagick, but content is missing in the converted JPG.
Note: I have added an annotation to the TIFF image as stamping, but it isn't converted properly.
Dim sourcefile, loadfile, sCmdArgs1 As String
sourcefile = "S:\wip\ppa\opoint\OPP20110526\test.jpg"
loadfile = "S:\wip\ppa\opoint\OPP20110526\test3.tif"
sCmdArgs1 = """" & loadfile & """" & " """ & sourcefile & """"
sCmdArgs1 = "C:\Program Files\ImageMagick-6.6.1-Q16\convert.exe " & sCmdArgs1
Shell(sCmdArgs1, AppWinStyle.Hide, True) '' calling convert.exe
I have attached link for my input image and output image.
Input:
https://drive.google.com/file/d/0B_nzYHWVJJ7KaEVxT0RJZ1lGWlk/view?usp=sharing
output:
https://drive.google.com/file/d/0B_nzYHWVJJ7Kem5FSkxsSTVIME0/view?usp=sharing
For me, input and output look the same.
If I run compare test3.jpg test3.tif diff.jpg using ImageMagick, they are almost identical apart from some speckles of noise here and there.
Host specific volumes in Kubernetes manifests
I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get obvious hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: app
namespace: '{{ .Values.name }}'
labels:
chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
serviceName: "app"
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: app
spec:
terminationGracePeriodSeconds: 30
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}/{{ .Values.image.version}}"
imagePullPolicy: '{{ .Values.image.pullPolicy }}'
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
ports:
- containerPort: {{ .Values.baseport | add 80 }}
name: app
volumeMounts:
- mountPath: /NAS/$(POD_NAME)
name: store
readOnly: true
volumes:
- name: store
hostPath:
path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding a volume, I'd like to have some kind of dynamic variable as the path. I don't mind using helm or the downward API for this, but ideally it would work when I scale the stateful set outwards.
Is there any way of doing this? All my reading of the docs suggests there isn't... :(
Is there any specific reason that you are not using storage backends, persistent volume claims and storage class for dynamic provisioning?
We are using a custom flexvolume/storage driver that doesn't support persistent volume claims. I'm hoping to write one shortly, but the fact that it relies on flexvolume has made it difficult.
Yeah, that sure will make life easier; looking forward to seeing more storage backend drivers.
Is there a built-in IsLowerCase() in .NET?
Is there a built-in IsLowerCase() in .NET?
The implementation's only trivial until you need to consider other locales...
Is this for a string or just a char?
Do you mean Char.IsLower(ch); ?
public static bool IsLowerCase( this string text ) {
if ( string.IsNullOrEmpty( text ) ) { return true; }
foreach ( char c in text )
if ( char.IsLetter( c ) && !char.IsLower( c ) )
return false;
return true;
}
"someString".IsLowerCase();
Keep in mind that localization makes this a non-trivial question. The first example is fine as long as you don't care about localization:
string s = ...
s.All(c => char.IsLower(c));
If you do care, do it this way:
s.ToLower(CultureInfo.CurrentUICulture) == s;
This gives you the chance to address culture issues.
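To see why culture matters, here is a short illustration (in Python only because it is compact; the same Unicode case mappings are what .NET's culture-aware APIs deal with). Case mapping is neither one-to-one nor round-trippable:

```python
# German sharp s: uppercasing grows the string and does not round-trip.
assert "straße".upper() == "STRASSE"
assert "STRASSE".lower() != "straße"

# Turkish dotted capital I (U+0130): full lowercasing yields two code points,
# 'i' plus a combining dot above.
assert len("İ".lower()) == 2

# So deciding "is s lower case?" via s == s.lower() is only safe once a
# specific culture (or an invariant comparison) has been fixed.
```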
Why not s.All(c => char.IsLower(c))?
dalle: You need to do a String.ToCharArray() before you can do lambda expressions on characters. That is, bool isStringLower = str.ToCharArray().All(c => char.IsLower(c));
@DrJokepu: Actually, you don't need to do ToCharArray() before you can do the lambda - I just tested it, it works fine on a string too...
@DrJokepu: According to http://msdn.microsoft.com/en-us/library/system.string_members.aspx#extensionMethodTableToggle you don't need it.
Ahh, I think IntelliSense just suppresses them on string. I'll modify my answer.
s.All(char.IsLower)
Edit: Didn't see the actual meaning of your question. You could use:
char.IsLower(c);
As far as easily converting between cases:
Sure is:
MSDN says:
string upper = "CONVERTED FROM UPPERCASE";
Console.WriteLine(upper.ToLower());
It's part of the string class.
There's also the TextInfo class:
CultureInfo cultureInfo = Thread.CurrentThread.CurrentCulture;
TextInfo textInfo = cultureInfo.TextInfo;
Console.WriteLine(textInfo.ToTitleCase(title));
Console.WriteLine(textInfo.ToLower(title));
Console.WriteLine(textInfo.ToUpper(title));
Which allows for more variation to change caps and whatnot (like ToTitleCase).
I think what he is asking is if there is a function that identifies a lower case string
As others have mentioned you can easily do this for a single char using char.IsLower(ch)
But to extend the String primitive, it wouldn't be very difficult. You can extend the BCL relatively simply using the Runtime.CompilerServices namespace:
Imports System.Runtime.CompilerServices
Module CustomExtensions
<Extension()> _
Public Function IsLowerCase(ByVal Input As String) As Boolean
Return Input.All(Function(c) Char.IsLower(c))
End Function
End Module
Or in C#, that would be:
using System.Runtime.CompilerServices;
static class CustomExtensions
{
public static bool IsLowerCase(this string Input)
{
return Input.All(c => char.IsLower(c));
}
}
Now you can figure it out using:
Console.WriteLine("ThisIsMyTestString".IsLowerCase())
Which would return false because there are upper case characters contained within the string.
simply "return Input.All(c => char.IsLower(c))" is enough, and faster since it can return as soon as it finds the first upper case letter.
Ah, nice...I never noticed you could do an All on a String object... thanks.
How about:
public bool IsLower(string TestString)
{
if (String.IsNullOrEmpty(TestString))
{
return true;
}
string testlower = TestString.ToLowerInvariant();
if (String.Compare(TestString, testlower, false) == 0)
{
return true;
}
else
{
return false;
}
}
Long-windedness is intentional in this case. And yeah, I think asd234as!!!df is lower case. 2, 3, 4 and ! by definition don't have case at all, so they are both lower and upper case.
balabaster,
please do not use this approach with FindAll/Count. All you need is
return !Input.ToList().Exists(c => Char.IsUpper(c));
It will stop the iteration on the first upper case character. FindAll creates a new List, and you use only the Count property. If we have a long string that's in upper case, you will end up with a copy of the original string.
@Petrov: .All (as I have used) drops out on the first existence of a non-lowercase character. What you've suggested is equally long winded. If you drop the ToList().Exists() and use instead just .All(c => char.IsLower(c)) then you get even better results!
Guys why this LINQ abuse (ToList(), ToArray(), All(), Any(), ...) ? I love LINQ and lambdas too but in this case I think the good old foreach is what we need. See the answer of TcKs as reference - but we can do better if we remove the superfluous
char.IsLetter( c )
because IsLower() is doing the same check.
Because a nice .All(c => Char.IsLower(c)) takes care of the whole lot... forget iterating over a collection - just query it like you would a table in a database. Much more elegant...
How do you query a list of message id's on MS Graph? Current response says invalid nodes
My query:
https://graph.microsoft.com/beta/me/messages?$filter=id+in+('AAMkAGVmMDEzMTM4LTZmYWUtNDdkNC1hMDZiLTU1OGY5OTZhYmY4OABGAAAAAAAiQ8W967B7TKBjgx9rVEURBwAiIsqMbYjsT5e-T7KzowPTAAAAAAEMAAAiIsqMbYjsT5e-T7KzowPTAAUEQPKcAAA=','AAMkAGVmMDEzMTM4LTZmYWUtNDdkNC1hMDZiLTU1OGY5OTZhYmY4OABGAAAAAAAiQ8W967B7TKBjgx9rVEURBwAiIsqMbYjsT5e-T7KzowPTAAAAAAEMAAAiIsqMbYjsT5e-T7KzowPTAAT57zAUAAA=')
Response:
{
"error": {
"code": "ErrorInvalidUrlQueryFilter",
"message": "The query filter contains one or more invalid nodes."
}
}
If this is not the way to do it, how do you do it?
This answer to a similar question seems to indicate that some properties are just not filterable: https://stackoverflow.com/a/44965572/5078765
Perhaps the id field is not filterable in such a way?
Addendum:
I've also tried the search.in() method described here: https://learn.microsoft.com/en-us/azure/search/search-query-odata-filter#code-try-17
Like so:
https://graph.microsoft.com/v1.0/me/messages?$filter=search.in(id,'AAMkAGVmMDEzMTM4LTZmYWUtNDdkNC1hMDZiLTU1OGY5OTZhYmY4OABGAAAAAAAiQ8W967B7TKBjgx9rVEURBwAiIsqMbYjsT5e-T7KzowPTAAAAAAEMAAAiIsqMbYjsT5e-T7KzowPTAAUEQPKcAAA=,AAMkAGVmMDEzMTM4LTZmYWUtNDdkNC1hMDZiLTU1OGY5OTZhYmY4OABGAAAAAAAiQ8W967B7TKBjgx9rVEURBwAiIsqMbYjsT5e-T7KzowPTAAAAAAEMAAAiIsqMbYjsT5e-T7KzowPTAAT57zAUAAA=',',')
But this returns a different error:
{
"error": {
"code": "BadRequest",
"message": "Invalid filter clause",
"innerError": {
"date": "2023-01-12T21:34:46",
"request-id": "993206ca-950d-4f0f-aee1-149219509b04",
"client-request-id": "993206ca-950d-4f0f-aee1-149219509b04"
}
}
}
Make sure that the ids passed in the filter parameter are in the correct format and you are using the correct version of the API.
Check the permissions you have on your access token, maybe you don't have the permission to use this filter.
I'm pretty sure I have permission, there is no permission error. I tried using my own credentials with the Postman application and then I tried using the Graph Explorer using the demo credentials but they both give similar errors. If I make a very similar request but instead query "groups" it seems to work fine and I can get the groups I want.
If you can provide a query string that works on the Graph Explorer that fetches 2 or more specific emails by their ID, you have found a solution to my problem. Graph Explorer here: https://developer.microsoft.com/en-us/graph/graph-explorer
Is there any reason why you don't want to call https://graph.microsoft.com/beta/me/messages/{id} endpoint?
If you try to filter id with eq: https://graph.microsoft.com/v1.0/me/messages?$filter=id eq '{message_id}' it will return the error message "The property 'Id' does not support filtering."
@user2250152 That's sad. I just want to be able to pull a collection of them instead of just one single item.
Do you happen to know the answer to this one as well? https://stackoverflow.com/questions/75245865/copybatchrequestproperties-and-deleterequestcontainer-replacement-in-aspnetcore
You cannot use the Id of the entity, as Glen Scales correctly noted. If you try to query with a filter on just one id, you'll see a self-explanatory message:
GET https://graph.microsoft.com/v1.0/me/messages?$filter=id eq 'AAMkA..8AAA='
{
"error": {
"code": "ErrorInvalidProperty",
"message": "The property 'id' does not support filtering."
}
}
Also, in MS Graph the filter operators allowed depend on the resource and field type. E.g., for GUID types only the eq and not operators are supported in filter.
Check whether extended properties may serve your needs: first you mark a set of messages with a single-value extended property, for example, and later you can filter messages by that property.
Thank you, but before I accept your answer, what is a GUID type and how am I supposed to know that a "message resource type" is also a GUID type? The message resource page does not mention the word GUID anywhere.
The message id is definitely not a GUID.
@user2250152 yes, you're right. But MS documentation is not always perfect, unfortunately, so I thought it was a case of wording.
ADJenks, a GUID is a globally unique identifier. As soon as I got to a PC, I checked, and really you cannot filter by id as mentioned in Glen's answer (regardless of the identifier's type). But I couldn't find this mentioned anywhere in the documentation. I've updated my answer.
Ah, yes, I know what a GUID is, but I didn't understand what makes a particular object a GUID type. I guess it's a GUID type if its primary identifier is a GUID? This is why I was confused: the message ID was not a GUID, so I thought something else must make it a GUID type.
It's silly that the error message is much more clear when you're only filtering on one...
Do you happen to know the answer to this one as well? https://stackoverflow.com/questions/75245865/copybatchrequestproperties-and-deleterequestcontainer-replacement-in-aspnetcore
You can't query by the Id property. If you want to know whether a message exists, you need to bind/get the item in question - e.g. a batch get on the Ids you have, checking the batch response to see which ones exist (or not). You have the identifiers already, so a search isn't needed and is much more expensive. If you want a message identifier you can query on, use either the InternetMessageId or the PidTagSearchKey https://learn.microsoft.com/en-us/office/client-developer/outlook/mapi/pidtagsearchkey-canonical-property - the latter should be unique.
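The batch-get approach described above can be sketched as follows. This is a hedged illustration, not code from the thread: the helper name, the placeholder ids, and the use of Python are all assumptions; the $batch shape (POST https://graph.microsoft.com/v1.0/$batch with up to 20 sub-requests) comes from Graph's JSON batching support.

```python
import json

def build_message_batch(message_ids):
    """Build a Microsoft Graph JSON batch body that GETs each message by id.

    A 404 status in the corresponding batch response means that id no
    longer exists -- which is the existence check described above.
    """
    if len(message_ids) > 20:
        raise ValueError("Graph JSON batching allows at most 20 requests per call")
    return {
        "requests": [
            {"id": str(i), "method": "GET", "url": f"/me/messages/{mid}"}
            for i, mid in enumerate(message_ids, start=1)
        ]
    }

# POST this body (with an Authorization header) to
# https://graph.microsoft.com/v1.0/$batch
body = build_message_batch(["AAMkA...PKcAAA=", "AAMkA...zAUAAA="])
print(json.dumps(body, indent=2))
```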
Why can't I query by the Id property and where does it specify this?
Also, I see that there is an InternetMessageId, but I don't see PidTagSearchKey as a property of a MS Graph message resource: https://learn.microsoft.com/en-us/graph/api/resources/message?view=graph-rest-1.0
Also, I'm sorry but your answer seems to be missing commas or something, so it's a bit difficult for me to read. Also I don't know what "bind" means in relation to Microsoft's Graph API.
This also seems to fail: https://graph.microsoft.com/v1.0/me/messages?$filter=internetMessageId in<EMAIL_ADDRESS>
I'm pretty sure in isn't supported on the Mail endpoint, so you need to use an or filter if you wish to do it. A filter is really the wrong way to go about this from a performance point of view; the mail APIs are optimized to bind to multiple items at once (as that is the way a lot of email clients work). Doing large filters on folders with large item counts is not the best way to go.
Yes, it seems it's not supported, and @kadis showed good proof of that. Sorry, once again, I don't know what "bind" means in this context and I can't seem to find a reference to the word bind in any of the mail or graph documentation. From my perspective, a filter directly on a list of primary keys seems like something that should be very performant. It all depends on how it's implemented. I'm not sure how it all works under the hood, but from an API user perspective with no knowledge of the internal workings, this should be simple, but you seem to have a deeper understanding of MS Exchange.
A bind is just a Get on a particular Graph Id (in EWS it was a method in the Managed API that did the GetItem operation). Exchange isn't an RDBMS; it's more in line with NoSQL but predates most of these. An Id will contain all the routing information the backend needs to get to an item. That item may be in the user's mailbox or any mailbox, so it's a global identifier in this regard, and that's one of the reasons it's not filterable.
Ah okay, that makes a lot of sense now. Thank you for clarifying that.
Do you happen to know the answer to this one as well? https://stackoverflow.com/questions/75245865/copybatchrequestproperties-and-deleterequestcontainer-replacement-in-aspnetcore
| common-pile/stackexchange_filtered |
Some examples of iterative functions to find joint occurrences in a dataset's columns
I have a very complex dataset where I am struggling to find, with R, the joint occurrences across four main columns' row values. To explain this better, I provide the following example with the airquality dataset where, just for simplicity purposes, I will show two main examples:
library(readxl)
library(tidyverse)
library(data.table)
install.packages('hablar')
library(hablar)
#1st example
airquality %>%
select(Wind, Temp) %>%
find_duplicates(Wind, Temp)
#2nd example
airquality %>%
select(Wind, Month) %>%
find_duplicates(Wind, Month)
And so on. Just to avoid writing separate chunks of code to repeat the same operation for further columns, could you please suggest another iterative approach for the same results (for loops, map() functions, apply() functions) and show some examples?
Just to clarify, the output I am looking for is more or less a table such as the one provided below:
# A tibble: 31 x 2
Wind Temp
<dbl> <int>
1 6.9 74
2 11.5 68
3 14.9 81
4 7.4 76
5 8 82
6 11.5 79
7 14.9 77
8 10.3 76
9 6.3 77
10 14.9 77
# ... with 21 more rows
But I would like to iterate the same result for many columns of interest
P.S. I know that instead of the find_duplicates function from the hablar package there are different options. If you have any further suggestions, feel free to implement them in your solution. It will be very appreciated.
Thank you so much for paying attention
What's your desired output? I don't want to install hablar, but maybe you can try combn(airquality, 2, FUN = function(x) cbind(x, Dup = duplicated(x)) ,simplify = FALSE).
Thanks for your example. This is the kind of thing I am looking for. However, since my dataset has many columns, I am trying to find an all-in-one solution on just the selected columns.
The below will return you a list of your expected data.frames
mycolumns= c("Wind", "Month", "Ozone", "Solar.R")
columnpairs <- as.data.frame(combn(mycolumns, 2))
result <- lapply(columnpairs, function(x)
airquality %>%
select(all_of(x)) %>%
find_duplicates()
)
Regular dplyr runs it 37% faster:
result_dplyr <- lapply(columnpairs, function(x) {
airquality %>%
select(all_of(x)) %>%
group_by(across(all_of(x))) %>% filter(n() > 1)
}
)
result_dplyr
$V1
# A tibble: 105 x 2
# Groups: Wind, Month [40]
Wind Month
<dbl> <int>
1 7.4 5
2 8 5
3 11.5 5
4 14.9 5
5 8.6 5
6 8.6 5
7 9.7 5
8 11.5 5
9 12 5
10 11.5 5
# ... with 95 more rows
$V2
# A tibble: 31 x 2
Benchmarking:
> microbenchmark::microbenchmark(
+ hablar = lapply(columnpairs, function(x) airquality %>% select(Wind, Temp) %>% find_duplicates(Wind, Temp)),
+ dplyr= lapply(columnpairs, function(x) {airquality %>% select(all_of(x)) %>% group_by(across(all_of(x))) %>% filter(n() > 1) })
+ )
Unit: milliseconds
expr min lq mean median uq max neval
hablar 123.6340 127.66205 132.8422 130.6805 133.7052 210.2259 100
dplyr 75.8762 81.19705 82.5891 82.2986 83.6467 98.0426 100
| common-pile/stackexchange_filtered |
How to have an image instead of a string inserted into GridView that uses a data source
This is homework: a website to rate doctors, using Visual Studio in C#. I have a GridView that gets data from a repository. The DoctorPicture column should display a picture that is in my Images folder, not a string. When I run the page with the GridView, an error says, "A field or property with the name 'Images' was not found on the selected data source." This is my GridView:
<asp:ObjectDataSource ID="ObjectDataSourceDoctor" runat="server" DataObjectTypeName="MidtermApplication.Models.Doctor" DeleteMethod="Remove" InsertMethod="Add" SelectMethod="GetItems" TypeName="MidtermApplication.Models.TestDoctorRepository" UpdateMethod="Update"></asp:ObjectDataSource>
<asp:GridView ID="GridViewDoctor" runat="server" DataSourceID="ObjectDataSourceDoctor" DataKeyNames="DoctorPicture" AutoGenerateColumns="False">
<Columns>
<asp:CommandField ShowSelectButton="True" />
<asp:BoundField DataField="DoctorName" HeaderText="DoctorName" SortExpression="DoctorName" />
<asp:BoundField DataField="DoctorSpecialty" HeaderText="DoctorSpecialty" SortExpression="DoctorSpecialty" />
<asp:BoundField DataField="rating" HeaderText="rating" SortExpression="rating" />
<asp:BoundField DataField="times" HeaderText="times" SortExpression="times" />
<asp:CheckBoxField DataField="fave" HeaderText="fave" SortExpression="fave" />
<asp:CheckBoxField DataField="rated" HeaderText="rated" SortExpression="rated" />
<asp:ImageField DataImageUrlField="Images" HeaderText="DoctorPicture">
</asp:ImageField>
</Columns>
</asp:GridView>
Here is where the data is coming from (I think):
public class TestDoctorRepository : IDoctorRepository
{
List<Doctor> doctors;
public TestDoctorRepository()
{
doctors = new List<Doctor> {
new Doctor { DoctorPicture = "Images/0cropped.jpg", DoctorName = "Michael Shores", DoctorSpecialty = "Opthamology", times = 0, rating = 0, rated = true, fave = true },
new Doctor { DoctorPicture = "Images/1cropped.jpg", DoctorName = "Ming Wu", DoctorSpecialty = "Cardiology", times = 0, rating = 0, rated = true, fave = true },
new Doctor { DoctorPicture = "Images/1bcropped.jpg", DoctorName = "Susan McInerney", DoctorSpecialty = "Gynecology", times = 0, rating = 0, rated = true, fave = true },
new Doctor { DoctorPicture = "Images/2cropped.jpg", DoctorName = "Michelle Adkins", DoctorSpecialty = "Dermatology", times = 0, rating = 0, rated = true, fave = true },
new Doctor { DoctorPicture = "Images/5cropped.jpg", DoctorName = "Kathy Powers", DoctorSpecialty = "Chiropractor", times = 0, rating = 0, rated = true, fave = true }
};
}
"A field or property with the name 'Images' was not found on the
selected data source."
The error explains the reason: there is no 'Images' property, so which property can be set as DataImageUrlField?
Try changing DataImageUrlField="Images" to DataImageUrlField="DoctorPicture":
<asp:ImageField DataImageUrlField="DoctorPicture" HeaderText="DoctorPicture">
| common-pile/stackexchange_filtered |
Page redirected and displays "URL No Longer Exists" in a webservice callout from Salesforce
There is no issue when the callout to a SAP webservice returns fewer than 3k records, but if it exceeds that I get a "URL No Longer Exists" error. This error is not displayed on the current page where I am initiating the webservice call and displaying the results.
When I initiate the callout by clicking a button on the VF page, it takes a while to get the response. But after 2 to 3 minutes the page is suddenly redirected to a page where I get the above error, and the URL of that page is different from the page I was actually on:
https://cs7.salesforce.com/servlet/servlet.Integration?lid=066M00000005Igz&ic=1
I checked the debug log but it displays "Maximum log size reached".
Any suggestions on why I am getting the "URL No Longer Exists" error?
Update:
After making several adjustments to the log filters, I am able to see the summary but not the exception:
Number of SOQL queries: 0 out of 100
Number of query rows: 0 out of 1000000
Number of SOSL queries: 0 out of 20
Number of DML statements: 0 out of 0
Number of DML rows: 0 out of 0
Maximum CPU time: 11938 out of 10000 ******* CLOSE TO LIMIT
Maximum heap size: 0 out of 6000000
Number of callouts: 1 out of 10
Number of Email Invocations: 0 out of 0
Number of future calls: 0 out of 10
Number of Mobile Apex push calls: 0 out of 10
Did you apply log filters to narrow down to only the useful info ? That might provide more insight.
Thanks @SamuelDeRycke. I am changing the log filters now. I'll get back to you with more details.
Keep in mind that the total of callout time for a single execution context is limited to 2 minutes, so be sure your SAP is able to respond within that timeframe, else you'll need an async integration. Though that should give a clear governor limit error message.
I checked, and the response time is 30 to 50 seconds. Actually the error should be in handling the response. Would exceeding the view state size result in such a "URL No Longer Exists" error?
@SamuelDeRycke, I've updated the post with the debug log summary, but I did not find any exception. What could be wrong? Thanks.
| common-pile/stackexchange_filtered |
How to get results for cities via the Google Places API?
I am developing an application in which I am trying to get only city names via the Google Places API, but the API is returning all results related to the keyword. Below is the URL I am using:
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=jai&types=geocode&sensor=false&types=regions&key=AIzaSyAJP9EH1TnS5pYo-xEpQPEvBpbLXed8ets
I am not entirely sure if this is what you want to do, but remove the two types= parameters from your GET request and put types=(cities) instead.
See this:
https://maps.googleapis.com/maps/api/place/autocomplete/json?input=jai&types=(cities)&sensor=false&key=AIzaSyAJP9EH1TnS5pYo-xEpQPEvBpbLXed8ets
| common-pile/stackexchange_filtered |
undefined function ldap_connect()
I have a problem with my extension ldap:
Fatal error: Call to undefined function: ldap_connect()
Apache and PHP are installed on a Windows 2012 R2 virtual machine.
The version of PHP is 7.0
I have already edited the php.ini file and enabled php_ldap.dll.
I have already added the files libsasl.dll and ssleay32.dll in System32, SysWOW64 and Apache24/bin.
I have already added path of my PHP to Windows System Path.
And obviously I have restarted my Apache and my virtual machine.
This is an extract of my phpinfo()
This is the only place where ldap appears in phpinfo().
Please could you resolve my problem?
PS: Excuse my English, I'm French.
EDIT 02/06/2017: I have solved the problem. Apache couldn't load
ldap.dll, with the error "is not a valid Win32 application". I have
changed the dll (downloaded a new PHP and took its ldap.dll). It works!
libsasl.dll and ssleay32.dll only need to be in your Apache24/bin. If you put them everywhere, you will get problems later when you want to change the version of any of these.
Most likely the extension fails to load, possibly due to binary incompatibility reasons. Please take a look at your http servers error log file. Usually you will find a hint on that.
Are you sure you edited the php.ini file that lives in the apache\bin folder? The one in the php folder is used only by PHP CLI
I have solved the problem.
Apache couldn't load ldap.dll, with the error "is not a valid Win32 application".
I have changed the dll (downloaded a new PHP and took its ldap.dll).
It works!
| common-pile/stackexchange_filtered |
Custom Data ACLs
I'm a little confused on how custom data ACLs work - specifically, when I give a role access to edit a set of custom data fields that are associated with a contact, like so:
Does this allow that role to edit the custom data belonging to any contact or just their own contact?
If it would allow that role to edit custom data belonging to any contact, how can I lock it down so they can only edit their own data?
My use case is this: I want users to be able to log into the site, update their own information, and only their own. Nobody else's. Their information is in a profile and custom fields.
Can you say a bit more about your use case? Should contacts other than their own contact be editable at all? If it's just the custom data you want to lock down you'll need some custom code. If it's acceptable to lock down the contacts generally you're likely in luck.
ACL's are complex - it's worth setting up a test and verifying that the behaviour you observe is what you what.
Agree with Jon: it helps if you explain what outcome you are after, as you may be barking up the wrong tree, e.g. compared to using a checksum to direct folks to a profile edit screen.
Updated with my use case. Sorry for the lack of clarity!
CiviCRM has two permission models - there are ACLs and CMS permissions. You don't need ACLs to accomplish what you're trying to accomplish. Use Thomas' CMS suggestion and you should be done.
This will allow viewing/editing a custom data field set for every contact the user has view/edit permissions for.
Despite the option being named Edit, there is currently no difference from View, as described in the note. Whether you can actually edit it depends on the ability to edit the contact.
There is a CMS-based permission CiviCRM: edit my contact to edit your own contact.
This is the key I needed, thank you! So as long as "edit my contact" is enabled and not "edit all contacts," this ACL will only allow them to edit data associated with their own contact record. Makes perfect sense. Have a checkmark.
so you are using both the CMS permission and the Profile ACL?
| common-pile/stackexchange_filtered |
Are "Christ" and "Son of God" two things or one in John 20:31?
John 20:30-31 says the following:
Therefore many other signs Jesus also performed in the presence of the disciples, which are not written in this book; but these have been written so that you may believe that Jesus is the Christ, the Son of God; and that believing you may have life in His name.
My question is about the Greek syntax in the phrase highlighted above. Here it is:
Ἰησοῦς ἐστιν ὁ Χριστὸς ὁ Υἱὸς τοῦ Θεοῦ
Judging by the English alone I can see two distinct possibilities:
John wrote so that his readers would believe that Jesus is (A) the Christ, and (B) the Son of God (i.e. two distinct identifiers), or
John wrote so that his readers would believe that Jesus is the Christ / Son of God (presenting the two as synonymous)
Which is it?
Possibly related: Matt. 26:63, Luke 4:41, John 11:27
| common-pile/stackexchange_filtered |
How to get the IPv4 Connection Status in a Java (JT400) Application
I want to know how to get the IPv4 Connection Status into a Java application:
Work with IPv4 Connection Status
System: V172172
Type options, press Enter.
3=Enable debug 4=End 5=Display details 6=Disable debug
8=Display jobs
Remote Remote Local
Opt Address Port Port Idle Time State
* * ftp-con > 092:54:32 Listen
* * ssh 092:25:07 Listen
* * telnet 000:01:20 Listen
* * smtp 092:25:36 Listen
* * netbios > 092:25:36 Listen
* * netbios > 000:00:01 *UDP
* * netbios > 000:00:01 *UDP
* * netbios > 092:25:32 Listen
* * ldap 092:25:31 Listen
* * 427 000:00:09 *UDP
* * 427 000:00:01 *UDP
* * cifs 092:25:32 Listen
More...
F3=Exit F5=Refresh F9=Command line F11=Display byte counts F12=Cancel
F20=Work with IPv6 connections F22=Display entire field F24=More keys
I want to know how to get all the IP addresses (into a JListBox or JTable), the display details of every IP address (option 5), and the job details of an IP address (option 8) in my Java application.
==========================================================================
What is the underlying problem you need to solve?
I want to get this IPv4 connection status into my Java application using JT400.
I'm developing IBM monitoring software using Java and JT400.
Do you know how to do it?
You will want to call the List Network Connections (QtocLstNetCnn) API. I expect you will want a server-side program to interface with the API for you, and then call that program using the ProgramCall class.
See How to get PSF Settings in an AS400 Server using Java
Can you give me a Java sample to get the IPv4 connection status?
Sorry, don't have anything appropriate lying around. You might try searching Code400.com for samples showing concepts of server side code using various APIs, and they also have Java sample code.
The server side program cannot be written in Java. Use RPG, COBOL or C.
| common-pile/stackexchange_filtered |
Android Studio showing errors on restart
I was working fine. My PC powered off due to a power outage. When I started Android Studio again, there were a lot of errors in all my Java files. How can I solve them?
public class MainActivity extends Activity {
private boolean detectEnabled;
private TextView textViewDetectState;
private Button buttonToggleDetect;
private Button buttonExit;
FlashLight flashLight;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textViewDetectState = (TextView) findViewById(R.id.textViewDetectState);
buttonToggleDetect = (Button) findViewById(R.id.buttonDetectToggle);
buttonToggleDetect.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
setDetectEnabled(!detectEnabled);
}
});
buttonExit = (Button) findViewById(R.id.buttonExit);
buttonExit.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent intent = new Intent(MainActivity.this , FlashLight.class);
startActivity(intent);
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.activity_main, menu);
return true;
}
private void setDetectEnabled(boolean enable) {
detectEnabled = enable;
Intent intent = new Intent(this, CallDetectService.class);
if (enable) {
// start detect service
startService(intent);
buttonToggleDetect.setText("Turn off");
textViewDetectState.setText("Detecting");
}
else {
// stop detect service
stopService(intent);
buttonToggleDetect.setText("Turn on");
textViewDetectState.setText("Not detecting");
}
}
}
Here is a screenshot of my issue:
How should we know? We could only guess randomly.
This question is probably going to be downvoted for lack of detail. You need to share the specifics of the errors — at least the first few of them.
Sir, I have uploaded a pic, please check it.
Did my answer work? Feel free to ask me any questions if it didn't. If it did, make sure to mark it best answer! :)
Sometimes this occurs when compileSdkVersion,
buildToolsVersion and your support library version are different versions.
Recently I had the same problem, and that was because I had messed up the different versions. Other than that, you can just do Invalidate Caches / Restart.
Because we don't have enough information, we can't tell you. This has happened to me before though, so I can give you a few tips:
Just wait until the Gradle build is done; often there are random errors which resolve themselves after a few minutes. This especially happens with resources that you are getting by id: R from R.id shows as an error until the Gradle build is finished.
Clean your project. Build->Clean Project
Rebuild Project. Build->Rebuild Project
Restart Android Studio
Restart Computer
Honestly, we can't give you more information than that until we get more details. Chances are it's number 1 that is the problem, so just wait.
{Rich}
Sir, I have created a new project, but there are errors there too.
@ShahryarAziz No, don't create a new project. Just open Android Studio and wait. Trust me, don't mess with it until it stops saying "Gradle Build Running". If the Gradle build is finished and the issue persists, then do steps 2-5. Don't just build a new project.
Sir, I have uploaded a pic, please check it.
@ShahryarAziz Okay, I saw your picture and the code has no actual errors. It is just what I thought. Just leave the screen open for some time and it will go away. It is still loading (Gradle still running). Don't worry, it's not your fault; it's either your computer's or Android Studio's fault.
@ShahryarAziz Don't keep messing with it; that will just make it slower. Just let it load.
Build--> Clean Project
Build-> Rebuild Project
Go to File
Then Invalidate Cache/ Restart
After that hit Invalidate and Restart
It's not a pleasant moment when your computer suddenly turns off, corrupting your project files. Sadly, when using Windows, you have only a small hope that your files' integrity is intact after Windows suddenly turns off due to a power failure or a friendly BSOD. I've experienced project errors twice in a month because of it. Windows can't be counted on for reliability; try Linux instead.
To prevent another files error (or should I says, corrupted files?), you should always back up your project. Use a Version control systems (VCS) like git. Then copy/clone your project to USB flash drive. Or you can use a free private repository for git:
Bitbucket
GitLab
Always, always, always backup.
Build->Clean Project. This will solve your problem.
| common-pile/stackexchange_filtered |
Disabled then enabled kext signing; some keys no longer work properly
MacOS Sierra MacBook Pro mid2012
I disabled kext signing (to change some icons in LiteIcon, which I never got around to), and pretty soon some of my keys didn't work like they should. I then re-enabled it and rebooted. As soon as I logged in, some keys no longer did what they should do. The keys that are affected are i, j, k, l, m, o, u, and I don't know if any non-letter keys are affected.
I have noticed that the i key functions as cmd+o, and some of the other keys do stuff too (but I don't know what).
Is there any way that I can fix this? This is my school computer (I'm currently doing this at school), and I really need my computer working.
SIP and root are enabled, and so is kext signing currently.
Also, I don't know if it matters here, but Finder keeps crashing (not responding) when I do minor things like clicking on folders while logged into my user account.
Interestingly enough, this only happens on my user account, not on root.
This is an issue with accessibility settings. The function is called "Enable Mouse keys". It is configured in Settings - Accessibility - Mouse & Trackpad (left pane option), "Enable Mouse Keys" AND ALSO via the "Options..." button.
There is a checkbox setting "Press the Option key five times to toggle Mouse Keys". If this is active then simply pressing the option key 5 times will activate or deactivate this feature thus changing the functional mode of your keyboard.
https://www.youtube.com/watch?v=wxSo1xnxL_k
Thanks, I didn't realize that I had mouse keys enabled
| common-pile/stackexchange_filtered |
Entity Framework - get SQL Server field data type
I am new to SQL Server and trying to figure out if there is a way to get the data type of a column in a table that I have created an entity for.
For example, if I have a SQL Server table 'Employees' with the following column
employeeID int, empName varchar(255), empDept varchar(100)
Is there a way to find the data type for the empName field in C# / Entity Framework?
In the end what I am trying to see is the data type and the column length. So, if I were looping through all of the fields in the table/entity I would like to see: "varchar(255)", "varchar(100)", "int" etc.
When you say find the data type... when and how do you want to get the data type? From the code? Looking it up by metadata?
I would prefer in code. If there were a function that I could use and store in a variable that would be ideal.
Click here!
The above link could help you solve the problem.
How about this? http://stackoverflow.com/questions/18901720/get-column-datatype-from-entity-framework-entity
You could either create a stored procedure and call that from EF passing in the table names, or run a raw sql command something like the following:
var sqlToRun = string.Format(@"SELECT column_name AS ColumnName,
       data_type AS DataType,
       character_maximum_length AS MaxLength
FROM information_schema.columns
WHERE table_name = '{0}'", myTableName);
using (var context = new dbContext())
{
    // ColumnInfo is a simple class with ColumnName, DataType and MaxLength properties
    var tableFieldDetails = context.Database.SqlQuery<ColumnInfo>(sqlToRun).ToList();
    //do something with the tableFieldDetails collection
}
...where myTableName is the name of the table to return field info from.
Do note that a -1 is returned if you are using an nvarchar(MAX) or varchar(MAX) and I'm sure there may be some other anomalies as I have not tested all types.
Also I have not tested the c# above, so it may need a tweak to work, but the principle is good and would benefit from being encapsulated in a nice method ;-)
For more info on running raw sql in EF see https://msdn.microsoft.com/en-us/data/jj592907.aspx
| common-pile/stackexchange_filtered |
python incorrect rounding with floating point numbers
>>> a = 0.3135
>>> print("%.3f" % a)
0.314
>>> a = 0.3125
>>> print("%.3f" % a)
0.312
>>>
I am expecting 0.313 instead of 0.312.
Any thoughts on why this is, and is there an alternative way I can use to get 0.313?
Thanks
Search for IEEE 754 and round to even. The rounding is not wrong. It conforms to the standard.
Python 3 rounds according to the IEEE 754 standard, using a round-to-even approach.
If you want to round in a different way then simply implement it by hand:
import math
def my_round(n, ndigits):
part = n * 10 ** ndigits
delta = part - int(part)
# always round "away from 0"
if delta >= 0.5 or -0.5 < delta <= 0:
part = math.ceil(part)
else:
part = math.floor(part)
return part / (10 ** ndigits) if ndigits >= 0 else part * 10 ** abs(ndigits)
Example usage:
In [12]: my_round(0.3125, 3)
Out[12]: 0.313
Note: in python2 rounding is always away from zero, while in python3 it rounds to even. (see, for example, the difference in the documentation for the round function between 2.7 and 3.3).
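To make the tie-breaking rule concrete, here is a short demonstration (plain Python 3, nothing beyond what the answer above states):

```python
# Python 3 rounds exact halfway cases to the nearest even digit
# ("banker's rounding"):
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2

# 0.3125 is exactly representable in binary (it is 5/16), so rounding it to
# three places is a true tie and goes to the even last digit:
print(round(0.3125, 3))  # 0.312

# 0.3135 is NOT exactly representable; the stored double sits slightly off
# the tie point, so no tie-breaking is involved:
print(round(0.3135, 3))  # 0.314
```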
Fails on: my_round(4354788000, -9) -> 3999999999.9999995
@VincentAlex I'm pretty sure I wrote this code 8 years ago assuming ndigits >= 0. Anyway the issue is that you are hitting float limits. If you check the code you end up on the return line of my_round with 4 / (10 ** (-9)) which should return the correct result but it doesn't. You can easily check for the sign of ndigits and use part * 10 ** abs(ndigits) for the negative cases and this seems to work better.
If you need accuracy don't use float, use Decimal
>>> from decimal import *
>>> d = Decimal('0.3125')
>>> getcontext().rounding = ROUND_UP
>>> round(d, 3)
Decimal('0.313')
or even Fraction
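Note that ROUND_UP in the example above always bumps the last kept digit away from zero, even when the dropped part is not a half. If conventional half-up rounding is what's wanted, ROUND_HALF_UP with quantize does that (a minimal sketch):

```python
from decimal import Decimal, ROUND_HALF_UP

# the half rounds up, as most people expect:
print(Decimal('0.3125').quantize(Decimal('0.001'), rounding=ROUND_HALF_UP))  # 0.313

# unlike ROUND_UP, non-halves are unaffected:
print(Decimal('0.3121').quantize(Decimal('0.001'), rounding=ROUND_HALF_UP))  # 0.312
```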
try
print '%.3f' % round(.3125,3)
I had the same incorrect rounding
round(0.573175, 5) = 0.57317
My solution
def to_round(val, precision=5):
prec = 10 ** precision
return str(round(val * prec) / prec)
to_round(0.573175) = '0.57318'
Since round() doesn't work correctly, I used the LUA approach instead:
from math import floor
rnd = lambda v, p=0: floor(v*(10**p)+.5)/(10**p)
Testing:
>>> round( 128.25, 1 )
128.2
>>> rnd( 128.25, 1 )
128.3
>>>
So as not to conflate this with the other answers, here are tests of their approaches:
>>> rnd = lambda v, p=0: round(v*(10**p))/(10**p) # hairetdin's answer
>>> rnd( 128.25, 1 )
128.2
>>> print( '%.3f' % round(128.25,1) ) # Tushar's answer
128.200
Other details:
Computing 10**p twice has negligible performance cost compared to storing it in a variable and referencing that.
How to use count with limit function in codeigniter
I have a model like this:
public function get_payment_method($date_from, $date_to){
$op_status = array('settlement','capture');
$this->db->select('op_payment_code,COUNT(order_id) as total,pc_caption');
$this->db->from("(dashboard_sales)");
$this->db->where('order_date BETWEEN "'. date('Y-m-d 00:00:00', strtotime($date_from)). '" and "'. date('Y-m-d 23:59:59', strtotime($date_to)).'"');
$this->db->where_in('op_status',$op_status);
$this->db->where('pc_caption is NOT NULL', NULL, FALSE);
$this->db->group_by('op_payment_code');
$query = $this->db->get();
return $query->result();
}
but when I check my database there is duplicate data on order_id, as shown below.
The question is: how do I count each order_id only once, so that rows sharing the same order_id are not counted again?
you can try with this:
$this->db->select('op_payment_code, COUNT(DISTINCT order_id) as total, pc_caption', false);
fl <- as.factor(sml) & gset$description Code Pairing is Not Working - Dataset Has Too Many Rows
I am having a slight problem with trying to fix an error. My dataset has 30 samples with 38.830 features. I copied and pasted the program code that is producing the error that I am trying to fix below:
> colnames(pData(phenoData(gset)))[1:40]
"At positions 39 & 40 the output results are both NA & NA"
> pData(phenoData(gset))[ , c(11,12)]
characteristics_ch1.1 characteristics_ch1.2
GSM1690577 tissue: Tonsils cell type: mononuclear cells
GSM1690578 tissue: Tonsils cell type: mononuclear cells
GSM1690579 tissue: Tonsils cell type: mononuclear cells
GSM1690580 tissue: Tonsils cell type: mononuclear cells
GSM1690581 tissue: Tonsils cell type: mononuclear cells
GSM1690582 tissue: Tonsils cell type: mononuclear cells
GSM1690583 tissue: Tonsils cell type: mononuclear cells
GSM1690584 tissue: Tonsils cell type: mononuclear cells
GSM1690585 tissue: Tonsils cell type: mononuclear cells
GSM1690586 tissue: Tonsils cell type: mononuclear cells
GSM1690587 tissue: Tonsils cell type: mononuclear cells
GSM1690588 tissue: Tonsils cell type: mononuclear cells
GSM1690589 tissue: Tonsils cell type: mononuclear cells
GSM1690590 tissue: Tonsils cell type: mononuclear cells
GSM1690591 tissue: Tonsils cell type: mononuclear cells
GSM1690592 tissue: Tonsils cell type: mononuclear cells
GSM1690593 tissue: Tonsils cell type: mononuclear cells
GSM1690594 tissue: Tonsils cell type: mononuclear cells
GSM1690595 tissue: Tonsils cell type: mononuclear cells
GSM1690596 tissue: Tonsils cell type: mononuclear cells
GSM1690597 tissue: Tonsils cell type: mononuclear cells
GSM1690598 tissue: Tonsils cell type: mononuclear cells
GSM1690599 tissue: Tonsils cell type: mononuclear cells
GSM1690600 tissue: Tonsils cell type: mononuclear cells
GSM1690601 tissue: Tonsils cell type: mononuclear cells
GSM1690602 tissue: Tonsils cell type: mononuclear cells
GSM1690603 tissue: Tonsils cell type: mononuclear cells
GSM1690604 tissue: Tonsils cell type: mononuclear cells
GSM1690605 tissue: Tonsils cell type: mononuclear cells
GSM1690606 tissue: Tonsils cell type: mononuclear cells
> tr <- levels(unique(pData(phenoData(gset))[12])[,1])
> tr1 <- gsub("b-cell subset: ","", tr[1])
> tr2 <- gsub("b-cell subset: ","", tr[2])
> tr3 <- gsub("b-cell subset: ","", tr[3])
> sml <- c("C0","C0","C0","C0","C1","C1","C1","C1","C2","C2","C2","C2");
> ex <- exprs(gset)
> qx <-as.numeric(quantile(ex, c(0., 0.25, 0.5, 0.75, 0.99, 1.0), na.rm=T))
> LogC <- (qx[5] > 100) || (qx[6] - qx[1] > 50 && qx[2] > 0) || (qx[2] > 0 && qx[2] < 1 && qx[4] > 1 && qx[4] < 2)
> if (LogC) { ex[which(ex <= 0)] <- NaN; exprs(gset) <- log2(ex) }
> par(mfrow=c(1,2))
> hist(2^exprs(gset), breaks=25)
> hist(exprs(gset), breaks=25)
> fl <- as.factor(sml)
> gset$description <- fl
Error in `[[<-.data.frame`(`*tmp*`, i, value = c(1L, 1L, 1L, 1L, 2L, 2L, :
replacement has 12 rows, data has 30
I have been told that this error possibly relates to subsetting and could be occurring because I might have "NA" in my data. A suggested solution to eliminate the "NA" in my dataset was to put this code at the beginning:
gset <- na.omit(gset)
I used this code at the beginning, ran the other codes, and still produced this error. My questions are these:
Is there a specific code function to increase the number of replacement rows to fit and correctly process my dataset?
Is there a specific code syntax method that I could use to reduce the number of rows my dataset has so that the gset$description <- fl will work? Could I modify the code fl <- as.factor(sml) or modify the code sml <- c("C0","C0","C0","C0","C1","C1","C1","C1","C2","C2","C2","C2");?
I have tried to modify the sml code, but I received a different error saying that the data is not subsettable.
If anyone could offer any direction or answers to any of my questions if or when they have the available chance I would greatly appreciate it.
I can't address all of this, because I can't figure out what you are trying to do from your code. When you define sml, it's a vector of length 12. We can define this multiple ways, but it's always length 12:
# equivalent definitions for `sml`, if you like them better:
sml <- rep(c("C0", "C1", "C2"), each = 4)
sml <- paste0("C", rep(0:2, each = 4))
The length of this variable is not changed by making it a factor. All that changes is how the variable exists in memory. A factor has two components - a vector of characters (the levels), and a vector of integers which map to the characters, telling you which level of the factor belongs in a given spot. So fl still has length 12.
Now, you're trying to assign fl to gset$description, which is (from https://bioconductor.org/packages/devel/bioc/vignettes/Biobase/inst/doc/ExpressionSetIntroduction.pdf - I'm very rusty on bioconductor, and probably out of date as well), accessing phenoData(gset). According to the error, you have 30 rows in this data set - one for each sample. What you appear to be trying to do is assign each of the 30 samples a value of C0, C1, or C2. If that is not what you are trying to do, you need to clearly explain what you are trying to accomplish so we can help you on this. Otherwise, you need to explain how C0, C1, and C2 should be assigned to the samples so they can be assigned appropriately.
One last thing. These are the only lines of code which affect sml and fl.
sml <- c("C0","C0","C0","C0","C1","C1","C1","C1","C2","C2","C2","C2") # can be rewritten as above
fl <- as.factor(sml)
gset$description <- fl
Everything else has no effect on sml, fl, or gset. While gset <- na.omit(gset) might remove some elements from your expression set, I'm guessing it will remove features, not samples - which will have no effect on your overall error.
That is what I am trying to do. I was following a R Program Microarray Gene Expression Tutorial and code processed similar to what you mentioned were not accurately described in the tutorial and why to use them. I will try this approach to see if it works (it probably will) and I will mention what you said when I create a more accurate tutorial for the data I am analyzing. Thank you so much again and much success!
Convert JSON to YAML using PowerShell
I have a json data like this.
sample.json
[
{
"id": 0,
"name": "Cofine",
"title": "laboris minim qui nisi esse amet non",
"description": "Consequat laborum quis exercitation culpa. Culpa esse sint consectetur deserunt non.",
"website": "cofine.com",
"image": "http://placehold.it/32x32",
"labels": ["blue", "red"],
"labels_link": ["http://cofine.com/labels/blue","http://cofine.com/labels/red"]
},
{
"id": 1,
"name": "Zomboid",
"title": "adipisicing mollit esse aliquip ullamco nisi laboris",
"description": "Enim consectetur eu commodo officia. Id pariatur proident nostrud occaecat adipisicing voluptate do nisi incididunt id ex commodo.",
"website": "zomboid.com",
"image": "http://placehold.it/32x32",
"labels": ["red"],
"labels_link": ["http://zomboid.com/labels/red"]
},
{
"id": 2,
"name": "Sulfax",
"title": "non minim anim irure nulla ad elit",
"description": "Pariatur anim officia adipisicing Lorem dolor cillum eu ex veniam sint consequat incididunt. Minim mollit reprehenderit mollit sint laboris consequat.",
"website": "sulfax.com",
"image": "http://placehold.it/32x32",
"labels": ["green", "yellow", "blue"],
"labels_link": ["http://sulfax.com/labels/green","http://sulfax.com/labels/yellow","http://sulfax.com/labels/blue"]
}
]
How do I convert this json data to yaml using PowerShell where each json object will be converted to yaml and saved as yaml in its own file with the file name being the value of the title keys properties?
When I run the following command ($json | ConvertFrom-Json) | ConvertTo-YAML (where the ConvertTo-YAML function is taken from the simpletalk website), this is the output I get.
Output
---
id: 0
name: 'Cofine'
title: 'laboris minim qui nisi esse amet non'
description: Consequat laborum quis exercitation culpa. Culpa esse sint consectetur deserunt non.
website: 'cofine.com'
image: 'http://placehold.it/32x32'
labels:
- 'blue'
- 'red'
labels_link:
- 'http://cofine.com/labels/blue'
- 'http://cofine.com/labels/red'
---
id: 1
name: 'Zomboid'
title: 'adipisicing mollit esse aliquip ullamco nisi laboris'
description: Enim consectetur eu commodo officia. Id pariatur proident nostrud occaecat adipisicing voluptate do nisi incididunt id ex commodo.
website: 'zomboid.com'
image: 'http://placehold.it/32x32'
labels:
- 'red'
labels_link:
- 'http://zomboid.com/labels/red'
---
id: 2
name: 'Sulfax'
title: 'non minim anim irure nulla ad elit'
description: Pariatur anim officia adipisicing Lorem dolor cillum eu ex veniam sint consequat incididunt. Minim mollit reprehenderit mollit sint laboris consequat.
website: 'sulfax.com'
image: 'http://placehold.it/32x32'
labels:
- 'green'
- 'yellow'
- 'blue'
labels_link:
- 'http://sulfax.com/labels/green'
- 'http://sulfax.com/labels/yellow'
- 'http://sulfax.com/labels/blue'
However, the output I am looking for would look like this - where the filename is the value of the title keys properties and the content of the file would be the corresponding json object converted to yaml.
laboris minim qui nisi esse amet non.yaml
---
id: 0
name: 'Cofine'
title: 'laboris minim qui nisi esse amet non'
description: Consequat laborum quis exercitation culpa. Culpa esse sint consectetur deserunt non.
website: 'cofine.com'
image: 'http://placehold.it/32x32'
labels:
- 'blue'
- 'red'
labels_link:
- 'http://cofine.com/labels/blue'
- 'http://cofine.com/labels/red'
---
adipisicing mollit esse aliquip ullamco nisi laboris.yaml
---
id: 1
name: 'Zomboid'
title: 'adipisicing mollit esse aliquip ullamco nisi laboris'
description: Enim consectetur eu commodo officia. Id pariatur proident nostrud occaecat adipisicing voluptate do nisi incididunt id ex commodo.
website: 'zomboid.com'
image: 'http://placehold.it/32x32'
labels:
- 'red'
labels_link:
- 'http://zomboid.com/labels/red'
---
non minim anim irure nulla ad elit.yaml
---
id: 2
name: 'Sulfax'
title: 'non minim anim irure nulla ad elit'
description: Pariatur anim officia adipisicing Lorem dolor cillum eu ex veniam sint consequat incididunt. Minim mollit reprehenderit mollit sint laboris consequat.
website: 'sulfax.com'
image: 'http://placehold.it/32x32'
labels:
- 'green'
- 'yellow'
- 'blue'
labels_link:
- 'http://sulfax.com/labels/green'
- 'http://sulfax.com/labels/yellow'
- 'http://sulfax.com/labels/blue'
---
Have you read this? https://www.simple-talk.com/sysadmin/powershell/getting-data-into-and-out-of-powershell-objects/
@SadBunny yeah, I have
It seems like that has a full implementation for your exact purpose. No?
@SadBunny, No that article does not do exactly what I want. See my updated question.
Well, that is certainly a very specific function... So, you want to split the incoming data out into one or more files where the filename depends on the value of a specific key? That would mean that you have to adapt your current code to do just that. If you don't know how, it's probably a good idea to post your code to stack overflow and ask for help changing the code to your functionality. It's probably not that hard, you'd have to split the YAML objects, loop through them, get the value and then save that thing. A possible shortcut is a massive regex search-and-replace on the total result.
By the way, you shouldn't begin AND end your yaml with "---", yaml only begins with "---". The next "---" is the start of the next block.
I'm a linux guy, so here's an example, if you'd pipe that yaml through the following gawk script it does what you want :) Don't know if it's of any use, but I tested it and it works:
moo@monsterkill:/tmp$ cat test.txt | gawk -f test.gawk
Wrote file: laboris minim qui nisi esse amet non
Wrote file: adipisicing mollit esse aliquip ullamco nisi laboris
Wrote file: non minim anim irure nulla ad elit
BEGIN{filename=""}
{
if ($0=="---") {
writeTheFile();
content="";
}
content = content "" $0 "\n";
if ($0 ~ "title: ") {
filename=substr($0, index($0, "'") + 1)
filename=substr(filename, 1, index(filename, "'") - 1)
}
}
END{writeTheFile();}
function writeTheFile() {
if (filename=="") {return;} else {
print content > filename; close(filename);
print("Wrote file: " filename);
}
}
Sorry, imagine your own newlines as SU has killed them :/
Could you please include your gawk script on github gist. It is very difficult to read at the moment.
Sure, here you go: https://gist.github.com/SadBunny/a2d0f5421eb660096896f22cbbc46e4a
@SadBunny, When I run your script on git bash on windows, it just creates a non minim anim irure nulla ad elit.yaml file with the whole data in yaml format as its content which is not what I am after. However running the same script on linux does exactly what you said.
Probably a newline issue.
I'm answering my own question in case someone else is looking for an answer.
$obj = ($json | ConvertFrom-Json)
ForEach($item in $obj) {
$filename = "$($item.title).yaml"
$item | ConvertTo-YAML > $filename
"---" >> $filename
}
That's pretty sweet :) Nice job.
RectBivariateSpline producing only nan values with smoothing>0
Using RectBivariateSpline to interpolate over a 2D image (raw data illustrated below), if smoothing is 0, I get an interpolation, but if I set smoothing to a non-zero value, even 0.001, the result contains only nan values. My "image" is a 1000x800 grid of numbers from 0~14.
from scipy import interpolate
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
x=np.arange(img.shape[1])
y=np.arange(img.shape[0])
X, Y = np.meshgrid(x,y)
fig=plt.figure(1)
for smooth in [0,.001,.01,.1]:
plt.clf()
plt.cla()
ax=fig.add_subplot(111,projection='3d')
bvspl=interpolate.RectBivariateSpline(x,y,np.transpose(img),s=smooth)
print(np.min(bvspl(x,y)),np.max(bvspl(x,y)))
ax.plot_surface(X,Y,np.transpose(bvspl(x,y)))
fig.savefig(path+str(smooth)+'_3d.png')
The result of the print statement is:
-2.15105711021e-15 14.3944312333
nan nan
nan nan
nan nan
Similar to Bivariate structured interpolation of large array with NaN values or mask.
In my case, I simply set all the missing values to -9999 before interpolating. After interpolation, I set all the negative values to missing values again - yes I will lose the information on the boundary, but that's what I can do so far. I am looking for another fast and accurate solution as well.
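The fill-and-restore workaround described above can be sketched like this (a minimal numpy-only sketch; the actual RectBivariateSpline fit is elided where the comment indicates, and the -9999 sentinel and negative-value mask are the assumptions from the answer):

```python
import numpy as np

# hypothetical tiny image with one missing value
img = np.array([[1.0, 2.0, 3.0],
                [np.nan, 5.0, 6.0]])

FILL = -9999.0
filled = np.where(np.isnan(img), FILL, img)   # replace NaNs before fitting
# ... fit RectBivariateSpline (or any interpolator) on `filled` here ...
result = filled.copy()                        # stand-in for the interpolated output
result[result < 0] = np.nan                   # mask negatives back to "missing"
print(np.isnan(result[1, 0]), result[0, 0])   # True 1.0
```

As the answer notes, information near the boundary of the masked region is lost this way.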
How to display json data in list view?
I want to retrieve data from a PHP page using a JSON call and display it in a listview using jQuery Mobile. I am able to retrieve the data and display it in a list, but the problem I am facing is this:
My data looks like this in the PHP file:
([{"id":"1","name":"Big Ben","latitude":"51.500600000000","longitude":"-0.124610000000"},{"id":"4","name":"Hadrian's Wall","latitude":"55.024453000000","longitude":"2.142310000000"},{"id":"2","name":"Stonehenge","latitude":"51.178850000000","longitude":"-1.826446000000"},{"id":"3","name":"White Cliffs of Dover","latitude":"51.132020000000","longitude":"1.334070000000"}]);
when I display it in the list, it shows in one row of the list only. How can I display the data in separate rows with respect to their "id"?
$(document).ready(function(){
$(document).bind('deviceready', function(){
//Phonegap ready
onDeviceReady();
});
var output = $('#output');
$.ajax({
url: 'http://samcroft.co.uk/demos/updated-load-data-into-phonegap/landmarks.php',
dataType: 'jsonp',
jsonp: 'jsoncallback',
timeout: 5000,
success: function(data, status){
$.each(data, function(i,item){
var landmark = '<h1>'+item.name+'</h1>'
+ '<p>'+item.latitude+'<br>'
+ item.longitude+'</p>';
output.append(landmark);
});
},
error: function(){
output.text('There was an error loading the data.');
}
});
});
var i;
for(i = 0; i < data.length; i+=1) {
// do whatever here, the id is in data[i].id
console.log(data[i].id);
}
Since you mentioned you were using jQuery mobile, perhaps you'd be appending <li> elements to a listview, in that case you'd use the .append method on your <ul data-role=listview>.
EDIT: As per your comment, change your each function to this:
$.each(data, function(i, item) {
$('ul').append('<li><h1>' + item.name + '</h1><p>' + item.latitude + '<br>' + item.longitude + '</p></li>');
});
And remove the <li><a id="output"> </a></li> from the HTML code.
However if you have more than one <ul> element on the page it won't work, so it is recommended to set an id on the correct <ul> element.
yes, actually the data is appending successfully, but I am not getting how to populate the list with respect to "id". When I display it, everything comes inside one row; I want each item in a separate row. I hope you got my problem?
Thanks rink, but it's still not showing in a list; it's simply displaying the data in plain format. How do I display it in a list? It seems like this one is working.
Can I create custom directives in ASP.NET?
I have created a menu control in ASP.NET and have put it into the master page. This menu control has a property ActiveItem which, when modified, makes one or another item from the menu appear active.
To control the active item from a child page, I have created a control which modifies a master page property (enforced by the IMenuContainer interface) to update the menu control's active item in the master page.
public class MenuActiveItem : Control
{
public ActiveItemEnum ActiveItem
{
get
{
var masterPage = Page.Master as IMenuContainer;
if (masterPage == null)
return ActiveItemEnum.None;
return masterPage.ActiveItem;
}
set
{
var masterPage = Page.Master as IMenuContainer;
if (masterPage == null)
return;
masterPage.ActiveItem = value;
}
}
}
Everything works perfectly and I really enjoy this solution, but I was thinking that, if I knew how, I would have created a custom directive with the same feature instead of a custom control, because it just makes more sense that way.
Does anybody know how to do it?
You should be able to turn this into a custom property of your Page, which you can set in the Page directive.
Create a base class for your page, and then change your Page directives like this:
<%@ Page Language="C#" MasterPageFile="~/App.master"
CodeFileBaseClass="BasePage" ActiveItem="myActiveItem" AutoEventWireup="true"
CodeFile="Page1.aspx.cs" Inherits="Page1" %>
You may have to change the property to be a string, and do a conversion to the enum. But otherwise your code can remain the same, and it doesn't have to be in a control.
What topological properties need a space have to be topologically euclidean?
To clarify, if I have some some topological space, what are the sufficient and necessary topological properties it must have to be topologically identical to a euclidean space of given dimension?
I have only a very loose, piecemeal understanding of topology currently, so please be light with the jargon, or give explanations. If possible, define things in terms of open sets or neighborhoods, I've found that definitions of closed sets/closure/interior have a nasty habit of either being circular or so dryly axiomatic that I have trouble intuiting it, so I don't understand them well currently.
By "Euclidean space of a given dimension" do you mean isomorphic to $\mathbb R^n$ with the usual topology?
There are certainly multiple such sets of conditions, so “the” is the wrong word.
Why would you feel the definitions of closed sets/closure/interior circular or dry? Are you trying to study topology without first learning metric space?
There is actually quite a lot of theory on such characterisations. It regularly happens that we have a nice standard topological space $S$ (like $\mathbb{R}^n$, $\mathbb{Q}$, the Cantor set, etc.) and that we can draw up a list of topological properties $\mathscr{L}$ such that a space $X$ is homeomorphic to $S$ iff $X$ satisfies all properties from $\mathscr{L}$. These are called topological characterisations of $S$. We like the list to be as small and intuitive as possible, of course, and not all spaces admit such a nice characterisation (already for the reason that there are at most countably many finite lists of "properties", and way more spaces), but since the start of topology as a field quite a few have been shown.
For the real line $\mathbb{R}$ this is known, the list is:
connected.
locally connected.
$T_3$ (i.e. regular and $T_1$).
separable.
Every point of the space is a strong cut point. (a strong cut point of $X$ is such that $X$ with that point removed has exactly two components.)
I'm not aware of such a list for $\mathbb{R}^n$ for $n > 1$. A trivial or self-circular property ("$X$ is homeomorphic to the plane" would qualify as a topological property, though a boring one in this context) is of course not allowed. It could be that topological dimension could play a role; this is a topologically invariant way to assign a natural number or $\infty$ to a space, such that $\mathbb{R}^n$ gets assigned $n$ for all $n$. There are several such definitions. I think the case for a general Euclidean space is open (it's not mentioned in Engelking as being solved, nor in any of my other books).
Other spaces are simpler: e.g. $\mathbb{Q}$:
countable.
first countable.
$T_3$.
no isolated points.
The Cantor set:
compact.
second countable.
$T_3$.
basis of clopen subsets.
no isolated points.
But both of these are very disconnected. It seems easier to prove characterisations of such spaces. Only in infinite-dimensional spaces do we get positive results again. (e.g. all separable metric complete infinite-dimensional linear spaces are homeomorphic, so we get fewer "types" of spaces).
Thank you. It is odd that for something so basic and archetypal as a euclidean space of dimension n, we don't have a neat topological characterisation. I'm interested in this question because I'm learning physics and want a manifestly coordinate free description of euclidean topological properties (as opposed to building on R^n then abstracting out all the coordinates, which seems standard but extremely backwards for what I'm after).
Line integral of a vector field over a intersection of a cylinder and a plane
I need help calculating the line integral of this vector conservative field
$F(x,y,z)=(2xyz^2,\; x^2z^2+z\cos(yz),\; 2x^2yz+y\cos(yz))$
Compute
$\oint_{C} F\cdot ds$
Where C is the intersection of the cylinder $\frac{x^2+y^2}{25}=1$ and the plane x+z=2.
There's a trick here, take another look at $F.$
is it about it being a conservative field?
Yep, it's conservative. So FTLI
I know how to apply the FTLI when they give me the starting and ending points, but i'm having trouble here
Pick any point on the contour $\vec{r}_0.$ This is also your ending point. What is $f(\vec{r}_0) - f(\vec{r}_0)$?
Do you realize your boundary is a closed curve? What is the line integral of a conservative vector field over a closed curve?
It's 0. I feel so silly now. But thanks a lot for the help!
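To spell out the argument from the comments: one can exhibit a potential for $F$ (a routine check — differentiating the candidate below reproduces each component of the field as stated), and then the fundamental theorem of line integrals over the closed curve $C$ gives the answer immediately:

```latex
f(x,y,z) = x^2 y z^2 + \sin(yz), \qquad \nabla f = F,
\qquad\text{so}\qquad
\oint_C F\cdot ds = f(\vec r_0) - f(\vec r_0) = 0 .
```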
How to return value from function? (translate actionscript to c#)
So... I want to return value when C# function is called. I need a code example (simple summ of a,b values will be ok) Please help
I need something like this ( I know ActionScript so I will write in it):
public function sum(valueA:int, valueB:int):int
{
var summ:int = valueA + valueB;
return summ;
}
How to translate it into C#?
As a side-note, C# (3.0 above) also supports the var keyword when declaring variables, which means that you can write:
public int Sum(int valueA, int valueB) {
var summ = valueA + valueB;
return summ;
}
The meaning of var is probably different than in ActionScript - it looks at the expression used to initialize the variable and uses the type of the expression (so the code is statically-typed). In the example above, the type of summ will be int, just like in the version posted by Oded.
(This is often a confusing thing for people with background in dynamic languages, so I thought it would be useful to mention this, especially since var is also a keyword in ActionScript).
and if I need to concat 2 strings will 'var' work? like ' var summ = valueA + " " + valueB; '
Yes, it will work. The compiler knows that valueA and valueB are both int and so it will figure out that you're adding two integers, so the result will also be integer (and therefore it will infer that summ is integer)
Here:
public int sum(int valueA, int valueB)
{
int summ = valueA + valueB;
return summ;
}
Differences to note:
The return type is declared immediately after the public visiblity qualifier
Variable types are declared before them
Or even so =)
public int Sum ( int valueA, int valueB ) { return valueA + valueB; }
if you don't need to store a result in a function for some purpose.
C# Tutorial - Output PowerShell Command in C# Winform App
As the title says, I want to build a C# app that runs PowerShell scripts and prints the output in a TextBox.
I followed the tutorial from here https://www.youtube.com/watch?v=cIrCce5l7NU and everything works as in the video, but I have one problem: my script never ends (or runs for 10 minutes), and I need to output to the TextBox in real time, line by line, as PowerShell produces it. I've tried different approaches but it seems that I cannot make it work.
Thanks!
This is the powershell script as example:
test.ps1 - prints numbers forever
for (;;)
{
$i++; Write-Host $i
}
You have an infinite loop so it never ends. You need a foreach (PSObject pSObject in results) to loop through a Collection<PSObject> results = pipeline.Invoke();
private string RunScript(string script)
{
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();
Pipeline pipeline = runspace.CreatePipeline();
pipeline.Commands.AddScript(script);
pipeline.Commands.Add("Out-String");
Collection<PSObject> results = pipeline.Invoke();
runspace.Close();
StringBuilder stringBuilder = new StringBuilder();
foreach (PSObject pSObject in results)
stringBuilder.AppendLine(pSObject.ToString());
return string.ToString();
}
Hi, this is the same code as in the video, and yes, I do have an infinite-loop script and I want to display its output line by line. More than that, the last line should be
return stringBuilder.ToString();
Pls post a screenshot of your current output
Model property is empty
I am trying to move from WebForms to ASP.NET MVC and have some problems. I am trying to figure out why this is not working; I am getting this error: "Object reference not set to an instance of an object"
I have the class 'Pages':
namespace _2send.Model
{
public class Pages
{
public string PageContent { get; set; }
public string PageName { get; set; }
public int LanguageId { get; set; }
}
}
I am inserting the value to 'Pages.PageContent' property with this class:
namespace _2send.Model.Services
{
public class PagesService : IPagesService
{
public void GetFooterlinksPage()
{
DB_utilities db_util = new DB_utilities();
SqlDataReader dr;
Pages pages = new Pages();
using (dr = db_util.procSelect("[Pages_GetPageData]"))
{
if (dr.HasRows)
{
dr.Read();
pages.PageContent = (string)dr["PageContent"];
dr.Close();
}
}
}
The Controller method looks like this:
private IPagesService _pagesService;
public FooterLinksPageController(IPagesService pagesService)
{
_pagesService = pagesService;
}
public ActionResult GetFooterLinksPage()
{
_pagesService.GetFooterlinksPage();
return View();
}
I am trying to write the property in the view like this:
@model _2send.Model.Pages
<div>
@Model.PageContent;
</div>
When debugging, the method is fired and the dataReader is inserting the value to the 'PageContent' property, but I am still getting this error from the view.
Thanks!
You need to rewrite service method to return Pages:
public Pages GetFooterlinksPage()
{
DB_utilities db_util = new DB_utilities();
Pages pages = new Pages();
using (var dr = db_util.procSelect("[Pages_GetPageData]"))
{
if (dr.HasRows)
{
dr.Read();
pages.PageContent = (string)dr["PageContent"];
return pages;
// Because you use using, you don't need to close datareader
}
}
}
And then rewrite your action method:
public ActionResult GetFooterLinksPage()
{
var viewmodel = _pagesService.GetFooterlinksPage();
return View(viewmodel);
}
return View();
You didn't pass a model.
You need to pass the model as a parameter to the View() method.
I cannot figure this out, how do I do this? (pass the model)
declare the model, put the information in it, then return View(model)
I have a misunderstanding and I must understand the concept: from the controller I am calling 'GetFooterlinksPage();' which insert a value to 'pages.PageContent'. from the view I want to read this value '@Model.PageContent;' but I am still getting an error.
@Eyal: Writing new Pages() just creates an object. It doesn't actually do anything with that object; you still need to pass it to the view.
You can return a model:
var viewmodel = new _2send.Model.Pages();
//here you configure your properties
return View(viewmodel);
jms serializer @Exclude condition on class
My question is essentially the same as Symfony2 - JMS Serializer - Exclude entity if getDeleted() is not null, but the accepted answer offered a workaround rather than an actual solution and does not fit my requirements.
I have a class OriginalText and it has a getPublic() method that returns true if the entity is public. I want to exclude every entity which is not public.
use JMS\Serializer\Annotation\Exclude;
/**
* @Exclude(if="!object.getPublic()")
*/
class OriginalText{
public getPublic(){
//returns true if $this->public == true
}
}
However this is not working.
As suggested, I have installed symfony/expression-language; I also tried with !this.getPublic(), and with == false instead of !. None of this is working.
Any idea?
Prove Convolution is Equivariant with respect to translation
I was reading the following statement about how convolution is equivariant with respect to translation from the Deep Learning Book.
Let g be a function mapping one image function to another image
function, such that I'=g(I) is the image function with I'(x, y)
=I(x−1, y). This shifts every pixel of I one unit to the right. If we apply this transformation to I, then apply convolution, the result will
be the same as if we applied convolution to I', then applied the
transformation g to the output.
For the last line I bolded, they are applying convolution to I', but shouldn't this be I? I' is the translated image. Otherwise it would effectively be saying:
f(g(I)) = g( f(g(I)) )
where f is convolution & g is translation.
I am trying to execute the same myself in Python, using a 3D kernel with depth equal to that of the image, as would be the case in the convolution layer for a colored image (a house).
Here is my code for applying a translation & then convolution to an image.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import scipy
import scipy.ndimage
I = scipy.ndimage.imread('pics/house.jpg')
def convolution(A, B):
return np.sum( np.multiply(A, B) )
k = np.array([[[0,1,-1],[1,-1,0],[0,0,0]], [[-1,0,-1],[1,-1,0],[1,0,0]], [[1,-1,0],[1,0,1],[-1,0,1]]]) #kernel
## Translation
translated = 100
new_I = np.zeros( (I.shape[0]-translated, I.shape[1], I.shape[2]) )
for i in range(translated, I.shape[0]):
for j in range(I.shape[1]):
for l in range(I.shape[2]):
new_I[i-translated,j,l] = I[i,j,l]
## Convolution
conv = np.zeros( (int((new_I.shape[0]-3)/2), int((new_I.shape[1]-3)/2) ) )
for i in range( conv.shape[0] ):
for j in range(conv.shape[1]):
conv[i, j] = convolution(new_I[2*i:2*i+3, 2*j:2*j+3, :], k)
scipy.misc.imsave('pics/convoled_image_2nd.png', conv)
I get the following output:
Now, I switch the convolution and Translation steps:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import scipy
import scipy.ndimage
I = scipy.ndimage.imread('pics/house.jpg')
def convolution(A, B):
return np.sum( np.multiply(A, B) )
k = np.array([[[0,1,-1],[1,-1,0],[0,0,0]], [[-1,0,-1],[1,-1,0],[1,0,0]], [[1,-1,0],[1,0,1],[-1,0,1]]]) #kernel
## Convolution
conv = np.zeros( (int((I.shape[0]-3)/2), int((I.shape[1]-3)/2) ) )
for i in range( conv.shape[0] ):
for j in range(conv.shape[1]):
conv[i, j] = convolution(I[2*i:2*i+3, 2*j:2*j+3, :], k)
## Translation
translated = 100
new_I = np.zeros( (conv.shape[0]-translated, conv.shape[1]) )
for i in range(translated, conv.shape[0]):
for j in range(conv.shape[1]):
new_I[i-translated,j] = conv[i,j]
scipy.misc.imsave('pics/conv_trans_image.png', new_I)
And now I get the following output:
Shouldn't they be the same according the book? What am I doing wrong?
FWIW, you should probably convert those ## Translation and ## Convolution blocks into functions translation and convolution. Then it's easy for a reader to verify translation(convolution(I)) vs convolution(translation(I)) without having to compare line-by-line.
Also, what's the difference between your two images?
One image is cropped at the top, the other is not. They are different sizes too
Just as the book says, the linearity properties of convolution and translation guarantee that their order is interchangeable, excepting boundary effects.
For instance:
import numpy as np
from scipy import misc, ndimage, signal
def translate(img, dx):
img_t = np.zeros_like(img)
if dx == 0: img_t[:, :] = img[:, :]
elif dx > 0: img_t[:, dx:] = img[:, :-dx]
else: img_t[:, :dx] = img[:, -dx:]
return img_t
def convolution(img, k):
return np.sum([signal.convolve2d(img[:, :, c], k[:, :, c])
for c in range(img.shape[2])], axis=0)
img = ndimage.imread('house.jpg')
k = np.array([
[[ 0, 1, -1], [1, -1, 0], [ 0, 0, 0]],
[[-1, 0, -1], [1, -1, 0], [ 1, 0, 0]],
[[ 1, -1, 0], [1, 0, 1], [-1, 0, 1]]])
ct = translate(convolution(img, k), 100)
tc = convolution(translate(img, 100), k)
misc.imsave('conv_then_trans.png', ct)
misc.imsave('trans_then_conv.png', tc)
if np.all(ct[2:-2, 2:-2] == tc[2:-2, 2:-2]):
print('Equal!')
Prints:
Equal!
The problem is that you're overtranslating in the second example. After you shrink the image 2x, try translating by 50 instead.
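The stride is why the shift has to be halved: with stride 2, translating the input by 2k corresponds to translating the output by k. A minimal 1-D NumPy sketch of that relationship (the helper functions here are hypothetical illustrations, not the poster's code):

```python
import numpy as np

def conv1d_stride2(x, k):
    # 'valid' correlation with stride 2, matching the 2*i indexing above
    n = (len(x) - len(k)) // 2 + 1
    return np.array([np.dot(x[2 * i:2 * i + len(k)], k) for i in range(n)])

def shift_right(x, d):
    # shift right by d samples, zero-filling on the left
    out = np.zeros_like(x)
    out[d:] = x[:len(x) - d]
    return out

x = np.arange(1.0, 65.0)            # a toy 1-D "image" of length 64
k = np.array([1.0, 2.0, 3.0])

a = conv1d_stride2(shift_right(x, 10), k)   # translate by 10, then convolve
b = shift_right(conv1d_stride2(x, k), 5)    # convolve, then translate by 5
assert np.allclose(a[5:], b[5:])            # interiors agree; only the
                                            # zero-filled boundary differs
```

Translating by 100 before a stride-2 convolution matches translating by 50 after it, exactly as the answer says.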
How to wait for complete render of React component in Mocha using Enzyme?
I have a Parent component that renders a Child component. The Child component first renders unique props like 'name' and then the Parent component renders common props such as 'type' and injects those props into the Child component using React.Children.map.
My problem is that Enzyme is not able to detect the common props rendered by the Section component so I cannot effectively test whether or not the common props are being added.
The test:
const wrapper = shallow(
<Parent title="Test Parent">
<div>
<Child
name="FirstChild"
/>
</div>
</Parent>
)
// console.log(wrapper.find(Child).node.props) <- returns only "name" in the object
expect(wrapper.find(Child)).to.have.prop("commonPropOne")
expect(wrapper.find(Child)).to.have.prop("commonPropTwo")
expect(wrapper.find(Child)).to.have.prop("commonPropThree")
The code for injecting common props:
const Parent = (props) => (
<div
className="group"
title={props.title}
>
{ React.Children.map(props.children, child => applyCommonProps(props, child)) }
</div>
)
You will have to use enzyme's mount.
mount gives you full DOM rendering, which you need when you want to wait for components to render their children, rather than rendering only a single node like shallow.
Does Vue Need The Function Keyword in Computed Properties and Methods?
I see computed properties and methods in the Vue docs like this:
export default {
computed: {
foo: function() { return 'foo'; }
},
methods: {
bar: function(x) { return 'bar ' + x; }
}
}
However, I've also seen the function keyword omitted. Is this allowed? Are there any drawbacks?
export default {
computed: {
foo() { return 'foo'; }
},
methods: {
bar(x) { return 'bar ' + x; }
}
}
The only difference between them is that the shorthand function cannot be used as a constructor function with new operator (Constructor behaving differently using ES6 shorthand notation). But, this is inside Vue object, so it is irrelevant here. You can use them interchangeably.
Yes, this is allowed, starting from ES6.
From the MDN
Starting with ECMAScript 2015, a shorter syntax for method definitions on objects initializers is introduced. It is a shorthand for a function assigned to the method's name.
const obj = {
foo() {
return 'bar';
}
};
console.log(obj.foo());
// expected output: "bar"
Drawbacks:
This syntax is not supported by IE11
You probably need a transpiler, like Babel to use this syntax in non-supported environments
How to add a horizontal ListView inside a Column in Flutter?
I am building a layout where I need to add a horizontal ListView inside a Column. I've tried making it Expanded or Flexible, and nothing seems to work. This is the Column that I am putting the list in:
class HomePage extends StatelessWidget {
HomePage({super.key});
double? h;
double? w;
@override
Widget build(BuildContext context) {
h = MediaQuery.of(context).size.height;
w = MediaQuery.of(context).size.width;
return SafeArea(
child: Scaffold(
body: Column(crossAxisAlignment: CrossAxisAlignment.start, children: [
Padding(
padding: EdgeInsets.only(left: (h! * 0.02)),
child: Container(
width: w! * 0.09,
height: h! * 0.06,
decoration: BoxDecoration(
color: Colors.white,
border: Border.all(color: Colors.white),
borderRadius: BorderRadius.circular(15),
),
child: Image.asset(
"assets/images/menu_from_left.png",
// fit: BoxFit.fitHeight,
),
)),
Padding(
padding: EdgeInsets.only(top: h! * 0.04, left: 15.0),
child: Text(
" Discover New Opportunities !",
style: TextStyle(fontSize: 24, fontWeight: FontWeight.bold),
),
),
SizedBox(
height: h! * 0.02,
),
Padding(
padding: const EdgeInsets.all(15.0),
child: Container(
decoration: BoxDecoration(
color: Colors.white,
border: Border.all(
color: Colors.white,
),
borderRadius: BorderRadius.circular(8)),
child: Row(children: [
Icon(Icons.search),
Expanded(
child: TextField(
decoration: InputDecoration(
border: InputBorder.none,
hintText: "Search for new Jobs"),
showCursor: true,
cursorColor: Colors.black,
))
]),
),
),
SizedBox(
height: h! * 0.0119,
),
Padding(
padding: EdgeInsets.only(left: 15.0),
child: Text(
"For You !",
style: TextStyle(fontSize: 24, fontWeight: FontWeight.bold),
),
),
SizedBox(
height: h! * 0.0119,
),
Container(
height: 140,
width: 200,
child: Expanded(
child: ListView.builder(
// shrinkWrap: true,
scrollDirection: Axis.horizontal,
itemBuilder: ((context, index) {
return ItemBuilder(h, w, cards[index]);
}),
itemCount: cards.length,
),
),
),
SizedBox(
height: 20,
)
]),
),
);
}
}
And this is the ListView item that I am trying to create. It should have more than one item, but I was just trying it out first and it didn't work even for one item:
Widget ItemBuilder(h, w, Map mp) {
return Padding(
padding: const EdgeInsets.all(20.0),
child: ClipRRect(
borderRadius: BorderRadius.circular(12),
child: Container(
height: 200,
child: Column(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Expanded(
child: Row(
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(4),
border: Border.all(color: Colors.black)),
child: Image.asset(
mp["image"],
width: w! * 0.28,
fit: BoxFit.cover,
height: w! * 0.18,
),
),
),
Spacer(),
Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
width: w! * 0.25,
height: w! * 0.18,
decoration: BoxDecoration(
color: mp["jobcontainercolor"],
border: Border.all(
color: mp["jobcontainercolor"]),
borderRadius: BorderRadius.circular(8)),
child: mp["jobcontainertitle"]),
)
]),
),
SizedBox(
height: w! * 0.05,
),
Padding(padding: EdgeInsets.all(8), child: mp["Job_title"]),
Spacer(),
Padding(
padding: EdgeInsets.fromLTRB(8, 4, 0, 24),
child: mp["job_salary"])
]),
color: mp["cardcolor"],
),
));
}
I've got a couple of exceptions, but the last one was this:
FlutterError (RenderFlex children have non-zero flex but incoming width constraints are unbounded.
When a row is in a parent that does not provide a finite width constraint, for example if it is in a horizontal scrollable, it will try to shrink-wrap its children along the horizontal axis. Setting a flex on a child (e.g. using Expanded) indicates that the child is to expand to fill the remaining space in the horizontal direction.
These two directives are mutually exclusive. If a parent is to shrink-wrap its child, the child cannot simultaneously expand to fit its parent.
Consider setting mainAxisSize to MainAxisSize.min and using FlexFit.loose fits for the flexible children (using Flexible rather than Expanded). This will allow the flexible children to size themselves to less than the infinite remaining space they would otherwise be forced to take, and then will cause the RenderFlex to shrink-wrap the children rather than expanding to fit the maximum constraints provided by the parent.
If this message did not help you determine the problem, consider using debugDumpRenderTree():
https://flutter.dev/debugging/#rendering-layer
http://api.flutter.dev/flutter/rendering/debugDumpRenderTree.html
The affected RenderFlex is:
RenderFlex#985b1 relayoutBoundary=up10 NEEDS-LAYOUT NEEDS-PAINT NEEDS-COMPOSITING-BITS-UPDATE(creator: Row ← Expanded ← Column ← ColoredBox ← ConstrainedBox ← Container ← ClipRRect ← Padding ← RepaintBoundary ← IndexedSemantics ← _SelectionKeepAlive ← NotificationListener<KeepAliveNotification> ← ⋯, parentData: offset=Offset(0.0, 0.0); flex=1; fit=FlexFit.tight (can use size), constraints: BoxConstraints(0.0<=w<=Infinity, h=0.0), size: MISSING, direction: horizontal, mainAxisAlignment: start, mainAxisSize: max, crossAxisAlignment: start, textDirection: ltr, verticalDirection: down)
The creator information is set to:
Row ← Expanded ← Column ← ColoredBox ← ConstrainedBox ← Container ← ClipRRect ← Padding ← RepaintBoundary ← IndexedSemantics ← _SelectionKeepAlive ← NotificationListener<KeepAliveNotification> ← ⋯
See also: https://flutter.dev/layout/
If none of the above helps enough to fix this problem, please don't hesitate to file a bug:
https://github.com/flutter/flutter/issues/new?template=2_bug.md)
You can just remove Expanded and use a height on SizedBox:
SizedBox(
height: 140,
// width: 200, //not needed
child: ListView.builder(
scrollDirection: Axis.horizontal,
itemBuilder: ((context, index) {
To fixed ItemBuilder issue, remove Expanded from Row widget
Widget ItemBuilder(h, w, Map mp) {
return Padding(
padding: const EdgeInsets.all(20.0),
child: ClipRRect(
borderRadius: BorderRadius.circular(12),
child: Container(
height: 200,
child: Column(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: [
Row(
crossAxisAlignment: CrossAxisAlignment.start,
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
decoration: BoxDecoration(
borderRadius: BorderRadius.circular(4),
border: Border.all(color: Colors.black)),
child: Image.asset(
mp["image"],
width: w! * 0.28,
fit: BoxFit.cover,
height: w! * 0.18,
),
),
),
Padding(
padding: const EdgeInsets.all(8.0),
child: Container(
width: w! * 0.25,
height: w! * 0.18,
decoration: BoxDecoration(
// color: mp["jobcontainercolor"],
border: Border.all(
// color: mp["jobcontainercolor"],
),
borderRadius: BorderRadius.circular(8)),
// child: mp["jobcontainertitle"],
),
)
],
),
// SizedBox(
// height: w! * 0.05,
// ),
// Padding(padding: EdgeInsets.all(8), child: mp["Job_title"]),
// Spacer(),
// Padding(
// padding: EdgeInsets.fromLTRB(8, 4, 0, 24),
// child: mp["job_salary"],
// )
]),
// color: mp["cardcolor"],
),
),
);
}
Also try using SizedBox instead of Spacer
Tried this, the same exception is back.
The new issue is coming from ItemBuilder
Glad to help, You can find more about layout
You need to use Expanded widget directly inside Column, try this:
Expanded(child: Container(
height: 140,
width: 200,
child: ListView.builder(
// shrinkWrap: true,
scrollDirection: Axis.horizontal,
itemBuilder: ((context, index) {
return ItemBuilder(h, w, cards[index]);
}),
itemCount: cards.length,
),),),
Note that when you use Expanded you don't need to set height for horizontal list.
Update: in your ItemBuilder you are using Spacer, which requests infinite space inside a horizontal ListView; try adding a width to this Container in ItemBuilder:
Container(
height: 200,
width:300,//<--- add this
child: Column(
...
),
),
I've tried this and now I'm getting this error: FlutterError (RenderFlex children have non-zero flex but incoming width constraints are unbounded.
When a row is in a parent that does not provide a finite width constraint, for example if it is in a horizontal scrollable, it will try to shrink-wrap its children along the horizontal axis.
I updated my answer, check it out, @hamz
Tried this, but it's not working.
unexpected binary tree result
struct BSTreeNode
{
struct BSTreeNode *leftchild;
AnsiString data;
struct BSTreeNode *rightchild;
};
struct BSTreeNode * root;
String tree = "";
struct BSTreeNode * newNode(AnsiString x)
{
struct BSTreeNode * node = new struct BSTreeNode;
node->data = x;
node->leftchild = NULL;
node->rightchild = NULL;
return node;
}
struct BSTreeNode * insertBSTree(struct BSTreeNode * node , AnsiString x)
{ if(node == NULL) return newNode(x);
if(x < node->data)
node->leftchild = insertBSTree(node->leftchild, x);
else
node->rightchild = insertBSTree(node->rightchild, x);
return node;
}
void printBSTree(struct BSTreeNode * node)
{ if(node != NULL)
{ printBSTree(node->leftchild);
tree += node->data+"_";
printBSTree(node->rightchild);
}
}
//--- insert button ---
void __fastcall TForm1::Button1Click(TObject *Sender)
{
AnsiString data;
data = Edit1->Text;
root = insertBSTree(root, data);
tree = "";
printBSTree(root);
Memo1->Lines->Add(tree);
}
Suppose that I insert A, B, C, D, E, F, G into the binary tree (Button1Click is the button used to insert data into the binary tree).
the binary tree should be like
A
/ \
B C
/ \ / \
H J D E
/ \
F G
but it turned out to be like
A
\
B
\
C
\
D
\
E
\
F
\
G
struct BSTreeNode ---> tree node
struct BSTreeNode * newNode(AnsiString x) ---> creating a new node
button1Click ---> insert the data from Edit->text; to the binary tree.
insertBSTree ---> if the node is null, then create a new node. Inserting the data into the leftchild/rightchild
In C++, it is useless to use struct keyword when specifying a type. BSTreeNode is already a type, don't write struct BSTreeNode everywhere.
@prapin: one would also suggest using references or making these functions member functions.
Please post a minimal example of the problem (looks like just A & B will suffice), and what you did to try and diagnose the problem (i.e. use a debugger to see where it first didn't do what you expected).
What are the data values in your example?
Code looks OK, what's not clear is why you are expecting the result you are expecting. That is not a valid binary search tree for the data shown.
I debugged and there's no bugs.
"data" stores the value of the binary tree node. For example, if I insert "A" into the binary tree (Button1Click), the value "A" will be stored in "data".
My teacher's assisant told me that my binary tree turned out to be like the second one in the post.
You've created a binary search tree when you use < to determine which branch to insert into:
struct BSTreeNode * insertBSTree(struct BSTreeNode * node , AnsiString x)
{ if(node == NULL) return newNode(x);
if(x < node->data)
node->leftchild = insertBSTree(node->leftchild, x);
else
node->rightchild = insertBSTree(node->rightchild, x);
return node;
}
Your second example is a binary search tree, just not a balanced one.
Your expected result cannot be created by the functions shown because they maintain a strict invariant where the left branch must only contain values less than the root. The efficiency involved in a binary search tree only applies to balanced ones.
A balanced binary search tree containing this data would look like the following, because all trees involved (including subtrees) are balanced.
D
/ \
/ \
/ \
B F
/ \ / \
A C E G
You can get to this by adding a mechanism to check if your tree is balanced, and making adjustments by "rotating" trees left or right.
A
\
B
\
C
Becomes balanced, by rotating left around the smallest value on the right side of the tree.
B
/ \
A C
To balance your initial tree:
A
\
B
\
C
\
D
\
E
\
F
\
G
You'd check each subtree to see if it's balanced. The first subtree that is unbalanced is E, F, G.
A
\
B
\
C
\
D
\
F
/ \
E G
We'd then see that D, E, F, G is unbalanced to the right.
A
\
B
\
C
\
E
/ \
D F
\
G
And so on...
A
\
B
\
D
/ \
C E
\
F
\
G
A
\
B
\
D
/ \
C F
/ \
E G
A
\
C
/ \
B D
\
F
/ \
E G
A
\
C
/ \
B E
/ \
D F
\
G
A
\
D
/ \
C E
/ \
B F
\
G
A
\
D
/ \
C F
/ / \
B E G
B
/ \
A C
\
D
\
F
/ \
E G
B
/ \
A C
\
E
/ \
D F
\
G
B
/ \
A D
/ \
C E
\
F
\
G
B
/ \
A D
/ \
C F
/ \
E G
C
/ \
B D
/ \
A F
/ \
E G
C
/ \
B E
/ / \
A D F
\
G
If we go another few steps further:
D
/ \
C E
/ \
B F
/ \
A G
D
/ \
B E
/ \ \
A C F
\
G
D
/ \
/ \
/ \
B F
/ \ / \
A C E G
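The left rotation used step by step above is a constant-time pointer swap. A minimal Python sketch for illustration (a hypothetical Node class, not the questioner's C++ code):

```python
class Node:
    """Minimal stand-in for the question's BSTreeNode."""
    def __init__(self, data, left=None, right=None):
        self.data, self.left, self.right = data, left, right

def rotate_left(node):
    # the right child becomes the new root of this subtree
    pivot = node.right
    node.right = pivot.left
    pivot.left = node
    return pivot

def inorder(node):
    # in-order traversal; must stay sorted before and after rotating
    return inorder(node.left) + [node.data] + inorder(node.right) if node else []

# the A -> B -> C chain from the answer's first example
root = Node("A", right=Node("B", right=Node("C")))
root = rotate_left(root)
#   B
#  / \
# A   C
assert root.data == "B" and root.left.data == "A" and root.right.data == "C"
assert inorder(root) == ["A", "B", "C"]   # BST order is preserved
```

Self-balancing trees (AVL, red-black) apply exactly these rotations automatically after each insert.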
There is no smooth submersion from $S^2$ to $S^1$.
Show that there is no smooth submersion from $S^2$ to $S^1$. I know of one algebraic topology proof which I think is not the shortest one. That submersion is an open map should be a useful fact in obtaining a nicer proof. Any ideas?
The immediate proof that comes to mind lifts such a map to the real numbers, and shows that submersivity is impossible in the presence of a maximum. It's short, did you have this in mind or something else?
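Spelled out, that lifting argument is short indeed; a sketch filling in the standard steps:

```latex
Since $S^2$ is simply connected, any smooth $f\colon S^2 \to S^1$ lifts
through the universal covering $p\colon \mathbb{R} \to S^1$,
$p(t) = e^{2\pi i t}$: there is a smooth $F\colon S^2 \to \mathbb{R}$
with $f = p \circ F$. By compactness, $F$ attains a maximum at some
$m \in S^2$, so $d_m F = 0$. Since $p$ is a local diffeomorphism,
\[
  d_m f \;=\; d_{F(m)}p \circ d_m F \;=\; 0 ,
\]
so $f$ is not a submersion at $m$.
```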
Another approach to showing that there is no such submersion $f:S^2\to S^1$ would be to endow $S^2$ with its standard riemannian metric, and to pull the vector field $\frac\partial{\partial t}$ back to $S^2$ along the submersion, by selecting the only tangent vector at $m\in S^2$ that projects onto $\frac\partial{\partial t}$ while being orthogonal to $ker(d_mf)$. This contradicts the hairy ball theorem.
Thanks for your first solution, which is a nice one. I think such a lift is possible because because the induced map from $\pi_1(S^2)$ to $\pi_1(S^1)$ is injective. The solution in my mind is more or less similar to your second solution, but without mention of Riemannian metric and with more algebraic topology favor.
$\pi_1(S^2)=0$.
http://math.stackexchange.com/questions/1294161/existence-of-submersions-from-spheres-into-spheres answers your question in greater generality (there is no submersion $S^{2n} \to M$ where $M$ is a manifold of dimension $< 2n$). See also this: http://mathoverflow.net/questions/207589/when-is-there-a-submersion-from-a-sphere-into-a-sphere
Drag and Drop to Move Original File and Create Copy in Current Folder
Summary: I need a solution so that when I use Windows Explorer (or something similar) to drag and drop files, it actually moves the original file to the new location and creates a copy in the original location that is marked as read-only and hidden.
Full Description: I'm helping an organization (mostly on Windows 10/11) that needs to migrate and reorganize a large collection of documents on a file share to a new location. Altogether they have many thousands of "client" files in thousands of directories with no consistent organization. They have created a template for how they want each client's folder and subfolders to be structured and will enlist all their users to manually migrate the files, so they are correctly sorted.
To keep things running smoothly during the transition period and as a safety measure, they don't want to remove or change anything in the original structure, but simply copying the files causes problems too. They want the files in the new location to keep their original Created and Modified dates, plus they want an easy way to tell which files have already been migrated.
I'd like to implement a temporary solution, so that when they drag and drop from the current archive, it actually moves the file and creates a copy, with the properties read-only and also hidden (or maybe archive flag cleared). For a bonus, it could also add a line to a text file in the current folder stating where the file was moved!
P.S. Simpler is better, so pie in the sky, (most of) this gets done with just Group Policy and registry keys. but PowerShell, a small app, or other solutions will be considered.
If you're copying to a new drive or partition, what you ask [as you've asked it] is impossible. Really this is an XY Problem - what you need is a copy mechanism that preserves the file dates.
Simpler idea : Keep a backup as a safety measure.
Is it fair to say that there currently aren't any known solutions for the problem that would constitute a good answer on Super User? If so, does it make sense to answer along these lines and then reformat the question for Stack Overflow, where the community could answer with novel code to solve the problem?
How to vectorize whole text using fasttext?
To get vector of a word, I can use:
model["word"]
but if I want to get the vector of a sentence, I need to either sum the vectors of all words or take the average of all vectors.
Does FastText provide a method to do this?
Do you have any idea about an implementation in Java?
If you want to compute vector representations of sentences or paragraphs, please use:
$ ./fasttext print-sentence-vectors model.bin < text.txt
This assumes that the text.txt file contains the paragraphs that you want to get vectors for. The program will output one vector representation per line in the file.
This has been clearly mentioned in the README of fasttext repo.
https://github.com/facebookresearch/fastText
Is there another implementation using Java?
AFAIK, fasttext supports only CLI for now. But, I was able to find a library that was the pythonic interface of fasttext. You can google to see if you can find one in java.
I found one (https://github.com/vinhkhuc/JFastText) but I have the same question as @Andrey. I can get each line with a for loop, then another loop over the words to get a vector for each one, but how can I get the total? I couldn't find anything like the line you posted.
jft.runCmd(new String[] {
"supervised",
"-input", "src/test/resources/data/labeled_data.txt",
"-output", "src/test/resources/models/supervised.model"
});
This snippet has been picked from the library you mentioned.
You can use the command 'print-vectors' just like this, but you will have to figure out how to pass in the parameters as I don't know much about running commands from java code.
Thanks for replying. I have to deal with the data line by line, as I'm using this in real time; I think it would be wrong to use the whole file at once, right?
No, the purpose of this 'print-vectors' command is to give you the vectors of all the lines in a file. If you see the command again 'text.txt' is a file that contains preprocessed data (i.e. one paragraph per line). You just have to put all your sentences in a file in the format specified and pass in that file to 'print-vectors' as an option.
You mean that every call to print-vectors works on each line, not on all the lines in the file at once?
Okay this is getting really difficult to explain :P I'll try to explain in more simple words. When you call print-vectors, you provide it a file (your input file with lots of paragraphs or sentences and one line of the file is treated as one paragraph). You can have as many paragraphs in a file as you like. You have to call print-vectors only once and it will output the vectors of all the lines in the input file.
I suggest you go through the Fasttext docs, everything has been mentioned there nicely. :)
Many thanks, Aanchal, for helping, and sorry for the late reply. I still want to make sure I got it right: I will call print-vectors only once and it will output vectors for all lines in the file, like a for loop? I really appreciate your help and patience.
@AanchalSharma Thanks a lot for your great answer. Please let me know if you know an answer for this: https://stackoverflow.com/questions/46923066/fasttext-bigram-vs-sentence-word-vectors
You can use python wrapper also. Install it using official install guide from here:
https://fasttext.cc/docs/en/python-module.html#installation
And after that:
import fasttext
model = fasttext.load_model('model.bin')
vect = model.get_sentence_vector("some string") # 1 sentence
vect2 = [model.get_sentence_vector(el.replace('\n', '')) for el in text] # for text
Note, this is pretty difficult to make work on windows computers. I'll suggest using gensim
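If get_sentence_vector isn't available (older bindings), averaging the word vectors yourself is the usual fallback the question alludes to. A minimal NumPy sketch with a toy lookup table standing in for the model (the dict below is a placeholder, not a real FastText model):

```python
import numpy as np

# toy stand-in for model[word]; a real model maps words to dense vectors
toy_model = {
    "hello": np.array([1.0, 0.0]),
    "world": np.array([0.0, 1.0]),
}

def sentence_vector(model, sentence, dim=2):
    # average the vectors of the words the model knows;
    # return a zero vector if no word matches
    vecs = [model[w] for w in sentence.split() if w in model]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

print(sentence_vector(toy_model, "hello world"))  # [0.5 0.5]
```

Note that FastText's own sentence vectors are not a plain average (they normalize word vectors first), so treat this as an approximation.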
To get vector for a sentence using fasttext, try the following command
$ echo "Your Sentence Here" | ./fasttext print-sentence-vectors model.bin
For an example on this, refer Learn Word Representations In Fasttext
Split text effect
I am trying to mimic this look of the text split. I found the CodePen for it, but it uses SCSS, and I am looking for it to be CSS only if possible. If someone could help me translate the code or make it CSS, that would be great. Thanks for the help in advance.
Codepen here: https://codepen.io/alexfislegend/pen/NGaaWY?q=split+text&limit=all&type=type-pens
On CodePen, click the arrow to the right of "CSS" and click "View Compiled CSS".
Viewing the compiled CSS on Codepen
You are required to post your markup and code here, not a codepen. [mcve]
just google "convert scss to css" and you're 90% done your task
While I am not going to write out all of the code for you I will suggest a method of doing it:
I suggest you make two div boxes, one for the filled in or solid text and one for the outlined text.
Then you set the color, font-family (Google Fonts is a good resource), font-size, and font-weight, to suit your needs for the first div.
On the second div again set the font-family, font-size, and font-weight to the same values, except set the color to transparent and add a colored border to the text. This will simulate the sort of outlined effect in the codepen.
Oh, and to make the two divs appear on the same line look at this answer.
While this will not automatically split the text between the two texts, it is a simple way to get a similar effect to what you want.
If he doesn't supply the code then the question is off topic and shouldn't be answered at all.
Not all questions can or should be answered
@Rob Oo... Thank you kind sir.
How to concat string to a variable in Terraform Template DSL
I am passing a map from terraform code to jinja2 for creating an ansible's hosts file. In the hosts file, I am printing the map using the following code and it is working fine.
1. %{ for key, value in cat ~}
2. [${ key }]
3. %{ for v in value ~}
4. ${ v }
5. %{ endfor ~}
6. %{ endfor ~}
I am now trying to append :children to the key, but it is not working. I tried the following four ways of changing line 2 in the above code, but no luck, following the link String concatenation in Jinja:
Attempt 1:
[${ key|join(":children") }]
Attempt 2:
[${ key + ':children' }]
Attempt 3:
[${ key ~ ':children' }]
Attempt 4:
[${ key ~ ':children' ~}]
Not tested, but
-I don’t understand addressing variables using $ in Jinja...
I know only addressing via {{ variable }}...
-I would escape square brackets. Below I escape also complete :children] part:
So maybe following will work:
{{ '[' }} {{ key }} {{ ':children]' }}
@KonstantinVolenbovskyi this is a Terraform template, so it is their own DSL.
["${key}:children"] should be what you are looking for.
Does this answer your question? Terraform Combine Variable and String
I tried [${ key }:children] just now, and so far so good. I don't know if it will have a space or not, i.e. key :children or key:children; the latter is what I want. I will try ["${key}:children"] after the above run.
[${ key }:children] that did the trick so the complete answer will be
%{ for key, value in cat ~}
[${ key }:children]
%{ for v in value ~}
${ v }
%{ endfor ~}
%{ endfor ~}
Which will give us
[blue:children]
[green:children]
How can I integrate WPML or other multilingual options for freelance marketplace wordpress theme?
tried searching first but haven't found the exact answer I need. I am trying to run a freelance marketplace site on Wordpress that will be in both Chinese and English depending on the user's display language. Each freelancer profile, employer profile, project title and description, and project workspace communications will need to be translated into both languages.
The problem is that with using a plugin like WPML, each project will be split up into a separate post for each language. I am not sure exactly the extent that WPML is able to "sync" posts, and that is important because in this marketplace, an employer posts a project, in which a freelancer then views it and makes a bid. The project manager then accepts the best bid from a freelancer and they both enter a project workspace in which they can communicate in. If all posts are kept separate, and have separate URLs, wouldn't the functions for making a bid, accepting a bid, and the workspace be separate and therefore broken as well?
Anyone have this problem before? Is there a solution just for this sort of problem that is hiding out there?
Thank you!!
Google Translate has a widget that you can embed in the header, or really anywhere on the page for that matter. Using some PHP to detect the browser/system language set, and JavaScript to set the "selected" translate option, you should be able to get the desired effect. One site I worked on made use of the widget without the JavaScript portion that selects it for you, and I can confirm that it works with WordPress on that manner at least. The JavaScript portion shouldn't be too hard, and I can edit my answer with actual code snippets/ideas if needed.
EDIT: A link to the site I mentioned, when I worked at a past employer, is still active here: http://adkalpha.com/gmlaw2/
Thanks for the reply! Unfortunately, Google Translate makes errors frequently when translating, and that will not work for what we are trying to create. We need to make sure that both sides completely understand each other
@borie88 I see, and yes, that makes sense. If you had another translation service in mind, you could do something similar with that service as well (there are many paid services available out there that will do similar things, and I was simply suggesting the Google option as it is free). If you want more complex solutions, that may require more advanced coding, there are options out there for sure, but you may be better off re-thinking your strategy if it gets much more complex.
Top 5 time-consuming SQL queries in Oracle
How can I find poor performing SQL queries in Oracle?
Oracle maintains statistics on the shared SQL area, with one row per SQL string (v$sqlarea).
But how can we identify which of them are performing badly?
I found this SQL statement to be a useful place to start (sorry I can't attribute this to the original author; I found it somewhere on the internet):
SELECT * FROM
(SELECT
sql_fulltext,
sql_id,
elapsed_time,
child_number,
disk_reads,
executions,
first_load_time,
last_load_time
FROM v$sql
ORDER BY elapsed_time DESC)
WHERE ROWNUM < 10
/
This finds the top SQL statements that are currently stored in the SQL cache ordered by elapsed time. Statements will disappear from the cache over time, so it might be no good trying to diagnose last night's batch job when you roll into work at midday.
You can also try ordering by disk_reads and executions. Executions is useful because some poor applications send the same SQL statement way too many times. This SQL assumes you use bind variables correctly.
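For the executions angle, a variant of the same query (a sketch, reusing the columns shown above) could look like:

```sql
-- Same shape as the elapsed_time query, but ranked by execution count,
-- which surfaces applications that send the same statement far too often
SELECT * FROM
  (SELECT sql_fulltext,
          sql_id,
          executions,
          disk_reads,
          elapsed_time
     FROM v$sql
    ORDER BY executions DESC)
WHERE ROWNUM < 10
/
```

Swapping `executions` for `disk_reads` in the ORDER BY gives the I/O-heavy view instead.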
Then, you can take the sql_id and child_number of a statement and feed them into this baby:
SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', &child));
This shows the actual plan from the SQL cache and the full text of the SQL.
You should add elapsed_time to the select, otherwise it's pretty confusing.
Add this WHERE clause in the inner query to only include recent slow queries:
WHERE LAST_LOAD_TIME > '2020-04-10'
When running APEX these two fields are also useful: module, action.
You could find disk intensive full table scans with something like this:
SELECT Disk_Reads DiskReads, Executions, SQL_ID, SQL_Text SQLText,
SQL_FullText SQLFullText
FROM
(
SELECT Disk_Reads, Executions, SQL_ID, LTRIM(SQL_Text) SQL_Text,
SQL_FullText, Operation, Options,
Row_Number() OVER
(Partition By sql_text ORDER BY Disk_Reads * Executions DESC)
KeepHighSQL
FROM
(
SELECT Avg(Disk_Reads) OVER (Partition By sql_text) Disk_Reads,
Max(Executions) OVER (Partition By sql_text) Executions,
t.SQL_ID, sql_text, sql_fulltext, p.operation,p.options
FROM v$sql t, v$sql_plan p
WHERE t.hash_value=p.hash_value AND p.operation='TABLE ACCESS'
AND p.options='FULL' AND p.object_owner NOT IN ('SYS','SYSTEM')
AND t.Executions > 1
)
ORDER BY DISK_READS * EXECUTIONS DESC
)
WHERE KeepHighSQL = 1
AND rownum <=5;
Isn't DISK_READS the total number of disk reads, so you don't need to multiply by executions?
You could take the average buffer gets per execution during a period of activity of the instance:
SELECT username,
buffer_gets,
disk_reads,
executions,
buffer_get_per_exec,
parse_calls,
sorts,
rows_processed,
hit_ratio,
module,
sql_text
-- elapsed_time, cpu_time, user_io_wait_time, ,
FROM (SELECT sql_text,
b.username,
a.disk_reads,
a.buffer_gets,
trunc(a.buffer_gets / a.executions) buffer_get_per_exec,
a.parse_calls,
a.sorts,
a.executions,
a.rows_processed,
100 - ROUND (100 * a.disk_reads / a.buffer_gets, 2) hit_ratio,
module
-- cpu_time, elapsed_time, user_io_wait_time
FROM v$sqlarea a, dba_users b
WHERE a.parsing_user_id = b.user_id
AND b.username NOT IN ('SYS', 'SYSTEM', 'RMAN','SYSMAN')
AND a.buffer_gets > 10000
ORDER BY buffer_get_per_exec DESC)
WHERE ROWNUM <= 20
It depends which version of Oracle you have: for 9i and below, Statspack is what you are after; for 10g and above, you want AWR. Both these tools will give you the top SQLs and lots of other stuff.
There are a number of possible ways to do this, but have a google for tkprof
There's no GUI... it's entirely command line and possibly a touch intimidating for Oracle beginners; but it's very powerful.
This link looks like a good start:
http://www.oracleutilities.com/OSUtil/tkprof.html
Is there any way to get the data with a sql query?
Does Oracle maintains relevant data in some system tables?
It does not maintain as much data in the system tables as you get with tkprof. See my answer for a quick and dirty way to look for bad statements. tkprof is better but you need to specifically setup a test and run it.
The following query returns SQL statements that perform large numbers of disk reads (also includes the offending user and the number of times the query has been run):
SELECT t2.username, t1.disk_reads, t1.executions,
t1.disk_reads / DECODE(t1.executions, 0, 1, t1.executions) as exec_ratio,
t1.command_type, t1.sql_text
FROM v$sqlarea t1, dba_users t2
WHERE t1.parsing_user_id = t2.user_id
AND t1.disk_reads > 100000
ORDER BY t1.disk_reads DESC
Run the query as SYS and adjust the number of disk reads depending on what you deem to be excessive (100,000 works for me).
I have used this query very recently to track down users who refuse to take advantage of Explain Plans before executing their statements.
I found this query in an old Oracle SQL tuning book (which I unfortunately no longer have), so apologies, but no attribution.
Here is a more complete one that I got from AskTom (Oracle). I hope it helps you:
select *
from v$sql
where buffer_gets > 1000000
or disk_reads > 100000
or executions > 50000
While searching I found the following query, which does the job with one assumption (query execution time > 6 seconds):
SELECT username, sql_text, sofar, totalwork, units
FROM v$sql,v$session_longops
WHERE sql_address = address AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;
I think the above query will list the details for the current user.
Comments are welcome!!
This query is not limited to the current user, and would only work if the query appears in v$session_longops. Longops records how far through a sort, table scan, or index full scan Oracle is. If your query is slow because of a bad nested loops plan, it will not show because there are no longops.
R: How to include lm residual back into the data.frame?
I am trying to put the residuals from lm back into the original data.frame:
fit <- lm(y ~ x, data = mydata, weight = ind)
mydata$resid <- fit$resid
The second line would normally work if the residuals had the same length as the number of rows of mydata. However, in my case, some of the elements of ind are NA. Therefore the residual vector is usually shorter than the number of rows. Also, fit$resid is a plain numeric vector, so there is no label for me to merge it back with the mydata data.frame. Is there an elegant way to achieve this?
I think it should be pretty easy if ind is just a vector.
sel <- which(!is.na(ind))
mydata$resid <- NA
mydata$resid[sel] <- fit$resid
As an alternative, the model can be fit with lm(..., na.action=na.exclude). residuals() will then pad its output with NAs for the omitted cases - see this answer.
I thought this only worked for NA's in the data.frame, not in an array of weights that aren't in the data.frame.
Apparently, it works for weights, see http://pastebin.com/0VUd68cz. Although I didn't investigate this very thoroughly.
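A minimal sketch of the na.exclude approach, assuming the same mydata/ind names as in the question:

```r
# Fit with na.action = na.exclude so that residuals() pads its output
# with NA for rows dropped because the weight was NA
fit <- lm(y ~ x, data = mydata, weights = ind, na.action = na.exclude)

# The padded vector now lines up with nrow(mydata) again
mydata$resid <- residuals(fit)
```

This avoids the manual bookkeeping with which(!is.na(ind)) from the first answer.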
How to add a class to a form tag that only present on click event?
For example: I do have this...
<p>Below is a form: <span>show form</span></p>
<div class="container"></div>
when user clicks the "show form", jQuery will add a new form inside the class "container"
<p>Below is a form:</p>
<div class="container">
<form>
<-- some code here -->
</form>
</div>
Note: the form does not exist on first page load; it only shows up when the user clicks on the span. Is there a way to have the jQuery run only after the form shows up?
This is how I planned to do it in jQuery, but it doesn't work. Please correct this...
$("span").click(function(){
$(".container form").ready(function(){
$(this).addClass("active");
})
})
Thank you.
I think that you might want to make use of the .live jQuery event. This allows you to attach events to elements created after the DOM has loaded. For instance when you load additional elements via AJAX. In this simple example you click on 'click me.' A form is dynamically loaded and it is given the active class.
http://jsfiddle.net/v5j42/1/
$("span#clicker").click( function() {
$("div#container").append('<form class="foo"></form>');
});
$(".foo").live('DOMNodeInserted', function() {
$(this).addClass("active");
});
Hmmm, after posting this I started to research the DOMNodeInserted event. It looks like this is deprecated. I don't know a workaround though. Can anyone help? SO discussion on this page - http://stackoverflow.com/questions/2143929/domnodeinserted-equivalent-in-ie
@Eron - good point. I feel as though there must be some other event that you can tie in here though and this would work for you. I can't figure it out though.
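One possible workaround for the deprecated mutation event (a sketch assuming jQuery 1.7+ and the same hypothetical #clicker/#container ids from the fiddle) is to delegate the click and add the class at creation time, so no insertion event is needed at all:

```javascript
// jQuery 1.7+: delegate the click handler, then apply the class
// before the form is even inserted into the DOM
$(document).on('click', 'span#clicker', function () {
  $('<form class="foo"></form>')
    .addClass('active')          // class applied at creation time
    .appendTo('div#container');  // then inserted into the page
});
```

Because the class is set on the detached element, there is nothing to "watch for" after insertion.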
How about
$(document).ready(function () {
$("#form-button").click(function() {
$(this).addClass("yourCSSclass");
})
});
This will add the desired CSS class to your form button ID when clicked.
If you would want to also be able to remove the added class, say on another click event, then you can do: $(document).ready(function () { $("#form-button").click(function() { $(this).toggleClass("yourCSSclass"); }) });
You can do it this way. Not confirmed, but it should solve your problem.
<form id="form"></form>
Try to check whether the form id exists in the DOM or not, like this:
$("span").click(function(){
    var formId = $("form").attr("id");
    if (formId !== undefined && formId == "form") {
        $(this).addClass("active");    // will add a class on the span
        $('#form').addClass("active"); // will add a class on the form tag
    }
});
hope this helps you...
$("span").click(function(){
$(".container form").addClass("active");
})
document.ready goes outside of the binding code:
$(document).ready(function () {
$("span").click(function(){
$(".container form").addClass("active");
})
});
why don't you hard-code in the form and give it a style of display: none; then on-click set its display to be 'block'?
I can't touch the code, it is already built. I just need to add a class name after the form loads.
$(document).ready(); should not fire until all dom elements are loaded and ready, so if the form is a part of the page the second block of code in my answer should attach the 'active' class to any forms inside of any elements with the 'container' class (http://api.jquery.com/ready/)
The form does not exist yet on page load; it only shows after the user clicks. Is there a way to tell jQuery to addClass after the form shows up?
I think you might be looking for something as simple as this?
HTML
<p>Below is a form: <span id="showForm">show form</span></p>
<div class="container">
<form>
<input />
</form>
</div>
jQuery
$('.container').hide();
$('#showForm').click(function(){
$('.container').show();
});
http://jsfiddle.net/jasongennaro/9YNLP/1/
Can't load hawtio plugin
I have hawtio standalone application (hawtio-app-1.4.52.jar).
I wanted to deploy the simple plugin, but I wasn't successful at building it alone, so I downloaded the hawtio application from github and built the "simple plugin" without changing it. (I opened the whole project and used build artifact in IntelliJ idea.) - Building was without warning or errors and I got the .WAR file of this plugin.
I created "plugins" directory next to the hawtio.jar and put the war inside.
There is no new tab in hawtio or no other sign of this plugin running.
I can access plugin's folder on http://<IP_ADDRESS>:8090/simple-plugin-1.5-SNAPSHOT/ but it only shows folder structures and plain text of source files.
The PluginServlet is mentioned in this question, but I have no idea how to import it into the simple plugin.
Does anyone know, how to make plugin visible and usable?
Thanks.
When I run the hawtio jar in terminal, this shows up:
[main] INFO jetty - using temp directory for jetty: /Users/mcejka/.hawtio/tmp
[main] INFO jetty - Scanning for 3rd party plugins in directory: plugins
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /simple-plugin-1.5-SNAPSHOT, did not find org.apache.jasper.servlet.JspServlet
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/mcejka/.hawtio/tmp/simple-plugin-1.5-SNAPSHOT.war/webapp/WEB-INF/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/mcejka/Downloads/hawtio-app-1.4.52.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (io.hawt.web.plugin.HawtioPlugin).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[main] INFO org.eclipse.jetty.webapp.WebAppContext - hawt.io simple plugin at http://<IP_ADDRESS>:8090/simple-plugin-1.5-SNAPSHOT
[main] INFO jetty - Added 3rd party plugin with context-path: /simple-plugin-1.5-SNAPSHOT
Added 3rd party plugin with context-path: /simple-plugin-1.5-SNAPSHOT
Embedded hawtio: You can use --help to show usage
Using options [
war=/private/var/folders/bm/gscmy_6d5f52xz038twpwymjfjz5kl/T/hawtio-1582436603468747864.war
contextPath=/hawtio
port=8090
extraClassPath=file:/Library/Java/JavaVirtualMachines/jdk1.8.0_51.jdk/Contents/Home/lib/tools.jar
plugins=plugins
jointServerThread=false
help=false]
About to start hawtio /private/var/folders/bm/gscmy_6d5f52xz038twpwymjfjz5kl/T/hawtio-1582436603468747864.war
[main] INFO org.eclipse.jetty.server.Server - jetty-8.y.z-SNAPSHOT
[main] INFO org.eclipse.jetty.webapp.WebInfConfiguration - Extract jar:file:/private/var/folders/bm/gscmy_6d5f52xz038twpwymjfjz5kl/T/hawtio-1582436603468747864.war!/ to /Users/mcejka/.hawtio/tmp/webapp
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /hawtio, did not find org.apache.jasper.servlet.JspServlet
[main] INFO io.hawt.system.ConfigManager - Configuration will be discovered via system properties
[main] INFO io.hawt.jmx.JmxTreeWatcher - Welcome to hawtio 1.4.52 : http://hawt.io/ : Don't cha wish your console was hawt like me? ;-)
[main] INFO io.hawt.jmx.UploadManager - Using file upload directory: /var/folders/bm/gscmy_6d5f52xz038twpwymjfjz5kl/T//uploads
[main] INFO /hawtio - Loading Blueprint contexts [file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/classes/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-aether-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-core-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-git-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-ide-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-json-schema-mbean-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-kubernetes-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml, jar:file:/Users/mcejka/.hawtio/tmp/webapp/WEB-INF/lib/hawtio-local-jvm-mbean-1.4.52.jar!/OSGI-INF/blueprint/blueprint.xml]
[main] INFO io.hawt.git.GitFacade - hawtio using config directory: /Users/mcejka/.hawtio/config
[main] INFO io.hawt.git.GitFacade - Performing a pull in git repository /Users/mcejka/.hawtio/config on remote URL: https://github.com/hawtio/hawtio-config.git. Subsequent pull attempts will use debug logging
[main] INFO io.hawt.web.AuthenticationFilter - Starting hawtio authentication filter, JAAS authentication disabled
[main] INFO /hawtio - jolokia-agent: Using access restrictor classpath:/jolokia-access.xml
[main] INFO org.eclipse.jetty.webapp.WebAppContext - hawtio at http://<IP_ADDRESS>:8090/hawtio
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started SelectChannelConnector@<IP_ADDRESS>:8090
hawtio: Don't cha wish your console was hawt like me!
=====================================================
http://localhost:8090/hawtio
[qtp1501587365-23] INFO io.hawt.web.keycloak.KeycloakServlet - Keycloak integration is disabled
Have you tried removing the version number from the war name?
Solved by properly renaming the plugin. It has to be "simple-plugin" instead of "simple-plugin-1.42" or anything else.
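A minimal sketch of the resulting deployment, assuming the war was built as simple-plugin-1.5-SNAPSHOT.war and the hawtio jar sits in the current directory:

```shell
# Strip the version from the war name so hawtio registers the plugin
# under its expected base name
mkdir -p plugins
cp simple-plugin-1.5-SNAPSHOT.war plugins/simple-plugin.war

# Restart hawtio so it rescans the plugins directory on startup
java -jar hawtio-app-1.4.52.jar --port 8090 --plugins plugins
```

After the restart, the plugin should be served under /simple-plugin instead of the versioned context path seen in the logs above.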
Add custom sphinx extension like scipy.minimize
scipy.optimize.minimize has a nice interface with only one public function to perform minimization, and a lot of private functions for each method.
Looking at the source code, taking the Nelder-Mead method for instance, the documentation of the method comes from this private function, but notice that the header (with the name of the function and the parameters) corresponds to the scipy.optimize.minimize function.
Looking at the source code of this page, it seems that they manage to do that with a custom Sphinx directive called scipy-optimize:function:.
My question is: How can I find where they defined this directive? I would like to do something very similar for a project of mine.
Edit: I found the source code for the definition of the directive.
Here is the relevant source code.
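For a project of your own, a similar directive can be registered from a small Sphinx extension. This is only a sketch (the directive name and rendering are illustrative, not scipy's actual implementation, which defines a full custom domain):

```python
from docutils import nodes
from docutils.parsers.rst import Directive

class ProjectFunctionDirective(Directive):
    """Render a function signature as a header, followed by its description."""
    required_arguments = 1
    final_argument_whitespace = True  # allow spaces inside the signature
    has_content = True

    def run(self):
        # Show the signature argument as a literal block, then the body text
        signature = nodes.literal_block(text=self.arguments[0])
        body = nodes.paragraph(text="\n".join(self.content))
        return [signature, body]

def setup(app):
    # Registered under a plain name here; scipy's doc build uses a custom
    # domain to get the "scipy-optimize:function" form
    app.add_directive("myproject-function", ProjectFunctionDirective)
```

Listing the module containing setup() in the extensions list of conf.py makes the directive available in your .rst files.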
ControlsFX PropertySheet not showing anything
I am using the ControlsFX library in my project to generate forms dynamically using PropertySheet.
Controllor class:
public class Controllor implements Initializable
{
@FXML
private PropertySheet sheet;
@Override
public void initialize(URL location, ResourceBundle resources)
{
sheet = new PropertySheet(BeanPropertyUtils.getProperties(new BeanObj(someProperties)));
sheet.setMode(PropertySheet.Mode.NAME);
}
}
My fxml file contains an AnchorPane and PropertySheet (just for testing).
The program runs with no errors but it shows an empty propertySheet control!
So, am I doing this right? Any help would be appreciated!
EDIT: I managed to get the application running by implementing the same code in the start() method of the MainClass.
I am still confused! I can't figure it out...
EDIT 2: F5 solved everything for me.
You were creating a second instance of sheet. With the @FXML annotation, the FXMLLoader created one.
I tried this approach but it gives me a NullPointerException.
You were creating a second instance of sheet, but without being added to the scene graph.
With the @FXML annotation, the FXMLLoader created one that was the one added, but without content or elements.
This should work:
@FXML
private PropertySheet sheet;
public void initialize() {
sheet.getItems().setAll(BeanPropertyUtils.getProperties(new BeanObj()));
sheet.setMode(PropertySheet.Mode.NAME);
}
I am using Eclipse and Scene Builder, and for some reason the FXML files were not syncing in Eclipse! That's why I kept getting errors and an empty scene.
F5 solved the problem! Thanks anyway.
How is it in my best interest not to submit a paper to two journals simultaneously?
The two journals I am considering for my paper each demand that the paper be submitted exclusively to their journal for consideration.
What do I win if I don’t keep this rule?
If I get rejected by both, I will have found out earlier that the paper is not worthy of publication.
If I get accepted by one of the journals, this is very good. I will have found out faster which of the two journals is willing to publish me.
If I get accepted by both, this is really a dream. I should probably find a reason (excuse) to withdraw the publication from one of them.
In any case how can submitting to two journals negatively impact my reputation more than the gain by being actually accepted? (More so considering I will probably leave academia after my PhD dissertation.)
What can I lose if I don’t adhere to this rule? What sanctions, if any, can I expect?
"What can I lose if I don't keep this rule?" — Your reputation.
possible duplicate of Can we submit our research paper for review at two IEEE conferences simultaneously?
If you're going to lie, quit the PhD and just tell everybody you finished it. While you're at it you can fabricate research results.
How is it in your best interests not to date two people while telling each of them that they're the only one? If they both break up with you, then you'll have found out earlier that you are unworthy of love. If one wants to marry you, then you'll have found the right one faster than you would have by dating only one at a time. If they both want to marry you, then it's really a dream and you should probably find an excuse to break up with one of them. In any case, how can cheating on them with each other negatively impact your reputation more than the gain from finding your soul mate?
Wasting the time of two groups of people (editors, reviewers, staff, etc.) for your personal benefit. It is absolutely unethical.
Related Question "Is Honesty Really the Best Policy?"
@AnonymousMathematician How is it in your best interest to submit query letters to multiple publishers outside acadamia? How is it in your best interest to pursue two different job offers at the same time? Why the comparison between scholarship and marriage and not business?
@Michael: The other cases you mention aren't analogous because nobody is asking for exclusivity. The same applies to dating or academic paper submission: there's nothing wrong with dating two people at once if they both understand that the relationship is not exclusive, and there's nothing wrong with submitting a paper to two journals at once if they both allow simultaneous submissions. (That's how law reviews work, for example.) The ethical problem lies in pretending to comply with the other party's conditions while not actually doing so.
(Note that the original question explicitly states at the beginning that these journals disallow simultaneous submissions, but asks whether it would be in the author's interests to do so anyway.)
@AnonymousMathematician Thanks, the way I read the question I thought the OP was implicitly questioning the reasoning and validity of exclusivity. Of course one should abide by whatever rules are in place, but so far the best argument I have seen for exclusivity is "it prevents politics from entering into the equation".
The other argument about wasting people's time doesn't seem to hold as much water since the same could be applied to non-exclusive dating and job searches, i.e. the potential benefits (publishing a good paper in this case) necessarily demands investing time without guarantee of success.
@Michael The difference is that the academic system is already strained by the peer review process close to (or, in some cases, past) the breaking point. You'll understand that people have very hard feelings towards multiplying this overhead by 3 or 4 by allowing everybody to submit everything everywhere at the same time.
It is quite likely that there is an overlap in who gets asked to referee the two submissions, which would lead to the double submission being detected.
Double submission of papers is sufficient ground for retraction - even after the paper has been accepted for publication. As some journals publish submission dates with published papers, it is conceivable that your double submission is uncovered after publication - and that you end up with a retraction, ie no published paper.
Academic misconduct (and double submission is such) typically is sufficient for dismissal from a PhD program.
I doubt that double submission will be considered academic misconduct by everybody.
I agree - double submission is not going to get you kicked out of a PhD unless you make a habit of it after being told why you shouldn't.
Referees are very unlikely to detect double submission, but when you paper is published, the submission and acceptance dates are published. This makes is trivial for the second journal to detect double submission.
It's worth noting that retractions are permanently public, and might directly describe what you did wrong.
Simultaneous submission to more than one journal can be lethal to your reputation.
For an example, see this article from COPE (Committee On Publication Ethics). In that case, the author(s) were effectively blacklisted by all the journals to which they had submitted simultaneously, and the information was made public. Even if you are planning to leave academia, you do not want a reputation of being willing to break the ethical norms in your field. And it is taken seriously.
Editors take this so seriously that they may ban authors from submitting to their journal if they have broken the rules.
As @Arno noted, double submission may be sufficient to get you dismissed from your PhD program--this is serious academic misconduct. Also, if the double submission is discovered after acceptance, you may well see your paper being retracted. You might thus end up with neither a publication or a doctorate!
Other answers have touched on the reasons why you should not submit to more then one journal at once, including the non-trivial consideration of wasting reviewers' time with a submission that you will retract if another journal accepts first. In addition, simultaneous submission to multiple journals will increase the cost of publishing journals, thus increasing the subscription cost for every one of us.
There are multiple reasons to submit to only one journal at a time, from respect for others' time and effort to consideration of your reputation and future in the field. As mentioned, the stigma of being perceived to have attempted to cheat the system is so severe that it may well follow you even outside academia. Just don't do it!
Putting it more briefly: It may help you get published the first time but keep you from getting published again. I believe you'd agree that this is "not to your advantage."
Just as a note which won't fit in a comment, the use of single submission is wide-spread in the western academy outside of law. Legal academics in the US, on the other hand, are expected to pursue a multiple-submission strategy where they shop their articles to many journals at the same time. There is all sorts of gamesmanship and politicking when it comes to finding one to publish it, which I see as a negative for their publishing environment. Some of their issues are probably also due to the fact that the vast majority of law journals are the so-called "law reviews" which are run and edited by second and third-year law students with little to no faculty input. There is no peer review or blind (single nor double) review.
The standard accusation about this model is that articles are accepted based on author prestige not quality or correctness since third year law students are not experts and cannot evaluate quality. Furthermore, it is often asserted that authors choose their venue based on the highest prestige law review that makes them an offer. As such, there is some brinksmanship around who offers when and how long authors have to accept. As a result of all this, many submitted articles by prestigious professors are not ready for publication and require substantial work with the student editors.
If you'd like to know what it's like to publish in such an environment, there's lots of blogging and literature on the state of legal academic publishing. I think you'd find that since peer review is volunteer and that acceptance happens after all the reviews are conducted, reviewers would dry up. I certainly wouldn't volunteer in an environment where I knew that the article could get yanked out of the journal I reviewed for because it got accepted by a more prestigious one. Single-submission helps prevent the volunteer peer-reviewing model from falling apart.
I had no idea that the legal model was so ridiculous. Shopping for journals and forcing multiple editor groups to waste their time seems like a horrible way to decide things.
It's a different community with different goals. I tend to think it's suboptimal, but it seems to work for them. I wouldn't want my field done this way, but it's not my field. I could put up a strong criticism of the way we do things, too, for what it's worth. My understanding is that most of the editor's work of prepping an article in the legal academy is done after acceptance, not before, so it's not as bad as you think, workload-wise. Also, tradition is a very powerful thing.
@aeismail Are editors obligated to review every single submission in detail? I thought they already got more submissions than they had time for.
@Superbest: They're supposed to read through the article to make sure they satisfy at least minimal criteria before sending it on. (At least in reputable science journals.)
OK, I take your point about this being too long for a comment but it doesn't even begin to try to answer the question.
@DavidRicherby, it does, just not in the way that you'd like. Everyone here seems ignorant of other academic cultures where multiple simultaneous submission is not only normal but practically required. I think it worth having a counter-point to our typical model on display so that people who read this question understand that the model is historical, culturally contingent, and not the only option.
@BillBarth Answering the question in a way I don't like would be saying, e.g., "Sure, go ahead and submit to multiple journals against their rules. It won't hurt you at all." That is not what you're saying. The question is whether there is any disadvantage to breaking the rules for submission to journals (in this case, rules that forbid multiple simultaneous submissions). You are discussing whether journals should have different rules for submission (in this case, allowing multiple simultaneous submissions). That is interesting but it doesn't answer the question.
When I read this, I understand the answer to "How is it in my best interest not to submit a paper to two journals simultaneously?" to be "It's in everybody's best interests, including yours, to disallow double submission. Here's an example illustrating why." Bill, if that's accurate, perhaps you could add a line like that at the top (and remove the "just a note" thing.)
To submit the same material to two journals simultaneously is against the ethics of publishing. When you submit to many journals, you will be specifically asked to verify that your work is not under consideration in another journal. This has to do with copyright. Your paper will be published somewhere and under a specific copyright. If the same paper then appears somewhere else, it will likely be subject to a different copyright. Journals and publishers therefore look very seriously at such attempts. You may be rejected by both in the end and, as was stated in other answers, your reputation will be ruined very quickly.
Copyright doesn't really come into this at all. Plenty of journals just ask for a non- exclusive permission to distribute, but you still musn't double-submit. The point of the rule is not to waste other peoples time, and to discourage submitting bad stuff everywhere in the hope that it slips through eventually.
You are not correct when you say "at all". ... and I was merely trying to add to the points made by, for example, yourself.
I think you have a good point. It may be that journals are no longer all requiring exclusivity, though some still do, AFAIK. The point is not that you will publish twice, thus breaking the requirement, which could mean legal trouble. But it may force you to confess that you have to withdraw because you also submitted elsewhere. Since double submission is poorly regarded for reasons expounded in other answers, copyright requirements may be a good guarantee that you will be uncovered. But academic cheating is always a dangerous game. (+1)
One thing not mentioned yet is that publisher policies typically have a line forbidding multiple submission. For example, Springer's publishing ethics page requires that "The manuscript has not been submitted to more than one journal for simultaneous consideration", while Wiley's research integrity page says "The Copyright Transfer Agreement, Exclusive License Agreement or the Open Access Agreement, one of which must be submitted before publication in any Wiley journal, requires signature from the corresponding author to warrant that the article [...] is not being considered for publication elsewhere in its final form."
If you dual submit, you are violating the publisher's policies. If you are detected, there's a chance you'll be blacklisted by the publisher (not just the journal), which will close all the publisher's journals to you.
Being honorable means striving to meet all the things that are expected of you. It would be dishonorable to game the system as you appear to be contemplating doing.
Be honorable. It's that simple.
I can't get more out of this answer than "You should strive to be honorable at all times. Indeed, not doing so would be dishonorable." True, but...
I think being honorable implies "over time". A single act of honor does not make one "honorable." However, a single act of dishonor could very well make you dishonorable. Given that it can be exceedingly hard to recover from that, it's probably best to keep it on the up and up, the straight and narrow, and stay away from slippery slopes and gray areas. If you poke the tiger and get bitten, nobody will feel sorry for you.
Aside from the ethical problems, I would think that trying to prepare the paper for two publishers simultaneously would be counterproductive, and that the quality of the paper would suffer.
Each journal has different requirements for article length, organization, style, abbreviations, and so forth. Journal requirements may overlap, but a paper prepared for one journal generally requires at least some revision before it's ready to submit to a different journal.
Journals are more likely to publish papers that match the background and interests of their audience. Considering what the audience understands and what bits of knowledge need to be explained rather than assumed is best done for one audience at a time.
Most journals I've worked with require the authors to state (as part of the submission process or in a cover letter) that no part of the article has been published, and that the paper is not under consideration elsewhere.
This is very much field dependent. Most of the journals I submit to (theoretical computer science) have no length requirements and no particular formatting requirements at the point of submission. Organization is up to the authors and there are no standard abbreviations.
It certainly is in your best interest, otherwise there would be no need to prevent you from doing so. As far as I can tell, it seems like an edict against simultaneous publication somehow turned into an axiom that simultaneous submission is deeply unethical. It is not. (If others have more insight into this history, I'd love to know more!) It is not unethical for you to have your work under consideration at more than one journal, and it's certainly not "a crime." Arguments to the contrary tend to hinge on one of two things:
That it raises the cost of the journal. I do not work in a field that pays its reviewers. If you do, perhaps this argument has more merit. In most disciplines, however, review is volunteer work. It does not raise the cost of the journal.
If it doesn't raise the cost of the journal, then the argument is that you are wasting people's time. You are not. If I receive an article I have seen before, I can submit the same review and be done with peer review for the day. Copy-paste is there, and it does not waste my time.
My personal belief is that it is unethical to prevent simultaneous submission. People are precariously employed. Others are on a 5-year tenure clock. It is deeply wrong to waste their time sitting on an article for months before telling them you are not interested. Sure. If your reviewers suggested minor revisions and you really want to publish the piece, you can ask the author to withdraw it from consideration elsewhere. But, at the very least, the initial outset should allow simultaneous submission.
I am not surprised that law reviews tend to operate this way. Those journals are obviously interested in ethics and legal considerations (it's one of their fields!), and their conclusion is that simultaneous submissions should be allowed. I agree. Unfortunately, you will risk offending people and possibly being "blacklisted." Then again, I have to wonder if this is an urban myth. I have heard about people being punished for simultaneous submissions, but I have never actually seen it happen...
The chance that two journals select the same referees is very small. Double submission wastes a lot of other academics' time, usually the editors' and the referees'. So point (2) is not correct.
Regarding your second point, I believe the argument is that if two or more different sets of reviewers perform the review, all but one of their efforts is wasted.
You say that people are precariously employed or on 5-year tenure clocks, so it's an immoral waste of their time. But your reviewers are almost certainly also precariously employed or on 5-year tenure clocks, and they are taking their valuable time to review your paper for no gain to themselves.
These comments do not convince me that simultaneous submission wastes the editors' or reviewers' time in any meaningful way. Copy-paste is not a waste of time that is comparable to making an author wait months for a first-round rejection.
It is often a condition of submission to declare that the work is novel and not currently considered by another journal.
| common-pile/stackexchange_filtered |
MySQL: get last success and last fail in one select
Goal:
Get the last success and the last fail in one select query:
Expected output:
bj, feature, name, env, lastSuccess, lastFail
(varchar(100), varchar(100), varchar(100), varchar(1), timestamp, timestamp)
bj1, ft1, example, a, 10-04-2017 16:19, 10-04-2017 15:20
bj1, ft1, example, t, 10-04-2017 16:19, 10-04-2017 15:20
Table script_status (source):
created, bj, feature, name, env, online, id
(timestamp, varchar(100), varchar(100), varchar(100), varchar(1), tinyint(1), int)
10-04-2017 16:19, bj1, ft1, example, a, 1, 900
10-04-2017 15:20, bj1, ft1, example, a, 0, 899
10-04-2017 16:19, bj1, ft1, example, t, 1, 898
10-04-2017 15:20, bj1, ft1, example, t, 0, 800
.
.
.
10-03-2017 16:19, bj1, ft1, example, a, 1, 600
10-03-2017 16:19, bj1, ft1, example, a, 0, 500
(online is 1 on success 0 on fail)
This is how far I got, but this doesn't work:
(select s.* from script_status s where s.id in (SELECT max(ss.id) as id from script_status ss group by ss.bj, ss.feature, ss.name, ss.env))
left JOIN
(select f.* as lastFailed from script_status f where f.id in (SELECT max(ff.id) as id from script_status ff where ff.online = 0 group by ff.bj, ff.feature, ff.name, ff.env)) on s.bj = f.bj and s.feature = f.feature and s.name = f.name
left JOIN
(select p.* as lastSuccess from script_status p where p.id in (SELECT max(pp.id) as id from script_status pp where pp.online = 1 group by pp.bj, pp.feature, pp.name, ff.env)) on s.bj = p.bj and s.feature = p.feature and s.name = p.name
Edit your question and provide sample data and desired results.
I don't see the point why it has to be one query, however you can use UNION to combine your queries
See http://meta.stackoverflow.com/questions/333952/why-should-i-provide-an-mcve-for-what-seems-to-me-to-be-a-very-simple-sql-query - and store dates using an appropriate data type.
I typed the example without the db next to me, but the date fields are stored (as is written between the brackets) in a timestamp field.
SELECT
`bj`,
`feature`,
`name`,
`env`,
(SELECT MAX(`created`) FROM `script_status` b WHERE b.`bj` = a.`bj` AND b.`feature` = a.`feature` AND b.`name` = a.`name` AND b.`env` = a.`env` AND b.`online` = 1) as `lastSuccess`,
(SELECT MAX(`created`) FROM `script_status` c WHERE c.`bj` = a.`bj` AND c.`feature` = a.`feature` AND c.`name` = a.`name` AND c.`env` = a.`env` AND c.`online` = 0) as `lastFail`
FROM `script_status` a
GROUP BY `bj`,`feature`,`name`,`env`
Thanks, it was not exactly what I was after, but I managed to get the data as I wanted by changing the WHERE clause in the subqueries.
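The correlated-subquery approach from the answer can be checked end to end. Here is a small sketch using Python's stdlib sqlite3 rather than MySQL (so the backtick quoting is dropped, the timestamps are plain ISO text, and the schema is simplified); the query itself is the same shape, with the subqueries correlated on the group columns:

```python
import sqlite3

# In-memory stand-in for the MySQL table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE script_status (
    created TEXT, bj TEXT, feature TEXT, name TEXT, env TEXT,
    online INTEGER, id INTEGER)""")
rows = [
    ("2017-04-10 16:19", "bj1", "ft1", "example", "a", 1, 900),
    ("2017-04-10 15:20", "bj1", "ft1", "example", "a", 0, 899),
    ("2017-04-10 16:19", "bj1", "ft1", "example", "t", 1, 898),
    ("2017-04-10 15:20", "bj1", "ft1", "example", "t", 0, 800),
]
conn.executemany("INSERT INTO script_status VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# One scalar subquery per column, each correlated on the full group key,
# picking the latest success (online = 1) and latest fail (online = 0).
query = """
SELECT bj, feature, name, env,
       (SELECT MAX(created) FROM script_status b
         WHERE b.bj = a.bj AND b.feature = a.feature
           AND b.name = a.name AND b.env = a.env AND b.online = 1) AS lastSuccess,
       (SELECT MAX(created) FROM script_status c
         WHERE c.bj = a.bj AND c.feature = a.feature
           AND c.name = a.name AND c.env = a.env AND c.online = 0) AS lastFail
FROM script_status a
GROUP BY bj, feature, name, env
"""
result = conn.execute(query).fetchall()
for row in result:
    print(row)
```

With the sample rows above this yields one line per (bj, feature, name, env) group, each carrying the latest success and latest fail timestamps.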
Why isn't the identity/unit matrix upright?
I realize this is more of a typesetting problem than a mathematical one. I've already tried the TeX stack exchange and the question got canned.
In ISO 80000-2:2009, variables and running numbers are italicised while mathematically defined constants and contextual independents are to be upright. However, it advises that the identity matrix is either the italic E or I. (Either is boldface to indicate it is a matrix/vector/tensor)
Given a fixed matrix size, the identity isn't contextually dependent, and it's certainly constant. You can even define it purely in terms of the Kronecker delta, which is, according to the standard, set upright. So, if I were using tensor-notation, it would be upright in the form of the Kronecker! But the matrix notation equivalent is italic. This doesn't make sense to me.
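For concreteness, the two conventions could be typeset roughly like this (a sketch; the `bm` package is one common way to get the bold italic the standard suggests, and the package choices here are assumptions, not anything the standard mandates):

```latex
\documentclass{article}
\usepackage{bm} % \bm{...} for bold italic
\begin{document}
% ISO 80000-2's suggestion: bold italic I (or E) for the identity matrix
\[ \bm{I}\bm{x} = \bm{x} \]
% The arguably more consistent upright bold form, matching the upright
% Kronecker delta the standard itself prescribes:
\[ \mathbf{I}\bm{x} = \bm{x}, \qquad (\mathbf{I})_{ij} = \delta_{ij} \]
% (an upright delta itself would need e.g. the upgreek package's \updelta)
\end{document}
```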
Edit:
I seem to have kicked up a bit of dander, even somehow earning a downvote. Please note that I'm not advocating an ideology here. I take notation very seriously and I like to consult conventional wisdom before landing on something I like. This particular inconsistency made me curious and I thought that the broader wisdom of this community may enlighten me.
You should ask the ISO committee that decided on that.
Perhaps the ISO committee should have asked you.
@hardmath Why the sarcasm? It's an honest question that doesn't have a clear answer. I came here because of the diversity of experience in this community, hoping someone may have insight that I don't.
How does it propose to denote operators and/or functions? Perhaps it is based on the interpretation of the identity as an operator that took precedence. I do not have access to the article (the source I found on iso.org would require payment), but does it really matter what it says? So long as your reader understands your notation (be it based on context, or on an international standard), that is all one would hope for, right?
My point is that the gospel of an International Standards Organization can often be improved by those for whom the standards are promulgated. In the mathematical tradition authors are free to use the notation best suited to the purpose of exposition, provided definitions are provided for anything unconventional. ISO standards tend to favor engineering practices rather than purely scientific ones, so my "sarcasm" is really to encourage you to write like a mathematician and choose the notational convention you think best.
@MarianoSuárez-Alvarez That's a good idea, but I'm not sure who was on the committee. I wouldn't have been surprised if someone in this community was in fact on the committee. And, this way it is public. I know I'm not the first to point this out as I've found the same concern in books on typesetting.
@hardmath Correct me if I'm mistaken, but it reads like you have an ideological bent that you're taking out on me. I have nothing against inventing notations, I like looking at conventional wisdom before making decisions. Notation is something I take very seriously.
I applaud your thoughtful choice of notation, and hope nothing I've said struck a discouraging note. You've not really given the context in which you are choosing notation (a textbook, a lecture note, an operator guide/user manual). If you use $I$ or $E$ and make it clear by context that the matrix is the identity, your Readers will surely be able to follow.
@JMoravitz The content of the standard doesn't matter so much, since I will take the notation that makes sense to me. But I like to consult conventional wisdom before making decisions, and I find that particular ISO standard has proven useful as a reference. If there is a good reason that they violate their own convention for this particular case, I am curious to know what it is.
Thanks, @hardmath. I appreciate the encouraging note.
The full title of ISO 80000-2 is Quantities and units — Part 2: Mathematical signs and symbols to be used in the natural sciences and technology. It doesn't signify to me that these guidelines were intended to be binding or normative for mathematical exposition per se.
I don't think I know even one mathematician who is aware the standard even exists, let alone whether it prescribes a slanted I for the identity matrix, and I am sure no one I know cares.
Where do I put front-end code in my backend project and how/when to run it?
The question is: say I have written a backend REST service in Python, and then some other guy wrote some frontend code in Angular JS. Is the typical workflow putting them into two separate folders and running them separately? So the process would look like below:
python manage.py runserver
in the Python folder, and then probably
# in the angular2 folder
npm start
Or should I place all the JS code into some assets folder and when I start my server, all the JS code will run automatically with them? If so, how should I do it?
Another related question is: when is all the JS code sent to users' browsers? If the app is a client-side rendering app, is the code sent to the browser on the first request to the server? How does the server know it should package your JS code and ship it?
You need to serve an HTML page with <script> tag(s).
@SLaks Are you saying I should point to the Angular JS code I wrote in every backend HTML template?
That's not how SPAs work. You want a single HTML file that just loads the SPA.
Q)The question is, say I have written a backend REST service using Python, and then some other guy wrote some frontend code in Angular JS. Is the typical workflow putting them into two separate folders and run them separately?
Both Angular and Python can be run differently as well as together.
You could choose to place the Angular files (which qualify for all practical purposes as public files) in the public (or related) folder, depending on which framework you're using: Django, Flask, Web2py and so on.
You can also choose to run them independently as standalone apps.
Depends on your requirement. Honestly, there are so many ways to answer this question.
Q)Or should I place all the JS code into some assets folder and when I start my server, all the JS code will run automatically with them? If so, how should I do it?
If you place everything in your assets folder, then as soon as the home route or any other route is requested [from a browser], the public folder is served to the browser.
Although I'm not sure if it is the case with server side rendering.
If you're using Angular 1, I do not think it fits server side rendering although you could use some libraries out there.
Q)Another question related is, when is all the JS code sent to users’ browsers? If the app is a client-side rendering app, is the code sent to browser at the first time of server requesting? How does the server know it should package your JS code and ship it?
All the files in PUBLIC folder on the server are accessible by a browser.
All of your questions seem to essentially ask the same question.
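As a concrete sketch of the single-server option described above, here is a minimal stand-in using only Python's stdlib (everything here is illustrative: the `public` directory name and the `/api/` prefix are assumptions, and a real project would use Django/Flask rather than `http.server`). Anything under the public directory is served as-is to the browser, while `/api/*` paths are answered by backend code:

```python
import json
import os
import tempfile
import threading
import urllib.request
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class AppHandler(SimpleHTTPRequestHandler):
    """Serve /api/* as JSON from backend code; everything else as static SPA assets."""

    def do_GET(self):
        if self.path.startswith("/api/"):
            body = json.dumps({"route": self.path}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Falls through to plain static file serving (the public assets).
            super().do_GET()

# Demo: a throwaway "public" dir standing in for the Angular build output.
public = tempfile.mkdtemp()
with open(os.path.join(public, "index.html"), "w") as f:
    f.write("<html><body>SPA shell</body></html>")

server = ThreadingHTTPServer(("127.0.0.1", 0), partial(AppHandler, directory=public))
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]

html = urllib.request.urlopen(base + "/index.html").read().decode()
api = json.loads(urllib.request.urlopen(base + "/api/items").read().decode())
server.shutdown()
print(html)
print(api)
```

The browser first downloads the static shell (the SPA), and the JavaScript running in it then calls the `/api/...` routes; the two-standalone-apps setup is the same picture with the static part served by a second server or CDN instead.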
Thanks a lot for your answer! So if I decide to run the Angular JS and Python code as two standalone apps, when does the browser get the JS code?
The browser gets the JS code from the Angular app. It then makes requests to the Python server (which serves as your API/backend). The server in this case does not usually serve the JS.
There are many approaches to this problem.
If infrastructure management is difficult for you, then maybe it's easier to place them on the same server. You can create another directory and place the HTML that is served to the browser there along with your JavaScript code.
In case you have a good pipeline (which I think pays for itself), having another server that serves your application is better. You can do more things without affecting your service, like caching etc. And the server that runs your service won't be busy serving assets as well.
I have a case where we run a Node server, which serves the HTML and JavaScript to our users, because that way we can do server-side rendering.
The flow of code execution will be this: once the user's browser hits the server that serves the HTML and assets, it will download them locally and start JavaScript execution (parsing, compiling and executing). Your JavaScript app might do some API calls upon initialization or it might not, but your service will be executed only if the frontend makes a request.
As for CORS, it is irrelevant how you serve them, because you can keep everything on the same domain.
Ajax call from CakePHP
I am new to CakePHP. I want to update a dropdown corresponding to a different dropdown. Most of the tutorials on the internet are outdated.
This is my script file
<script type="text/javascript">
$(document).ready(function($){
$('#city_change').change({
source:'<?php echo $this->Html->url(array("controller" =>"officers","action"=> "locality_ajax")); ?>
});
});
</script>
My action in the controller
public function locality_ajax() {
$city_name = $this->request->data['Post']['city'];
$locality = $this->Locality->find('all', array(
'conditions' => array('city_name' => $city_name),
));
$this->set('locality',$locality);
$this->layout = 'ajax';
}
Any help would be appreciated
The Ajax call is not able to call the above method.
http://stackoverflow.com/questions/19071022/cakephp-ajax-call
This one and its examples are not: http://www.dereuromark.de/2014/01/09/ajax-and-cakephp/
If the framework doesn't have a ton of support, you should have chosen CodeIgniter or something widely used, so if you're stuck there are forums and tons of other help for that framework.
CakePHP is PHP, and it works alongside jQuery, CSS and other web standards. You don't have to use CakePHP's built-in functions if what you need isn't there. You can create your own PHP function within the framework since it runs on PHP, so you're OK if the help isn't there.
Let's say you have 2 select drop-downs, and you want to update one dropdown corresponding to a different drop-down; here is what you'd do:
<select name="select1" id="category">
<option value="0">None</option>
<option value="1">First</option>
<option value="2">Second</option>
<option value="3">Third</option>
<option value="4">Fourth</option>
</select>
<select name="items" id="select2">
<option value="3" label="1">Smartphone</option>
<option value="8" label="1">Charger</option>
<option value="1" label="2">Basketball</option>
<option value="4" label="2">Volleyball</option>
</select>
add this JQuery to your file or the view:
$("#category").change(function() {
if($(this).data('options') == undefined){
/*Taking an array of all options-2 and kind of embedding it on the select1*/
$(this).data('options',$('#select2 option').clone());
}
var id = $(this).val();
var options = $(this).data('options').filter('[label="' + id + '"]');
$('#select2').html(options);
// alert(options);
});
here is a JSFiddle: http://jsfiddle.net/PnSCL/2/
if you want to experiment.
Thanks for the reply. But the problem is that I want to dynamically get the values from the database according to the value selected in field 1.
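For the database-driven case from the comment above, one common pattern is to fetch the matching rows on change and rebuild the second dropdown from the JSON response. Everything here is a sketch with assumed names (`localityUrl`, the `{id, name}` row shape, a `locality_ajax` action that returns JSON): the option-building part is plain JavaScript, and the jQuery wiring is shown as comments since it needs a real page to run against.

```javascript
// Turn rows returned as JSON, e.g. [{id: 1, name: "Downtown"}, ...],
// into <option> markup for the second select.
function buildOptions(rows) {
  return rows.map(function (r) {
    return '<option value="' + r.id + '">' + r.name + '</option>';
  }).join('');
}

// jQuery wiring sketch (assumes the page already loads jQuery and that the
// controller action returns the matching Locality rows as JSON):
//
// $('#city_change').on('change', function () {
//   $.getJSON(localityUrl, { city: $(this).val() }, function (rows) {
//     $('#locality').html(buildOptions(rows));
//   });
// });
```

On the CakePHP side this implies the `locality_ajax` action serializes its result as JSON instead of rendering an HTML view.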
Passing C# data to MySQL
I am trying to pass data from some textboxes in C# to a MySQL database, but every time I click submit in my C# application, the inserted row contains empty values, as if my textboxes were empty.
public void Insert()
{
string query = "insert into albertolpz94.miembros(usuario,contraseña,permisos,nombre,apellidos) values('" + userAdmin.registroUsuario.Text + "','" + userAdmin.registroContraseña.Text + "','" + userAdmin.registropermisos2 + "','" + userAdmin.registroNombre.Text + "','" + userAdmin.registroApellidos.Text + "');";
//open connection
if (this.OpenConnection() == true)
{
//create command and assign the query and connection from the constructor
MySqlCommand cmd = new MySqlCommand(query, connection);
//Execute command
cmd.ExecuteNonQuery();
//close connection
this.CloseConnection();
}
}
registroUsuario, for example, is a public textbox in my userAdmin class; the database is named "albertolpz94" and the table is "miembros". I would like to know why this is happening, or how to use parameters (as someone suggested), as I am new to using MySQL from C#.
Try to use parametrized query - http://stackoverflow.com/questions/652978/parameterized-query-for-mysql-with-c-sharp
So? Plug in a debugger, set a breakpoint and look at the SQL you so clumsily are putting together (instead of using parameters).
What do you mean with the row creates empty spaces? There is a new row in your table and this row has no data?
Yes, that what I mean. It creates a new row in the table, and values are empty. (Even though I set those to be NotNull)
I'm baffled by how you can have a row of empty values with a NOT NULL constraint on the columns... I know MySQL's awfully quirky, but that's ridiculous.
First off, know that the connection and command variables need to be properly disposed of. We ensure this by using a using block like so:
using (MySqlConnection conn = new MySqlConnection())
using (MySqlCommand cmd = new MySqlCommand()) {
// ensure your connection is set
conn.ConnectionString = "your connection string";
cmd.Connection = conn;
}
Then, work up your CommandText using @ParamName to denote parameters:
using (MySqlConnection conn = new MySqlConnection())
using (MySqlCommand cmd = new MySqlCommand()) {
// ensure your connection is set
conn.ConnectionString = "your connection string";
cmd.Connection = conn;
cmd.CommandText = "INSERT INTO ThisTable (Field1, Field2) VALUES (@Param1, @Param2);";
}
Next, use cmd.Parameters.AddWithValue() to insert your values. This should be done so the framework can protect you from SQL injection attacks (Rustam's answer is very dangerous, as is the way you were doing it):
using (MySqlConnection conn = new MySqlConnection())
using (MySqlCommand cmd = new MySqlCommand()) {
// ensure your connection is set
conn.ConnectionString = "your connection string";
cmd.Connection = conn;
cmd.CommandText = "INSERT INTO ThisTable (Field1, Field2) VALUES (@Param1, @Param2);";
cmd.Parameters.AddWithValue("@Param1", userAdmin.YourProperty);
cmd.Parameters.AddWithValue("@Param2", userAdmin.YourOtherProperty);
}
and finally, you can then open the connection and execute it:
using (MySqlConnection conn = new MySqlConnection())
using (MySqlCommand cmd = new MySqlCommand()) {
// ensure your connection is set
conn.ConnectionString = "your connection string";
cmd.Connection = conn;
cmd.CommandText = "INSERT INTO ThisTable (Field1, Field2) VALUES (@Param1, @Param2);";
cmd.Parameters.AddWithValue("@Param1", userAdmin.YourProperty);
cmd.Parameters.AddWithValue("@Param2", userAdmin.YourOtherProperty);
cmd.Connection.Open();
cmd.ExecuteNonQuery();
cmd.Connection.Close();
}
And finally, realize that when these command and connection variables exit the using block, they will be properly disposed of. The using block ensures proper disposal regardless of any exceptions that might be thrown or whether we forget to close/dispose ourselves.
Now, for helpful debugging, wrap the core execution and connection opening in a try/catch block, and write the exception to the output window if there is one:
using (MySqlConnection conn = new MySqlConnection())
using (MySqlCommand cmd = new MySqlCommand()) {
// ensure your connection is set
conn.ConnectionString = "your connection string";
cmd.Connection = conn;
cmd.CommandText = "INSERT INTO ThisTable (Field1, Field2) VALUES (@Param1, @Param2);";
cmd.Parameters.AddWithValue("@Param1", userAdmin.YourProperty);
cmd.Parameters.AddWithValue("@Param2", userAdmin.YourOtherProperty);
try {
cmd.Connection.Open();
cmd.ExecuteNonQuery();
cmd.Connection.Close();
} catch (Exception e) {
System.Diagnostics.Debug.WriteLine("EXCEPTION: " + e.ToString());
}
}
At this point, set a breakpoint in the exception catch and check your locals window to see what your state is... in locals, you should see your cmd variable... expand it, look for the Parameters collection, expand that, and check the count and values, etc.
If setting my parameters to userAdmin.textBox1, this is what my database receives: "System.Windows.Forms.TextBox, Text:"; if setting my parameters to userAdmin.textBox1.Text, I get blank values.
In this case it sounds like a binding problem with the textbox. Your requesting the value of the textbox itself appears to be returning null or an empty string... can you confirm this? (e.g., before you start doing any of the db work, throw a breakpoint and use the immediate window to check, or output the values to the output window as described in the catch block of my answer)
The best option for you is to debug and find out whether the input reaches the query section. Likewise, make sure you OPEN the connection before executing the query. It is not best practice to concatenate input variables into the query like you are doing. Instead, try this:
public void Insert()
{
string query = string.Format("insert into albertolpz94.miembros(usuario,contraseña,permisos,nombre,apellidos) values('{0}','{1}','{2}','{3}','{4}')", userAdmin.registroUsuario.Text, userAdmin.registroContraseña.Text, userAdmin.registropermisos2, userAdmin.registroNombre.Text, userAdmin.registroApellidos.Text);
//DOUBLECHECK IF YOU REALLY OPENED A CONNECTION
if (this.OpenConnection() == true)
{
//create command and assign the query and connection from the constructor
MySqlCommand cmd = new MySqlCommand(query, connection);
//Execute command
cmd.ExecuteNonQuery();
//close connection
this.CloseConnection();
}
}
Also, check the validity of your connection
Sorry, but the downvote is for suggesting to put non-parameterized formatted strings into a querystring, and not properly ensuring connection disposal.