Route attributes range constraint vs range attribute validation
With the new attribute routing and its constraints, the border between route constraints and validation attributes on my DTOs is getting blurrier and blurrier.
I can now do a Range validation in two ways - consider that the first approach has an action with a complex class + Weight property -
When should I use which approach and what are their advantages vs the other approach?
[Range(0,500)]
public int Weight {get;set;}
vs
[GET("{id:range(0, 500)}")]
public Machine GetMachine(int weight)
{
}
The first approach results in a Bad Request.
The second approach results in a Not Found response.
Your last two sentences summarize it well. In the case of [Range], that sets up validation of the input so it can tell the caller their request was bad. In the second, it's defining rules for matching a URL to the route, which would be used for sending requests to different routes, not for validating input. The second could be useful, for example, if you have IDs of different formats that you want to handle with different methods.
In short, the UX for the first would be "I must be sending invalid data" and for the second "I must have the wrong URL."
Update:
Here's a demonstration of what I meant for how to use the route attribute for different formats:
[GET("/users/{id:int}")]
public User GetUserById(int id)
{
}
[GET("/users/{email:regex(^[^@]+@[^@]+$)}")]
public User GetUserByEmail(string email)
{
}
[GET("/users/{username}")]
public User GetUserByName(string username)
{
}
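The selection these constraints drive is just pattern matching on the URL segment. As a rough illustration of the dispatch (a Python sketch of the idea, not ASP.NET's actual routing engine):

```python
import re

# Hypothetical re-creation of the matching order above: the integer
# constraint first, then the e-mail regex constraint, then the
# unconstrained fallback.
EMAIL = re.compile(r"^[^@]+@[^@]+$")

def pick_route(segment):
    if segment.isdigit():          # {id:int}
        return "GetUserById"
    if EMAIL.match(segment):       # {email:regex(...)}
        return "GetUserByEmail"
    return "GetUserByName"         # {username}

print(pick_route("42"))                 # GetUserById
print(pick_route("alice@example.com"))  # GetUserByEmail
print(pick_route("alice"))              # GetUserByName
```

A weight outside the constrained range simply fails to match, which is why the framework answers 404 rather than 400.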
"...if you have IDs of different formats that you want to handle with different methods." I do not understand this, would you mind slapping 2 actions in your answer? :-)
@Pascal, updated the answer. Hopefully is more clear.
So the route constraints actually define ONE route. The difference is that which action is finally chosen is resolved at runtime. Yes... but I still don't know whether I should use the validation range or the constraint range. Hm...
It depends. If someone provided a weight that was out of range, do you want to tell them the resource is not found or that the request was bad? I'd guess you'd want the latter (Range attribute).
Well this is to be discussed in our team tomorrow then ;-)
| common-pile/stackexchange_filtered |
Which one has better complexity: f1 = (n+m) + (n+m)log(n+m) or f2 = n*m?
Which function, f1 or f2, has better time complexity if
f1 = (n + m) + (n + m) * log(n + m)
and
f2 = n * m
It is easy to see that for large values of n and m, (n+m) + (n+m)log(n+m) = (n+m)(log(n+m) + 1) = O((n+m)log(n+m)), whereas n*m will be much larger for large values of n and m. Hence (n+m) + (n+m)log(n+m) is better.
It depends. To choose the winner from
f1 = (n + m) + (n + m) * log(n + m)
f2 = n * m
we should know what m and n are, and what the relation between n and m is. For instance:
Let m be constant:
f1 = O((n + m) + (n + m) * log(n + m)) = O(n + n * log(n)) = O(n * log(n))
f2 = O(n * m) = O(n)
f2 is better.
Let m ~ n:
f1 = O((n + m) + (n + m) * log(n + m)) = O(2 * n + 2 * n * log(2 * n)) = O(n * log(n))
f2 = O(n * m) = O(n * n) = O(n**2)
now f1 is a better choice
Finally, let m ~ log(n):
f1 = O((n + m) + (n + m) * log(n + m)) = O(n + log(n) + n*log(n + log(n))) = O(n * log(n))
f2 = O(n * m) = O(n * log(n)) = O(n * log(n))
f1 and f2 have equal complexities
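The three regimes above can be checked numerically. A quick sketch (base-2 logarithms and the specific n are illustrative choices, not part of the question):

```python
import math

def f1(n, m):
    # (n + m) + (n + m) * log(n + m)
    s = n + m
    return s + s * math.log2(s)

def f2(n, m):
    return n * m

n = 10**6

# m constant: f2 = O(n) eventually beats f1 = O(n log n)
assert f2(n, 4) < f1(n, 4)

# m ~ n: f1 = O(n log n) beats f2 = O(n^2)
assert f1(n, n) < f2(n, n)

# m ~ log n: both are Theta(n log n), so they stay within a constant factor
m = int(math.log2(n))
ratio = f1(n, m) / f2(n, m)
assert 0.1 < ratio < 10
```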
Good sources for learning Markov chain Monte Carlo (MCMC)
Any suggestions for a good source to learn MCMC methods?
Related question: good summaries (reviews, books) on various applications of Markov chain Monte Carlo (MCMC)
Nice coverage of sampling methods is in Bishop https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf and MacKay https://www.inference.org.uk/itprnn/book.pdf
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduction to Markov Chain Monte Carlo simulations and their statistical analysis, by Berg (2004).
A Tutorial on Markov Chain Monte-Carlo and Bayesian Modeling by Martin B. Haugh (2021).
Practical Markov Chain Monte Carlo, by Geyer (Stat. Science, 1992), is also a good starting point, and you can look at the MCMCpack or mcmc R packages for illustrations.
fyi : 2nd two links in the list are broken
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book:
Introducing Monte Carlo Methods with R (Use R)
I know of it from following his (very good) blog
This book doesn't go in depth into MCMC. It actually has a page on it and skips MCMC theory in its entirety to go into Metropolis–Hastings.
While you are 100% entitled to your own opinion on our book, I beg to disagree with the notion that we do not go in depth into MCMC there. Judging from the question I do not think the OP was asking for the deep theory of MCMC algorithms (which is somehow covered in our earlier book).
Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.
A relative oldie now, but still a goodie.
Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.
Chapter 4, 'Inference from simulations and monitoring convergence' by Gelman and Shirley, is available online.
Looks set to kick Gilks, Richardson & Spiegelhalter (1996) into the long grass when it comes out in May.
Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Champan & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5.
-- a more recently updated book than Gilks, Richardson & Spiegelhalter. I haven't read it myself, but it was well reviewed in Technometrics in 2008, and the first edition also got a good review in The Statistician back in 1998.
Another classic (to accompany the already mentioned Introducing Monte Carlo Methods with R):
Monte Carlo Statistical Methods by Robert and Casella (2004)
in the Use R! series there is also:
Introduction to Probability Simulation and Gibbs Sampling with R by Suess and Trumbo (2010)
The text I have found most accessible is Bayesian Cognitive Modeling: A Practical Course. Very clear exposition. The book has great examples in BUGS, and they have been ported to Stan on its github examples page.
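For a concrete feel of what these references cover, here is a minimal random-walk Metropolis sampler (a Python sketch of the textbook algorithm, not code from any of the books above; the target density and tuning values are illustrative):

```python
import math
import random

def metropolis(log_density, x0, steps=50000, scale=1.0, seed=0):
    """Random-walk Metropolis: draw samples from an unnormalized log-density."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        # Accept with probability min(1, p(proposal) / p(x)),
        # computed on the log scale for numerical stability.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2 / 2 up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should come out close to 0 and var close to 1
```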
Is there a way to create a function that does not need parentheses?
Well, probably a strange question, I know. But searching Google for python and braces gives only one type of answer.
What I want to ask is something low-level and probably not very pythonic. Is there a clear way to write a function working with:
>>>my_function arg1, arg2
instead of
>>>my_function(arg1, arg2)
?
I'm searching for a way to make a function work like the old print (in Python <3.0), where you don't need to use parentheses. If that's not so simple, is there a way to see the code for "print"?
The old print is a statement, not a function. You cannot define a statement. Why do you want it?
Ok, I see I have looked at this with way too much of a C view. I have found some further information here: http://stackoverflow.com/questions/214881/can-you-add-new-statements-to-pythons-syntax
You can do that sort of thing in Ruby, but you can't in Python. Python values clean language and explicit and obvious structure.
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
What are you trying to do? If you are trying to embed this code into another program (non-python) or invoke from the interpreter somehow, can you use sys.argv as an alternative instead? Here is an example of how sys.argv works.
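The sys.argv example referred to above isn't included; a minimal sketch of the idea (the script name greet.py is hypothetical):

```python
# greet.py -- run as:  python greet.py Alice Bob
import sys

def main(argv):
    # argv[0] is the script path; the remaining entries are the
    # whitespace-separated arguments, so the command line itself
    # needs no parentheses or commas.
    return "Hello, " + " and ".join(argv[1:])

if __name__ == "__main__":
    print(main(sys.argv))
```

This gives the parenthesis-free call syntax at the shell level, which is often what people asking this actually want.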
The requirement for braces lies in the Python interpreter and not in the code for the print method (or any other method) itself.
(And as eph points out in the comments, print is a statement not a method.)
As you've already been told, print in Python 2.x was not a function, but a statement, like if or for. It was a "1st class" citizen, with its own special syntax. You are not allowed to create any statements, and all functions must use parentheses (both in Python 2.x and in 3.x).
Ironically, "print" was a second-class citizen: for example, it couldn't be passed around as an object, the way new "print" can.
How was SEELE able to find the last angel (Kaworu)?
At the end of the original series, Gendo Ikari seems to be diverging from their original plan, and to prevent this SEELE (the committee?) sends Kaworu to NERV directly, aiming to make contact with the angel in Terminal Dogma.
Ritsuko seemed to know his real identity, so why let it pass? (this is a comment, not the question).
But how was SEELE able to do this (detect, find and recruit the fifth child)? It is supposed to be the case that no one knows where the angels come from.
They had him since the beginning.
Kaworu's body was created as a result of the Contact Experiment with Adam, from the fusion of the DNA of an unknown donor and the flesh of the Angel. SEELE then salvaged Adam's soul into his body. This is hinted at in several dialogues.
Man E: "The contact experiment with the donor is scheduled for the
13th of next month. There will be time for any adjustments."
(later) Woman B: "The genes that dived into Adam have already undergone
physical fusion!"
Neon Genesis Evangelion. Episode 21
Hyuga: "But the one thing we do know [about Kaworu Nagisa] is that his birthday coincides with the Second Impact".
SEELE A (speaking to Kaworu): "[Adam's] salvaged soul exists only within you."
Neon Genesis Evangelion. Episode 24 Platinum subtitles
This connection is only hinted at in the original anime, while in the manga and in the Rebuild it is stated outright.
drive mountpoint automatic renaming
What is causing this to happen every few months?
BackinTime: "Can't find Snapshots location!"
Never use backintime. Please show getfacl /media/$USER. Normally udisks2 mounts hard drives under /media/your-user-name. When the mountpoint loses its ACL marks, udisks cannot remove the folder. The next time you plug in, it creates a new folder with a number as suffix.
You don't use BackinTime, or no one should use BackinTIme?
getfacl: Removing leading '/' from absolute path names
file: media/mgm
owner: root
group: root
user::rwx
user:mgm:r-x
group::---
mask::r-x
other::---
I mean I never use it. The permissions are okay. How did you start the backup? As the user or in the background over a cron job?
The main issue is not with rsync or BiT... the issue is that the drive being backed up to has its mountpoint somehow automatically renamed intermittently. You mentioned mountpoint folder security being changed... how is that happening; better, how is it prevented?
Does anyone know if this was fixed in "20.04.1"?
iOS - sqlite database replacement made replaced db readonly
While working on an iOS app, I created a database when the app launches. I also had a preloaded database in app/resources, which is actually the same database with some preloaded data. When I replaced the app database with the preloaded app/resource database via the NSFileManager copy API,
[fileManager copyItemAtURL:testDbUrl toURL:destination error:&error];
Problem
I found the newly replaced database became read-only. I cannot execute any insert operation, but reading is fine. How can I make the newly replaced database writable?
Are there any open connections?
How can I find it?
You should know where your app opens any databases ...
Thanks for your comment. When I reinitialize my cache db object, it becomes writable, so I assume there was some connection-related issue. @CL.
Finally, I resolved the issue by re-initializing my database helpers/managers.
Solution
Make sure that the database is replaced without any exception/error from the file manager
Make sure that the newly replaced database managers are re-initialized and reopened
How to serialize an object and string variable?
objectApiName = 'Account'
newFieldConfigJson={"Type":"Text","Value":"Name","Order":"8"}
I have these two values that I need to serialize in the below given format:
{"objectApiName":"Account", "fields": [ {"Type": "LABEL", "Value": "Please enter additional details below.", "Order": 1 }, {"Type": "FIELD", "Value": "AccountNumber", "Order": 2 }, {"Type": "FIELD", "Value": "NumberOfEmployees", "Order": 3 } ] }
How can I do this?
You could create a DTO (Data Transfer Object) class, set values as appropriate and serialize it.
For ex.,
class DtoField {
public String Type;
public String Value;
public Integer Order;
public DtoField(String type, String value, Integer order) {
this.Type = type;
this.Value = value;
this.Order = order;
}
}
class DtoObject {
public String objectApiName;
public List<DtoField> fields;
}
DtoObject obj = new DtoObject();
obj.objectApiName = 'Account';
obj.fields = new List<DtoField>{
new DtoField('LABEL', 'Please enter additional details below.', 1),
new DtoField('FIELD', 'AccountNumber', 2),
new DtoField('FIELD', 'NumberOfEmployees', 3)
};
System.debug(JSON.serialize(obj));
Which prints
{"objectApiName":"Account","fields":[{"Value":"Please enter additional details below.","Type":"LABEL","Order":1},{"Value":"AccountNumber","Type":"FIELD","Order":2},{"Value":"NumberOfEmployees","Type":"FIELD","Order":3}]}
how can I make this dynamic if I want the values to be supplied from a controller
The code example above can be put into a controller, and you can create instances of DtoField as many times as you want based on the SOQL data.
Another choice is to use generic maps and lists in your controller:
Map<String, Object> root = new Map<String, Object>{
'objectApiName' => 'Account',
'fields' => new List<Object>{
new Map<String, Object>{
'Type' => 'LABEL',
'Value' => 'Please enter additional details below.',
'Order' => 1
},
...
}
};
System.debug(JSON.serialize(root));
but +1 to metasync's DTO approach if you have many field instances because it locks down the names in the JSON rather than requiring them to be repeated.
If you have many values on an object to set, it can be better to use the fluent interface approach so each value is named rather than relying on the position in the constructor parameters:
class DtoField {
public String Type;
public String Value;
public Integer Order;
DtoField type(String Type) {
this.Type = Type;
return this;
}
DtoField value(String Value) {
this.Value = Value;
return this;
}
DtoField order(Integer Order) {
this.Order = Order;
return this;
}
}
so:
new DtoField().type('LABEL').value('Please enter additional details below.').order(1);
or:
new DtoField()
.type('LABEL')
.value('Please enter additional details below.')
.order(1);
This approach is particularly good if some values are optional, as it lets you leave the values out without having to have null placeholders.
UWP Hosted App (Javascript) - authentication
According to my research, UWP hosted apps should be able to use the current user to authenticate against web services / web pages, if the following is true:
Capabilities:
Enterprise Authentication
Private Network (Client&Server)
Internet Client
However - I experience the following:
If I disable anonymous authentication on the webpage:
The hosted app tries to load the page, gets a 401 (with "WWW-Authenticate: Negotiate" and "WWW-Authenticate: NTLM") and then..... just sits there and does nothing (no login dialog, no error, just displays the splash screen)
If I enable anonymous authentication, but [Authorize] my controllers:
The initial page loads OK (of course...there is no authentication)
The first call to a webservice will show the login dialog; subsequent calls are OK.
So - my questions:
Is what I want (automatically using the currently logged-in user for authentication) even possible?
If yes - what could be my problem?
Thanks in advance
Johannes Colmsee
Update:
It seems that (all observations I made in the last hour - the following are all "from remote PC connecting to host PC"):
my Kerberos settings were messed up (if you install Forefront, it will set everything up so that it works, but nothing else...)
After fixing that - I can connect to the page with "regular browsers"
However - if I try it from the UWP app, this happens:
if I use the IP address - after the first "401" response of the server.... nothing
if I use the "Hostname" (not the FQDN) - it communicates 3 times with the server (3x 401) - at this point a dialog should show up, but it does not.
Unfortunately I cannot use the FQDN (some name resolution problem, idk...)
Both the IP address and the hostname work fine in "regular browsers".
I cannot try out HTTPS right now (regular browsers I can shut up about certificate problems; the UWP-hosted app I can't)
Now....some observations from "local-to-local" connection:
Hostname: current user is picked up automatically
localhost: same
IP address: sits at splash screen
In this scenario I cannot watch the network-traffic (no fiddler or other means).
More Infos tomorrow maybe.
Can your current user log in successfully?
When I enter my current user in the login dialog that pops up when I do WebAPI calls - yes. The problem is that it is not "automatically used".
You mean that you need to type the user name and password each time, right?
The first time I call a WebAPI controller. Subsequent calls will use that user.
If I disable "anonymous authentication", there is no login dialog; the app simply "gives up" after the 401 response it gets (which seems to be "OK".... the WWW-Authenticate headers sent from IIS seem to be OK) (it gives up at loading index.html)
I have some more information on this problem - it might help others to fix their problems - so I add it as an extra entry instead of updating.
After some investigation, I found out that Edge also has problems loading the pages.
This thread has some information to work around / fix the issue:
https://social.technet.microsoft.com/Forums/de-DE/0face535-3c7a-4658-be34-6c376322ca34/microsoft-edge-cant-open-local-domains?forum=win10itpronetworking
For me - what worked was putting the page into the "trusted sites" list - after that, Edge did load the page.
About the "automatically use current user" thing - again, test it with Edge - if Edge does not use it, neither will your app.
For me - opening the page with "just the computer name" (vs. the FQDN) did use the "current user" for both Edge and the UWP app.
Maybe one could configure it in a way that the FQDN would also use the "current user" automatically.
More Information might follow.
What is the difference between destroySteps() and removeAllSteps() in SAP UI5?
I wanted to do something with a Wizard in SAP UI5 and came across those two methods. When looking at the method descriptions in the documentation, they have the same meaning (to me):
Where is the difference?
Thanks in advance!
See https://stackoverflow.com/a/71131717/5846045
Select box not showing properly on a survey form
div{
padding: 10px;
}
<div>
Occupation: <select name="occupation">
<option value="student1">Student (high school or less)</option>
<option value = "student2">University student</option>
<option value = "unemployed">Unemployed</option>
<option value="intern">Internship</option>
<option value="full-time">Full Time Job</option>
<option value="retired">In Retirement</option>
</div>
<div>
What do you like the most about our product? <select name="likes">
<option value="look">The Look</option>
<option value="functionality">Functionality</option>
<option value="usefulness">Usefulness</option>
<option value="users">Other Users</option>
</div>
I'm not sure what I'm doing wrong, and why the first select box is working and the other one isn't.
This is what it looks like:
Your HTML is invalid. You never closed the first (or second) select. You need </select>
The issue is due to missing closing tags.
You should close your div and select tags.
div{
padding: 10px;
}
<div>
Occupation: <select name="occupation">
<option value="student1">Student (high school or less)</option>
<option value = "student2">University student</option>
<option value = "unemployed">Unemployed</option>
<option value="intern">Internship</option>
<option value="full-time">Full Time Job</option>
<option value="retired">In Retirement</option>
</select>
</div>
<div>
What do you like the most about our product? <select name="likes">
<option value="look">The Look</option>
<option value="functionality">Functionality</option>
<option value="usefulness">Usefulness</option>
<option value="users">Other Users</option>
</select>
</div>
Vote to close as off-topic due to a typo. Don't farm rep from a question that has no value to anyone
@j08691 How is this a typo? The OP didn't close his tags twice; that's clearly not a typo. Furthermore, even if the OP had forgotten it, that doesn't mean it's of no value to the OP. Your reason to downvote is very flawed.
It's a typo in that he simply omitted a closing tag. It's a mistake. Has no value to anyone other than the OP.
@j08691 Which means it has value to someone.
I wouldn't know who would consider invalid markup as "helpful". Voted to close.
You are missing the closing tag of your first <select>:
div{
padding: 10px;
}
<div>
Occupation:
<select name="occupation">
<option value="student1">Student (high school or less)</option>
<option value = "student2">University student</option>
<option value = "unemployed">Unemployed</option>
<option value="intern">Internship</option>
<option value="full-time">Full Time Job</option>
<option value="retired">In Retirement</option>
</select> <!-- was missing -->
</div>
<div>
What do you like the most about our product? <select name="likes">
<option value="look">The Look</option>
<option value="functionality">Functionality</option>
<option value="usefulness">Usefulness</option>
<option value="users">Other Users</option>
</div>
Vote to close as off-topic due to a typo. Don't farm rep from a question that has no value to anyone.
I don't see how it could be off-topic.
For the same reason as one asking about invalid code that a compiler can't compile due to errors in the code. If the OP had run it through the validator, as everyone should be doing (right?), it would be glaringly obvious what the error is and he wouldn't have to waste our time with it.
How to get Saved WIFI SSID information (not saved wifi Password) by using adb shell command - on Non-rooted devices?
I am trying to find a way by which I can get a list of the remembered networks' SSID on an Android device.
I have seen a few threads asking similar questions. However, the few questions that I have found are trying to get the known network passwords, not the SSIDs; my question is about getting the saved WiFi SSIDs by using an adb shell command.
Is there an adb shell command to get the saved WiFi SSIDs?
Most recently on Android O developer preview 3, I can get that info this way (YMMV):
From linux:
adb shell bugreport | grep "mSavedSsids:"
You can also do it from the shell:
bugreport | grep "mSavedSsids:"
Just parse the strings that are within the brackets.
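For instance, a parsing sketch in Python (the sample line and SSID names are hypothetical; the exact bugreport format varies between Android builds):

```python
import re

# One line of (hypothetical) bugreport output containing the saved SSIDs.
line = "mSavedSsids: [HomeWifi, OfficeWifi, CoffeeShop]"

# Grab everything between the brackets, then split on commas.
match = re.search(r"mSavedSsids:\s*\[(.*)\]", line)
ssids = [s.strip() for s in match.group(1).split(",")] if match else []
print(ssids)  # ['HomeWifi', 'OfficeWifi', 'CoffeeShop']
```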
"adb bugreport" has been deprecated for the new "adb bugreport ", but the deprecated way still works for me now.
How to post file and data to api using httpclient
I am at the learning phase and I want to post a file and data to an API using HttpClient.
I have tried this.
Here is my controller code
string baseUrl = ServerConfig.server_path + "/api/Payment/AddMedicineOrder";
Dictionary<string, string> parameters = new Dictionary<string, string>();
parameters.Add("username",user.Username);
parameters.Add("FullName", FullName);
parameters.Add("Phone", Phone);
parameters.Add("CNIC", CNIC);
parameters.Add("address", address);
parameters.Add("Email", Email);
parameters.Add("dateofbirth", dateofbirth.ToShortDateString());
parameters.Add("Gender", Gender);
parameters.Add("PaymentMethod", PaymentMethod);
parameters.Add("Title", Title);
parameters.Add("PhramaList", medList);
HttpClient client = new HttpClient();
client.BaseAddress = new Uri("https://localhost:44391/");
MultipartFormDataContent form = new MultipartFormDataContent();
HttpContent content = new StringContent("fileToUpload");
HttpContent DictionaryItems = new FormUrlEncodedContent(parameters);
form.Add(content, "fileToUpload");
form.Add(DictionaryItems, "medicineOrder");
var stream = PostedPrescription.InputStream;
content = new StreamContent(stream);
content.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
{
Name = "fileToUpload",
FileName = PostedPrescription.FileName
};
form.Add(content);
var response =await client.PostAsync("/api/Payment/AddMedicineOrder", form);
var k =response.Content.ReadAsStringAsync().Result;
How can I receive this in the Web API method?
[HttpPost]
public async Task<API> AddMedicineOrder()//string key, [FromUri]MedicineOrder medicineOrder
{
    try
    {
        var request = HttpContext.Current.Request;
        bool SubmittedFile = (request.Files.Count != 0);
        this.Request.Headers.TryGetValues("medicineOrder", out IEnumerable<string> somestring);
        var k = somestring;
        return OK("Success");
    }
    catch (Exception ex)
    {
        return InternalServerError("Technical Error.");
    }
}
please help me. Thanks in advance
So what is not working?
Your Web API action doesn't accept parameters. Either add all 11 parameters to the action or create an object with those 11 properties. Don't try to read the parameters from the Request
If I add parameters (I have a class that has all the parameters) to the Web API, it gives null. I am only reading the data this way because the file does not get submitted otherwise. I get the file from the Request this way.
Try to convert the file contents into a byte[] (for example, by using File.ReadAllBytes(...)). If that doesn't work, try converting the byte[] into a Base64 string (i.e., Convert.ToBase64String(bytes)), and after that post the file to the service via HttpClient.
You need to add multipart/form-data support to your Web API. For this you can add a custom media type formatter which will read your JSON content as well as the file content, and you can bind that to a concrete model directly.
I still have to use my old e-mail address to sign-in, despite my new e-mail address showing up in "Your logins" page
Whenever I try to log in using my new e-mail address (which is listed in the "Your logins" page), I get a "No user found with matching email" message.
Update: I have solved the issue on my own and posted the answer below. No other solutions in related questions helped in my case.
This is probably the reason why: https://meta.stackexchange.com/a/341933/282094
Even though both the new and old e-mail addresses were appearing on my Logins page (Settings -> ACCESS -> Your logins), I wasn't able to log in with my new e-mail address.
My solution:
Delete - all - logins listed on the above-mentioned page, then
Use Account recovery (through "Forgot password" link found in the log-in form), then
Go to the password reset link sent to your e-mail, then
Type in the new password, the procedure is completed.
From then on, I have been able to log in using my intended e-mail address. Hope this helps someone.
Having issues with charset in my blazor project
I'm starting to work with Blazor, on a demo project, and I'm having a problem with the charset: it's not working; maybe it's not declared where it is needed, I don't really know.
So I have the charset declared in index.html (an HTML file inside wwwroot) and in my Index.razor component. In my Index.razor I'm calling another component with only body elements, containing the words that need accents.
The code that I have in index.html (I don't really know what this code does; it came with the demo):
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="ie=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no" />
<meta name="description" content="Guild Project"/>
<title>Mission Control</title>
<base href="/" />
<script src="localStorage.js"></script>
<link href="_content/RazorComponentsMaterialDesign/styles.css" rel="stylesheet" />
<script src="_content/RazorComponentsMaterialDesign/script.js"></script>
<script src="_content/RazorComponentsMaterialDesign/lib/material-components-web.js"></script>
<link href="css/Layout.css" rel="stylesheet" />
<script src="https://api.tiles.mapbox.com/mapbox-gl-js/v1.0.0/mapbox-gl.js"></script>
<link href="https://api.tiles.mapbox.com/mapbox-gl-js/v1.0.0/mapbox-gl.css" rel="stylesheet" />
</head>
<body style="margin: 0px">
<app>
<!-- "Loading" spinner -->
<div role="progressbar" class="mdc-linear-progress mdc-linear-progress--indeterminate">
<div class="mdc-linear-progress__buffer"></div>
<div class="mdc-linear-progress__bar mdc-linear-progress__primary-bar"><span class="mdc-linear-progress__bar-inner"></span></div>
<div class="mdc-linear-progress__bar mdc-linear-progress__secondary-bar"><span class="mdc-linear-progress__bar-inner"></span></div>
</div>
</app>
<script src="_framework/blazor.webassembly.js"></script>
<script src="_content/PinMapLibrary/PinMap.js"></script>
</body>
</html>
Index.razor code:
@page "/"
@using MissionControl.Client.Components;
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<div class="layout">
<div class="sidebar">
<Sidebar/>
</div>
</div>
</body>
</html>
Here I'm calling the Sidebar component, where I have the words that need accents.
Sidebar code:
<p>Permissões</p>
As you can see, I'm declaring the charset in index.html and in Index.razor, but it's not working. You can see that in the image below.
I think I put all the information that you need to know about this issue but if you have some doubts just leave a comment. Thank you for your attention.
Index.razor should not contain <html>, <head> or <body> tags. That certainly isn't helping.
No repro. I dropped <p>Permissões</p> on a page and it works correctly.
Create a [mcve]. Start with a new standard template and make the minimal changes to demonstrate the problem. Post all steps here, along with all relevant version numbers.
I just deleted the <html>, <head> and <body> tags but still have the same issue. I just finished testing and the problem is calling the component. So if I put that word in my Index.razor it's OK, but in my Sidebar component it is not working.
I copied it to a component: Still no repro.
Make sure you try it in different Browsers.
So I created a new Blazor project. I added my index.html to wwwroot. After that I had the default components (Pages folder -> Index.razor, Counter.razor...). I added <p>Olá</p> to my Index.razor page and it works fine. Then in Counter.razor I just added Olá, and that "Olá" didn't work.
example
As you can see, here is the default project and the issue. Yes, I tried with Firefox and Chrome, and got the same result.
Why did you "add your index.html to wwwroot"? That shouldn't be necessary. Also, which template?
I created an ASP.NET Core Web Application, entered the name, and chose Blazor Server App - that is the template.
Then you should have a _Host.cshtml, not an index.html. Remove that index.html again.
My internet went down, sorry for the delay. I deleted it, and I see that _Host has the charset implemented, but it's the same issue.
Did you try to save Index.razor with UTF-8 encoding?
Well, no clue here then. Consider rewriting your question with the minimal steps. When you see the error, use the "View Source" option in your browser and post the HTML.
@aguafrommars no I didn't.
@HenkHolterman So if you call Counter with `<p>Olá</p>` it's fine for you?
In VS select Index.razor, then File -> Save Index.razor As. Click the down arrow beside the Save button. Select Save with Encoding, and select Unicode (UTF-8 with signature) - Codepage 65001.
Did you try to do the same with Sidebar.razor? Basically, save everything with UTF-8 encoding.
@aguafrommars Omg finally. Thank you so so much!!!
Save all your source code with UTF-8 encoding:
In Visual Studio
select a file
click File -> Save {the file} As...
click the down arrow on save button
click Save with Encoding...
select Unicode (UTF-8 with signature) - Codepage 65001 in Encoding
click OK
To configure VS to save as Unicode per default :
Tools -> Options -> Environment -> Documents
Check Save documents as Unicode when data cannot be saved in codepage
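If you have many files to fix, the same re-save can be scripted outside of Visual Studio. A minimal sketch (illustrative Python; it assumes, for illustration, that the file was originally saved in Windows-1252 — `utf-8-sig` is Python's name for "UTF-8 with signature", i.e. UTF-8 with a BOM, codepage 65001):

```python
def resave_as_utf8_bom(path, source_encoding="cp1252"):
    """Re-save a text file as UTF-8 with signature (BOM)."""
    # Read using the encoding the file was originally saved in
    # (cp1252 is an assumption here; adjust to your source files).
    with open(path, "r", encoding=source_encoding) as f:
        text = f.read()
    # "utf-8-sig" writes the EF BB BF byte-order mark before the text,
    # matching Visual Studio's "Unicode (UTF-8 with signature) - Codepage 65001".
    with open(path, "w", encoding="utf-8-sig") as f:
        f.write(text)
```

Run it over each `.razor` file that shows mangled accents; after the re-save, Blazor serves the characters correctly.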
| common-pile/stackexchange_filtered |
Rails Devise Omniauth omniauth_openid_connect issue, how to work with endpoints with different hosts
I need some help in configuring omniauth_openid_connect gem (https://github.com/omniauth/omniauth_openid_connect). I have two endpoints one for Authorization and another for token:
Authorization endpoint:
https://oauth.provider.com/authorize
Token endpoint:
https://oauth-secured.provider.com/token
As you can see, each one has a different host, but they belong to the same provider. I'm not sure how to configure this in the gem, since you can only specify one host:
config.omniauth :openid_connect,
{
name: :openid_connect,
scope: [:openid],
issuer: "oauth.provider.com",
response_type: :code,
discovery: :true,
client_options:
{
port: 443,
scheme: "https",
host: "oauth.provider.com",
authorization_endpoint: "/authorize",
token_endpoint: "/token", #How to specify here correctly https://oauth-secured.provider.com/token
identifier: 'CLIENT_ID',
secret: 'CLIENT_SECRET',
redirect_uri: "https://myapp.com/users/auth/openid_connect/callback",
},
}
Doesn't look like that's configurable. The client only takes one host and endpoints are relative to the host. Configuration eventually ends up in Rack::OAuth2::Client:
https://github.com/omniauth/omniauth_openid_connect/blob/v0.4.0/lib/omniauth/strategies/openid_connect.rb#L94
https://github.com/nov/openid_connect/blob/v1.3.0/lib/openid_connect/client.rb
https://github.com/nov/rack-oauth2/blob/v1.19.0/lib/rack/oauth2/client.rb#L8
Rack::OAuth2::Client has an absolute_uri_for method and looks like endpoints go through it.
def absolute_uri_for(endpoint)
_endpoint_ = Util.parse_uri endpoint
_endpoint_.scheme ||= self.scheme || 'https'
# NOTE: just one host
_endpoint_.host ||= self.host
_endpoint_.port ||= self.port
raise 'No Host Info' unless _endpoint_.host
_endpoint_.to_s
end
I'm only guessing here:
Rack::OAuth2::Client.class_eval do
private
def absolute_uri_for(endpoint) # endpoint # => /token or /authorize ...
_endpoint_ = Util.parse_uri endpoint
_endpoint_.scheme ||= self.scheme || 'https'
# NOTE: now there are two
_endpoint_.host = if endpoint == "/token"
"oauth-secured.provider.com"
else
self.host
end
_endpoint_.port ||= self.port
raise 'No Host Info' unless _endpoint_.host
_endpoint_.to_s
end
end
Probably something will explode. I did not test it. There must be a reason for a single host.
Thanks! However, I was getting the error `uninitialized constant Util`; I had to extract the method from the `Util` class and add it inside the `class_eval` to make it work.
| common-pile/stackexchange_filtered |
How to fake user on staging environment with omniauth and devise
I use the developer provider for OmniAuth (with Devise and Rails 3) in my local development environment and several providers in production. For the staging environment (a manual integration test system) I would like to have something similar, where I can log in with different users/providers, simulating the authentication provider. How can I do that?
| common-pile/stackexchange_filtered |
Search File Paths Based on File Name Array
I have two string arrays in my PowerShell script:
$search = @('File1', 'File2');
$paths = @('C:\Foo\File1.pdf', 'C:\Foo\Bar.doc', 'C:\Foo\File2.txt');
How can I get all file paths which contain the file names from the search array? Can this be done in a pipeline?
You could use the GetFileNameWithoutExtension method to retrieve the filename of the path and use -in to filter them:
$paths | ? { [System.IO.Path]::GetFileNameWithoutExtension($_) -in $search }
If you need to do partial matches you could do something like this:
$paths | Where-Object {
$filename = Split-Path $_ -Leaf
$search | Where-Object { $filename -like "*$_*" }
}
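For comparison, the same two filters (exact match on the filename without extension, then partial match) can be sketched outside PowerShell. This is an illustrative Python translation, not part of the original question:

```python
from pathlib import PureWindowsPath

search = ["File1", "File2"]
paths = ["C:\\Foo\\File1.pdf", "C:\\Foo\\Bar.doc", "C:\\Foo\\File2.txt"]

def exact_matches(paths, search):
    # Keep a path when its filename without extension is in the search list
    # (the GetFileNameWithoutExtension + -in approach).
    return [p for p in paths if PureWindowsPath(p).stem in search]

def partial_matches(paths, terms):
    # Keep a path when any term appears somewhere in its filename
    # (the Split-Path + -like "*term*" approach).
    return [p for p in paths
            if any(term in PureWindowsPath(p).name for term in terms)]
```

Either way, the idea is the same: extract the filename from the path, then test it against the search array.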
| common-pile/stackexchange_filtered |
Push all objects after a selected object into an array
I have a web page that returns a list of objects like:
date.pdf
names.csv
address.pdf
age.csv
cost.csv
budget.csv
data.pdf
race.pdf
contractors.csv
When a user checks budget.csv, I want every object with the .csv extension from that point to be pushed into csv_files[]. If they select names.csv, then every .csv including and after names is pushed into the array.
So the only data that gets pushed into the array is from the selected object downwards. How can I implement this?
Current code
let csv_files = []
$scope.listAllobjects = (err, data) => {
$.each(data.Contents, (index, value) => {
if (value.Key.endsWith("csv")) {
csv_files = [];
}
// Handle click on selection checkbox
$("#filesobjects-table tbody").on("click", 'input[type="checkbox"]', (e1) => {
const checkbox = e1.currentTarget;
const $row = $(checkbox).closest("tr");
const data = $tb.DataTable().row($row).data();
let index = -1;
// Prevent click event from propagating to parent
e1.stopPropagation();
// Find matching key in currently checked rows
index = $scope.view.keys_selected.findIndex((e2) => e2.Key === data.Key);
if (checkbox.checked && data.Key.endsWith("csv")) {
console.log("selected csv");
}
});
}
Here you go with a pure JavaScript solution (descriptive comments have been added in the code snippet below).
var contentData = ["date.pdf", "names.csv", "address.pdf", "age.csv", "cost.csv", "budget.csv", "data.pdf", "race.pdf", "contractors.csv"];
var myDiv = document.getElementById("cboxes");
for (var i = 0; i < contentData.length; i++) {
var checkBox = document.createElement("input");
var label = document.createElement("label");
checkBox.type = "checkbox";
checkBox.value = contentData[i];
myDiv.appendChild(checkBox);
myDiv.appendChild(label);
label.appendChild(document.createTextNode(contentData[i]));
}
// Event to handle the checkbox click
document.getElementById('getResult').addEventListener('click', () => {
document.getElementById('showResult').innerHTML = getCheckedValues();
});
function getCheckedValues() {
// filtered out the checked items.
const element = Array.from(document.querySelectorAll('input[type="checkbox"]'))
.filter((checkbox) => checkbox.checked).map((checkbox) => checkbox.value);
// element[0] will always return the first checked element and then we are getting index of that.
const checkedElemIndex = contentData.indexOf(element[0]);
// Slice the content data to get the elements from the checked element index.
return contentData.slice(checkedElemIndex, contentData.length)
}
<div id="cboxes"></div>
<button id="getResult">Get Result</button>
<pre id="showResult"></pre>
There's a few ways, I suppose, to approach this problem, but the most intuitive to me is this:
const csvList = ["date.pdf","names.csv","address.pdf","age.csv","cost.csv","budget.csv","data.pdf","race.pdf","contractors.csv"];
const selectedCsv = 'budget.csv';
function getCsvsAfter(csvList, selectedCsv) {
const filteredCsvs = [];
let found = false;
for (let csv of csvList) {
if (csv === selectedCsv) found = true;
if (found) filteredCsvs.push(csv);
}
return filteredCsvs;
}
console.log(getCsvsAfter(csvList, selectedCsv));
Iterate over every csv, and when you've hit the one you're trying to match, set a variable called found to true. Once it's true, you can add every following csv onto the list.
const list = ['date.pdf','names.csv','address.pdf','age.csv','cost.csv','budget.csv','data.pdf','race.pdf','contractors.csv'];
const selected = 'budget.csv'
const csv_files = list.slice(list.indexOf(selected))
console.log(csv_files)
| common-pile/stackexchange_filtered |
Database Model description using constants
I want to create a class with static constants which are used whenever any part of an application wants to access the database. This way I want to remove all magic numbers and strings from my application code, except in one place. So whenever changes to the database model have to be made later, all adjustments can be done in one place, or the compiler will find all the others.
Anyway, this is what I have so far
#include <QObject>
class DatabaseModel : public QObject
{
Q_OBJECT
public:
static const QChar SCHEMA_SEPARATOR() { return '.'; }
static const QString qualifiedTableName(const QString &schema, const QString &table) {
return schema + SCHEMA_SEPARATOR() + table;
}
static const QString SCHEMA_VERSION() { return "0.2"; }
static const QString DATA_SCHEMA() { return "core"; }
static const QString DEFAULT_SCHEMA() { return "public"; }
static const QString LOG_SCHEMA() { return "audit"; }
static const QString UNIT_TABLE() { return "unit"; }
static const QStringList UNIT_TABLE_COLS() {
return QStringList() << "id"
<< "name"
<< "abbreviation";
}
};
Sample usage
// get the fully qualified database table name
// result: core.unit
DatabaseModel::qualifiedTableName(DatabaseModel::DATA_SCHEMA(), DatabaseModel::UNIT_TABLE());
// get the database column name
// result: name
DatabaseModel::UNIT_TABLE_COLS().at(1);
Resulting problems from my points of view:
pretty long function invocation
separation between table and its columns
I tried a few class constellations but wasn't happy with them either. So my question: is there a simple, clean approach for accessing database names and qualifiers for a complex database model?
I am using Qt (5.6) and PostgreSQL with C++. It should be possible to describe a solution in pure C++, as there is no particular advantage to using Qt's functionality here.
Edit: Additionally I use Qt's MVC for accessing the database, namely its QSqlTableModel.
How many of your classes are involved in composing database queries?
@D.Jurcau
Currently I do not use queries directly, because I use Qt's MVC, which only requires the name of the table/view here and there. See the post edit for more information.
If it's only about the length of the expressions, you could:
create a local object DatabaseModel dbm;
or use a namespace and a using clause instead of a class.
Nevertheless, this would end up in a constantly growing class, without real separation of concern. And it will leave you the problems of table names and column names.
I'd therefore suggest you to use the table data gateway pattern:
you could group the table independent constants in a DataGateway parent class, together with some common table access/management functions.
You would derive a concrete TableXGateway for each table X. This would allow to keep not only table name and column names together, but also some additional table specific logic (queries).
Ideally, all your SQL code would be concentrated in these classes (and perhaps in some query objects) so that the rest of your code would become less database dependent.
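To make the gateway idea concrete, here is a minimal sketch — illustrative Python with hypothetical names, transposing the constants from the question; the same structure carries over directly to a C++ class hierarchy:

```python
class DataGateway:
    """Table-independent constants and helpers shared by all gateways."""
    SCHEMA_SEPARATOR = "."
    DATA_SCHEMA = "core"

    # Each concrete gateway supplies its own SCHEMA, TABLE and COLUMNS.
    SCHEMA = DATA_SCHEMA
    TABLE = ""
    COLUMNS = ()

    @classmethod
    def qualified_table_name(cls):
        return cls.SCHEMA + cls.SCHEMA_SEPARATOR + cls.TABLE


class UnitGateway(DataGateway):
    """Keeps the unit table's name, columns and queries in one place."""
    TABLE = "unit"
    COLUMNS = ("id", "name", "abbreviation")

    @classmethod
    def select_all_sql(cls):
        # Table-specific SQL lives right next to the table's metadata.
        return "SELECT {} FROM {}".format(
            ", ".join(cls.COLUMNS), cls.qualified_table_name())
```

The point of the pattern is visible even in this toy version: renaming a column or moving the table to another schema touches exactly one class, and the table name never appears apart from its columns.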
That seems to be a reasonable approach; I'll give it a thought or two once I'm working on the project again.
| common-pile/stackexchange_filtered |
SF Workbench - API Version - v55 Not Appearing
I require to use Workbench with SF API v55 (Summer '22) so that I may deploy Service Cloud Voice (SCV) Contact Center with features that are in v55
My SF Org is running Summer '22 (v55)
Currently when I select the dropdown for APIs I only see v54, a week or so ago there was v55
Where did it go?
Thanks
It's pending: https://github.com/forceworkbench/forceworkbench/pull/850 . If you're in a hurry you can have your own, private instance of Workbench on Heroku, nearly a one-click experience via https://github.com/forceworkbench/forceworkbench
Resolved
There is currently a pending pull request for forceworkbench which will make the changes to use API v55 instead of v54:
https://github.com/forceworkbench/forceworkbench/pull/850
Thank you @identigral for your help
The following pull request has been merged into the Force Workbench master branch, which allows v55 of the API to be used.
Currently this is also the same for v60.
| common-pile/stackexchange_filtered |
JavaScript - cannot set property of undefined - Apigee
I am keep getting the below error
{
"fault": {
"faultstring": "Execution of Testing1 failed with error: Javascript runtime error: \"TypeError: Cannot set property \"case\" of undefined to \"120\". (Testing1.js:3)\"",
"detail": {
"errorcode": "steps.javascript.ScriptExecutionFailed"
}
}
}
My code is as below
var Backend_Response = JSON.parse(context.getVariable('response.content'));
Backend_Response.test.case = 120;
context.setVariable('response.content',JSON.stringify(Backend_Response));
I am trying to add a nested JSON in to my code with case as the nested object. Expected output is
{
"Primary": "Status",
"test": {
"case": 120
}
}
Can anyone help me to resolve this issue.
This means that Backend_Response doesn't have a test property. What does console.log(Backend_Response) show? Add the output to the question (not a comment).
If you're adding test.case then you'd want Backend_Response.test = { case: 120 }. Tangential: it may be better to pick a single variable naming convention, camelCase or snake_case.
Thank you Barmer for the immediate response. I tried the below and it works now:
var Backend_Response = JSON.parse(context.getVariable('response.content'));
Backend_Response.test = {};
Backend_Response.test.case = 120;
context.setVariable('response.content',JSON.stringify(Backend_Response));
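The root cause here is general, not Apigee-specific: you cannot set a property on an object that does not exist yet. An illustrative analog in Python (not part of the original policy code):

```python
backend_response = {"Primary": "Status"}

# backend_response["test"]["case"] = 120 would raise KeyError('test') here,
# just as the JavaScript raised "Cannot set property 'case' of undefined":
# the intermediate object must exist before a nested key can be set on it.
backend_response.setdefault("test", {})
backend_response["test"]["case"] = 120
```

This is exactly what the added `Backend_Response.test = {};` line does in the fixed policy above.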
| common-pile/stackexchange_filtered |
Is “Oxygen-Starvation” during exercise a real phenomenon?
Cross-Posted from Physical Exercise, as I suspect this Q is a bit more bio-science focussed than is appropriate for that StackExchange
I play hockey (field hockey) at a slightly-below-elite level. I train (but don't play) with players who play in the UK National Prem.
I, and other players I play with, and coaches I've trained under, have always talked about us and our brains being "oxygen-starved", when you're playing beyond your fitness level. (Especially common during pre-season when everyone's unusually un-fit :D )
What we mean by this, in terms of externally observable actions, is that when we get very tired we stop making good decisions and our fine motor control drops: we choose bad passes, we take people on when we wouldn't normally do so when not tired, our hand-eye coordination drops so that we can no longer execute skills as well, etc.
Certainly the effect exists - those actions are clearly observable.
But is there truly a physiological effect behind it, and if so, is that effect truly about oxygen levels in the brain?
If one were able to install an O2 sensor in one of the major arteries leading to the brain, would you be able to see actual drops in O2 levels when someone is reaching a point of combined aerobic and anaerobic exhaustion? (Hockey is a heavy mix of both forms of activity, like many team sports.)
| common-pile/stackexchange_filtered |
Displaying/scrolling through heaps of pictures in the browser
I want to be able to browse through heaps of images in the browser, fast. The easy way (just load 2000 images and scroll) slows down the scrolling a lot, presumably because there are too many images to be kept in memory.
I'd love to hear thoughts on strategies for quickly scrolling through tens of thousands of images (as if you were on your desktop) in the browser. What would the expected bottlenecks be? How do you address them? How do you fake things so that the user experience is still good? Examples in the wild?
You'll want to use AJAX to dynamically load the pictures as you scroll.
Basically, you'll load certain pictures based on how far down you've scrolled. Just don't forget to dump the ones that are off the screen so that the browser stays responsive.
If you just want to flip through lots of pictures one at a time you could preload the images that haven't been shown. Facebook albums preload the next image so that going to your next image is really fast.
I think RIA is your best bet here. Technologies like Flex and Silverlight were designed specifically to provide desktop-like behavior, while HTML/JS were never really intended to be used for that many images.
Additionally, if you haven't seen any Pivot demos yet, those are some good examples of how to maintain a good UX at massive scale. The best link I can find right now is unfortunately the sales-pitch video on the official site, but it still shows some neat interactions:
http://www.getpivot.com/
Yes, pivot is awesome, already checked it out, thanks. My challenge right now is the browser though, see how far we can get in that.
| common-pile/stackexchange_filtered |
What does @:keep mean in Haxe?
I am new to Haxe and playing with the OpenFL Starling Sample code -
I noticed a @:keep metadata before the class declaration. What does it mean?
@:keep class TouchScene extends Scene {
// ...
}
Haxe allows metadata tags on classes and functions.
@:keep is a metadata tag that instructs the compiler's dead code elimination feature not to remove the class or function, even if it believes that the class or function is unused.
There are many other built-in metadata tags.
FYI, for advanced users, you can create schemas and specify your own metadata tags (and parse them using macros). For example, my lazy-props library does exactly this.
For what reason does the compiler need to eliminate dead code? How does it know whether I need the code or not? I don't get it.
@VakhtangiBeridze it's a compiler optimization available in many compilers. Read all about it here: https://en.wikipedia.org/wiki/Dead_code_elimination Obviously there are pros and cons, and metatags like @:keep allow us control over the feature.
@VakhtangiBeridze Oh, and the compiler knows whether you need code or not by tracking the usage of classes, functions, variables, etc. It's especially useful for large libraries, where you may only need a subset of the features for your app. The compiler removes all the code that's unnecessary for your app to function.
| common-pile/stackexchange_filtered |
$S$ and $G$ are roots of $x^2-3x+1=0$. Find the equation whose roots are $\frac{1}{S-2}$ and $\frac{1}{G-2}$
If $S$ and $G$ are the roots of $x^2-3x+1=0$, find the equation whose roots are $\frac{1}{S-2}$ and $\frac{1}{G-2}$.
So, the way my teacher solved it is:
He took $x = \frac{1}{S-2}$, then he got $S = \frac{2x+1}{x}$.
What I am confused about is that we were told the roots of the second equation are $\frac{1}{S-2}$, not just $S$.
Why not just substitute $\frac{1}{S-2}$ for $x$ and then solve?
See https://math.stackexchange.com/questions/2209034/finding-sum-k-0n-1-frac-alpha-k2-alpha-k-where-alpha-k-are-the OR https://math.stackexchange.com/questions/1909362/product-of-one-minus-the-tenth-roots-of-unity OR https://math.stackexchange.com/questions/1811081/problem-based-on-sum-of-reciprocal-of-nth-roots-of-unity
We have $S^2-3S+1=0$ and not $A^2-3A+1=0$ where $A=1/(S-2)$. It is $S$ that satisfies the given quadratic, not $1/(S-2)$.
Note $SG=1,\>S+G=3$. Then
$$\frac1{S-2}\frac1{G-2}=\frac1{SG-2(S+G)+4}=-1
$$
$$\frac1{S-2}+\frac1{G-2}=\frac{S+G-4}{SG-2(S+G)+4}=1
$$
Thus, the equation with roots $\frac1{S-2}$ and $\frac1{G-2}$ is
$$x^2-x-1=0$$
The substitution method your teacher is referring to goes as follows:
Let $y=\frac{1}{x-2}$ where $x=S, G$ are the roots of the given equation $x^2-3x+1=0$
Rearranging gives $x=\frac{1+2y}{y}$
Substituting this into the equation satisfied by $x$ gives $$(\frac{1+2y}{y})^2-3(\frac{1+2y}{y})+1=0$$
This simplifies to become $$y^2-y-1=0$$
This is the same as the answer obtained by Quanto using Vieta's formulas.
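As a quick numeric sanity check of both derivations (illustrative Python, not part of either answer):

```python
import math

# Roots of x^2 - 3x + 1 = 0 via the quadratic formula.
d = math.sqrt(9 - 4)
S, G = (3 + d) / 2, (3 - d) / 2

# The transformed roots 1/(S-2) and 1/(G-2).
a, b = 1 / (S - 2), 1 / (G - 2)

# Vieta: sum 1 and product -1 give the new equation x^2 - x - 1 = 0.
root_sum, root_product = a + b, a * b
```

Both `a` and `b` should satisfy $y^2 - y - 1 = 0$, confirming the result obtained twice above.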
| common-pile/stackexchange_filtered |
Optical Audio out stuck on on a MacBook
Apple have made an interesting headphone port for the MacBook (and some other Intel Mac models). It works like a standard jack:
nothing plugged in -> audio comes out of built-in speakers
headphones/external speakers plugged in -> plays through headphones/external speakers
but you can also use a special adapter (which trips a tiny microswitch) to get an optical audio out signal (which you can presumably plug into a nice surround-sound system).
This is all well and good except when, like auto-tracking, it doesn't work, and you are left with nothing to adjust. Users report that they get no sound when they have nothing plugged in and that a red light emanates from the headphone port. If you go to System Preferences -> Sound -> Output, it will say (IIRC) "Optical Out" instead of "Internal Speakers".
The only solution I'm aware of is to try to reset the switch by inserting and removing a set of headphones or a toothpick, perhaps wiggling it inside of the port, and hoping that you luck out and get it.
Are there other ways to fix this problem? Does anyone know where the microswitch is or have a good technique to reset it?
iPhone users have reported a similar problem, when the device thinks that headphones are attached even when they are not. The solution is the same .. use a toothpick or paper clip to toggle the switch. Apparently the problem is more acute when using 3rd party headphones.
This is probably better suited to http://superuser.com
I felt it was relevant here as, as a Mac admin, I've seen the issue come up several times.
It's not organized so much around the role of the person as the role of the machine. The idea is to concentrate workstation-specific issues to SU, but it's easy enough to migrate. ;)
I've used toothpicks or half of a Q-Tip - really, anything non-metallic that will fit - to do a counter-clockwise sweep of the inside of the opening with success. I've seen various reports as to where the switch may be; I typically just sweep until it's fixed. If you have iTunes open and playing or the Sound PreferencePane of System Preferences up you'll be able to tell when you've hit the switch.
+1 excellent idea -- especially the part about having the audio playing while doing it.
While this works, I don't always have something like a toothpick or Q-tip lying around. I put my headphones in at an angle and that works.
I'm going to share my last-resort solution. You may no longer be able to use headphones, but you will get your speakers back.
I've tested this on a 15-inch Macbook Pro 2012 (non-retina). It may or may not work on other models. The Macbook Pro I tested this on was stuck on optical audio after the jack was damaged removing a broken connector.
The solution is to create a short-circuit between these two pins on the logic board (at the right of this image):
How you do this is up to you. I was successful using a small piece of wire and some electrical tape as a short-term solution, and eventually ended up soldering it in.
Do not attempt soldering on your logic board if you do not have experience soldering small electronics. Make sure no other components are shorted. You risk causing even more damage.
Do not hold me responsible if you destroy your logic board.
Thank you for sharing this solution. I similarly had a semi-destroyed plug after the tip of a TRS plug broke inside the audio jack. Soldering a small piece of wire across these did the trick on a mid-2009 Macbook Pro
Toothpick and/or plugging and unplugging regular headphones worked for me (I was doing both, not sure which did the trick, but I bet it was the toothpick).
This just repeats what the OP mentioned in the question. It does not add a solution.
I tried the toothpick, and couldn't get it to work. (I was probably doing it wrong.)
However, after reading this thread, I tried plugging my headphones in, then pulling them out slowly while jiggling the headphone plug. This fixed it straight away.
Plug in your headphones, (they will work) now turn off the computer. Remove the headphones and turn on the computer.
| common-pile/stackexchange_filtered |
How do I start a script for a button in HTML to change font size?
I'm not sure how to start a script that sets a font size when a button is clicked.
I'm supposed to have buttons labeled Normal, Medium, and Large.
There are several ways to change the properties of HTML entities:
Via the onclick attribute in HTML:
<button onclick="func()">Text</button>
Via id or class attributes in the HTML code:
<button id="btnId">Text</button>
<button class="btnClass">Text</button>
and the following JavaScript code -
For id element:
let btn_with_id = document.getElementById("btnId") // Getting element with specific id
btn_with_id.onclick = () => {
btn_with_id.style.fontSize = "55px" // Can be anything: em, % etc.
}
And for multiple class elements (if you want to change many items):
let btns_with_className = document.getElementsByClassName("btnClass") // Getting elements with the same class
for (let btn of btns_with_className) {
btn.onclick = () => btn.style.fontSize = "55px"
}
Also, we can add a class name with some pre-made properties:
btn_with_id.classList.add("btnCustomClass")
In addition, there is another way of onclick event - addEventListener()
Here's how it works:
btn_with_id.addEventListener('click', event => {
btn_with_id.classList.add("Your_class")
})
If you want to change the button text based on the font-size, here is the solution:
if (btn_with_id.style.fontSize == "55px") btn_with_id.innerHTML = "Large" // Here can be different `fontSize` and `innerHTML` text
Buttons that changes font size:
<button type="button" onclick="document.body.style.fontSize = '12px';">Small</button>
<button type="button" onclick="document.body.style.fontSize = '16px';">Normal</button>
<button type="button" onclick="document.body.style.fontSize = '20px';">Large</button>
This sets the font size on the body element, so other elements that have their own font size set won't change, unless you used em or rem as your size unit.
Ah OK, thanks, that worked. Out of curiosity, what are em and rem? I'm very new to HTML, CSS, and JavaScript.
em and rem are units that change their size depending on font size. https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/Values_and_units
| common-pile/stackexchange_filtered |
Swift Charts with Custom Symbol Shapes
The following code tries to show two points with different symbols. The first one has a custom shape. The code causes the error No exact matches in reference to static method 'buildExpression'. Replacing .symbol(point.symb) with .symbol(CustomTriangle()) shows both points with the custom symbol, indicating that CustomTriangle struct is okay.
There are two .symbol modifiers in Charts framework and the one returning a ChartContent (not a View) is essential for chaining other chart content modifiers. Is there a way to show the first point with the custom triangle while the next one with the square symbol?
Here is the code:
import SwiftUI
import Charts
struct Point: Identifiable {
public let id = UUID()
let x: Double
let y: Double
let symb: any ChartSymbolShape
}
public struct CustomTriangle: ChartSymbolShape {
public var perceptualUnitRect = CGRect(x: 0, y: 0, width: 1, height: 1)
public func path(in rect: CGRect) -> Path {
var path = Path()
path.move(to: CGPoint(x: rect.midX, y: rect.minY))
path.addLine(to: CGPoint(x: rect.minX, y: rect.maxY))
path.addLine(to: CGPoint(x: rect.maxX, y: rect.maxY))
path.closeSubpath()
return path
}
}
struct MyPlot: View {
let points = [Point(x: 0, y: 0.5, symb: CustomTriangle()),
Point(x: 1, y: 1.0, symb: .square)]
var body: some View {
Chart {
ForEach(points, id: \.id) {point in
PointMark(x: .value("xaxis", point.x),
y: .value("yaxis", point.y))
.symbol(point.symb) // this works --> .symbol(CustomTriangle())
}
}
.padding()
}
}
struct ContentView: View {
var body: some View {
MyPlot()
.padding()
}
}
| common-pile/stackexchange_filtered |
Bootstrap span classes not working as expected
I wrote my first bootstrap program.
<!DOCTYPE html>
<html>
<head>
<link href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css" rel="stylesheet">
</head>
<body>
<div class="row">
<div class="span9">Level 1 of Column</div>
</div>
<div class="row">
<div class="span6">Level 2</div>
<div class="span3">Level 2</div>
</div>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="//netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
</body>
</html>
My hope was that the div with class span9 would take up only 9 columns instead of the entire browser width, and that span6 and span3 would take up 6 and 3 columns and sit in the same row.
But the output looks like this:
So the rows do not appear properly (they are being cut), and the rows and columns are not being respected. In Chrome I can see that Level 1 is taking up the entire browser width, and so are the span6 and span3 divs: they take up the entire browser width too.
You reference the Bootstrap 3.1.1 CSS file in your header, but you are using Bootstrap 2 CSS classes (e.g. span3) in your HTML code.
Use bootstrap 3 classes instead as described here:
http://getbootstrap.com/css/
So for example use col-lg-3 instead of span3.
Try this:
...
<div class="row">
<div class="col-lg-9">Level 1 of Column</div>
</div>
<div class="row">
<div class="col-lg-6">Level 2</div>
<div class="col-lg-3">Level 2</div>
</div>
...
| common-pile/stackexchange_filtered |
Hitachi CR210 console command
I am looking for console commands for the Hitachi CR210.
This is the equivalent of HP iLO, where you can connect to the iLO IP with SSH, get access to the console (if the OS is in text mode), power hosts off/on, simulate a blade withdrawal, and so on.
Logging in to the host gives me the following:
host:~ # ssh -l user01 <IP_ADDRESS>
<EMAIL_ADDRESS>password:
# inactivity_timer[min] : 10
S0051
ALL RIGHTS RESERVED, COPYRIGHT (C), 2011, 2012, HITACHI, LTD.
Chassis ID : 9 T999999999
Firmware Revision :
$ ?
E0025 : Command is invalid.
S0005 : Command was invalid.
S0000 : Command was finished.
I have tried replacing ? with h, he, help, and info, but to no avail.
All the information is located here (annexe B5):
host:~ # ssh -l user01 <IP_ADDRESS>
<EMAIL_ADDRESS>password:
# inactivity_timer[min] : 10
S0051
ALL RIGHTS RESERVED, COPYRIGHT (C), 2011, 2012, HITACHI, LTD.
Chassis ID : 9 T999999999
Firmware Revision :
$ show remote-access protocol http
-- HTTP setting --
Port number : 80
Allow : allow
S0002 : Command succeeded.
S0000 : Command was finished.
$ show language system
-- System language --
Language : english
S0002 : Command succeeded.
S0000 : Command was finished.
$ show chassis setting
-- chassis setting --
Chassis ID : 9 T999999999
Maintenance classification : normal
-- chassis FRU setting --
Part/model number : XX-XXXXXXX-XXX-X
Serial number : 12345-XXXXXXX-XXX-000000000
Model ID : YY
Midplane ID : ZZ
First WWN :<PHONE_NUMBER>000000
S0002 : Command succeeded.
S0000 : Command was finished.
etc
etc
etc
| common-pile/stackexchange_filtered |
Modify instance variable used by multiple Tasks
I need some help with handling Tasks. I have an XML string which is deserialized into a class. The class itself contains a property, e.g. rssProvider, which is set to a unique value like MSN or YAHOO. However, there can be multiple comma-delimited values in this field.
I am using this deserialized class instance in multiple Tasks. However, the function which gets called in each task can only work with one rssProvider value, so I split that string and have a foreach loop which creates the tasks.
In the task itself the function I call needs the full object, but with a single rssProvider. However, when I change the value of that property to the value from the foreach, the other tasks will fail, as they will get the same single value set by whichever task runs first.
Any ideas how I should restructure the logic? Thanks!
My code:
List<Task<ResponseOBJ>> tasks = new List<Task<ResponseOBJ>>();
// get list of rssProviders
string[] providers = request.requestBody.rssProvider.Split(',');
// go through each provider
foreach (string provider in providers)
{
    Task<ResponseOBJ> task = Task.Factory.StartNew<ResponseOBJ>(() =>
    {
        // every task mutates the same shared request object -- this is the race
        request.requestBody.rssProvider = provider;
        return doStuff(request);
    });
    tasks.Add(task);
}
Task.WaitAll(tasks.ToArray());
This question is too broad... thousands of options are available to solve your problem. The easiest one, create two properties, AvailableProviders (which contains all the values) and CurrentProvider (the current one being used).
Post your code, and we'll see what can be done
The problem is that if I set the CurrentProvider on that object in the task, it will immediately be overridden by the next task, whichever one runs first.
How about creating a fresh request object per request?
Would it be enough to use the new operator, assign the request, and change the value, or do I need to assign every single property?
I would create a copy constructor in your Request object which copies the content of the original Request and creates a fresh one:
public class Request
{
public Request(Request oldRequest)
{
// initalize new request from the old
}
}
And then change my code to create a new request per task:
List<Task<ResponseOBJ>> tasks = new List<Task<ResponseOBJ>>();
// get list of rssProviders
string[] providers = request.requestBody.rssProvider.Split(',');
// go through each provider
foreach (string provider in providers)
{
    Task<ResponseOBJ> task = Task.Factory.StartNew<ResponseOBJ>(() =>
    {
        // copy first, then set the provider on the copy only,
        // so the shared request is never mutated
        var newRequest = new Request(request);
        newRequest.requestBody.rssProvider = provider;
        return doStuff(newRequest);
    });
    tasks.Add(task);
}
Task.WaitAll(tasks.ToArray());
One option might be to change doStuff(request) to doStuff(request, provider) and remove the line request.requestBody.rssProvider = provider;, and then change your doStuff accordingly.
foreach (string provider in providers)
{
    Task<ResponseOBJ> task = Task.Factory.StartNew<ResponseOBJ>(() =>
    {
        return doStuff(request, provider);
    });
    tasks.Add(task);
}
Another option (as also mentioned in the above comments) is to create a new request object for each provider.
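Although the thread is about C#, the pitfall is language-agnostic. As a rough illustration (a Python sketch with made-up names, not the asker's actual code), passing the provider into each task as its own argument removes the shared mutable state entirely:

```python
from concurrent.futures import ThreadPoolExecutor

class Request:
    """Stand-in for the deserialized XML object (illustrative only)."""
    def __init__(self, rss_provider):
        self.rss_provider = rss_provider

def do_stuff(request, provider):
    # The provider arrives as a per-task argument, so concurrent tasks
    # cannot overwrite each other's value on the shared object.
    return f"fetched:{provider}"

shared = Request("MSN,YAHOO,BING")
providers = shared.rss_provider.split(",")

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda p: do_stuff(shared, p), providers))

assert sorted(results) == ["fetched:BING", "fetched:MSN", "fetched:YAHOO"]
```

The same idea carries over directly to the doStuff(request, provider) variant above: the shared object is only ever read, never written, inside the tasks.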
Can you create a pattern for HTML input fields with a minimum number of letters of a certain type?
I want to create a pattern for an HTML input field that needs to have at least 10 numbers in it and may also have spaces and a plus sign on top of that, but it's not required.
It's important that numbers and spaces can be mixed though. Also, the whole field can only have 17 characters all in all.
I'm not sure if it's even possible. I started doing something like that:
pattern="[0-9+\s]{10,17}*"
But like this, it's not guaranteed that there are at least 10 numbers.
Thanks in advance! Hope the question doesn't exist already, I looked but couldn't find it.
You probably want pattern="(?:[+\s]*\d){10,17}[+\s]*". If you need to match a more specific pattern, you should share the requirements.
It's worthwhile mentioning that you should still sanitise the input in your server code as it's trivial to bypass these pattern filters
Thanks to both of you @WiktorStribiżew your answer worked perfectly
@Martin yeah I'm doing that but thanks for the tip :)
You can use
pattern="(?:[+\s]*\d){10,17}[+\s]*"
The regex matches
(?:[+\s]*\d){10,17} - ten to seventeen occurrences of zero or more + or whitespaces and then a digit
[+\s]* - zero or more + or whitespaces.
Note the pattern is anchored by default (it is wrapped with ^(?: and )$), so nothing else is allowed.
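As a quick sanity check of that pattern (a Python sketch; the \A...\Z anchors reproduce the implicit anchoring that the HTML pattern attribute applies):

```python
import re

# The HTML `pattern` attribute anchors the regex implicitly,
# so we reproduce that here with \A ... \Z.
pattern = re.compile(r"\A(?:[+\s]*\d){10,17}[+\s]*\Z")

assert pattern.match("+41 123 456 78 90")       # 12 digits, spaces and a plus
assert pattern.match("0123456789")              # exactly 10 digits
assert not pattern.match("123 456")             # only 6 digits: too few
assert not pattern.match("123456789012345678")  # 18 digits: too many
```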
Simple imperative linked list
I've decided to learn a bit of C++, and so I've started with a simple linked list. There's no input to the program and it should always output hlo, because in main I build a list containing hello and use Get and last to retrieve the data.
I don't know C++, or C, and so this is mostly a stab in the dark. I don't know if I'm using C++ correctly, but I got a working program where I'm mostly exploring pointers, so I would like to know if my usage of them is correct. Otherwise please review any and all aspects of my code.
#include <iostream>
class Node {
public:
Node(char data_) {
data = data_;
}
Node *next = NULL;
char data;
};
class LinkedList {
public:
Node *root = NULL;
Node *last = NULL;
LinkedList() {}
void Add(char data) {
if (root == NULL) {
root = new Node(data);
last = root;
} else {
last->next = new Node(data);
last = last->next;
}
}
char Get(int index) {
return _Get(index)->data;
}
private:
Node* _Get(int index) {
Node *node = root;
for (int i = 0; i < index; i++) {
if (node == NULL) {
throw std::out_of_range("List doesn't contain that item.");
}
node = node->next;
}
return node;
}
};
int main() {
LinkedList list = LinkedList();
list.Add('h');
list.Add('e');
list.Add('l');
list.Add('l');
list.Add('o');
std::cout << list.Get(0) << list.Get(2) << list.last->data << std::endl;
return 0;
}
Note: _Get is a reserved identifier. See: What are the rules about using an underscore in a C++ identifier? Prefer not to use _ in C++ identifiers. If you must, don't use them at the beginning.
Memory Leak
Your code leaks. I see calls to new but I see no smart pointers or calls to delete to reclaim the memory. You need a destructor on your linked list.
class LinkedList {
public:
LinkedList() {}
~LinkedList() {/* Reclaim dynamically allocated memory here */}
};
Rule of three
Once you correct the memory management you also realize that you are violating the rule of three. By default the compiler will automatically add a copy constructor and an assignment operator for you.
LinkedList a;
a.add('x');
LinkedList b(a); // Makes a copy of a
// But only a shallow copy.
a.add('y');
// This changes 'a' to be {'x', 'y'}
// Problem is that b is now also {'x', 'y'}
// **BUT** in b, last points at the wrong node.
// Now things get interesting:
b.add('z');
// Now a and b are {'x', 'z'} (if you follow from root)
// a.last still points at {'y'} even though this is no longer
// in the chain pointed to from a.root
The default implementation of the copy constructor (and assignment operator) is to make a shallow copy of the members. This is fine as long as your members are not owned pointers. But your pointers are owned (even though you missed the deletion). So you either need to implement the copy semantics of the class or explicitly remove these methods.
class LinkedList {
public:
LinkedList() {}
~LinkedList() {/* Reclaim dynamically allocated memory here */}
LinkedList(LinkedList const&) = delete;
LinkedList& operator=(LinkedList const&) = delete;
};
Prefer nullptr
In C++11 we introduced nullptr to replace NULL. This is because NULL is the integer zero and can accidentally be converted to an integer type without any warning. nullptr on the other hand has a type of std::nullptr_t and can only be converted to pointer types (not integers).
Design.
When building linked lists I prefer to use a Sentinel object. This removes the need for checking for null in your list and thus makes the code easier to write and understand. There are a lot of C++ code reviews on linked lists where I explain the principle.
Node should be private
You declare Node publicly. There is no reason for people to know the implementation details of your linked list. Make this a private member of LinkedList so show that this is an implementation detail.
Prefer to use initializer list:
Node(char data_) {
data = data_;
}
// Prefer to write like this:
Node(char data_)
:data(data_)
{}
For non-pod types this matters because the members are constructed before the body of the constructor is entered, and only then is the constructor code applied. If data were a non-pod object, that means you would first default-construct it and then apply the assignment operator (i.e. initialize it twice).
For pod data it makes no difference, BUT it does no harm either. So it is a good habit to get into, because if the code is later modified to use a non-pod type (e.g. CharUTF) the preferred form keeps the code optimal.
Not a fan
Node *root = NULL;
Node *last = NULL;
Personally I like to see all the members initialized in the initializer list. But I can see some advantages for objects with lots of constructors in avoiding writing the same thing repeatedly.
Style Check.
Most style guides for C++ (though this is not ubiquitous so just take as advice and not a rule). User defined types have an initial capital letter, while objects (which includes functions/methods) have an initial lowercase letter.
This is because the most important thing in C++ is the types. So it is useful to be able to quickly identify types from objects.
void Add(char data) {
// I would prefer
void add(char data) {
Self Documenting code
Personally I don't like add() as the name of the function. It does not tell me exactly what the function does. I think a better name would be append().
Don't use _ in identifiers.
The rules on using underscores are non-trivial even if you know them, and other people don't know them as well as they think they do. So it is best to avoid them to prevent accidental mistakes.
Node* _Get(int index) { // Note this is a reserved identifier.
// And being a common word is very likely
// to be used somewhere. including the
// wrong header file before this is likely
// to generate some confusing error messages
// on some systems.
// Why not:
Node* getNode(int index)
Bug
LinkedList a;
a.Add('a');
a.Get(1); // Undefined behavior.
// I believe _Get() is going to return a `nullptr`
// which is then dereferenced by Get()
Declaring variables
LinkedList list = LinkedList();
// ^^^^^^^^^^^^ Creates a temporary object.
Technically this is creating a LinkedList temporary object then calling the copy constructor of list to copy the temporary object then destroying the temporary object.
Fortunately most compilers will spot this and optimize out the copy and destruction and just create your object.
But this is simpler to declare as:
LinkedList list;
Prefer \n to std::endl;
The only difference is that std::endl performs an extra flush. Forcing a manual flush is usually never the correct solution as the libraries will do this for you when required.
Main and return
The main() function is special. If the compiler detects there is no return it will plant one for you.
return 0;
So there is no need to add it yourself. It has become standard, when main can return no value other than 0, to omit the return 0; to indicate that the application can never fail. So when I see a return 0; at the end I generally look around for the other situations where it can fail. If there are potential failure situations where you exit early with return 1;, then add the return 0; at the end; otherwise don't bother.
For the beginner, and given the scope of the program, looks good. Few remarks.
NULL is frowned upon. A politically correct NULL is nullptr.
void Add(char data)
After the call, the newly added node becomes the last node, no matter what. You'd better make this explicit. I also recommend documenting the invariant with assertions:
void Add(char data)
{
Node * n = new Node(data);
if (root == nullptr) {
assert(last == nullptr);
root = n;
} else {
assert(last != nullptr);
last->next = n;
}
last = n;
}
Node(char data)
next is undefined. Set it to nullptr explicitly.
C++ allows a cute shortcut Node(char data): data(data), next(nullptr) {}
Exposing next publicly hardly has a reason. With it exposed, a client is free to violate all invariants. To let a client iterate over the list, provide an iterator.
I don't see the point of delegating Get to a private member.
"next is undefined" No it's not, look at the declaration of next.
@Rakete1111 Point taken, see edit.
I see some opportunities for improvement, which will get this up to about what we expect from a LinkedList. First, the imposition that the stored elements are char is not optimal. Sooner or later you will want non-char elements, and you will not want to write and maintain another LinkedList for each data type. char is, in fact, the least useful element type I can think of. The solution is a template class; instance objects would be declared LinkedList<WhateverType>; a programmer will expect this syntax and the reusability it produces.
Second, there should be no add method as it is unclear where the new element is added. Instead, we expect methods like insert(int index, T element) for O(index) insertion at index and O(N) append for what you are currently doing with add.
Third, you may recognize your add as being O(k); this is because you've actually written a queue (with minimal modification, a deque), which extends a real LinkedList. A real LinkedList has no need of a tail pointer (last here).
Nice points, to be a pedant, I see add as $O(1)$ not as $O(k)$. But your points stand none the less. Thank you.
two laravel installations session destroy
I have two laravel 4 installations in my htdocs folder.
htdocs/laravel1 and htdocs/laravel2.
Both have different databases and also a different key in app/config/app.php
Both installations have the driver database for sessions in the config.
I want to start both installations with the artisan serve command.
the first laravel is started with artisan serve on port 8000.
the second laravel is started with artisan serve --port=4000
The problem:
When I log in to laravel1 and then log in to laravel2, my session in laravel1 is gone... why does this happen? As I said, they use different databases and they have different keys in the config file. How can I avoid this problem? Thanks!
You have to use a different session cookie name for each installation, as they are on the same domain. You should be able to set it in app/config/session.php.
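For example, a sketch of the change in one of the two installations (the 'cookie' key exists in Laravel 4's app/config/session.php; the exact cookie name is up to you):

```php
<?php
// app/config/session.php of the laravel2 installation
return array(

    // ... other session settings unchanged ...

    // Give each installation its own cookie so the two apps on
    // localhost do not overwrite each other's session cookie.
    'cookie' => 'laravel2_session',

);
```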
Number of subgroups of order 2017 in S2017
I want to prove that the number of subgroups of order 2017 in the Symmetric group $S_{2017}$ equals $2015!$
First of all, 2017 is a prime number. I used that to prove that the number of elements of order 2017 in $S_{2017}$ equals $2016!$ earlier.
I tried using the Sylow theorems to calculate this number of subgroups. I would write the order of $S_{2017}$ as $|S_{2017}| = 2017! = 2016! \cdot 2017$. Then the number of Sylow 2017-subgroups (call this T) should divide $2016!$, and $T \equiv 1 \pmod{2017}$. There are a lot of possibilities for T using just that information. I'm also not even sure that $2017$ does not divide $2016!$, but I should be able to prove that if the Sylow theorems are actually useful in this question.
So I failed to find a way to find the asked number of subgroups, and anything that could help me find the answer to the question would really be appreciated.
A group of order $p$ (where $p$ is prime) contains exactly $p-1$ elements of order $p$.
So I know there are $2016!$ elements of order $2017$ in $S_{2017}$. A subgroup of order $2017$ must have $2016$ elements of order $2017$. So if all the subgroups of order 2017 were disjoint (apart from the identity), there would be $2016! / 2016 = 2015!$ different subgroups. So do I need to prove that these subgroups are disjoint...?
They are not disjoint, every subgroup contains the identity. What you need is that two subgroups of order $p$ are either identical or have trivial intersection.
Yes, that worked, now it all makes sense :) thank you.
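Not part of the original exchange, but the counting argument generalizes to any prime p (the number of subgroups of order p in $S_p$ is $(p-2)!$), and that can be sanity-checked by brute force for small primes (a Python sketch; 2017 itself is obviously out of reach this way):

```python
from itertools import permutations
from math import factorial

def subgroup_count(p):
    """Count subgroups of order p in S_p by brute force (p prime).

    A subgroup of prime order p is cyclic and contains exactly p - 1
    elements of order p, and two distinct such subgroups intersect
    only in the identity, so the count is
    (number of elements of order p) / (p - 1).
    """
    identity = tuple(range(p))

    def compose(a, b):                       # (a o b)(i) = a[b[i]]
        return tuple(a[b[i]] for i in range(p))

    def order(g):
        k, h = 1, g
        while h != identity:
            h = compose(h, g)
            k += 1
        return k

    elements_of_order_p = sum(1 for g in permutations(range(p)) if order(g) == p)
    return elements_of_order_p // (p - 1)

# Matches the formula (p-2)! for small primes:
for p in (3, 5, 7):
    assert subgroup_count(p) == factorial(p - 2)
```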
lldb is not starting an application
This is my first experience with the command-line mode of lldb, and so far it has been unsuccessful.
installed minimal kit with clang, lld, lldb v5 (ubuntu 16.04)
sample application built with clang.
trying to start:
lldb application
>run
error: process launch failed: unable to locate lldb-server-5.0.0
so now the questions:
why does lldb try to run a server? This is not remote debugging.
why does lldb refer to 5.0.0 (and where can I change this setting)? Symbolic links with an xxx-5.0 suffix were added automatically for all the llvm utilities, but none with an xxx-5.0.0 suffix. It would be reasonable if this referred to lldb-server itself, without suffixes.
adding lldb-server-5.0.0 symlink doesn't solve the problem.
any idea how this should work?
by the way, an extra question - it seems the left/right/up/down arrow keys don't work in the lldb console? Instead of moving the cursor it prints escape codes:
(lldb) ^[[D^[[A^[[C^[[B
Is this any help: https://stackoverflow.com/questions/37107432/lldb-error-process-launch-failed-unable-to-locate-lldb-server ?
"why lldb tries to run a server" Nothing wrong with abstracting local and remote and any other type of debugging through the same server. Probably great for reducing duplicated code.
no, lldb-server is available in my case, in /usr/bin/, but lldb looks for lldb-server-5.0.0; I don't know why.
This is the same issue (with the same resolution) as: https://stackoverflow.com/questions/37107432/lldb-error-process-launch-failed-unable-to-locate-lldb-server
@JimIngham, I'm having the same trouble, and concurring with the OP, that does not solve the problem. :(
This is a known bug with LLDB 5.0, apparently related to the Debian packaging. The workaround is similar to the question linked in the comments, but not the same. (And yes, having this exact problem, I've confirmed the solution.)
An strace reveals the problem...
1887 26838 access("/usr/lib/llvm-5.0/bin/lldb-server-5.0.0", F_OK) = -1 ENOENT (No such file or directory)
That indicates exactly where that symlink is needed. Fixing it is as simple as a single terminal command...
$ sudo ln -s /usr/bin/lldb-server-5.0 /usr/lib/llvm-5.0/bin/lldb-server-5.0.0
Chef git cookbook: how to fix permission denied while cloning private repo?
I have a cookbook, that uses deploy_key cookbook to generate deploy key & git cookbook to clone private gitlab project.
Chef always says that he has deployed keys successfully and gave them proper rights.
But sometimes it works fine and sometimes it gives the following error, and I can't work out why.
==> default: ================================================================================
==> default: Error executing action `sync` on resource 'git[/home/vagrant/webtest]'
==> default: ================================================================================
==> default: Mixlib::ShellOut::ShellCommandFailed
==> default: ------------------------------------
==> default: Expected process to exit with [0], but received '128'
==> default: ---- Begin output of git ls-remote "git@gitlab.example.com:qa/webtest.git" "HEAD" ----
==> default: Permission denied, please try again.
==> default: Permission denied, please try again.
==> default: Permission denied (publickey,password).
==> default: fatal: Could not read from remote repository.
==> default: Please make sure you have the correct access rights
==> default: and the repository exists.
==> default: ---- End output of git ls-remote "git@gitlab.example.com:qa/webtest.git" "HEAD" ----
==> default: Ran git ls-remote "git@gitlab.example.com:qa/webtest.git" "HEAD" returned 128
Moreover, if Chef fails to clone the project with the message above, a second provisioning attempt (I've tried vagrant provision for this) will work fine (the same as if I log in to the VM and clone the project manually).
I thought that sometimes keys are not deployed in time.. but according to chef output they must be ready.
What could be the problem?
I am deploying the keys in the following way (new keys are generated on each deployment, using the GitLab project_id and token):
deploy_key "my_project_deploy_key" do
provider Chef::Provider::DeployKeyGitlab
path "#{node['webtest']['home_dir']}/.ssh"
credentials({
:token => node['webtest']['gitlab']['token']
})
api_url "#{node['webtest']['gitlab']['api_scheme']}://#{node['webtest']['gitlab']['api_domain']}"
repo node['webtest']['gitlab']['project_id']
owner node['webtest']['user']
group node['webtest']['group']
mode 00600
action :add
end
I am cloning repo this way:
git "#{node['webtest']['home_dir']}/webtest" do
repository node['webtest']['git']['repo']
checkout_branch node['webtest']['git']['branch']
ssh_wrapper "#{node['webtest']['home_dir']}/.ssh/wrap-ssh4git.sh"
user node['webtest']['user']
group node['webtest']['group']
enable_checkout false
action :sync
end
Avoid numeric permission in chef recipes. Use the quoted form mode '0600'
For the example to work, you need to make gitlab.example.com aware of your public key so ssh can use your private key to connect.
The method varies, but for modern Linux machines the ssh-copy-id may make it easier to get your public key copied correctly.
I don't need to use ssh key. I use deploy keys, that provide read-only access to clone repos. You can read about them here: http://doc.gitlab.com/ce/ssh/README.html
Deploy keys are SSH keys, for what it's worth.
@Martin sure, sorry. But i don't want to copy ssh keys. SSH key is generated for every deployment via chef, using gitlab project_id & token via deploy_key cookbook.
The error message is ssh saying that the key mechanism does not give you access.
Also "gitlab.example.com" listed in your output does not resolve. Is this a verbatim copy?
@ThorbjørnRavnAndersen no, it is just example. Yes, i understand. The problem is that rsa key are generated, deploy key is added in gitlab, but on vagrant up this issue randomly happens.
Frequency in pandas timeseries index and statsmodel
I have a pandas timeseries y that does not work well with statsmodel functions.
import statsmodels.api as sm
y.tail(10)
2019-09-20 7.854
2019-10-01 44.559
2019-10-10 46.910
2019-10-20 49.053
2019-11-01 24.881
2019-11-10 52.882
2019-11-20 84.779
2019-12-01 56.215
2019-12-10 23.347
2019-12-20 31.051
Name: mean_rainfall, dtype: float64
I verify that it is indeed a timeseries
type(y)
pandas.core.series.Series
type(y.index)
pandas.core.indexes.datetimes.DatetimeIndex
From here, I am able to pass the timeseries through an autocorrelation function with no problem, which produces the expected output
plot_acf(y, lags=72, alpha=0.05)
However, when I try to pass this exact same object y to SARIMA
mod = sm.tsa.statespace.SARIMAX(y.mean_rainfall, order=pdq, seasonal_order=seasonal_pdq)
results = mod.fit()
I get the following error:
A date index has been provided, but it has no associated frequency information and so will be ignored when e.g. forecasting.
The problem is that the frequency of my timeseries is not regular (it is the 1st, 10th, and 20th of every month), so I cannot set freq='m' or freq='D', for example. What is the workaround in this case?
I am new to using timeseries, any advice on how to not have my index ignored during forecasting would help. This prevents any predictions from being possible
First of all, it is extremely important to understand what the relationship between the datetime column and the target column (rainfall) is. Looking at the snippet you provide, I can think of two possibilities:
y represents the rainfall that occurred in the date-range between the current row's date and the next row's date. If that is the case, the timeseries is kind of an aggregated rainfall series with unequal buckets of date i.e. 1-10, 10-20, 20-(end-of-month). If that is the case, you have two options:
You can disaggregate your data using either an equal weightage or even better an interpolation to create a continuous and relatively smooth timeseries. You can then fit your model on the daily time-series and generate predictions which will also naturally be daily in nature. These you can aggregate back to the 1-10, 10-20, 20-(end-of-month) buckets to get your predicitons. One way to do the resampling is using the code below.
ts.Date = pd.to_datetime(ts.Date, format='%d/%m/%y')
ts['delta_time'] = (ts['Date'].shift(-1) - ts['Date']).dt.days
ts['delta_rain'] = ts['Rain'].shift(-1) - ts['Rain']
ts['timesteps'] = ts['Date']
ts['grad_rain'] = ts['delta_rain'] / ts['delta_time']
ts.set_index('timesteps', inplace=True )
ts = ts.resample('d').ffill()
ts['daily_rain'] = ts['Rain'] + ts['grad_rain']*(ts.index - ts['Date']).dt.days
ts['daily_rain'] = ts['daily_rain']/ts['delta_time']
print(ts.head(50))
daily_rain is now the target column and the index i.e. timesteps is the timestamp.
The other option is to approximate that the date ranges 1-10, 10-20, 20-(EOM) are each roughly 10 days, so that these are treated as equal timesteps. Of course statsmodels won't allow that as-is, so you would need to reset the index to mock datetimes for which you maintain a mapping. Below is what you would use in statsmodels as y, but do maintain a mapping back to your original dates. Freq will be 'd' (daily), and you would need to rescale the seasonality as well so that it follows the new date scale.
y.tail(10)
2019-09-01 7.854
2019-09-02 44.559
2019-09-03 46.910
2019-09-04 49.053
2019-09-05 24.881
2019-09-06 52.882
2019-09-07 84.779
2019-09-08 56.215
2019-09-09 23.347
2019-09-10 31.051
Name: mean_rainfall, dtype: float64
I would recommend the first option though as it's just more accurate in nature. Also you can try out other aggregation levels also during model training as well as for your predictions. More control!
The second scenario is that the data represents measurements only for the date itself and not for the range. That would mean that technically you do not have enough info to construct an accurate timeseries - your timesteps are not equidistant and you don't know what happened between them. However, you can still improvise and get some approximations going. The second approach listed above would still work as-is. For the first approach, you'd need to interpolate, but given that the target variable is rainfall, which has a lot of variation, I would highly discourage this!
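To illustrate the interpolation idea from option 1 with nothing but the standard library (a sketch with the numbers from the question; in practice you would use the pandas resampling shown above):

```python
from datetime import date, timedelta

# Irregular observations: (date, value) pairs, as in the question.
observations = [
    (date(2019, 10, 1), 44.559),
    (date(2019, 10, 10), 46.910),
    (date(2019, 10, 20), 49.053),
]

def daily_interpolated(obs):
    """Linearly interpolate irregular (date, value) pairs onto a daily grid."""
    out = {}
    for (d0, v0), (d1, v1) in zip(obs, obs[1:]):
        span = (d1 - d0).days
        grad = (v1 - v0) / span           # per-day gradient between the two points
        for k in range(span):
            out[d0 + timedelta(days=k)] = v0 + grad * k
    out[obs[-1][0]] = obs[-1][1]          # keep the final observation as-is
    return out

daily = daily_interpolated(observations)
assert len(daily) == 20                   # Oct 1 .. Oct 20 inclusive
assert abs(daily[date(2019, 10, 1)] - 44.559) < 1e-9
assert abs(daily[date(2019, 10, 20)] - 49.053) < 1e-9
```

A daily series like this has a regular frequency, so statsmodels can attach freq='D' to it without complaint.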
As I can see, the package uses the frequency as a premise for everything, since it's a time-series problem.
So you will not be able to use it with data of different frequencies. In fact, you will have to make an assumption for your analysis to adequate your data for the use. Some options are:
1) Consider 3 different analyses (1st days, 10th days, 20th days individually) and use 30d frequency.
2) As you have ~10d equally separated data, you can consider using some kind of interpolation and then make downsampling to a frequency of 1d. Of course, this option only makes sense depending on the nature of your problem and how quickly your data change.
Either way, I would just like to point out that how you model your problem and your data is a key thing when dealing with time series and data science in general. In my experience as a data scientist, I can say that it is by analyzing the domain (where your data came from) that you get a feeling for which approach will work better.
How do I check if a link exists in a page?
There is a form with a link; clicking on it toggles a fieldset (collapsing or expanding it) to show the fields under it. The HTML is shown below.
<span class="fieldset-legend">
<a href="#" class="fieldset-title">
<span class="fieldset-legend-prefix element-invisible">Hide</span>
Attach
</a>
<span class="summary"></span></span>
How would I go about checking whether this link exists (so I can click on it) and that the fields under it are shown?
To answer my own question, I realized (even without clicking on the link) that fields can be asserted by DrupalWebTestCase::assertFieldByName().
Cloud Functions: Can't see function logs in function explorer
When I deployed my Cloud Functions previously I could see the logs and everything in the logs explorer. However, there were some errors, and upon some research I found that in order to use my queue in the functions I have to create it with the Cloud SDK, so I downloaded it. The installation requires Python, so I downloaded that as well. I created my queue ID successfully and I placed it in my functions.
However, after all this, when I deploy a function I don't see my logs in the logs explorer anymore. The terminal tells me the deploy completed, and I see the deployed functions in the Firebase Console, but when I click on "view logs" I don't see the logs for the deployment.
Even when I go to "Recent" on my log explorer, I see the functions there. But when I click on it I don't see any log.
....................................................................
I use firebase deploy --only functions:FUNCTIONS_NAME to deploy my functions.
I'm using JavaScript
My code
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();
const database = admin.firestore();
exports.taskCallback = functions.https
.onRequest(async (req, res) => {
const {UID} = req.body;
const docRef = database.collection("ADSGHANA").doc(UID);
const doc = await docRef.get();
if (!doc.exists) {
console.log(`Document with UID ${UID} does not exist.`);
res.status(404).send("Document not found.");
return;
}
const {TimestampDeparture, PostStatus} = doc.data();
const now = admin.firestore.Timestamp.now();
if (now == TimestampDeparture && PostStatus !== "Cancelled") {
await docRef.update({PostStatus: "Started"});
console.log(`Post with UID ${UID} has been started.`);
res.status(200).send("Post started successfully.");
} else {
console.log(`Post with UID ${UID} is not ready to start.`);
res.status(200).send("Post not ready to start yet.");
}
});
exports.scheduledTask = functions.firestore.document("ADSGHANA/{UID}")
.onCreate(async (snapshot, context) => {
const data = snapshot.data();
const {UID, TimestampDeparture} = data;
const now = admin.firestore.Timestamp.now();
if (now == TimestampDeparture) {
console.log(`Post with UID ${UID} should have already started.`);
return null;
}
const project = JSON.parse(process.env.FIREBASE_CONFIG).projectId;
const location = "us-central1";
const queue = "scheduleAds";
const task = {
httpRequest: {
httpMethod: "POST",
url: `https://${location}-${project}.cloudfunctions.net/taskCallback?UID=${UID}`,
body: Buffer.from(JSON.stringify({UID})).toString("base64"),
headers: {
"Content-Type": "application/json",
},
},
scheduleTime: TimestampDeparture,
};
const {CloudTasksClient} = require("@google-cloud/tasks");
const client = new CloudTasksClient();
const parent = client.queuePath(project, location, queue);
const [response] = await client.createTask({parent, task});
console.log(`Created task ${response.name}`);
return null;
});
Have you tried these troubleshooting steps? Are you able to see logs for other functions?
I tried to replicate by following a basic deployment with your code and I am able to see logs. Can you share the steps you are following?
Ok, I will try troubleshooting later
update here once you have tried troubleshooting doc
I tried again this morning without touching anything and now I can view the logs again.
trying to run kextutil on kext file returns permissions error
Hi Stackoverflow Community.
Trying to run through the following tutorial - so I can learn how to code a driver util.
http://www.robertopasini.com/index.php/2-uncategorised/625-osx-creating-a-device-driver
I'm at the point where I'm trying to run kextutil on the kext file that my build produces.
Per the instructions I copy it to my temp folder.
But I'm getting the following error:
admins-Mac-mini:Debug admin$ kextutil -n -t /tmp/ssvac.kext
Skipping staging and system policy checks because not running as root, expect staging errors.
Kext rejected due to improper filesystem permissions: <OSKext 0x7f91d402f140 [0x7fff898b2cc0]> { URL = "file:///private/tmp/ssvac.kext/", ID = "myappleid.ssvac" }
Code Signing Failure: code signature is invalid
Authentication Failures:
File owner/permissions are incorrect (must be root:wheel, nonwritable by group/other):
/private/tmp/ssvac.kext
Contents
_CodeSignature
CodeResources
MacOS
ssvac
Info.plist
Diagnostics for /private/tmp/ssvac.kext:
Authentication Failures:
File owner/permissions are incorrect (must be root:wheel, nonwritable by group/other):
/private/tmp/ssvac.kext
Contents
_CodeSignature
CodeResources
MacOS
ssvac
Info.plist
admins-Mac-mini:Debug admin$
I tried to change the permissions / owner like so:
admins-Mac-mini:Debug admin$ chown root:wheel /tmp/ssvac.kext/
admins-Mac-mini:Debug admin$ ls -lah /tmp/ssvac.kext/
total 0
drwxrwxrwx 3 root wheel 96B 16 Oct 16:37 .
drwxrwxrwt 7 root wheel 224B 19 Oct 08:08 ..
drwxr-xr-x 5 admin wheel 160B 16 Oct 16:37 Contents
admins-Mac-mini:Debug admin$ kextutil -n -t /tmp/ssvac.kext
Not sure exactly how to resolve it.
If you have any tips, I'd appreciate it.
Thanks!
EDIT 1
My mistake was that when I copied from the debug folder to /tmp/, I didn't use the -r switch. Now that I have, this is the error I'm getting:
admins-Mac-mini:Debug admin$ cp -r ssvac.kext/ /tmp/
admins-Mac-mini:Debug admin$ sudo kextutil /tmp/
Contents/ com.apple.launchd.GufwRL5Sf0/ com.google.Keystone/ powerlog/ ssvac.kext/
admins-Mac-mini:Debug admin$ sudo kextutil /tmp/ssvac.kext/
Password:
Untrusted kexts are not allowed
Kext with invalid signature (-67050) denied: /private/var/db/KernelExtensionManagement/Staging/tmp.RLlmC1/59AFE9EA-12E3-42C0-B3FC-E98EF987D9B2.kext
Bundle (/private/tmp/ssvac.kext) failed to validate, deleting: /private/var/db/KernelExtensionManagement/Staging/tmp.RLlmC1/59AFE9EA-12E3-42C0-B3FC-E98EF987D9B2.kext
Unable to stage kext (/private/tmp/ssvac.kext) to secure location.
admins-Mac-mini:Debug admin$
As you can see from the ls output, kexts are really directories containing, at minimum, an Info.plist file and code signing information in a predefined directory layout (starting with a Contents subdirectory). Usually a kext also contains a binary executable. All files and subdirectories in the kext must have appropriate permissions for the kext to be considered for loading, which means permissions must be applied recursively with the -R flag when using chown.
Instead of using chown, I generally recommend simply copying the kext as the root user to a temporary location (for testing, prior to macOS 11) or to /Library/Extensions (from macOS 11 onwards, or when deploying, or when testing the kext's boot-time behaviour). That way you don't run into problems trying to replace it with an updated version as an unprivileged user during your code/compile/load/debug cycle:
# Copies kext to /tmp, owned by root
sudo cp -r "path/to/built.kext" "/tmp/"
# Attempts to load kext
sudo kextutil "/tmp/built.kext"
(Obligatory disclaimer pointing out that many types of kext are now deprecated, and you'll want to make sure that writing a kext really, really, really is the correct way forward for your project.)
I think my mistake was not copying the entire folder to /tmp; I copied just the one file. But now that I've done the recursive copy, I'm getting a different error. Please see Edit 1.
@dot After your edit, your question is now essentially 2 questions, which is bad for various reasons. I'll give you the quick answer for the second question, but if that's not enough, please ask it as a new question. Kext with invalid signature (-67050) denied means exactly what it says: you need to sign your kext with a valid kext signing certificate and get the kext notarised, or you need to turn off kext signing validation by turning off that part of SIP (System Integrity Protection) or all of SIP.
yup makes sense!
SensorSimulator Openintents simulating for 2 emulators
Hey, is it possible to have two instances of Sensor Simulator connecting to an app running on 2 emulators, so that I can give motion events to each of the emulators individually?
you want to run your app on two simulators simultaneously or what?
Yes... I want to run my app on two Android emulators. Each of them should have a sensor simulator of its own.
Yeah, you can run your Android application simultaneously on two emulators. Each emulator has its own id number; just look at the top left corner of your emulator and you will see it, e.g. 5554, and if you open another emulator you will find a different number like 5556, and so on.
I guess you didn't get my question... My problem is not with the Android emulators, it's with the OpenIntents sensor simulator.
I think you have to change the socket in the preferences of the SensorSimulator (PC) and in the SensorSimulator settings (Android app) in order to differentiate between the two apps on the two emulators. You have to work with different sockets.
Changing Global: Text Area based on an exposed filter in views?
A client wants to display different text in a page depending on the value of the View's Exposed Filter.
I could throw this logic into the template.php or use javascript (showing and hiding a span depending on the value), but this feels like a very poor way to do this when considering maintainability?
Does there exist a Views plugin to perhaps show different text blocks in the header based on the value of an exposed filter?
Currently on Drupal 7.31 using Better Exposed Filters. I'm open to using something other than Global: Text area if there is a better mechanism.
You could use
function hook_views_pre_render(&$view) {
  $view->attachment_before = $view->exposed_input['your_desired_field'];
}
Hope it works
I'm looking to add text to that area based on the value of the exposed filter. Would this be the appropriate hook in which to place an if (exp_Filter=='value') { $view->attachment_before .= 'Additional Text'; }?
Yes, you can check the exposed_input array with dpm($view->exposed_input) (you need the Devel module installed and enabled) to find out the key for your desired value. Then compare it to whatever condition through your if statement.
did you solve it?
string vs numeric fields in a query
I'm getting an error 3061, too few parameters on this:
Dim PrbApps1 As Recordset
Set PrbApps1 = CurrentDb.OpenRecordset("Select * FROM [Application] WHERE [PYR_TenderRef] =" & TenderID.Value)
where TenderID is a textbox
I'm new to this, but I've checked everything I can think of. It's probably a simple error, but any help would be greatly appreciated.
You need double quotes on the other side of your textbox value. But if it's in an access form it should be referenced with Me.
Set PrbApps1 = CurrentDb.OpenRecordset("Select * FROM [Application] WHERE [PYR_TenderRef] = '" & Me!TenderID.Value & "'")
Thanks, it actually needed ' as well: = '" & Me.TenderID.Value & "'"), though why I don't know.
FYI, use Me! instead of Me. With Me., the textbox is referenced as a property of the Me object; Me! looks the textbox up as an item of the form's default collection, which executes a little faster when you have a lot of references. (Though IntelliSense doesn't work with it, sadly.)
I had no idea, I thought it was just preference. Thanks @Elias
Here are some explanations: http://msdn.microsoft.com/en-us/library/office/aa210660(v=office.11).aspx
Callbacks from C native to Xamarin native
My requirement is to implement an async call from a native C library into a Xamarin native application, to get all the data coming from the server.
/**
* @brief OOB Data receive callback (__NULLABLE)
*
* When receiving data from the sender this function is called.
* You get a pointer to the data, the length and the NetworkConnection object containing your
* object if you did put a object there.
*
* @param function getting data from the sender.
*/
std::function<void(const uint8_t *pBuf, size_t lSize, std::shared_ptr<NetworkConnection> &rConnection, rist_peer *pPeer)>
networkOOBDataCallback = nullptr;
I am getting all the data coming from the sender server in this callback.
Thanks in Advance
You can try to compile the C source code into platform-specific native libraries, so that it can be called by the C# wrapper; check the details here: https://learn.microsoft.com/en-us/xamarin/cross-platform/cpp/ .
Thank you for your response. @ColeX-MSFT
I have already tried all the ways, but it's not working.
I want a continuous callback from the C wrapper to the C# Android project.
Which part does not work ? Can you provide more details?
URL-NFC tag unique and identifyable request
I have the following problem: I'm developing a web-app which controls a lighting console, and only one person at a time can control it. When somebody new visits the web-app, this person gets control, and this is the expected behavior. But I don't want just anybody from anywhere with the web-app URL to take over control, only people standing in front of the lighting console.
My first approach was to use a tablet which displays a QR code with a one-time-valid URL. You take over control of the console when you scan the QR code and follow the one-time-valid URL, and the tablet gets notified to regenerate the QR code.
But maybe there is a solution without an expensive tablet: is there a chance I can configure an NFC tag so that this tag generates a new URL on every tap, which cannot be reproduced (e.g. by signing it with a private key)?
It also has to support Android and iOS
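For the QR/tablet approach, the server-side one-time URL can be sketched like this (the names, the secret, and the URL are all assumptions; nothing here is tied to a particular framework):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"change-me"   # hypothetical server-side secret
used_nonces = set()         # in production: a database table

def make_one_time_url(base="https://example.com/control"):
    """Generate a single-use control URL; the tablet renders it as a QR code."""
    nonce = secrets.token_urlsafe(16)
    sig = hmac.new(SECRET_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{base}?nonce={nonce}&sig={sig}", nonce

def claim_control(nonce, sig):
    """True exactly once per nonce, and only for a valid signature."""
    if nonce in used_nonces:
        return False
    expected = hmac.new(SECRET_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, sig):
        used_nonces.add(nonce)
        return True
    return False

url, nonce = make_one_time_url()
sig = url.rsplit("sig=", 1)[1]
print(claim_control(nonce, sig))  # True  (first scan takes control)
print(claim_control(nonce, sig))  # False (replaying the same URL is rejected)
```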
It is not the job of the NFC tag to generate a new link; it is the job of the device that reads the tag to generate a unique link and then overwrite the link on the tag.
The scenario could be:
Read the link from the tag and save it.
Generate a new link.
Rewrite the link on the tag.
End the session.
Most standard NFC tags are just read/write EEPROM data chips, so the data on the card is static; while most cards have a unique serial number, this too is static data.
There are some advanced cards out there that can run custom programs (one type is called JavaCard, https://en.wikipedia.org/wiki/Java_Card ), so you could write a program that generates the unique URL you want and then presents it to the reader device as a standard NDEF message that would launch the device's browser to this dynamic URL.
How can I use both .Hyperlink.Add Address with .Formula on VBA?
Greetings for everyone!
Introduction.
At work, we use an electronic document management web application (let's call it "webdocs") that allows us to search documents by their specific number.
Webdocs has an option to download an Excel file showing the list of expired and upcoming external/internal documents.
That Excel file has a column which consists of the № character + document number + line break + date of entry.
The webdocs.
When I search for a document, the URL looks like the following:
https://webdocs.com/#!/cancelar/incoming/document_list_organization?page=1&document_recipient_reg_number=12345678&boss=-1&from_date=01.01.2022&to_date=12.31.2022&year=2022
So the URL consists of 3 main blocks; the second one is what I am looking for:
Protocol HTTPS + Domain + Documents area + Visible page number;
Document number (I wrote 12345678 as the placeholder);
Specific filter + Date filter.
The problem
I wrote VBA code that adds an additional column and pastes the URL into each cell of the data table.
The main point is to replace the second block of the URL with the value of column "B"; that is why I have added a formula that ignores the "№" character and takes the values until the line break (character 10).
Dim zRange, zCells As Range
Set zRange = .Range("I3", .Range("I3").End(xlDown)).Offset(0, 5)
.Range("N2").Value = "Find the document"
For Each zCells In zRange
.Hyperlinks.Add Anchor:=zCells, _
Address:="https://webdocs.com/#!/cancelar/incoming/document_list_organization?page=1&document_recipient_reg_number="
& zCells.Formula = "RIGHT(LEFT(" & "B" & zCells.Row & ",FIND(CHAR(10)," & "B" & zCells.Row & ")-1), LEN(LEFT(" & "B" & zCells.Row & ",FIND(CHAR(10)," & "B" & zCells.Row & ")-1))-2)"
& "&boss=-1&from_date=01.01.2022&to_date=12.31.2022&year=2022", _
ScreenTip:="Open the document", _
TextToDisplay:="Open the document"
Next zCells
The code interprets the ".Formula" as text, and when I opened the hyperlink I saw the formula in the URL's second block, not the value from the cells of column "B". The code does not work as it should.
The question
What is the way to fix the problem?
Hint:
Sub TestA()
Const Address As String = "https://webdocs.com/#!/cancelar/incoming/document_list_organization?page=1&document_recipient_reg_number=12345678&boss=-1&from_date=01.01.2022&to_date=12.31.2022&year=2022"
MsgBox "Reg #: " & Split(Split(Address, "=")(2), "&")(0)
End Sub
Likewise:
Sub TestB()
Dim DocID As String
DocID = Range("B" & zCells.Row).Text
MsgBox "Reg #: " & Split(Split(DocID, "№")(1), Chr(10))(0)
End Sub
How can I pass multiple values in a function parameter?
I am trying to make a program that calculates compound interest with 3 different lists. The first item in each list are the variables needed in the formula A=P(1+r)^n. These are the instructions.
Albert Einstein once said “compound interest” is man's greatest invention. Use the equation A = P(1+r)^n,
where P is the amount invested, r is the annual percentage rate (as a decimal, 5.0% = 0.050) and n is the
number of years of the investment.
Input: 3 lists representing investments, rates, and terms
investment = [10000.00, 10000.00, 10000.00, 10000.00, 1.00]
rate = [5.0, 5.0, 10.0, 10.0, 50.00]
term = [20, 40, 20, 40, 40]
Output: Show the final investment.
$26532.98
$70399.89
$67275.00
$452592.56
$11057332.32
This is the code that I have written so far:
P = [10000.00, 10000.00, 10000.00, 10000.00, 1.00]
r = [5.0, 5.0, 10.0, 10.0, 50.00]
n = [20, 40, 20, 40, 40]
# A=P(1+r)
def formula(principal,rate,years):
body = principal*pow((1 + rate),years)
print "%.2f" %(body)
def sort(lst):
spot = 0
for item in lst:
item /= 100
lst[spot] = item
spot += 1
input = map(list,zip(P,r,n))
sort(r)
for i in input:
for j in i:
formula()
I first define a function to calculate the compound interest, then I define a function to convert the rates to the proper format. Then, using map (which I'm not completely familiar with), I separate the first items of each list into tuples within the new input list. What I am trying to do is find a way to feed the three items within each tuple into principal, rate and years in the formula function.
I am open to critique and advice. I am still fairly new to programming in general.
Thank you.
Judging by the title I have a feeling that your actual question could and should be decoupled from your specific code, with a clear, concise MCVE and without all the boilerplate.
Firstly, I think you should return something from your formula, that is, your calculation:
def formula(principal,rate,years):
return principal*pow((1 + rate),years) #return this result
then you can use the return value from the formula - be it for printing or any other use for further calculation.
Also, since you have the same number of items in your three lists, why not just use range(len(p)) to iterate over them?
for x in range(len(p)):
print(formula(p[x],r[x],n[x]))
x in range(len(p)) will iterate with x taking the values:
0, 1, ..., len(p) - 1 # in your case, len(p) = 5, thus x ranges from 0 to 4
And p[x] is the way to get the x-th-indexed element from p. Putting it in your context, you will get combinations like this:
when x= principal rate years
----------------------------------
0 10000.00 5.0 20
1 10000.00 5.0 40
2 10000.00 10.0 20
3 10000.00 10.0 40
4 1.00 50.0 40
This way, you do not need to use a tuple.
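Putting the pieces together, a runnable sketch (written for Python 3 here, so print is a function; dividing the rate inside the formula avoids mutating the rate list in place):

```python
def formula(principal, rate, years):
    """Compound interest A = P * (1 + r)**n, with the rate given in percent."""
    return principal * (1 + rate / 100.0) ** years

P = [10000.00, 10000.00, 10000.00, 10000.00, 1.00]
r = [5.0, 5.0, 10.0, 10.0, 50.00]
n = [20, 40, 20, 40, 40]

for x in range(len(P)):
    print("$%.2f" % formula(P[x], r[x], n[x]))
# $26532.98, $70399.89, $67275.00, $452592.56, $11057332.32
```

The "%.2f" format also answers the follow-up below: it always prints two decimal places, even when the result ends in zero.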
Wow, thanks so much for the feedback. It works! :) On another note, how can I format the outputs to round to the hundredths place if I'm using return? I used round and that worked, but when the result ended evenly it only displayed one zero. I'm really OCD about that stuff, so I'm trying to use the %.2f somehow. So if the answer is 67284.0, I want it to look like 67284.00.
@avbirm check this recent post. Seems like what you want. :)
PanacheEntityBase setting by default id as not nullable, even if specifying the reverse
I have a Java class set up as an entity, defined as follows:
package com.redhat.bvbackend.player;
import io.quarkus.hibernate.orm.panache.PanacheEntityBase;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.FetchType;
import javax.persistence.OneToMany;
import javax.persistence.Entity;
import javax.persistence.SequenceGenerator;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Column;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import com.redhat.bvbackend.team.Team;
@Entity
public class Player extends PanacheEntityBase {
@Id
@Column(name = "player_id", nullable = true)
@SequenceGenerator(name = "playerSequence", sequenceName = "playerIdSequence", allocationSize = 1, initialValue = 1)
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "playerSequence")
public Long id;
public String name;
public String familyName;
public int age;
@OneToMany(mappedBy = "name", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.EAGER)
public List<Team> teams;
@NotNull
@Size(min = 1, max = 1, message = "Handed must be either Left or Right")
public String handed;
}
My class extends PanacheEntityBase, I am setting the column to allow nullable entries, and I am creating a generator to automatically increment the player_id for each new entry. My thought is that if a generator is defined, I shouldn't need to set nullable, since the generator already has an initialValue specified. Whether I keep the @Column annotation or not makes no difference; I always get the same output. See below.
I would like to create a player as follows:
INSERT INTO player (age,familyname,handed,name) VALUES (25,'foo','x','y');
without the need to specify the id. However when I do so I get:
ERROR: null value in column "player_id" violates not-null constraint
DETAIL: Failing row contains (null, 25, foo, x, y).
What am I doing wrong? Thanks in advance.
There is no "generator" though is there? The "Default" in that table definition is empty.
Though you have a sequence generator created for that ID as playerIdSequence your column does not have a default value set.
The @GeneratedValue is used within Panache's own insert path: it sets the value of the ID when building the SQL request.
If you want the ID to be assigned automatically when running raw SQL requests against the database yourself, you should give the column a default value of nextval('playerIdSequence'). This way, it will get the next number in the sequence.
You can change the table like this:
alter table public.player alter column player_id set default nextval('playerIdSequence');
Angular Material mat-cell datepicker pass custom ID to the event handler function
I added an Angular Material mat-datepicker inside a mat-cell. When the datepicker is selected and the date is changed, it calls a function on the TypeScript side. I am trying to figure out how, during the mat-table binding, to bind the ID (primary key) of the record to that mat-cell and pass that value to the event handler function. Here is my code so far:
<ng-container matColumnDef="ProcessDate">
<mat-header-cell *matHeaderCellDef mat-sort-header><strong>{{columnName}}</strong></mat-header-cell>
<mat-cell id={{mydata.id}} *matCellDef="let mydata">
<input id={{mydata.id}} class="control-width" matInput [matDatepicker]="picker1" [value]="myData.date" (dateChange)="addEvent(id, $event)">
<mat-datepicker-toggle matSuffix [for]="picker1"></mat-datepicker-toggle>
<mat-datepicker #picker1 color="primary"></mat-datepicker>
</mat-cell>
</ng-container>
On the TypeScript side, the id is 'undefined'. Any help is appreciated.
if the id you want is the mydata.id property, change id in the callback to mydata.id. There is no id property in that view, and unless you have a property named id on your component, id will be undefined.
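In plain TypeScript terms, the handler then receives the row's id directly. A minimal sketch (the event type is stubbed here as an assumption; in a real Angular app it would be MatDatepickerInputEvent, and the template would call (dateChange)="addEvent(mydata.id, $event)"):

```typescript
// Stub of the datepicker event type, for illustration only.
interface DatepickerEvent { value: Date | null; }

class MyComponent {
  lastChanged: { id: number; date: Date | null } | null = null;

  // Receives the row's primary key plus the change event.
  addEvent(id: number, event: DatepickerEvent): void {
    this.lastChanged = { id, date: event.value };
  }
}

const c = new MyComponent();
c.addEvent(42, { value: new Date("2020-01-02") });
console.log(c.lastChanged?.id); // 42
```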
Starting an inequality family: How far can we go?
Consider the inequality
$$(ab+bc+ca)\cdot\left(\frac{1}{(ka+b)^2}+\frac{1}{(kb+c)^2}+\frac{1}{(kc+a)^2}\right)\,\ge\,\frac{9}{(k+1)^2}\tag{"Case $k$"}$$
with variables $\,a,b,c\in\mathbb{R}^{>0}\,$ and parameter $\,k\in\mathbb{R}$.
(At least) the two instances with $\,k=1\,$ and
$\,k=2\,$
have already found their home at math.SE, being answered in the positive. The case $\,k=1\,$ is entitled
'Hard inequality' (aka "Iran 1996" amongst insiders I guess), cf the comments there containing further references.
My question: For which other values of the parameter $k$ does the inequality "Case $k$" hold true?
Please note that "Case $k$" is invariant under replacing $\,k\mapsto \frac{1}{k}\,$ and simultaneously switching any two out of the three variables.
So I'd expect that any $\,k>0\,$ yields a valid statement. To 'complete the proof job' it would suffice if a reduction from $\,k>1\,$to $\,k=1\,$ can be achieved.
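As a sanity check (not a proof), the conjectured range survives random numerical testing; a quick sketch:

```python
import random

def lhs(a, b, c, k):
    """Left-hand side of "Case k"."""
    s = 1 / (k*a + b)**2 + 1 / (k*b + c)**2 + 1 / (k*c + a)**2
    return (a*b + b*c + c*a) * s

def rhs(k):
    """Right-hand side 9/(k+1)^2."""
    return 9 / (k + 1)**2

random.seed(0)
violations = 0
for _ in range(20000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    k = random.uniform(0.0, 5.0)
    if lhs(a, b, c, k) < rhs(k) - 1e-9:
        violations += 1
print(violations)  # 0 in this sample
```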
Check here: https://math.stackexchange.com/questions/3828773/ab-bc-ca-left-frac-1a-pba-qb-frac-1b-pcb-qc/3829012#3829012
A proof in the parameter range $\,k\geqslant 0\,$ is proposed which relies on the known case $k=1$.
It encompasses the following steps, hereby submitted to the community's critical eye:
Variable transformation, to simplify the subsequent
clearing of denominators.
Scaling the inequality and
arguing that its $k=1$ instance yields a lower bound for other values of $k$.
Limit case $k\to 0$
Let's send $\:a,b,c\:$ to the new variables $\:u,v,w\:$ via
$$\begin{pmatrix}u\\ v\\ w\end{pmatrix}\;=\;
\begin{pmatrix}k&1&0\\0&k&1\\1&0&k\end{pmatrix}\,
\begin{pmatrix}a\\b\\c\end{pmatrix}$$
Note that $\,u+v+w=(k+1)(a+b+c)$, and especially
$$\begin{matrix}
\sum_\text{cyc}uv & = & \left(k^2+k+1\right)\sum_\text{cyc}ab & +
& k\:\sum_\text{cyc}a^2 \\[1.5ex]
\sum_\text{cyc}u^2 & = & 2k\:\sum_\text{cyc}ab &
+ & \left(k^2+1\right)\sum_\text{cyc}a^2\,.
\end{matrix}$$
Assume $\,k>0\,$ and combine the preceding two identities to obtain
$$
\left(k+\frac 1 k -1\right)(k+1)^2\sum_\text{cyc}ab\;=\;
\left(k+\frac 1 k\right)\sum_\text{cyc}uv -\sum_\text{cyc}u^2\,.
$$
This is used when multiplying ("Case $k$"), the given inequality, with $\,(k+\frac 1 k -1)(k+1)^2\,u^2v^2w^2$, and after clearing the RHS let
$$
g(k,u,v,w)\;:=\;\left[\left(k+\frac 1 k\right)\sum_\text{cyc}uv -\sum_\text{cyc}u^2 \right]
\cdot\sum_\text{cyc}u^2v^2\; -\;9\left(k+\frac 1 k -1\right)u^2v^2w^2\,.
$$
Note that the freedom to scale the function $g$ has been exploited such that the second "$\sum$" in the brackets does not depend on $k\;\implies$ will drop out upon differentiation w.r.to $k$.
It has to be shown that $\,g(k,u,v,w)\geq 0\;\:\forall\, k,u,v,w>0\,$.
This is accomplished by one-variable analysis in $k$ with $\,u,v,w\,$ fixed.
$g$ enjoys the invariance property $\,g(1/k,\ldots)=g(k,\ldots)\,$ implying
$\,\frac 1kg_k\big(\frac 1k,\ldots\big)=-k\,g_k(k,\ldots)\,$ where $g_k\equiv\frac{\partial}{\partial k}g$. In particular $\,g_k(1,\ldots)=0\,$. Thus the known case $\,g(1,\ldots)\geqslant0\,$ is a bound for $\,g(k,\ldots)\,$, but we badly need $\,g_k(k,\ldots)\geqslant0\,$ for $\,k\geqslant 1\,$ to conclude it's a lower bound!
Now
$$
g_k(k,u,v,w)\;=\;\left(1-\frac 1{k^2}\right)
\left[\left(\sum_\text{cyc}uv\right)\left(\sum_\text{cyc}u^2v^2\right)\; -\;9u^2v^2w^2\right]
$$
and separation of terms involving all three variables from those containing two and distributing the multiples of $\,u^2v^2w^2\,$ accordingly leads to
$$
=\;\left(1-\frac 1{k^2}\right)
\left[uvw\sum_\text{cyc}u(v-w)^2\; +\;\frac{uv+vw+wu}{2}
\sum_\text{cyc}u^2(v-w)^2\right]\,.
$$
Bingo, coz the bracket is a sum of squares with positive coefficients. This completes the $\,k>0\,$ case.
Since the inequality is built on "$\geqslant$" (and not "$>$"), the case "$k=0$" holds by continuity too; just send $k$ to $0\,$.
Maybe the following delirium of mine will help.
It's enough to prove our inequality for all $k\geq1$.
A full expending gives $f(k)\geq0$, where
$$f(k)=\sum_{cyc}(a^3b^3+a^3b^2c+a^3c^2b-3a^2b^2c^2)k^6+$$
$$+2\sum_{cyc}(a^4c^2+a^3b^3+a^4bc-7a^3b^2c+3a^3c^2b+a^2b^2c^2)k^5+$$
$$+\sum_{cyc}(a^5b+a^5c-9a^4b^2+4a^4c^2+2a^3b^3+5a^4bc+10a^3b^2c-22a^3c^2b+8a^2b^2c^2)k^4+$$
$$+2\sum_{cyc}(a^5b+a^5c+a^4b^2+a^4c^2-8a^3b^3-6a^4bc+8a^3b^2c+8a^3c^2b-6a^2b^2c^2)k^3+$$
$$+\sum_{cyc}(a^5b+a^5c+4a^4b^2-9a^4c^2+2a^3b^3+5a^4bc-22a^3b^2c+10a^3c^2b+8a^2b^2c^2)k^2+$$
$$+2\sum_{cyc}(a^4b^2+a^3b^3+a^4bc+3a^3b^2c-7a^3c^2b+a^2b^2c^2)k+$$
$$+\sum_{cyc}(a^3b^3+a^3b^2c+a^3c^2b-3a^2b^2c^2).$$
We'll prove that
$$\sum_{cyc}(a^4c^2+a^3b^3+a^4bc-7a^3b^2c+3a^3c^2b+a^2b^2c^2)\geq0.$$
Indeed, we need to prove that
$$\sum_{cyc}\left(\frac{a^2}{b^2}+\frac{ab}{c^2}+\frac{a^2}{bc}-\frac{7a}{c}+\frac{3a}{b}+1\right)\geq0.$$
Let $\frac{a}{b}=x$, $\frac{b}{c}=y$ and $\frac{c}{a}=z$.
Hence, $xyz=1$ and we need to prove that
$$\sum_{cyc}(x^2+x^2y+x^2z-7xy+3x+1)\geq0.$$
Let $x+y+z=3u$, $xy+xz+yz=3v^2$ and $xyz=w^3$.
Hence, we need to prove that
$$(9u^2-6v^2)w+9uv^2-3w^3-21v^2w+9uw^2+3w^3\geq0$$
or $g(v^2)\geq0$, where
$$g(v^2)=(u-3w)v^2+u^2w+uw^2.$$
We see that $g$ is a linear function, which says that it's enough to prove the last inequality for an extremal value of $v^2$, which happens for equality case of two variables.
Let $y=x$ and $z=\frac{1}{x^2}$.
We need to prove that
$$(x-1)^2(2x^5-x^4+2x^3+10x^2+4x+1)\geq0,$$
which is obvious.
Hence,
$$f''''(k)=360\sum_{cyc}(a^3b^3+a^3b^2c+a^3c^2b-3a^2b^2c^2)k^2+$$
$$+240\sum_{cyc}(a^4c^2+a^3b^3+a^4bc-7a^3b^2c+3a^3c^2b+a^2b^2c^2)k+$$
$$+24\sum_{cyc}(a^5b+a^5c-9a^4b^2+4a^4c^2+2a^3b^3+5a^4bc+10a^3b^2c-22a^3c^2b+8a^2b^2c^2)\geq$$
$$\geq360\sum_{cyc}(a^3b^3+a^3b^2c+a^3c^2b-3a^2b^2c^2)+$$
$$+240\sum_{cyc}(a^4c^2+a^3b^3+a^4bc-7a^3b^2c+3a^3c^2b+a^2b^2c^2)+$$
$$+24\sum_{cyc}(a^5b+a^5c-9a^4b^2+4a^4c^2+2a^3b^3+5a^4bc+10a^3b^2c-22a^3c^2b+8a^2b^2c^2)=$$
$$=24\sum_{cyc}(a^5b+a^5c+4a^4b^2+a^4c^2+27a^3b^3+15a^4bc-45a^3b^2c+23a^3c^2b-27a^2b^2c^2)\geq0.$$
Hence,
$$f'''(k)=120\sum_{cyc}(a^3b^3+a^3b^2c+a^3c^2b-3a^2b^2c^2)k^3+$$
$$+120\sum_{cyc}(a^4c^2+a^3b^3+a^4bc-7a^3b^2c+3a^3c^2b+a^2b^2c^2)k^2+$$
$$+24\sum_{cyc}(a^5b+a^5c-9a^4b^2+4a^4c^2+2a^3b^3+5a^4bc+10a^3b^2c-22a^3c^2b+8a^2b^2c^2)k+$$
$$+12\sum_{cyc}(a^5b+a^5c+a^4b^2+a^4c^2-8a^3b^3-6a^4bc+8a^3b^2c+8a^3c^2b-6a^2b^2c^2)\geq$$
$$\geq120\sum_{cyc}(a^3b^3+a^3b^2c+a^3c^2b-3a^2b^2c^2)+$$
$$+120\sum_{cyc}(a^4c^2+a^3b^3+a^4bc-7a^3b^2c+3a^3c^2b+a^2b^2c^2)+$$
$$+24\sum_{cyc}(a^5b+a^5c-9a^4b^2+4a^4c^2+2a^3b^3+5a^4bc+10a^3b^2c-22a^3c^2b+8a^2b^2c^2)+$$
$$+12\sum_{cyc}(a^5b+a^5c+a^4b^2+a^4c^2-8a^3b^3-6a^4bc+8a^3b^2c+8a^3c^2b-6a^2b^2c^2)=$$
$$12\sum_{cyc}(3a^5b+3a^5c-17a^4b^2+19a^4c^2+16a^3b^3+14a^4bc-32a^3b^2c+4a^3c^2b-10a^2b^2c^2)\geq$$
$$12\sum_{cyc}(3a^5b+3a^5c-17a^4b^2+19a^4c^2+14a^3b^3+14a^4bc-30a^3b^2c+4a^3c^2b-10a^2b^2c^2)=$$
$$=12\sum_{cyc}(3x^3y+3x^3z-17x^2y^2+19x^2+14x^2y+14x^2z-30xy+4x-10),$$
which can be negative!
What is the conclusion? $f'''(k)$ may take some negative values, but that does not decide whether $f(k)$ can be negative.
@user254665 Sometimes it helps. By the way, I think I see something!
@MichaelRozenberg Whether the "something" is worthwhile to be shared can only be estimated if you share it, don't you think?
@Hanno No! My "something" it's just nothing. I am sorry.
@MichaelRozenberg All good ;-) U needn't be sorry. Btw, had an idea how to show the general case when assuming "Iran 1996", i.e. case $k=1$. Hope to get it written down soon.
"Case k=0"
Just another brick in the wall, not a comprehensive answer. It is shown that the inequality holds true for the parameter value $k=0$.
After clearing denominators we face the expression
$$(ab+bc+ca)\left[\sum_\text{cyc}a^2b^2\right]\;-\;9a^2b^2c^2$$
to be transformed in a sum of squares with positive prefactors.
In the first summand separate terms involving all three variables from those containing two, and spread the multiples of $a^2b^2c^2$ accordingly
$$\begin{eqnarray}
& =\quad & abc\left[\sum_\text{cyc}\left(b^2c+bc^2\right)-6abc\right]\;+\;\underbrace{\sum_\text{cyc}a^3b^3 - 3a^2b^2c^2}_\text{is AM-GM} \\[3ex]
& =\quad & abc\sum_\text{cyc}a(b-c)^2\:
+\:\frac{ab+bc+ca}{2}\,\sum_\text{cyc}a^2(b-c)^2
\end{eqnarray}$$
To the underbraced summand the identity
$r^3+s^3+t^3-3rst=\frac 12(r+s+t)\sum_\text{cyc}(r-s)^2$
with $ab,bc,ca\,$ inserted has been applied.
implicit function theorem with domain of definition not an open set or half open set
The following is a question about the nature of the domain of definition when we wish to apply the Implicit Function Theorem.
My question has to do with the nature of the domain of definition, say $ S \subset \mathbb R^{n+m}.$ In most sources I have looked, it is made the assumption that we take $S$ to be an open subset of the Euclidean space $ \mathbb R^{n+m}.$
What if instead of asking $S$ to be an open set, we assume that $S$ is the cartesian product of an open set with an arbitrary set.
Setting: Suppose that $S$ is of the shape
$$ S = S_1 \times S_2, \quad \text{where} \quad S_1 \subset \mathbb R^n \quad \text{is open, and} \quad S_2 \subset \mathbb R^m \quad \text{is arbitrary}. $$
Suppose that $ f = (f_1, \ldots, f_m) : S \to \mathbb R^m$ is a vector-valued function defined for $
(x,y) \in S,$ where $ x$ is an $n$-tuple and $y$ is an $m$-tuple. We suppose that $f$ is continuously differentiable in $S.$ Let $(a,b) = (a_1, \ldots, a_n, b_1, \ldots, b_m) \in \text{int} (S)$ such that $ f(a,b) =0.$ We suppose that the Jacobian matrix
$$ J_{f,y} (a,b) = \left[ \frac{ \partial f_i}{\partial y_j} (a,b) \right]_{ \substack{ 1 \leq i \leq m \\ 1 \leq j \leq m}} $$
is invertible.
Question 1: Does there exist an open set $ U \subset \mathbb R^n$ containing the point $ a= (a_1, \ldots, a_n)$ and a continuous function $ g : U \to \mathbb R^m$ such that one has $ g(a)=b $ and $ f(x, g(x)) =0$ for all $ x \in U ?$
Question 2: What can we say about the uniqueness and the differentiability of the function $g ?$
Any reference would be really appreciated.
Edit: Following the comment of nullUser below, let me give the motivation behind my question. As I wrote at the beginning of the post, my question is related to the set of definition of the functions. For this reason, instead of having functions defined everywhere in $ \mathbb R $, let us suppose that our functions are defined only for non-negative reals, namely on the set $[0, \infty).$
Let $f_1, \ldots, f_m: [0, \infty)^{n+m} \to \mathbb R$ be real-valued functions defined for $ (x,y) = (x_1, \ldots, x_n, y_1, \ldots, y_{m}) \in [0, \infty)^{n+m}.$ We set
$$ f = (f_1, \ldots, f_m) : [0, \infty)^{n+m} \to \mathbb R^m.$$
Now we make the following assumptions.
The functions $f_1, \ldots, f_m$ are continuously differentiable on $[0, \infty)^{n+m}.$
There exists a point $(a,b) = (a_1, \ldots, a_n, b_1, \ldots, b_m) \in [0, \infty)^{n+m}$ with $b_1, \cdots, b_m > 0,$ such that $ f_1(a,b)= \cdots = f_m (a,b)=0.$
The Jacobian matrix
$$
J_{f,y} (a,b) = \left[ \frac{ \partial f_i}{ \partial y_j} (a,b) \right]_{
1\leq i,j \leq m}
$$
is invertible.
Question 3: Can we apply the Implicit Function Theorem in order to deduce that there exist an open set $ U \subset \mathbb R^n$ containing the point $a = (a_1, \ldots, a_n)$ and a continuous (uniqueness or differentiability ??) function $ g : U \to \mathbb R^m,$ so that the following hold:
(i). One has $ g(a) = b.$
(ii). For any $ x =(x_1, \ldots, x_n) \in U$ one has
$$ f_1 (x, g(x)) = \cdots = f_m (x, g(x)) =0.$$
The point of confusion is that the set $[0, \infty)$ is not an open subset of $\mathbb R.$
Pretty much all the standard theorems on differentiability fail if the set $S$ is not open, or the closure of an open set, or if $S$ has some other specific form.
@nullUser: Thanks for your comment. I was expecting the failure of differentiability as well. However, I don't see a reason why one should not be able to solve (continuously) the implicit system, as described in Question 1.
Note that you need to clarify the meaning of differentiability if your set is not open. Usually it is taken to mean differentiable on the interior such that all derivatives extend continuously and uniquely to the closure, or that the function is the restriction of a differentiable function from an open set containing your non-open set (actually one can show that for sets which are not too wild these are equivalent). In both cases you can then apply the implicit function theorem to the function on an open set (and consider the boundary points as limits, or restrict the more general solution).
@AlexanderSchmeding: Thank you for your comment. I see your point. The approach of considering an open set which is a super-set of the non-open set is clearer to me. However, the question is what happens when we cannot do this. That was the main motivation of Question 3 in the post.
In the setting of Question 3, we have to consider an open sub-set of the non-open set and then consider the boundary points as limits. How can we make this explicit? I mean, what will the open sub-set (over which we apply the Implicit Function Theorem and which contains the zero) look like?
Select all rows that fully intersect with the input array?
Consider the following table
id (int) | lang_codes (text[])
---------|--------------------
1        | {eng,fre}
2        | {eng}
I need to select all the rows from the table that intersect with the input array, but only when all the items in the lang_codes are present in the input array, e.g.:
if the input array contains [eng,fre,ger] then it returns all the rows
while [eng,ger] won't return the first record, because it needs both codes to be present in the input array.
I've tried the && operator, but it returns all the rows:
select * from my_table where lang_codes && ARRAY['eng','ger'];
On the other hand, the @> operator returns rows only when the array matches fully
Are you sure? @> should be "contains" not "equality". Oh - do you have the arguments the wrong way around?
The <@ operator should do the trick:
select * from my_table where lang_codes <@ ARRAY['eng','ger'];
SQL Fiddle
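The <@ ("is contained by") check can be sketched in plain Python with sets. This is only an illustration of the matching rule, not how Postgres implements it, and it assumes the codes within a row are distinct:

```python
rows = [
    {"id": 1, "lang_codes": ["eng", "fre"]},
    {"id": 2, "lang_codes": ["eng"]},
]

def contained_by(lang_codes, input_codes):
    # A row matches only when every code in lang_codes appears in the input.
    return set(lang_codes) <= set(input_codes)

matches = [r["id"] for r in rows if contained_by(r["lang_codes"], ["eng", "ger"])]
# Only row 2 qualifies: row 1 also requires "fre" to be in the input.
```

With the input ['eng','fre','ger'] both rows would match, mirroring the behavior described in the question.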
Symfony command on controller generates "Maximum execution time"
I used this snippet in my controller
use Symfony\Bundle\FrameworkBundle\Console\Application;
use Symfony\Component\Console\Input\StringInput;
use Symfony\Component\Console\Output\StreamOutput;

$app = new Application($this->container->get('kernel'));
$input = new StringInput('generate:doctrine:entity --no-interaction --entity=ModelBundle:MyTest --fields="title:string(100)"');
$output = new StreamOutput(fopen('php://temp', 'w'));
$app->doRun($input, $output);
It triggered the error "Error: Maximum execution time of 30 seconds exceeded", I increased the Maximum execution time to 100, and still got the same error.
I tried the same command in the CLI and it worked fine.
I replaced in my controller the command generate:doctrine:entity by container:debug and I didn't get the error "Maximum execution time".
$input = new StringInput('container:debug');
I am using Symfony 2.6.1
Any hint ?
Maybe the command you are calling requires some user input.
Not sure what you want to do but have you tried a simple exec() or passthru() call?
Back up the download folder on win10 in OneDrive
It seems that OneDrive can only back up "Desktop", "Documents" and "Photos", but not the "Downloads" folder?
Do you see the Download folder in the regular window?
The Downloads folder is sort of designed for ephemeral data.
However, you can try to do the same thing OneDrive does with the other folders: Move it. To do so, open the properties window and go to the “Location” tab:
Use the “Move” button (or simply enter the path manually). It could be something like C:\Users\Daniel\OneDrive\Downloads in my case. Then click OK. If the folder does not exist yet, you’ll have to confirm that it should be created.
You will then be asked whether the existing files should be moved. It is very important you confirm this, otherwise you’ll end up with two “Downloads” folders in your user profile.
Afterwards, your Downloads folder will be redirected inside OneDrive, just like the “Desktop”, “Documents” and “Pictures” folders.
thanks for answer, this is not a symbolic link right? All the files will be move to there?
Dunno how exactly it works under the hood, but yes, the files will “physically” reside in the OneDrive folder.
I found the button, thx for your reply also
How to parse dates in SQL?
I have dates stored in a table in a varchar format, like this:
2014-05-29
Year Month Day
So I thought that for using BETWEEN selections, I could get rid of the dashes (20140529) and select between two dates easily like that. For example, between the dates 2014-01-01 and 2014-02-01 would be seen as 20140101 and 20140201, and there is obviously a range of numbers between these that would match an actual date value, for example 20140115.
This is the sql query I plan to select between two dates (in a php file):
$sql = mysql_query("Select * From $table Where Symbol = $symbol
And (Concat(Parsename(Replace(Date, '-', '.'), 3), Parsename(Replace(Date, '-', '.'), 2), Parsename(Replace(Date, '-', '.'), 1))
Between Concat(Parsename(Replace($lowDate, '-', '.'), 3), Parsename(Replace($lowDate, '-', '.'), 2), Parsename(Replace($lowDate, '-', '.'), 1)) And
Concat(Parsename(Replace($highDate, '-', '.'), 3), Parsename(Replace($highDate, '-', '.'), 2), Parsename(Replace($highDate, '-', '.'), 1))))");
So what I'm doing here is getting each part using the parsename function (which gets strings separated by dots, but first replacing the dashes with dots for it to work). It should get, in order, the year, the month and the day, and then concatenate them.
By my understanding, it should be doing this with each date; the date data stored in the table, then with the low and high dates (between which I want data) that are stored as variables in php already. Then it should see if the date is between the low and high dates. I'm not sure why this isn't working, any help would be great.
Don't do this! If you want to compare dates use the MySQL str_to_date() function to create a date and use that. Your dates shouldn't be stored as varchar() anyway - they should be stored as date or datetime and all this malarkey would be irrelevant.
Could you give me an example of how I would do this?
It's not working because PARSENAME is not a MySQL function. If you want to remove the dashes, just use the REPLACE() function to remove the dashes. There's no need for chopping the string up and concatenating it back together.
REPLACE(mycol,'-','')
e.g.
WHERE REPLACE(mycol,'-','') BETWEEN '20140101' AND '20140430'
But since the strings are already in a canonical format, removing the dashes isn't necessary. That is, since the values are all in YYYY-MM-DD format, exactly 10 characters in length with two digit month and day (with leading zeros), then just have your predicate operate on the bare column... just format the other "date" values as strings in the same format, e.g.
mycol BETWEEN '2013-12-01' AND '2014-02-15'
With this form, because the predicate is on a bare VARCHAR column, MySQL should be able to make use of an appropriate index to perform a range scan operation.
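The reason the bare-column comparison works is that zero-padded YYYY-MM-DD strings sort in the same order as the dates they represent. A quick Python check of that property (illustrative only):

```python
from datetime import date
import random

# Lexicographic order of ISO-8601 date strings matches chronological order.
random.seed(0)
days = [date(2014, random.randint(1, 12), random.randint(1, 28)) for _ in range(100)]
by_string = sorted(d.isoformat() for d in days)
by_date = [d.isoformat() for d in sorted(days)]
assert by_string == by_date
```

This is exactly why BETWEEN '2013-12-01' AND '2014-02-15' behaves correctly on the varchar column, as long as every value keeps the fixed-width format.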
To convert your string into an actual MySQL DATE, you could do something like this:
mycol + INTERVAL 0 DAY
You can use that expression in a SQL statement, e.g.
mycol + INTERVAL 0 DAY BETWEEN '2013-12-15' AND '2014-03-31'
(With this form, because the predicate is operating on an expression rather than a bare column, the MySQL optimizer won't be able to use a range scan operation to satisfy the predicate. The expression on the left side of BETWEEN will need to be evaluated for every row that isn't excluded by some other predicate before this one is evaluated.)
NOTE MySQL provides a DATE datatype which is ideal for storing "date" values. Storing "date" values in VARCHAR is an anti-pattern.
Use the DATE type in MySQL.
if YYYY-MM-DD format is there, you can simply use
$date3 => Initial date
$date4 => Final Date
$query = "SELECT * FROM table WHERE date1 >= '$date3' AND date2 <= '$date4'";
Provided that the date1 and date2 fields in MySQL are defined as the DATE type.
Compare a key in an Object in an Array with another key in an Object in Another array and merge them together if the values are equal
I have two arrays:
var array1 = [
{ id: 1, name: 'Name 1' },
{ id: 2, name: 'Name 2' },
...
];
var array2 = [
{ someId: '1', someField: 'Some Value 1' },
{ someId: '2', someField: 'Some Value 2' },
...
];
array1 will have objects coming from the backend in batches of 30. As soon as I get a batch, I extract the Ids from this array, and call another API to get the array2 for those ids.
Eventually, I want an array like this:
var array3 = [
{ id: 1, name: 'Name 1', someOtherField: 'Some Value 1' },
{ id: 2, name: 'Name 2', someOtherField: 'Some Value 2' },
...
];
I could do something like this:
ids = array1.map(item => item.id);
var resultingArray = array2.map((item, index) => {
return array1[index].someOtherField = item.someField
});
But since I have the items of array1 in batches, it would be hard to maintain the indexes correctly.
How do I go about doing this?
You can combine both arrays, and use Array.reduce() with some destructuring to get both id and someId as id, and store everything on a dictionary (POJO). We get an array by using Object.values() on the dictionary:
const array1 = [{ id: 1, name: 'Name 1' }, { id: 2, name: 'Name 2' }];
const array2 = [{ someId: '1', someField: 'Some Value 1' }, { someId: '2', someField: 'Some Value 2' }];
const result = Object.values(
// combine both arrays
[...array1, ...array2]
// use destructuring to get someId/id as id, and the rest of objects' props
.reduce((r, { someId, id = someId, ...rest }) => ({
...r, // spread the previous accumulator
[id]: { id, ...r[id], ...rest } // add the current object, with the id, props, and previous data if any
}), {})
);
console.log(result);
Perfect. Let me try this out. :)
Build up a lookup table of the array1 entries, keyed by id (normalized to strings, since array2 uses string ids):
const ids = new Map(array1.map(el => [String(el.id), el]));
Then adding the data from array2 is as easy as:
for(const el of array2) {
    if(ids.has(el.someId)) {
        // Merge properties into the matching array1 entry
        Object.assign(ids.get(el.someId), el);
    } else {
        // Add to Map and array
        array1.push(el);
        ids.set(el.someId, el);
    }
}
Could you please add some explanation to this? I'm pretty new to Map and haven't used it before.
@SiddAjmera I can't tell you more than what the docs contain
Vert.x: Is there a difference between deploying a verticle n times vs setInstances(n)?
Is there a major difference between doing
DeploymentOptions options = new DeploymentOptions().setInstances(2);
vertx.deployVerticle("com.mycompany.MyVerticle", options);
and
vertx.deployVerticle("com.mycompany.MyVerticle");
vertx.deployVerticle("com.mycompany.MyVerticle");
?
The docs only mention the first approach.
In the MyVerticle#start method I read a file which creates a lock. So deploying with .setInstances(2) will cause an IOException. I want to deploy the next verticle only if the future of the first is completed.
Are there any downsides of the second approach?
There is a major difference: when you deploy several instances by configuring DeploymentOptions, Vert.x guarantees the verticle instances use different event loops (provided there are more event loops than instances to deploy).
If you have to create a specific resource that is shared by all verticle instances, you could create it in the main method of your application and then use the deploy method which takes a Supplier argument:
var sharedResource = setupSharedResource();
DeploymentOptions options = new DeploymentOptions().setInstances(2);
vertx.deployVerticle(() -> new com.mycompany.MyVerticle(sharedResource), options);
thanks for pointing to this! Is this also a problem if I plan my verticles to be worker verticles? The resource itself does all the processing work, so I need one for each verticle. (I could also copy the sharedResource in the verticle constructor but that is not straightforward). If I deploy the verticles one after another, can I somehow make sure they get in the same WorkerPool? Is this even desirable?
Worker verticles are also assigned an event loop which manages IO events. Only your verticle callbacks are invoked on worker threads. And for advice on how to design your verticles, I would recommend asking to one of the Vert.x community channels
Thanks for clearing this up! Can I specify the event loop thread a verticle should be deployed on? Then I could spread the verticles manually. If I have 3 event loops and want to deploy 4 verticles sequentially I could do 1->1 2->2 3->3 4->1... (verticle -> event loop)
No, you can't do this.
another update: while it may not be guaranteed that the verticle instances use different event loops, the verticles are still assigned to event loops in a round-robin fashion
When using setInstances() Vert.x will immediately start all instances "at the same time" (they nominally get started by different contexts with different worker threads, so multiple instances should not interact except using well defined non-vert.x multi-threading semantics).
That being said, there's nothing wrong with deploying multiple instances at different times, for example by registering for onSuccess() on the first deployVerticle() and calling the second deploy from there - Vert.x doesn't care which verticles you deploy, as long as you never try to deploy the same instance multiple times.
So this should work fine:
vertx.deployVerticle("myverticle").onSuccess(v ->
    vertx.deployVerticle("myverticle"));
Using advanced for loop instead of for loop
I am a bit confused and I would need some clarification. Not too sure if I'm on the right track, hence this thread.
Here is my code that I want to convert to an advanced for-each loop.
int[] arrayA = {3, 35, 2, 1, 45, 92, 83, 114};
int[] arrayB = {4, 83, 5, 9, 114, 3, 7, 1};
int n = arrayA.length;
int m = arrayB.length;
int[] arrayC = new int[n + m];
int k = 0;
for(int i = 0; i < n; i++)
{
for(int j = 0; j < m; j++)
{
if(arrayB[j] == arrayA[i])
{
arrayC[k++] = arrayA[i];
}
}
}
for(int i=0; i<k;i++)
System.out.print(arrayC[i] + " ");
System.out.println();
So far this is the point where I am stuck at:
int[] a = {3, 8, 2, 4, 5, 1, 6};
int[] b = {4, 7, 9, 8, 2};
int[] c = new int[a.length + b.length];
int k = 0;
for(int i : a)
{
for(int j : b)
{
if(a[i] == b[j])
{
c[k++] = a[i];
}
}
//System.out.println(c[i]);
}
for(int i=0; i<c.length;i++)
System.out.print(c[i] + " ");
System.out.println();
}
The values of i and j get reset at each iteration. And the temporary variables of a for-each loop hold element values, not indexes, so you cannot use them in the array[index] form for direct indexing
You are almost there
for(int i : a)
{
for(int j : b)
{
if(i == j)
{
c[k++] = i;
}
}
}
With for(int i : a) access the elements in the array a using i.
If a is {3, 8, 2, 4, 5, 1, 6}, then i would be 3,8,2,.. on each iteration and you shouldn't use that to index into the original array. If you do, you would get either a wrong number or an ArrayIndexOutOfBoundsException
Since you want to pick the numbers that are present in both the arrays, the length of array c can be max(a.length, b.length). So, int[] c = new int[Math.max(a.length, b.length)]; will suffice.
If you want to truncate the 0s at the end, you can do
c = Arrays.copyOf(c, k);
This will return a new array containing only the first k elements of c.
That looks reasonable.
I ran the program with the above, and surprisingly it iterated through the entirety of array 'c', result is 3 common numbers plus the rest of the reserved slots within the array, please see here: 8 2 4 0 0 0 0 0 0 0 0 0
Yes, because you precreated the c array. (It has nothing to do with enhanced for loops)
Since arrays need predefined length, the best you can do is to take the max of a and b length. You can read about ArrayList if you need arrays whose size is adjustable (there are lots of other options)
Wicked help there, Sir! Much appreciated
Let us continue this discussion in chat.
I would use a List and retainAll. And in Java 8+ you can make an int[] into a List<Integer> with something like,
int[] arrayA = { 3, 35, 2, 1, 45, 92, 83, 114 };
int[] arrayB = { 4, 83, 5, 9, 114, 3, 7, 1 };
List<Integer> al = Arrays.stream(arrayA).boxed().collect(Collectors.toList());
al.retainAll(Arrays.stream(arrayB).boxed().collect(Collectors.toList()));
System.out.println(al.stream().map(String::valueOf).collect(Collectors.joining(" ")));
Outputs
3 1 83 114
Alternatively, if you don't actually need the values besides displaying them, and you want to use the for-each loop (albeit less efficiently), like
int[] arrayA = { 3, 35, 2, 1, 45, 92, 83, 114 };
int[] arrayB = { 4, 83, 5, 9, 114, 3, 7, 1 };
for (int i : arrayA) {
for (int j : arrayB) {
if (i == j) {
System.out.print(i + " ");
}
}
}
System.out.println();
Elliott, your comment surely amazed me, though the level of my current knowledge is insufficient to understand it fully. Thanks for the efforts :)
Just out of curiosity, the first solution you suggested doesn't compile, it runs into an error: List is not generic, it cannot be parameterized with <Integer>. Am I missing something?
Requires Java 8+ and that is java.util.List
How many passwords can we create that contain at least one capital letter, a small letter and one digit?
I have a problem in combinatorics.
How many passwords can we create such as:
Each password has length $n$, where $n\ge3$.
Each password must contain at least one capital letter (there are 26 letters in English), and at least one small letter, and at least one digit (there are 10 possible digits)
I tried solving this problem like this:
There are $62^{n}$ possible passwords without any restrictions
There are $2*26^{n}$ passwords with only capital letters or small letters.
There are $10^{n}$ passwords with only digits.
There are $52^{n}$ passwords with only capital letters and small letters.
There are $36^{n}-10^{n}-26^{n}$ passwords with only capital letters and digits.
There are $36^{n}-10^{n}-26^{n}$ passwords with only small letters and digits.
So to get the "right answer" I subtracted all of them from $62^{n}$ and got $62^{n}-52^{n}-2*36^{n}+10^{n}$
Is this the right method/answer? or did I miss something silly?
I am new to combinatorics and I want to make sure, I am on the right track.
Thanks in advance guys!
The $52^n$ passwords with only capital letters and small letters also count those with only small letters or only capital letters. As in the other cases, you should have subtracted $2 \times 26^n$ from it. That is the gap in your solution; otherwise it is correct. Here is how I would do it -
If $A, B, C$ are sets of passwords where we have capital letters, small letters and numbers missing respectively,
$|A| = |B|= 36^n, |C| = 52^n$
$|A \cap B| = 10^n, |B \cap C| = 26^n, |A \cap C| = 26^n$
$|A \cap B \cap C| = 0$
$|A \cup B \cup C|$ will give you all arrangements where one or more of the three are missing.
$|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |B \cap C| - |A \cap C| + |A \cap B \cap C|$
So, $|A \cup B \cup C| = 2 \times 36^n + 52^n - 2 \times 26^n - 10^n$
Answer you are interested in $ = 62^n - |A \cup B \cup C| = 62^n - 52^n - 2 \times 36^n + 2 \times 26^n + 10^n$
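As a sanity check, the corrected count can be verified by brute force for small $n$; a quick Python script for $n=3$ (the enumeration is only $62^3$ strings):

```python
from itertools import product
import string

chars = string.ascii_uppercase + string.ascii_lowercase + string.digits  # 62 symbols
n = 3

# Count length-3 strings with at least one capital, one small letter, and one digit.
brute = sum(
    1
    for pw in product(chars, repeat=n)
    if any(c.isupper() for c in pw)
    and any(c.islower() for c in pw)
    and any(c.isdigit() for c in pw)
)

formula = 62**n - 52**n - 2 * 36**n + 2 * 26**n + 10**n
assert brute == formula  # both give 40560 for n = 3
```

For $n=3$ the count is also $3! \times 26 \times 26 \times 10 = 40560$, since each of the three positions must come from a different class.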
Using Inclusion-exclusion principle, the answer is $62^{n}-52^{n}-2 \times 36^{n}+10^{n} + 2 \times 26^{n}$. Consider different cases and you will simply get the answer.
More precisely, first consider the passwords with no capital letter, then with no small letter and at last with no digit. Now, add the passwords with no capital letters and small letters (equivalently with only digits), then add passwords with no capital letters and no digits (equivalently with only small letters) and at last, add passwords with no small letters and no digits (equivalently with only capital letters).
Java regexp that only matches URLs without protocol and www
I need a rather greedy regex that aggressively matches strings that do not begin with any protocol such as "http://" or "ftp://" and at the same time doesn't match strings that begin with "www" (or both combined, of course). I'm fairly new to Java and regex but I've managed to make up this one (which doesn't work for me):
([\w'-]+)\.(com|info|net|org).+
However it doesn't seem to match "example.com". It does seem to match "example.com/index.php?q=somequery#something". I don't really understand how to create a regex that doesn't give a match if the string begins with a series of characters, in my case "www" or "http://".
Any help is appreciated.
(P.S I've tried to look for dupes to this question, I however couldn't find one that matches this one perfectly. Very sorry if this is a dupe.)
Note: your current RegEx would match whitespace after the domain which is not allowed in a URL. Also, your RegEx does match your example, so the issue is in your Java code in implementing it. My guess is that you need to double escape before compiling the string to a RegEx object.
@tenub Hmmm...I've tried it on this site: http://www.regexr.com/ and according to that site it doesn't match... And also it doesn't matter if it lets through whitespaces, my implementation splits a larger string (a chat message) by its whitespaces so I don't think I'll have to worry about that for now at least.
Your regex has .+ at the end, which means any character except \n (1 or more times).
But your sample example.com doesn't have anything after the .com. That's why your regex doesn't match with the sample.
replace the .+ with .* and it will work for you. FYI the .* means any character except \n (0 or more times)
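The effect of the quantifier change can be illustrated with the same pattern; shown here in Python's re, whose behavior matches Java's full-string matches() for this pattern (an assumption worth double-checking in Java itself):

```python
import re

pattern_plus = re.compile(r"([\w'-]+)\.(com|info|net|org).+")
pattern_star = re.compile(r"([\w'-]+)\.(com|info|net|org).*")

# ".+" requires at least one character after the TLD, so a bare domain fails.
assert pattern_plus.fullmatch("example.com") is None
assert pattern_plus.fullmatch("example.com/index.php?q=somequery#something")

# ".*" also allows nothing after the TLD, so the bare domain now matches.
assert pattern_star.fullmatch("example.com")
```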
Thanks, that worked nicely! Do you perhaps know any good website that lets you test Java 7 regex's online? (And is reliable)
You can test your regex here: http://regex101.com. Java's regex syntax is largely PCRE-compatible, so you can use it without much worry.
User PC types the number 56 occasionally
Welcome to the weirdest question of 2014 so far.
Yesterday, a user's computer (Windows 7 Pro, on domain) started typing 56 at random intervals. I saw it sit there and type "5656565656" into Notepad over the span of a minute or two, but sometimes it would go half an hour without typing anything. The user claimed it would type "56" upon clicking on any cell in a blank Excel sheet at one point. Here are some facts:
Replaced PS/2 keyboard with a USB one
Uninstalled all HID drivers, rebooted to have them reinstall the necessary ones
Uninstalled Microsoft Mouse & Keyboard software (for a wireless kit that doesn't exist anymore)
Ran quick MalwareBytes scan; clean
Happens in any application: IE, MS Office, Notepad, etc.
Happens with no applications open
Happens in any user account
We've got Symantec Endpoint Protection updated on all computers. Nobody else is experiencing this. No Windows updates were applied in the days leading up to this happening. Many restarts have been performed. Something else installed is a document management software called OnBase, if anyone knows what that is, or if that helps.
Any other ideas I can pursue? It's not a hardware issue, not a virus, and I guess not a driver issue.
"New base unit" -- new computer? Come on...
Is there another keyboard connected? A standalone numeric keypad, for instance? Does the computer have a VNC server installed? Bluetooth?
Does this happen in 'safe' mode?
It's not a hardware issue, not a virus, and I guess not a driver issue. - probably better to simply remove that part of your question, since you can't confirm those yet.
I also edited your tags...while funny I don't think ghost is the appropriate tag here. :)
http://www.thinkgeek.com/product/ae83/
https://www.facebook.com/evan.anderson/posts/10202187023099274
@EvanAnderson - Check my comment on the answer. That was pretty much what happened!
@armani - Ha! Anything that can go wrong...
Check to see if a) Bluetooth is enabled and b) there's a wireless Bluetooth keyboard attached to the OS.
This used to happen to the mobile Mackbooks all the time when they'd select the wrong BT keyboard for the Hot Desk Macbooks. I imagine the same could happen with a Windows box.
This was the closest answer. Get this: There was a wireless receiver still plugged in from the old USB keyboard the user used to have. The keyboard was still on in another office ACROSS THE HALL with a BOX on top of it pressing the '5' and '6' keys! Happy Monday :)
@armani I think we kind of assumed that you had already checked for suspicious looking usb dongles. Definitely something to remember for next time.
always best to look for the simple answer first. Keep your mind open to the edge cases, but best to check for simple first.
@Grant I thought I did, too. Missed this one at first.
@armani happens to the best of us.
I'd probably start with reimaging the machine.
But if you are determined to figure out the actual cause, swap the hard drive with an identical machine - see if the problem moves with the drive or stays with the hardware.
If it moves with the drive, then it's a driver, software or malware issue. Wipe the drive and start over. Or continue fiddling with drivers and uninstalling programs until the problem goes away.
If it stays with the computer, then you have hardware issues. Maybe someone built a usb microcontroller to randomly type numbers just to irritate you and connected it to the USB header inside the machine. Or maybe the hardware is just faulty and needs to be replaced.
Using COUNT(*) inside CASE statement in SQL Server
Is it possible to use count(*) inside case statement in T-SQL?
I am trying to update records in a table, but I would like to have 2 cases.
The first case should do the update when EndDate is less than StartDate; the second case should do the update only when there is exactly one record for a specific EmployeeId.
update ep
set ep.EndDate = case when t.EndDate < t.StartDate then t.EndDate3
case when COUNT(*) = 1 then null
end ,ep.ModifiedBy = 'PCA', ep.ModifiedDate = getdate()
from dbo.EmployeeProductivity ep inner join cteT1 t
on ep.Id = t.Id
where t.EndDate < t.StartDate
or t.EndDate is null
I was trying something like this, but I get errors like:
an expression of non boolean type specified in a context where a condition is expected
This is full script:
use ws3;
select distinct (EmployeeId) as EmployeeId
into #Emps
from [dbo].[EmployeeProductivity]
where EndDate < StartDate
or EndDate is null;
with cteProdRates as
(
select ep.[ID]
,ep.[EmployeeId]
,ep.[FormatId]
,ep.[StartDate]
,ep.[EndDate]
,dateadd(dd, -1, lag(startdate) over (partition by ep.EmployeeId, FormatId order by StartDate desc, Id desc)) as EndDate2
,ep.[Rate]
FROM [dbo].[EmployeeProductivity] ep inner join #Emps e
on ep.EmployeeId = e.EmployeeId
)
,cteT1 as
(
select [ID]
,[EmployeeId]
,[FormatId]
,[StartDate]
,[EndDate]
,case when EndDate2 < StartDate then StartDate else EndDate2 end as EndDate3
,[Rate]
from cteProdRates
)
update ep
set ep.EndDate = case when t.EndDate < t.StartDate then t.EndDate3
case when COUNT(*) = 1 then null
end ,ep.ModifiedBy = 'PCA', ep.ModifiedDate = getdate()
from dbo.EmployeeProductivity ep inner join cteT1 t
on ep.Id = t.Id
where t.EndDate < t.StartDate
or t.EndDate is null
drop table #Emps
So for each unique EmployeeId I have multiple entries. Every StartDate must be greater than EndDate, and when you add new entry with new StartDate, previous entry EndDate is set to newEntry.StartDate - 1. Only if entry is the last one, EndDate is set to NULL, meaning that this entry is not closed yet.
That's why I need to check the case when I have only one entry for a specific EmployeeId, so I can set it to NULL.
Is this even possible, or am I missing something? Does anybody have experience with this?
What are you trying to do? Add a https://stackoverflow.com/help/minimal-reproducible-example to make things clearer.
You can't perform a COUNT in a SELECT without a GROUP BY or OVER clause; it's unclear which you are after here and with no sample data or expected results we can't give you a definitive answer.
@nemo_87 Check the case syntax first , you have an extra case, should be : case when t.EndDate < t.StartDate then t.EndDate3 when COUNT(*) = 1 then null end
@jarlh thanks for answering, I will post my whole script, maybe it will be cleaner from there and will add detail explanation too. Thanks for suggestion
@ECris Thanks, this solved syntax errors, I will check if the result is correct as well
You cannot use count(*) in an update. This has nothing to do with the case expression.
Perhaps this is what you intend:
update ep
set ep.EndDate = (case when t.EndDate < t.StartDate then t.EndDate3
when cnt = 1 then null
end),
ep.ModifiedBy = 'PCA',
ep.ModifiedDate = getdate()
from dbo.EmployeeProductivity ep inner join
(select t.*, count(*) over (partition by id) as cnt
from cteT1 t
where t.EndDate < t.StartDate or t.EndDate is null
) t
on ep.Id = t.Id
Of course, as the logic is phrased, the condition on the count() is superfluous -- the case expression returns NULL anyway, so this logic seems equivalent:
update ep
set ep.EndDate = (case when t.EndDate < t.StartDate then t.EndDate3
end),
ep.ModifiedBy = 'PCA',
ep.ModifiedDate = getdate()
from dbo.EmployeeProductivity ep inner join
     cteT1 t
     on ep.Id = t.Id
where t.EndDate < t.StartDate or t.EndDate is null
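The windowed count(*) over (partition by ...) trick from the first query can be tried standalone. Here is a small sketch using Python's built-in sqlite3 (window functions require SQLite 3.25+, an assumption about the local build; the syntax is the same idea as in SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (id int, val int)")
con.executemany("insert into t values (?, ?)", [(1, 10), (1, 20), (2, 30)])

# Attach the per-id row count to every row without collapsing the rows,
# which is what the derived table in the answer does before the UPDATE.
rows = con.execute(
    "select id, val, count(*) over (partition by id) as cnt"
    " from t order by id, val"
).fetchall()
# -> [(1, 10, 2), (1, 20, 2), (2, 30, 1)]
```

Each row carries its group's count, so a condition like cnt = 1 can then be tested per row inside a CASE expression.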
Thanks for the example, it's close to what I need; I will be able to make something out of it now... Still have some bad results, but I guess I need to cover more cases here @GordonLinoff
How to activate Application.OnKey method even under protection
I have used an Excel viewer in my C# WinForms application. This viewer opens different .xls files with various numbers of sheets dynamically. I have to protect all sheets at the beginning of the application and also at the end of the sheet-change event handler.
Now I want to allow the user to use some shortcut keys like {F1} or Ctrl+S. But none of the keys works, and I think this is because of the protection that I have applied to the sheets. Am I right?
Edit :
I need to run my custom methods instead of Excel's default methods when these keys are pressed. So I used
Application.OnKey("{F1}", "MyCustomMethod")
in the Form_Load event handler, but nothing happens when the {F1} key is pressed on an Excel sheet.
What does this have to do with OnKey that you mention in the title?
Please see the question again, I edited it. Excuse me for my incomplete question.
Does this question help? how to use Application.OnKey...
@SidHolland, no, because the problem is that the sheets cannot detect key presses due to the protection that I have to apply to them. In fact, I need a way to suspend protection when the user presses certain keys.
Robotium crash during simple Android JUnit test
I cannot get a trivial Robotium test running:
public class TapsTest extends ActivityUnitTestCase<Ad> {
public TapsTest() { super(Ad.class); }
Solo mSolo;
@Override
protected void setUp() throws Exception {
super.setUp();
mSolo = new Solo(getInstrumentation(), getActivity());
}
public void testTabTaps() {
assertTrue(mSolo.searchText("Latest")); // NPE thrown here
}
}
this test crashes consistently with
java.lang.RuntimeException: java.lang.NullPointerException
at com.jayway.android.robotium.solo.Searcher.searchFor(Searcher.java:113)
at com.jayway.android.robotium.solo.Searcher.searchWithTimeoutFor(Searcher.java:68)
at com.jayway.android.robotium.solo.Solo.searchText(Solo.java:442)
on both my devices (Android 4.1.0 and 4.0.3) and an AVD. Initially Robotium complained about the missing v4 support library (we do not need it for other purposes), so I added android-support-v13.jar. Now the ClassNotFoundException is gone but the NPE remains. I also tried to start the activity manually:
Ad ado = startActivity(new Intent("android.intent.action.MAIN"), null, null);
mSolo = new Solo(getInstrumentation(), ado);
but NPE remains. I tried mSolo.searchButton("Go") and there is a button with this text on UI, and it is visible, and still the same NPE from the line 113 in Robotium.
The application itself starts and runs correctly if not under tests. Also, other ActivityUnitTestCase tests (without Robotium) run and pass without issues.
I tried to use robotium-solo-3.6.jar from Robotium website.
Is there any reason you use ActivityUnitTestCase?
You should rather use ActivityInstrumentationTestCase2.
Then you have to change your constructor - add the package as a parameter.
Look here: http://code.google.com/p/robotium/wiki/Getting_Started
Designing NoSQL Data Model and Storage System
I have got a problem scenario like below:
The XYZ website needs to show a page with a list of all the recipes, and when a user clicks on a recipe they want to show the recipe page with its ingredients. They also want the user to be able to click on each ingredient and see all the recipes linked to that ingredient.
Currently, recipe data is received as a feed from a legacy system in the form of a CSV. The CSV data looks like this:
recipe_id,recipe_name, description, ingredient, active, updated_date, created_date
1, pasta, Italian pasta, tomato sauce, true, 2018-01-09 10:00:57, 2018-01-10 13:00:57
1, pasta, null, cheese, true, 2018-01-09 10:10:57, 2018-01-10 13:00:57
2, lasagna, layered lasagna, cheese, true, 2018-01-09 10:00:57, 2018-01-10 13:00:57
2, lasagna, layered lasagna, blue cheese, false, 2018-01-09 10:00:57, 2018-01-10 13:00:57 ….
Assume that this CSV is consumed every hour with 1 TB of data. You are asked to:
Create a data model which can store this data and allow users to do the activities mentioned above. This data model needs to support millions of reads per second.
Discuss the persistence system you are going to use to store this data.
Write a Spark job in Scala which takes the CSV shown above and stores it in the storage system of your choice, using the data model you discussed above.
Write queries to answer the following
a. Average number of recipes which are updated per hour
i. Eg. Pasta got updated twice in one hour
b. Number of recipes which got updated at 10 o'clock over the entire year.
My question is,
which storage system (HBASE,Cassandra, Redis etc.,) best suits for this scenario ?
Any datamodel help will be appreciated.
Many Thanks,
Kavi
Redis is an in-memory database, which means you'll need more than 1 TB of RAM to store your dataset. That is not cheap and may be overkill for your use case.
Cassandra is a good choice for the simple key-value, read heavy workload you describe.
CREATE TABLE recipe (
id int PRIMARY KEY,
name text,
description text,
ingredients list<text>,
active boolean,
updated_date timestamp,
created_date timestamp
);
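Before settling on storage, query (a) can be prototyped directly over the feed; a minimal Python sketch (the sample CSV here is a hypothetical, trimmed version of the feed above):

```python
import csv
import io
from collections import Counter
from datetime import datetime

# Sketch of query (a) over the raw feed, outside any database.
# Hypothetical trimmed version of the CSV shown in the question.
feed = """recipe_id,recipe_name,updated_date
1,pasta,2018-01-09 10:00:57
1,pasta,2018-01-09 10:10:57
2,lasagna,2018-01-09 10:00:57
"""

# Count updates per (recipe, hour) bucket
updates = Counter()
for row in csv.DictReader(io.StringIO(feed)):
    ts = datetime.strptime(row["updated_date"], "%Y-%m-%d %H:%M:%S")
    updates[(row["recipe_name"], ts.strftime("%Y-%m-%d %H"))] += 1

# pasta was updated twice in the 10:00 hour, as in the question's example
print(updates[("pasta", "2018-01-09 10")])
```

In Cassandra itself you would pre-aggregate this into a table keyed by hour bucket rather than scan, since CQL offers no GROUP BY over arbitrary columns at this scale.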
Log4J loggers for different classes
I want to use Log4J for logging my java projects.
I created a log4j.properties file in the src directory with the following content:
# Root logger option
log4j.rootLogger=INFO, file, stdout
log4j.logger.DEFAULT_LOGGER=INFO,file2
# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=file.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
log4j.appender.file2=org.apache.log4j.FileAppender
log4j.appender.file2.File=file2.log
log4j.appender.file2.layout=org.apache.log4j.PatternLayout
log4j.appender.file2.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
For example I want to use the "DEFAULT_LOGGER" only in my main method. So I wrote:
static Logger log = Logger.getLogger("DEFAULT_LOGGER");
log.fatal("Process Logger");
But when I execute the main method it prints the message "Process Logger" to all appenders (stdout, file and file2), but I only want to print it to file2. How can I do that, or rather, what am I doing wrong?
The second point: when I execute the main method a second time, it does not overwrite file and file2; it just appends a line to the text files. How can I avoid that?
Log4j has something called additivity. By default it is set to true, which means that every log you write will not only be logged by the specific logger but also by its ancestors (the root logger in this case).
To set it to false try this:
log4j.additivity.DEFAULT_LOGGER = false
Learn more about it here.
Thank you. It now works that the output is stored only in file2 and NOT in the stdout.
But there are still 2 things left. It still creates file and file2 when I start the main method. file is empty and the output is only written to file2, but I don't want file to be created at all. How does this work?
And there is still the problem that when I execute the method twice, log4j does not overwrite file2; it just appends the output to the file.
Log4j scans the properties file at the beginning and creates all the files attached to the logger definitions. So if you don't want the file to be created, remove the appender from the root logger: change log4j.rootLogger=INFO, file, stdout to log4j.rootLogger=INFO, stdout
Why would you want the file to be overwritten? That basically defeats the purpose of logging...
Because when I restart my application I want to have a fresh logging file without old entries (from the last day maybe)
Of course I could delete it with Java, but I don't want to.
With the standard Java logging API it is possible to specify whether I want to append the output or overwrite it.
No, don't delete it with Java. If you want to restart the server, just stop it, go to the log directory, DELETE THE PHYSICAL log file and start the server.
Yes, of course I could do that. But my application is a testing framework, so the goal is that when I start my application I want a fresh environment and only want to read the current log output. And the application is client based, so the user would have to delete it every time.
Try using RollingFileAppender: http://stackoverflow.com/questions/5538278/how-to-keep-single-file-and-overwrite-the-contents-in-the-same-file-using-log4cx
Sorry but this also did not work.
I will create a new question.
try:
log4j.additivity.DEFAULT_LOGGER = false
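On the overwrite-at-restart question from the comments: log4j 1.x's FileAppender also exposes an Append property (true by default); setting it to false makes the appender truncate the file when logging is initialised. A sketch for the file2 appender defined above:

```properties
# Truncate file2.log on each application start instead of appending
log4j.appender.file2.Append=false
```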
I want to check whether a string contains specific characters repeated more than thrice
For example, if I have a string containing 'A' or 'B' or 'C' more than three times in a row, that string is invalid:
PPAAAFAL - valid
AAABBBCC - valid
NABCCCC - invalid
AAAAAAAA - invalid
... etc.
I know that I can check for repetition like A{0,3} but how to check for all characters in one RegEx?
The string can start and end with any character.
Why does it have to be a regex? Write a simple method to check this.
I thought using a regex would be faster and easier?
You may use String.matches. The regex below should match only invalid strings, that is, strings having 4 or more A's or B's or C's in a row.
if (string.matches(".*([ABC])\\1{3,}.*")) {
System.out.println("Invalid");
} else {
System.out.println("Valid");
}
Demo
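As a cross-language illustration of how the pattern works (the capture group plus the \1 backreference), here is a small Python sketch of the same idea:

```python
import re

# ([ABC]) captures one of A/B/C, and \1{3,} requires the same letter
# to repeat at least 3 more times, i.e. 4+ identical letters in a row.
pattern = re.compile(r"([ABC])\1{3,}")

def is_valid(s):
    # Valid when no run of 4+ identical A/B/C letters exists
    return pattern.search(s) is None

for s in ["PPAAAFAL", "AAABBBCC", "NABCCCC", "AAAAAAAA"]:
    print(s, "valid" if is_valid(s) else "invalid")
```

The Java version wraps the pattern in `.*...*` only because `String.matches` must consume the whole string; a search-style match does not need that.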
Thanks a lot! it works. Can you please let me know what should I read to understand this and create more such regex?
@SanketPimple http://www.regular-expressions.info/ seems like a good site for learning regex.
How to calculate a 95% Confidence Interval
I have this data:
structure(list(age = c(62.84998, 60.33899, 52.74698, 42.38498
), death = c(0, 1, 1, 1), sex = c("male", "female", "female",
"female"), hospdead = c(0, 1, 0, 0), slos = c(5, 4, 17, 3), d.time = c(2029,
4, 47, 133), dzgroup = c("Lung Cancer", "Cirrhosis", "Cirrhosis",
"Lung Cancer"), dzclass = c("Cancer", "COPD/CHF/Cirrhosis", "COPD/CHF/Cirrhosis",
"Cancer"), num.co = c(0, 2, 2, 2), edu = c(11, 12, 12, 11), income = c("$11-$25k",
"$11-$25k", "under $11k", "under $11k"), scoma = c(0, 44, 0,
0), charges = c(9715, 34496, 41094, 3075), totcst = c(NA_real_,
NA_real_, NA_real_, NA_real_), totmcst = c(NA_real_, NA_real_,
NA_real_, NA_real_), avtisst = c(7, 29, 13, 7), race = c("other",
"white", "white", "white"), sps = c(33.8984375, 52.6953125, 20.5,
20.0976562), aps = c(20, 74, 45, 19), surv2m = c(0.262939453,
0.0009999275, 0.790893555, 0.698974609), surv6m = c(0.0369949341,
0, 0.664916992, 0.411987305), hday = c(1, 3, 4, 1), diabetes = c(0,
0, 0, 0), dementia = c(0, 0, 0, 0), ca = c("metastatic", "no",
"no", "metastatic"), prg2m = c(0.5, 0, 0.75, 0.899999619), prg6m = c(0.25,
0, 0.5, 0.5), dnr = c("no dnr", NA, "no dnr", "no dnr"), dnrday = c(5,
NA, 17, 3), meanbp = c(97, 43, 70, 75), wblc = c(6, 17.0976562,
8.5, 9.09960938), hrt = c(69, 112, 88, 88), resp = c(22, 34,
28, 32), temp = c(36, 34.59375, 37.39844, 35), pafi = c(388,
98, 231.65625, NA), alb = c(1.7998047, NA, NA, NA), bili = c(0.19998169,
NA, 2.19970703, NA), crea = c(1.19995117, 5.5, 2, 0.79992676),
sod = c(141, 132, 134, 139), ph = c(7.459961, 7.25, 7.459961,
NA), glucose = c(NA_real_, NA_real_, NA_real_, NA_real_),
bun = c(NA_real_, NA_real_, NA_real_, NA_real_), urine = c(NA_real_,
NA_real_, NA_real_, NA_real_), adlp = c(7, NA, 1, 0), adls = c(7,
1, 0, 0), sfdm2 = c(NA, "<2 mo. follow-up", "<2 mo. follow-up",
"no(M2 and SIP pres)"), adlsc = c(7, 1, 0, 0)), row.names = c(NA,
4L), class = "data.frame")
I have also calculated the estimated population proportion of patients who had lung cancer as the primary disease group below.
SB_xlsx_mean = round(100 * mean(SB_xlsx$dzgroup == "Lung Cancer", na.rm = TRUE), 2)
SB_xlsx_mean
## [1] 9.97
The population proportion with the main disease type of lung cancer was 0.0997 or 9.97%.
However, I now need to calculate the 95% CI of the population proportion of patients who had lung cancer as the primary disease group. I've gotten 95% CIs before with t-tests, but I don't think that is really applicable here, and I'm not sure how else to start.
You could do bootstrapping (repeatedly sampling with replacement, taking the mean proportion in each sample, then calculating the 2.5th and 97.5th percentiles).
This Medium article, Five Confidence Intervals for Proportions That You Should Know About, describes five methods: Wald, Clopper-Pearson (also known as exact), Wilson (also known as score), Agresti-Coull, and Bayesian HPD (highest posterior density) intervals. See also Confidence interval for Bernoulli sampling.
And Confidence interval around binomial estimate of 0 or 1.
Here is an example using binom.test(). With only 4 values the confidence limits are huge.
binom.test(sum(df$dzgroup=="Lung Cancer"), n=nrow(df), p=0.5 )
Exact binomial test
data: sum(df$dzgroup == "Lung Cancer") and nrow(df)
number of successes = 2, number of trials = 4, p-value = 1
alternative hypothesis: true probability of success is not equal to 0.5
95 percent confidence interval:
0.06758599 0.93241401
sample estimates:
probability of success
0.5
The test above assumes a probability of "Lung Cancer" of 50%; if you have a better estimate, substitute a new value for 0.5 and the calculated p-value will adjust.
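As a cross-check on the exact interval above, the Wilson (score) method mentioned earlier can be computed by hand; a Python sketch (z = 1.96 for an approximate 95% interval):

```python
import math

# Hand-rolled Wilson (score) interval for a binomial proportion.
# R's binom.test gives the exact (Clopper-Pearson) interval instead.
def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

# 2 lung-cancer rows out of 4 in the sample data shown above
lo, hi = wilson_interval(2, 4)
print(round(lo, 3), round(hi, 3))
```

Like the exact interval, it is very wide with only 4 observations; the score interval is somewhat narrower than Clopper-Pearson.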
Polygon Clipping
The question I have here is quite hard for me to describe in words... So I'll use Pictures!
In general, The issue I have is as follows:
Say I have Polygon A:
Which is intersected at two points by an open polygon B:
What algorithm can I use to form two closed polygons out of this intersection? (Note that there are three solutions here; the one I'm searching for is highlighted.)
The preferable solution is
The Smallest of all solutions given that:
A does not contain B
So, any suggestions on how to generate B (and a new A) after the intersection takes place? I'm new to Polygon Math (and 2D Shape interaction in general) so I have no idea where to start or where to look!
Thanks!
I believe what you are looking for is called polygon slicing, so you might search for that as well. Note that you can in theory end up with more than 2 sections depending on the line.
@GrandmasterB Woah! That stuff's intense... And way over my head too!
Maybe there is a better solution to my problem.
What I'm trying to do is fill up Polygon A with points, and have the user separate them into regions by drawing lines (hence the unclosed polygon B)... This is the only method I can think of though...
Maybe I'll have to limit the user to a single line-stroke to simplify things?
Oh, I know. It sounds so simple at first :-) You might look around for existing libraries that do this. Some of the existing javascript graphics libs may have this functionality.
@GrandmasterB IDK... It seems a bit extreme to include an entire graphics library to perform one task... I'll look into it some more! Thanks!
You "represent" your polygons by their contours. A contour is a (somehow ordered) sequence of vertices, each vertex given by its planar coordinates, x and y.
The segmented polyline you draw inside A is part of the contour of your new polygon, B. The other part of the same contour is one of the two halves of the contour of A. You choose which of the two (you say the smallest, but it is not clear what that means... the smallest area?).
In the end, you close the contour of B by completing its series of vertices with the part of contour A that also belongs to B. That is the contour/representation of your desired polygonal region; that is your solution.
In case you get two Bs (one completed with the first "half" of contour A, the other with the second) and want the one with the smallest area, you just compute the area for each of the two Bs (using the coordinates of the contour vertices) and select the smaller. You can easily search for the formula that gives you the area of a 2D polygon from the coordinates of its vertices, or you can try to derive it yourself.
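The area formula mentioned at the end is commonly known as the shoelace formula; a minimal Python sketch of it, usable for comparing the two candidate B contours:

```python
# Shoelace formula: area of a simple polygon from its ordered contour vertices.
def polygon_area(vertices):
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Sanity check: a 4 x 3 rectangle has area 12
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))
```

Picking the smaller candidate then reduces to `min(b1, b2, key=polygon_area)`, where `b1` and `b2` are the two closed contours (hypothetical names).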
How do "% chance on critical strike to apply condition" effects stack in Guild Wars 2?
I am currently playing an Engineer and one of its traits in Firearms is called Sharpshooter. Its description is "30% chance to cause bleeding for 3 seconds on critical hit."
Suppose I also have a Superior Sigil of Earth. Its description is "+60% chance on critical: Inflict bleeding (5 seconds)."
How will this stack? Will it give:
90% chance to cause a stack of bleeding for 5 seconds (the larger of the two durations)
90% chance to cause a stack of bleeding for 8 seconds (the two durations added)
30% chance to cause a stack of bleeding for 3 seconds AND 60% chance of a stack of bleeding for 5 seconds. (So potentially up to two stacks of bleed on a single critical.)
Additionally suppose I am dual wielding pistols each with Superior Sigil of Earth, how will that work?
Bleeds
Bleed effects stack with intensity, meaning the more stacks you have, the more damage it does every time it 'ticks'. The duration of each bleed is individually tracked; you can apply a 20-second bleed and then three 5-second bleeds all running at once, and you'll have 4 stacks for that 5-second period, after which it falls back to 1 stack until the long stack runs out.
Superior Sigil of Earth
The Superior Sigil of Earth has a 60% chance on critical to 'proc' and apply a bleed stack for 5 seconds. It has a 2 second internal cooldown, which means it can only proc once every 2 seconds. It has a chance to proc on every attack or ability you use.
Ignoring other bleed producing sources, assuming you crit 100% of the time, and you get lucky and proc every crit, you'll have a maximum of 5 stacks of bleeding on your target from auto attacks with this Sigil. Factoring in your ability use will push the number up, but in practice you'll need a very high crit rate to consistently see around 5 stacks.
Dual Wielding
Dual Wielding rules for sigil stacking are complex. For sigils that proc effects, they stack multiplicatively, they share the same internal cooldown, and they can trigger the effect with the same chance no matter what hand is attacking.
If you're dual wielding pistols with a superior sigil of earth, with a 60% proc rate, you have a 1-(1-.6)*(1-.6) = .84 = 84% chance to proc the sigil on each critical strike.
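That 1-(1-.6)*(1-.6) calculation can be sanity-checked numerically; a small Python sketch:

```python
import random

# The combined proc fails only if both independent 60% chances miss.
p = 0.6
combined = 1 - (1 - p) ** 2  # 0.84

# Quick Monte Carlo check of the same number
random.seed(0)
trials = 100_000
procs = sum(
    1 for _ in range(trials)
    if random.random() < p or random.random() < p
)
print(combined, procs / trials)
```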
Sharpshooter
Sharpshooter has a 30% chance on critical to proc and apply a bleed stack for 3 seconds. Like all traits of this nature, it can proc independently of your sigils.
If you have a Superior Sigil of Earth and Sharpshooter, it's possible to proc two stacks of bleed (one 3-second stack and one 5-second stack) on a critical strike, if you get lucky.
With a high crit rate, you can expect to see around 8-10 stacks with both sharpshooter and two Sigils of Earth.
Bleeding stacks intensity, so with Sharpshooter and a Superior Sigil of Earth, it's door number 3 (30% chance AND 60% chance, potentially two stacks if both trigger on the same crit).
If wielding two weapons with the same sigil, you'll have another ANDed chance to have its effect trigger on a critical hit, but you need to remember that the two weapons share the same cooldown (in this case, 2 seconds). So, if either of them triggers on a critical, it's another 2 seconds before either of them can trigger again.
I'll leave the detailed calculations of how the percentages work to others, suffice it to say that it can get tricky (see this talk page).
How to obtain latest slipstreamed installation media for OS X?
I'm looking for a way to obtain updated installation media for OS X 10.6.x, currently 10.6.6.
I am looking for a solution similar to Windows slipstreamed installation media, and preferably a way to put the kit on a USB drive in order to improve the installation speed.
Is this possible, and if it is how it is possible to get such a media?
Same Answer as given here:
Basically you want to use the System Image Utility to create a bootable .dmg that you can then put on a USB drive.
Have a look at this post, it covers this process in detail (you probably can leave out some steps); this should get you started.
There is no official method to 'slipstream' updates into OS X installation media.
To install OS X from a USB drive you can make an image of your OS X DVD and then restore that image to your USB drive and boot from it.
You could then download and put on that drive the Combo update:
http://support.apple.com/kb/dl1349
Yes, you have to run them separately, but then you don't have to wait for slower i/o and download times.
If your machines are all desktops then using Mac OS X Server and NetBoot might be what you want.
Otherwise if your machines are all similar you might be able to get away with
1) Disk-copy the original to the USB drive
2) Boot the new machine off the USB drive
3) Disk-copy from the USB drive
but I would probably use @zevlag's solution
Two 12 V heater elements in series in a dual hotend
I have ordered a dual hotend Chimera and it came with 2x 12 V heater elements (in my rush I forgot to order the one with 2x 24V).
Is it possible to run these 12 V heater elements in series?
I am planning on running this with an SRK 1.3 board.
What printer do you have? Note that the power output is different. Also.... https://www.youtube.com/watch?v=k9Yy8OxohGI
This will not work as you intend.
The heaters are designed to be independent. They do not share a thermal path between them. The thermal load on the two extruders will be different whenever one nozzle is active and the other on standby, and there is no condition when both are extruding at the same time.
The two thermistors are needed so that each nozzle can be individually controlled. Placing the heaters either in series or parallel defeats this control, and many problems will follow. You will spend days trying to understand why filament is dripping, or not extruding, or the PLA cooks in the nozzle, or the firmware shuts down for over- or under-heating, or a nozzle never seems to hit the right temperature. You will waste far more time than the time needed to order and receive the proper heaters.
If you must...
IF you were trying to proceed with some testing, change the 24V supply to 12V. The stepper motors will be a bit more sluggish, but the DC-to-DC converters will probably (maybe) work well enough to power the electronics. Check your supply rails to be sure.
But don't.
It is better to wait, or find a local store to drive to and fetch them, or call a friend who may have spares.
You don't want the frustration, and uncertainty, and the possibility of doing something as a hack that causes other problems.
Order the right cartridges and wait for their arrival.
ps: Not to make this a shopping answer, but Amazon has 24 V 40 W cartridges, qty 5 for $8. Depending on where you are, you may be able to get these tomorrow and use them while waiting for the "right" ones to arrive.
No you don't want to do that.
A 12 V 30 W heater has a resistance of about 5 Ω (2.5 A on 12 V). A 24 V 30 W heater is about 19 Ω (1.25 A on 24 V). Placing two 12 V heaters in series means about 10 Ω, for 24 V that means that the current is 2.5 A, similar to a 12 V circuit, the power will be 30 W for each heater. So it appears that this should work.
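The arithmetic in the previous paragraph can be written out explicitly; a small Python sketch (the 5 Ω and 19 Ω figures above are the same values, rounded):

```python
# R = V^2 / P gives a cartridge's resistance from its ratings.
def resistance(volts, watts):
    return volts ** 2 / watts

r12 = resistance(12, 30)          # one 12 V 30 W cartridge, 4.8 ohm
r_series = 2 * r12                # two in series, 9.6 ohm
current = 24 / r_series           # 2.5 A when driven from 24 V
power_each = current ** 2 * r12   # ~30 W dissipated in each heater
print(r12, current, power_each)
```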
But the problem is that, being in series, both hotends are always heated together. This is not good for the unused core, which is prone to oozing filament and can cook filament if kept at a high stand-by temperature for a long time. Typically, unused printing cores go to a lower stand-by temperature when they are not printing. It would also be more difficult to use filaments with different temperatures in the two hotends.
Furthermore, which thermistor would you use? A hotend cools down by melting filament; the temperature drop measured by the thermistor makes the control logic add current to the heater to compensate for the loss. If you only use one thermistor (from a firmware configuration perspective, the setup is then similar to a single heater in a single hotend) and extrude from the other core (the one without a thermistor), the temperature drop will not be registered and as such not controlled. There is no default firmware option to use two thermistors for a "single" heating element (in this case a series chain of heating elements); this would probably require modding the firmware source code.
You could test this setup, but I would not use it for a long time.
Difference in latency between public and internal IP addresses
Should there be a difference in latency when accessing resources by their public IP address versus their private, internal IP address? And if there is, would (mis)configuration be to blame?
My understanding (probably over-simplified) is that the router will be smart enough to know that packets don't need to go out and back in when calling the public IP address, and therefore there shouldn't be any performance hit.
Is that accurate?
Context: accessing a web server that is hosted on the internal LAN with a private address, but is accessible through the firewall via a public IP address on the WAN.
Define "internal IP addresses." There is nothing preventing internal IP addresses from being public Internet addresses. Certainly, using a firewall would be the intelligent thing to do in such a case, and the traffic from internal hosts to an internal resource would not need to hit the firewall, while external hosts would need to pass through the firewall.
Added a bit of context that will hopefully clarify.
No. My point is that the public IP address and the internal IP address could be the same thing. In that case, properly configured routing would route directly from one host to another on the internal network. Using private addresses, on the other hand, would prove more problematic. So, again, define "internal IP addresses."
private, non-routable
Under normal circumstances, routing to public addresses from a private address would not work. If you have a single public address, this can be difficult. A one-to-one public-to-private addressing scheme would be much easier, but why would you use private addresses in such a case?
Re-architecting a domain (new to the environment) and trying to avoid split-horizon DNS, which is in use for about half of the resources. I want to know if I can just use the public NS as the SOA internally and externally, or if the resources with private addresses will take a performance hit. I believe most resources that are public are using 1:1 NAT.
Can't you check this with traceroute?
Technically, yes, there will be a small difference, and how noticeable it is will depend on your devices/configuration. This is because of the different paths the packets have to take, but like I said, it depends on how your setup is designed; there are plenty of variables.
1) If you are on the LAN then your path to the webserver via its private IP is merely just switched.
2) If you are on the LAN and you try to access the webserver via its public IP, then the traffic has to go out through your LAN gateway (which I am assuming is your router with a public IP address on the other side), get NATed outbound, get NATed inbound and forwarded to the private IP address of the server, and then the return traffic follows the same path back.
So you can see there will be marginally more resources used than in #1.
The traffic to a public IP address would go out and get a NAT address. It doesn't necessarily come back. As you said, there are lots of variables in this.
Yeah, of course, all depending on whether the router does hairpinning, I guess. My assumption is that connectivity back in is possible for the scenario to be in question in the first place; otherwise latency is irrelevant.
I'm still not sure why anyone with enough public addresses to do one-to-one NAT would go to all the trouble to actually NAT. All pain, no gain...
agreed, I guess it's a legacy concept really.
Any references to new best practices?
@willWorkForCookies it all depends on how much flexibility you have to do a mini-redesign and how precious public IPs are to you
It all depends on the setup.
Assuming a simple setup with a single router/firewall and a number of Ethernet switches on a flat Ethernet network, traffic to the private IP will go directly, while traffic to the public IP will have to hairpin through the router/firewall. That will add latency to the path; how much depends on how heavily loaded the router/firewall is, how heavily loaded the network in general is, how fast it can process packets, where the router sits on the network relative to the client and server, and so on.
In a more complex network you would have to look at the overall topology of the network to determine what impact using the public IP would have on the path, and whether the path going through the translation would be longer.
Aside from performance, two other issues to bear in mind:
It is likely that the client IP address seen by the server will be an IP address of a NAT box rather than the internal IP of the client.
Some NAT setups may not support connections to the public IP from internal clients at all.
Parsing numeric data with thousands seperator in `polars`
I have a tsv file that contains integers with thousand separators. I'm trying to read it using polars==1.6.0, the encoding is utf-16
from io import BytesIO
import polars as pl
data = BytesIO(
"""
Id\tA\tB
1\t537\t2,288
2\t325\t1,047
3\t98\t194
""".encode("utf-16")
)
df = pl.read_csv(data, encoding="utf-16", separator="\t")
print(df)
I cannot figure out how to get polars to treat column "B" as integer rather than string, and I also cannot find a clean way of casting it to an integer.
shape: (3, 3)
┌────────┬─────┬───────┐
│ Id ┆ A ┆ B │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ str │
╞════════╪═════╪═══════╡
│ 1 ┆ 537 ┆ 2,288 │
│ 2 ┆ 325 ┆ 1,047 │
│ 3 ┆ 98 ┆ 194 │
└────────┴─────┴───────┘
cast fails, as does passing the schema explicitly. I also tried using str.strip_chars to remove the comma; my work-around is to use str.replace_all instead.
df = df.with_columns(
pl.col("B").str.strip_chars(",").alias("B_strip_chars"),
pl.col("B").str.replace_all("[^0-9]", "").alias("B_replace"),
)
print(df)
shape: (3, 5)
┌────────┬─────┬───────┬───────────────┬───────────┐
│ Id ┆ A ┆ B ┆ B_strip_chars ┆ B_replace │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ str ┆ str ┆ str │
╞════════╪═════╪═══════╪═══════════════╪═══════════╡
│ 1 ┆ 537 ┆ 2,288 ┆ 2,288 ┆ 2288 │
│ 2 ┆ 325 ┆ 1,047 ┆ 1,047 ┆ 1047 │
│ 3 ┆ 98 ┆ 194 ┆ 194 ┆ 194 │
└────────┴─────┴───────┴───────────────┴───────────┘
Also, for this to work in general, I'd need to ensure that read_csv doesn't try to infer types for any columns so I can convert them all manually (any numeric column with a value > 999 will contain a comma).
not sure if it's the best possible way, but simple .with_columns(pl.col.B.str.replace(",", "").cast(pl.Int32)) works
also pl.read_csv(..., use_pyarrow=True) works
I tried use_pyarrow and it didn't seem to work, at least when reading from an actual file; my toy example might not be 100% the same. I assumed replace would fail if there was more than one "," (i.e. 1,000,000), but I didn't try it.
To allow for possible multiple , separators use .str.replace_all:
df = df.with_columns(pl.col('B').str.replace_all(",", "").cast(pl.Int64))
which gives for the sample data:
shape: (3, 3)
┌─────┬─────┬──────┐
│ Id ┆ A ┆ B │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 │
╞═════╪═════╪══════╡
│ 1 ┆ 537 ┆ 2288 │
│ 2 ┆ 325 ┆ 1047 │
│ 3 ┆ 98 ┆ 194 │
└─────┴─────┴──────┘
To make this a more generic approach, instead of pl.col('B') you can do pl.col(pl.String) so it'll catch any column that came through as a string. You may need error handling on top of that but it's data dependent.
I'd probably want to apply this to a specific list of columns rather than every pl.String column, though; otherwise it could strip commas from actual string fields?
I just noticed that if I select all of the numeric columns (even those that were inferred as string) it works as well, i.e. df = df.with_columns(pl.col('A', 'B').str.replace_all(",", "").cast(pl.Int64))
If your source data is utf-16 (or anything besides utf-8) then polars is going to convert it to utf-8 through Python. Since that conversion needs to happen anyway, it might be better to do it yourself and replace the ","s along the way, so that the native polars CSV reader parses the data as numbers in read_csv up front rather than in a subsequent step:
data.seek(0)
pl.read_csv(data.read().decode('utf-16').replace(',','').encode('utf-8'), separator="\t")
Just to emphasize: if your source data is already utf-8 then having Python do the replace is almost certainly slower than @user19077881's answer. Only do this if your source isn't utf-8, because polars will convert it to utf-8 in Python anyway. Of course, if you have columns that are actually supposed to be strings with commas then this doesn't work, because it doesn't know the difference.
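The decode-and-replace step itself needs nothing beyond the standard library; a minimal sketch using the sample data from the question (with the caveat already noted that a blanket replace also touches genuine string columns):

```python
# Re-encode utf-16 CSV bytes as utf-8 and strip thousands separators up front,
# so a CSV reader can parse the numeric columns directly.
raw = "Id\tA\tB\n1\t537\t2,288\n2\t325\t1,047\n3\t98\t194\n".encode("utf-16")
cleaned = raw.decode("utf-16").replace(",", "").encode("utf-8")

# Plain-Python parse just to show the columns now convert to int cleanly
rows = [line.split("\t") for line in cleaned.decode("utf-8").strip().splitlines()]
header, body = rows[0], rows[1:]
b_values = [int(r[2]) for r in body]
print(header, b_values)
```

The `cleaned` bytes are what you would hand to `pl.read_csv` in place of the original buffer.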
VBA for Excel code to find and change formatting of substrings of text within a cell
I'm using VBA for Excel.
I have code that does the following:
Take an array of words (called Search_Terms)
I then have a function (see below) that receives the Search_Terms and a reference to a Cell in Excel.
The function then searches the text within the cell.
It finds all substrings that match the words in Search_Terms within the cell and changes their formatting.
The function shown below already works.
However, it is quite slow when I want to search several thousand cells with an array of 20 or 30 words.
I'm wondering if there is a more efficient/idiomatic way to do this (I'm not really familiar w/ VBA and I'm just hacking my way through).
Thank you!
Dim Search_Terms As Variant
Dim starting_numbers() As Integer ' this is an "array?" that holds the starting position of each matching substring
Dim length_numbers() As Integer 'This is an "array" that holds the length of each matching substring
Search_Terms = Array("word1", "word2", "word3")
Call change_all_matches(Search_Terms, c) ' "c" is a reference to a Cell in a Worksheet
Function change_all_matches(terms As Variant, ByRef c As Variant)
Dim response As String, term As Variant ' declare locals (the original relied on implicit Variants)
Dim pos As Long, Start As Long, i As Long
ReDim starting_numbers(1 To 1) As Integer ' reset the array
ReDim length_numbers(1 To 1) As Integer ' reset the array
response = c.Value
' This For-Loop Searches through the Text in the Cell and finds the starting position & length of each matching substring
For Each term In terms ' Iterate through each term
Start = 1
Do
pos = InStr(Start, response, term, vbTextCompare) 'See if we have a match
If pos > 0 Then
Start = pos + 1 ' keep looking for more substrings
starting_numbers(UBound(starting_numbers)) = pos
ReDim Preserve starting_numbers(1 To UBound(starting_numbers) + 1) As Integer ' Add each matching "starting position" to our array called "starting_numbers"
length_numbers(UBound(length_numbers)) = Len(term)
ReDim Preserve length_numbers(1 To UBound(length_numbers) + 1) As Integer
End If
Loop While pos > 0 ' Keep searching until we find no substring matches
Next
c.Select 'Select the cell
' This For-Loop iterates through the starting position of each substring and modifies the formatting of all matches
For i = 1 To UBound(starting_numbers)
If starting_numbers(i) > 0 Then
With ActiveCell.Characters(Start:=starting_numbers(i), Length:=length_numbers(i)).Font
.FontStyle = "Bold"
.Color = -4165632
.Size = 13
End With
End If
Next i
Erase starting_numbers
Erase length_numbers
End Function
Some of your code is missing. But, with a routine that must write to the worksheet, you can improve speed somewhat by turning off ScreenUpdating and setting the calculation mode to manual. Also, disable Events. You may get some improvement by using the Long data type instead of the Integer, since VBA converts Integers to Long anyway.
@RonRosenfeld You're correct. Some of the code is missing (apologies if that caused confusion). Absolutely fascinating. I didn't know that these things existed (ScreenUpdating, calculation mode, etc.). Thank you.
The code below might be a bit faster (I haven't measured it).
What it does:
Turns off Excel features, as suggested by @Ron (ScreenUpdating, EnableEvents, Calculation)
Sets the used range and captures the last used column
Iterates through each column and applies an AutoFilter for each of the words
If there is more than one visible row (the first one being the header)
Iterates through all visible cells in currently auto-filtered column
Checks that the cell doesn't contain an error and is not empty (in this order, as distinct checks)
When it finds the current filter word, it makes the changes
Moves to the next cell, then next filter word until all search words are done
Moves to the next column, repeats above process
Clears all filters, and turns Excel features back on
Option Explicit
Const ALL_WORDS = "word1,word2,word3"
Public Sub ShowMatches()
    Dim ws As Worksheet, ur As Range, lc As Long, wrdArr As Variant, t As Double
    t = Timer
    Set ws = Sheet1
    Set ur = ws.UsedRange
    lc = ur.Columns.Count
    wrdArr = Split(ALL_WORDS, ",")
    enableXL False
    Dim c As Long, w As Long, cVal As String, sz As Long, wb As String
    Dim pos As Long, vr As Range, cel As Range, wrd As String
    For c = 1 To lc
        For w = 0 To UBound(wrdArr)
            If ws.AutoFilterMode Then ur.AutoFilter 'clear filters
            wrd = "*" & wrdArr(w) & "*"
            ur.AutoFilter Field:=c, Criteria1:=wrd, Operator:=xlFilterValues
            If ur.Columns(c).SpecialCells(xlCellTypeVisible).CountLarge > 1 Then
                For Each cel In ur.Columns(c).SpecialCells(xlCellTypeVisible)
                    If Not IsError(cel.Value2) Then
                        If Len(cel.Value2) > 0 Then
                            cVal = cel.Value2: pos = 1
                            Do While pos > 0
                                pos = InStr(pos, cVal, wrdArr(w), vbTextCompare)
                                wb = Mid(cVal, pos + Len(wrdArr(w)), 1)
                                If pos > 0 And wb Like "[!a-zA-Z0-9]" Then
                                    sz = Len(wrdArr(w))
                                    With cel.Characters(Start:=pos, Length:=sz).Font
                                        .Bold = True
                                        .Color = -4165632
                                        .Size = 11
                                    End With
                                    pos = pos + sz - 1
                                Else
                                    pos = 0
                                End If
                            Loop
                        End If
                    End If
                Next
            End If
            ur.AutoFilter 'clear filters
        Next
    Next
    enableXL True
    Debug.Print "Time: " & Format(Timer - t, "0.000") & " sec"
End Sub

Private Sub enableXL(Optional ByVal opt As Boolean = True)
    Application.ScreenUpdating = opt
    Application.EnableEvents = opt
    Application.Calculation = IIf(opt, xlCalculationAutomatic, xlCalculationManual)
End Sub
Your code uses ReDim Preserve in the first loop (twice).
That has a slight impact on performance for one cell, but for thousands of cells it becomes significant:
ReDim Preserve makes a copy of the initial array with the new dimension, then deletes the first array.
Also, Selecting and Activating cells should be avoided - most of the time they are not needed and they slow down execution.
Edit
I measured the performance between the 2 versions
Total cells: 3,060; each cell with 15 words, total search terms: 30
Initial code: Time: 69.797 sec
My Code: Time: 3.969 sec
Initial code optimized: Time: 3.438 sec
Initial code optimized:
Option Explicit
Const ALL_WORDS = "word1,word2,word3"
Public Sub TestMatches()
    Dim searchTerms As Variant, cel As Range, t As Double
    t = Timer
    enableXL False
    searchTerms = Split(ALL_WORDS, ",")
    For Each cel In Sheet1.UsedRange
        ChangeAllMatches searchTerms, cel
    Next
    enableXL True
    Debug.Print "Time: " & Format(Timer - t, "0.000") & " sec"
End Sub

Public Sub ChangeAllMatches(ByRef terms As Variant, ByRef cel As Range)
    Dim termStart() As Long 'this array holds starting positions of each match
    Dim termLen() As Long 'this array holds lengths of each matching substring
    Dim response As Variant, term As Variant, strt As Variant, pos As Long, i As Long
    If IsError(cel.Value2) Then Exit Sub 'Do not process errors
    If Len(cel.Value2) = 0 Then Exit Sub 'Do not process empty cells
    response = cel.Value2
    If Len(response) > 0 Then
        ReDim termStart(1 To Len(response)) As Long 'create arrays large enough
        ReDim termLen(1 To Len(response)) As Long 'to accommodate any matches
        i = 1: Dim wb As String
        'The loop finds the starting position & length of each matched term
        For Each term In terms 'Iterate through each term
            strt = 1
            Do
                pos = InStr(strt, response, term, vbTextCompare) 'Check for match
                wb = Mid(response, pos + Len(term), 1)
                If pos > 0 And wb Like "[!a-zA-Z0-9]" Then
                    strt = pos + 1 'Keep looking for more substrings
                    termStart(i) = pos 'Add match starting pos to array
                    termLen(i) = Len(term) 'Add match len to array termLen()
                    i = i + 1
                Else
                    pos = 0
                End If
            Loop While pos > 0 'Keep searching until we find no more matches
        Next
        If i = 1 Then Exit Sub 'no matches: ReDim Preserve (1 To 0) would error
        ReDim Preserve termStart(1 To i - 1) 'clean up array
        ReDim Preserve termLen(1 To i - 1) 'remove extra items at the end
        For i = 1 To UBound(termStart) 'Modify matches based on termStart()
            If termStart(i) > 0 Then
                With cel.Characters(Start:=termStart(i), Length:=termLen(i)).Font
                    .Bold = True
                    .Color = -4165632
                    .Size = 11
                End With
            End If
        Next i
    End If
End Sub
Weekly Featured Image for Mar 21, '11
This is the place to submit and vote on photos for the week of Mar 21 to be featured on the main site. Rules:
Limit one photo per person per week.
A specific photo may be submitted at most two weeks in a row, and not more than four times a year.
Keep all images appropriate, we want this site to be work safe.
Do not submit a photo if you are currently featured.
Images should be 375 x 210 px.
Include a title for the image
Voting Closes on March 20th at 11:59pm EDT (UTC-4). Submissions may be added any day of the week until voting closes. The winning image (with the highest votes) as of the close of voting will be exhibited on the main site.
Last week's thread
In Your Dreams
Larger version on Flickr. Taken in New York, just off Broadway/Times Square.
Congratz! but the title is wrong on the main site as well as the author...
@ShutterBug - Thanks! I'm sure that they'll correct it shortly.
@ShutterBug: It's still the info from last week. That might be why I previously got accused of submitting an entry while being featured...
I wasn't a fan of 'RED', that was before I started photography...
Petals of Dahlia.
Picture taken at Circuit House garden, Chandpur, Bangladesh. Original in flickr.
Cretan View
Large version
Include a title for the image. Seems you missed this one ;)
Summer Fog
A foggy July 4th weekend on the Oregon coast.
15 puzzle problem using Branch and Bound
As far as I know, in the branch and bound approach to solving the n-puzzle problem, the node with the least cost is branched next.
What happens if there are multiple nodes with the minimal cost?
Is any one of those nodes taken, or must every one of them be branched?
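In a typical best-first implementation the tie never has to be resolved explicitly: the frontier is a priority queue, only one node is popped and expanded per step, and any remaining equal-cost nodes stay queued and get expanded later if the search continues. A sketch (the cost values below are made up) of the usual tie-breaking trick, where an insertion counter makes the ordering deterministic:

```python
import heapq
import itertools

counter = itertools.count()  # breaks ties by insertion order
frontier = []

# Three nodes, two of them sharing the minimal cost 4:
for cost, node in [(7, "a"), (4, "b"), (4, "c")]:
    heapq.heappush(frontier, (cost, next(counter), node))

# Only ONE node is expanded per step -- here the first-inserted
# of the two cost-4 nodes; "c" simply stays on the frontier.
cost, _, node = heapq.heappop(frontier)
print(cost, node)  # 4 b
```

Without the counter, Python would try to compare the node objects themselves when costs tie, which fails for types that don't define ordering.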
Make the code run faster without nested loop
I am trying to solve an assignment problem; the code I wrote takes extremely long to run. I think it's due to the nested loop I used. Is there another way to rewrite the code to make it more efficient?
The question I am trying to solve: basically, starting at the first element, compare it with every element to its right; if it is larger than all of them, it is a "dominator". Then compare the second element with every element to its right, and so on, all the way to the last element, which automatically becomes a "dominator".
def count_dominators(items):
    if len(items) == 0:
        return len(items)
    else:
        k = 1
        for i in range(1, len(items)):
            for j in items[i:]:
                if items[i-1] > j:
                    k = k + 1
                else:
                    k = k + 0
        return k
So it's just returning how many elements are less than the first? You can take out your else statement... and just initialize k to be 0... before the first if statement.
Hi ShanerM, essentially I need to start at the first element, then the second, then the third, and so on, all the way to the last element.
So are you trying to sort the array or just get how many elements are less than each element to the right of it?
Because, if you are trying to sort, you could just use a merge sort or something like that. I am not sure how big you are expecting items to be either. https://www.geeksforgeeks.org/time-complexities-of-all-sorting-algorithms/
To get how many elements are greater than its right elements. The list should be very big. As I ran the test file it takes forever
You can use a list comprehension to check if each item is a "dominator", then take the length - you have to exclude the final one to avoid taking max of an empty list, then we add 1 because we know that the final one is actually a dominator.
num_dominators = len([i for i in range(len(items) - 1) if items[i] > max(items[i + 1:])]) + 1
This is nice because it fits on one line, but the more efficient (single pass through the list) way to do it is to start at the end and every time we find a new number bigger than any we have seen before, count it:
biggest = items[-1]
n = 1
for x in reversed(items):
    if x > biggest:
        biggest = x
        n += 1
return n
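For completeness, the single-pass idea as a full function (a sketch; it assumes "dominator" means strictly greater than every element to its right, and treats an empty list as having zero dominators):

```python
def count_dominators(items):
    # Walk from the right, counting each new strict maximum:
    # an element is a dominator iff it beats everything after it.
    count = 0
    biggest = float("-inf")
    for x in reversed(items):
        if x > biggest:
            biggest = x
            count += 1
    return count

print(count_dominators([42, 7, 12, 9, 2, 5]))  # 4  (42, 12, 9, 5)
```

This is O(n) versus the original O(n^2), which is where the speedup on large lists comes from.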
I actually never thought about doing it backward. Let me try it.
Hey Simon, it worked... something new for me today. Need to use a different perspective to look at a question! Thank you
How do I return an attribute after inserting in Laravel?
I want the insert operation in Laravel to return an attribute called 'orderid' after it inserts the data into the database. I have come across the insertGetId feature, which returns the id of the inserted row, but I do not have an id column in the table; instead I have an attribute called 'orderid'.
$id = DB::table('orders')->insertGetId( $data );
insertGetId will give you only the id, and moreover DB::table will never give you the attributes - it will just return a boolean. Use Eloquent to achieve this:
$order = \App\Order::create($data);
return $order->orderid;
Do you have the model Order in the App namespace?
How to use less space for ~
In a paper, I need to use the abbreviated form of Figure as in "see Fig. 2". To avoid line breaks between Fig. and the number itself, I place a ~ in between. However in that case, the inserted space is way too much and a \, looks a lot better.
So the question is, how can I protect the Figure expressions from line breaks but also reduce the space to \,?
It all depends on how you realize the construction. A MWE is needed.
My previous comment was wrong, sorry.
\, is defined as
\DeclareRobustCommand{\,}{%
\relax\ifmmode\mskip\thinmuskip\else\thinspace\fi
}
\def\thinspace{\kern .16667em }
Thus it is basically a \kern. It is not followed by glue, therefore there is no line break caused by \, in:
see~Fig.\,2
A fast test:
\documentclass{article}
\begin{document}
\parbox{0pt}{%
\hspace{0pt}% allow hyphenation for the first word
see~Fig.\,2
}
\end{document}
As expected, there is an overfull \hbox warning:
Overfull \hbox (38.80566pt too wide) in paragraph at lines 9--9
\OT1/cmr/m/n/10 see Fig.2
And the line is unbroken:
Note that \, can bite: a user wanted to add it in front of words (I don't really know why); if that word was at the beginning of a paragraph, the kern was added as a vertical space, because of how \kern works. In this case it's safe, though, because it will always be preceded by a character.
Wow, thanks for the detailed explanation clearing my foggy understanding of TeX's internals.
@egreg -- the \kern at the beginning of a paragraph detects that it's in vertical mode, and that's how the kern is applied. like \hbox in that location, it doesn't switch into horizontal, but remains in vertical mode, usually causing the user to scratch his/her head in (at least momentary) confusion.
@barbarabeeton Yes, that's the problem. I thought it worthy mentioning. The definition of \thinspace should include \leavevmode, as it's documented as a text command.
@egreg: I am afraid that is one of the many issues that will never been fixed in frozen LaTeX2e.
@HeikoOberdiek I know; probably I should have said "the definition of \thinspace should have included \leavevmode".
@egreg perhaps something for fixltx2e ?
Linearization of $ m \dfrac{d^2y}{dt^2} = u(t) - C_d \left( \dfrac{dy}{dt} \right)^2-mg $
$$ m \frac{d^2y}{dt^2} = u(t) - C_d \left( \frac{dy}{dt} \right)^2-mg $$
where
$$\begin{align*}
y(t)&=\text{missile altitude}\\
u(t)&= \text{force}\\
m&= \text{mass}\\
C_d&= \text{aerodynamic drag coefficient}
\end{align*}$$
How do I linearize this beast? I want to obtain a transfer function so that I can create a PID controller for it.
I'm really stumped and could use some help.
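One standard route is small-signal linearization about an operating point (a sketch; the choice of nominal climb rate $\dot y_0$ is an assumption). Pick the thrust $u_0 = C_d\,\dot y_0^2 + mg$ that holds $\dot y_0$ steady, substitute $u = u_0 + \delta u$ and $\dot y = \dot y_0 + \delta \dot y$, expand the drag term, and drop the second-order $(\delta \dot y)^2$:
$$ m\,\delta\ddot y = \delta u - 2\,C_d\,\dot y_0\,\delta\dot y $$
Taking the Laplace transform with zero initial conditions gives the transfer function from thrust perturbation to altitude perturbation:
$$ \frac{\delta Y(s)}{\delta U(s)} = \frac{1}{s\left(m s + 2\,C_d\,\dot y_0\right)} $$
Note that at hover ($\dot y_0 = 0$) the drag term vanishes and the model degenerates to a double integrator $1/(m s^2)$, which is what the PID design would then have to stabilize.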
contextDestroyed never called when Tomcat is shutdown in Eclipse
I'm facing this issue where whenever I shut down my Tomcat 8.0 in Eclipse, contextDestroyed in my ServletContextListener class is never called, but contextInitialized runs normally when I start up Tomcat. I'm using Servlet version 3.1. Code:
@WebListener
public class ListenerContexto implements ServletContextListener {

    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        System.out.println("ListenerContexto FINALIZADO");
    }

    // Run this before web application is started
    @Override
    public void contextInitialized(ServletContextEvent arg0) {
        System.out.println("ListenerContexto INICIADO");
    }
}
On Tomcat startup, I get "ListenerContexto INICIADO" in the console. But when Tomcat is shut down, nothing shows in the console.
How exactly are you starting and stopping Tomcat?
Directly through the Eclipse Start and Stop buttons. An odd thing I discovered is that if I just hit the Stop button it produces no log at all, like a process kill. But if I select the one Tomcat server that is set up and click its Stop button there, it produces the log as expected. I don't know if the Stop button is supposed to behave this way.
What if only parent version in POM?
Inspecting various POMs, I saw that sometimes they have <version> tags only in the <parent> section. Sometimes they have <version>${parent.version}</version> in the main section along with the version in the parent.
Which version value will be used in these various cases?
The version from the parent, in both cases.
See the "Introduction to the POM" for implicit version inheritance info.
See this document for further information regarding what else is inherited.
It will "inherit" the version from the parent and it does many other pieces of configuration. The groupId and dependencies can come from the parent as well.
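To illustrate the inheritance (the coordinates below are made up), a child POM that omits its own <version> gets the parent's:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <parent>
    <groupId>com.example</groupId>
    <artifactId>example-parent</artifactId>
    <version>1.2.3</version>
  </parent>

  <!-- No <version> (or <groupId>) here: both are inherited, so this
       module resolves to com.example:example-child:1.2.3 -->
  <artifactId>example-child</artifactId>
</project>
```

Writing <version>${parent.version}</version> explicitly resolves to the same value, since that expression reads the parent's version, but simply omitting the tag is the more common convention.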
Flutter theme is not changed on Windows
I am new to Flutter and decided to follow a tutorial on making a Spotify clone using Flutter. He specifies a custom dark theme, and when he runs the code it is indeed a dark theme, but when I run it it is still white. I initially thought I made a typo somewhere, so I copied his code, but it did not change anything.
I get this result
This is the result I want to get
My main.dart file
import 'package:flutter/material.dart';

void main() {
  runApp(Spotify());
}

class Spotify extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Spotify UI',
      debugShowCheckedModeBanner: false,
      darkTheme: ThemeData(
        brightness: Brightness.dark,
        appBarTheme: const AppBarTheme(backgroundColor: Colors.black),
        scaffoldBackgroundColor: const Color(0xFF121212),
        backgroundColor: const Color(0xFF121212),
        primaryColor: Colors.black,
        accentColor: const Color(0xFF1DB954),
        iconTheme: const IconThemeData().copyWith(color: Colors.black),
        fontFamily: 'Montserrat',
        textTheme: TextTheme(
          headline2: const TextStyle(
            color: Colors.white,
            fontSize: 32.0,
            fontWeight: FontWeight.bold,
          ),
          headline4: TextStyle(
            fontSize: 12.0,
            color: Colors.grey[300],
            fontWeight: FontWeight.w500,
            letterSpacing: 2.0,
          ),
          bodyText1: TextStyle(
            color: Colors.grey[300],
            fontSize: 14.0,
            fontWeight: FontWeight.w600,
            letterSpacing: 1.0,
          ),
          bodyText2: TextStyle(
            color: Colors.grey[300],
            letterSpacing: 1.0,
          ),
        ),
      ),
      home: Shell(),
    );
  }
}

class Shell extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Column(
        children: [
          Expanded(
            child: Row(
              children: [
                Container(
                  height: double.infinity,
                  width: 280.0,
                  color: Colors.green,
                ),
                // PlaylistScreen
              ],
            ),
          ),
          Container(
            height: 84.0,
            width: double.infinity,
            color: Colors.blue,
          ),
        ],
      ),
    );
  }
}
Getting wrong validation message using Angular.js
I have an issue: I am getting the wrong validation message while using Angular.js. I explain my code below.
<div class="input-group bmargindiv1 col-md-12">
<span class="input-group-addon ndrftextwidth text-right" style="width:180px">Member Type :</span>
<select class="form-control" id="nosofvoucher" ng-model="vouchers" ng-options="v.name for v in listOfMember track by v.value " ng-change="generateCodeRange('nosofvoucher')" ng-disabled="dismember">
</select>
</div>
my controller side code is given below.
$scope.listOfMember = [{
    name: 'Select member type',
    value: ''
}]
$scope.vouchers = $scope.listOfMember[0];
$http({
    method: 'GET',
    url: "php/selectMemberType.php",
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
}).then(function successCallback(response) {
    //console.log('respo',response.data);
    angular.forEach(response.data, function(obj) {
        var data = {'name': obj.member_name, 'value': obj.member_type};
        $scope.listOfMember.push(data);
    })
    $timeout(function() {
        var value = {'name': 'New Registered User', 'value': 0};
        $scope.listOfMember.push(value);
    }, 2000)
}, function errorCallback(response) {
})
But when I check the validation, even when I select New Registered User from the drop-down, the validation message still says "Please add member type". The code is given below.
$scope.addGenerateCodeData = function(billdata) {
    if ($scope.vouchers.value == null || $scope.vouchers.value == '') {
        alert('Please add member type');
        codeFieldFocus.borderColor('nosofvoucher');
    }
}
Here, even after selecting New Registered User from the list, the above validation message still fires, which it should not. Please help me resolve this issue.
How is addGenerateCodeData getting called? It is being called once your $http has resolved right?
It's called from a button click event.
0 == '' gets evaluated to true in JavaScript; explanation:
http://stackoverflow.com/questions/7605011/why-is-0-true-in-javascript
So what's the solution for this?
Difference between memset and initialization (array)?
I've been working with some C code and I would like to know what the difference is between the following two pieces of code:
double myArray[5] = {0,0,0,0,0};
and
double myArray[5];
memset(myArray,0,5*sizeof(double));
Could there be a problem for replacing the second one with the first one? If so, what kind of problems might be?
Using memset this way makes assumptions regarding the representation of floating point numbers, specifically that the representation of all bits 0 corresponds to the value 0.
If your system uses IEEE754 floating point representation (as are most systems you're likely to come across) this assumption holds. However, if you find yourself running your code on some exotic system that does not, then you might not get the result you expect.
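The all-bits-zero property is easy to check on a given platform. A quick sketch (in Python rather than C, purely to peek at the raw bytes via the struct module):

```python
import struct

# Reinterpret eight zero bytes as an IEEE754 double: on IEEE754 systems
# this is exactly 0.0, which is the assumption that
# memset(myArray, 0, sizeof myArray) relies on.
(value,) = struct.unpack("<d", b"\x00" * 8)
print(value)  # 0.0
```

The same check in C would be a memcmp between a memset-zeroed double and a double assigned 0.0.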
Incidentally, there was some regret in the IEEE-754 committee that a memset to −1 (255) would not fill an array with signaling NaNs. It is so close and just would have required specifying the sense of a signaling/quiet bit in the original standard.
This is the best site I've ever visited.
Thank you very much. Could you add to your answer what kind of problems might be? I'd help a lot if you could give an example or a case.
@Carlos I've heard of systems where a NULL pointer is not all bytes 0, though I've never personally came across such a system.
In addition to dbush's answer, although there likely wouldn't be a problem on most modern systems using memset, the memset version (as written) is more brittle. If someday you decided to change the size of myArray, then one of two things would happen with the version using the braced initializer list:
If you decreased the size of myArray, you will get a compilation error about having too many initializers.
If you increased the size of myArray, any elements without an explicit initializer will automatically be initialized to 0.
In contrast, with the memset version:
If you decreased the size of myArray without remembering to make a corresponding change to memset, memset will write beyond the bounds of the array, which is undefined behavior.
If you increased the size of myArray without remembering to make a corresponding change to memset, elements at the end will be uninitialized garbage.
(A better way to use memset would be to do memset(myArray, 0, sizeof myArray).)
Finally, IMO using memset in the first place is more error-prone since it's quite easy to mix up the order of the arguments.
Magento 2. Do I need custom code(e.g. Plugins) to trigger reindex logic for an Update on Save(realtime) configured Index?
My question is simple and straight forward.
Do I need to add custom logic to call my indexer (e.g. a plugin on afterSave) when the indexer is configured as Update on Save, or is Magento supposed to call it for me?
EDIT:
I'm adding a link to an issue that might suggest that UpdateOnSave does not work out of the box for custom indexers https://github.com/magento/magento2/issues/8866
Also if we take a look at \Magento\CatalogSearch\Model\Indexer\Fulltext\Plugin\Product::addCommitCallback Magento uses a plugin to add the reindexRow logic.
WKB to WKT JavaScript function
Turns out json isn't so good at transporting binary data. But with HTML5, XHR2 is now capable of transferring blobs cleanly. I'm looking to transfer binary geometry (to save bandwidth) and decode it on the client.
To no avail, I've scoured the web for a javascript-based WKB (Well-known Binary) to WKT (Well-known Text) function. Before I re-invent the wheel -- is anyone aware of any open-source solutions?
Btw, you should not use blobs but arraybuffer.
It looks like a new and better supported JS WKB parsing library has since appeared.
https://github.com/cschwarz/wkx
I've been able to use it to convert WKB directly from postgres into JS objects that can be mapped in the browser. You'll need to include https://github.com/cschwarz/wkx/blob/master/dist/wkx.js in your webpage for this to work.
// Required imports (works in browser, too)
var wkx = require('wkx');
var buffer = require('buffer');
// Sample data to convert
var wkbLonlat = '010100000072675909D36C52C0E151BB43B05E4440';
// Split WKB into array of integers (necessary to turn it into buffer)
var hexAry = wkbLonlat.match(/.{2}/g);
var intAry = [];
for (var i in hexAry) {
    intAry.push(parseInt(hexAry[i], 16));
}
// Generate the buffer
var buf = new buffer.Buffer(intAry);
// Parse buffer into geometric object
var geom = wkx.Geometry.parse(buf);
// Should log '-73.700380647'
console.log(geom.x)
// Should log '40.739754168'
console.log(geom.y)
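For readers who'd rather not pull in a library, the WKB layout for a 2-D point is small enough to decode by hand: one byte-order byte, a uint32 geometry type, then two doubles. A sketch in Python using the same sample value as above (the struct formats assume the little-endian byte-order flag 01):

```python
import struct

wkb = bytes.fromhex('010100000072675909D36C52C0E151BB43B05E4440')

byte_order = wkb[0]                              # 1 = little-endian
(geom_type,) = struct.unpack_from('<I', wkb, 1)  # 1 = Point
x, y = struct.unpack_from('<dd', wkb, 5)         # two IEEE754 doubles

print(geom_type, round(x, 9), round(y, 9))  # matches the wkx result above
```

A full parser needs the big-endian case and the other geometry types, which is exactly what wkx handles.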
Hey, OP here. Asked this 1.5 years ago. Awesome!
The only pure JavaScript solution I've found so far (and I did not try it) is https://github.com/thejefflarson/wkb.js.
It's only an incomplete WKB parser (it converts WKB to a JS object you can transform to WKT).
An alternative to WKB on the JavaScript side can be the experimental TWKB format (not a standard at the moment):
http://blog.jordogskog.no/2013/05/05/mapservice-from-websocket-with-twkb/ but it requires playing with a custom PostGIS build (so really not for beginners).
Another possibility might be to use TopoJSON instead of plain GeoJSON:
TopoJSON is an extension of GeoJSON that encodes topology. Rather than
representing geometries discretely, geometries in TopoJSON files are
stitched together from shared line segments called arcs. TopoJSON
eliminates redundancy, offering much more compact representations of
geometry than with GeoJSON; typical TopoJSON files are 80% smaller
than their GeoJSON equivalents.
As mentioned by ThomasG77 I have been playing with binary data in this "twkb" format.
you can see it in action here (a websocket example)
or here, a php implementation.
If you want to study the parsing check the file twkb.js. It is a little cleaner in the twkb_node example I think.
In this blog post you can find link to the source code of the PostGIS part and some description of the format.
I have done some reworking since and will soon put a new description on GitHub. I believe in twkb, but it needs more brains to get good.
You can of course also parse wkb but you will gain no bandwidth compared to gzipped geojson. I was surprised how small that did get. See the second link and check the sizes of the geojson vs twkb. WKB is about 2-6 times bigger than twkb.
This answer is not about wkb to wkt function.
I'd say you shouldn't use conversion from WKT to WKB just to save bandwidth; gzipping WKT (or whatever other format you have there) on the server should be more than enough (and most probably more efficient), and browsers can do the unzipping on the fly and out of the box.
Look also at the browser support tables for XMLHttpRequest Level 2, as it's not supported in some older, yet still used, browsers.
GeoScript has a Javascript API that reads and writes WKT and WKB. The methods are part of geom.io.
FYI, only in a shell environment with Java dependencies (cf. pom.xml at http://github.com/tschaub/geoscript-js/), not in the browser.
Sharing keystore between multiple apps
I am developing an SDK/API that will be used by apps not written by me.
I want my code to generate a private key once and store it on the device.
This key should be unique per device, I don't mind it being erased on factory reset, and I don't need to extract it from the device.
Sounds an ideal case for the KeyStore (preferably with a StrongBox).
I do want, though, different apps to use the same key to sign (or ask my API to do so)
I could not answer two concerns:
If my API is a library linked (statically) into the app, different apps using my code wouldn't be able to use the key, because Android Keystore entries are private to the app that created them.
If I package my API as a separate android app, and have the other apps use it using Intents, I'll have to have the user install it manually, because as far as I could tell, Android/Play does not allow for dependencies between apps.
What is the most graceful way to resolve this situation?
something like generating a random alphanumeric string of 32 bit would help?
@Sam. how would that help?
Could not open or put a Hibernate Session in ValueStack: Cannot open connection
I have created an application using Struts and Hibernate. While running the application I am getting the following error in the Eclipse IDE:
org.hibernate.SessionException: Error! Please, check your JDBC/JDNI Configurations and Database Server avaliability.
Could not open or put a Hibernate Session in ValueStack: Cannot open connection
com.googlecode.s2hibernate.struts2.plugin.interceptors.SessionTransactionInjectorInterceptor.intercept(SessionTransactionInjectorInterceptor.java:134)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:236)
org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52)
org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:468)
org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77)
org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:76)
Any clue?
Regards,
Any clues to overcome this Error?
org.hibernate.SessionException: Error! Please, check your JDBC/JDNI Configurations and Database Server avaliability.
Cannot open connection
Is this a customized error message?
You need to add dependencies for asm and cglib in your pom.xml. Did you add them? Please check it once.
see here for example
@user2189411 did you check my answer
Thanks for your answer PSR. I am using a simple Dynamic Web Project in Eclipse Indigo; I am not using a Maven project, so there is no pom.xml file to add to. In this case how can I solve my issue?
@user2189411 No problem. You can download those jar files manually and then add them to the build path.
I have added the following jar files and added them to the build path: antlr-2.7.6
commons-beanutils
commons-collections
commons-fileupload
commons-io
commons-lang
commons-logging
displaytag
displaytag-export-poi
dom4j
ejb3-persistence
freemarker
hibernate3
hibernate-annotations
hibernate-commons-annotations
hibernate-validator
hsqldb
itext
iText
javassist
jcl104-over-slf4j
jstl
jta
junit
log4j
mail
mysql-connector-java
ognl-2.7.3
poi-3.2-FINAL
slf4j-api
slf4jlog4j
standard
struts2-convention-plugin
struts2-core
struts-dojo-plugin
struts-fullhibernatecore-plugin
xwork. Did I miss any jar files?
No, I am getting the same error after adding the jar files listed in the comments above. Did I miss any jar files?
I think you missed some jar files.
Could you say which JAR files I missed?
Deploying a Java Project to a JBoss Server
Just so that this doesn't end up marked as a duplicate right off the bat, I did look through the site and find answers to similar questions, but not mine. Although if it turns out there's no answer/this is still a duplicate, I understand.
At my new job, I just got an assignment where I'm using items from a repository. To finish the project, I need to run the files on a JBoss server and check for errors and other stuff like that. Since I a) have never worked with JBoss or any other server-related stuff before and b) am having a lot of difficulty understanding the only person I can talk to about this work, I'm behind on the project.
I'm supposed to deploy my project to jBoss, but it's not a Maven project (and I'm not sure what Maven is right now, but I'm looking it up since it seems to be the only way to do this) and it isn't a web project, both of which I could just use "export" to make into a WAR file (apparently). My mentor keeps saying it needs to be a WAR file (that's why the above links have to do with WARs) but that doesn't make any sense and it's just confusing me. I'm almost 100% certain WAR files have to be web application projects. But, you know... now I'm confused because my mentor is clearly saying WAR and pretty frequently too.
Further, when I was "taught" to use jBoss, I wasn't taught anything about Maven and I don't know if it's absolutely necessary. I'm not trying to do this in the easiest/laziest way... I'm just aware that there's more stuff that I need to learn and I'm trying to learn the most efficient way to do things such that I don't need to keep asking my mentor questions. He's very smart, but very difficult to understand and often leaves me with way, way more questions than answers (and a lot of the time, those new questions are cleared up by my manager with "No, you don't have to do that, it's not necessary, that's not your job, etc."). He's also the only person who I can talk to about my job (and he hasn't been here much longer than me). So even though I understand the basics of my job (Java, XML, etc.) I need him to understand the environment I'm using. He just confuses me when we talk about literally anything else.
I have multiple projects that influence the main project I'm working on. I need to compile, deploy to jBoss, debug, and then I'm not sure what next (probably submit if it's working? I mean I'm literally brand new here so I'm just flying by the seat of my pants.) I've got the code to compile, now I just need to deploy. So, three questions:
Am I correct in saying that I can't make a WAR file from a java project? If I am wrong, how do I do that?
Can I just make a JAR file and use that to deploy? If so, how do I include other projects files in that file (I think the second part is similar to this question, so it's cool to just ignore the second part of this question unless there are different steps. I only ask to be certain).
Is Maven necessary? I ask because I was never told about Maven and only recently heard about it (not from my mentor). I'm getting the impression that the project itself needs to be Maven based and I'm pretty sure these projects aren't... but again, I'd never heard of it before so I'm still looking into it. I feel as though it would've been mentioned, but... I don't know if my mentor just forgot to mention it or if we're just not using it.
do you have a pom.xml in your project?
Maven has nothing to do with your problem, so you can rest easy and forget about it. Focus on learning what a WAR is. It might be simpler to do your first tests with Apache Tomcat by the way and not jump directly to a full JEE container such as JBoss. Start small.
@Goot No, I don't. I have a build.xml and a web.xml, but no pom.xml.
Since you have absolutely no experience with JBoss, read this book: http://www.amazon.com/JBoss-AS-Configuration-Deployment-Administration/dp/1849516782/. It's fantastic and will give an answer to your questions. I'm kinda new to JBoss too, and it was a big help.
@Gimby Would Tomcat be significantly different from jBoss? What I mean is, would I need to add/change code that I have right now to use Tomcat? Sorry if this is a really beginner question, I'm just really lost.
If you're just deploying a war file, Tomcat would definitely be a better way to go. It's an easier step to make than going to JBoss in my humble opinion. Just drop the war file in the webapps folder of Tomcat and you're done!
Am I correct in saying that I can't make a WAR file from a java
project? If I am wrong, how do I do that?
You can wrap the project up as a war when you build it (if you want to use ant or maven), or even do so through Eclipse using Export -> War file. You will need a web.xml to make a war file, which from your comments you already have.
Can I just make a JAR file and use that to deploy? If so, how do I
include other projects files in that file (I think the second part is
similar to this question, so it's cool to just ignore the second part
of this question unless there are different steps. I only ask to be
certain).
Yes you can deploy a jar, but I'm assuming you somehow want to be able to access it from a war or ear project in JBoss.
Is Maven necessary? I ask because I was never told about Maven and
only recently heard about it (not from my mentor). I'm getting the
impression that the project itself needs to be Maven based and I'm
pretty sure these projects aren't... but again, I'd never heard of it
before so I'm still looking into it. I feel as though it would've been
mentioned, but... I don't know if my mentor just forgot to mention it
or if we're just not using it.
No, you don't need maven at all. You could do all of this through just Eclipse without needing a build tool, although I wouldn't actively advise that. Maven handles your dependency management (and much more); it's good to use if you have loads of jars you need to keep track of, but it is by no means vital.
Is it possible to make a WAR file from a warproduct? I only ask because Export>War doesn't seem to work, or maybe I'm just doing it wrong. I'm unsure what steps I should take to make a WAR file using web.xml, I suppose is the more clear way of putting it.
If you're using Eclipse, you just have to make a project as a dynamic web project, then the export procedure should work fine, you might want to read http://stackoverflow.com/questions/5108019/how-to-make-war-file-in-eclipse and http://stackoverflow.com/questions/1001714/how-to-create-war-files
The problem is that I'm taking the files from a repository and they're all Java projects. None of them are dynamic web projects, which it seems is the way to do it. I think I could change the project to a dynamic web module, would that be the same thing, essentially?
Yeah, that would work fine. It gets a little more complicated if they're all separate projects; you'd be better off making the main project run on Tomcat and trying to run it from Eclipse. Once you get that working, it should export as a war. You can mark a project as a dynamic web project under Project Facets in the project's properties.
| common-pile/stackexchange_filtered |
Why does ALTER TABLE ... ALTER COLUMN ... fill the version store in TempDB
I had to change a BIGINT column in a large table from nullable to non-nullable.
ALTER TABLE my.Table ALTER COLUMN myColumn BIGINT NOT NULL
Running this in our UAT and RC environments took around 3 hours with low levels of concurrent activity. Both UAT and RC are reflective of PROD so are good test platforms. 3 hours is reasonable given the size of the table and the performance of the kit.
As far as I'm aware the relevant config is snapshot_isolation_state = 0, is_read_committed_snapshot_on = 1.
The ALTER TABLE has been killed in PROD a couple of times (after running for several hours, then with a lengthy rollback) after other activity started to fail with "Transaction aborted when accessing versioned row in table 'myOther.Table' in database 'MyDatabase'. Requested versioned row was not found. Your tempdb is probably out of space. Please refer to BOL on how to configure tempdb for versioning." errors.
When running this in PROD for the third time I arranged for all other activity to be shut down. After around 4 hours it was clear that something was not working. Using the initial query in Troubleshooting tempdb growth due to Version Store usage I could see that the version store was most of TempDB, but the ALTER TABLE connection was not blocked; CPU & IO were increasing slowly so I was confident it was alive, and the only wait I saw was SOS_SCHEDULER_YIELD. There were no other non-trivial connections.
After another couple of hours I decided to add some space to TempDB. The ALTER TABLE finished very soon afterwards.
Can someone explain why the ALTER TABLE stalled? I could understand if there was another connection referencing the old (un-ALTERed) rows in my.Table but this definitely wasn't the case.
How many records does your "my.Table" contain in those environments?
@BuhakeSindi - about 200m rows, average record size about 50 bytes.
Altering a column from nullable to not nullable causes a new column to be created internally; the operation is fully logged, and it also causes row versions to be produced if you use RCSI.
You can check this topic for more info: Why does ALTER COLUMN to NOT NULL cause massive log file growth?
Regarding
I could understand if there was another connection referencing the old
(un-ALTERed) rows in my.Table but this definitely wasn't the case.
You misunderstand how RCSI works.
As soon as the transition to RCSI is completed, every update will generate row versions regardless of whether there are any other transactions interested in those rows:
When either the READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION
database options are ON, logical copies (versions) are maintained for
all data modifications performed in the database. Every time a row is
modified by a specific transaction, the instance of the Database
Engine stores a version of the previously committed image of the row
in tempdb. Each version is marked with the transaction sequence number
of the transaction that made the change. The versions of modified rows
are chained using a link list. The newest row value is always stored
in the current database and chained to the versioned rows stored in
tempdb.
Understanding Row Versioning-Based Isolation Levels
Or more clearly it's written here:
When either the READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION
database options are ON, update and delete transactions for a
particular database must maintain row versions even when there are no
transactions using a row versioning-based isolation level.
Constructing a consistent snapshot of data using row versions involves
system resources (CPU and memory), and potentially generates I/O
activity. Because the record versions are stored in tempdb,
performance is better and the number of issued I/Os is lower when more
tempdb pages can be stored in memory for row versioning.
As you can imagine, ALTER TABLE operates within one transaction, so the row versions stay alive for the whole duration of that transaction (they could live even longer, as long as a statement interested in them was still executing, but since none was, the minimum "life expectancy" is the duration of the owning transaction).
...................................................................................
UPDATED:
I tried to reproduce the issue on SQL Server 2012:
I set tempdb autogrowth to 0 (tempdb data file set to 10Mb, templog to 1Mb) and created a new database with a 20Mb data file + 10Mb log file, simple recovery model, and created a table dbo.Nums filled with 1000000 integers (bigint, null) this way:
select top 1000000 row_number() over(order by 1/0) as n
into dbo.Nums
from sys.all_columns c1 cross join sys.all_columns c2;
Then I did a checkpoint and alter a column from null to not null:
alter table dbo.nums alter column n bigint not null
This took 0 seconds, my table size was about 16Mb prior to this action and it remains about 16Mb, no log file growth, and what went to log file I'll show in the picture.
Then I dropped the table, recreated it, and altered my db:
alter database rcsi set read_committed_snapshot on;
And did exactly the same thing: checkpoint + alter table + select from sys.fn_dblog()
I had to wait for 5 minutes, but tempdb gave no error.
There was PREEMPTIVE_OS_GETDISKFREESPACE as a wait type during the statement execution, but guess what it was.
It was not tempdb (that was only 10Mb + 1Mb and remained the same, as I had limited its size); it was the LOG FILE of my user database, which, just for changing the data type from nullable to not nullable UNDER RCSI, grew to 1Gb (!!!!)
1Gb of log for changing the nullability of 1 column of a table that was only 16Mb
And all that time I was waiting not for tempdb growth but for zeroing out 1Gb for my db log file.
I attach the picture of what went to the log during the same operation under RC and RCSI, so you can see that producing row versions costs much more in the user database than in tempdb. So I think the hours you were waiting were spent logging row versions to your database log file (they are not logged in tempdb at all).
Besides COPY_VERSION_INFO, there were many row modifications that may not apply in your case: my rows got a new 14-byte row version tag, so there were many changes made to that table because I changed the isolation level just before changing the nullability. But the main impact in my case came from user db log file growth, and not from tempdb, which did not grow at all.
P.S. Maybe you'd better move this question to dbaexchange?
The first link relates to the transaction log which is not what my question is about. The 2nd link is good, but the article also states "When tempdb is full, update operations will stop generating versions and continue to succeed..." in which case why did my ALTER TABLE stall?
The first link contains repro code that shows that a new column was created, the existing data transferred to it, and the old column deleted. As you asked "Can someone explain what happened?", I provided a link that not only explains but shows what happened.
Apologies - I've clarified the final question by improving the language.
You are right that the ALTER TABLE should not have had problems if it could not generate versions anymore, but I'm not sure the problem was these row versions. What was in sys.dm_os_waiting_tasks while the operation was executing?
I didn't look at sys.dm_os_waiting_tasks but never saw anything other than SOS_SCHEDULER_YIELD for lastwaittype in sysprocesses. My IO is not very good so if there was much IO going on I would have expected to see some IO waits in lastwaittype.
Do gums have any nutritional value?
Do gums (e.g., xanthan, guar, cellulose, glucomannan) have any nutritional value, or do they pass undigested like chewing gum does?
Are there known allergies to any gums?
Gums are considered fibre because they are ignored by our microbiome.
Besides, industry (in the listed contents of any product that we buy) also considers fibre to be anything that is not processed by our digestion.
Extract forename if it is more than 2 letters
I have to get the forename from c.forename if the c.known_as column is null or blank.
I achieved this with a CASE WHEN statement:
CASE
WHEN IND.KNOWN_AS IS NULL OR ind.KNOWN_AS=''
THEN ind.FORENAMES
ELSE ind.KNOWN_AS
END AS 'Known As'
My issue is that in the forename column I have names like Jhon Smith, where I would like to extract only Jhon; below is an example of what I want to achieve.
Desired output    c.forename
John              Mr John
Jhon              Jhon Smith
blank             Jo
blank             J
So, basically it should only take the forename, skipping 'Mr'; second, it should only take a forename which has more than 2 characters.
My current query is:
Select ind.FORENAMES,
ind.KNOWN_AS,
case when (known_as is null or known_as = '' ) and charindex(' ', forenames) > 2
then substring(forenames, 1, charindex(' ', forenames) - 1) end as FORENAMES2,
output
from individual ind
join member m on m.individual_ref=ind.individual_ref
and m.MEMBERSHIP_NO in ('001','002','003','004','005','006','007')
where m.member_status=33
SQL is not really suited for this type of processing.
Where is the string Smith coming from? To me, it's coming out of nowhere.
Charindex would work here, but still breaks for users with short names. Is that what you're looking for?
@OwlsSleeping We have a table which has columns like Known_as, Forename, Surname. We want to bring back everyone, but the known_as field shouldn't be blank; if it's blank, replace it with the value from the forename column. But the condition is: take the value from the forename column only if it is more than 1 character, and take only the first word, e.g. Jhon Smith = Jhon
What would be your desired output for Jo Smith?
You could use following case when statement to verify your conditions:
For SQL Server:
case when (c.known_as is null or c.known_as = '' )
and charindex(' ', c.forename) > 3 then substring(c.forename, 1, charindex(' ', c.forename) - 1) end
For MySQL:
case when (c.known_as is null or c.known_as = '' )
and locate(' ', c.forename) > 3 then substring(c.forename, 1, locate(' ', c.forename) - 1) end
Little explanation: if the first name must be longer than 2 characters, that means that the first space must occur at index 4 or later. And that is what the condition is about: locate(' ', c.forename) > 3 or charindex(' ', c.forename) > 3
NOTE
You have to first strip out all occurrences of Mr, Mrs, Ms in the c.forename column, like this (syntax for MySQL and SQL Server):
replace(replace(replace(c.forename, 'Mrs ', ''), 'Mr ', ''), 'Ms ', '')
You have to include it in your query like this:
Select FORENAMES,
KNOWN_AS,
case when (known_as is null or known_as = '' ) and charindex(' ', FORENAMES2) > 3
then substring(FORENAMES2, 1, charindex(' ', FORENAMES2) - 1) end as FORENAMES2,
output
from (
Select ind.FORENAMES,
ind.KNOWN_AS,
replace(replace(replace(ind.FORENAMES, 'Mrs ', ''), 'Mr ', ''), 'Ms ', '') FORENAMES2,
output
from individual ind
join member m on m.individual_ref = ind.individual_ref
where m.member_status=33
and m.MEMBERSHIP_NO in ('001','002','003','004','005','006','007')
) sub
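Since the thread goes back and forth over what the CASE should return, here is a minimal Python sketch of the combined logic. The helper name is hypothetical; the title-stripping step comes from the NOTE above, and the final fallback to the whole forename mirrors the asker's own "else forename" fix mentioned in the comments, not this answer's SQL.

```python
def pick_known_as(known_as, forename):
    """Hypothetical helper mirroring the SQL CASE plus the NOTE's
    nested REPLACEs. Rows with too-short forenames are assumed to be
    filtered out separately in the WHERE clause, as the asker did."""
    if known_as:                             # known_as already populated
        return known_as
    for title in ("Mrs ", "Mr ", "Ms "):     # the NOTE's REPLACE chain
        forename = forename.replace(title, "")
    space = forename.find(" ")               # 0-based; -1 when no space
    if space > 2:                            # first word longer than 2 chars
        return forename[:space]
    return forename                          # the "else forename" fallback

print(pick_known_as("", "Jhon Smith"))   # Jhon
print(pick_known_as("", "Mr John"))      # John
```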
This will get very ugly, very fast (no criticism of the answer, more of the data). What other titles can be present, Dr? Drs? Ig? Prof? Miss? etc etc etc. And what if initials are also included, what to do with initials MR? This is a minefield.
@HoneyBadger It is almost impossible to inlucde all this considerations in SQL... but you are completely right and let OP decide, if my efforts help him and solve his problwm :) I based my answer on presented data :)
@MichałTurczyn it didn't work, brother; it's returning NULL for Mr John
@Biswa You also have to apply the inner query with REPLACE, as I mentioned
@MichałTurczyn here is the query Select ind.FORENAMES,ind.KNOWN_AS,case when (known_as is null or known_as = '' )
and charindex(' ', forenames) > 2 then substring(forenames, 1, charindex(' ', forenames) - 1) end as Michal_turczyn output
from individual ind
join member m on m.individual_ref=ind.individual_ref and m.MEMBERSHIP_NO in ('001','002','003','004','005','006','007')
where m.member_status=33. With this I'm not getting the person's name for 'Mr John'; it returns null
I solved it using else forename and adding len(forename)>'1' in the where clause
@Biswa "len(forename)>'1' in where clause" doesn't match your sample data.
That will result in a skipped row in the case where forename='J' rather than a blank result. Sure it's what you want?
@OwlsSleeping Yes, dear friend. I wanted to skip that record, as I don't want known_as to be updated as J. But now we are not going to use the script to get correct data; we will do some data cleansing activity to correct those known_as fields, so that the known_as column is populated with correct records. Thanks for your help, it was really helpful, my friend.
Try this:
DECLARE @DataSource TABLE
(
[name] VARCHAR(32)
);
INSERT INTO @DataSource ([name])
VALUES (' Mr John ')
,('Jhon Smith')
,(' Jo ')
,(' J ');
WITH SanitizeDataSoruce ([name], [name_reversed]) AS
(
SELECT LTRIM(RTRIM([name]))
,REVERSE(LTRIM(RTRIM([name])))
FROM @DataSource
)
SELECT [name]
,CASE
WHEN CHARINDEX(' ', [name]) > 1 THEN REVERSE(SUBSTRING([name_reversed], 0, CHARINDEX(' ', [name_reversed])))
ELSE ''
END
FROM SanitizeDataSoruce;
Thanks @gotqn, it works like a charm, but the problem is I have a data set which has 30000 rows, and I need to return known_as for all 30000; in some cases we have known_as populated. I only need the case statement to change known_as to forename when the known_as column is null
@Biswa Then instead ELSE '' use ELSE known_as? Or even better - check if know_as is NULL, if yes, calculated it, if not, return it.
I solved it using else forename and adding len(forename)>'1' in the where clause.
Change id of clone children in jQuery
I have a table with a hidden line that I clone in order to add new lines dynamically.
var $clone = $('#table-invoicing').find('tr.hide').clone(true).removeClass('hide');
$('#table-invoicing').find('table').append($clone);
Each line has an id and a data-type.
The hidden line has an id ending in 99.
I would like to change this id when I clone the hidden line.
I found similar topics, but for some reason I can't manage to apply them in my script. When I clone the line, there are then 2 elements with the same id, so a selector by id won't work.
I tried :
$clone.$('#invoicing-row-descr-99').attr("id", "newID");
but then it tells me that $clone is not a function.
Any idea ?
$clone.find('#invoicing-row-descr-99').attr("id", "newID");
$clone.$('#invoicing-row-descr-99').attr("id", "newID");
but then it tells me that $clone is not a function.
Because $clone is a jQuery object, not a function. Just use attr or prop on the cloned element:
$clone.attr("id", "newID");//change cloned element id
As per your comment, use like this:
$clone.find('your_element').attr("id", "newID");
Thanks, but the problem here is that I don't need to modify the tr id, but the td inside the tr. And there are 4 td inside.
Sorry, but I'm not getting you.
As you can see, what I clone is the row by selecting tr :
$('#table-invoicing').find('tr.hide')
But the id I would like to change is the id of the cloned td inside this cloned tr. Sorry if it was unclear in my question
ah now it works!! thanks a lot!
If you have a minute, can you explain me why I can't use prop on find ?
sorry? you can use find. not?
oh and one more thing... sorry but still related to the issue.
I try to change the data of the td the same way I change the id :
$clone.find('[data-date="99"]').data("date", idInsert);
But unfortunately that doesn't work :/ I guess it is a selector issue once again :/
yes I can use find, it works perfect, just wondering why using attr and not prop :)
you may use any. But want to know difference? Then look at this
.prop() is a good practice in current versions of jQuery.
$clone.prop("id", "yourId");
You'll need to set it before you append the clone.
Thanks, I'll use prop!
@VincentTeyssier here is a jsFiddle version if you need more help https://jsfiddle.net/msy9dgkg/
Thanks a lot. Actually, I know how to clone and use prop, but I explained my issue not very well. What I would like to change is the id of the cloned td elements that are inside the cloned tr.
how to extract part of a filename before '.' or before extension
I have files in format below:
abc_asdfjhdsf_dfksfj_12345678.csv
hjjhk_hkjh_asd_asd_sd_98765498.csv
hgh_nn_25342134.exe
I want to get the value before the . and after the last _.
The result would look like:
abc_asdfjhdsf_dfksfj_12345678.csv ----> 12345678
hjjhk_hkjh_asd_asd_sd_98765498.csv ----> 98765498
hgh_nn_25342134.exe ----> 25342134
You could use awk also,
$ echo "abc_asdfjhdsf_dfksfj_12345678.csv" | awk -F'[_.]' '{print $4}'
12345678
It sets the field separator to _ or .. Then printing column number 4 will give you the desired result (you may also prefer $(NF-1), the next-to-last field, instead of $4).
Why two awk commands? Just set the field separator to underscore or dot and print column no. 4.
@AvinashRaj If we set FS as "_", field no. 4 is "12345678.csv", which has .csv, which is not required. If we set FS as "." and print the 4th field, it will print nothing, because with FS "." there are only 2 fields: abc_asdfjhdsf_dfksfj_12345678 and csv.
It's or. Add this to your command instead of your FS value: -F followed by a character class; inside it put underscore and dot, then close the character class. That's all. Sorry, commented through my mobile.
edited your answer.
If I have the same question, but I have thousands of files, how do I read in each file sequentially and have the portion of the filename extracted? Do I use a loop? If so, how?
If you have the file name in a POSIX shell variable:
file=abc_asdfjhdsf_dfksfj_12345678.csv
n=${file%.*}    # n becomes abc_asdfjhdsf_dfksfj_12345678
n=${file##*_}   # n becomes 12345678.csv
Chain the two to get just the number:
n=${file%.*}; n=${n##*_}   # n becomes 12345678
By explanation:
${variable%pattern} is like $variable, minus the shortest match of pattern from the end;
${variable##pattern} is like $variable, minus the longest match of pattern from the front.
See a reference like this one for more on parameter expansion.
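If you need the same two-step strip outside the shell, it maps directly onto rsplit in Python (a sketch for illustration, not part of the original answer):

```python
def number_part(filename):
    # drop everything after the last '.' (the extension),
    # then keep everything after the last '_'
    stem = filename.rsplit(".", 1)[0]
    return stem.rsplit("_", 1)[-1]

for name in ("abc_asdfjhdsf_dfksfj_12345678.csv",
             "hjjhk_hkjh_asd_asd_sd_98765498.csv",
             "hgh_nn_25342134.exe"):
    print(number_part(name))   # 12345678, 98765498, 25342134
```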
If the list of file names is on a text stream with one filename per line:
sed -n 's/.*_\(.*\)\..*/\1/p'
You can use GNU grep:
$ echo abc_asdfjhdsf_dfksfj_12345678.csv | grep -oP '(?<=_)\d+(?=\.)'
12345678
Explanation
(?<=) is lookbehind, (?<=_) matches an underscore _ before pattern.
\d+ matches one or more number.
(?=) is lookahead, (?=\.) matches a dot . after pattern.
The whole regex means match all things between _ and .
Does the dot represent a literal dot?
@AvinashRaj, no, . matches any character there. So that code matches any sequence of digits following an underscore that is followed by at least one character, so it's wrong. On _12_23, it would output 12 and 2.
@StéphaneChazelas: Oh, my mistake, fixed it. I used this for test case and not check the doc, serious wrong here. Thanks.
@stephane yep. That's why I commented.
If it has to be after the last _ and before the last ., that would rather be grep -Po '.*_\K.*(?=\.)'
@StéphaneChazelas: Yeap, maybe TIMTOWTDI :)
Simply:
a=hjjhk_hkjh_asd_asd_sd_98765498.csv
pos1=${a%_*}
pos2=${a%.*}
echo ${a:${#pos1}+1:${#pos2}-${#pos1}-1}
get the offset of last _ to pos1
get the offset of last . to pos2
substring from _ offset to . offset
You can get the same using awk:
awk -F"." '{print $1}' | awk -F"_" '{print $NF}'
from your example
echo "abc_asdfjhdsf_dfksfj_12345678.csv" | awk -F"." '{print $1}' | awk -F"_" '{print $NF}'
12345678
echo "hjjhk_hkjh_asd_asd_sd_98765498.csv" | awk -F"." '{print $1}' | awk -F"_" '{print $NF}'
98765498
echo "hgh_nn_25342134.exe" | awk -F"." '{print $1}' | awk -F"_" '{print $NF}'
25342134
Hi upkar, welcome to unix.SE. Your answer is much more readable when you take advantage of Stack Exchange's formatting markup. I've edited your post to insert the markup. You can click edit yourself to see how the small changes I made make it much clearer. See the markup help for more information.
JDBC Code Change From SQL Server to Oracle
In the JDBC code, I have the following that is working with SQL Server:
CallableStatement stmt = connection.prepareCall("{ call getName() }");
ResultSet rs = stmt.executeQuery();
if(rs != null)
{
while(rs.next())
{
//do something with rs.getString("name")
}
}
Multiple rows are returned for the above situation.
I understand that the use of a cursor is required to loop through the table in Oracle, but is there any way to keep the above code the same and accomplish the same thing?
Sample PL/SQL code would be much appreciated.
Thanks in advance.
You could implement getName() as a pipelined function:
CREATE OR REPLACE TYPE name_record AS OBJECT ( name VARCHAR2(100) );
/
CREATE OR REPLACE TYPE name_table AS TABLE OF name_record;
/
CREATE OR REPLACE FUNCTION getName RETURN name_table PIPELINED
AS
n name_record;
BEGIN
-- I have no idea what you're doing here to generate your list of names, so
-- I'll pretend it's a simple query
FOR i IN (SELECT name FROM someTable) LOOP
n := name_record( i.name );
PIPE ROW(n);
END LOOP;
RETURN;
END;
/
You would need to change the actual query in Java to SELECT name FROM TABLE(getName()).
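If the pipelined shape is unfamiliar: it behaves much like a generator, in that each PIPE ROW hands a row back to the consumer as it is produced instead of materialising the whole result set first. A loose Python analogy (purely illustrative, not part of the Oracle solution):

```python
def get_names(names):
    """Loose analogy to the pipelined function: each yield plays the
    role of PIPE ROW, emitting one single-column row at a time."""
    for name in names:
        yield (name,)   # one 'row' per iteration

# the caller consumes rows as they arrive, like SELECT ... FROM TABLE(...)
print(list(get_names(["alice", "bob"])))
```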
Is there a way to approach the PL/SQL code such that nothing needs to be changed in Java?
I don't know of a way to write an Oracle function that will work with your existing Java code. Is the issue that you want to maintain a single Java codebase that will work with both RDBMS? If so, the only thing you need to vary between the two is a string literal, which you could store externally in a resource file, and load a different resource depending on which database you are running against.
This is straight JDBC, so it'll work with any database that has a valid JDBC driver.
It assumes, of course, that the stored proc exists in both and that you aren't using any non-standard, vendor-proprietary code in your class.
I don't have the stored procedure in Oracle yet. Would returning a SYS_REFCURSOR with a function suffice for the above code? Sample PL/SQL code would be nice.
A ResultSet IS a cursor. I'm sure your example is a simple one, but I didn't think the JDBC API had to know whether it was Oracle underneath. That's the whole point of JDBC. If it can't do it, I'd say either it's poorly designed or you're misunderstanding something.
The part that is RDBMS-specific, I think, is the "call getName()". I'm guessing that in SQL Server, getName() is a function that returns multiple rows, and that as a convention it implicitly treats these as queries. Oracle does not allow this. The closest construct I know of would be "call ? = getName()", which would require more involved changes to the Java code than the other answer I submitted.
Something feels wrong here. The CallableStatement should be abstracting all this for you. You should not want to return a raw cursor. You should be iterating through the ResultSet that's returned, package the results into an object or collection, and closing the ResultSet and Statement in method scope. Anything else is asking for resource leaks.
CFQUERYPARAM not working in ColdFusion 10
I am passing three integers into a function in a CFC, like this:
<cfscript>
Q = TOPBIKES.GetTopBikes(127, 10, 11);
writeDump(Q);
</cfscript>
The CFC uses these integers to run a query like this:
<!--- GET TOP BIKES --->
<cffunction name="GetTopBikes">
<cfargument name="FeatureID" required="true">
<cfargument name="MinWins" required="true">
<cfargument name="RecordsToReturn" required="true">
<cfscript>
LOCAL.FeatureID = ARGUMENTS.FeatureID;
LOCAL.MinWins = ARGUMENTS.MinWins;
LOCAL.RecordsToReturn = ARGUMENTS.RecordsToReturn;
</cfscript>
<!--- RUN QUERY --->
<cfquery name="Q">
SELECT TOP #LOCAL.RecordsToReturn#
B.BikeID,
B.BikeName,
BS.PCTWins
FROM Bikes B
LEFT JOIN BikeScores BS
ON B.BikeID = BS.BikeID
WHERE BS.Wins > <cfqueryparam cfsqltype="cf_sql_integer" value="#LOCAL.MinWins#">
AND B.BikeID IN ( SELECT BikeID
FROM Bikes_Features
WHERE FeatureID = <cfqueryparam cfsqltype="cf_sql_integer" value="#LOCAL.FeatureID#">
)
ORDER BY BS.PCTWins desc
</cfquery>
<cfreturn Q>
</cffunction>
The problem is that I cannot get cfqueryparam to work in the TOP part of the SQL statement.
These work:
SELECT TOP 11
SELECT TOP #LOCAL.RecordsToReturn#
This does not work:
SELECT TOP <cfqueryparam
cfsqltype="cf_sql_integer"
value="#LOCAL.RecordsToReturn#">
I can, however, use cfqueryparam anywhere else in the query. I know it's an integer, and it works when used elsewhere, such as in place of the FeatureID.
Any clue as to why CFQUERYPARAM is not working in TOP?
FWIW, you should add type="numeric" to your CFARGUMENTs since you can't use CFQUERYPARAM.
SELECT TOP #val(LOCAL.RecordsToReturn)#
Some parts of a SQL statement cannot use cfqueryparam, such as TOP or the table name after FROM.
I can't believe I didn't know that.
Just to prove your answer is correct, here's some documentation from Mr. Pete Frietag ~ http://www.petefreitag.com/item/677.cfm
The thing to remember - and Peter's notes that you link to don't explicitly say this, Evik - is that there are two parts to an SQL statement: the SQL "commands", and the data being used by the SQL commands. Only the data can be parameterised. If you think about it, that makes sense: the SQL commands themselves are not "parameters".
One can think in a CF context here, for an analogy. Consider this statement:
<cfset variables.foo = "bar">
One could "parameterise" this with a passed-in value:
<cfset variables.foo = URL.foo>
(Where URL.foo is a parameter in this example)
But one could not expect to do this:
<#URL.tag# variables.foo = "bar">
(this is a very contrived example, but it demonstrates the point).
I think as far as the SQL in a <cfquery> goes, the waters are muddied somewhat because the whole thing is just a string in CF, and any part of the string can be swapped out with a variable (column names, boolean operators, entire clauses, etc). So by extension one might think any variable can be replaced with a <cfqueryparam>. As we know now, this is not the case: whilst it's all just a string as far as CF is concerned, it's considered code by the DB, so it needs to conform to the DB's coding syntax.
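The same commands-versus-data split exists in every parameterised query API, not just <cfqueryparam>. As a neutral illustration (using Python's sqlite3 driver purely as an example; the table and values are made up for this sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bikes (id INTEGER, wins INTEGER)")
conn.executemany("INSERT INTO bikes VALUES (?, ?)",
                 [(1, 5), (2, 15), (3, 25)])

min_wins = 10          # data -> can be a bound parameter
order_col = "wins"     # part of the SQL command -> cannot be bound

# the data value is parameterised; the column name must be
# validated/whitelisted and spliced into the string, just like TOP
rows = conn.execute(
    f"SELECT id FROM bikes WHERE wins > ? ORDER BY {order_col}",
    (min_wins,),
).fetchall()
print(rows)   # [(2,), (3,)]
```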
Does this clarify the situation more?
Great explanation. I totally understand it. Thanks for taking the time to write this.
New syntax for MS SQL (from 2005): select top(10) ...
With the parenthesised form, the 10 can be a cfqueryparam.