qid
int64
1
74.7M
question
stringlengths
15
58.3k
date
stringlengths
10
10
metadata
list
response_j
stringlengths
4
30.2k
response_k
stringlengths
11
36.5k
33,551,559
I want to ignore the punctuation. So, I'm trying to make a program that counts all the appearances of every word in my text, but without taking the punctuation marks into consideration. My program is: ``` static void Main(string[] args) { string text = "This my world. World, world,THIS WORLD ! Is this - the world ."; IDictionary<string, int> wordsCount = new SortedDictionary<string, int>(); text=text.ToLower(); text = text.replaceAll("[^0-9a-zA-Z\text]", "X"); string[] words = text.Split(' ',',','-','!','.'); foreach (string word in words) { int count = 1; if (wordsCount.ContainsKey(word)) count = wordsCount[word] + 1; wordsCount[word] = count; } var items = from pair in wordsCount orderby pair.Value ascending select pair; foreach (var p in items) { Console.WriteLine("{0} -> {1}", p.Key, p.Value); } } ``` The output is: ``` is->1 my->1 the->1 this->3 world->5 (here is nothing) -> 8 ``` How can I remove the punctuation here?
2015/11/05
[ "https://Stackoverflow.com/questions/33551559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Pass `StringSplitOptions.RemoveEmptyEntries` so the empty strings produced between adjacent delimiters are dropped: ``` string[] words = text.Split(new char[]{' ',',','-','!','.'}, StringSplitOptions.RemoveEmptyEntries); ```
... you can go with the making people cry version ... ``` "This my world. World, world,THIS WORLD ! Is this - the world ." .ToLower() .Split(" ,-!.".ToCharArray(), StringSplitOptions.RemoveEmptyEntries) .GroupBy(i => i) .Select(i=>new{Word=i.Key, Count = i.Count()}) .OrderBy(k => k.Count) .ToList() .ForEach(Console.WriteLine); ``` .. output ``` { Word = my, Count = 1 } { Word = is, Count = 1 } { Word = the, Count = 1 } { Word = this, Count = 3 } { Word = world, Count = 5 } ```
33,551,559
I want to ignore the punctuation. So, I'm trying to make a program that counts all the appearances of every word in my text, but without taking the punctuation marks into consideration. My program is: ``` static void Main(string[] args) { string text = "This my world. World, world,THIS WORLD ! Is this - the world ."; IDictionary<string, int> wordsCount = new SortedDictionary<string, int>(); text=text.ToLower(); text = text.replaceAll("[^0-9a-zA-Z\text]", "X"); string[] words = text.Split(' ',',','-','!','.'); foreach (string word in words) { int count = 1; if (wordsCount.ContainsKey(word)) count = wordsCount[word] + 1; wordsCount[word] = count; } var items = from pair in wordsCount orderby pair.Value ascending select pair; foreach (var p in items) { Console.WriteLine("{0} -> {1}", p.Key, p.Value); } } ``` The output is: ``` is->1 my->1 the->1 this->3 world->5 (here is nothing) -> 8 ``` How can I remove the punctuation here?
2015/11/05
[ "https://Stackoverflow.com/questions/33551559", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You should try specifying `StringSplitOptions.RemoveEmptyEntries`: ``` string[] words = text.Split(" ,-!.".ToCharArray(), StringSplitOptions.RemoveEmptyEntries); ``` Note that instead of manually creating a `char[]` with all the punctuation characters, you may create a `string` and call `ToCharArray()` to get the array of characters. I find it easier to read and to modify later on.
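For comparison, the same split-and-count idea can be sketched in Python (illustration only, not C#); the regex split plus `filter(None, ...)` plays the role of the delimiter `char[]` with `RemoveEmptyEntries`, and `Counter` plays the role of the `SortedDictionary`:

```python
import re
from collections import Counter

text = "This my world. World, world,THIS WORLD ! Is this - the world ."
# Split on runs of non-alphanumeric characters; filter(None, ...) drops the
# empty strings produced by adjacent delimiters (the RemoveEmptyEntries analog).
words = filter(None, re.split(r"[^0-9a-zA-Z]+", text.lower()))
counts = Counter(words)
print(counts)  # this: 3, world: 5, my/is/the: 1 each -- no empty-string bucket
```

Splitting on *runs* of delimiters avoids the empty tokens in the first place, which is exactly the effect `RemoveEmptyEntries` achieves after the fact.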
... you can go with the making people cry version ... ``` "This my world. World, world,THIS WORLD ! Is this - the world ." .ToLower() .Split(" ,-!.".ToCharArray(), StringSplitOptions.RemoveEmptyEntries) .GroupBy(i => i) .Select(i=>new{Word=i.Key, Count = i.Count()}) .OrderBy(k => k.Count) .ToList() .ForEach(Console.WriteLine); ``` .. output ``` { Word = my, Count = 1 } { Word = is, Count = 1 } { Word = the, Count = 1 } { Word = this, Count = 3 } { Word = world, Count = 5 } ```
105,277
A sibling of mine asked me to open a bank account for her in my name. She wants to fill it with her savings and use it on a day-to-day basis with a debit card. I have a strong feeling this could backfire on me. What risks could I run into if I did this? --- I am looking for possible risks as if the edited-out reason did not exist. The question is simply "Possible risks of opening a bank account for a family member in one's own name". --- As I stated, I had a bad feeling about it and declined. Now I am just looking for possible risks for argument's sake.
2019/02/13
[ "https://money.stackexchange.com/questions/105277", "https://money.stackexchange.com", "https://money.stackexchange.com/users/-1/" ]
There are several risks to you. 1. Depending on the details of the arrangement, you might be liable for income tax on the money apparently "given" to you, and your sibling might be liable for income tax on the money you apparently "give" to them. 2. If the bank finds out that you have given her "your" debit card to use, that is against the Ts and Cs of the account, and you could have this account, as well as any other accounts you hold with this bank, closed. 3. If your sibling runs up an overdraft on the account, you would be liable for paying it back. 4. I know you edited out this part, but if you were doing this to assist with a fraud then you would also be an accomplice, which could result in criminal and/or civil charges against you.
You edited out the part about you assisting your sister in hiding money from the court. Read your original post out loud to yourself three times. Now what do you think? Of course it is a bad idea, and you should say "sorry sis, I am not going to help you break the law". This would hold true even if you were beyond the jurisdiction of the EU. **Edit:** So let's say that your sister did this for some silly reason, like a fear of banks. The typical risks exist for both her and you. For her, there is nothing stopping you from closing the account and taking all the money. You could siphon money off over time, or just take it all once the sum is sufficiently large. For you, if she has checks, she could write a bunch of checks against the account, and you would be liable for the overdraft fees and for meeting the obligations of the checks. The bottom line is that there is no reason to do this unless there is some kind of fraud going on, which is why editing out the problematic part does not really change the nature of the question.
105,277
A sibling of mine asked me to open a bank account for her in my name. She wants to fill it with her savings and use it on a day-to-day basis with a debit card. I have a strong feeling this could backfire on me. What risks could I run into if I did this? --- I am looking for possible risks as if the edited-out reason did not exist. The question is simply "Possible risks of opening a bank account for a family member in one's own name". --- As I stated, I had a bad feeling about it and declined. Now I am just looking for possible risks for argument's sake.
2019/02/13
[ "https://money.stackexchange.com/questions/105277", "https://money.stackexchange.com", "https://money.stackexchange.com/users/-1/" ]
You edited out the part about you assisting your sister in hiding money from the court. Read your original post out loud to yourself three times. Now what do you think? Of course it is a bad idea, and you should say "sorry sis, I am not going to help you break the law". This would hold true even if you were beyond the jurisdiction of the EU. **Edit:** So let's say that your sister did this for some silly reason, like a fear of banks. The typical risks exist for both her and you. For her, there is nothing stopping you from closing the account and taking all the money. You could siphon money off over time, or just take it all once the sum is sufficiently large. For you, if she has checks, she could write a bunch of checks against the account, and you would be liable for the overdraft fees and for meeting the obligations of the checks. The bottom line is that there is no reason to do this unless there is some kind of fraud going on, which is why editing out the problematic part does not really change the nature of the question.
Even if this was not fraudulent for external (a priori) reasons, the mere act of opening an account under your name in order to hide the true beneficiary is already money laundering and therefore fraud in its own right. So, even if it were not explicitly against the bank's T&Cs (as Vicky answered), they could still close all your accounts. And that's not even the worst part. Money laundering carries jail penalties in all EU countries.
105,277
A sibling of mine asked me to open a bank account for her in my name. She wants to fill it with her savings and use it on a day-to-day basis with a debit card. I have a strong feeling this could backfire on me. What risks could I run into if I did this? --- I am looking for possible risks as if the edited-out reason did not exist. The question is simply "Possible risks of opening a bank account for a family member in one's own name". --- As I stated, I had a bad feeling about it and declined. Now I am just looking for possible risks for argument's sake.
2019/02/13
[ "https://money.stackexchange.com/questions/105277", "https://money.stackexchange.com", "https://money.stackexchange.com/users/-1/" ]
There are several risks to you. 1. Depending on the details of the arrangement, you might be liable for income tax on the money apparently "given" to you, and your sibling might be liable for income tax on the money you apparently "give" to them. 2. If the bank finds out that you have given her "your" debit card to use, that is against the Ts and Cs of the account, and you could have this account, as well as any other accounts you hold with this bank, closed. 3. If your sibling runs up an overdraft on the account, you would be liable for paying it back. 4. I know you edited out this part, but if you were doing this to assist with a fraud then you would also be an accomplice, which could result in criminal and/or civil charges against you.
Even if this was not fraudulent for external (a priori) reasons, the mere act of opening an account under your name in order to hide the true beneficiary is already money laundering and therefore fraud in its own right. So, even if it were not explicitly against the bank's T&Cs (as Vicky answered), they could still close all your accounts. And that's not even the worst part. Money laundering carries jail penalties in all EU countries.
9,203,191
I've recently read the news on <http://allseeing-i.com> that ASIHTTP is being discontinued. I have much respect for the makers of the library. However, I am now looking for a substitute that supports queued, multithreaded downloads on iOS and that also supports a progress bar with appropriate information. Is there any (hopefully lightweight) library that is in an active development lifecycle? ARC support would also be much appreciated. Many thanks for your thoughts.
2012/02/08
[ "https://Stackoverflow.com/questions/9203191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/624459/" ]
[AFNetworking](https://github.com/AFNetworking/AFNetworking) is being lauded as a successor to ASIHTTPRequest. It is based on operation queues, and in my experience it works reasonably well. You could probably do what you want to do without a third-party library, but if you want to make it a little easier on yourself, a combination of `AFURLConnectionOperation` subclasses and the `AFHTTPClient` class will do nicely.
I wrote one recently. It's fully ARC compliant and fairly lightweight: <https://github.com/nicklockwood/RequestQueue> As of version 1.2 it supports download and upload progress bars (see the included ProgressLoader example). Rather than make a monolithic framework like ASI, I've tried to keep this as simple as possible. That means you are free to mix and match it with other libraries for stuff like POST parameter generation, JSON parsing, etc.
9,203,191
I've recently read the news on <http://allseeing-i.com> that ASIHTTP is being discontinued. I have much respect for the makers of the library. However, I am now looking for a substitute that supports queued, multithreaded downloads on iOS and that also supports a progress bar with appropriate information. Is there any (hopefully lightweight) library that is in an active development lifecycle? ARC support would also be much appreciated. Many thanks for your thoughts.
2012/02/08
[ "https://Stackoverflow.com/questions/9203191", "https://Stackoverflow.com", "https://Stackoverflow.com/users/624459/" ]
You may want to look at [MKNetworkKit](https://github.com/MugunthKumar/MKNetworkKit). In its words: > > MKNetworkKit's goal was to make it as feature rich as ASIHTTPRequest yet simple and elegant to use like AFNetworking > > > It has a number of very nice features for queuing and managing offline situations.
I wrote one recently. It's fully ARC compliant and fairly lightweight: <https://github.com/nicklockwood/RequestQueue> As of version 1.2 it supports download and upload progress bars (see the included ProgressLoader example). Rather than make a monolithic framework like ASI, I've tried to keep this as simple as possible. That means you are free to mix and match it with other libraries for stuff like POST parameter generation, JSON parsing, etc.
3,654,960
$$\frac{2bc\cos A + ac\cos B +2ab \cos C}{abc}= \frac{a^2+b^2}{abc}$$ $$b^2+c^2-a^2+a^2+b^2-c^2+ac\cos B =a^2+b^2$$ $$ac\cos B = a^2-b^2$$ How do I find angle $A$ from here?
2020/05/02
[ "https://math.stackexchange.com/questions/3654960", "https://math.stackexchange.com", "https://math.stackexchange.com/users/690228/" ]
By the Cosine Rule, $2ac \cos B = a^2 + c^2 - b^2$, so from your equation above ($ac \cos B = a^2-b^2$), we obtain \begin{equation\*}a^2 + c^2 - b^2 = 2(a^2-b^2)\end{equation\*} and thus \begin{equation\*}c^2 = a^2-b^2\end{equation\*} This can be rewritten as $b^2 + c^2 - a^2 = 0$, so by the Cosine Rule again, $\cos A = 0$. As $A$ is an angle in a triangle, we must have $A = 90^{\circ}$.
Use [Law of sines](https://en.wikipedia.org/wiki/Law_of_sines) then [Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $](https://math.stackexchange.com/questions/175143/prove-sinab-sina-b-sin2a-sin2b) Observe that as $0<A,B,C<\pi, \sin A,\sin B,\sin C>0$ and finally use $\sin(A+B)=\cdots=\sin C$ to find $$\sin B\cos A=0\implies?$$
4,537,130
My attempt so far: Base case: $2^{3!}>3^{3}$ $2^6 > 27$ $64> 27$ Then going for $2^{(n+1)!} > (n+1)^{(n+1)}$ I get $(2^{(n!)})^{(n+1)} > (n+1)^{(n+1)}$ Since $n+1$ is positive, $2^{n!} > n+1$ And I do not know where to go from here. I am unsure if I am following the correct path. Am I approaching this in the correct way?
2022/09/23
[ "https://math.stackexchange.com/questions/4537130", "https://math.stackexchange.com", "https://math.stackexchange.com/users/1098962/" ]
We're trying to prove $2^{n!} > n^n$. You've shown that the base case holds - great. In an induction proof you assume that the statement holds for an arbitrary $k$, and then you show it holds for $k+1$ *dependent* on it holding for $k$. $2^{(k+1)!} = 2^{(k+1)k!} = (2^{k!})^{k+1}$ Now remember that we're assuming the statement holds for $k$, i.e. we assume $2^{k!} > k^k$. We also clearly have $k^k > k+1$ for $k \ge 3$. Raising each of these inequalities to the $(k+1)$-th power and chaining them, we get $$(2^{k!})^{k+1} > (k^k)^{k+1} > (k+1)^{k+1}$$ and so $$2^{(k+1)!} > (k+1)^{k+1}$$ which concludes the inductive step. Essentially we prove that if the statement is true for a certain $k$, then it is also true for $k+1$; since you yourself showed that it is true for the lowest $n$ in the set you're considering, it must be true for every integer above that $n$.
Essentially the solution above is right; I'll give the same argument written out with the standard structure. **BC (Base case)** You've already proven this. **IH (Induction Hypothesis)** Suppose for some integer $m \ge 3$ that $$ 2^{m!} > m^{m}$$ Notice now that $$ 2^{m!} > m^{m} > m+m > m+1 $$ and thus $2^{m!} > m+1$. Finally, raising both sides to the power $m+1$, we get $$ 2^{(m+1)!} > (m+1)^{m+1} $$ which holds for every integer $m \ge 3$, completing the induction. $\blacksquare$
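Both arguments can also be sanity-checked numerically; a short Python loop (illustration only, not a substitute for the induction) confirms $2^{n!} > n^n$ for the first few values:

```python
from math import factorial

# Check 2**(n!) > n**n for n = 3..8. Python integers are
# arbitrary-precision, so the comparison is exact even for 2**40320.
results = [2 ** factorial(n) > n ** n for n in range(3, 9)]
print(results)
```

The left side grows like $2^{n!}$ while the right grows like $n^n = 2^{n \log_2 n}$, and $n!$ dominates $n \log_2 n$, which is why the gap widens so quickly.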
2,761
I know that as a tourist, some department store will do tax refund for foreigners when you show them your passport. But this time I will be going as with a Working Holiday Visa, not visitor visa. Will I still be eligible to get the tax refund for foreigners?
2014/08/16
[ "https://expatriates.stackexchange.com/questions/2761", "https://expatriates.stackexchange.com", "https://expatriates.stackexchange.com/users/2255/" ]
As per the current tax free shopping regulations, visitors with a temporary stay status are eligible for tax-free shopping. * Japanese citizens are not eligible. * Not eligible if you are working in Japan. * Not eligible if staying in Japan more than six months. If you have a working holiday visa, I imagine you are working in Japan and are therefore ineligible to receive this tax refund. For full information on tax free shopping in Japan you can check out [enjoy.taxfree.jp](http://enjoy.taxfree.jp/)
No, tax-free shopping is only available to foreign citizens whose status of residence is "temporary visitor". Your status will not be "temporary visitor" (it will probably be "designated activities").
54,835,339
I am using Xamarin.Forms to create a data collection application. One particular feature of this app is to export the data via csv. When captured the data is written to a text file using the following method: ``` public void WriteToFile(string CompanyName, string Website, string FirsttName, string LastName, string JobTitle, string Phone, string Email, string Solution, string Notes, string ContactOwner, string EventName) { string path = Environment.GetFolderPath(Environment.SpecialFolder.Personal); string filename = System.IO.Path.Combine(path, "Prospects.txt"); string lineToBeAdded = string.Format("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10}", CompanyName, Website, FirsttName, LastName, JobTitle, Phone, Email, Solution, Notes, ContactOwner, EventName); File.AppendAllText(filename, lineToBeAdded + Environment.NewLine); } ``` My problem is that in the variable NOTES the user uses commas in their description which messes up the structure of the csv. How do I force ignore commas in this string?
2019/02/22
[ "https://Stackoverflow.com/questions/54835339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103719/" ]
To escape commas, wrap each column in double quotes; to escape double quotes, use two double quotes in place of each one: ``` string lineToBeAdded = string.Format("\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\",\"{5}\",\"{6}\",\"{7}\",\"{8}\",\"{9}\",\"{10}\"", CompanyName.Replace("\"","\"\""), Website.Replace("\"","\"\""), FirsttName.Replace("\"","\"\""), LastName.Replace("\"","\"\""), JobTitle.Replace("\"","\"\""), Phone.Replace("\"","\"\""), Email.Replace("\"","\"\""), Solution.Replace("\"","\"\""), Notes.Replace("\"","\"\""), ContactOwner.Replace("\"","\"\""), EventName.Replace("\"","\"\"")); ```
Also, you don't have to use commas; you can use a `|` or something else the user doesn't use. To make it even more friendly you can specify the delimiter in the first line; Excel supports that, I know. ``` sep=| header1|header2|header3 1,2|2,3|3,4 ```
54,835,339
I am using Xamarin.Forms to create a data collection application. One particular feature of this app is to export the data via csv. When captured the data is written to a text file using the following method: ``` public void WriteToFile(string CompanyName, string Website, string FirsttName, string LastName, string JobTitle, string Phone, string Email, string Solution, string Notes, string ContactOwner, string EventName) { string path = Environment.GetFolderPath(Environment.SpecialFolder.Personal); string filename = System.IO.Path.Combine(path, "Prospects.txt"); string lineToBeAdded = string.Format("{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10}", CompanyName, Website, FirsttName, LastName, JobTitle, Phone, Email, Solution, Notes, ContactOwner, EventName); File.AppendAllText(filename, lineToBeAdded + Environment.NewLine); } ``` My problem is that in the variable NOTES the user uses commas in their description which messes up the structure of the csv. How do I force ignore commas in this string?
2019/02/22
[ "https://Stackoverflow.com/questions/54835339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11103719/" ]
Solution -------- Create this function: ``` string GetCsvLine(params string[] fields) => string.Join(",", fields.Select(x => $"\"{x.Replace("\"", "\"\"")}\"")); ``` And call it like this, replacing the line in your example code that initializes "lineToBeAdded": ``` string lineToBeAdded = GetCsvLine(CompanyName, Website, FirsttName, LastName, JobTitle, Phone, Email, Solution, Notes, ContactOwner, EventName); ``` Explanation ----------- Refer [here - RFC4180](https://www.rfc-editor.org/rfc/rfc4180#section-2) - to the specification for CSV: Summarizing: * A field must be enclosed in double-quotes, if it contains a comma, a line break, or a double-quote. * Any double-quotes **within** a field must themselves be doubled. * *Bonus: using 'params' and LINQ allows you to call this method with a variable number of arguments, eliminating the need for a format string with the exact number of placeholders.*
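For comparison, Python's standard `csv` module applies the same RFC 4180 rules (quoting fields and doubling embedded quotes); this sketch is a cross-language illustration, not Xamarin-specific:

```python
import csv
import io

buf = io.StringIO()
# QUOTE_ALL quotes every field; embedded double-quotes are doubled by default.
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
# One field with a comma and one with an embedded double-quote -- the two
# cases that break a naive string.Format/Join.
writer.writerow(['Acme, Inc.', 'He said "hi"'])
line = buf.getvalue().strip()
print(line)  # "Acme, Inc.","He said ""hi"""
```

Whenever a CSV library exists for your platform, using it beats hand-rolling the quoting, since it also handles embedded newlines.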
Also, you don't have to use commas; you can use a `|` or something else the user doesn't use. To make it even more friendly you can specify the delimiter in the first line; Excel supports that, I know. ``` sep=| header1|header2|header3 1,2|2,3|3,4 ```
41,781,807
I am trying to get the counts of items in a database to confirm that data insertion is successful. 1. Get count before insert 2. Insert 3. Get count after insert 4. Console.log a summary Note: I know this can be implemented using some simple functions: ``` dbName.equal(insertSize, result.insertedCount) ``` However, I am new to javascript and I think I've come across a need to implement asynchronous callbacks, so I wanted to figure this out. **Insert Function** ``` var insertMany = function() { // get initial count var count1 = getDbCount(); // Insert new 'data' MongoClient.connect(url, function(err, db) { var col = db.collection(collectionName); col.insert(data, {w:1}, function(err,result) {}); db.close(); }); /** This needs to be implemented through next/callback after the insert operation **/ var count2 = getDbCount(); /** These final console logs should be executed after all other operations are completed **/ console.log('[Count] Start: ' + count1 + ' | End:' +count2); console.log('[Insert] Expected: ' + data.length + ' | Actual: ' + (count2 - count1)); }; ``` **Get DB Count Function** ``` var getDbCount = function() { MongoClient.connect(url, function(err, db) { if (err) console.log(err); var col = db.collection(collectionName); col.count({}, function(err, count) { if (err) console.log(err); db.close(); console.log('docs count: ' + count); // This log works fine }); }); return count; // this is returning as undefined since this is // executing before the count operation is completed }; ``` I am getting errors because the returns are occurring before the required operations have completed. Thanks for your help. 
--- **[EDIT] getCount Function with Promise** I have added the promise to the getCount function as a start: ``` var getCount = function() { var dbCount = 0; var promise = new Promise(function(resolve, reject) { MongoClient.connect(url, function(err, db) { if (err) { console.log('Unable to connect to server', err); } else { console.log('Database connection established:' + dbName); } // Get the collection var col = db.collection(collectionName); col.count({}, function(err, count) { if (err) console.log(err); db.close(); console.log('docs count: ' + count); resolve(null); dbCount = count; }); }); }); promise.then(function() { return dbCount; }); }; console.log(getCount()); ``` The output is still: undefined Database connection established:testdb docs count: 500 The `then(function() { return dbCount; })` code is still being executed before the promise (the `col.count()` work) completes, so `getCount()` returns undefined before the database operations are finished.
2017/01/21
[ "https://Stackoverflow.com/questions/41781807", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5109242/" ]
In general the problem you have is that you expect the value to be already there when your functions return. Well with async functions this is often not the case. There is a huge amount of information about async processing (not to be mistaken with parallel processing like in Java). I have created an example which should fit your situation: <https://jsfiddle.net/rh3gx76x/1/> ``` var dummyCount = 5 getCount = function() { return new Promise(function(resolve, reject) { setTimeout(function() { // this would be your db call, counting your documents resolve(dummyCount); // dummy for number of documents found }, 100 * Math.random()); }); }; insertMany = function() { return new Promise(function(resolve, reject) { setTimeout(function() { // this would be your db call, writing your documents dummyCount += 2; resolve(); }, 100 * Math.random()); }); }; runIt = function(callback) { var count1; getCount().then(function(count) { console.log("First callback with value ", count); count1 = count; insertMany().then(function() { getCount().then(function(count2){ console.log("Second callback with value ", count2); callback(null, count1, count2); }); }) }) } runIt(function(err, count1, count2) { console.log("count1: " + count1 + ", count2: " + count2); }); ``` One last thing. You might want to check out the "async" package. This helps a lot with those problems providing a lot of helper functions and control flow stuff.
The reason why the returns are occurring before the operations have completed is the `asynchronous` design of nodejs, inherited from javascript. The callback from `MongoClient.connect()` is left to do the operation while the program moves forward and encounters the `return`. You have not provided the workaround which landed you in the error statement, so I can't comment on what you've done wrong there. But, considering your first effort, the best approach to ensure that `MongoClient.connect()` completes before the return statement executes is using JavaScript's [promise](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise). Using promises will make your `MongoClient.connect()` execute and then pass control to the return statement. Your code will go something like: **Insert function**: ``` var insertMany = function() { // get initial count var count1 = getDbCount(); // Insert new 'data' var promise = new Promise( function(resolve,reject){ MongoClient.connect(url, function(err, db) { var col = db.collection(collectionName); col.insert(data, {w:1}, function(err,result) {}); db.close(); resolve(null); }); } ); promise.then(function(val){ var count2 = getDbCount(); /** These final console logs should be executed after all other operations are completed **/ console.log('[Count] Start: ' + count1 + ' | End:' +count2); console.log('[Insert] Expected: ' + data.length + ' | Actual: ' + (count2 - count1)); }); }; ``` Similarly, you can add a promise to the `getDbCount` function. The key points while executing a promise are: 1. `then()` contains the part of the function to be executed after the promise resolves. It is triggered by `resolve`, and the parameter passed is received in the `then` function's callback. 2. `catch()` contains the part of the function which is called when an error is encountered in the promise. It is triggered by `reject`, and the parameter passed is received in the `catch` function's callback.
**EDIT:** An alternative of promise is [async.js](https://caolan.github.io/async/docs.html), only differing in the fact that it is an external module, unlike promises.
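The count-insert-count sequencing being discussed can also be sketched in Python's `asyncio` (illustration of the control-flow idea only; an in-memory list stands in for the database):

```python
import asyncio
import random

async def get_count(db):
    # Simulate async I/O latency, like a driver round trip.
    await asyncio.sleep(random.random() * 0.01)
    return len(db)

async def insert_many(db, docs):
    await asyncio.sleep(random.random() * 0.01)
    db.extend(docs)

async def main():
    db = []
    before = await get_count(db)            # completes before we move on
    await insert_many(db, ["a", "b", "c"])  # insert finishes next
    after = await get_count(db)             # sees the inserted documents
    return before, after

before, after = asyncio.run(main())
print(before, after)
```

The `await` at each step is the equivalent of chaining `.then()` callbacks: the next operation only starts once the previous asynchronous result is actually available.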
13,160
I am filtering a VEP-annotated VCF, trying to keep just those variants classified as deleterious by `SIFT` and as damaging (probably or possibly included) by `PolyPhen`. I am using: ``` filter_vep -i "$input" -o "$output" -filter "SIFT = deleterious and PolyPhen match damaging" ``` and I also tried: ``` filter_vep -i "$input" -o "$output" -filter "SIFT match deleterious and PolyPhen match damaging" ``` The output contains just variants with some value in these two fields, but some of the variants are classified as tolerated by `SIFT` and as benign by `PolyPhen`, so do you know why the filter is not working? As an example: these two variants shouldn't exist in my filtered VCF, because the first one is classified as benign by PolyPhen and the second as tolerated by SIFT and benign by PolyPhen: ``` CHROM POS REF ALT FILTER INFO 1 1934441 . G T . PASS deleterious(0.01)|benign(0.138) 1 207085100 . G A . PASS tolerated(0.16)|benign(0.037) ``` As an example of a variant that has properly passed those filters: ``` CHROM POS REF ALT FILTER INFO 2 3938449 . T A . PASS deleterious(0.01)|probably damaging(0.67) ``` Those are 3 of the variants in my filtered VCF, but I should be getting just those which are classified as deleterious and damaging by both SIFT and PolyPhen, like the third variant I've shown.
2020/05/01
[ "https://bioinformatics.stackexchange.com/questions/13160", "https://bioinformatics.stackexchange.com", "https://bioinformatics.stackexchange.com/users/8631/" ]
I think what you're trying to do could be achieved by adding parenthesis to the conditions and by specifying the format of the input file by using the flag `--format vcf`, as specified in the [documentation](https://www.ensembl.org/info/docs/tools/vep/script/vep_filter.html). The final command should look like this: ```bash filter_vep -i "$input" --format vcf --filter "(SIFT = deleterious) and (PolyPhen match damaging)" -o "$output" ```
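Independent of `filter_vep`, a few lines of Python can sanity-check the filtered output; this is a rough substring test on the INFO column of the example records above, not a CSQ-aware parser:

```python
records = [
    "1\t1934441\t.\tG\tT\t.\tPASS\tdeleterious(0.01)|benign(0.138)",
    "1\t207085100\t.\tG\tA\t.\tPASS\ttolerated(0.16)|benign(0.037)",
    "2\t3938449\t.\tT\tA\t.\tPASS\tdeleterious(0.01)|probably damaging(0.67)",
]
# Keep only records whose INFO field (column 8) mentions both classifications.
kept = [r for r in records if "deleterious" in r.split("\t")[7]
        and "damaging" in r.split("\t")[7]]
print(kept)  # only the 2:3938449 record survives
```

Running the same check over the real filtered VCF would immediately flag any `tolerated`/`benign` records that slipped through.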
If you are working under a Linux environment, then you may easily use a piped grep search. For example: ``` grep deleterious file.vcf |grep damaging >filtered_file.vcf ```
13,160
I am filtering a VEP-annotated VCF, trying to keep just those variants classified as deleterious by `SIFT` and as damaging (probably or possibly included) by `PolyPhen`. I am using: ``` filter_vep -i "$input" -o "$output" -filter "SIFT = deleterious and PolyPhen match damaging" ``` and I also tried: ``` filter_vep -i "$input" -o "$output" -filter "SIFT match deleterious and PolyPhen match damaging" ``` The output contains just variants with some value in these two fields, but some of the variants are classified as tolerated by `SIFT` and as benign by `PolyPhen`, so do you know why the filter is not working? As an example: these two variants shouldn't exist in my filtered VCF, because the first one is classified as benign by PolyPhen and the second as tolerated by SIFT and benign by PolyPhen: ``` CHROM POS REF ALT FILTER INFO 1 1934441 . G T . PASS deleterious(0.01)|benign(0.138) 1 207085100 . G A . PASS tolerated(0.16)|benign(0.037) ``` As an example of a variant that has properly passed those filters: ``` CHROM POS REF ALT FILTER INFO 2 3938449 . T A . PASS deleterious(0.01)|probably damaging(0.67) ``` Those are 3 of the variants in my filtered VCF, but I should be getting just those which are classified as deleterious and damaging by both SIFT and PolyPhen, like the third variant I've shown.
2020/05/01
[ "https://bioinformatics.stackexchange.com/questions/13160", "https://bioinformatics.stackexchange.com", "https://bioinformatics.stackexchange.com/users/8631/" ]
Here is another way to achieve your goal using the `fuc vcf_vep` [command](https://sbslee-fuc.readthedocs.io/en/latest/cli.html#vcf-vep) I wrote: ``` $ fuc vcf_vep in.vcf 'SIFT.str.contains("deleterious") and PolyPhen.str.contains("damaging")' > out.vcf ``` For getting help: ``` $ fuc vcf_vep -h usage: fuc vcf_vep [-h] [--opposite] [--as_zero] vcf expr This command will filter a VCF file annotated by Ensemble VEP. It essentially wraps the `pandas.DataFrame.query` method. For details on query expression, please visit the method's documentation page (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html#pandas-dataframe-query). examples: $ fuc vcf_vep in.vcf 'SYMBOL == "TP53"' > out.vcf $ fuc vcf_vep in.vcf 'SYMBOL != "TP53"' > out.vcf $ fuc vcf_vep in.vcf 'SYMBOL == "TP53"' --opposite > out.vcf $ fuc vcf_vep in.vcf 'Consequence in ["splice_donor_variant", "stop_gained"]' > out.vcf $ fuc vcf_vep in.vcf '(SYMBOL == "TP53") and (Consequence.str.contains("stop_gained"))' > out.vcf $ fuc vcf_vep in.vcf 'gnomAD_AF < 0.001' > out.vcf $ fuc vcf_vep in.vcf 'gnomAD_AF < 0.001' --as_zero > out.vcf positional arguments: vcf Ensemble VEP-annotated VCF file expr query expression to evaluate optional arguments: -h, --help show this help message and exit --opposite use this flag to return records that don't meet the said criteria --as_zero use this flag to treat missing values as zero instead of NaN ``` You can also do above with Python: ``` from fuc import pyvcf, pyvep vf = pyvcf.VcfFrame.from_file('in.vcf') expr = 'SIFT.str.contains("deleterious") and PolyPhen.str.contains("damaging")' filtered_vf = pyvep.filter_query(vf, expr) filtered_vf.to_file('out.vcf') ```
If you are working under a Linux environment, then you may easily use a piped grep search. For example: ``` grep deleterious file.vcf |grep damaging >filtered_file.vcf ```
13,160
I am filtering a VEP-annotated VCF, trying to keep just those variants classified as deleterious by `SIFT` and as damaging (probably or possibly) by `PolyPhen`. I am using: ``` filter_vep -i "$input" -o "$output" -filter "SIFT = deleterious and PolyPhen match damaging" ``` and I also tried: ``` filter_vep -i "$input" -o "$output" -filter "SIFT match deleterious and PolyPhen match damaging" ``` The output contains just variants with some value in these two fields, but some of the variants are classified as tolerated by `SIFT` and as benign by `PolyPhen`, so do you know why the filter is not working? As an example, these two variants shouldn't exist in my filtered VCF, because the first one is classified as benign by PolyPhen and the second is tolerated by SIFT and benign by PolyPhen: ``` CHROM POS REF ALT FILTER INFO 1 1934441 . G T . PASS deleterious(0.01)|benign(0.138) 1 207085100 . G A . PASS tolerated(0.16)|benign(0.037) ``` As an example of a variant that has properly passed those filters: ``` CHROM POS REF ALT FILTER INFO 2 3938449 . T A . PASS deleterious(0.01)|probably damaging(0.67) ``` Those are 3 variants among those in my filtered VCF, but I should be getting just those variants classified as deleterious by SIFT and damaging by PolyPhen, like the third variant I've shown.
2020/05/01
[ "https://bioinformatics.stackexchange.com/questions/13160", "https://bioinformatics.stackexchange.com", "https://bioinformatics.stackexchange.com/users/8631/" ]
I think what you're trying to do could be achieved by adding parenthesis to the conditions and by specifying the format of the input file by using the flag `--format vcf`, as specified in the [documentation](https://www.ensembl.org/info/docs/tools/vep/script/vep_filter.html). The final command should look like this: ```bash filter_vep -i "$input" --format vcf --filter "(SIFT = deleterious) and (PolyPhen match damaging)" -o "$output" ```
Here is another way to achieve your goal using the `fuc vcf_vep` [command](https://sbslee-fuc.readthedocs.io/en/latest/cli.html#vcf-vep) I wrote: ``` $ fuc vcf_vep in.vcf 'SIFT.str.contains("deleterious") and PolyPhen.str.contains("damaging")' > out.vcf ``` For getting help: ``` $ fuc vcf_vep -h usage: fuc vcf_vep [-h] [--opposite] [--as_zero] vcf expr This command will filter a VCF file annotated by Ensemble VEP. It essentially wraps the `pandas.DataFrame.query` method. For details on query expression, please visit the method's documentation page (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.query.html#pandas-dataframe-query). examples: $ fuc vcf_vep in.vcf 'SYMBOL == "TP53"' > out.vcf $ fuc vcf_vep in.vcf 'SYMBOL != "TP53"' > out.vcf $ fuc vcf_vep in.vcf 'SYMBOL == "TP53"' --opposite > out.vcf $ fuc vcf_vep in.vcf 'Consequence in ["splice_donor_variant", "stop_gained"]' > out.vcf $ fuc vcf_vep in.vcf '(SYMBOL == "TP53") and (Consequence.str.contains("stop_gained"))' > out.vcf $ fuc vcf_vep in.vcf 'gnomAD_AF < 0.001' > out.vcf $ fuc vcf_vep in.vcf 'gnomAD_AF < 0.001' --as_zero > out.vcf positional arguments: vcf Ensemble VEP-annotated VCF file expr query expression to evaluate optional arguments: -h, --help show this help message and exit --opposite use this flag to return records that don’t meet the said criteria --as_zero use this flag to treat missing values as zero instead of NaN ``` You can also do the above with Python: ``` from fuc import pyvcf, pyvep vf = pyvcf.VcfFrame.from_file('in.vcf') expr = 'SIFT.str.contains("deleterious") and PolyPhen.str.contains("damaging")' filtered_vf = pyvep.filter_query(vf, expr) filtered_vf.to_file('out.vcf') ```
136,585
I just read: [What is the best site to ask Facebook questions?](https://meta.stackexchange.com/questions/136560/what-is-the-best-stack-to-ask-facebook-questions) Thanks to gobernador, I just learned that SO has a site for Facebook. This butts up against: [WordPress Answers or Stack Overflow?](https://meta.stackexchange.com/questions/136577/wordpress-answers-or-stack-overflow) While we can have a debate on how dynamic FB is compared to WP, or vice versa, should questions be pushed to their own sites, or are things getting too specific?
2012/06/18
[ "https://meta.stackexchange.com/questions/136585", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/183974/" ]
For *programming* questions about Facebook, you can ask on <http://facebook.stackoverflow.com/> - this isn't really a *separate site*, merely a custom "view" of Stack Overflow with an emphasis on Facebook questions. Note that "programming" means "developing an app that connects with Facebook's APIs in some way". As with the rest of Stack Overflow, [the FAQ applies](http://facebook.stackoverflow.com/faq#dontask). For questions on *using* Facebook, ask on [Web Applications](https://webapps.stackexchange.com/). For support questions, see: <http://www.facebook.com/help/> - Stack Exchange does not and cannot provide answers about problems with your account, billing, etc.
If you want to find questions on a common topic across all of the Stack Exchange sites, check out the "Filtered Questions" tool on <http://stackexchange.com>. ![modify filter screen](https://i.stack.imgur.com/FcbXi.png)
70,483,525
I'm trying to use the GOOGLETRANSLATE function on Google Sheets but would like the cell to detect English or Japanese. I wanted to use DETECTLANGUAGE to find out the language first, but I'm not sure how to format it. Here's what I did, but I get an error: `=if(=DETECTLANGUAGE(A2)="en",[=GOOGLETRANSLATE("jp")]))` If it's English, I want it to be translated to Japanese and vice versa. Does anyone know if I am on the right track? Thank you in advance.
2021/12/26
[ "https://Stackoverflow.com/questions/70483525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17763584/" ]
Based on the GOOGLETRANSLATE and DETECTLANGUAGE functions, try this: ``` =IF(DETECTLANGUAGE(A2)="en",GOOGLETRANSLATE(A2, "en", "ja"),GOOGLETRANSLATE(A2, "ja", "en")) ``` Sources: <https://support.google.com/docs/answer/3093331?hl=en> <https://support.google.com/docs/answer/3093278?hl=en>
A few problems with your attempted answer. First, the syntax is `GOOGLETRANSLATE(text, source, target)`: to translate cell A2 from English to Japanese you would use `=GOOGLETRANSLATE(A2,"en","ja")`, or, if you want the source language detected, `=GOOGLETRANSLATE(A2,"auto","ja")`. The first language code is the source and the second is the target; the list of supported languages and their codes is in the language docs (note `ja`, not `jp`, for Japanese): <https://cloud.google.com/translate/docs/languages> Next, when nesting functions (i.e. calling functions within other functions) you do not prefix them with `=`, so you would use something like `=IF(DETECTLANGUAGE(A2)="en", "ja", "en")` to return the desired target language. You can put this all together like this: `=GOOGLETRANSLATE(A2, "auto", IF(DETECTLANGUAGE(A2)="en", "ja", "en"))`
35,489,427
I want to print two integer variables side by side, as one string. ``` int a = 1, b = 2; System.out.println(a + b); ``` Obviously println() evaluates them as integers and prints the sum. Instead I would like the output to be "12". Any ideas?
2016/02/18
[ "https://Stackoverflow.com/questions/35489427", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5947430/" ]
Insert an empty String so the `+` operator performs string concatenation instead of addition: ``` System.out.println(a + "" + b); ```
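For completeness, a compact sketch contrasting integer addition with the empty-string trick (the class name is just for illustration):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        int a = 1, b = 2;
        System.out.println(a + b);                 // 3  -- both operands are ints, so + adds
        System.out.println(a + "" + b);            // 12 -- the "" makes + mean concatenation
        System.out.println(String.valueOf(a) + b); // 12 -- the explicit variant, no trick needed
    }
}
```

Evaluation is left to right: once one operand of `+` is a String, the rest of the expression concatenates.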
Store the integers as strings: ``` String a = "1"; String b = "2"; String c = a + b; System.out.println(c); ```
19,524,881
Need to create a javascript that opens **a single** window. Code: ``` document.body.onclick= function() { window.open( 'www.androidhackz.blogspot.com', 'poppage', 'toolbars=0, scrollbars=1, location=0, statusbars=0, menubars=0, resizable=1, width=650, height=650, left = 300, top = 50' ); } ``` What should I do? This script opens a new window on every single click on the website - I want it to happen only once.
2013/10/22
[ "https://Stackoverflow.com/questions/19524881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Add a flag that records that you opened it. Check the flag: if it is set, do not open the window again. If it should happen only once for the entire site (i.e. across page loads), that means using a cookie or localStorage.
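A sketch of both variants described above (the `popupShown` key and function name are illustrative; the real handler would call `window.open` inside the if-branch):

```javascript
// Once per page load: a closure holds the "already opened" flag.
function makeOpenOnce(openFn) {
  var opened = false;
  return function () {
    if (!opened) {
      opened = true;
      openFn();
    }
  };
}

// document.body.onclick = makeOpenOnce(function () {
//   window.open('www.androidhackz.blogspot.com', 'poppage', 'width=650, height=650');
// });

// Once per visitor (survives page loads): persist the flag instead.
// document.body.onclick = function () {
//   if (!localStorage.getItem('popupShown')) {
//     localStorage.setItem('popupShown', '1');
//     window.open('www.androidhackz.blogspot.com', 'poppage', 'width=650, height=650');
//   }
// };
```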
``` var clickedAlready = false; document.body.onclick = function() { if (!clickedAlready) { window.open('www.androidhackz.blogspot.com', 'poppage', 'toolbars=0, scrollbars=1, location=0, statusbars=0, menubars=0, resizable=1, width=650, height=650, left = 300, top = 50'); clickedAlready = true; } }; ```
19,524,881
Need to create a javascript that opens **a single** window. Code: ``` document.body.onclick= function() { window.open( 'www.androidhackz.blogspot.com', 'poppage', 'toolbars=0, scrollbars=1, location=0, statusbars=0, menubars=0, resizable=1, width=650, height=650, left = 300, top = 50' ); } ``` What should I do? This script opens a new window on every single click on the website - I want it to happen only once.
2013/10/22
[ "https://Stackoverflow.com/questions/19524881", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
``` var count = 0; document.body.onclick= function(){ if(count === 0) window.open('www.androidhackz.blogspot.com', 'poppage', 'toolbars=0, scrollbars=1, location=0, statusbars=0, menubars=0, resizable=1, width=650, height=650, left = 300, top = 50'); count++; } ```
``` var clickedAlready = false; document.body.onclick = function() { if (!clickedAlready) { window.open('www.androidhackz.blogspot.com', 'poppage', 'toolbars=0, scrollbars=1, location=0, statusbars=0, menubars=0, resizable=1, width=650, height=650, left = 300, top = 50'); clickedAlready = true; } }; ```
51,241,660
Please help.. I have 2 tables. Current user_id = 3. Users Table: ``` | user_id | email | name | ------------------------------------------- | 1 | one@gmail.com | ridwan | | 2 | two@gmail.com | budi | | 3 | six@gmail.com | stevan | | 4 | ten@gmail.com | agung | ``` Relations Table [user_id and follower_id are related to the Users Table]: ``` | relation_id | user_id | follower_id | ----------------------------------------- | 1 | 1 | 3 | | 2 | 2 | 3 | ``` I want to get the list of users, but if I already have a relation with a user, it should give me a status of 'following', just like Instagram. Maybe it would look like this: ``` { user_id : 1, name : ridwan, status : following }, { user_id : 2, name : budi, status : following }, { user_id : 4, name : agung, status : not following } ``` How can I do that in Laravel? Thank you..
2018/07/09
[ "https://Stackoverflow.com/questions/51241660", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9584987/" ]
Double-check your credentials; you also need to allow less secure apps in Gmail. 1) Go to <https://myaccount.google.com/lesssecureapps> 2) Enable the Less Secure Apps option
I have another solution for the same. If you want to avoid storing a password, for security purposes, you can use my solution too. First you need sender account credentials, which you can create from here: <https://console.developers.google.com/projectselector/apis/credentials?supportedpurview=project> Here is my testing code [file: server.js]: ``` // start server var http = require('http'); // Create a transport instance using nodemailer var nodemailer = require('nodemailer'); var smtpTransport = nodemailer.createTransport({ host: "smtp.gmail.com", auth: { type: "OAuth2", user: "sender_email_id", clientId: "YOUR_CLIENT_ID", clientSecret: "YOUR_CLIENT_SECRET", refreshToken: "YOUR_REFRESH_TOKEN" } }); var htmlBody = '<h2>Hello Body</h2>'; // Setup mail configuration var mailOptions = { from: 'sender_email_id', // sender address to: 'recipient_email_id', // list of receivers subject: 'TEST SUBJECT', // Subject line text: 'Hello Body', // plaintext body html: htmlBody // html body }; var app = http.createServer(function (req, res) { // send mail smtpTransport.sendMail(mailOptions, function(error, info) { res.writeHead(200, {'Content-Type': 'text/html'}); if (error) { console.log('error', error); res.write('Error'); res.end(); return; } console.log('Message %s sent: %s', info.messageId, info.response); smtpTransport.close(); res.write('Email sent'); res.end(); }); }).listen(3000); ```
18,832,801
I've been looking to find how to configure a client to connect to a Cassandra cluster. Independent of clients like Pelops, Hector, etc., what is the best way to connect to a multi-node Cassandra cluster? Passing a hard-coded list of IP strings works fine, but what about a growing number of cluster nodes in the future? Does the client have to keep a synchronized list of ALL cluster node IPs?
2013/09/16
[ "https://Stackoverflow.com/questions/18832801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1119684/" ]
I don't know if this answers all your questions, but the growth of the cluster and the client's knowledge of IPs are not related. I have a 5 node cluster but the client(s) only know 2 IP addresses: the seeds. Since each machine of the cluster knows about the seeds (each cassandra.yaml contains the seed IP addresses), if a new machine is added, information about the new one will come "for free" on the client side. Imagine a 5 node cluster with the following IPs: 192.168.1.1, 192.168.1.2 (seed), 192.168.1.3, 192.168.1.4 (seed), 192.168.1.5. E.g.: node .5 boots -- it will contact the seeds (nodes 2 and 4) and receive back information about the whole cluster. If you add a new node 192.168.1.6, it will behave exactly like .5 and will ask the seeds about the cluster situation. On the client side you don't have to change anything: you will just know that now you have 6 endpoints instead of 5. ps: you don't necessarily have to connect to the seeds; you can connect to any node, since after having contacted the seeds each node knows the whole cluster topology. pps: it's your choice how many nodes to put in your "client known hosts"; you can also put all 5, but this won't change the fact that if a node is added you don't need to do anything on the client side. Regards, Carlo
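As a sketch of where that seed knowledge lives: every node's cassandra.yaml for the example cluster above would carry the same seed list (the addresses are the illustrative ones from the answer):

```yaml
# cassandra.yaml (same on all five nodes of the example cluster)
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.1.2,192.168.1.4"
```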
You will have an easier time letting the client track the state of each node. Smart clients will track endpoint state via the gossipinfo, which passes on new nodes as they appear in the cluster.
3,829,296
How to show that $$\lim\_{(x,y,z) \to (0,0,0)} \frac{xyz}{x^2+y^2+z^2}=0,$$ where $x,y,z>0$. My attempt: $$||(x,y,z)|| < \delta \implies |x|, |y|, |z| < \delta$$ $$\left | \frac{xyz}{x^2+y^2+z^2} \right | < \left | \frac{xyz}{x^2}\right | < \frac{\delta^3}{x^2}.$$ Now, I do not know how to proceed, and I think my attempt might be wrong.
2020/09/17
[ "https://math.stackexchange.com/questions/3829296", "https://math.stackexchange.com", "https://math.stackexchange.com/users/426645/" ]
Let $y= \frac{7x^{2}-4x+4}{x^{2}+1}$ $\Rightarrow (y-7)x^2+4x+(y-4)=0$ As $x$ must be real, the discriminant is $≥0$ $\Rightarrow 16-4(y-7)(y-4)≥0$ $\Rightarrow y^2-11y+24≤0$ Can you finish?
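(In case a reader wants the finish:) the quadratic factors, giving the range directly:

```latex
y^2 - 11y + 24 \le 0
\iff (y - 3)(y - 8) \le 0
\iff 3 \le y \le 8.
```

Both endpoints are attained for real $x$: $y=3$ gives $-4x^2+4x-1=-(2x-1)^2=0$, i.e. $x=\tfrac{1}{2}$, and $y=8$ gives $x^2+4x+4=(x+2)^2=0$, i.e. $x=-2$.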
$f(x)=7-\frac{4x+3}{x^2+1}=7-g(x)$ We now have to find the range of $g(x)=\frac{4x+3}{x^2+1}$. Its domain is $(-\infty,\infty)$, there are no vertical asymptotes, and there is a horizontal asymptote of $y=0$. Some more inspection shows that for $x>-\frac{3}{4}$, $g(x)>0$ and for $x<-\frac{3}{4}$, $g(x)<0$. So $g(x)$ looks something like the derivative of a negative normal distribution. Specifically, this means there will be a local maximum at some $x>-\frac{3}{4}$ and a local minimum at some $x<-\frac{3}{4}$. $g'(x)=\frac{4(x^2+1)-8x^2-6x}{(x^2+1)^2}$. Solving for the location of the extrema, we get $2x^2+3x-2=0$, which has solutions $x=\{-2,0.5\}$. We already determined which one is the minimum and which the maximum, so the range of $g(x)$ is $[g(-2),g(0.5)]=[-1,4]$. So the range of $f(x)$ is $[3,8]$.
2,915,266
A client of mine has a pure HTML website that was built in the dark ages - they want me to find where their users are coming from, how many individual users there are, etc. They want to know if the site is being used enough for them to invest the money into renovating it. I am remote from their site and do not have access to their web server. Is there something like ComScore for small sites that I can go into to check their usage statistics?
2010/05/26
[ "https://Stackoverflow.com/questions/2915266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/109035/" ]
It looks like you have stumbled upon a nice [S-Expression](http://en.wikipedia.org/wiki/S-expression) file, also known as [LISP](http://en.wikipedia.org/wiki/Lisp_programming_language) code. It does look complex, but it's actually pretty easy to parse. In fact, if you want to learn a lot about Lisp you could follow these [blog posts](http://peter.michaux.ca/articles/scheme-from-scratch-introduction); a small part of them is writing a parser for files like this. But that's probably overkill for you. :) Instead you should use an already available S-Expression parser; here's a project that has a [lisp interpreter](http://www.lsharp.org/) for .NET, so you should be able to use either their code or their project to parse the file. The lispy thing to do would be to just read the file as a Lisp program, so instead of 'parsing' it you would just execute it. So another option would be to write a small Lisp program to transform the file into something a little more natural in C# (maybe XML?). For reference, here's another post that talks about [lisp in C#](https://stackoverflow.com/questions/70004/using-lisp-in-c) **EDIT** [here](http://github.com/petermichaux/bootstrap-scheme/blob/v0.21/scheme.c) is a scheme interpreter written in C (it's only about 1000 LOC); you are interested in the `read` and associated procedures. It uses a very simple forward-only parse of an s-expression into a tree of C structs; you should be able to adapt this into C# no problem.
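If you do want to roll your own rather than reuse an interpreter, the core of an s-expression reader is tiny. A language-agnostic sketch (shown here in Python; a C# port is mechanical), handling only atoms and parentheses -- real inputs would also need strings, comments, and quote forms:

```python
def tokenize(text):
    # Pad parentheses with spaces so a plain split() yields tokens.
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Recursive descent: "(" opens a nested list, anything else is an atom.
    token = tokens.pop(0)
    if token == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return lst
    return token

print(parse(tokenize("(a (b c) d)")))  # ['a', ['b', 'c'], 'd']
```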
You might consider writing a state machine implementation which changes states according to the different tokens you encounter within the file. I have found state-based parsers to be quite easy to write and debug. The most difficult part would likely be defining the tokens you use.
2,915,266
A client of mine has a pure HTML website that was built in the dark ages - they want me to find where their users are coming from, how many individual users there are, etc. They want to know if the site is being used enough for them to invest the money into renovating it. I am remote from their site and do not have access to their web server. Is there something like ComScore for small sites that I can go into to check their usage statistics?
2010/05/26
[ "https://Stackoverflow.com/questions/2915266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/109035/" ]
It looks like you have stumbled upon a nice [S-Expression](http://en.wikipedia.org/wiki/S-expression) file, also known as [LISP](http://en.wikipedia.org/wiki/Lisp_programming_language) code. It does look complex, but it's actually pretty easy to parse. In fact, if you want to learn a lot about Lisp you could follow these [blog posts](http://peter.michaux.ca/articles/scheme-from-scratch-introduction); a small part of them is writing a parser for files like this. But that's probably overkill for you. :) Instead you should use an already available S-Expression parser; here's a project that has a [lisp interpreter](http://www.lsharp.org/) for .NET, so you should be able to use either their code or their project to parse the file. The lispy thing to do would be to just read the file as a Lisp program, so instead of 'parsing' it you would just execute it. So another option would be to write a small Lisp program to transform the file into something a little more natural in C# (maybe XML?). For reference, here's another post that talks about [lisp in C#](https://stackoverflow.com/questions/70004/using-lisp-in-c) **EDIT** [here](http://github.com/petermichaux/bootstrap-scheme/blob/v0.21/scheme.c) is a scheme interpreter written in C (it's only about 1000 LOC); you are interested in the `read` and associated procedures. It uses a very simple forward-only parse of an s-expression into a tree of C structs; you should be able to adapt this into C# no problem.
Use a parser generator like ANTLR. It takes a EBNF-like description of the grammar and creates parser code in the language of your choice.
2,915,266
A client of mine has a pure HTML website that was built in the dark ages - they want me to find where their users are coming from, how many individual users there are, etc. They want to know if the site is being used enough for them to invest the money into renovating it. I am remote from their site and do not have access to their web server. Is there something like ComScore for small sites that I can go into to check their usage statistics?
2010/05/26
[ "https://Stackoverflow.com/questions/2915266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/109035/" ]
It looks like you have stumbled upon a nice [S-Expression](http://en.wikipedia.org/wiki/S-expression) file, also known as [LISP](http://en.wikipedia.org/wiki/Lisp_programming_language) code. It does look complex, but it's actually pretty easy to parse. In fact, if you want to learn a lot about Lisp you could follow these [blog posts](http://peter.michaux.ca/articles/scheme-from-scratch-introduction); a small part of them is writing a parser for files like this. But that's probably overkill for you. :) Instead you should use an already available S-Expression parser; here's a project that has a [lisp interpreter](http://www.lsharp.org/) for .NET, so you should be able to use either their code or their project to parse the file. The lispy thing to do would be to just read the file as a Lisp program, so instead of 'parsing' it you would just execute it. So another option would be to write a small Lisp program to transform the file into something a little more natural in C# (maybe XML?). For reference, here's another post that talks about [lisp in C#](https://stackoverflow.com/questions/70004/using-lisp-in-c) **EDIT** [here](http://github.com/petermichaux/bootstrap-scheme/blob/v0.21/scheme.c) is a scheme interpreter written in C (it's only about 1000 LOC); you are interested in the `read` and associated procedures. It uses a very simple forward-only parse of an s-expression into a tree of C structs; you should be able to adapt this into C# no problem.
One approach is to just start with a helper parsing class like the one described at <http://www.blackbeltcoder.com/Articles/strings/a-text-parsing-helper-class> and then process the file character by character. This is what I've done for several classes.
2,915,266
A client of mine has a pure HTML website that was built in the dark ages - they want me to find where their users are coming from, how many individual users there are, etc. They want to know if the site is being used enough for them to invest the money into renovating it. I am remote from their site and do not have access to their web server. Is there something like ComScore for small sites that I can go into to check their usage statistics?
2010/05/26
[ "https://Stackoverflow.com/questions/2915266", "https://Stackoverflow.com", "https://Stackoverflow.com/users/109035/" ]
It looks like you have stumbled upon a nice [S-Expression](http://en.wikipedia.org/wiki/S-expression) file, also known as [LISP](http://en.wikipedia.org/wiki/Lisp_programming_language) code. It does look complex, but it's actually pretty easy to parse. In fact, if you want to learn a lot about Lisp you could follow these [blog posts](http://peter.michaux.ca/articles/scheme-from-scratch-introduction); a small part of them is writing a parser for files like this. But that's probably overkill for you. :) Instead you should use an already available S-Expression parser; here's a project that has a [lisp interpreter](http://www.lsharp.org/) for .NET, so you should be able to use either their code or their project to parse the file. The lispy thing to do would be to just read the file as a Lisp program, so instead of 'parsing' it you would just execute it. So another option would be to write a small Lisp program to transform the file into something a little more natural in C# (maybe XML?). For reference, here's another post that talks about [lisp in C#](https://stackoverflow.com/questions/70004/using-lisp-in-c) **EDIT** [here](http://github.com/petermichaux/bootstrap-scheme/blob/v0.21/scheme.c) is a scheme interpreter written in C (it's only about 1000 LOC); you are interested in the `read` and associated procedures. It uses a very simple forward-only parse of an s-expression into a tree of C structs; you should be able to adapt this into C# no problem.
I wrote an S-Expression parser for C# using OMeta#. It is available at <https://github.com/databigbang/SExpression.NET> Looking at your S-Expression variant, you just need to change my definition of a string from opening and closing double quotes to single quotes, and add a definition for elements that have a colon at the end (I assume those are dictionaries).
14,694,914
I'm trying to implement a jquery ui dialog. Using [this code](http://jsfiddle.net/N7PRp/) as a base, I've succeeded. But I would rather use elements' classes instead of IDs. I therefore modified the code to this: ``` $(document).ready(function() { $(".add_shipping_address").click(function() { console.log($(this).parents('.shipping_sector')); //correctly returns the parent fieldset $(this).parents('.shipping_sector').find(".shipping_dialog").dialog(); return false; }); }); ``` The dialog works the first time, but once it is closed, it will not open again. Whereas it works as expected in the source example. How have I damaged it? [jsbin](http://jsbin.com/ufaxop/1/edit)
2013/02/04
[ "https://Stackoverflow.com/questions/14694914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252748/" ]
The way jQuery dialogs work is that they take the HTML for the dialog out of its current location in the DOM and place a new `div` at the bottom of the DOM. When you open your dialog, its new location is as seen below; therefore your HTML is no longer where it was, and your selector using `find` is not going to find anything. You have to either use an `id` or the class name directly, but if you have multiple elements with that class you are better off using identifiers. What we do in our project is craft a new div with an id specifically for the dialog; then we know which one it is. You can then either place your actual content into the new container or `clone()` it and place it inside. Similar to this: ``` var $dialog = $('<div id="dialog-container"></div>') var $content = $(this).parents('.shipping_sector').find(".shipping_dialog"); var $clonedContent = $(this).parents('.shipping_sector').find(".shipping_dialog").clone() // use clone(true, true) to include bound events. $dialog.append($content); // or $dialog.append($clonedContent); $dialog.dialog(); ``` But that means you also have to slightly restructure your code to deal with that. In addition, when the dialog is destroyed it does not move the HTML back to where it found it, so we manually have to put it back. Mind you, we are using jQuery 1.7 and I don't know if that is still an issue in 1.9. Dialogs are quite tricky to deal with, but if you use something similar to the above, whereby you create a custom `div` and give it a unique id, you have a lot of freedom.
### What your new HTML looks like when dialog is opened: ``` <div style="display: block; z-index: 1003; outline: 0px; position: absolute; height: auto; width: 300px; top: 383px; left: 86px;" class="ui-dialog ui-widget ui-widget-content ui-corner-all ui-draggable ui-resizable" tabindex="-1" role="dialog" aria-labelledby="ui-dialog-title-1"> <div class="ui-dialog-titlebar ui-widget-header ui-corner-all ui-helper-clearfix"><span class="ui-dialog-title" id="ui-dialog-title-1">Contact form</span> <a href="#" class="ui-dialog-titlebar-close ui-corner-all" role="button"><span class="ui-icon ui-icon-closethick">close</span> </a> </div> <div class="shipping_dialog ui-dialog-content ui-widget-content" style="display: block; width: auto; min-height: 91.03125px; height: auto;" scrolltop="0" scrollleft="0"> <p>appear now</p> </div> <div class="ui-resizable-handle ui-resizable-n" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-e" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-s" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-w" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-se ui-icon ui-icon-gripsmall-diagonal-se ui-icon-grip-diagonal-se" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-sw" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-ne" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-nw" style="z-index: 1000;"></div> </div> ```
It may be that the parenting structure for the dialog has changed. Try changing it to ``` //jquery dialog functions $(document).ready(function() { $(".add_shipping_address").click(function() { //console.log($(this).parents('.shipping_sector')); $(".shipping_dialog").dialog(); return false; }); }); ``` [jsbin](http://jsbin.com/ufaxop/6/edit)
14,694,914
I'm trying to implement a jquery ui dialog. Using [this code](http://jsfiddle.net/N7PRp/) as a base, I've succeeded. But I would rather use elements' classes instead of IDs. I therefore modified the code to this: ``` $(document).ready(function() { $(".add_shipping_address").click(function() { console.log($(this).parents('.shipping_sector')); //correctly returns the parent fieldset $(this).parents('.shipping_sector').find(".shipping_dialog").dialog(); return false; }); }); ``` The dialog works the first time, but once it is closed, it will not open again. Whereas it works as expected in the source example. How have I damaged it? [jsbin](http://jsbin.com/ufaxop/1/edit)
2013/02/04
[ "https://Stackoverflow.com/questions/14694914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252748/" ]
Your current code is: ``` $(".OpenDialogOnClick").dialog(); ``` Just change it to: ``` $(".OpenDialogOnClick").clone().dialog(); ``` Voila, your HTML will never be destroyed / deleted again :)
It may be that the parenting structure for the dialog has changed. Try changing it to ``` //jquery dialog functions $(document).ready(function() { $(".add_shipping_address").click(function() { //console.log($(this).parents('.shipping_sector')); $(".shipping_dialog").dialog(); return false; }); }); ``` [jsbin](http://jsbin.com/ufaxop/6/edit)
14,694,914
I'm trying to implement a jquery ui dialog. Using [this code](http://jsfiddle.net/N7PRp/) as a base, I've succeeded. But I would rather use elements' classes instead of IDs. I therefore modified the code to this: ``` $(document).ready(function() { $(".add_shipping_address").click(function() { console.log($(this).parents('.shipping_sector')); //correctly returns the parent fieldset $(this).parents('.shipping_sector').find(".shipping_dialog").dialog(); return false; }); }); ``` The dialog works the first time, but once it is closed, it will not open again. Whereas it works as expected in the source example. How have I damaged it? [jsbin](http://jsbin.com/ufaxop/1/edit)
2013/02/04
[ "https://Stackoverflow.com/questions/14694914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252748/" ]
The way jQuery dialogs work is that they take the HTML for the dialog out of its current location in the DOM and place a new `div` at the bottom of the DOM. When you open your dialog, its new location is as seen below; therefore your HTML is no longer where it was, and your selector using `find` is not going to find anything. You have to either use an `id` or the class name directly, but if you have multiple elements with that class you are better off using identifiers. What we do in our project is craft a new div with an id specifically for the dialog; then we know which one it is. You can then either place your actual content into the new container or `clone()` it and place it inside. Similar to this: ``` var $dialog = $('<div id="dialog-container"></div>') var $content = $(this).parents('.shipping_sector').find(".shipping_dialog"); var $clonedContent = $(this).parents('.shipping_sector').find(".shipping_dialog").clone() // use clone(true, true) to include bound events. $dialog.append($content); // or $dialog.append($clonedContent); $dialog.dialog(); ``` But that means you also have to slightly restructure your code to deal with that. In addition, when the dialog is destroyed it does not move the HTML back to where it found it, so we manually have to put it back. Mind you, we are using jQuery 1.7 and I don't know if that is still an issue in 1.9. Dialogs are quite tricky to deal with, but if you use something similar to the above, whereby you create a custom `div` and give it a unique id, you have a lot of freedom.
### What your new HTML looks like when dialog is opened: ``` <div style="display: block; z-index: 1003; outline: 0px; position: absolute; height: auto; width: 300px; top: 383px; left: 86px;" class="ui-dialog ui-widget ui-widget-content ui-corner-all ui-draggable ui-resizable" tabindex="-1" role="dialog" aria-labelledby="ui-dialog-title-1"> <div class="ui-dialog-titlebar ui-widget-header ui-corner-all ui-helper-clearfix"><span class="ui-dialog-title" id="ui-dialog-title-1">Contact form</span> <a href="#" class="ui-dialog-titlebar-close ui-corner-all" role="button"><span class="ui-icon ui-icon-closethick">close</span> </a> </div> <div class="shipping_dialog ui-dialog-content ui-widget-content" style="display: block; width: auto; min-height: 91.03125px; height: auto;" scrolltop="0" scrollleft="0"> <p>appear now</p> </div> <div class="ui-resizable-handle ui-resizable-n" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-e" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-s" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-w" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-se ui-icon ui-icon-gripsmall-diagonal-se ui-icon-grip-diagonal-se" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-sw" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-ne" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-nw" style="z-index: 1000;"></div> </div> ```
If you do not mind creating a new dialog every time, you can destroy your dialog on close and move its contents back to their previous location. That way, on the next click the process repeats itself. ``` //jquery dialog functions $(document).ready(function() { $(".add_shipping_address").click(function() { var sector = $(this).parents('.shipping_sector'); sector.find(".shipping_dialog").dialog({ close: function(event, ui) { $(event.target).dialog('destroy'); $(event.target).appendTo(sector); } }); return false; }); }); ``` [jsbin](http://jsbin.com/ufaxop/13/edit)
14,694,914
I'm trying to implement a jquery ui dialog. Using [this code](http://jsfiddle.net/N7PRp/) as a base, I've succeeded. But I would rather use elements' classes instead of IDs. I therefore modified the code to this: ``` $(document).ready(function() { $(".add_shipping_address").click(function() { console.log($(this).parents('.shipping_sector')); //correctly returns the parent fieldset $(this).parents('.shipping_sector').find(".shipping_dialog").dialog(); return false; }); }); ``` The dialog works the first time, but once it is closed, it will not open again. Whereas it works as expected in the source example. How have I damaged it? [jsbin](http://jsbin.com/ufaxop/1/edit)
2013/02/04
[ "https://Stackoverflow.com/questions/14694914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252748/" ]
The way jQuery dialogs work is that they take the HTML for the dialog out of its current location in the DOM and place it in a new `div` at the bottom of the DOM. When you open your dialog, its new location is defined as seen below; therefore your HTML is no longer where it was, and your selector using `find` is not going to find anything. You have to either use an `id` or the class name directly, but if you have multiple elements with that class you are better off using identifiers. In our project we craft a new div with an id specifically for the dialog, so we know which one it is. You can then either place your actual content into the new container or `clone()` it and place the copy inside. Similar to this: ``` var $dialog = $('<div id="dialog-container"></div>') var $content = $(this).parents('.shipping_sector').find(".shipping_dialog"); var $clonedContent = $(this).parents('.shipping_sector').find(".shipping_dialog").clone() // use clone(true, true) to include bound events. $dialog.append($content); // or $dialog.append($clonedContent); $dialog.dialog(); ``` But that means you also have to restructure your code slightly to deal with that. In addition, when the dialog is destroyed it does not move the HTML back to where it found it, so we have to put it back manually. Mind you, we are using jQuery 1.7 and I don't know if that is still an issue in 1.9. Dialogs are quite tricky to deal with, but if you use something similar to the above, creating a custom `div` and giving it a unique id, you have a lot of freedom.
### What your new HTML looks like when dialog is opened: ``` <div style="display: block; z-index: 1003; outline: 0px; position: absolute; height: auto; width: 300px; top: 383px; left: 86px;" class="ui-dialog ui-widget ui-widget-content ui-corner-all ui-draggable ui-resizable" tabindex="-1" role="dialog" aria-labelledby="ui-dialog-title-1"> <div class="ui-dialog-titlebar ui-widget-header ui-corner-all ui-helper-clearfix"><span class="ui-dialog-title" id="ui-dialog-title-1">Contact form</span> <a href="#" class="ui-dialog-titlebar-close ui-corner-all" role="button"><span class="ui-icon ui-icon-closethick">close</span> </a> </div> <div class="shipping_dialog ui-dialog-content ui-widget-content" style="display: block; width: auto; min-height: 91.03125px; height: auto;" scrolltop="0" scrollleft="0"> <p>appear now</p> </div> <div class="ui-resizable-handle ui-resizable-n" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-e" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-s" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-w" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-se ui-icon ui-icon-gripsmall-diagonal-se ui-icon-grip-diagonal-se" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-sw" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-ne" style="z-index: 1000;"></div> <div class="ui-resizable-handle ui-resizable-nw" style="z-index: 1000;"></div> </div> ```
Your current code is: ``` $(".OpenDialogOnClick").dialog(); ``` Just change it to: ``` $(".OpenDialogOnClick").clone().dialog(); ``` Voila, your HTML will never be destroyed/deleted again :)
14,694,914
I'm trying to implement a jquery ui dialog. Using [this code](http://jsfiddle.net/N7PRp/) as a base, I've succeeded. But I would rather use elements' classes instead of IDs. I therefore modified the code to this: ``` $(document).ready(function() { $(".add_shipping_address").click(function() { console.log($(this).parents('.shipping_sector')); //correctly returns the parent fieldset $(this).parents('.shipping_sector').find(".shipping_dialog").dialog(); return false; }); }); ``` The dialog works the first time, but once it is closed, it will not open again. Whereas it works as expected in the source example. How have I damaged it? [jsbin](http://jsbin.com/ufaxop/1/edit)
2013/02/04
[ "https://Stackoverflow.com/questions/14694914", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1252748/" ]
Your current code is: ``` $(".OpenDialogOnClick").dialog(); ``` Just change it to: ``` $(".OpenDialogOnClick").clone().dialog(); ``` Voila, your HTML will never be destroyed/deleted again :)
If you do not mind creating a new dialog every time, you can essentially destroy your dialog and move the contents back to its previous location. That way on the next click, the process will repeat itself. ``` //jquery dialog functions $(document).ready(function() { $(".add_shipping_address").click(function() { var sector = $(this).parents('.shipping_sector'); sector.find(".shipping_dialog").dialog({ close: function(event, ui) { $(event.target).dialog('destroy'); $(event.target).appendTo(sector); } }); return false; }); }); ``` [jsbin](http://jsbin.com/ufaxop/13/edit)
36,852,475
I have a folder where there are books and I have a file with the real name of each file. I renamed them in a way that I can easily see if they are ordered, say "00.pdf", "01.pdf" and so on. I want to know if there is a way, using the shell, to match each of the lines of the file, say "names", with each file. Actually, match line `i` of the file with the book in position `i` in sort order. ``` <name-of-the-book-in-the-1-line> -> <book-in-the-1-position> <name-of-the-book-in-the-2-line> -> <book-in-the-2-position> . . . <name-of-the-book-in-the-i-line> -> <book-in-the-i-position> . . . ``` I'm doing this in Windows, using Total Commander, but I want to do it in Ubuntu, so I don't have to reboot. I know about `mv` and `rename`, but I'm not as good as I'd like with regular expressions...
2016/04/25
[ "https://Stackoverflow.com/questions/36852475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3591470/" ]
You need a Babel transform. Meteor 1.3.3+ supports additional plugins and presets via `.babelrc`. Install the transform: ``` npm install babel-plugin-transform-class-properties # .babelrc { "presets": [ "meteor", "es2015", "stage-1" ], "plugins": [ "transform-class-properties" ] } ``` [Support in Meteor 1.3.3](https://github.com/meteor/meteor/pull/7033/commits/49a60f155b64e00a5f3e879540ce4be7bbb74691) [The transform](https://babeljs.io/docs/plugins/transform-class-properties/)
I changed it from extending the class to using `React.createClass`, and it now works. ``` import React from 'react'; import AppBar from 'material-ui/AppBar'; import IconButton from 'material-ui/IconButton'; import Navigationclose from 'material-ui/svg-icons/navigation/close'; import IconMenu from 'material-ui/IconMenu'; import NavigationMoreVert from 'material-ui/svg-icons/navigation/more-vert'; import MenuItem from 'material-ui/MenuItem'; import baseTheme from 'material-ui/styles/baseThemes/lightBaseTheme'; import getMuiTheme from 'material-ui/styles/getMuiTheme'; var Navbar = React.createClass({ childContextTypes: {muiTheme: React.PropTypes.object}, getChildContext() { return {muiTheme: getMuiTheme(baseTheme)}; }, navigate(event, index, item) { console.log('navigate', item); FlowRouter.go(item.route); }, getMenuItems() { console.log('navigate1'); return [ { route: '/', text: 'Home' }, { route: '/table', text: 'Table' } ]; }, render() { console.log('Render'); return (<AppBar title="Title" iconElementLeft={<IconButton><Navigationclose /></IconButton>} iconElementRight={ <IconMenu iconButtonElement={ <IconButton><NavigationMoreVert /></IconButton> } targetOrigin={{horizontal: 'right', vertical: 'top'}} anchorOrigin={{horizontal: 'right', vertical: 'top'}} > <MenuItem primaryText="Refresh"/> <MenuItem primaryText="Help"/> <MenuItem primaryText="Sign out"/> </IconMenu> } />); } }); export default Navbar; ```
41,485,804
How can I check that an uploaded file is **solely** an image and not a video file or anything else? I tested out some simple code that checks whether the uploaded file is an image. **form** ``` <form method="post" enctype="multipart/form-data"> <input type="file" name="photo" accept="image/*"> <input type="submit" name="submit" value="Test"> </form> ``` Although I have `accept="image/*"`, I can easily change it to `All types` and submit a non-image file. **code to check if the file is a valid image** (This is only for testing) ``` if($_POST['submit']) { $tmp_file = $_FILES['photo']['tmp_name']; if(mime_content_type($tmp_file)) { var_dump(mime_content_type($tmp_file)); } else { echo 'error1'; } if(getimagesize($tmp_file)) { var_dump(getimagesize($tmp_file)); } else { echo 'error2'; } } ``` So far I have run 3 tests; 2 passed, 1 failed: 1. Testing an image = passed. 2. Testing a valid non-image **.srt** file = passed (`getimagesize` gives an error). 3. Testing a valid **.mp4** video file = failed (after submission, it loads for 5-10 seconds and gives no error). What do I need to do about this? I do not know what the problem is because I get no results from either `var_dump()` or the `echo` statements. What I think now is that **PHP** is accepting the file. **Note** I need to do this so that only a valid image will be uploaded. **Note 2** The accepted answer on the marked question is not working for me. **Update** If I try to upload a video that is less than **128M**, it returns something. But if it is greater than **128M**, I get nothing on my `localhost`; when I tested this on a production site it gave me a `REQUEST TIME OUT`.
2017/01/05
[ "https://Stackoverflow.com/questions/41485804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7059329/" ]
You do not see code that adds `'\0'` to the end of the sequence because the null character is already there. An implementation of `c_str` cannot return a pointer to a new array, so the array must be stored on the `std::string` object itself. Hence, you have two valid approaches for implementing this: 1. Always store `'\0'` at the end of the `_Myptr()` array of characters on construction, or 2. Make a copy of the string on demand, add `'\0'` when `c_str()` is called, and delete the copy in the destructor. The first approach lets you return `_Myptr()` for `c_str()`, at the expense of storing an extra character for each string. The second approach requires an extra pointer per `std::string` object, so the first approach is less expensive.
The requirement is that `c_str` must return a null-terminated C string. There is nothing saying that the function has to add the null terminator itself. Most implementations (and I think all that want to be standard compliant) store the null terminator in the underlying buffer used by the string itself. One reason for this is that ``` std::string s; assert(s[0] == '\0'); ``` has to work, since `string` is now required to return the null terminator at `string[string.size()]`. If `string` did not store the null terminator in the underlying buffer then `[]` would have to do bounds checking to see if the index is at `size()` and needs to return `\0`.
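The PHP question above asks how to verify that an uploaded file really is an image, regardless of its extension or the client-supplied MIME type. One robust server-side idea is to sniff the file's leading magic bytes. Here is a minimal sketch of that idea (Python for illustration only; the signature table and function name are my own, and a real check should combine this with something like `getimagesize()`):

```python
# Magic-byte prefixes of a few common image formats (illustrative, not exhaustive).
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def sniff_image_type(data):
    """Return the matching image type for the byte prefix, or None if unknown."""
    for signature, kind in IMAGE_SIGNATURES.items():
        if data.startswith(signature):
            return kind
    return None
```

An MP4 upload fails this check immediately (MP4 files start with an `ftyp` box at offset 4, not an image signature), so the request never reaches an expensive decoding step.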
41,485,804
How can I check that an uploaded file is **solely** an image and not a video file or anything else? I tested out some simple code that checks whether the uploaded file is an image. **form** ``` <form method="post" enctype="multipart/form-data"> <input type="file" name="photo" accept="image/*"> <input type="submit" name="submit" value="Test"> </form> ``` Although I have `accept="image/*"`, I can easily change it to `All types` and submit a non-image file. **code to check if the file is a valid image** (This is only for testing) ``` if($_POST['submit']) { $tmp_file = $_FILES['photo']['tmp_name']; if(mime_content_type($tmp_file)) { var_dump(mime_content_type($tmp_file)); } else { echo 'error1'; } if(getimagesize($tmp_file)) { var_dump(getimagesize($tmp_file)); } else { echo 'error2'; } } ``` So far I have run 3 tests; 2 passed, 1 failed: 1. Testing an image = passed. 2. Testing a valid non-image **.srt** file = passed (`getimagesize` gives an error). 3. Testing a valid **.mp4** video file = failed (after submission, it loads for 5-10 seconds and gives no error). What do I need to do about this? I do not know what the problem is because I get no results from either `var_dump()` or the `echo` statements. What I think now is that **PHP** is accepting the file. **Note** I need to do this so that only a valid image will be uploaded. **Note 2** The accepted answer on the marked question is not working for me. **Update** If I try to upload a video that is less than **128M**, it returns something. But if it is greater than **128M**, I get nothing on my `localhost`; when I tested this on a production site it gave me a `REQUEST TIME OUT`.
2017/01/05
[ "https://Stackoverflow.com/questions/41485804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7059329/" ]
Before C++11, there was no requirement that a `std::string` (or the templated class `std::basic_string`, of which `std::string` is an instantiation) store a trailing `'\0'`. This was reflected in different specifications of the `data()` and `c_str()` member functions - `data()` returned a pointer to the underlying data (which was not required to be terminated with a `'\0'`) and `c_str()` returned a copy with a terminating `'\0'`. However, equally, there was no requirement to NOT store a trailing `'\0'` internally (accessing characters past the end of the stored data was undefined behaviour) ..... and, for simplicity, some implementations chose to append a trailing `'\0'` anyway. With C++11, this changed. Essentially, the `data()` member function was specified as giving the same effect as `c_str()` (i.e. the returned pointer is to the first character of an array that has a trailing `'\0'`). That has the consequence of requiring the trailing `'\0'` on the array returned by `data()`, and therefore on the internal representation. So the behaviour you're seeing is consistent with C++11 - one of the invariants of the class is a trailing `'\0'` (i.e. constructors ensure that is the case, member functions which modify the string ensure it remains true, and all public member functions can rely on it being true). The behaviour you're seeing is not inconsistent with C++ standards before C++11 either. Strictly speaking, `std::string` before C++11 was not required to maintain a trailing `'\0'` but, equally, an implementer could choose to do so.
The requirement is that `c_str` must return a null-terminated C string. There is nothing saying that the function has to add the null terminator itself. Most implementations (and I think all that want to be standard compliant) store the null terminator in the underlying buffer used by the string itself. One reason for this is that ``` std::string s; assert(s[0] == '\0'); ``` has to work, since `string` is now required to return the null terminator at `string[string.size()]`. If `string` did not store the null terminator in the underlying buffer then `[]` would have to do bounds checking to see if the index is at `size()` and needs to return `\0`.
41,485,804
How can I check that an uploaded file is **solely** an image and not a video file or anything else? I tested out some simple code that checks whether the uploaded file is an image. **form** ``` <form method="post" enctype="multipart/form-data"> <input type="file" name="photo" accept="image/*"> <input type="submit" name="submit" value="Test"> </form> ``` Although I have `accept="image/*"`, I can easily change it to `All types` and submit a non-image file. **code to check if the file is a valid image** (This is only for testing) ``` if($_POST['submit']) { $tmp_file = $_FILES['photo']['tmp_name']; if(mime_content_type($tmp_file)) { var_dump(mime_content_type($tmp_file)); } else { echo 'error1'; } if(getimagesize($tmp_file)) { var_dump(getimagesize($tmp_file)); } else { echo 'error2'; } } ``` So far I have run 3 tests; 2 passed, 1 failed: 1. Testing an image = passed. 2. Testing a valid non-image **.srt** file = passed (`getimagesize` gives an error). 3. Testing a valid **.mp4** video file = failed (after submission, it loads for 5-10 seconds and gives no error). What do I need to do about this? I do not know what the problem is because I get no results from either `var_dump()` or the `echo` statements. What I think now is that **PHP** is accepting the file. **Note** I need to do this so that only a valid image will be uploaded. **Note 2** The accepted answer on the marked question is not working for me. **Update** If I try to upload a video that is less than **128M**, it returns something. But if it is greater than **128M**, I get nothing on my `localhost`; when I tested this on a production site it gave me a `REQUEST TIME OUT`.
2017/01/05
[ "https://Stackoverflow.com/questions/41485804", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7059329/" ]
Before C++11, there was no requirement that a `std::string` (or the templated class `std::basic_string`, of which `std::string` is an instantiation) store a trailing `'\0'`. This was reflected in different specifications of the `data()` and `c_str()` member functions - `data()` returned a pointer to the underlying data (which was not required to be terminated with a `'\0'`) and `c_str()` returned a copy with a terminating `'\0'`. However, equally, there was no requirement to NOT store a trailing `'\0'` internally (accessing characters past the end of the stored data was undefined behaviour) ..... and, for simplicity, some implementations chose to append a trailing `'\0'` anyway. With C++11, this changed. Essentially, the `data()` member function was specified as giving the same effect as `c_str()` (i.e. the returned pointer is to the first character of an array that has a trailing `'\0'`). That has the consequence of requiring the trailing `'\0'` on the array returned by `data()`, and therefore on the internal representation. So the behaviour you're seeing is consistent with C++11 - one of the invariants of the class is a trailing `'\0'` (i.e. constructors ensure that is the case, member functions which modify the string ensure it remains true, and all public member functions can rely on it being true). The behaviour you're seeing is not inconsistent with C++ standards before C++11 either. Strictly speaking, `std::string` before C++11 was not required to maintain a trailing `'\0'` but, equally, an implementer could choose to do so.
You do not see code that adds `'\0'` to the end of the sequence because the null character is already there. An implementation of `c_str` cannot return a pointer to a new array, so the array must be stored on the `std::string` object itself. Hence, you have two valid approaches for implementing this: 1. Always store `'\0'` at the end of the `_Myptr()` array of characters on construction, or 2. Make a copy of the string on demand, add `'\0'` when `c_str()` is called, and delete the copy in the destructor. The first approach lets you return `_Myptr()` for `c_str()`, at the expense of storing an extra character for each string. The second approach requires an extra pointer per `std::string` object, so the first approach is less expensive.
17,711,952
Is there any way that I can remove the successive duplicates from the array below while only keeping the first one? The array is shown below: ``` $a=array("1"=>"go","2"=>"stop","3"=>"stop","4"=>"stop","5"=>"stop","6"=>"go","7"=>"go","8"=>"stop"); ``` What I want is to have an array that contains: ``` $a=array("1"=>"go","2"=>"stop","3"=>"go","7"=>"stop"); ```
2013/07/17
[ "https://Stackoverflow.com/questions/17711952", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1942410/" ]
Successive duplicates? I don't know about a native function, but this one works. Well, almost. I think I understood it wrong: in my function, 7 => "go" is a duplicate of 6 => "go", and 8 => "stop" is the new value...? ``` function filterSuccessiveDuplicates($array) { $result = array(); $lastValue = null; foreach ($array as $key => $value) { // Only add non-duplicate successive values if ($value !== $lastValue) { $result[$key] = $value; } $lastValue = $value; } return $result; } ```
You can just do something like: ``` if(current($a) !== $new_val) $a[] = $new_val; ``` Assuming you're not manipulating that array in between, you can use `current()`; it's more efficient than counting the array each time to check the value at `count($a)-1`.
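Both answers above implement the same idea: walk the array once, remember the last value, and keep a key/value pair only when its value differs from the previous one. That keep-first-of-each-run logic can be sketched language-neutrally (Python here for illustration; the function name is my own):

```python
def drop_successive_duplicates(items):
    """Keep the first (key, value) pair of each run of equal consecutive values."""
    result = {}
    last = object()  # sentinel that compares unequal to every real value
    for key, value in items.items():
        if value != last:
            result[key] = value
        last = value
    return result
```

With the question's input this keeps the first key of each run, i.e. "1", "2", "6" and "8".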
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
#### Symmetrical range around focal year & range may differ among 'id' Within each 'id' (`by = id`), use `rleid` to create a grouping variable 'r' based on runs of equal values. Within each 'id' and run (`by = .(id, r)`), check if at least the years immediately before and after the focal year (e.g. 2018) are present (`if(sum(time %in% yr_rng) == 3)`). If so, select an equal number of rows before and after the focal year (`min(c(shock - .I[1], .I[.N] - shock))`). Note that here the number of years selected may vary among 'id'. ```r library(data.table) setDT(pdata) yr = 2018 yr_rng = (yr - 1):(yr + 1) pdata[ , r := rleid(value), by = id] pdata[pdata[ , if(sum(time %in% yr_rng) == 3) { shock = .I[time == yr] rng = min(c(shock - .I[1], .I[.N] - shock)) (shock - rng):(shock + rng) }, by = .(id, r)]$V1] id time value r 1: 4 2016 0 1 2: 4 2017 0 1 3: 4 2018 0 1 4: 4 2019 0 1 5: 4 2020 0 1 6: 5 2017 0 2 7: 5 2018 0 2 8: 5 2019 0 2 9: 6 2017 1 2 10: 6 2018 1 2 11: 6 2019 1 2 12: 7 2017 1 2 13: 7 2018 1 2 14: 7 2019 1 2 15: 8 2016 1 1 16: 8 2017 1 1 17: 8 2018 1 1 18: 8 2019 1 1 19: 8 2020 1 1 ``` --- #### Allowing asymmetrical range around focal year Within each 'id' and run (`by = .(id, r)`), check if both the previous and next year around the focal year (e.g. 2018) are present (`if(sum(time %in% yr_rng) == 3)`). If so, select the entire group (`.SD`). ```r pdata[ , r := rleid(value), by = id] pdata[ , if(sum(time %in% yr_rng) == 3) .SD, by = .(id, r)] id r time value 1: 4 1 2016 0 2: 4 1 2017 0 3: 4 1 2018 0 4: 4 1 2019 0 5: 4 1 2020 0 6: 5 2 2017 0 7: 5 2 2018 0 8: 5 2 2019 0 9: 6 2 2017 1 10: 6 2 2018 1 11: 6 2 2019 1 12: 7 2 2017 1 13: 7 2 2018 1 14: 7 2 2019 1 15: 7 2 2020 1 16: 8 1 2016 1 17: 8 1 2017 1 18: 8 1 2018 1 19: 8 1 2019 1 20: 8 1 2020 1 ```
As far as I understood, here's a `dplyr` suggestion (note that `map_df()` comes from purrr, so that package must be loaded as well): ``` library(dplyr) library(purrr) MyF <- function(id2, shock, nb_row) { values <- pdata %>% filter(id == id2) %>% pull(value) if (length(unique(values)) == 1) { pdata %>% filter(id == id2) } else { pdata %>% filter(id == id2) %>% filter(time >= shock - nb_row & time <= shock + nb_row) %>% filter(length(unique(value)) == 1) } } map_df(pdata %>% select(id) %>% distinct() %>% pull(), MyF, shock = 2018, nb_row = 1) ## Or map_df(1:8,MyF,shock = 2018, nb_row = 1) ``` Output: ``` # A tibble: 19 x 3 id time value <int> <int> <dbl> 1 4 2016 0 2 4 2017 0 3 4 2018 0 4 4 2019 0 5 4 2020 0 6 5 2017 0 7 5 2018 0 8 5 2019 0 9 6 2017 1 10 6 2018 1 11 6 2019 1 12 7 2017 1 13 7 2018 1 14 7 2019 1 15 8 2016 1 16 8 2017 1 17 8 2018 1 18 8 2019 1 19 8 2020 1 ```
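The R answers above all rely on the same core operation: split each `id`'s time series into runs of consecutive equal values, then keep a run only if it covers the window of years around the shock. That operation can be sketched outside R as well (Python for illustration; the function name and tuple layout are my own assumptions):

```python
from itertools import groupby

def runs_covering(rows, years):
    """Keep runs of consecutive equal values (per id) whose times cover all `years`.

    rows: list of (id, time, value) tuples, sorted by id and then time.
    """
    kept = []
    for _, group in groupby(rows, key=lambda r: r[0]):  # one group per id
        run = []
        for row in group:
            if run and row[2] != run[-1][2]:
                # a run of equal values just ended: keep it if it spans the window
                if years <= {r[1] for r in run}:
                    kept.extend(run)
                run = []
            run.append(row)
        if years <= {r[1] for r in run}:
            kept.extend(run)
    return kept
```

This corresponds to the "asymmetrical" variant above: a qualifying run is kept in full, however long it is.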
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
A solution with [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"): ``` # load the package & convert data to a data.table library(data.table) setDT(pdata) # define shock-year and number of previous/next rows shock <- 2018 n <- 2 # filter pdata[, .SD[value == value[time == shock] & between(time, shock - n, shock + n) & value == rev(value)][.N > 1 & all(diff(time) == 1)] , by = id] ``` which gives: > > > ``` > id time value > 1: 4 2016 0 > 2: 4 2017 0 > 3: 4 2018 0 > 4: 4 2019 0 > 5: 4 2020 0 > 6: 5 2017 0 > 7: 5 2018 0 > 8: 5 2019 0 > 9: 6 2017 1 > 10: 6 2018 1 > 11: 6 2019 1 > 12: 7 2017 1 > 13: 7 2018 1 > 14: 7 2019 1 > 15: 8 2016 1 > 16: 8 2017 1 > 17: 8 2018 1 > 18: 8 2019 1 > 19: 8 2020 1 > > ``` > > --- Used data: ``` pdata <- data.frame( id = rep(1:10, each = 5), time = rep(2016:2020, times = 10), value = c(c(1,1,1,0,0), c(1,1,0,0,0), c(0,0,1,0,0), c(0,0,0,0,0), c(1,0,0,0,1), c(0,1,1,1,0), c(0,1,1,1,1), c(1,1,1,1,1), c(1,0,1,1,1), c(1,1,0,1,1)) ) ```
As far as I understood, here's a `dplyr` suggestion (note that `map_df()` comes from purrr, so that package must be loaded as well): ``` library(dplyr) library(purrr) MyF <- function(id2, shock, nb_row) { values <- pdata %>% filter(id == id2) %>% pull(value) if (length(unique(values)) == 1) { pdata %>% filter(id == id2) } else { pdata %>% filter(id == id2) %>% filter(time >= shock - nb_row & time <= shock + nb_row) %>% filter(length(unique(value)) == 1) } } map_df(pdata %>% select(id) %>% distinct() %>% pull(), MyF, shock = 2018, nb_row = 1) ## Or map_df(1:8,MyF,shock = 2018, nb_row = 1) ``` Output: ``` # A tibble: 19 x 3 id time value <int> <int> <dbl> 1 4 2016 0 2 4 2017 0 3 4 2018 0 4 4 2019 0 5 4 2020 0 6 5 2017 0 7 5 2018 0 8 5 2019 0 9 6 2017 1 10 6 2018 1 11 6 2019 1 12 7 2017 1 13 7 2018 1 14 7 2019 1 15 8 2016 1 16 8 2017 1 17 8 2018 1 18 8 2019 1 19 8 2020 1 ```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
As far as I understood, here's a `dplyr` suggestion (note that `map_df()` comes from purrr, so that package must be loaded as well): ``` library(dplyr) library(purrr) MyF <- function(id2, shock, nb_row) { values <- pdata %>% filter(id == id2) %>% pull(value) if (length(unique(values)) == 1) { pdata %>% filter(id == id2) } else { pdata %>% filter(id == id2) %>% filter(time >= shock - nb_row & time <= shock + nb_row) %>% filter(length(unique(value)) == 1) } } map_df(pdata %>% select(id) %>% distinct() %>% pull(), MyF, shock = 2018, nb_row = 1) ## Or map_df(1:8,MyF,shock = 2018, nb_row = 1) ``` Output: ``` # A tibble: 19 x 3 id time value <int> <int> <dbl> 1 4 2016 0 2 4 2017 0 3 4 2018 0 4 4 2019 0 5 4 2020 0 6 5 2017 0 7 5 2018 0 8 5 2019 0 9 6 2017 1 10 6 2018 1 11 6 2019 1 12 7 2017 1 13 7 2018 1 14 7 2019 1 15 8 2016 1 16 8 2017 1 17 8 2018 1 18 8 2019 1 19 8 2020 1 ```
One way to solve your problem using data.table: ``` library(data.table) yrs=2017:2019 setDT(pdata)[, if(uniqueN(value)==1) .(time, value) else if(uniqueN(value <- value[time %in% yrs])==1) .(time=yrs, value), by=id] # id time value # 1: 4 2016 0 # 2: 4 2017 0 # 3: 4 2018 0 # 4: 4 2019 0 # 5: 4 2020 0 # 6: 5 2017 0 # 7: 5 2018 0 # 8: 5 2019 0 # 9: 6 2017 1 # 10: 6 2018 1 # 11: 6 2019 1 # 12: 7 2017 1 # 13: 7 2018 1 # 14: 7 2019 1 # 15: 8 2016 1 # 16: 8 2017 1 # 17: 8 2018 1 # 18: 8 2019 1 # 19: 8 2020 1 ```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
#### Symmetrical range around focal year & range may differ among 'id' Within each 'id' (`by = id`), use `rleid` to create a grouping variable 'r' based on runs of equal values. Within each 'id' and run (`by = .(id, r)`), check if at least previous and next year from the focal year (e.g. 2018) are present (`if(sum(time %in% yr_rng) == 3)`). If so, select equal number of rows before and after the focal year (`min(c(shock - .I[1], .I[.N] - shock)`). Note that here the number of years selected may vary among 'id'. ```r library(data.table) setDT(pdata) yr = 2018 yr_rng = (yr - 1):(yr + 1) pdata[ , r := rleid(value), by = id] pdata[pdata[ , if(sum(time %in% yr_rng) == 3) { shock = .I[time == 2018] rng = min(c(shock - .I[1], .I[.N] - shock)) (shock - rng):(shock + rng) }, by = .(id, r)]$V1] id time value r 1: 4 2016 0 1 2: 4 2017 0 1 3: 4 2018 0 1 4: 4 2019 0 1 5: 4 2020 0 1 6: 5 2017 0 2 7: 5 2018 0 2 8: 5 2019 0 2 9: 6 2017 1 2 10: 6 2018 1 2 11: 6 2019 1 2 12: 7 2017 1 2 13: 7 2018 1 2 14: 7 2019 1 2 15: 8 2016 1 1 16: 8 2017 1 1 17: 8 2018 1 1 18: 8 2019 1 1 19: 8 2020 1 1 ``` --- #### Allowing asymmetrical range around focal year Within each 'id' and run (`by = .(id, r)`), check if both previous and next year from the focal year (e.g. 2018) are present (`if(sum(time %in% yr_rng) == 3)`). If so, select the entire group (`.SD`). ```r pdata[ , r := rleid(value), by = id] pdata[ , if(sum(time %in% yr_rng) == 3) .SD, by = .(id, r)] id r time value 1: 4 1 2016 0 2: 4 1 2017 0 3: 4 1 2018 0 4: 4 1 2019 0 5: 4 1 2020 0 6: 5 2 2017 0 7: 5 2 2018 0 8: 5 2 2019 0 9: 6 2 2017 1 10: 6 2 2018 1 11: 6 2 2019 1 12: 7 2 2017 1 13: 7 2 2018 1 14: 7 2 2019 1 15: 7 2 2020 1 16: 8 1 2016 1 17: 8 1 2017 1 18: 8 1 2018 1 19: 8 1 2019 1 20: 8 1 2020 1 ```
Here's another `dplyr` solution. We basically group by runs of equal values within each `id`, then keep only the rows whose distance to the shock time is no greater than the largest distance that occurs on both sides of it. ```r pdata %>% group_by(id) %>% mutate(value_group = cumsum(value != lag(value, default = value[1]))) %>% group_by(id, value_group) %>% mutate(shock_diff = abs(time - 2018)) %>% filter(shock_diff <= max(shock_diff[duplicated(shock_diff)], -Inf)) #> # A tibble: 19 × 5 #> # Groups: id, value_group [5] #> id time value value_group shock_diff #> <int> <int> <dbl> <int> <dbl> #> 1 4 2016 0 0 2 #> 2 4 2017 0 0 1 #> 3 4 2018 0 0 0 #> 4 4 2019 0 0 1 #> 5 4 2020 0 0 2 #> 6 5 2017 0 1 1 #> 7 5 2018 0 1 0 #> 8 5 2019 0 1 1 #> 9 6 2017 1 1 1 #> 10 6 2018 1 1 0 #> 11 6 2019 1 1 1 #> 12 7 2017 1 1 1 #> 13 7 2018 1 1 0 #> 14 7 2019 1 1 1 #> 15 8 2016 1 0 2 #> 16 8 2017 1 0 1 #> 17 8 2018 1 0 0 #> 18 8 2019 1 0 1 #> 19 8 2020 1 0 2 ```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
A solution with [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"): ``` # load the package & convert data to a data.table library(data.table) setDT(pdata) # define shock-year and number of previous/next rows shock <- 2018 n <- 2 # filter pdata[, .SD[value == value[time == shock] & between(time, shock - n, shock + n) & value == rev(value)][.N > 1 & all(diff(time) == 1)] , by = id] ``` which gives: > > > ``` > id time value > 1: 4 2016 0 > 2: 4 2017 0 > 3: 4 2018 0 > 4: 4 2019 0 > 5: 4 2020 0 > 6: 5 2017 0 > 7: 5 2018 0 > 8: 5 2019 0 > 9: 6 2017 1 > 10: 6 2018 1 > 11: 6 2019 1 > 12: 7 2017 1 > 13: 7 2018 1 > 14: 7 2019 1 > 15: 8 2016 1 > 16: 8 2017 1 > 17: 8 2018 1 > 18: 8 2019 1 > 19: 8 2020 1 > > ``` > > --- Used data: ``` pdata <- data.frame( id = rep(1:10, each = 5), time = rep(2016:2020, times = 10), value = c(c(1,1,1,0,0), c(1,1,0,0,0), c(0,0,1,0,0), c(0,0,0,0,0), c(1,0,0,0,1), c(0,1,1,1,0), c(0,1,1,1,1), c(1,1,1,1,1), c(1,0,1,1,1), c(1,1,0,1,1)) ) ```
Here's another `dplyr` solution. We basically group by sequences of unique values for each `id` and then just filter around the maximum distance to the shock time that is duplicated. ```r pdata %>% group_by(id) %>% mutate(value_group = cumsum(value != lag(value, default = value[1]))) %>% group_by(id, value_group) %>% mutate(shock_diff = abs(time - 2018)) %>% filter(shock_diff <= max(shock_diff[duplicated(shock_diff)], -Inf)) #> # A tibble: 19 × 5 #> # Groups: id, value_group [5] #> id time value value_group shock_diff #> <int> <int> <dbl> <int> <dbl> #> 1 4 2016 0 0 2 #> 2 4 2017 0 0 1 #> 3 4 2018 0 0 0 #> 4 4 2019 0 0 1 #> 5 4 2020 0 0 2 #> 6 5 2017 0 1 1 #> 7 5 2018 0 1 0 #> 8 5 2019 0 1 1 #> 9 6 2017 1 1 1 #> 10 6 2018 1 1 0 #> 11 6 2019 1 1 1 #> 12 7 2017 1 1 1 #> 13 7 2018 1 1 0 #> 14 7 2019 1 1 1 #> 15 8 2016 1 0 2 #> 16 8 2017 1 0 1 #> 17 8 2018 1 0 0 #> 18 8 2019 1 0 1 #> 19 8 2020 1 0 2 ```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
Here's another `dplyr` solution. We basically group by sequences of unique values for each `id` and then just filter around the maximum distance to the shock time that is duplicated. ```r pdata %>% group_by(id) %>% mutate(value_group = cumsum(value != lag(value, default = value[1]))) %>% group_by(id, value_group) %>% mutate(shock_diff = abs(time - 2018)) %>% filter(shock_diff <= max(shock_diff[duplicated(shock_diff)], -Inf)) #> # A tibble: 19 × 5 #> # Groups: id, value_group [5] #> id time value value_group shock_diff #> <int> <int> <dbl> <int> <dbl> #> 1 4 2016 0 0 2 #> 2 4 2017 0 0 1 #> 3 4 2018 0 0 0 #> 4 4 2019 0 0 1 #> 5 4 2020 0 0 2 #> 6 5 2017 0 1 1 #> 7 5 2018 0 1 0 #> 8 5 2019 0 1 1 #> 9 6 2017 1 1 1 #> 10 6 2018 1 1 0 #> 11 6 2019 1 1 1 #> 12 7 2017 1 1 1 #> 13 7 2018 1 1 0 #> 14 7 2019 1 1 1 #> 15 8 2016 1 0 2 #> 16 8 2017 1 0 1 #> 17 8 2018 1 0 0 #> 18 8 2019 1 0 1 #> 19 8 2020 1 0 2 ```
One way to solve your problem using data.table: ``` library(data.table) yrs=2017:2019 setDT(pdata)[, if(uniqueN(value)==1) .(time, value) else if(uniqueN(value <- value[time %in% yrs])==1) .(time=yrs, value), by=id] # id time value # 1: 4 2016 0 # 2: 4 2017 0 # 3: 4 2018 0 # 4: 4 2019 0 # 5: 4 2020 0 # 6: 5 2017 0 # 7: 5 2018 0 # 8: 5 2019 0 # 9: 6 2017 1 # 10: 6 2018 1 # 11: 6 2019 1 # 12: 7 2017 1 # 13: 7 2018 1 # 14: 7 2019 1 # 15: 8 2016 1 # 16: 8 2017 1 # 17: 8 2018 1 # 18: 8 2019 1 # 19: 8 2020 1 ```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
A solution with [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"): ``` # load the package & convert data to a data.table library(data.table) setDT(pdata) # define shock-year and number of previous/next rows shock <- 2018 n <- 2 # filter pdata[, .SD[value == value[time == shock] & between(time, shock - n, shock + n) & value == rev(value)][.N > 1 & all(diff(time) == 1)] , by = id] ``` which gives: > > > ``` > id time value > 1: 4 2016 0 > 2: 4 2017 0 > 3: 4 2018 0 > 4: 4 2019 0 > 5: 4 2020 0 > 6: 5 2017 0 > 7: 5 2018 0 > 8: 5 2019 0 > 9: 6 2017 1 > 10: 6 2018 1 > 11: 6 2019 1 > 12: 7 2017 1 > 13: 7 2018 1 > 14: 7 2019 1 > 15: 8 2016 1 > 16: 8 2017 1 > 17: 8 2018 1 > 18: 8 2019 1 > 19: 8 2020 1 > > ``` > > --- Used data: ``` pdata <- data.frame( id = rep(1:10, each = 5), time = rep(2016:2020, times = 10), value = c(c(1,1,1,0,0), c(1,1,0,0,0), c(0,0,1,0,0), c(0,0,0,0,0), c(1,0,0,0,1), c(0,1,1,1,0), c(0,1,1,1,1), c(1,1,1,1,1), c(1,0,1,1,1), c(1,1,0,1,1)) ) ```
#### Symmetrical range around focal year & range may differ among 'id'

Within each 'id' (`by = id`), use `rleid` to create a grouping variable 'r' based on runs of equal values. Within each 'id' and run (`by = .(id, r)`), check if the focal year (e.g. 2018) and the years just before and after it are all present (`if(sum(time %in% yr_rng) == 3)`). If so, select an equal number of rows before and after the focal year (`min(c(shock - .I[1], .I[.N] - shock)`). Note that here the number of years selected may vary among 'id'.

```r
library(data.table)
setDT(pdata)

yr = 2018
yr_rng = (yr - 1):(yr + 1)

pdata[ , r := rleid(value), by = id]

pdata[pdata[ , if(sum(time %in% yr_rng) == 3) {
  shock = .I[time == 2018]
  rng = min(c(shock - .I[1], .I[.N] - shock))
  (shock - rng):(shock + rng)
}, by = .(id, r)]$V1]

    id time value r
 1:  4 2016     0 1
 2:  4 2017     0 1
 3:  4 2018     0 1
 4:  4 2019     0 1
 5:  4 2020     0 1
 6:  5 2017     0 2
 7:  5 2018     0 2
 8:  5 2019     0 2
 9:  6 2017     1 2
10:  6 2018     1 2
11:  6 2019     1 2
12:  7 2017     1 2
13:  7 2018     1 2
14:  7 2019     1 2
15:  8 2016     1 1
16:  8 2017     1 1
17:  8 2018     1 1
18:  8 2019     1 1
19:  8 2020     1 1
```

---

#### Allowing asymmetrical range around focal year

Within each 'id' and run (`by = .(id, r)`), check if both the previous and next year around the focal year (e.g. 2018) are present (`if(sum(time %in% yr_rng) == 3)`). If so, select the entire group (`.SD`).

```r
pdata[ , r := rleid(value), by = id]

pdata[ , if(sum(time %in% yr_rng) == 3) .SD, by = .(id, r)]

    id r time value
 1:  4 1 2016     0
 2:  4 1 2017     0
 3:  4 1 2018     0
 4:  4 1 2019     0
 5:  4 1 2020     0
 6:  5 2 2017     0
 7:  5 2 2018     0
 8:  5 2 2019     0
 9:  6 2 2017     1
10:  6 2 2018     1
11:  6 2 2019     1
12:  7 2 2017     1
13:  7 2 2018     1
14:  7 2 2019     1
15:  7 2 2020     1
16:  8 1 2016     1
17:  8 1 2017     1
18:  8 1 2018     1
19:  8 1 2019     1
20:  8 1 2020     1
```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
#### Symmetrical range around focal year & range may differ among 'id'

Within each 'id' (`by = id`), use `rleid` to create a grouping variable 'r' based on runs of equal values. Within each 'id' and run (`by = .(id, r)`), check if the focal year (e.g. 2018) and the years just before and after it are all present (`if(sum(time %in% yr_rng) == 3)`). If so, select an equal number of rows before and after the focal year (`min(c(shock - .I[1], .I[.N] - shock)`). Note that here the number of years selected may vary among 'id'.

```r
library(data.table)
setDT(pdata)

yr = 2018
yr_rng = (yr - 1):(yr + 1)

pdata[ , r := rleid(value), by = id]

pdata[pdata[ , if(sum(time %in% yr_rng) == 3) {
  shock = .I[time == 2018]
  rng = min(c(shock - .I[1], .I[.N] - shock))
  (shock - rng):(shock + rng)
}, by = .(id, r)]$V1]

    id time value r
 1:  4 2016     0 1
 2:  4 2017     0 1
 3:  4 2018     0 1
 4:  4 2019     0 1
 5:  4 2020     0 1
 6:  5 2017     0 2
 7:  5 2018     0 2
 8:  5 2019     0 2
 9:  6 2017     1 2
10:  6 2018     1 2
11:  6 2019     1 2
12:  7 2017     1 2
13:  7 2018     1 2
14:  7 2019     1 2
15:  8 2016     1 1
16:  8 2017     1 1
17:  8 2018     1 1
18:  8 2019     1 1
19:  8 2020     1 1
```

---

#### Allowing asymmetrical range around focal year

Within each 'id' and run (`by = .(id, r)`), check if both the previous and next year around the focal year (e.g. 2018) are present (`if(sum(time %in% yr_rng) == 3)`). If so, select the entire group (`.SD`).

```r
pdata[ , r := rleid(value), by = id]

pdata[ , if(sum(time %in% yr_rng) == 3) .SD, by = .(id, r)]

    id r time value
 1:  4 1 2016     0
 2:  4 1 2017     0
 3:  4 1 2018     0
 4:  4 1 2019     0
 5:  4 1 2020     0
 6:  5 2 2017     0
 7:  5 2 2018     0
 8:  5 2 2019     0
 9:  6 2 2017     1
10:  6 2 2018     1
11:  6 2 2019     1
12:  7 2 2017     1
13:  7 2 2018     1
14:  7 2 2019     1
15:  7 2 2020     1
16:  8 1 2016     1
17:  8 1 2017     1
18:  8 1 2018     1
19:  8 1 2019     1
20:  8 1 2020     1
```
One way to solve your problem using data.table: ``` library(data.table) yrs=2017:2019 setDT(pdata)[, if(uniqueN(value)==1) .(time, value) else if(uniqueN(value <- value[time %in% yrs])==1) .(time=yrs, value), by=id] # id time value # 1: 4 2016 0 # 2: 4 2017 0 # 3: 4 2018 0 # 4: 4 2019 0 # 5: 4 2020 0 # 6: 5 2017 0 # 7: 5 2018 0 # 8: 5 2019 0 # 9: 6 2017 1 # 10: 6 2018 1 # 11: 6 2019 1 # 12: 7 2017 1 # 13: 7 2018 1 # 14: 7 2019 1 # 15: 8 2016 1 # 16: 8 2017 1 # 17: 8 2018 1 # 18: 8 2019 1 # 19: 8 2020 1 ```
70,677,153
I've always used the term "environment variable", but I have a well-informed colleague who consistently says "environment**al** variable". Which one is correct?
2022/01/12
[ "https://Stackoverflow.com/questions/70677153", "https://Stackoverflow.com", "https://Stackoverflow.com/users/119527/" ]
A solution with [data.table](/questions/tagged/data.table "show questions tagged 'data.table'"): ``` # load the package & convert data to a data.table library(data.table) setDT(pdata) # define shock-year and number of previous/next rows shock <- 2018 n <- 2 # filter pdata[, .SD[value == value[time == shock] & between(time, shock - n, shock + n) & value == rev(value)][.N > 1 & all(diff(time) == 1)] , by = id] ``` which gives: > > > ``` > id time value > 1: 4 2016 0 > 2: 4 2017 0 > 3: 4 2018 0 > 4: 4 2019 0 > 5: 4 2020 0 > 6: 5 2017 0 > 7: 5 2018 0 > 8: 5 2019 0 > 9: 6 2017 1 > 10: 6 2018 1 > 11: 6 2019 1 > 12: 7 2017 1 > 13: 7 2018 1 > 14: 7 2019 1 > 15: 8 2016 1 > 16: 8 2017 1 > 17: 8 2018 1 > 18: 8 2019 1 > 19: 8 2020 1 > > ``` > > --- Used data: ``` pdata <- data.frame( id = rep(1:10, each = 5), time = rep(2016:2020, times = 10), value = c(c(1,1,1,0,0), c(1,1,0,0,0), c(0,0,1,0,0), c(0,0,0,0,0), c(1,0,0,0,1), c(0,1,1,1,0), c(0,1,1,1,1), c(1,1,1,1,1), c(1,0,1,1,1), c(1,1,0,1,1)) ) ```
One way to solve your problem using data.table: ``` library(data.table) yrs=2017:2019 setDT(pdata)[, if(uniqueN(value)==1) .(time, value) else if(uniqueN(value <- value[time %in% yrs])==1) .(time=yrs, value), by=id] # id time value # 1: 4 2016 0 # 2: 4 2017 0 # 3: 4 2018 0 # 4: 4 2019 0 # 5: 4 2020 0 # 6: 5 2017 0 # 7: 5 2018 0 # 8: 5 2019 0 # 9: 6 2017 1 # 10: 6 2018 1 # 11: 6 2019 1 # 12: 7 2017 1 # 13: 7 2018 1 # 14: 7 2019 1 # 15: 8 2016 1 # 16: 8 2017 1 # 17: 8 2018 1 # 18: 8 2019 1 # 19: 8 2020 1 ```
16,542,099
I'm working on an MVC app. When I call context.SaveChanges to update a specific record, the update is not registered in the database. I do not get any runtime error either. All I notice is that my record is not updated. I still see the same values. Insert functionality works perfectly.

```
public Admission Update(int stuid){
     VDData.VidyaDaanEntities context = new VDData.VidyaDaanEntities();
     VDData.Student_Master studentmaster = new VDData.Student_Master();
     studentmaster.Student_ID = stuid;
     studentmaster.Student_First_Name = this.FirstName;
     studentmaster.Student_Middle_Name = this.MiddleName;
     studentmaster.Student_Last_Name = this.LastName;
     studentmaster.Student_Address_1 = this.Address;
     studentmaster.Student_Address_2 = this.Address2;
     studentmaster.Student_City = this.City;
     studentmaster.Student_State = this.State;
     studentmaster.Student_Pin_Code = this.Pincode;

     context.SaveChanges(); // here it won't give any kind of error. it runs successfully.
}
```
2013/05/14
[ "https://Stackoverflow.com/questions/16542099", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1203470/" ]
First get the entity you are going to update:

```
var entity = obj.GetEntity(id);
entity.col1 = "value";
context.SaveChanges(); // SaveChanges takes no arguments
```

hope this will help.
Before

```
context.SaveChanges();
```

You need to call this

```
context.Student_Masters.Add(studentmaster);
```

**Edit:** introduce `Abstraction` to your `Context class` and create a method in your context class like below; then you can call it whenever you want to create or update your objects.

```
public void SaveStudent_Master(Student_Master studentmaster)
{
    using (var context = new VDData.VidyaDaanEntities())
    {
        if (studentmaster.Student_ID == 0)
        {
            context.Student_Masters.Add(studentmaster);
        }
        else if (studentmaster.Student_ID > 0)
        {
            //This updates an N-level deep object graph
            //This is important for updates
            var currentStudent_Master = context.Student_Masters
                .Single(s => s.Student_ID == studentmaster.Student_ID);

            context.Entry(currentStudent_Master).CurrentValues.SetValues(studentmaster);
        }
        context.SaveChanges();
    }
}
```

Then in your Controller replace `context.SaveChanges();` with `_context.SaveStudent_Master(studentmaster);`
16,542,099
I'm working on an MVC app. When I call context.SaveChanges to update a specific record, the update is not registered in the database. I do not get any runtime error either. All I notice is that my record is not updated. I still see the same values. Insert functionality works perfectly.

```
public Admission Update(int stuid){
     VDData.VidyaDaanEntities context = new VDData.VidyaDaanEntities();
     VDData.Student_Master studentmaster = new VDData.Student_Master();
     studentmaster.Student_ID = stuid;
     studentmaster.Student_First_Name = this.FirstName;
     studentmaster.Student_Middle_Name = this.MiddleName;
     studentmaster.Student_Last_Name = this.LastName;
     studentmaster.Student_Address_1 = this.Address;
     studentmaster.Student_Address_2 = this.Address2;
     studentmaster.Student_City = this.City;
     studentmaster.Student_State = this.State;
     studentmaster.Student_Pin_Code = this.Pincode;

     context.SaveChanges(); // here it won't give any kind of error. it runs successfully.
}
```
2013/05/14
[ "https://Stackoverflow.com/questions/16542099", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1203470/" ]
It seems like you want to update, so your code should be

```
VDData.Student_Master studentmaster = context.Student_Masters.Single(p => p.Student_ID == stuid);
```

And you should not change the Student\_ID if it is the primary key.

```
public Admission Update(int stuid){
     VDData.VidyaDaanEntities context = new VDData.VidyaDaanEntities();

     //VDData.Student_Master studentmaster = new VDData.Student_Master();
     //REPLACE WITH
     VDData.Student_Master studentmaster = context.Student_Masters.Single(p => p.Student_ID == stuid);

     studentmaster.Student_First_Name = this.FirstName;
     studentmaster.Student_Middle_Name = this.MiddleName;
     studentmaster.Student_Last_Name = this.LastName;
     studentmaster.Student_Address_1 = this.Address;
     studentmaster.Student_Address_2 = this.Address2;
     studentmaster.Student_City = this.City;
     studentmaster.Student_State = this.State;
     studentmaster.Student_Pin_Code = this.Pincode;

     context.SaveChanges();
}
```
Before

```
context.SaveChanges();
```

You need to call this

```
context.Student_Masters.Add(studentmaster);
```

**Edit:** introduce `Abstraction` to your `Context class` and create a method in your context class like below; then you can call it whenever you want to create or update your objects.

```
public void SaveStudent_Master(Student_Master studentmaster)
{
    using (var context = new VDData.VidyaDaanEntities())
    {
        if (studentmaster.Student_ID == 0)
        {
            context.Student_Masters.Add(studentmaster);
        }
        else if (studentmaster.Student_ID > 0)
        {
            //This updates an N-level deep object graph
            //This is important for updates
            var currentStudent_Master = context.Student_Masters
                .Single(s => s.Student_ID == studentmaster.Student_ID);

            context.Entry(currentStudent_Master).CurrentValues.SetValues(studentmaster);
        }
        context.SaveChanges();
    }
}
```

Then in your Controller replace `context.SaveChanges();` with `_context.SaveStudent_Master(studentmaster);`
370,385
I was wondering where the Google servers reside and how their DNS lookup works. I'm located in Germany right now. If I'm calling google.de (the German Google page), is the server located in Germany for all the searches, or are they split throughout the world? If I'm calling google.com, does it automatically connect to the US servers, or does it try to look for the search results on a German server first? I was wondering because I noticed the really low latency when pinging google.com. I can't imagine such a low ping if the servers reside outside of Germany. So, how does the lookup of a search keyword work, concerning connecting to their servers? I tried traceroute, but couldn't make much of it. Does it depend on the keyword? Does it depend on several different factors which server is actually being used?
2012/03/16
[ "https://serverfault.com/questions/370385", "https://serverfault.com", "https://serverfault.com/users/109245/" ]
How Google search *actually* works is, of course, a closely-guarded secret. However, in the past there has been some info coming out of them with general practices they employ.

First off, Google has **hundreds** of datacenters - back in 2008-ish, they were already estimated to run on several hundred thousand servers; you can safely assume they have more than a million now - and that's not counting the new 800-thousand-server datacenter they're building out in the Nevada desert :) These are not necessarily state-of-the-art servers - their platform is "cloud"-ed by its very design, and any number of nodes may die without the slightest detectable change in service.

Basically, they have servers in three tiers: frontend search, middle layer, and backend ("deep") storage. For every single bit of information Google search can provide, the information will be stored in several places - oft-used results perhaps in hundreds of places. While most of these will use close-by servers to provide answers, they don't have to - if you are searching for a very obscure but specific piece of information, they may have to reach out to one of a few servers that has that piece worldwide. For daily news (for example), it'll be on thousands of servers, and you will get the closest one.

Search on YouTube for some Google architecture videos; I remember this being online some years back.
They probably have multiple datacenters on every continent, and thanks to anycasting they can announce the same networks from multiple providers/datacenters. You will always take the least expensive path (in terms of AS paths, hops, metrics, bandwidth between peers, etc.), therefore you will experience low latency from everywhere.

You can read more about anycasting here: <http://en.wikipedia.org/wiki/Anycast>
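The "least expensive path" behaviour described above can be illustrated with a toy sketch: several sites announce the same prefix, and a client simply lands on whichever site is cheapest by the routing metric. The site names and costs here are made up for illustration; real path selection happens in BGP, not in application code:

```python
def pick_anycast_site(route_costs):
    """route_costs: mapping of site -> route cost (AS-path length, hops, ...).

    Returns the site a client would reach: the one with the lowest cost.
    """
    return min(route_costs, key=route_costs.get)
```

So a client in Germany whose cheapest route leads to a Frankfurt site sees Frankfurt-level latency even though the same address is announced worldwide.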
370,385
I was wondering where the Google servers reside and how their DNS lookup works. I'm located in Germany right now. If I'm calling google.de (the German Google page), is the server located in Germany for all the searches, or are they split throughout the world? If I'm calling google.com, does it automatically connect to the US servers, or does it try to look for the search results on a German server first? I was wondering because I noticed the really low latency when pinging google.com. I can't imagine such a low ping if the servers reside outside of Germany. So, how does the lookup of a search keyword work, concerning connecting to their servers? I tried traceroute, but couldn't make much of it. Does it depend on the keyword? Does it depend on several different factors which server is actually being used?
2012/03/16
[ "https://serverfault.com/questions/370385", "https://serverfault.com", "https://serverfault.com/users/109245/" ]
They probably have multiple datacenters on every continent, and thanks to anycasting they can announce the same networks from multiple providers/datacenters. You will always take the least expensive path (in terms of AS paths, hops, metrics, bandwidth between peers, etc.), therefore you will experience low latency from everywhere.

You can read more about anycasting here: <http://en.wikipedia.org/wiki/Anycast>
The closest DNS entry answers your request; the records differ between Google.de, Google.fr and .com. This works in your favor, so you access the service with fewer network hops. However, apart from the large Google DCs, the servers that you and I connect to are most probably **GGC (Google Global Cache)** servers. They are located at large network POPs and ISPs of almost all tiers. You could say they are a CDN in some way.

You can find out more on their GGC Beta program: <http://ggcadmin.google.com/ggc>

btw.. even though it's a BETA program, it's far from a Beta deployment ;)
370,385
I was wondering where the Google servers reside and how their DNS lookup works. I'm located in Germany right now. If I'm calling google.de (the German Google page), is the server located in Germany for all the searches, or are they split throughout the world? If I'm calling google.com, does it automatically connect to the US servers, or does it try to look for the search results on a German server first? I was wondering because I noticed the really low latency when pinging google.com. I can't imagine such a low ping if the servers reside outside of Germany. So, how does the lookup of a search keyword work, concerning connecting to their servers? I tried traceroute, but couldn't make much of it. Does it depend on the keyword? Does it depend on several different factors which server is actually being used?
2012/03/16
[ "https://serverfault.com/questions/370385", "https://serverfault.com", "https://serverfault.com/users/109245/" ]
How Google search *actually* works is, of course, a closely-guarded secret. However, in the past there has been some info coming out of them with general practices they employ.

First off, Google has **hundreds** of datacenters - back in 2008-ish, they were already estimated to run on several hundred thousand servers; you can safely assume they have more than a million now - and that's not counting the new 800-thousand-server datacenter they're building out in the Nevada desert :) These are not necessarily state-of-the-art servers - their platform is "cloud"-ed by its very design, and any number of nodes may die without the slightest detectable change in service.

Basically, they have servers in three tiers: frontend search, middle layer, and backend ("deep") storage. For every single bit of information Google search can provide, the information will be stored in several places - oft-used results perhaps in hundreds of places. While most of these will use close-by servers to provide answers, they don't have to - if you are searching for a very obscure but specific piece of information, they may have to reach out to one of a few servers that has that piece worldwide. For daily news (for example), it'll be on thousands of servers, and you will get the closest one.

Search on YouTube for some Google architecture videos; I remember this being online some years back.
The closest DNS entry answers your request; the records differ between Google.de, Google.fr and .com. This works in your favor, so you access the service with fewer network hops. However, apart from the large Google DCs, the servers that you and I connect to are most probably **GGC (Google Global Cache)** servers. They are located at large network POPs and ISPs of almost all tiers. You could say they are a CDN in some way.

You can find out more on their GGC Beta program: <http://ggcadmin.google.com/ggc>

btw.. even though it's a BETA program, it's far from a Beta deployment ;)
42,438,055
I try to execute such a scenario via a Jenkins "execute shell" build step:

```
rm -r -f _dpatch;
mkdir _dpatch;
mkdir _dpatch/deploy;
from_revision='HEAD';
to_revision='2766920';
git diff --name-only $from_revision $to_revision > "_dpatch/deploy/files.txt";
for file in $(<"_dpatch/deploy/files.txt");
do
    cp --parents "$file" "_dpatch";
done;
whoami
```

Build ends successfully with console output:

```
[Deploy to production] $ /bin/sh -xe /tmp/hudson8315034696077699718.sh
+ rm -r -f _dpatch
+ mkdir _dpatch
+ mkdir _dpatch/deploy
+ from_revision=HEAD
+ to_revision=2766920
+ git diff --name-only HEAD 2766920
+ + whoami
jenkins
Finished: SUCCESS
```

The problem is that the "for file in" line is just ignored, and I do not understand why. The content of files.txt is not empty and looks like this:

```
addons/tiny_mce/plugins/image/plugin.min.org.js
addons/webrtc/adapter-latest.js
templates/standard/style/review.css
```

Moreover, when I execute the same script via ssh in the same Jenkins workspace folder under the same user (jenkins), the "for file in" line executes normally and creates files in the "_dpatch" subfolder as it should.

My environment: Debian 8, Jenkins 2.45

Thanks
2017/02/24
[ "https://Stackoverflow.com/questions/42438055", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2248804/" ]
**Solution:** As far as I know, descriptors are the only objects that can distinguish whether they are invoked for a class or for an instance, because of the `__get__` function signature: `__get__(self, instance, instance_type)`. This property allows us to build a switch on top of it.

```
from types import FunctionType  # needed for the isinstance checks below


def _is_descriptor(obj):
    # helper assumed by the snippet; anything defining __get__ counts
    return hasattr(obj, '__get__')


class boundmethod(object):
    def __init__(self, cls_method=None, instance_method=None, doc=None):
        self._cls_method = cls_method
        self._instance_method = instance_method
        if cls_method:
            self._method_name = cls_method.__name__
        elif instance_method:
            self._method_name = instance_method.__name__
        if doc is None and cls_method is not None:
            doc = cls_method.__doc__
        self.__doc__ = doc
        self._method = None
        self._object = None

    def _find_method(self, instance, instance_type, method_name):
        for base in instance_type.mro()[1:]:
            method = getattr(base, method_name, None)
            if _is_descriptor(method):
                method = method.__get__(instance, base)
            if method and method is not self:
                try:
                    return method.__func__
                except AttributeError:
                    return method

    def __get__(self, instance, instance_type):
        if instance is None:
            self._method = self._cls_method or self._find_method(instance, instance_type, self._method_name)
            self._object = instance_type
        else:
            self._method = self._instance_method or self._find_method(instance, instance_type, self._method_name)
            self._object = instance
        return self

    @staticmethod
    def cls_method(obj=None):
        def constructor(cls_method):
            if obj is None:
                return boundmethod(cls_method, None, cls_method.__doc__)
            else:
                return type(obj)(cls_method, obj._instance_method, obj.__doc__)

        if isinstance(obj, FunctionType):
            return boundmethod(obj, None, obj.__doc__)
        else:
            return constructor

    @staticmethod
    def instance_method(obj=None):
        def constructor(instance_method):
            if obj is None:
                return boundmethod(None, instance_method, instance_method.__doc__)
            else:
                return type(obj)(obj._cls_method, instance_method, obj.__doc__)

        if isinstance(obj, FunctionType):
            return boundmethod(None, obj, obj.__doc__)
        else:
            return constructor

    def __call__(self, *args, **kwargs):
        if self._method:
            try:
                return self._method(self._object, *args, **kwargs)
            except TypeError:
                return self._method(*args, **kwargs)
        return None
```

**Example:**

```
>>> class Walkmen(object):
...     @boundmethod.cls_method
...     def start(self):
...         return 'Walkmen start class bound method'
...     @boundmethod.instance_method(start)
...     def start(self):
...         return 'Walkmen start instance bound method'

>>> print Walkmen.start()
Walkmen start class bound method
>>> print Walkmen().start()
Walkmen start instance bound method
```

I hope it will help some of you guys.

Best.
I actually just asked this question ([Python descriptors and inheritance](https://stackoverflow.com/questions/44771706/python-descriptors-and-inheritance/44835452#44835452) — I hadn't seen this question). [My solution](https://stackoverflow.com/a/44835452/1575353) uses descriptors and a metaclass for inheritance. From [my answer](https://stackoverflow.com/a/44835452/1575353):

```
import types


class dynamicmethod:
    '''
    Descriptor to allow dynamic dispatch on calls to class.Method vs obj.Method

    fragile when used with inheritance; to inherit and then overwrite or extend
    a dynamicmethod, a class must have dynamicmethod_meta as its metaclass
    '''
    def __init__(self, f=None, m=None):
        self.f = f
        self.m = m

    def __get__(self, obj, objtype=None):
        if obj is not None and self.f is not None:
            return types.MethodType(self.f, obj)
        elif objtype is not None and self.m is not None:
            return types.MethodType(self.m, objtype)
        else:
            raise AttributeError('No associated method')

    def method(self, f):
        return type(self)(f, self.m)

    def classmethod(self, m):
        return type(self)(self.f, m)


def make_dynamicmethod_meta(meta):
    class _dynamicmethod_meta(meta):
        def __prepare__(name, bases, **kwargs):
            d = meta.__prepare__(name, bases, **kwargs)
            for base in bases:
                for k, v in base.__dict__.items():
                    if isinstance(v, dynamicmethod):
                        if k in d:
                            raise ValueError('Multiple base classes define the same dynamicmethod')
                        d[k] = v
            return d
    return _dynamicmethod_meta


dynamicmethod_meta = make_dynamicmethod_meta(type)


class A(metaclass=dynamicmethod_meta):
    @dynamicmethod
    def a(self):
        print('Called from obj {} defined in A'.format(self))

    @a.classmethod
    def a(cls):
        print('Called from class {} defined in A'.format(cls))


class B(A):
    @a.method
    def a(self):
        print('Called from obj {} defined in B'.format(self))


A.a()
A().a()
B.a()
B().a()
```

results in:

```
Called from class <class 'A'> defined in A
Called from obj <A object at ...> defined in A
Called from class <class 'B'> defined in A
Called from obj <B object at ...> defined in B
```
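Both answers above hinge on the same mechanism: `__get__` receives `(instance, owner)`, so a descriptor can tell a class lookup (`instance is None`) from an instance lookup and dispatch accordingly. A stripped-down sketch of just that switch (the `classinstancemethod` name and the lambdas are invented for illustration):

```python
from types import MethodType

class classinstancemethod:
    """Descriptor that binds one function for class access, another for instance access."""
    def __init__(self, cls_func, inst_func):
        self.cls_func, self.inst_func = cls_func, inst_func

    def __get__(self, instance, owner):
        if instance is None:
            # looked up on the class: bind to the class object
            return MethodType(self.cls_func, owner)
        # looked up on an instance: bind to the instance
        return MethodType(self.inst_func, instance)

class Walkmen:
    start = classinstancemethod(
        lambda cls: 'class bound',
        lambda self: 'instance bound',
    )
```

`Walkmen.start()` then returns `'class bound'` while `Walkmen().start()` returns `'instance bound'`.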
4,432,288
I have the CTE as a UDF and am trying to get it to take a default value of nothing in which case the result returned should be everything. I want to call it as a default like this: ``` select * from fnGetEmployeeHierarchyByUsername ``` my UDF/ CTE is: ``` alter FUNCTION [dbo].[fnGetEmployeeHierarchyByUsername] ( @AMRSNTID varchar(100) = null ) RETURNS TABLE AS RETURN ( WITH yourcte AS ( SELECT EmployeeId, ManagerAMRSNTID, ManagerID, AMRSNTID, FullName, 0 as depth--, Name FROM Employees WHERE AMRSNTID = @AMRSNTID UNION ALL SELECT e.EmployeeId, e.ManagerAMRSNTID, e.ManagerID, e.AMRSNTID, e.FullName, y.depth+1 as depth--, e.Name FROM Employees e JOIN yourcte y ON e.ManagerAMRSNTID = y.AMRSNTID ) SELECT EmployeeId, ManagerID, AMRSNTID, FullName, depth--, Name FROM yourcte ) ``` How can I get it to work like this?
2010/12/13
[ "https://Stackoverflow.com/questions/4432288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/352157/" ]
You cannot get any information about the user unless the user gives permission. Unfortunately. But with "OAuth 2.0 for Canvas" you do receive the signed\_request POST. And this is the only way to go about it: <http://developers.facebook.com/docs/authentication/canvas>

But the user still needs to authorize your application.
``` $json = file_get_contents("http://graph.facebook.com/$uid"); $fdata = json_decode($json); $fname = $fdata->name; echo 'Merry Christmas x ' . $fname . ' x '; ```
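The PHP snippet above boils down to fetching JSON for a user id from graph.facebook.com and reading its `name` field. The parsing half looks like this in Python; the fetch itself is omitted here (unauthenticated name lookups like the one above no longer work on current Graph API versions, so treat this as a sketch of the JSON handling only):

```python
import json

def greeting_from_graph_json(raw):
    # raw: the JSON body returned for http://graph.facebook.com/<uid>
    fdata = json.loads(raw)
    return "Merry Christmas x {} x".format(fdata["name"])
```

For a response body of `{"name": "John Smith"}` this produces the same greeting the PHP echo builds.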
4,432,288
I have the CTE as a UDF and am trying to get it to take a default value of nothing in which case the result returned should be everything. I want to call it as a default like this: ``` select * from fnGetEmployeeHierarchyByUsername ``` my UDF/ CTE is: ``` alter FUNCTION [dbo].[fnGetEmployeeHierarchyByUsername] ( @AMRSNTID varchar(100) = null ) RETURNS TABLE AS RETURN ( WITH yourcte AS ( SELECT EmployeeId, ManagerAMRSNTID, ManagerID, AMRSNTID, FullName, 0 as depth--, Name FROM Employees WHERE AMRSNTID = @AMRSNTID UNION ALL SELECT e.EmployeeId, e.ManagerAMRSNTID, e.ManagerID, e.AMRSNTID, e.FullName, y.depth+1 as depth--, e.Name FROM Employees e JOIN yourcte y ON e.ManagerAMRSNTID = y.AMRSNTID ) SELECT EmployeeId, ManagerID, AMRSNTID, FullName, depth--, Name FROM yourcte ) ``` How can I get it to work like this?
2010/12/13
[ "https://Stackoverflow.com/questions/4432288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/352157/" ]
I've found the answer. Just use `fb:userlink` to show the user's name on a TAB/Canvas app. It also works within the Static FBML app. Writing: ``` Merry Christmas <fb:userlink uid="loggedinuser"/> ``` will show: ``` Merry Christmas John Smith ``` if John Smith is viewing it :) The name will be formatted as a link, but you can style it with CSS.
``` $json = file_get_contents("http://graph.facebook.com/$uid"); $fdata = json_decode($json); $fname = $fdata->name; echo 'Merry Christmas x ' . $fname . ' x '; ```
12,182,382
**Please, someone fix this JavaScript code.** This script reads the URL parameter and then, depending on the parameter, SHOWs/HIDEs table rows. I found this script on Stack Overflow, but when I tried it in Dreamweaver, it's not working. Please go through the script and fix what is wrong in it. The script is: There are two pages here... First is `First_page.html`: ``` <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <body> <a href="Second_page.html?showid=tblRow14">First Row</a><br /> <a href="Second_page.html?showid=tblRow46">Second Row</a><br /> <a href="Second_page.html?showid=tblRow30">Third Row</a><br /> </body> </html> ``` and the `Second_page.html` is as below: ``` <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> <head> <style> #theTable>tbody>tr { display: none; } //hide rows by default </style> <script type="text/javascript"> function getUrlVar(varName) { //returns empty string if variable name not found in URL if (!varName) return ''; //no variable name specified. exit and return empty string varName = varName.toLowerCase(); //convert to lowercase var params = location.search; //get URL if (params == '') return ''; //no variables at all. exit and return empty string var vars = params.split('?')[1].split('&'); //get list of variable+value strings for (var i = 0; i < vars.length; i++) { //check each variable var varPair = vars[i].split('='); //split variable and its value if (varPair.length > 1) { //has "=" separator if (varPair[0].toLowerCase() == varName) { //same variable name? return varPair[1]; //found variable. exit and return its value } //else: check next variable, if any } //else: is not an array. i.e.: invalid URL variable+value format. ignore it } return ''; //no matching variable found. exit and return empty string } function show() { var value = getUrlVar('showid'); //get variable value if (!value) return; //variable not found if (parseInt(value) == NaN) return; //value is not a number var row = document.getElementById('tblRow' + value); //get the element by ID name if (!row) return; //element not found row.style.display = 'inherit'; //set element display style to inherited value (which is visible by default) } </script> </head> <body onLoad="show();"> <table id="theTable"> <tr id="tblRow14"><td>row ID 14</td></tr> <tr id="tblRow46"><td>row ID 46</td></tr> <tr id="tblRow30"><td>row ID 30</td></tr> </table> </body> </html> ``` Please fix it, or help me find new JavaScript code that works the same way this does... Please help me with this.
2012/08/29
[ "https://Stackoverflow.com/questions/12182382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1633771/" ]
You don't have an apostrophe problem, you have an exclamation point problem. An exclamation point is neither word (`\w`) nor whitespace (`\s`) nor an apostrophe. So you should add `!` to your character class if you want to allow it.
Can't you get away with a simple `.`, e.g., `'<(.+)>'`? Also, it's typically easier if you don't use single quotes for the string if you need to embed a single quote inside, e.g., `"<([\w\s']+)>"`.
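To make the character-class fix concrete, here is a small Python sketch (Python chosen only for illustration; the asker's regex flavour isn't stated, but `\w`, `\s` and character classes behave the same way in most engines):

```python
import re

# The original class: word characters, whitespace, apostrophes.
# "<It's a test!>" fails because "!" matches none of those.
old_pattern = re.compile(r"<([\w\s']+)>")

# Adding "!" to the character class lets the exclamation point through.
new_pattern = re.compile(r"<([\w\s'!]+)>")

text = "<It's a test!>"
print(old_pattern.search(text))           # no match
print(new_pattern.search(text).group(1))  # It's a test!
```

The same one-character change applies in whatever language the original pattern lives in.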
25,193,869
I keep getting Valgrind errors in my code and I have no idea how to fix it. :/ The idea is that no matter how many tabs/spaces are between 2 or more words/letters in the input, the output should contain only one space between them. For example: ``` a b c d -> a b c d ``` code: ``` char* echo(char* in) { char buffer[256]; int incounter=0, buffcounter=0; while(incounter<(strlen(in))) { if(in[incounter] == ' ' || in[incounter] == '\t') incounter++; else if(in[incounter] != ' ' && in[incounter] != '\t') { while(in[incounter] != ' ' && in[incounter] != '\t') { buffer[buffcounter] = in[incounter]; //53 incounter++; buffcounter++; } buffer[buffcounter] = ' '; buffcounter++; } } char* out = buffer; return out; } ``` errors: ``` ==20521== Conditional jump or move depends on uninitialised value(s) ==20521== at 0x4010B4: echo (hhush.c:53) ==20521== by 0x4021CA: readCommand (hhush.c:327) ==20521== by 0x402538: main (hhush.c:371) ==20521== Uninitialised value was created by a stack allocation ==20521== at 0x402017: readCommand (hhush.c:301) ==20521== ==20521== Conditional jump or move depends on uninitialised value(s) ==20521== at 0x4010CE: echo (hhush.c:53) ==20521== by 0x4021CA: readCommand (hhush.c:327) ==20521== by 0x402538: main (hhush.c:371) ==20521== Uninitialised value was created by a stack allocation ==20521== at 0x402017: readCommand (hhush.c:301) ``` That's where I am now, still the same errors: ``` char* echo(char* in,char* buffer){ size_t inlen=strlen(in); int incounter=0,buffcounter=0; while(incounter<inlen){ if(in[incounter]==' '||in[incounter]=='\t')incounter++; else{ while(in[incounter]!=' '&&in[incounter]!='\t'){ buffer[buffcounter]=in[incounter]; incounter++; buffcounter++; } buffer[buffcounter]=' '; buffcounter++; } } return buffer; } ``` I call it with: ``` char input[256]; fgets(input,sizeof(input),stdin); ... char buffer[256]; printf("%s\n",echo(input,buffer)); ```
2014/08/07
[ "https://Stackoverflow.com/questions/25193869", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3910643/" ]
``` char* out=buffer; return out; } ``` Do not return a pointer to an array with automatic storage duration (here `buffer` array). If the pointer value is accessed, it invokes undefined behavior as automatic objects are discarded when the block where they are declared is exited (here when `echo` function returns).
If you want to use the buffer outside of the function then you should pass the allocated buffer to the function. ``` void echo(char *in, char *buffer) { //do stuff } ``` This way you won't be trying to access memory that has gone out of scope. You can then use the function as such ``` char buffer[256]; echo(string, buffer); ``` The buffer should now contain your edited string.
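As a side note, it can help to pin down the intended behaviour with a tiny reference implementation before wrestling with the C memory management. Here is a Python sketch of the collapse-blanks logic (an assumption of mine: leading/trailing blanks are dropped and no trailing space is emitted, which differs slightly from the C draft above):

```python
def collapse_blanks(s):
    """Replace every run of spaces/tabs with a single space."""
    out = []
    in_blank = False
    for ch in s:
        if ch in (' ', '\t'):
            in_blank = True              # inside a run of blanks
        else:
            if in_blank and out:
                out.append(' ')          # emit one separator per run
            in_blank = False
            out.append(ch)
    return ''.join(out)

print(collapse_blanks("a   b \t c      d"))  # a b c d
```

Comparing the C function's output against something like this for a handful of inputs makes off-by-one bugs (such as the inner loop in the question running past the terminating NUL, since `'\0'` is neither a space nor a tab) much easier to spot.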
25,193,869
I keep getting Valgrind errors in my code and I have no idea how to fix it. :/ The idea is that no matter how many tabs/spaces are between 2 or more words/letters in the input, the output should contain only one space between them. For example: ``` a b c d -> a b c d ``` code: ``` char* echo(char* in) { char buffer[256]; int incounter=0, buffcounter=0; while(incounter<(strlen(in))) { if(in[incounter] == ' ' || in[incounter] == '\t') incounter++; else if(in[incounter] != ' ' && in[incounter] != '\t') { while(in[incounter] != ' ' && in[incounter] != '\t') { buffer[buffcounter] = in[incounter]; //53 incounter++; buffcounter++; } buffer[buffcounter] = ' '; buffcounter++; } } char* out = buffer; return out; } ``` errors: ``` ==20521== Conditional jump or move depends on uninitialised value(s) ==20521== at 0x4010B4: echo (hhush.c:53) ==20521== by 0x4021CA: readCommand (hhush.c:327) ==20521== by 0x402538: main (hhush.c:371) ==20521== Uninitialised value was created by a stack allocation ==20521== at 0x402017: readCommand (hhush.c:301) ==20521== ==20521== Conditional jump or move depends on uninitialised value(s) ==20521== at 0x4010CE: echo (hhush.c:53) ==20521== by 0x4021CA: readCommand (hhush.c:327) ==20521== by 0x402538: main (hhush.c:371) ==20521== Uninitialised value was created by a stack allocation ==20521== at 0x402017: readCommand (hhush.c:301) ``` That's where I am now, still the same errors: ``` char* echo(char* in,char* buffer){ size_t inlen=strlen(in); int incounter=0,buffcounter=0; while(incounter<inlen){ if(in[incounter]==' '||in[incounter]=='\t')incounter++; else{ while(in[incounter]!=' '&&in[incounter]!='\t'){ buffer[buffcounter]=in[incounter]; incounter++; buffcounter++; } buffer[buffcounter]=' '; buffcounter++; } } return buffer; } ``` I call it with: ``` char input[256]; fgets(input,sizeof(input),stdin); ... char buffer[256]; printf("%s\n",echo(input,buffer)); ```
2014/08/07
[ "https://Stackoverflow.com/questions/25193869", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3910643/" ]
``` char* out=buffer; return out; } ``` Do not return a pointer to an array with automatic storage duration (here `buffer` array). If the pointer value is accessed, it invokes undefined behavior as automatic objects are discarded when the block where they are declared is exited (here when `echo` function returns).
This message: ``` ==20521== Conditional jump or move depends on uninitialised value(s) ==20521== at 0x4010B4: echo (hhush.c:53) ==20521== Uninitialised value was created by a stack allocation ==20521== at 0x402017: readCommand (hhush.c:301) ``` is telling you that an `if` or `while` test on line 53 depends on uninitialized data. The data in question was allocated at line 301. By looking at those lines, you should be able to tell what is going on, but it looks like you're allocating an array on the stack in `readCommand` and then passing it to `echo` as an argument, without ever putting anything in the array. So your 'I call it with' code is *actually* something like: ``` char input[256]; fgets(input,sizeof(input),stdin); ... char buffer[256]; printf("%s\n",echo(buffer)); ``` and you're not passing what you think you're passing.
53,832,287
I am trying to print out images of the dice, instead of just the number. But when I use document.write, it just opens a new tab and shows the picture. What should I use to display it, with the button I have? ``` <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> var rollResult = Math.floor(Math.random() * 6) + 1; if (rollResult == 1) document.write('<img src="1.jpg">'); else if (rollResult == 2) document.write('<img src="2.jpg">'); else if (rollResult == 3) document.write('<img src="3.jpg">'); else if (rollResult == 4) document.write('<img src="4.jpg">'); else if (rollResult == 5) document.write('<img src="5.jpg">'); else document.write('<img src="6.jpg">'); ```
2018/12/18
[ "https://Stackoverflow.com/questions/53832287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10260767/" ]
**Use an image element and modify its source** ``` <img src="" id="pic-result"/> ``` JavaScript code: ``` var rollResult = Math.floor(Math.random() * 6) + 1; var pic = document.getElementById("pic-result"); if (rollResult == 1) pic.setAttribute('src', '1.jpg'); else if (rollResult == 2) pic.setAttribute('src', '2.jpg'); else if (rollResult == 3) pic.setAttribute('src', '3.jpg'); else if (rollResult == 4) pic.setAttribute('src', '4.jpg'); else if (rollResult == 5) pic.setAttribute('src', '5.jpg'); else pic.setAttribute('src', '6.jpg'); ```
You should be able to achieve this with `innerHTML` on `#roll-result`: ```js var diceResult = document.querySelector('#roll-result'); function rolldice() { var rollResult = Math.floor(Math.random() * 6) + 1; diceResult.innerHTML = '<img src="' + rollResult + '.jpg">'; } ``` ```html <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> ```
53,832,287
I am trying to print out images of the dice, instead of just the number. But when I use document.write, it just opens a new tab and shows the picture. What should I use to display it, with the button I have? ``` <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> var rollResult = Math.floor(Math.random() * 6) + 1; if (rollResult == 1) document.write('<img src="1.jpg">'); else if (rollResult == 2) document.write('<img src="2.jpg">'); else if (rollResult == 3) document.write('<img src="3.jpg">'); else if (rollResult == 4) document.write('<img src="4.jpg">'); else if (rollResult == 5) document.write('<img src="5.jpg">'); else document.write('<img src="6.jpg">'); ```
2018/12/18
[ "https://Stackoverflow.com/questions/53832287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10260767/" ]
You should be able to achieve this with `innerHTML` on `#roll-result`: ```js var diceResult = document.querySelector('#roll-result'); function rolldice() { var rollResult = Math.floor(Math.random() * 6) + 1; diceResult.innerHTML = '<img src="' + rollResult + '.jpg">'; } ``` ```html <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> ```
You should set the innerHTML of a parent element rather than writing the element onto the page: ``` <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> ``` JS: ``` var rollResult = Math.floor(Math.random() * 6) + 1; var imageContainer = document.getElementById("roll-result"); if (rollResult == 1) imageContainer.innerHTML = '<img src="1.jpg">'; else if (rollResult == 2) imageContainer.innerHTML = '<img src="2.jpg">'; else if (rollResult == 3) imageContainer.innerHTML = '<img src="3.jpg">'; else if (rollResult == 4) imageContainer.innerHTML = '<img src="4.jpg">'; else if (rollResult == 5) imageContainer.innerHTML = '<img src="5.jpg">'; else imageContainer.innerHTML = '<img src="6.jpg">'; ``` And you can even bypass all the if statements and just use this code, which does exactly the same thing as the above except it is one line long: ``` document.getElementById("roll-result").innerHTML = '<img src="' + (Math.floor(Math.random() * 6) + 1) + '.jpg" />'; ``` Hopefully this helps!
53,832,287
I am trying to print out images of the dice, instead of just the number. But when I use document.write, it just opens a new tab and shows the picture. What should I use to display it, with the button I have? ``` <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> var rollResult = Math.floor(Math.random() * 6) + 1; if (rollResult == 1) document.write('<img src="1.jpg">'); else if (rollResult == 2) document.write('<img src="2.jpg">'); else if (rollResult == 3) document.write('<img src="3.jpg">'); else if (rollResult == 4) document.write('<img src="4.jpg">'); else if (rollResult == 5) document.write('<img src="5.jpg">'); else document.write('<img src="6.jpg">'); ```
2018/12/18
[ "https://Stackoverflow.com/questions/53832287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10260767/" ]
**Use an image element and modify its source** ``` <img src="" id="pic-result"/> ``` JavaScript code: ``` var rollResult = Math.floor(Math.random() * 6) + 1; var pic = document.getElementById("pic-result"); if (rollResult == 1) pic.setAttribute('src', '1.jpg'); else if (rollResult == 2) pic.setAttribute('src', '2.jpg'); else if (rollResult == 3) pic.setAttribute('src', '3.jpg'); else if (rollResult == 4) pic.setAttribute('src', '4.jpg'); else if (rollResult == 5) pic.setAttribute('src', '5.jpg'); else pic.setAttribute('src', '6.jpg'); ```
You should set the innerHTML of a parent element rather than writing the element onto the page: ``` <div id="roll-dice"> <button type="button" value="Trow dice" onclick="rolldice()">Roll dice</button> <span id="roll-result"></span> </div> ``` JS: ``` var rollResult = Math.floor(Math.random() * 6) + 1; var imageContainer = document.getElementById("roll-result"); if (rollResult == 1) imageContainer.innerHTML = '<img src="1.jpg">'; else if (rollResult == 2) imageContainer.innerHTML = '<img src="2.jpg">'; else if (rollResult == 3) imageContainer.innerHTML = '<img src="3.jpg">'; else if (rollResult == 4) imageContainer.innerHTML = '<img src="4.jpg">'; else if (rollResult == 5) imageContainer.innerHTML = '<img src="5.jpg">'; else imageContainer.innerHTML = '<img src="6.jpg">'; ``` And you can even bypass all the if statements and just use this code, which does exactly the same thing as the above except it is one line long: ``` document.getElementById("roll-result").innerHTML = '<img src="' + (Math.floor(Math.random() * 6) + 1) + '.jpg" />'; ``` Hopefully this helps!
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear: ``` >with(diamonds, cor(data.frame(Table, Depth, Carat.Weight))) Table Depth Carat.Weight Table 1.00000000 -0.41035485 0.05237998 Depth -0.41035485 1.00000000 0.01779489 Carat.Weight 0.05237998 0.01779489 1.00000000 ``` Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is? Edit: Here is a plot of Depth ~ Table: ![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Those variables are correlated. The extent of linear association implied by that correlation matrix is not remotely high enough for the variables to be considered collinear. In this case, I'd be quite happy to use all three of those variables for typical regression applications. One way to detect multicollinearity is to check the Choleski decomposition of the correlation matrix - if there's multicollinearity there will be some diagonal elements that are close to zero. Here it is on your own correlation matrix: ``` > chol(co) [,1] [,2] [,3] [1,] 1 -0.4103548 0.05237998 [2,] 0 0.9119259 0.04308384 [3,] 0 0.0000000 0.99769741 ``` (The diagonal should always be positive, though some implementations can go slightly negative with the effect of accumulated truncation errors) As you see, the smallest diagonal is 0.91, which is still a long way from zero. By contrast here's some nearly collinear data: ``` > x<-data.frame(x1=rnorm(20),x2=rnorm(20),x3=rnorm(20)) > x$x4<-with(x,x1+x2+x3+rnorm(20,0,1e-4)) > chol(cor(x)) x1 x2 x3 x4 x1 1 0.03243977 -0.3920567 3.295264e-01 x2 0 0.99947369 0.4056161 7.617940e-01 x3 0 0.00000000 0.8256919 5.577474e-01 x4 0 0.00000000 0.0000000 7.590116e-05 <------- close to 0. ```
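If R isn't handy, the same check is easy to reproduce with a few lines of Python (a plain textbook Cholesky, shown here as a sketch; note that R's `chol` returns the upper triangle while this builds the lower, but the diagonals are identical):

```python
import math

def cholesky(a):
    """Lower-triangular Cholesky factor of a symmetric
    positive-definite matrix given as a list of lists."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # off-diagonal entry
    return L

# Correlation matrix from the question (Table, Depth, Carat.Weight).
corr = [
    [ 1.0,        -0.41035485, 0.05237998],
    [-0.41035485,  1.0,        0.01779489],
    [ 0.05237998,  0.01779489, 1.0       ],
]

L = cholesky(corr)
diag = [L[i][i] for i in range(3)]
print(diag)  # about [1.0, 0.912, 0.998] -- nothing close to zero
```

No diagonal entry is anywhere near zero, so this small script reaches the same conclusion: correlated, but not remotely collinear.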
Thought this diamond-cutting schematic might add insight to the Question. Can't add an image to a Comment so made it an answer.... ![enter image description here](https://i.stack.imgur.com/zGX2f.gif) PS. @PeterEllis's comment: The fact that "diamonds which are longer across the top are shorter from top to bottom" might make sense this way: Assume all uncut diamonds are roughly rectangular (say). Now the cutter must choose his cut within this bounding rectangle. That introduces the tradeoff. If both width and length increase, you are going for larger diamonds. Possible but rarer and more expensive. Make sense?
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear: ``` >with(diamonds, cor(data.frame(Table, Depth, Carat.Weight))) Table Depth Carat.Weight Table 1.00000000 -0.41035485 0.05237998 Depth -0.41035485 1.00000000 0.01779489 Carat.Weight 0.05237998 0.01779489 1.00000000 ``` Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is? Edit: Here is a plot of Depth ~ Table: ![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Thought this diamond-cutting schematic might add insight to the Question. Can't add an image to a Comment so made it an answer.... ![enter image description here](https://i.stack.imgur.com/zGX2f.gif) PS. @PeterEllis's comment: The fact that "diamonds which are longer across the top are shorter from top to bottom" might make sense this way: Assume all uncut diamonds are roughly rectangular (say). Now the cutter must choose his cut within this bounding rectangle. That introduces the tradeoff. If both width and length increase, you are going for larger diamonds. Possible but rarer and more expensive. Make sense?
From the correlation it's difficult to conclude whether Table and Depth are indeed correlated. A coefficient close to +1/-1 would say they are collinear. It also depends on the sample size; if you have more data, use it to confirm. The standard procedure for dealing with collinear variables is to eliminate one of them, because knowing one would determine the other.
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear: ``` >with(diamonds, cor(data.frame(Table, Depth, Carat.Weight))) Table Depth Carat.Weight Table 1.00000000 -0.41035485 0.05237998 Depth -0.41035485 1.00000000 0.01779489 Carat.Weight 0.05237998 0.01779489 1.00000000 ``` Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is? Edit: Here is a plot of Depth ~ Table: ![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Thought this diamond-cutting schematic might add insight to the Question. Can't add an image to a Comment so made it an answer.... ![enter image description here](https://i.stack.imgur.com/zGX2f.gif) PS. @PeterEllis's comment: The fact that "diamonds which are longer across the top are shorter from top to bottom" might make sense this way: Assume all uncut diamonds are roughly rectangular (say). Now the cutter must choose his cut within this bounding rectangle. That introduces the tradeoff. If both width and length increase, you are going for larger diamonds. Possible but rarer and more expensive. Make sense?
What makes you think that table and depth cause collinearity in your model? From the correlation matrix alone it's hard to tell that these two variables will cause collinearity issues. What does a joint F test tell you about both variables' contribution to your model? As curious\_cat mentioned, the Pearson may not be the best measure of correlation when the relationship is not linear (perhaps a rank based measure?). VIF and tolerance may help quantify the degree of collinearity you may have. I think your approach of using their ratio is appropriate (though not as a solution to collinearity). When I saw the figure, I immediately thought of a common measure in health research, the waist-to-hip ratio. Although, in this case it is more akin to BMI (weight/height^2). If the ratio is readily interpretable and intuitive to your audience, I don't see a reason not to use it. However, you may be able to use both variables in your model unless there is clear evidence of collinearity.
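To make the VIF suggestion concrete: the VIFs can be read off as the diagonal of the inverse of the predictor correlation matrix. A small Python sketch for the 3x3 matrix in the question (closed-form 3x3 inverse, for illustration only):

```python
def vifs_from_corr3(c):
    """VIFs = diagonal entries of the inverse of a 3x3 correlation matrix."""
    a, b, d = c[0][1], c[0][2], c[1][2]     # off-diagonal correlations
    det = 1 - a*a - b*b - d*d + 2*a*b*d     # determinant of the matrix
    return [(1 - d*d) / det, (1 - b*b) / det, (1 - a*a) / det]

# Table, Depth, Carat.Weight correlations from the question.
corr = [
    [ 1.0,        -0.41035485, 0.05237998],
    [-0.41035485,  1.0,        0.01779489],
    [ 0.05237998,  0.01779489, 1.0       ],
]

vifs = vifs_from_corr3(corr)
print(vifs)  # roughly [1.21, 1.20, 1.00]
```

All three VIFs sit close to 1, far below the usual rules of thumb (5 or 10), which supports the view that there is no troubling collinearity here.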
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear: ``` >with(diamonds, cor(data.frame(Table, Depth, Carat.Weight))) Table Depth Carat.Weight Table 1.00000000 -0.41035485 0.05237998 Depth -0.41035485 1.00000000 0.01779489 Carat.Weight 0.05237998 0.01779489 1.00000000 ``` Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is? Edit: Here is a plot of Depth ~ Table: ![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Thought this diamond-cutting schematic might add insight to the Question. Can't add an image to a Comment so made it an answer.... ![enter image description here](https://i.stack.imgur.com/zGX2f.gif) PS. @PeterEllis's comment: The fact that "diamonds which are longer across the top are shorter from top to bottom" might make sense this way: Assume all uncut diamonds are roughly rectangular (say). Now the cutter must choose his cut within this bounding rectangle. That introduces the tradeoff. If both width and length increase, you are going for larger diamonds. Possible but rarer and more expensive. Make sense?
Using ratios in linear regression should be avoided. Essentially, what you are saying is that, if a linear regression was done on those two variables, they would be linearly correlated with no intercept; this is obviously not the case. See: <http://cscu.cornell.edu/news/statnews/stnews03.pdf> Also, they are measuring a latent variable: the size (volume or area) of the diamond. Have you considered converting your data to a surface area/volume measure rather than including both variables? You should post a residual plot of that depth and table data. Your correlation between the two may be invalid anyway.
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear: ``` >with(diamonds, cor(data.frame(Table, Depth, Carat.Weight))) Table Depth Carat.Weight Table 1.00000000 -0.41035485 0.05237998 Depth -0.41035485 1.00000000 0.01779489 Carat.Weight 0.05237998 0.01779489 1.00000000 ``` Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is? Edit: Here is a plot of Depth ~ Table: ![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Those variables are correlated. The extent of linear association implied by that correlation matrix is not remotely high enough for the variables to be considered collinear. In this case, I'd be quite happy to use all three of those variables for typical regression applications. One way to detect multicollinearity is to check the Choleski decomposition of the correlation matrix - if there's multicollinearity there will be some diagonal elements that are close to zero. Here it is on your own correlation matrix: ``` > chol(co) [,1] [,2] [,3] [1,] 1 -0.4103548 0.05237998 [2,] 0 0.9119259 0.04308384 [3,] 0 0.0000000 0.99769741 ``` (The diagonal should always be positive, though some implementations can go slightly negative with the effect of accumulated truncation errors) As you see, the smallest diagonal is 0.91, which is still a long way from zero. By contrast here's some nearly collinear data: ``` > x<-data.frame(x1=rnorm(20),x2=rnorm(20),x3=rnorm(20)) > x$x4<-with(x,x1+x2+x3+rnorm(20,0,1e-4)) > chol(cor(x)) x1 x2 x3 x4 x1 1 0.03243977 -0.3920567 3.295264e-01 x2 0 0.99947369 0.4056161 7.617940e-01 x3 0 0.00000000 0.8256919 5.577474e-01 x4 0 0.00000000 0.0000000 7.590116e-05 <------- close to 0. ```
From the correlation it's difficult to conclude whether Table and Depth are indeed correlated. A coefficient close to +1/-1 would say they are collinear. It also depends on the sample size; if you have more data, use it to confirm. The standard procedure for dealing with collinear variables is to eliminate one of them, because knowing one would determine the other.
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear: ``` >with(diamonds, cor(data.frame(Table, Depth, Carat.Weight))) Table Depth Carat.Weight Table 1.00000000 -0.41035485 0.05237998 Depth -0.41035485 1.00000000 0.01779489 Carat.Weight 0.05237998 0.01779489 1.00000000 ``` Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is? Edit: Here is a plot of Depth ~ Table: ![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Those variables are correlated. The extent of linear association implied by that correlation matrix is not remotely high enough for the variables to be considered collinear. In this case, I'd be quite happy to use all three of those variables for typical regression applications. One way to detect multicollinearity is to check the Choleski decomposition of the correlation matrix - if there's multicollinearity there will be some diagonal elements that are close to zero. Here it is on your own correlation matrix: ``` > chol(co) [,1] [,2] [,3] [1,] 1 -0.4103548 0.05237998 [2,] 0 0.9119259 0.04308384 [3,] 0 0.0000000 0.99769741 ``` (The diagonal should always be positive, though some implementations can go slightly negative with the effect of accumulated truncation errors) As you see, the smallest diagonal is 0.91, which is still a long way from zero. By contrast here's some nearly collinear data: ``` > x<-data.frame(x1=rnorm(20),x2=rnorm(20),x3=rnorm(20)) > x$x4<-with(x,x1+x2+x3+rnorm(20,0,1e-4)) > chol(cor(x)) x1 x2 x3 x4 x1 1 0.03243977 -0.3920567 3.295264e-01 x2 0 0.99947369 0.4056161 7.617940e-01 x3 0 0.00000000 0.8256919 5.577474e-01 x4 0 0.00000000 0.0000000 7.590116e-05 <------- close to 0. ```
What makes you think that table and depth cause collinearity in your model? From the correlation matrix alone it's hard to tell that these two variables will cause collinearity issues. What does a joint F test tell you about both variables' contribution to your model? As curious\_cat mentioned, the Pearson may not be the best measure of correlation when the relationship is not linear (perhaps a rank-based measure?). VIF and tolerance may help quantify the degree of collinearity you may have. I think your approach of using their ratio is appropriate (though not as a solution to collinearity). When I see the figure, I immediately think of a common measure in health research, the waist-to-hip ratio. Although, in this case it is more akin to BMI (weight/height^2). If the ratio is readily interpretable and intuitive to your audience, I don't see a reason not to use it. However, you may be able to use both variables in your model unless there is clear evidence of collinearity.
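To make the VIF suggestion concrete (my own sketch, not part of the answer): for standardized predictors, the VIF of predictor j equals the j-th diagonal entry of the inverse of the predictors' correlation matrix, so it can be read straight off the matrix in the question. A pure-Python illustration:

```python
def invert(m):
    """Gauss-Jordan inverse of a small square matrix given as a list of rows."""
    n = len(m)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        # partial pivoting: bring the largest entry in this column to the pivot row
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

# Correlation matrix of the predictors from the question:
co = [[ 1.0,        -0.41035485, 0.05237998],
      [-0.41035485,  1.0,        0.01779489],
      [ 0.05237998,  0.01779489, 1.0]]

# VIF_j = j-th diagonal entry of the inverse correlation matrix
vifs = [row[i] for i, row in enumerate(invert(co))]
print(vifs)  # roughly [1.21, 1.20, 1.00] -> all near 1, no collinearity problem
```

Values well above the usual rules of thumb (5 or 10) would be the warning sign; here everything sits near 1, consistent with the accepted answer.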
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear:

```
>with(diamonds, cor(data.frame(Table, Depth, Carat.Weight)))
                   Table       Depth Carat.Weight
Table         1.00000000 -0.41035485   0.05237998
Depth        -0.41035485  1.00000000   0.01779489
Carat.Weight  0.05237998  0.01779489   1.00000000
```

Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty, and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is?

Edit: Here is a plot of Depth ~ Table:

![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Those variables are correlated. The extent of linear association implied by that correlation matrix is not remotely high enough for the variables to be considered collinear. In this case, I'd be quite happy to use all three of those variables for typical regression applications.

One way to detect multicollinearity is to check the Choleski decomposition of the correlation matrix - if there's multicollinearity, there will be some diagonal elements that are close to zero. Here it is on your own correlation matrix:

```
> chol(co)
     [,1]       [,2]       [,3]
[1,]    1 -0.4103548 0.05237998
[2,]    0  0.9119259 0.04308384
[3,]    0  0.0000000 0.99769741
```

(The diagonal should always be positive, though some implementations can go slightly negative with the effect of accumulated truncation errors.)

As you see, the smallest diagonal is 0.91, which is still a long way from zero. By contrast, here's some nearly collinear data:

```
> x <- data.frame(x1=rnorm(20), x2=rnorm(20), x3=rnorm(20))
> x$x4 <- with(x, x1+x2+x3+rnorm(20,0,1e-4))
> chol(cor(x))
   x1         x2         x3           x4
x1  1 0.03243977 -0.3920567 3.295264e-01
x2  0 0.99947369  0.4056161 7.617940e-01
x3  0 0.00000000  0.8256919 5.577474e-01
x4  0 0.00000000  0.0000000 7.590116e-05   <------- close to 0
```
Using ratios in linear regression should be avoided. Essentially, what you are saying is that, if a linear regression were done on those two variables, they would be linearly correlated with no intercept; this is obviously not the case. See: <http://cscu.cornell.edu/news/statnews/stnews03.pdf> Also, they are measuring a latent variable: the size (volume or area) of the diamond. Have you considered converting your data to a surface-area/volume measure rather than including both variables? You should also post a residual plot of that depth and table data; your correlation between the two may be invalid anyway.
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear:

```
>with(diamonds, cor(data.frame(Table, Depth, Carat.Weight)))
                   Table       Depth Carat.Weight
Table         1.00000000 -0.41035485   0.05237998
Depth        -0.41035485  1.00000000   0.01779489
Carat.Weight  0.05237998  0.01779489   1.00000000
```

Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty, and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is?

Edit: Here is a plot of Depth ~ Table:

![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Using ratios in linear regression should be avoided. Essentially, what you are saying is that, if a linear regression were done on those two variables, they would be linearly correlated with no intercept; this is obviously not the case. See: <http://cscu.cornell.edu/news/statnews/stnews03.pdf> Also, they are measuring a latent variable: the size (volume or area) of the diamond. Have you considered converting your data to a surface-area/volume measure rather than including both variables? You should also post a residual plot of that depth and table data; your correlation between the two may be invalid anyway.
From the correlation it's difficult to conclude whether Table and Depth are indeed correlated. A coefficient close to +1/-1 would say they are collinear. It also depends on the sample size... if you have more data, use it to confirm. The standard procedure in dealing with collinear variables is to eliminate one of them, because knowing one would determine the other.
52,177
Disclaimer: This is for a homework project. I'm trying to come up with the best model for diamond prices, depending on several variables, and I seem to have a pretty good model so far. However, I have run into two variables that are obviously collinear:

```
>with(diamonds, cor(data.frame(Table, Depth, Carat.Weight)))
                   Table       Depth Carat.Weight
Table         1.00000000 -0.41035485   0.05237998
Depth        -0.41035485  1.00000000   0.01779489
Carat.Weight  0.05237998  0.01779489   1.00000000
```

Table and Depth are dependent on each other, but I still want to include them in my predictive model. I did some research on diamonds and found that Table and Depth are the length across the top and the distance from top to bottom tip of a diamond. Since the prices of diamonds seem to be related to beauty, and beauty seems to be related to proportions, I was going to include their ratio, say $\frac{Table}{Depth}$, to predict the prices. Is this standard procedure for dealing with collinear variables? If not, what is?

Edit: Here is a plot of Depth ~ Table:

![enter image description here](https://i.stack.imgur.com/utIua.png)
2013/03/14
[ "https://stats.stackexchange.com/questions/52177", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12031/" ]
Using ratios in linear regression should be avoided. Essentially, what you are saying is that, if a linear regression were done on those two variables, they would be linearly correlated with no intercept; this is obviously not the case. See: <http://cscu.cornell.edu/news/statnews/stnews03.pdf> Also, they are measuring a latent variable: the size (volume or area) of the diamond. Have you considered converting your data to a surface-area/volume measure rather than including both variables? You should also post a residual plot of that depth and table data; your correlation between the two may be invalid anyway.
What makes you think that table and depth cause collinearity in your model? From the correlation matrix alone it's hard to tell that these two variables will cause collinearity issues. What does a joint F test tell you about both variables' contribution to your model? As curious\_cat mentioned, the Pearson may not be the best measure of correlation when the relationship is not linear (perhaps a rank-based measure?). VIF and tolerance may help quantify the degree of collinearity you may have. I think your approach of using their ratio is appropriate (though not as a solution to collinearity). When I see the figure, I immediately think of a common measure in health research, the waist-to-hip ratio. Although, in this case it is more akin to BMI (weight/height^2). If the ratio is readily interpretable and intuitive to your audience, I don't see a reason not to use it. However, you may be able to use both variables in your model unless there is clear evidence of collinearity.
40,746
I am working on a small custom Markup script in Java that converts a Markdown/Wiki style markup into HTML. The below works, but as I add more Markup I can see it becoming unwieldy and hard to maintain. Is there a better, more elegant way to do something similar?

```
private String processString(String t) {
    t = setBoldItal(t);
    t = setBold(t);
    t = setItal(t);
    t = setUnderline(t);
    t = setHeadings(t);
    t = setImages(t);
    t = setOutLinks(t);
    t = setLocalLink(t);
    return t;
}
```

And on top of it, passing in the string itself and setting it back to the same string just doesn't feel right. But I just don't know of any other way to go about this.
2014/02/03
[ "https://codereview.stackexchange.com/questions/40746", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/14889/" ]
You could create a `StringProcessor` interface:

```
public interface StringProcessor {
    String process(String input);
}

public class BoldProcessor implements StringProcessor {
    public String process(final String input) {
        ...
    }
}
```

and create a `List` from the available implementations:

```
final List<StringProcessor> processors = new ArrayList<StringProcessor>();
processors.add(new ItalicProcessor());
processors.add(new BoldProcessor());
...
```

and use it:

```
String result = input;
for (final StringProcessor processor: processors) {
    result = processor.process(result);
}
return result;
```
This sounds like a case where you should encapsulate the data with a ['Decorator Pattern'](http://en.wikipedia.org/wiki/Decorator_pattern). You should declare a simple interface such as:

```
public interface StyledString {
    public String toFormatted();
    public StyledString getSource();
}
```

Then create a concrete class for each style you have:

```
public class BoldStyle implements StyledString {

    private final StyledString source;

    public BoldStyle(StyledString source) {
        this.source = source;
    }

    public String toFormatted() {
        return "<b>" + source.toFormatted() + "</b>";
    }

    public StyledString getSource() {
        return source;
    }
}
```

You should also have a 'NoStyle' class that takes a raw String input and returns a null getSource(). Using this system you can easily add styles, and you can have styles that join phrases, etc. Also, you can add the styles together in a way that makes decomposing the value easier at a later point, and you only need to add/wrap the styles that you want.
40,746
I am working on a small custom Markup script in Java that converts a Markdown/Wiki style markup into HTML. The below works, but as I add more Markup I can see it becoming unwieldy and hard to maintain. Is there a better, more elegant way to do something similar?

```
private String processString(String t) {
    t = setBoldItal(t);
    t = setBold(t);
    t = setItal(t);
    t = setUnderline(t);
    t = setHeadings(t);
    t = setImages(t);
    t = setOutLinks(t);
    t = setLocalLink(t);
    return t;
}
```

And on top of it, passing in the string itself and setting it back to the same string just doesn't feel right. But I just don't know of any other way to go about this.
2014/02/03
[ "https://codereview.stackexchange.com/questions/40746", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/14889/" ]
If you want to process a language, even a simple one like a Wiki Markup, you should eventually write a proper parser, not do step-by-step replacement, nor chain a number of individual processors, no matter how fancy their implementation. You can go with the fully generic approach, generate an AST from the markup (this would look similar to @rolfl's `StyledString`), and then use an AST serializer to create the end result (but for efficiency's sake, please append to a `StringBuilder` instead of repeatedly creating new strings). This allows you to use multiple serializers; e.g. if at one point you want to create PDF instead of HTML, this gives you a huge advantage. Your AST nodes should implement the visitor pattern for this purpose. (The serializer would be the visitor.) But that would probably be overkill here. A simple parser that outputs the HTML as it parses would be simpler and probably sufficient. You can use parser generators like ANTLR to generate the parser, or you can hand-write a parser.
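To make the "simple parser that outputs the HTML as it parses" suggestion concrete, here is a deliberately tiny single-pass sketch (my own illustration in Python, for a hypothetical two-marker grammar of `*bold*` and `_italic_`); a real wiki parser would add headings, links, escaping, and stricter nesting rules:

```python
# Hypothetical toy markup: '*' toggles bold, '_' toggles italic.
TAGS = {"*": "b", "_": "i"}

def render(text):
    out = []        # cheap append-only buffer (StringBuilder-style)
    stack = []      # markers for currently open inline tags
    for ch in text:
        if ch in TAGS:
            if stack and stack[-1] == ch:           # marker closes the open tag
                out.append(f"</{TAGS[stack.pop()]}>")
            else:                                   # marker opens a new tag
                stack.append(ch)
                out.append(f"<{TAGS[ch]}>")
        else:
            out.append(ch)
    while stack:    # close anything left dangling at end of input
        out.append(f"</{TAGS[stack.pop()]}>")
    return "".join(out)

print(render("plain *bold and _both_* done"))
# -> plain <b>bold and <i>both</i></b> done
```

Unlike chained string replacements, the parser sees each character once, handles nesting for free via the stack, and appends to a buffer instead of repeatedly rebuilding strings.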
This sounds like a case where you should encapsulate the data with a ['Decorator Pattern'](http://en.wikipedia.org/wiki/Decorator_pattern). You should declare a simple interface such as: ``` public interface StyledString { public String toFormatted(); public StyledString getSource(); } ``` Then create a concrete class for each style you have: ``` public class BoldStyle implements StyledString { private final StyledString source; public BoldStyle(StyledString source) { this.source = source; } public String toFormatted() { return "<b>" + source.toFormatted() + "</b>"; } public StyledString getSource() { return source; } } ``` You should also have a 'NoStyle' class that takes a raw String input, and returns a null getSource(); using this system you can easily add Styles, and you can have styles that join phrases, etc..... Also, you can add the styles together in a way that makes decomposing the value easier at a later point, and you only need to add/wrap the styles that you want.
40,746
I am working on a small custom Markup script in Java that converts a Markdown/Wiki style markup into HTML. The below works, but as I add more Markup I can see it becoming unwieldy and hard to maintain. Is there a better, more elegant way to do something similar?

```
private String processString(String t) {
    t = setBoldItal(t);
    t = setBold(t);
    t = setItal(t);
    t = setUnderline(t);
    t = setHeadings(t);
    t = setImages(t);
    t = setOutLinks(t);
    t = setLocalLink(t);
    return t;
}
```

And on top of it, passing in the string itself and setting it back to the same string just doesn't feel right. But I just don't know of any other way to go about this.
2014/02/03
[ "https://codereview.stackexchange.com/questions/40746", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/14889/" ]
You could create a `StringProcessor` interface:

```
public interface StringProcessor {
    String process(String input);
}

public class BoldProcessor implements StringProcessor {
    public String process(final String input) {
        ...
    }
}
```

and create a `List` from the available implementations:

```
final List<StringProcessor> processors = new ArrayList<StringProcessor>();
processors.add(new ItalicProcessor());
processors.add(new BoldProcessor());
...
```

and use it:

```
String result = input;
for (final StringProcessor processor: processors) {
    result = processor.process(result);
}
return result;
```
If you want to process a language, even a simple one like a Wiki Markup, you should eventually write a proper parser, not do step-by-step replacement, nor chain a number of individual processors, no matter how fancy their implementation. You can go with the fully generic approach, generate an AST from the markup (this would look similar to @rolfl's `StyledString`), and then use an AST serializer to create the end result (but for efficiency's sake, please append to a `StringBuilder` instead of repeatedly creating new strings). This allows you to use multiple serializers; e.g. if at one point you want to create PDF instead of HTML, this gives you a huge advantage. Your AST nodes should implement the visitor pattern for this purpose. (The serializer would be the visitor.) But that would probably be overkill here. A simple parser that outputs the HTML as it parses would be simpler and probably sufficient. You can use parser generators like ANTLR to generate the parser, or you can hand-write a parser.
40,746
I am working on a small custom Markup script in Java that converts a Markdown/Wiki style markup into HTML. The below works, but as I add more Markup I can see it becoming unwieldy and hard to maintain. Is there a better, more elegant way to do something similar?

```
private String processString(String t) {
    t = setBoldItal(t);
    t = setBold(t);
    t = setItal(t);
    t = setUnderline(t);
    t = setHeadings(t);
    t = setImages(t);
    t = setOutLinks(t);
    t = setLocalLink(t);
    return t;
}
```

And on top of it, passing in the string itself and setting it back to the same string just doesn't feel right. But I just don't know of any other way to go about this.
2014/02/03
[ "https://codereview.stackexchange.com/questions/40746", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/14889/" ]
You could create a `StringProcessor` interface:

```
public interface StringProcessor {
    String process(String input);
}

public class BoldProcessor implements StringProcessor {
    public String process(final String input) {
        ...
    }
}
```

and create a `List` from the available implementations:

```
final List<StringProcessor> processors = new ArrayList<StringProcessor>();
processors.add(new ItalicProcessor());
processors.add(new BoldProcessor());
...
```

and use it:

```
String result = input;
for (final StringProcessor processor: processors) {
    result = processor.process(result);
}
return result;
```
I like @palacsint's approach, but I just have one thing to add: you can probably do most of the processing with the same class.

```
public class TagProcessor implements StringProcessor {

    private final String wrapWith;

    public TagProcessor(String wrapWith) {
        this.wrapWith = wrapWith;
    }

    @Override
    public String process(String input) {
        return "<" + wrapWith + ">" + input + "</" + wrapWith + ">";
    }
}

processors.add(new TagProcessor("i"));
processors.add(new TagProcessor("b"));
```

I also believe that you can generalize a lot of the functionality for other processors into a proper class and use its constructor to pass the proper parameters (wrapping in `<div class="someclass">...</div>`, for example).
40,746
I am working on a small custom Markup script in Java that converts a Markdown/Wiki style markup into HTML. The below works, but as I add more Markup I can see it becoming unwieldy and hard to maintain. Is there a better, more elegant way to do something similar?

```
private String processString(String t) {
    t = setBoldItal(t);
    t = setBold(t);
    t = setItal(t);
    t = setUnderline(t);
    t = setHeadings(t);
    t = setImages(t);
    t = setOutLinks(t);
    t = setLocalLink(t);
    return t;
}
```

And on top of it, passing in the string itself and setting it back to the same string just doesn't feel right. But I just don't know of any other way to go about this.
2014/02/03
[ "https://codereview.stackexchange.com/questions/40746", "https://codereview.stackexchange.com", "https://codereview.stackexchange.com/users/14889/" ]
If you want to process a language, even a simple one like a Wiki Markup, you should eventually write a proper parser, not do step-by-step replacement, nor chain a number of individual processors, no matter how fancy their implementation. You can go with the fully generic approach, generate an AST from the markup (this would look similar to @rolfl's `StyledString`), and then use an AST serializer to create the end result (but for efficiency's sake, please append to a `StringBuilder` instead of repeatedly creating new strings). This allows you to use multiple serializers; e.g. if at one point you want to create PDF instead of HTML, this gives you a huge advantage. Your AST nodes should implement the visitor pattern for this purpose. (The serializer would be the visitor.) But that would probably be overkill here. A simple parser that outputs the HTML as it parses would be simpler and probably sufficient. You can use parser generators like ANTLR to generate the parser, or you can hand-write a parser.
I like @palacsint's approach, but I just have one thing to add: you can probably do most of the processing with the same class.

```
public class TagProcessor implements StringProcessor {

    private final String wrapWith;

    public TagProcessor(String wrapWith) {
        this.wrapWith = wrapWith;
    }

    @Override
    public String process(String input) {
        return "<" + wrapWith + ">" + input + "</" + wrapWith + ">";
    }
}

processors.add(new TagProcessor("i"));
processors.add(new TagProcessor("b"));
```

I also believe that you can generalize a lot of the functionality for other processors into a proper class and use its constructor to pass the proper parameters (wrapping in `<div class="someclass">...</div>`, for example).
1,173,383
My Run button is grayed out. This includes starting a new project of any type. I completely uninstalled VS 2008 and re-installed it. The Run button is still grayed out. VB is now worthless.
2009/07/23
[ "https://Stackoverflow.com/questions/1173383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18225/" ]
It sounds like you do not have a clear spec. If you don't have a clear spec, then how can you possibly know whether the code works according to spec?

Take a step back. Start by writing a **one sentence spec**:

**The FrobFilter component takes a sequence of Frobs and places each one in the correct FrobBin until one bin is full.**

OK, now you've got a specification. It's not a testable or implementable specification yet. Why not? Two reasons.

Reason One: the consequence of no FrobBin filling up before the Frob sequence runs out has not been specified.

Reason Two: "correct" is not specified.

Now write a one-sentence spec that addresses each concern.

**If the sequence ends before some bin is full then the administrator is notified.**

**The correct bin for a Frob such that Blargh is Gnusto is always the FrotzBin.**

OK, now you have two more problems. How is the administrator notified? And what happens if the Frob's Blargh is not a Gnusto?

Just keep on breaking it down, one sentence at a time, until you have a complete and accurate spec. Then you'll find that **your spec, the program which implements it, and the test cases all look remarkably like each other**. And that is an awesome situation to be in.
It sounds like you should be testing each of these filters separately, with a mock filter "underneath" each one to be chained to. Hopefully each of the filters is simple, and can be tested simply. I'd then have a few integration tests for the whole thing when it's all wired up.
6,304,375
I'm looking for a `RegEx` for multiple line email addresses. For example:

1) Single email:

```
johnsmith@email.com - ok
```

2) Two line email:

```
johnsmith@email.com
karensmith@emailcom - ok
```

3) Two line email:

```
john smith@email.com - not ok
karensmith@emailcom
```

I've tried the following:

```
((\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*(\r\n)?)+)\r*
```

But when I test it, it seems to still match ok if there is 1 valid email address as in example 3. I need a rule which states **all** email addresses must be valid.
2011/06/10
[ "https://Stackoverflow.com/questions/6304375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/306098/" ]
How about:

```
^(((\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*(\r\n)?\s?)+)*)$
```

Check the beginning of the string using '^' and the end using '$'. Allow an optional whitespace character with '\s?'.

Try out <http://myregexp.com/signedJar.html> for testing regex expressions.
My guess would be that you probably need a multiline option at the end of your regexp (in most cases `/m` after the regexp).

**Edit**

You might also want to add anchors `\A` and `\z` to mark the beginning and end of the input data. Here is a [good article](http://www.regular-expressions.info/anchors.html) on anchors.

**Edit**

Quick and dirty example working in Ruby:

```
/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/
```

Will produce:

```
"test@here.pl\nthe@bar.pl".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> #<MatchData "test@here.pl\nthe@bar.pl" 1:"\nthe@bar.pl">
"test@here.pl\nthebar.pl".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> nil
"test@here.pl".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> #<MatchData "test@here.pl" 1:nil>
"test@here".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> nil
```

You can improve the regex, and it should work. The key was to use the `\A` and `\z` anchors. The `/m` modifier is not required.
6,304,375
I'm looking for a `RegEx` for multiple line email addresses. For example: 1) Single email: ``` johnsmith@email.com - ok ``` 2) Two line email: ``` johnsmith@email.com karensmith@emailcom - ok ``` 3) Two line email: ``` john smith@email.com - not ok karensmith@emailcom ``` I've tried the following: ``` ((\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*(\r\n)?)+)\r* ``` But when I test it, it seems to still match ok if there is 1 valid email address as in example 3. I need a rule which states **all** email addresses must be valid.
2011/06/10
[ "https://Stackoverflow.com/questions/6304375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/306098/" ]
I'd split the string on `[\r\n]+` and then test each address individually.
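Sketched out (my own illustration in Python, using the simplified address pattern from the question rather than a full RFC 5322 validator), the split-then-validate approach looks like this:

```python
import re

# Simplified address pattern from the question; illustrative only.
ADDRESS = re.compile(r"\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*")

def all_valid(text):
    """True only if every non-empty line is a well-formed address."""
    lines = [l for l in re.split(r"[\r\n]+", text.strip()) if l]
    # fullmatch forces the whole line to match, so partial hits don't slip through
    return all(ADDRESS.fullmatch(l.strip()) for l in lines)

print(all_valid("johnsmith@email.com\r\nkarensmith@email.com"))   # True
print(all_valid("john smith@email.com\r\nkarensmith@email.com"))  # False
```

This sidesteps the "one valid address makes the whole block match" problem in the question, because each address is checked on its own.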
My guess would be that you probably need a multiline option at the end of your regexp (in most cases `/m` after the regexp).

**Edit**

You might also want to add anchors `\A` and `\z` to mark the beginning and end of the input data. Here is a [good article](http://www.regular-expressions.info/anchors.html) on anchors.

**Edit**

Quick and dirty example working in Ruby:

```
/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/
```

Will produce:

```
"test@here.pl\nthe@bar.pl".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> #<MatchData "test@here.pl\nthe@bar.pl" 1:"\nthe@bar.pl">
"test@here.pl\nthebar.pl".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> nil
"test@here.pl".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> #<MatchData "test@here.pl" 1:nil>
"test@here".match(/\A\w+@\w+\.\w+(\n\w+@\w+\.\w+)*\z/)
=> nil
```

You can improve the regex, and it should work. The key was to use the `\A` and `\z` anchors. The `/m` modifier is not required.
6,304,375
I'm looking for a `RegEx` for multiple line email addresses. For example:

1) Single email:

```
johnsmith@email.com - ok
```

2) Two line email:

```
johnsmith@email.com
karensmith@emailcom - ok
```

3) Two line email:

```
john smith@email.com - not ok
karensmith@emailcom
```

I've tried the following:

```
((\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*(\r\n)?)+)\r*
```

But when I test it, it seems to still match ok if there is 1 valid email address as in example 3. I need a rule which states **all** email addresses must be valid.
2011/06/10
[ "https://Stackoverflow.com/questions/6304375", "https://Stackoverflow.com", "https://Stackoverflow.com/users/306098/" ]
How about:

```
^(((\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*(\r\n)?\s?)+)*)$
```

Check the beginning of the string using '^' and the end using '$'. Allow an optional whitespace character with '\s?'.

Try out <http://myregexp.com/signedJar.html> for testing regex expressions.
I'd split the string on `[\r\n]+` and then test each address individually.
49,029,592
I have data

```
dat1 <- data.table(id=1:9,
                   group=c(1,1,2,2,2,3,3,3,3),
                   t=c(14,17,20,21,26,89,90,95,99),
                   index=c(1,2,1,2,3,1,2,3,4))
```

and I would like to compute the difference on `t` to the previous value, according to `index`. For the first instance of each group, I would like to compute the difference to some external variable

```
dat2 <- data.table(group=c(1,2,3),
                   start=c(10,15,80))
```

such that the following result should be obtained:

```
> res
   id group  t index dif
1:  1     1 14     1   4
2:  2     1 17     2   3
3:  3     2 20     1   5
4:  4     2 21     2   1
5:  5     2 26     3   5
6:  6     3 89     1   9
7:  7     3 90     2   1
8:  8     3 95     3   5
9:  9     3 99     4   4
```

I have tried using

```
dat1[ , ifelse(index == min(index), dif := t - dat2$start, dif := t - t[-1]), by = group]
```

but I was unsure about referencing other elements of the same group and external elements in one step. Is this at all possible using data.table?
2018/02/28
[ "https://Stackoverflow.com/questions/49029592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5089467/" ]
A possible solution:

```
dat1[, dif := ifelse(index == min(index),
                     t - dat2$start[match(.BY, dat2$group)],
                     t - shift(t))
     , by = group][]
```

which gives:

```
   id group  t index dif
1:  1     1 14     1   4
2:  2     1 17     2   3
3:  3     2 20     1   5
4:  4     2 21     2   1
5:  5     2 26     3   5
6:  6     3 89     1   9
7:  7     3 90     2   1
8:  8     3 95     3   5
9:  9     3 99     4   4
```

Or a variant as proposed by @jogo in the comments, which avoids the `ifelse`:

```
dat1[, dif := t - shift(t), by = group
     ][index == 1, dif := t - dat2[group==.BY, start], by = group][]
```
I would try to avoid `ifelse` and use data.table's efficient join capabilities:

```
dat1[dat2, on = "group",                          # join on group
     start := i.start][,                          # add start value
     diff := diff(c(start[1L], t)), by = group][, # compute difference
     start := NULL]                               # remove start value
```

The resulting table is:

```
#   id group  t index diff
#1:  1     1 14     1    4
#2:  2     1 17     2    3
#3:  3     2 20     1    5
#4:  4     2 21     2    1
#5:  5     2 26     3    5
#6:  6     3 89     1    9
#7:  7     3 90     2    1
#8:  8     3 95     3    5
#9:  9     3 99     4    4
```
49,029,592
I have data

```
dat1 <- data.table(id=1:9,
                   group=c(1,1,2,2,2,3,3,3,3),
                   t=c(14,17,20,21,26,89,90,95,99),
                   index=c(1,2,1,2,3,1,2,3,4))
```

and I would like to compute the difference on `t` to the previous value, according to `index`. For the first instance of each group, I would like to compute the difference to some external variable

```
dat2 <- data.table(group=c(1,2,3),
                   start=c(10,15,80))
```

such that the following result should be obtained:

```
> res
   id group  t index dif
1:  1     1 14     1   4
2:  2     1 17     2   3
3:  3     2 20     1   5
4:  4     2 21     2   1
5:  5     2 26     3   5
6:  6     3 89     1   9
7:  7     3 90     2   1
8:  8     3 95     3   5
9:  9     3 99     4   4
```

I have tried using

```
dat1[ , ifelse(index == min(index), dif := t - dat2$start, dif := t - t[-1]), by = group]
```

but I was unsure about referencing other elements of the same group and external elements in one step. Is this at all possible using data.table?
2018/02/28
[ "https://Stackoverflow.com/questions/49029592", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5089467/" ]
A possible solution:

```
dat1[, dif := ifelse(index == min(index),
                     t - dat2$start[match(.BY, dat2$group)],
                     t - shift(t))
     , by = group][]
```

which gives:

```
   id group  t index dif
1:  1     1 14     1   4
2:  2     1 17     2   3
3:  3     2 20     1   5
4:  4     2 21     2   1
5:  5     2 26     3   5
6:  6     3 89     1   9
7:  7     3 90     2   1
8:  8     3 95     3   5
9:  9     3 99     4   4
```

Or a variant as proposed by @jogo in the comments, which avoids the `ifelse`:

```
dat1[, dif := t - shift(t), by = group
     ][index == 1, dif := t - dat2[group==.BY, start], by = group][]
```
You may use `shift` with a dynamic `fill` argument: index 'dat2' with `.BY` to get the 'start' values for each 'group':

```
dat1[ , dif := t - shift(t, fill = dat2[group == .BY, start]), by = group]

#    id group  t index dif
# 1:  1     1 14     1   4
# 2:  2     1 17     2   3
# 3:  3     2 20     1   5
# 4:  4     2 21     2   1
# 5:  5     2 26     3   5
# 6:  6     3 89     1   9
# 7:  7     3 90     2   1
# 8:  8     3 95     3   5
# 9:  9     3 99     4   4
```

---

Alternatively, you can do this in steps. Probably a matter of taste, but I find it more transparent than the `ifelse` way. First the 'normal' `shift`. Then add an 'index' variable to 'dat2' and do an update join.

```
dat1[ , dif := t - shift(t), by = group]
dat2[ , index := 1]
dat1[dat2, on = .(group, index), dif := t - start]
```
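The `shift(..., fill = start)` idea is language-agnostic: seed the "previous value" for each group with that group's start, then subtract. A plain-Python sketch of the same logic (my own illustration, not part of the answer) using the question's data:

```python
rows = [  # (id, group, t) from dat1
    (1, 1, 14), (2, 1, 17),
    (3, 2, 20), (4, 2, 21), (5, 2, 26),
    (6, 3, 89), (7, 3, 90), (8, 3, 95), (9, 3, 99),
]
start = {1: 10, 2: 15, 3: 80}   # dat2: per-group fallback value

prev = dict(start)              # last seen t per group, seeded with start
result = []
for id_, group, t in rows:
    result.append((id_, group, t, t - prev[group]))  # dif vs previous/start
    prev[group] = t

for row in result:
    print(row)
# dif column comes out as 4, 3, 5, 1, 5, 9, 1, 5, 4 -- matching `res`
```

Seeding `prev` with the start values is exactly what the dynamic `fill` does for the first row of each group.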
345,483
What's the word for when a person chooses to believe something based on preconceptions rather than judging it objectively? Here's an example (a little contrived, but hopefully it illustrates the point): A person, P, is usually very healthy and prides themselves on eating good, healthy food. This person has two friends, A and B, and they share a house. P is in awe of A and pretty much anything A does is "right". For example, if A buys a drink that's packed with sugar but looks "healthy", then they decide this drink is super-healthy and there's now always a carton of this drink on P's shelf in the fridge. However, they're not so keen on B. B buys some food that's super-healthy. P doesn't even bother tasting this food and it never gets touched, despite being on the sharing shelf.
2016/08/29
[ "https://english.stackexchange.com/questions/345483", "https://english.stackexchange.com", "https://english.stackexchange.com/users/193661/" ]
It's almost in your title. **[preconceived](https://www.oxforddictionaries.com/definition/english/preconceived)** > > ADJECTIVE > (Of an idea or opinion) formed before having the evidence for its truth or usefulness: > the same set of facts can be tailored to fit any preconceived belief > > >
**Biased** is the word that comes to my mind when a person is not objective. From your example, person P is unfair in treating person B's food even though B's food is super healthy. So P is biased towards person A's choices. [Cambridge](http://dictionary.cambridge.org/dictionary/english/biased) dictionary defines *biased* as, > > Showing an unreasonable like or dislike for a person based on personal opinions. > > >
345,483
What's the word for when a person chooses to believe something based on preconceptions rather than judging it objectively? Here's an example (a little contrived, but hopefully it illustrates the point): A person, P, is usually very healthy and prides themselves on eating good, healthy food. This person has two friends, A and B, and they share a house. P is in awe of A and pretty much anything A does is "right". For example, if A buys a drink that's packed with sugar but looks "healthy", then they decide this drink is super-healthy and there's now always a carton of this drink on P's shelf in the fridge. However, they're not so keen on B. B buys some food that's super-healthy. P doesn't even bother tasting this food and it never gets touched, despite being on the sharing shelf.
2016/08/29
[ "https://english.stackexchange.com/questions/345483", "https://english.stackexchange.com", "https://english.stackexchange.com/users/193661/" ]
I think you are looking for ***[prejudice](http://www.dictionary.com/browse/prejudice):*** > > * any preconceived opinion or feeling, either favorable or unfavorable. > > > (Dictionary.com)
**Biased** is the word that comes to my mind when a person is not objective. From your example, person P is unfair in treating person B's food even though B's food is super healthy. So P is biased towards person A's choices. [Cambridge](http://dictionary.cambridge.org/dictionary/english/biased) dictionary defines *biased* as, > > Showing an unreasonable like or dislike for a person based on personal opinions. > > >
60,369,740
I've reduced my code down to the following minimum code: ``` #include<iostream> #include<vector> class tt { public: bool player; std::vector<tt> actions; }; template<typename state_t> int func(state_t &state, const bool is_max) { state.player = true; const auto &actions = state.actions; if(state.actions.size()) { auto soln = func(actions[0], false); } return 0; } int main(int argc, char const *argv[]) { tt root; func(root, true); return 0; } ``` When I try to compile this code, I get ``` test.cpp:14:17: error: cannot assign to variable 'state' with const-qualified type 'const tt &' state.player = true; ~~~~~~~~~~~~ ^ test.cpp:19:19: note: in instantiation of function template specialization 'func<const tt>' requested here auto soln = func(actions[0], false); ^ test.cpp:28:4: note: in instantiation of function template specialization 'func<tt>' requested here func(root, true); ^ test.cpp:12:19: note: variable 'state' declared const here int func(state_t &state, const bool is_max) ~~~~~~~~~^~~~~ 1 error generated. ``` It is claiming that state is a `const tt &` type. The signature of the templated function is `int func(state_t &state, const bool is_max)`, and there is no `const` in front of the `state_t`. It appears the `const` is somehow being deduced from the recursive call because `actions` is a const-ref vector of `tt` objects. I thought argument deduction ignores `const`? How can this occur?
2020/02/24
[ "https://Stackoverflow.com/questions/60369740", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8869570/" ]
**This answer is mainly extracted from Scott Meyers' *Effective Modern C++* book (Item 1).** ``` template<typename T> void f(ParamType param); f(expr); // deduce T and ParamType from expr ``` **ParamType is a Reference or Pointer, but not a Universal Reference** The simplest situation is when ParamType is a reference type or a pointer type, but not a universal reference. In that case, type deduction works like this: * If expr's type is a reference, ignore the reference part. * Then pattern-match expr's type against ParamType to determine T. *In the argument deduction process, only the reference part is ignored, not the `const` part.* In your case it is `const auto &actions = state.actions;`, which means that for the template argument deduction of `auto soln = func(actions[0], false);` only the reference part is dropped, not the *cv qualifiers*. Further examples from the book: ``` template<typename T> void f(T& param); // param is a reference ``` and we have these variable declarations: ``` int x = 27; // x is an int const int cx = x; // cx is a const int const int& rx = x; // rx is a reference to x as a const int ``` The deduced types for param and T in various calls are as follows: ``` f(x); // T is int, param's type is int& f(cx); // T is const int, // param's type is const int& f(rx); // T is const int, // param's type is const int& ```
In addition to @aep's answer: even if it had been deduced as `tt` instead of `const tt`, the compiler would still generate an error, because it is not possible to bind a non-const reference to a `const` object without a `const_cast`. ``` #include<iostream> #include<vector> class tt { public: bool player; std::vector<tt> actions; }; void try_to_bind_const_reference_to_non_const_reference( tt& t ) { } template<typename state_t> int func(state_t &state, const bool is_max) { state.player = true; const auto &actions = state.actions; if(state.actions.size()) { // auto soln = func(actions[0], false); try_to_bind_const_reference_to_non_const_reference( actions[0] ); } return 0; } int main(int argc, char const *argv[]) { tt root; func(root, true); return 0; } ``` [run online](https://onlinegdb.com/B1ds8kWE8)
28,016,571
Although undocumented, conventional wisdom using the Android BLE APIs is that certain operations like reading/writing Characteristics & Descriptors should be done one at a time (although some devices are more lenient than others). However, I am not clear on whether this policy should apply only to a single connection, or across all active connections. I've heard that it's best to initiate connections to devices one at a time. That might be an example of operations (connect / connectGatt) which should be executed serially among all devices. But for other operations, like reading and writing Characteristics, is it good enough if each connection executes operations serially, or do I need some global operation queue shared among all devices so that, between all devices, only one operation is executing?
2015/01/19
[ "https://Stackoverflow.com/questions/28016571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4468153/" ]
While I cannot speak for the upper layer, I can describe what happens at the hardware level, which may provide some insight for your design. Whatever the upper-layer stack is doing, in the end every operation has to be handled by the transceiver chip. BLE operates over a 40-channel band, in which 3 channels are used for broadcast and the others for data transmission. This allows a multitude of devices to communicate together while limiting collisions, since they can be on different frequency bands; the bands used are selected based on which have the lowest noise (or traffic). The transceiver itself can only communicate (speak and listen) on one band at a time and has to switch between bands to reach other devices, which is done with very tight timing of the communication. Another fact is that a wireless transceiver is basically a half-duplex channel with collision detection: it cannot send and listen at the same time, nor can two devices emit at the same time on the same band. It is therefore serial/sequential by design (and by the laws of nature). Whether you implement an operation queue or a threaded design, in the end everything will be handled serially/sequentially by the transceiver. If you access it from different threads, the transceiver may have to jump between channels all the time, or may get confused if this is not handled well at the upper level. The only good reason I can see for a threaded approach would be if the transceiver's processing time were significantly lower than that of the upper stack, so you could take advantage of a multi-core processor. Otherwise, unless you have a very specific software need or architecture, I do not believe you will gain significantly from anything other than a serial implementation, and for the considerations explained above I would also speak to the slaves one by one rather than all at the same time.
BLE is designed to be asynchronous and event driven. You can send the commands however you like and you will get the responses back in no particular order. If you send a command and expect the next packet to be the response, you're going to get into trouble. This being said, I'm not sure how the Android library is structured around this.
28,016,571
Although undocumented, conventional wisdom using the Android BLE APIs is that certain operations like reading/writing Characteristics & Descriptors should be done one at a time (although some devices are more lenient than others). However, I am not clear on whether this policy should apply only to a single connection, or across all active connections. I've heard that it's best to initiate connections to devices one at a time. That might be an example of operations (connect / connectGatt) which should be executed serially among all devices. But for other operations, like reading and writing Characteristics, is it good enough if each connection executes operations serially, or do I need some global operation queue shared among all devices so that, between all devices, only one operation is executing?
2015/01/19
[ "https://Stackoverflow.com/questions/28016571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4468153/" ]
On Android, *per BluetoothGatt object* you should only execute one operation at a time (request MTU, discover services, read/write characteristic/descriptor), otherwise things will go wrong. You have to wait until the corresponding callback gets called before you can execute the next operation. Regarding having pending connections to multiple devices at the same time: if you use autoConnect=true then there is no problem, but if you use autoConnect=false then Android's Bluetooth stack will only attempt to connect to one device at a time, meaning it will enqueue the connection requests if there is more than one outstanding. There is one particular bug where it fails to cancel a pending connection that is still in the queue (when you call .disconnect() or .close()); however, that was recently fixed in Android. Note that there is also a maximum number of connections/pending connections/gatt objects, and the behaviour when you exceed these limits is completely undocumented. In the best case you simply get a callback with an error status, but in some cases I've seen the Android Bluetooth stack get stuck in an endless loop where in each iteration it tells the Bluetooth controller to connect to a device, and the controller sends back the error code "maximum connections reached".
BLE is designed to be asynchronous and event driven. You can send the commands however you like and you will get the responses back in no particular order. If you send a command and expect the next packet to be the response, you're going to get into trouble. This being said, I'm not sure how the Android library is structured around this.
28,016,571
Although undocumented, conventional wisdom using the Android BLE APIs is that certain operations like reading/writing Characteristics & Descriptors should be done one at a time (although some devices are more lenient than others). However, I am not clear on whether this policy should apply only to a single connection, or across all active connections. I've heard that it's best to initiate connections to devices one at a time. That might be an example of operations (connect / connectGatt) which should be executed serially among all devices. But for other operations, like reading and writing Characteristics, is it good enough if each connection executes operations serially, or do I need some global operation queue shared among all devices so that, between all devices, only one operation is executing?
2015/01/19
[ "https://Stackoverflow.com/questions/28016571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4468153/" ]
On Android, *per BluetoothGatt object* you should only execute one operation at a time (request MTU, discover services, read/write characteristic/descriptor), otherwise things will go wrong. You have to wait until the corresponding callback gets called before you can execute the next operation. Regarding having pending connections to multiple devices at the same time: if you use autoConnect=true then there is no problem, but if you use autoConnect=false then Android's Bluetooth stack will only attempt to connect to one device at a time, meaning it will enqueue the connection requests if there is more than one outstanding. There is one particular bug where it fails to cancel a pending connection that is still in the queue (when you call .disconnect() or .close()); however, that was recently fixed in Android. Note that there is also a maximum number of connections/pending connections/gatt objects, and the behaviour when you exceed these limits is completely undocumented. In the best case you simply get a callback with an error status, but in some cases I've seen the Android Bluetooth stack get stuck in an endless loop where in each iteration it tells the Bluetooth controller to connect to a device, and the controller sends back the error code "maximum connections reached".
While I cannot speak for the upper layer, I can describe what happens at the hardware level, which may provide some insight for your design. Whatever the upper-layer stack is doing, in the end every operation has to be handled by the transceiver chip. BLE operates over a 40-channel band, in which 3 channels are used for broadcast and the others for data transmission. This allows a multitude of devices to communicate together while limiting collisions, since they can be on different frequency bands; the bands used are selected based on which have the lowest noise (or traffic). The transceiver itself can only communicate (speak and listen) on one band at a time and has to switch between bands to reach other devices, which is done with very tight timing of the communication. Another fact is that a wireless transceiver is basically a half-duplex channel with collision detection: it cannot send and listen at the same time, nor can two devices emit at the same time on the same band. It is therefore serial/sequential by design (and by the laws of nature). Whether you implement an operation queue or a threaded design, in the end everything will be handled serially/sequentially by the transceiver. If you access it from different threads, the transceiver may have to jump between channels all the time, or may get confused if this is not handled well at the upper level. The only good reason I can see for a threaded approach would be if the transceiver's processing time were significantly lower than that of the upper stack, so you could take advantage of a multi-core processor. Otherwise, unless you have a very specific software need or architecture, I do not believe you will gain significantly from anything other than a serial implementation, and for the considerations explained above I would also speak to the slaves one by one rather than all at the same time.
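The "one GATT operation at a time, wait for the callback" discipline discussed in these answers can be modeled with a simple operation queue. This is a hedged sketch in Python rather than Android code — the class and method names here are invented for illustration, not part of any BLE API:

```python
from collections import deque

class GattOperationQueue:
    """Serializes BLE-style operations: at most one is in flight,
    and the next is dispatched only when the previous one's callback fires."""
    def __init__(self):
        self.pending = deque()
        self.busy = False
        self.log = []

    def enqueue(self, name):
        self.pending.append(name)
        self._dispatch()

    def _dispatch(self):
        if self.busy or not self.pending:
            return
        self.busy = True
        op = self.pending.popleft()
        self.log.append("start:" + op)  # hand the operation to the stack

    def on_callback(self, op):
        # Called when the stack reports that the operation finished.
        self.log.append("done:" + op)
        self.busy = False
        self._dispatch()

q = GattOperationQueue()
q.enqueue("readCharacteristic")
q.enqueue("writeDescriptor")        # queued, not started yet
q.on_callback("readCharacteristic") # completion triggers the next dispatch
print(q.log)
```

Whether such a queue should be per-connection or global across all devices is exactly the question above; the structure is the same either way, only the scope of the `busy` flag changes.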
14,319,795
I have created a jsFiddle to demonstrate my problem: > > <http://jsfiddle.net/MXt8d/1/> > > > ``` .outer { display: inline-block; vertical-align: top; overflow: visible; position: relative; width: 100%; height: 100%; background: red; } .inner { overflow: hidden; height: 50%; width: 100%; margin-top: 25%; margin-bottom: 25%; background: blue; opacity: 0.7; color: white; } <div class="outer"> <div class="inner"></div> </div> ``` The thing is that I need to vertically center a div inside another. I specify the height of the inner div in % (e.g. 50%) and then set the margin-top and margin-bottom to the remainder (e.g. (100 - 50) / 2 = 25%). But as you see in the jsFiddle, it's not working as intended. Calculating the margins from the parent works, but that's not possible for me, because I don't have access to the div's parent, as the element's style object is bound to the object via knockout.js and it's not as simple as shown in the jsFiddle. Hope anyone can help me :-) bj99 **Update:** Just found out why this is actually happening, so I'll post it here for people with similar problems: > > From <http://www.w3.org/TR/CSS2/box.html#propdef-margin-top> : > > > 'margin-top', 'margin-bottom' > Percentages: refer to ***width*** of containing block > > > And not, as I thought, to the **height** :-/
2013/01/14
[ "https://Stackoverflow.com/questions/14319795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1115457/" ]
For the `.inner` element: 1) Add `position:absolute` 2) Remove the `margin-top` and `margin-bottom` properties 3) Add `top:25%` That's it!
Here is a solution to your problem. I hope it helps: ``` .inner { overflow: hidden; height: 50%; width: 100%; top:0; bottom:0; left:0; right:0; position: absolute; margin: auto; background: blue; opacity: 0.7; color: white; } ```
14,319,795
I have created a jsFiddle to demonstrate my problem: > > <http://jsfiddle.net/MXt8d/1/> > > > ``` .outer { display: inline-block; vertical-align: top; overflow: visible; position: relative; width: 100%; height: 100%; background: red; } .inner { overflow: hidden; height: 50%; width: 100%; margin-top: 25%; margin-bottom: 25%; background: blue; opacity: 0.7; color: white; } <div class="outer"> <div class="inner"></div> </div> ``` The thing is that I need to vertically center a div inside another. I specify the height of the inner div in % (e.g. 50%) and then set the margin-top and margin-bottom to the remainder (e.g. (100 - 50) / 2 = 25%). But as you see in the jsFiddle, it's not working as intended. Calculating the margins from the parent works, but that's not possible for me, because I don't have access to the div's parent, as the element's style object is bound to the object via knockout.js and it's not as simple as shown in the jsFiddle. Hope anyone can help me :-) bj99 **Update:** Just found out why this is actually happening, so I'll post it here for people with similar problems: > > From <http://www.w3.org/TR/CSS2/box.html#propdef-margin-top> : > > > 'margin-top', 'margin-bottom' > Percentages: refer to ***width*** of containing block > > > And not, as I thought, to the **height** :-/
2013/01/14
[ "https://Stackoverflow.com/questions/14319795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1115457/" ]
For the `.inner` element: 1) Add `position:absolute` 2) Remove the `margin-top` and `margin-bottom` properties 3) Add `top:25%` That's it!
There are various solutions to your problem: 1) add `position:absolute` and `top:25%` on the inner element - **[Example](http://jsfiddle.net/MXt8d/3/)** 2) use `display:table` on the outer and `display:table-cell` on the inner element, this also allows vertical centering. - **[Example](http://jsfiddle.net/MXt8d/2/)** Each of the solutions has some caveats, I personally try to avoid absolute positionings wherever I can, but this is also up to personal preferences.
13,553,531
Is there any attribute to tell a (standard) NumberPicker to stop after its last value? E.g. if my MinValue was 0 and my MaxValue was 5 the NumberPicker just repeats itself after the 5, so that the user could scroll endlessly.
2012/11/25
[ "https://Stackoverflow.com/questions/13553531", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1786159/" ]
If you have set min/max values, try this: ``` yourNumberPicker.setWrapSelectorWheel(false); ``` Does this work for you? EDIT ---- For TimePicker: ``` timePicker.setOnTimeChangedListener(new TimePicker.OnTimeChangedListener() { public void onTimeChanged(TimePicker view, int hourOfDay, int minute) { if (hourOfDay > max) { view.setCurrentHour(max); } updateDisplay(hourOfDay, minute); } }); ``` It's not tested code, but this could be the way you do it.
This [Answer](https://stackoverflow.com/a/24963508/2630035) helped me: ``` public void updatePickerValues(String[] newValues){ picker.setDisplayedValues(null); picker.setMinValue(0); picker.setMaxValue(newValues.length -1); picker.setWrapSelectorWheel(false); picker.setDisplayedValues(newValues); } ``` Apparently the order matters.
9,088,608
I have 2 lists: List1: ```none ID 1 2 3 ``` List2: ```none ID Name 1 Jason 1 Jim 2 Mike 3 Phil ``` I'd like to join both of these but get only the first record from list2 for a given ID. The end result would be: ```none ID Name 1 Jason 2 Mike 3 Phil ``` I tried the following but was not successful: ``` var lst = (from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID).ToList().First(); ```
2012/01/31
[ "https://Stackoverflow.com/questions/9088608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996431/" ]
You can get this result with what the [101 LINQ Samples](http://code.msdn.microsoft.com/101-LINQ-Samples-3fb9811b) calls ["Cross Join with Group Join"](http://code.msdn.microsoft.com/LINQ-Join-Operators-dabef4e9). Combine that with `First()` to get just one item from the group. ``` var lst = ( from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID into lstGroup select lstGroup.First() ); ``` **Example:** <http://ideone.com/V0sRO>
Here's one way to do it: ```cs var lst = list1 // Select distinct IDs that are in both lists: .Where(lst1 => list2 .Select(lst2 => lst2.ID) .Contains(lst1.ID)) .Distinct() // Select items in list 2 matching the IDs above: .Select(lst1 => list2 .Where(lst2 => lst2.ID == lst1.ID) .First()); ``` **Example:** <http://ideone.com/6egSc>
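For comparison, the "join but keep only the first match per ID" result can be sketched outside LINQ in plain Python — hypothetical lists mirroring the question's data, not the C# code above:

```python
list1 = [1, 2, 3]
list2 = [(1, "Jason"), (1, "Jim"), (2, "Mike"), (3, "Phil")]

# Index list2 by ID, keeping only the first name seen per ID,
# then join list1 against that index.
first_by_id = {}
for id_, name in list2:
    first_by_id.setdefault(id_, name)

result = [(id_, first_by_id[id_]) for id_ in list1 if id_ in first_by_id]
print(result)
```

This prints `[(1, 'Jason'), (2, 'Mike'), (3, 'Phil')]` — the same dedup-then-join idea as the group-join answers, just made explicit with a dictionary.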
9,088,608
I have 2 lists: List1: ```none ID 1 2 3 ``` List2: ```none ID Name 1 Jason 1 Jim 2 Mike 3 Phil ``` I'd like to join both of these but get only the first record from list2 for a given ID. The end result would be: ```none ID Name 1 Jason 2 Mike 3 Phil ``` I tried the following but was not successful: ``` var lst = (from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID).ToList().First(); ```
2012/01/31
[ "https://Stackoverflow.com/questions/9088608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996431/" ]
Here's one way to do it: ```cs var lst = list1 // Select distinct IDs that are in both lists: .Where(lst1 => list2 .Select(lst2 => lst2.ID) .Contains(lst1.ID)) .Distinct() // Select items in list 2 matching the IDs above: .Select(lst1 => list2 .Where(lst2 => lst2.ID == lst1.ID) .First()); ``` **Example:** <http://ideone.com/6egSc>
Another way: ``` var query = from lst1 in list1 let first = list2.FirstOrDefault(f => f.Id == lst1.Id) where first != null select first; ``` Or if you wanted to know about items that could not be located in list2: ``` var query = from lst1 in list1 let first = list2.FirstOrDefault(f => f.Id == lst1.Id) select first ?? new { Id = 0, Name = "Not Found" }; ```
9,088,608
I have 2 lists: List1: ```none ID 1 2 3 ``` List2: ```none ID Name 1 Jason 1 Jim 2 Mike 3 Phil ``` I'd like to join both of these but get only the first record from list2 for a given ID. The end result would be: ```none ID Name 1 Jason 2 Mike 3 Phil ``` I tried the following but was not successful: ``` var lst = (from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID).ToList().First(); ```
2012/01/31
[ "https://Stackoverflow.com/questions/9088608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996431/" ]
You can get this result with what the [101 LINQ Samples](http://code.msdn.microsoft.com/101-LINQ-Samples-3fb9811b) calls ["Cross Join with Group Join"](http://code.msdn.microsoft.com/LINQ-Join-Operators-dabef4e9). Combine that with `First()` to get just one item from the group. ``` var lst = ( from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID into lstGroup select lstGroup.First() ); ``` **Example:** <http://ideone.com/V0sRO>
Try grouping list2 by ID first and then selecting the first item from each group. After that, do the join and select what you want. ``` var uniqueIDList2 = list2.GroupBy(p => p.ID) .Select(p => p.First()); var result = from lst1 in list1 join lst2 in uniqueIDList2 on lst1.ID equals lst2.ID select new { lst1.ID, lst2.Name }; ```
9,088,608
I have 2 lists: List1: ```none ID 1 2 3 ``` List2: ```none ID Name 1 Jason 1 Jim 2 Mike 3 Phil ``` I'd like to join both of these but get only the first record from list2 for a given ID. The end result would be: ```none ID Name 1 Jason 2 Mike 3 Phil ``` I tried the following but was not successful: ``` var lst = (from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID).ToList().First(); ```
2012/01/31
[ "https://Stackoverflow.com/questions/9088608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996431/" ]
Try grouping list2 by ID first and then selecting the first item from each group. After that, do the join and select what you want. ``` var uniqueIDList2 = list2.GroupBy(p => p.ID) .Select(p => p.First()); var result = from lst1 in list1 join lst2 in uniqueIDList2 on lst1.ID equals lst2.ID select new { lst1.ID, lst2.Name }; ```
Another way: ``` var query = from lst1 in list1 let first = list2.FirstOrDefault(f => f.Id == lst1.Id) where first != null select first; ``` Or if you wanted to know about items that could not be located in list2: ``` var query = from lst1 in list1 let first = list2.FirstOrDefault(f => f.Id == lst1.Id) select first ?? new { Id = 0, Name = "Not Found" }; ```
9,088,608
I have 2 lists: List1: ```none ID 1 2 3 ``` List2: ```none ID Name 1 Jason 1 Jim 2 Mike 3 Phil ``` I'd like to join both of these but get only the first record from list2 for a given ID. The end result would be: ```none ID Name 1 Jason 2 Mike 3 Phil ``` I tried the following but was not successful: ``` var lst = (from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID).ToList().First(); ```
2012/01/31
[ "https://Stackoverflow.com/questions/9088608", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996431/" ]
You can get this result with what the [101 LINQ Samples](http://code.msdn.microsoft.com/101-LINQ-Samples-3fb9811b) calls ["Cross Join with Group Join"](http://code.msdn.microsoft.com/LINQ-Join-Operators-dabef4e9). Combine that with `First()` to get just one item from the group. ``` var lst = ( from lst1 in list1 join lst2 in list2 on lst1.ID equals lst2.ID into lstGroup select lstGroup.First() ); ``` **Example:** <http://ideone.com/V0sRO>
Another way: ``` var query = from lst1 in list1 let first = list2.FirstOrDefault(f => f.Id == lst1.Id) where first != null select first; ``` Or if you wanted to know about items that could not be located in list2: ``` var query = from lst1 in list1 let first = list2.FirstOrDefault(f => f.Id == lst1.Id) select first ?? new { Id = 0, Name = "Not Found" }; ```
58,203,700
I have a string: ``` a = '"{""key1"": ""val1"", ""key2"":""val2""}"' ``` What is the most appropriate way to convert this to a dictionary in Python? Plain `json.loads(a)` cannot decipher this format. **EDIT:** This weird JSON string is created when I read a CSV with one "json-like" column.
2019/10/02
[ "https://Stackoverflow.com/questions/58203700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4174013/" ]
I don't recognize this kind of JSON format, but you can use the following function: ``` import json def load_weird_json(json_string): a = json_string.replace('""', '"') a = a[1:len(a)-1] return json.loads(a) ```
Try ``` import json a = '"{""key1"": ""val1"", ""key2"":""val2""}"' a = a.replace('""','"').replace('}"',"}").replace('"{','{') data = json.loads(a) print(data) ``` output ``` {'key1': 'val1', 'key2': 'val2'} ```
58,203,700
I have a string: ``` a = '"{""key1"": ""val1"", ""key2"":""val2""}"' ``` What is the most appropriate way to convert this to a dictionary in Python? Plain `json.loads(a)` cannot decipher this format. **EDIT:** This weird JSON string is created when I read a CSV with one "json-like" column.
2019/10/02
[ "https://Stackoverflow.com/questions/58203700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4174013/" ]
Assuming the string came from a CSV file, use `csv` to decode it before passing the result to `json` for decoding. ``` >>> import io, csv, json >>> a = '"{""key1"": ""val1"", ""key2"":""val2""}"' >>> csv_file_like = io.StringIO(a) >>> reader = csv.reader(csv_file_like) >>> result = list(reader) >>> json.loads(result[0][0]) {'key1': 'val1', 'key2': 'val2'} ``` This is a little simpler if `a` was already set by reading from a CSV file; you can skip using `io` to create a file-like object from `a` and use `csv.reader` directly on the original CSV file.
I don't recognize this kind of JSON format, but you can use the following function: ``` import json def load_weird_json(json_string): a = json_string.replace('""', '"') a = a[1:len(a)-1] return json.loads(a) ```
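The doubled quotes in the string are ordinary CSV escaping, so the CSV-then-JSON decoding approach condenses to a short, runnable sketch using only the standard library:

```python
import csv, io, json

a = '"{""key1"": ""val1"", ""key2"":""val2""}"'

# Let the csv module undo the "" -> " escaping, then parse the JSON payload.
row = next(csv.reader(io.StringIO(a)))
data = json.loads(row[0])
print(data)
```

This prints `{'key1': 'val1', 'key2': 'val2'}`. Letting `csv` do the unescaping is more robust than string replacement, since it also handles values that legitimately contain braces or quotes.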
58,203,700
I have a string: ``` a = '"{""key1"": ""val1"", ""key2"":""val2""}"' ``` What is the most appropriate way to convert this to a dictionary in Python? Plain `json.loads(a)` cannot decipher this format. **EDIT:** This weird JSON string is created when I read a CSV with one "json-like" column.
2019/10/02
[ "https://Stackoverflow.com/questions/58203700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4174013/" ]
I don't recognize this kind of JSON format, but you can use the following function: ``` import json def load_weird_json(json_string): a = json_string.replace('""', '"') a = a[1:len(a)-1] return json.loads(a) ```
@chepner's answer actually sent me in the right direction. The string was loaded from a CSV, and I could effectively prevent this weird JSON from forming by using the answer from here: [spark 2.0 read csv with json](https://stackoverflow.com/questions/47176021/spark-2-0-read-csv-with-json), i.e. adding the escape-char `'\"'` option. Thanks for the answers - they all work :)
58,203,700
I have a string: ``` a = '"{""key1"": ""val1"", ""key2"":""val2""}"' ``` What is the most appropriate way to convert this to a dictionary in Python? Plain `json.loads(a)` cannot decipher this format. **EDIT:** This weird JSON string is created when I read a CSV with one "json-like" column.
2019/10/02
[ "https://Stackoverflow.com/questions/58203700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4174013/" ]
Assuming the string came from a CSV file, use `csv` to decode it before passing the result to `json` for decoding. ``` >>> import io, csv, json >>> a = '"{""key1"": ""val1"", ""key2"":""val2""}"' >>> csv_file_like = io.StringIO(a) >>> reader = csv.reader(csv_file_like) >>> result = list(reader) >>> json.loads(result[0][0]) {'key1': 'val1', 'key2': 'val2'} ``` This is a little simpler if `a` was already set by reading from a CSV file; you can skip using `io` to create a file-like object from `a` and use `csv.reader` directly on the original CSV file.
Try ``` import json a = '"{""key1"": ""val1"", ""key2"":"val2""}"' a = a.replace('""','"').replace('}"',"}").replace('"{','{') data = json.loads(a) print(data) ``` output ``` {'key1': 'val1', 'key2': 'val2'} ```
58,203,700
I have a string: ``` a = '"{""key1"": ""val1"", ""key2"":""val2""}"' ``` What is the most appropriate way to convert this to a dictionary in Python? Plain `json.loads(a)` cannot decipher this format. **EDIT:** This weird JSON string is created when I read a CSV with one "json-like" column.
2019/10/02
[ "https://Stackoverflow.com/questions/58203700", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4174013/" ]
Assuming the string came from a CSV file, use `csv` to decode it before passing the result to `json` for decoding. ``` >>> import io, csv, json >>> a = '"{""key1"": ""val1"", ""key2"":""val2""}"' >>> csv_file_like = io.StringIO(a) >>> reader = csv.reader(csv_file_like) >>> result = list(reader) >>> json.loads(result[0][0]) {'key1': 'val1', 'key2': 'val2'} ``` This is a little simpler if `a` was already set by reading from a CSV file; you can skip using `io` to create a file-like object from `a` and use `csv.reader` directly on the original CSV file.
@chepner's answer actually sent me in the right direction. The string was loaded from a CSV, and I could effectively prevent this weird JSON from forming by using the answer from here: [spark 2.0 read csv with json](https://stackoverflow.com/questions/47176021/spark-2-0-read-csv-with-json), i.e. adding the escape-char `'\"'` option. Thanks for the answers - they all work :)