| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
67,405,791
|
I just updated Android Studio to version 4.2. I was surprised to not see the Gradle tasks in my project.
In the previous version, 4.1.3, I could see the tasks as shown here:
[](https://i.stack.imgur.com/7fhMP.png)
But now I only see the dependencies in version 4.2:
[](https://i.stack.imgur.com/ScuhS.png)
I tried to clear Android Studio's cache and sync my project again, but there was no change.
Is this a feature change?
|
2021/05/05
|
[
"https://Stackoverflow.com/questions/67405791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4424400/"
] |
To check the list of available tasks, run the command below:
```
./gradlew tasks
```
You will get the list of available tasks. To execute a particular task, run the following:
```
./gradlew <taskname>
```
|
This solution worked for me:
Go to File -> Settings -> Experimental, uncheck "Do not build Gradle task list during Gradle sync", then sync the project via File -> Sync Project with Gradle Files.
|
67,405,791
|
I just updated Android Studio to version 4.2. I was surprised to not see the Gradle tasks in my project.
In the previous version, 4.1.3, I could see the tasks as shown here:
[](https://i.stack.imgur.com/7fhMP.png)
But now I only see the dependencies in version 4.2:
[](https://i.stack.imgur.com/ScuhS.png)
I tried to clear Android Studio's cache and sync my project again, but there was no change.
Is this a feature change?
|
2021/05/05
|
[
"https://Stackoverflow.com/questions/67405791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4424400/"
] |
To check the list of available tasks, run the command below:
```
./gradlew tasks
```
You will get the list of available tasks. To execute a particular task, run the following:
```
./gradlew <taskname>
```
|
In Android Studio, select File in the toolbar, then click Settings.
1. In Settings, select the last option, "Experimental".
2. Uncheck the option shown in the screenshot below.
3. Then click Apply.[](https://i.stack.imgur.com/1sj02.png)
4. After that, sync your project again via File -> Sync Project with Gradle Files.
All set :)
|
67,405,791
|
I just updated Android Studio to version 4.2. I was surprised to not see the Gradle tasks in my project.
In the previous version, 4.1.3, I could see the tasks as shown here:
[](https://i.stack.imgur.com/7fhMP.png)
But now I only see the dependencies in version 4.2:
[](https://i.stack.imgur.com/ScuhS.png)
I tried to clear Android Studio's cache and sync my project again, but there was no change.
Is this a feature change?
|
2021/05/05
|
[
"https://Stackoverflow.com/questions/67405791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4424400/"
] |
Go to `File -> Settings -> Experimental` and uncheck `Do not build Gradle task list during Gradle sync`, then sync the project via `File -> Sync Project with Gradle Files`. If the problem is still there, restart Android Studio.
1.[](https://i.stack.imgur.com/TYDcw.png)
2.
[](https://i.stack.imgur.com/mgURV.png)
|
To check the list of available tasks, run the command below:
```
./gradlew tasks
```
You will get the list of available tasks. To execute a particular task, run the following:
```
./gradlew <taskname>
```
|
4,499,728
|
I'm trying to get to grips with drawing (fairly basic) shapes in Cocoa. I understand how to create paths with straight edges (duh!), but when it comes to doing curves, I just can't get my head around what inputs will produce what shaped curve. Specifically, I have no idea how the `controlPoint1:` and `controlPoint2:` arguments to the method influence the shape.
I'm trying to approximate the shape of a tab in Google Chrome:

And the code I'm using is:
```
-(void)drawRect:(NSRect)dirtyRect {
    NSSize size = [self bounds].size;
    CGFloat height = size.height;
    CGFloat width = size.width;

    NSBezierPath *path = [NSBezierPath bezierPath];
    [path setLineWidth:1];
    [path moveToPoint:NSMakePoint(0, 0)];
    [path curveToPoint:NSMakePoint(width * 0.1, height)
         controlPoint1:NSMakePoint(width * 0.05, height)
         controlPoint2:NSMakePoint(width * 0.03, height * 0.05)];
    [path lineToPoint:NSMakePoint(width * 0.9, height)];
    [path curveToPoint:NSMakePoint(width, 0)
         controlPoint1:NSMakePoint(width * 0.95, height)
         controlPoint2:NSMakePoint(width * 0.97, height * 0.05)];
    [path closePath];

    [[NSColor colorWithCalibratedWhite:0.98 alpha:1] set];
    [path fill];
    [[NSColor colorWithCalibratedWhite:0.6 alpha:1] set];
    [path stroke];
}
```
I'm failing horribly.
See, this is why we can't have nice things :(

Can anyone give me some pointers about how to think when it comes to drawing curves? An example that draws this path would be great too, but really it's just understanding these inputs to `curveToPoint:controlPoint1:controlPoint2:` that's holding me back.
UPDATE: Thanks to @Ahruman's answer I finally got it to start taking shape. It's not 100% there yet (it's missing the curves on the bottom corners), but at least it's a symmetrical shape now :)

|
2010/12/21
|
[
"https://Stackoverflow.com/questions/4499728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/322122/"
] |
The line between the current drawing point (implicit) and control point 1 is the tangent of the curve at its beginning. The line between control point 2 and the “to” point is the tangent at the end of the curve. These correspond to the ends of the two tangent controls you see in any vector drawing application with Bézier paths. If you haven’t used one, Inkscape is free.
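To make the tangent rule concrete, here is a minimal Python sketch (an illustration, not Cocoa code) that evaluates a cubic Bézier curve and its derivative. It shows that the curve leaves the start point heading toward control point 1 and arrives at the end point coming from control point 2, using roughly the first curve from the question for a 100×30 view.
```python
# Minimal sketch: a cubic Bezier segment defined by start point p0, control
# points p1 and p2, and end point p3 (all 2-D tuples). Names are illustrative.

def bezier(p0, p1, p2, p3, t):
    """Evaluate the cubic Bezier curve at parameter t in [0, 1]."""
    mt = 1.0 - t
    return tuple(
        mt**3 * a + 3 * mt**2 * t * b + 3 * mt * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def bezier_derivative(p0, p1, p2, p3, t):
    """Derivative (tangent vector) of the cubic Bezier curve at parameter t."""
    mt = 1.0 - t
    return tuple(
        3 * mt**2 * (b - a) + 6 * mt * t * (c - b) + 3 * t**2 * (d - c)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Roughly the first curve from the question, for width=100, height=30.
p0, p1, p2, p3 = (0, 0), (5, 30), (3, 1.5), (10, 30)

# At t=0 the derivative is 3*(p1 - p0): the curve leaves p0 heading toward p1.
print(bezier_derivative(p0, p1, p2, p3, 0.0))   # (15.0, 90.0)
# At t=1 the derivative is 3*(p3 - p2): the curve arrives at p3 coming from p2.
print(bezier_derivative(p0, p1, p2, p3, 1.0))   # (21.0, 85.5)
print(bezier(p0, p1, p2, p3, 0.5))              # a point mid-way along the curve
```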
|
The image below illustrates the accepted answer,

>
> control point 1 is the tangent of the curve at its beginning.
>
>
>
This would be the dotted line from the Start Point to Control Point 1 in the Bézier curve.
>
> The line between control point 2 and the “to” point is the tangent at the end of the curve
>
>
>
This is the dotted line between Endpoint and Control Point 2.
This image is from [official Apple documentation](https://developer.apple.com/library/ios/documentation/2ddrawing/conceptual/drawingprintingios/BezierPaths/BezierPaths.html).
|
11,814,425
|
I'm implementing a segment tree from an array of data, and I also want to maintain the max/min of the tree while updating a range of data. Here is my initial approach, following this tutorial: <http://p--np.blogspot.com/2011/07/segment-tree.html>.
Unfortunately it doesn't work at all. The logic makes sense to me, but I'm a little confused about `b` and `e`: are they the range of the `data` array, or the actual range of the tree? From what I understand, `max_segment_tree[1]` should hold the `max` of the range `[1, MAX_RANGE]` while `min_segment_tree[1]` should hold the `min` of the range `[1, MAX_RANGE]`.
```
int data[MAX_RANGE];
int max_segment_tree[3 * MAX_RANGE + 1];
int min_segment_tree[3 * MAX_RANGE + 1];
void build_tree(int position, int left, int right) {
if (left > right) {
return;
}
else if (left == right) {
max_segment_tree[position] = data[left];
min_segment_tree[position] = data[left];
return;
}
int middle = (left + right) / 2;
build_tree(position * 2, left, middle);
build_tree(position * 2 + 1, middle + 1, right);
max_segment_tree[position] = max(max_segment_tree[position * 2], max_segment_tree[position * 2 + 1]);
min_segment_tree[position] = min(min_segment_tree[position * 2], min_segment_tree[position * 2 + 1]);
}
void update_tree(int position, int b, int e, int i, int j, int value) {
if (b > e || b > j || e < i) {
return;
}
if (i <= b && j >= e) {
max_segment_tree[position] += value;
min_segment_tree[position] += value;
return;
}
update_tree(position * 2 , b , (b + e) / 2 , i, j, value);
update_tree(position * 2 + 1 , (b + e) / 2 + 1 , e , i, j, value);
max_segment_tree[position] = max(max_segment_tree[position * 2], max_segment_tree[position * 2 + 1]);
min_segment_tree[position] = min(min_segment_tree[position * 2], min_segment_tree[position * 2 + 1]);
}
```
**EDIT**
Adding test cases:
```
#include <iostream>
#include <iomanip>
#include <vector>
#include <string>
#include <algorithm>
#include <map>
#include <set>
#include <utility>
#include <stack>
#include <deque>
#include <queue>
#include <fstream>
#include <functional>
#include <numeric>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cmath>
#include <cassert>
using namespace std;
const int MAX_RANGE = 20;
int data[MAX_RANGE];
int max_segment_tree[2 * MAX_RANGE];
int min_segment_tree[2 * MAX_RANGE];
int added_to_interval[2 * MAX_RANGE] = {0};
void update_bruteforce(int x, int y, int z, int &smallest, int &largest) {
for (int i = x - 1; i < y; ++i) {
data[i] += z;
}
// update min/max
smallest = data[0];
largest = data[0];
for (int i = 0; i < MAX_RANGE; ++i) {
if (data[i] < smallest) {
smallest = data[i];
}
if (data[i] > largest) {
largest = data[i];
}
}
}
void build_tree(int position, int left, int right) {
if (left > right) {
return;
}
else if (left == right) {
max_segment_tree[position] = data[left];
min_segment_tree[position] = data[left];
return;
}
int middle = (left + right) / 2;
build_tree(position * 2, left, middle);
build_tree(position * 2 + 1, middle + 1, right);
max_segment_tree[position] = max(max_segment_tree[position * 2], max_segment_tree[position * 2 + 1]);
min_segment_tree[position] = min(min_segment_tree[position * 2], min_segment_tree[position * 2 + 1]);
}
void update_tree(int position, int b, int e, int i, int j, int value) {
if (b > e || b > j || e < i) {
return;
}
if (i <= b && e <= j) {
max_segment_tree[position] += value;
min_segment_tree[position] += value;
added_to_interval[position] += value;
return;
}
update_tree(position * 2 , b , (b + e) / 2 , i, j, value);
update_tree(position * 2 + 1 , (b + e) / 2 + 1 , e , i, j, value);
max_segment_tree[position] = max(max_segment_tree[position * 2], max_segment_tree[position * 2 + 1]) + added_to_interval[position];
min_segment_tree[position] = min(min_segment_tree[position * 2], min_segment_tree[position * 2 + 1]) + added_to_interval[position];
}
void update(int x, int y, int value) {
// memset(added_to_interval, 0, sizeof(added_to_interval));
update_tree(1, 0, MAX_RANGE - 1, x - 1, y - 1, value);
}
namespace unit_test {
void test_show_data() {
for (int i = 0; i < MAX_RANGE; ++i) {
cout << data[i] << ", ";
}
cout << endl << endl;
}
void test_brute_force_and_segment_tree() {
// arrange
int number_of_operations = 100;
for (int i = 0; i < MAX_RANGE; ++i) {
data[i] = i + 1;
}
build_tree(1, 0, MAX_RANGE - 1);
// act
int operation;
int x;
int y;
int z;
int smallest = 1;
int largest = MAX_RANGE;
// assert
while (number_of_operations--) {
operation = rand() % 1;
x = 1 + rand() % MAX_RANGE;
y = x + (rand() % (MAX_RANGE - x + 1));
z = 1 + rand() % MAX_RANGE;
if (operation == 0) {
z *= 1;
}
else {
z *= -1;
}
cout << "left, right, value: " << x - 1 << ", " << y - 1 << ", " << z << endl;
update_bruteforce(x, y, z, smallest, largest);
update(x, y, z);
test_show_data();
cout << "correct:\n";
cout << "\tsmallest = " << smallest << endl;
cout << "\tlargest = " << largest << endl;
cout << "possibly correct:\n";
cout << "\tsmallest = " << min_segment_tree[1] << endl;
cout << "\tlargest = " << max_segment_tree[1] << endl;
cout << "\n--------------------------------------------------------------\n";
cin.get();
}
}
}
int main() {
unit_test::test_brute_force_and_segment_tree();
}
```
|
2012/08/05
|
[
"https://Stackoverflow.com/questions/11814425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/398398/"
] |
You need to store separately the max/min for each interval, AND what values have been added to it (just their sum). Here's how it could go wrong:
Suppose we're building a tree (I'll only show the min tree here) for the array [5, 1, 3, 7]. The tree looks like this:
```
1
1 3
5 1 3 7
```
Then we add 1 to the whole interval. The tree looks like this:
```
2
1 3
5 1 3 7
```
because the propagation has stopped on the first node since the updated interval covers it completely.
Then add 1 to the range [0-1]. This range does not cover the whole interval of the first node, so we update the children, and then set the min for the whole interval (that is, the value of the first node) to be the min of nodes 2 and 3. Here is the resulting tree:
```
2
2 3
5 1 3 7
```
And here is where it goes wrong - there is no element 2 in the array, yet the tree claims that the min of the whole array is 2. This happens because the lower levels of the tree never actually get the information that their values have been increased - the second node isn't aware of the fact that its values are not [5, 1] but rather [6, 2].
In order to make it work correctly, you can add a third array that keeps the values that have been added to whole intervals - say, `int added_to_interval[3 * MAX_RANGE + 1];`. Then, when you're updating a whole interval (the case where `i <= b && j >= e`), you also have to increment `added_to_interval[position]` by `value`. Also, when going up the tree to update the nodes from the values of their children, you have to add whatever has been added to the whole interval (e.g. `max_segment_tree[position] = max(max_segment_tree[position * 2], max_segment_tree[position * 2 + 1]) + added_to_interval[position];`).
**EDIT:**
Here are the changes to the code to make it working:
```
if (i <= b && j >= e) {
max_segment_tree[position] += value;
min_segment_tree[position] += value;
added_to_interval[position] += value;
return;
}
```
...
```
update_tree(position * 2 , b , (b + e) / 2 , i, j, value);
update_tree(position * 2 + 1 , (b + e) / 2 + 1 , e , i, j, value);
max_segment_tree[position] = max(max_segment_tree[position * 2], max_segment_tree[position * 2 + 1]) + added_to_interval[position];
min_segment_tree[position] = min(min_segment_tree[position * 2], min_segment_tree[position * 2 + 1]) + added_to_interval[position];
```
I haven't tested it extensively - I'm leaving that to you, but I tried a bunch of examples that seemed to work correctly.
Also, regarding array sizes: 3 \* MAX\_RANGE + 1 elements is plenty here, but note that 2 \* MAX\_RANGE is generally *not* enough for this recursive layout unless MAX\_RANGE is a power of two; 4 \* MAX\_RANGE is the usual safe bound.
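For readers who want to replay the walk-through, here is a small self-contained Python sketch of the min tree with the extra per-node "added" array described above (an illustrative re-implementation with my own variable names, not the poster's C++). After the two updates from the example, the root correctly reports a minimum of 3.
```python
# Illustrative min segment tree with range-add, mirroring the fix described
# above: every node also remembers how much has been added to its whole interval.
data = [5, 1, 3, 7]
n = len(data)
min_tree = [0] * (4 * n)
added = [0] * (4 * n)            # total value added to the node's entire interval

def build(pos, left, right):
    if left == right:
        min_tree[pos] = data[left]
        return
    mid = (left + right) // 2
    build(2 * pos, left, mid)
    build(2 * pos + 1, mid + 1, right)
    min_tree[pos] = min(min_tree[2 * pos], min_tree[2 * pos + 1])

def update(pos, b, e, i, j, value):
    if b > e or b > j or e < i:
        return
    if i <= b and e <= j:                       # node interval fully covered
        min_tree[pos] += value
        added[pos] += value
        return
    mid = (b + e) // 2
    update(2 * pos, b, mid, i, j, value)
    update(2 * pos + 1, mid + 1, e, i, j, value)
    # re-derive from the children, then re-apply what was added to this node
    min_tree[pos] = min(min_tree[2 * pos], min_tree[2 * pos + 1]) + added[pos]

build(1, 0, n - 1)
update(1, 0, n - 1, 0, 3, 1)     # add 1 to the whole array: [6, 2, 4, 8]
print(min_tree[1])               # 2
update(1, 0, n - 1, 0, 1, 1)     # add 1 to indices [0, 1]:  [7, 3, 4, 8]
print(min_tree[1])               # 3, not the incorrect 2 from the buggy version
```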
|
**[b, e]** is the range covered by **\*\_segment\_tree[ position ]**, and **[i, j]** is the currently queried range.
About range storage:
**\*\_segment\_tree[ 1 ]** holds the **max/min** of the whole data array - it is the root of the tree, because an array-based binary tree has to be indexed from **1**: the children of the **n**-th node are numbered **2\*n** and **2\*n + 1**, and **0** cannot be used as **n**, because in that case **2\*n = n**. Consequently, if **\*\_segment\_tree[k]** holds the min/max of **data[b, e]**, then **\*\_segment\_tree[ 2\*k ]** holds the min/max of **data[ b, ( b + e ) / 2 ]** and **\*\_segment\_tree[ 2\*k + 1 ]** that of **data[ ( b + e ) / 2 + 1, e ]** - you can see these indices in the code.
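As a quick illustration of that numbering (not part of the original answer), here is a tiny Python sketch that prints, for each node index, the [b, e] slice of the data it covers, using exactly the child/index arithmetic described above.
```python
# Print which sub-range of the data array each tree node covers, using
# 1-based node indices with children at 2*k and 2*k + 1.
def show_ranges(k, b, e, depth=0):
    print("  " * depth + f"node {k}: data[{b}..{e}]")
    if b == e:
        return
    mid = (b + e) // 2
    show_ranges(2 * k, b, mid, depth + 1)          # left child covers [b, (b+e)/2]
    show_ranges(2 * k + 1, mid + 1, e, depth + 1)  # right child covers [(b+e)/2 + 1, e]

show_ranges(1, 0, 7)   # root (index 1) covers the whole array data[0..7]
```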
|
10,307,209
|
I am trying to write a function that takes three c-style strings, and returns a c-style string. This function searches a c-string for all occurrences of the sub-string and replaces them with a different string.
This program works but seems very inelegant. I can't help feeling that it could have been done in a less bulky way.
```
char* replaceSubstring(char *original, char *from, char *to)
{
int origlen = strlen(original);
int i = 0;
int count = 0;
char *ptr;
//figure out how many times the sub-string occurs in a string.
//i couldn't figure out a way to avoid this loop
while (i<origlen)
{
ptr = strstr(original+i, from);
if (!ptr)
break;
else
{
i = ptr - original + 1;
count++;
}
}
//figure out what the size of the output string has to be
int newsize = origlen + (strlen(to) - strlen(from)) * count;
char *newstring = new char[newsize];
newstring[0] = '\0';
i = 0;
while (i < origlen)
{
ptr = strstr(original+i, from);
if (!ptr)
{
strcat(newstring,original+i);
break;
}
else
{
//this looks extremely ugly and bulky...
strncat(newstring, original+i, ptr-(original+i));
strcat(newstring, to);
i = i + ptr - (original + i) + strlen(from);
}
}
strcat(newstring,"\0");
return newstring;
}
```
Would anyone have any suggestions on how to make this code clearer and/or more efficient?
Any comments are welcome.
Please do not suggest using the string class instead. That is not an option; the function must work with C-strings.
|
2012/04/24
|
[
"https://Stackoverflow.com/questions/10307209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1354866/"
] |
One improvement that would probably help both elegance and efficiency simultaneously:
1. Allocate an array of integers that will hold the indices of the substrings that match the given string.
2. Loop through the string and find all the matching substrings, and add each to the array, reallocating the array larger as needed (because you don't want to use the STL I presume; if you can, use `std::vector` `std::list` `std::deque`).
3. Allocate new memory for the modified string based on the length of the original string and how many substrings you found.
4. Iterate the old string and the array simultaneously, copying the non-matched parts from the old string to the new.
5. Fill in the holes you left with the replacement string.
Also, instead of allocating memory dynamically inside the function, I would change it to accept a caller-allocated buffer and maximum buffer size instead. This way the caller can be completely responsible for the lifetime of the memory (utilising automatic memory if they want/can) and you don't have to worry about calculating a buffer size (you rely on the caller for that).
---
EDIT:
Here is an example implementation I whipped up. Please let me know if anyone finds any errors, which is likely. (You might not want to read this if you want to figure it out yourself.)
```
#include <cstring>   // strlen, strstr, strcpy
#include <deque>

char* strreplace(const char* haystack, const char* needle, const char* replacement) {
    // using deque for pop_front
    std::deque<const char*> positions;
    unsigned int haystacklen = strlen(haystack),
                 needlelen = strlen(needle),
                 replacementlen = strlen(replacement);
    // record the start of every (non-overlapping) match
    for (const char* cur = haystack, *pos = strstr(cur, needle); pos; cur = pos + needlelen, pos = strstr(cur, needle))
        positions.push_back(pos);
    // upper bound on the result size, plus room for the terminating '\0'
    char* newstr = new char[haystacklen + replacementlen * positions.size() + 1],
        * dst = newstr;
    const char* src = haystack;
    while (src <= haystack + haystacklen)   // <= so the '\0' is copied as well
        if (!positions.empty() && src == positions.front()) {
            strcpy(dst, replacement);
            dst += replacementlen;
            src += needlelen;
            positions.pop_front();
        } else
            *dst++ = *src++;
    return newstr;
}
```
And don't forget to `delete[]` the return value of that function.
I went for efficiency without doing maximum optimisations. For instance, you could have a `while` loop that executed while `positions.empty()` was false, and then when it becomes true, just exit the loop and do a straight `strcpy` for the rest because there are no more replacements to be made, which would let you avoid unnecessarily calling `positions.empty()` for *every character even if there are no replacements to be made left, or at all*. But I think that is a small nit, and the code conveys the point.
Also, I used `std::deque` (rather than `std::list`) to remove all the array management code, but that should be straightforward if you want to do it yourself.
As ildjarn mentioned in the comments, I changed from `list` to `deque` because I use the `size` member and, per his comment, `list::size` is not guaranteed to be `O(1)` (it may be `O(n)`) on pre-C++11 implementations, so `deque`, with its constant-time `size`, will be more efficient.
|
You can get rid of the first part of your code that calculates the count if you simply set the size of `newstring` to the maximum it could possibly need, up front.
In particular:
```
int newsize = origlen + (strlen(to) - strlen(from)) * origlen/strlen(from);
```
(This is an upper bound when `to` is at least as long as `from`; if `to` is shorter, `origlen` itself already suffices. Either way, remember to leave one extra byte for the terminating `'\0'`.)
Also, instead of calling strlen(from) multiple times, just assign it to a variable (e.g. strlen\_from) and use that.
|
10,307,209
|
I am trying to write a function that takes three c-style strings, and returns a c-style string. This function searches a c-string for all occurrences of the sub-string and replaces them with a different string.
This program works but seems very inelegant. I can't help feeling that it could have been done in a less bulky way.
```
char* replaceSubstring(char *original, char *from, char *to)
{
int origlen = strlen(original);
int i = 0;
int count = 0;
char *ptr;
//figure out how many times the sub-string occurs in a string.
//i couldn't figure out a way to avoid this loop
while (i<origlen)
{
ptr = strstr(original+i, from);
if (!ptr)
break;
else
{
i = ptr - original + 1;
count++;
}
}
//figure out what the size of the output string has to be
int newsize = origlen + (strlen(to) - strlen(from)) * count;
char *newstring = new char[newsize];
newstring[0] = '\0';
i = 0;
while (i < origlen)
{
ptr = strstr(original+i, from);
if (!ptr)
{
strcat(newstring,original+i);
break;
}
else
{
//this looks extremely ugly and bulky...
strncat(newstring, original+i, ptr-(original+i));
strcat(newstring, to);
i = i + ptr - (original + i) + strlen(from);
}
}
strcat(newstring,"\0");
return newstring;
}
```
Would anyone have any suggestions on how to make this code clearer and/or more efficient?
Any comments are welcome.
Please do not suggest using the string class instead. That is not an option; the function must work with C-strings.
|
2012/04/24
|
[
"https://Stackoverflow.com/questions/10307209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1354866/"
] |
One improvement that would probably help both elegance and efficiency simultaneously:
1. Allocate an array of integers that will hold the indices of the substrings that match the given string.
2. Loop through the string and find all the matching substrings, and add each to the array, reallocating the array larger as needed (because you don't want to use the STL I presume; if you can, use `std::vector` `std::list` `std::deque`).
3. Allocate new memory for the modified string based on the length of the original string and how many substrings you found.
4. Iterate the old string and the array simultaneously, copying the non-matched parts from the old string to the new.
5. Fill in the holes you left with the replacement string.
Also, instead of allocating memory dynamically inside the function, I would change it to accept a caller-allocated buffer and maximum buffer size instead. This way the caller can be completely responsible for the lifetime of the memory (utilising automatic memory if they want/can) and you don't have to worry about calculating a buffer size (you rely on the caller for that).
---
EDIT:
Here is an example implementation I whipped up. Please let me know if anyone finds any errors, which is likely. (You might not want to read this if you want to figure it out yourself.)
```
#include <cstring>   // strlen, strstr, strcpy
#include <deque>

char* strreplace(const char* haystack, const char* needle, const char* replacement) {
    // using deque for pop_front
    std::deque<const char*> positions;
    unsigned int haystacklen = strlen(haystack),
                 needlelen = strlen(needle),
                 replacementlen = strlen(replacement);
    // record the start of every (non-overlapping) match
    for (const char* cur = haystack, *pos = strstr(cur, needle); pos; cur = pos + needlelen, pos = strstr(cur, needle))
        positions.push_back(pos);
    // upper bound on the result size, plus room for the terminating '\0'
    char* newstr = new char[haystacklen + replacementlen * positions.size() + 1],
        * dst = newstr;
    const char* src = haystack;
    while (src <= haystack + haystacklen)   // <= so the '\0' is copied as well
        if (!positions.empty() && src == positions.front()) {
            strcpy(dst, replacement);
            dst += replacementlen;
            src += needlelen;
            positions.pop_front();
        } else
            *dst++ = *src++;
    return newstr;
}
```
And don't forget to `delete[]` the return value of that function.
I went for efficiency without doing maximum optimisations. For instance, you could have a `while` loop that executed while `positions.empty()` was false, and then when it becomes true, just exit the loop and do a straight `strcpy` for the rest because there are no more replacements to be made, which would let you avoid unnecessarily calling `positions.empty()` for *every character even if there are no replacements to be made left, or at all*. But I think that is a small nit, and the code conveys the point.
Also, I used `std::deque` (rather than `std::list`) to remove all the array management code, but that should be straightforward if you want to do it yourself.
As ildjarn mentioned in the comments, I changed from `list` to `deque` because I use the `size` member and, per his comment, `list::size` is not guaranteed to be `O(1)` (it may be `O(n)`) on pre-C++11 implementations, so `deque`, with its constant-time `size`, will be more efficient.
|
Here is a version I made which pretty much uses pointers only (error checking, etc. is omitted; I have also noticed that it fails in certain cases):
```
#include <cstring>
#include <cstdlib>
#include <iostream>
char* replaceSubstring(char *original, char *from, char *to)
{
// This could be improved (I was lazy and made an array twice the size)
char* retstring = new char[std::strlen(original) * 2];
int pos = 0;
for (int i = 0; *(original + i); ++i)
{
if (*(original + i) == *(from))
{
// Got a match now check if the two are the same
bool same = true; // Assume they are the same
for (int j = 1, k = i + 1; *(from + j) && *(original + k); ++j, ++k)
{
if (*(from + j) != *(original + k))
{
same = false;
break;
}
}
if (same)
{
// They are the same now copy to new array
for (int j = 0; *(to + j); ++j)
{
retstring[pos++] = *(to + j);
}
i += std::strlen(from) - 1;
continue;
}
}
retstring[pos++] = *(original + i);
}
retstring[pos] = '\0';
return retstring;
}
int main()
{
char orig1[] = "Replace all the places that say all";
char* r1 = replaceSubstring(orig1, "all", "Replacement");
std::cout << r1 << std::endl;
delete [] r1;
char orig2[] = "XXXXXX with something else XXXXXX";
char* r2 = replaceSubstring(orig2, "XXXXXX", "hello");
std::cout << r2 << std::endl;
delete [] r2;
}
```
|
10,307,209
|
I am trying to write a function that takes three c-style strings, and returns a c-style string. This function searches a c-string for all occurrences of the sub-string and replaces them with a different string.
This program works but seems very inelegant. I can't help feeling that it could have been done in a less bulky way.
```
char* replaceSubstring(char *original, char *from, char *to)
{
int origlen = strlen(original);
int i = 0;
int count = 0;
char *ptr;
//figure out how many times the sub-string occurs in a string.
//i couldn't figure out a way to avoid this loop
while (i<origlen)
{
ptr = strstr(original+i, from);
if (!ptr)
break;
else
{
i = ptr - original + 1;
count++;
}
}
//figure out what the size of the output string has to be
int newsize = origlen + (strlen(to) - strlen(from)) * count;
char *newstring = new char[newsize];
newstring[0] = '\0';
i = 0;
while (i < origlen)
{
ptr = strstr(original+i, from);
if (!ptr)
{
strcat(newstring,original+i);
break;
}
else
{
//this looks extremely ugly and bulky...
strncat(newstring, original+i, ptr-(original+i));
strcat(newstring, to);
i = i + ptr - (original + i) + strlen(from);
}
}
strcat(newstring,"\0");
return newstring;
}
```
Would anyone have any suggestions on how to make this code clearer and/or more efficient?
Any comments are welcome.
Please do not suggest using the string class instead. That is not an option; the function must work with C-strings.
|
2012/04/24
|
[
"https://Stackoverflow.com/questions/10307209",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1354866/"
] |
One improvement that would probably help both elegance and efficiency simultaneously:
1. Allocate an array of integers that will hold the indices of the substrings that match the given string.
2. Loop through the string and find all the matching substrings, and add each to the array, reallocating the array larger as needed (because you don't want to use the STL I presume; if you can, use `std::vector` `std::list` `std::deque`).
3. Allocate new memory for the modified string based on the length of the original string and how many substrings you found.
4. Iterate the old string and the array simultaneously, copying the non-matched parts from the old string to the new.
5. Fill in the holes you left with the replacement string.
Also, instead of allocating memory dynamically inside the function, I would change it to accept a caller-allocated buffer and maximum buffer size instead. This way the caller can be completely responsible for the lifetime of the memory (utilising automatic memory if they want/can) and you don't have to worry about calculating a buffer size (you rely on the caller for that).
---
EDIT:
Here is an example implementation I whipped up. Please let me know if anyone finds any errors, which is likely. (You might not want to read this if you want to figure it out yourself.)
```
#include <cstring>   // strlen, strstr, strcpy
#include <deque>

char* strreplace(const char* haystack, const char* needle, const char* replacement) {
    // using deque for pop_front
    std::deque<const char*> positions;
    unsigned int haystacklen = strlen(haystack),
                 needlelen = strlen(needle),
                 replacementlen = strlen(replacement);
    // record the start of every (non-overlapping) match
    for (const char* cur = haystack, *pos = strstr(cur, needle); pos; cur = pos + needlelen, pos = strstr(cur, needle))
        positions.push_back(pos);
    // upper bound on the result size, plus room for the terminating '\0'
    char* newstr = new char[haystacklen + replacementlen * positions.size() + 1],
        * dst = newstr;
    const char* src = haystack;
    while (src <= haystack + haystacklen)   // <= so the '\0' is copied as well
        if (!positions.empty() && src == positions.front()) {
            strcpy(dst, replacement);
            dst += replacementlen;
            src += needlelen;
            positions.pop_front();
        } else
            *dst++ = *src++;
    return newstr;
}
```
And don't forget to `delete[]` the return value of that function.
I went for efficiency without doing maximum optimisations. For instance, you could have a `while` loop that executed while `positions.empty()` was false, and then when it becomes true, just exit the loop and do a straight `strcpy` for the rest because there are no more replacements to be made, which would let you avoid unnecessarily calling `positions.empty()` for *every character even if there are no replacements to be made left, or at all*. But I think that is a small nit, and the code conveys the point.
Also, I used `std::deque` (rather than `std::list`) to remove all the array management code, but that should be straightforward if you want to do it yourself.
As ildjarn mentioned in the comments, I changed from `list` to `deque` because I use the `size` member and, per his comment, `list::size` is not guaranteed to be `O(1)` (it may be `O(n)`) on pre-C++11 implementations, so `deque`, with its constant-time `size`, will be more efficient.
|
Self-unexplanatory: <http://ideone.com/ew5pL>
This is what ugly and bulky looks like - no C functions except an strlen and a memcpy at the end.
I think yours looks nice and compact.
|
68,745,548
|
Here's part of my Python code:
```
pstat1 = [plotvex(alpha,beta,j)[0] for j in range(5)]
ptset1 = [plotvex(alpha,beta,j)[1] for j in range(5)]
```
where `plotvex` is a function that returns 2 items. I want to generate the two lists `pstat1` and `ptset1` using list comprehensions, but I wonder whether there is a way to avoid calling the function twice. Thanks :)
|
2021/08/11
|
[
"https://Stackoverflow.com/questions/68745548",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14084298/"
] |
Assuming `plotvex()` returns a 2-tuple exactly\*, this should work:
```
pstat1, ptset1 = zip(*[plotvex(alpha, beta, j) for j in range(5)])
```
`zip(*iterable_of_iterables)` is a common idiom to 'rotate' a list of lists from vertical to horizontal. So instead of a list of five 2-tuples, `[plotvex(alpha, beta, j) for j in range(5)]` is turned into two 5-tuples, one collecting all the first elements and one collecting all the second elements.
`*` here is the argument-unpacking operator.
---
\*if it returns more than a 2-tuple, then just do `plotvex(alpha, beta, j)[:2]` instead to take the first two elements
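A tiny runnable illustration of the idiom, using a stand-in `plotvex` (its real definition isn't shown in the question, so this one is made up):
```python
# Stand-in for plotvex: any function returning a 2-tuple behaves the same way.
def plotvex(alpha, beta, j):
    return (alpha * j, beta + j)   # (stat, tset) placeholders

alpha, beta = 2, 10
pairs = [plotvex(alpha, beta, j) for j in range(5)]   # [(0, 10), (2, 11), ...]
pstat1, ptset1 = zip(*pairs)

print(pstat1)   # (0, 2, 4, 6, 8)
print(ptset1)   # (10, 11, 12, 13, 14)

# If genuine lists are needed rather than tuples:
pstat1, ptset1 = map(list, zip(*pairs))
```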
|
You are quite right that you don't want to call the `plotvex()` function twice for each set of parameters.
So, just call it once and then derive `pstat1` and `ptset1` from the result:
```py
pv = [plotvex(alpha,beta,j) for j in range(5)]
pstat1 = [item[0] for item in pv]
ptset1 = [item[1] for item in pv]
```
|
68,055,446
|
I have the following JSON response from an API which I need to show in a Gridview:
```json
[
{
"count": 271,
"headings": [
"Application",
"Host",
"os_type",
"os_version",
"model",
"vendor",
"virtual"
],
"kind": "BSI",
"next": "NextROW",
"next_offset": 100,
"offset": 0,
"results": [
[
"Customer Documents Archive System",
"Microsoft Network LoadBalancer mainserver running on 10.75.0.99",
null,
null,
null,
null,
null
],
[
"Customer Documents Archive System Group",
"Microsoft Network LoadBalancer server1 running on 10.128.2.143",
null,
null,
null,
null,
null
],
[
"Customer Documents Archive System Group",
"Microsoft Network LoadBalancer server2 running on 10.21.5.100",
null,
null,
null,
null,
null
],
[
"KIOSK",
"EastServer",
null,
null,
null,
null,
null
],
[
"Hotline",
"EastServer",
null,
null,
null,
null,
null
],
[
"ProjectWise",
"NorthServer",
"Windows",
"Server 2012 R2",
"VMware Virtual Platform",
"VMware, Inc.",
true
],
[
"SMS System",
"CentralServer",
"Windows",
"Server 2016",
"VMware Virtual Platform",
"VMware, Inc.",
true
]
],
"results_id": "QnVzaW5lc3N"
}
]
```
I am using the following code:
```
Dim steamreader As StreamReader = New StreamReader(stream)
Dim strdata As String = steamreader.ReadToEnd
Dim rs As List(Of Root) = JsonConvert.DeserializeObject(Of List(Of Root))(strdata)
GridView1.DataSource = rs
GridView1.DataBind()
Public Class Root
Public Property count As Integer
Public Property headings As List(Of String)
Public Property kind As String
Public Property [next] As String
Public Property next_offset As Integer
Public Property offset As Integer
Public Property results As List(Of List(Of Object))
Public Property results_id As String
End Class
```
GridView HTML:
```html
<asp:GridView ID = "GridView1" runat = "server">
```
Required Output from JSON data:
[](https://i.stack.imgur.com/UpUwf.png)
The above code is not working. Can anyone guide me on how to show the JSON data in a GridView, using "headings" in the JSON response as the GridView column headers and the corresponding data in "results" as the row data for those columns?
|
2021/06/20
|
[
"https://Stackoverflow.com/questions/68055446",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2632097/"
] |
From the way your data is arranged, you need to extract the data for each set of results into a DataTable, and add those datatables to a DataSet.
I used Windows Forms, but you should be able to adapt it to use an ASP.NET GridView with little trouble:
```
Imports System.IO
Imports Newtonsoft.Json
Public Class Form1
Public Class Root
Public Property count As Integer
Public Property headings As List(Of String)
Public Property kind As String
Public Property [next] As String
Public Property next_offset As Integer
Public Property offset As Integer
Public Property results As List(Of List(Of Object))
Public Property results_id As String
End Class
Sub ShowData(f As String)
Dim allResults As List(Of Root)
Using sr As StreamReader = New StreamReader(f)
Dim dataAsJson As String = sr.ReadToEnd()
allResults = JsonConvert.DeserializeObject(Of List(Of Root))(dataAsJson)
End Using
Dim ds As New DataSet()
For Each resultSet In allResults
Dim dt As New DataTable(resultSet.results_id)
For Each heading In resultSet.headings
dt.Columns.Add(New DataColumn With {.ColumnName = heading, .DataType = Type.GetType("System.String")})
Next
For i = 0 To resultSet.results.Count - 1
Dim nr = dt.NewRow()
For j = 0 To resultSet.results(i).Count - 1
Dim thisItem = resultSet.results(i).Item(j)
nr(j) = If(thisItem Is Nothing, "-", CStr(thisItem))
Next
dt.Rows.Add(nr)
Next
ds.Tables.Add(dt)
Next
DataGridView1.DataSource = ds.Tables(0)
End Sub
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
Dim j = "C:\temp\SO68055446.json"
ShowData(j)
End Sub
End Class
```
[](https://i.stack.imgur.com/gIOws.png)
|
What you have should work. I would double-check what strData contains (do a Debug.Print(strData) and look in the debug or output window, depending on your settings).
I just dropped in a text box, a button and a grid view, with this markup:
```
<asp:TextBox ID="TextBox1" runat="server" Height="125px"
TextMode="MultiLine" Width="285px"></asp:TextBox>
<asp:Button ID="Button1" runat="server" Text="Button" Width="111px" />
<br />
<br />
<div style="width:40%">
<asp:GridView ID="GridView1" runat="server" CssClass="table table-hover"></asp:GridView>
</div>
```
I then just cut and pasted your JSON into the text box, and I get this result:
[](https://i.stack.imgur.com/ui2HH.png)
So, this should work. I created the class in a separate module and then pasted in your class for the data.
I can only guess that the JSON you posted is NOT what you are actually getting from that stream reader.
My code above for the button was this:
```
Protected Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Dim strJSON As String = TextBox1.Text
Dim rs As List(Of RootTest)
rs = JsonConvert.DeserializeObject(Of List(Of RootTest))(strJSON)
GridView1.DataSource = rs
GridView1.DataBind()
End Sub
```
and the class was:
```
Public Class RootTest
Public Property count As Integer
Public Property headings As List(Of String)
Public Property kind As String
Public Property [next] As String
Public Property next_offset As Integer
Public Property offset As Integer
Public Property results As List(Of List(Of Object))
Public Property results_id As String
End Class
```
|
59,378,373
|
I want to list the users who have reacted to a message on Discord.
```
let MyChannel = client.channels.get('573534660852711426');
MyChannel.fetchMessage('656352072806957066').then(themessage => {
for (const reaction of themessage.reactions){
for (let user of reaction.users){
console.log(user);
}
console.log(reaction);
}
});
```
This is my code, but it doesn't work. Can you help me?
Thank you.
|
2019/12/17
|
[
"https://Stackoverflow.com/questions/59378373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12553484/"
] |
with `docker run` you can pass the user flag.
```
-u, --user string Username or UID (format: <name|uid>[:<group|gid>])
```
I believe the UID of root is 0, so I think any of `-u root`, `-u 0`, or `-u root:root` should work.
If you're building a Dockerfile you can also add `USER root` to your Dockerfile to switch users.
|
A quick workaround that also worked was to use a multi-stage build, start off as sudo in the first build, do `RUN sudo chmod +s /usr/bin/gdb` and then use `COPY` in the second stage to get gdb with permissions from the first stage.
|
354,742
|
The present discussion arises from [this MO question](https://mathoverflow.net/questions/354655/how-to-prove-ex-left-int-xx1-sinet-mathrm-d-t-right-le-1-4). Below, $e$ stands for [Euler's number](https://www.mathsisfun.com/numbers/e-eulers-number.html) and let
$$\tau:=\arccos\left(\frac{\sin e-\sin 1}{e-1}\right)\approx 1.82\cdots.$$
An application of the [Mean Value Theorem (for derivatives)](https://en.wikipedia.org/wiki/Mean_value_theorem) to the function $f(t)=\sin t$ leads to
$$\frac{\sin(e\,t)-\sin(t)}{e\,t-t}=\cos(\xi\_tt) \qquad \text{for some $1\leq\xi\_t\leq e$}. \tag1$$
>
> **QUESTION.** Is it true that for each $t>0$, one can always find some $\xi\_t\geq\tau$ such that (1) holds? **Example:** $\xi\_1=\tau$.
>
>
>
|
2020/03/11
|
[
"https://mathoverflow.net/questions/354742",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
] |
In view of the answer by Carlo Beenakker and the comment by Alexandre Eremenko, it appears that what you actually had in mind is the following question:
>
> By the mean value theorem, for each $t\in(0,1]$ there is some $\xi\_t\in(1,e)$ such that
> \begin{equation\*}
> r(t):=\frac{\sin et-\sin t}{(e-1)t}=\cos(\xi\_t t). \tag{2}
> \end{equation\*}
> (Since $\cos u$ is strictly decreasing in $u\in[0,e]$, the value of $\xi\_t$ is unique for each $t\in(0,1]$.)
> Is it true that $\xi\_t\ge\tau$ for all $t\in(0,1]$?
>
>
>
The answer to this question is yes. Indeed, for $t\in(0,1)$ we have $\xi\_t t\in(0,e)\subset[0,\pi]$ and $\tau t\in(0,\tau]\subset[0,\pi]$. Therefore, in view of (2) and because $\cos$ is strictly decreasing on $[0,\pi]$, we see that
\begin{equation\*}
\xi\_t>\tau\iff d(t):=\cos\tau t-r(t)>0; \tag{3}
\end{equation\*}
here and in what follows, $t\in(0,1)$.
Next,
\begin{equation\*}
d\_1(t):=(e-1) d(t)/t^2=\sum\_{j=1}^\infty(-1)^jb\_j t^{2j-2}-\sum\_{j=1}^\infty(-1)^ja\_j t^{2j-2},
\end{equation\*}
where
\begin{equation\*}
a\_j:=\frac{e^{2 j+1}-1}{(2 j+1)!},\quad b\_j:=\frac{(e-1) \tau^{2 j}}{(2 j)!}.
\end{equation\*}
It is easy to see that $0<a\_j<a\_{j+1}$ and $0<b\_j<b\_{j+1}$ for all natural $j$. So,
\begin{equation\*}
d\_1(t)>-b\_1+b\_2t^2-b\_3t^4+a\_1-a\_2t^2>0 \quad\text{if}\quad 0<t\le4/5.
\end{equation\*}
It remains to prove that
\begin{equation}
d\_2(t):=(e-1)t d(t)>0\quad\text{if}\quad 4/5<t<1.
\end{equation}
Since $d\_2(1)=0$, it suffices to show that
\begin{equation}
d\_2'(t)=\cos t-e \cos et+(e-1) \cos \tau t-(e-1) \tau t \sin \tau t<0
\end{equation}
for $t\in(4/5,1)$.
Since $\cos t,\cos et,\cos \tau t$ are decreasing in $t\in(4/5,1)$ and $\sin \tau t$ is concave in $t\in(4/5,1)$, the desired result follows because for $t\in[t\_j,t\_{j+1}]$ and $j=0,\dots,n-1$
\begin{equation}
d\_2'(t)\le\cos t\_j-e \cos et\_{j+1}+(e-1) \cos \tau t\_j
-(e-1) \tau t\_j \min(\sin \tau t\_j,\sin \tau t\_{j+1})<0,
\end{equation}
where $n:=20$ and $t\_j:=4/5+j/(5n)$.
---
To illustrate the above proof, here is the graph $\{(t,d(t))\colon0<t<1\}$:
[](https://i.stack.imgur.com/bWuJ2.png)
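As a purely numerical sanity check of the key inequality $d(t)>0$ on $(0,1)$ (this is only an illustration, not part of the proof), one can evaluate $d$ on a fine grid:
```python
# Check numerically that d(t) = cos(tau*t) - (sin(e*t) - sin(t)) / ((e-1)*t)
# is positive on (0, 1) and vanishes at t = 1, as used in the proof above.
import math

e = math.e
tau = math.acos((math.sin(e) - math.sin(1)) / (e - 1))   # ~1.824, as in the question

def d(t):
    r = (math.sin(e * t) - math.sin(t)) / ((e - 1) * t)
    return math.cos(tau * t) - r

print(min(d(k / 100000) for k in range(1, 100000)) > 0)  # True: d > 0 on (0, 1)
print(abs(d(1.0)) < 1e-12)                               # True: d(1) = 0 by choice of tau
```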
|
This is a plot of $\frac{\sin e\,t-\sin t}{e\,t-t}-\cos\xi \,t$ as a function of $\xi$ for $t=1.1$; the curve does not cross zero in the interval $[\tau,e]$, so I would conclude that (1) does not hold.

|
34,696,452
|
When using the following Polymer component:
```
<paper-textarea value="test"></paper-textarea>
```
I did not find any way to change its font to a fixed-width font (for code entry).
I tried the following styling, but only the color was actually applied:
```
<style is="custom-style">
:root {
--paper-input-container-input-color: blue;
/* the following lines do not work... */
--paper-input-font-family: monospace;
--iron-autogrow-textarea: {
font-family: monospace;
};
}
</style>
```
Does anyone know how to do that?
|
2016/01/09
|
[
"https://Stackoverflow.com/questions/34696452",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/592254/"
] |
paper-textarea uses iron-autogrow-textarea internally.
This should work (I have not tried it myself):
```
<style is="custom-style">
:root {
--iron-autogrow-textarea: {
font-family: monospace !important;
};
}
</style>
```
|
You don't actually need to use `!important`. This will work in Polymer 2.0:
```css
paper-textarea {
--iron-autogrow-textarea: {
font-family: monospace;
};
}
```
|
25,590
|
Both the stylistic device of the [ellipsis](https://de.wikipedia.org/wiki/Ellipse_(Linguistik)) and the [Zusammenziehung (contraction) of clauses](http://canoonet.eu/services/OnlineGrammar/Satz/Komplex/Ordnung/Zusammenziehung.html) that German allows omit parts of a sentence. It is also clear that there are many ellipses that are not contractions, e.g.
>
> Sonst noch was? ("Anything else?")
>
>
>
But are contractions - sometimes, always - ellipses?
The Wikipedia article does not address this, but it does give a contraction as an example of an ellipsis:
>
> Karl fährt nach Italien, Wilhelm [fährt] an die Nordsee. ("Karl is going to Italy, Wilhelm [is going] to the North Sea.")
>
>
>
Searching for further sources, I was surprised to find mainly very old grammars, such as [this one](https://books.google.de/books?id=kNYLAQAAIAAJ&pg=PA56&lpg=PA56&dq=ellipse%20zusammenziehung&source=bl&ots=d108JqEaBm&sig=PSTSsmckRe-SZLs7Rgg9_cpDHWA&hl=de&sa=X&ved=0CCIQ6AEwAGoVChMIxc3H8tT5xwIVR40sCh2qigs1#v=onepage&q=ellipse%20zusammenziehung&f=false), which place great value on distinguishing the two terms.
How are the two phenomena distinguished from each other today?
|
2015/09/15
|
[
"https://german.stackexchange.com/questions/25590",
"https://german.stackexchange.com",
"https://german.stackexchange.com/users/9091/"
] |
Without having pored over current literature, I believe that the focus in distinguishing between **ellipsis** and **Zusammenziehung** (contraction) or **Verkürzung** (shortening) would be set somewhat differently today than Heyse did over a hundred years ago.
A sentence or phrase with an **ellipsis** is characterized by the fact that, compared with the prototypical formulation expected in formal written language, certain parts of the sentence are missing that grammatical structure and government would lead one to expect. These parts are not present in the co-text either; they have to be supplied from context or from linguistic knowledge. In spoken communication, where any misunderstandings this causes can be cleared up by asking, the ellipsis is a completely normal linguistic pattern and as such is, of course, part of oral grammars. In writing, on the other hand, it can easily be perceived as poor style.
By contrast, **Zusammenziehung** and **Verkürzung** are well accepted in writing, even a popular stylistic device. What is omitted usually appears shortly before or is mentioned shortly afterwards, or it is a running topos of the entire text; in other words, it belongs to the context. Readers can orient themselves if necessary by jumping back a few words, or at worst a few sentences, whereas listeners sometimes have to concentrate quite hard on such formulations: in speech, redundancy is helpful and common; in writing it is largely avoided.
*Ellipse* sounds more technical because it is a loanword, which is why you will often find contractions and shortenings referred to by that term as well.
|
In both cases something is left out, but in a **Zusammenziehung** (contraction) the part that was left out is present elsewhere in the sentence and can be taken from there to fill the gap.
This sentence contains a contraction (the empty parentheses stand for something that is missing):
>
> Karl **fährt** nach Italien, Wilhelm () an die Nordsee.
>
>
>
In this example, two main clauses have been joined into one sentence by a comma. However, the second main clause is missing its predicate, i.e. its verb:
>
> Wilhelm an die Nordsee. (Dieser Satz kein Verb! - "This sentence no verb!")
>
>
>
But the missing verb does appear in the same sentence, namely in the preceding main clause:
>
> Karl **fährt** nach Italien. (Dieser Satz enthält ein Verb. - "This sentence contains a verb.")
>
>
>
By mentally taking the part that the incomplete main clause lacks from the complete main clause and inserting it into the gap, you obtain the complete sentence:
>
> Karl **fährt** nach Italien, Wilhelm **fährt** an die Nordsee.
>
>
>
This also works with other word classes and parts of the sentence:
>
> Die Uni besitzt, aber der Professor verwendet das teure Messgerät. ("The university owns, but the professor uses, the expensive measuring device.")
>
>
>
Here too, two main clauses follow one another, the first of which is incomplete:
>
> Die Uni besitzt ().
>
>
>
Here an accusative object is missing, namely *das teure Messgerät*, which appears in the second main clause. Rolled out in full, the sentence would read:
>
> Die Uni besitzt das teure Messgerät, aber der Professor verwendet das teure Messgerät.
>
>
>
It should be mentioned that while this is grammatically correct, it is not elegant style, since the second occurrence of the same object can be avoided by using a pronoun: »*Die Uni besitzt das teure Messgerät, aber der Professor verwendet **es***«
One more example:
>
> Zu erwähnen ist, dass dies zwar grammatisch korrekt (), aber kein schöner Stil **ist**.
>
> Zu erwähnen ist, dass dies zwar grammatisch korrekt **ist**, aber kein schöner Stil ().
>
> Zu erwähnen ist, dass dies zwar grammatisch korrekt **ist**, aber kein schöner Stil **ist**.
>
>
>
---
The ellipsis is different:
Every year, after the debutantes have danced, the Vienna Opera Ball is opened with these words:
>
> Alles Walzer! ("Everyone waltz!")
>
>
>
Here too something is missing, namely the predicate, i.e. the verb. But there is no text before or after it, because the dance master's entire speech consists of just these two words. So there is no similar sentence nearby from which something could be borrowed to close the gap. And that is exactly what distinguishes the contraction from the ellipsis: whatever could complete the sentence is nowhere to be found.
With this particular ellipsis it is clear that it is a request, and since the request concerns a dance, it is also clear that this is what is meant:
>
> Alles tanzt Walzer! ("Everyone dances the waltz!")
>
>
>
(A small digression: »Alles« here means »all persons«, and »tanzt« is a substitute infinitive standing in for an imperative, much like »Zwiebel *anrösten*« - "brown the onion" - in a recipe.)
Another example:
>
> Wer da? ("Who [is] there?")
>
>
>
Here too the verb (»ist«) is missing, and it is not present anywhere around the question; it has to be guessed from the context.
Many ellipses are also used as set phrases that nobody even tries to complete mentally:
>
> (*Ich wünsche dir einen*) **guten Morgen**. ("[I wish you a] good morning.")
>
> (*Es*) **grüß**(e) (*dich*) **Gott**. ("[May God] greet [you].")
>
> (*Zeigen Sie mir Ihren*) **Ausweis!** ("[Show me your] ID!")
>
>
>
|
14,319,079
|
I want to write a style for wpf where all buttons in a StatusBar (that has a defined style) have the same style (e.g. width).
Here is what my style looks like:
```
<Style TargetType="{x:Type StatusBar}"
x:Key="DialogBoxStatusBarStyle">
<Setter Property="Background"
Value="LightGray" />
<Setter Property="Padding"
Value="5" />
...?
</Style>
```
And the xaml for the elements:
```
<StatusBar Style="{StaticResource ResourceKey=DialogBoxStatusBarStyle}" Grid.Row="3"
FlowDirection="RightToLeft">
<Button Content="Übernehmen"
Width="100"
HorizontalAlignment="Right" />
<Button Content="Abbrechen"
Width="100"
HorizontalAlignment="Right" />
<Button Content="OK"
Width="100"
HorizontalAlignment="Right" />
</StatusBar>
```
In the final version I don't want to set the width to 100 on each button. This should be defined in the style of the `StatusBar`, or rather in a style for the `Button` children of the `StatusBar`.
|
2013/01/14
|
[
"https://Stackoverflow.com/questions/14319079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/734648/"
] |
You could add a default Style for Buttons to the `Resources` of your `DialogBoxStatusBarStyle`:
```
<Style TargetType="StatusBar" x:Key="DialogBoxStatusBarStyle">
<Style.Resources>
<Style TargetType="Button">
<Setter Property="Width" Value="100"/>
</Style>
</Style.Resources>
...
</Style>
```
|
To extend the answer above (by @Clemens), you could even do something like this to reuse a button style independently and also apply it to the children of a specific container.
Styles:
```
<Style TargetType="{x:Type Button}" x:Key="MyButtonStyle">
<Setter Property="Width" Value="100" />
<Setter Property="HorizontalAlignment" Value="Right" />
</Style>
<Style TargetType="{x:Type StatusBar}" x:Key="DialogBoxStatusBarStyle">
<Setter Property="Background" Value="LightGray" />
<Setter Property="Padding" Value="5" />
<Style.Resources>
<Style TargetType="{x:Type Button}" BasedOn="{StaticResource MyButtonStyle}" />
</Style.Resources>
...
</Style>
```
|
103,866
|
We have multiple sites with multiple subnets. We have mandated that the admins at those sites enter DNS names for all devices that exist on the network. (anything with an IP gets a name) I want to be able to audit this and make sure this has been done.
I am able to run an individual network scan on each subnet, but that takes time. I would like a way to kick off a scan that runs through all subnets once a week and gives me an output file with a list of names and IPs (preferably CSV). I have tried to do this with nmap, but getting a nice list of IPs and their corresponding names is not easy.
|
2015/10/27
|
[
"https://security.stackexchange.com/questions/103866",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/90371/"
] |
>
> Is this a concern?
>
>
>
I won't say no, but one thing to realize is that Java comes with a lot of things that aren't relevant in any context you are likely to see it used in. Almost all of the security vulnerabilities in Java are client-side; that is, most are only applicable when you are using Java plug-ins in your browser. Very few of the Java vulnerabilities are server-side. A lot of IT-admin types fail to understand this, and it's possible they are over-reacting.
On the other hand, a small number of vulnerabilities (e.g. one) can allow for pretty significant exploits so there's really no good argument for not working to patch the JVM.
Overall, though, it's likely that your vendor has created vulnerabilities in their code or has used libraries with vulnerabilities. I would wager it's far more likely those issues will be exploited than JVM/JDK vulnerabilities. This is strongly suggested by the fact that they seem unable or unwilling to retest their code on a new version of Java. Backwards compatibility is very well maintained in Java. It's really unacceptable for them to respond in this way; it implies they have very weak development processes.
>
> What questions do I ask of the vendor to determine our specific vulnerability - e.g. if they keep our system updated with critical patches, etc?
>
>
>
Those are good questions to start. You probably need to have some help though. You might want to have someone do a static analysis (their license may forbid this, however) of the system and/or penetration testing.
>
> Is the only recourse to insist that the vendor move to 1.8 before we create access into our network from outside entities?
>
>
>
I would. It really should not be an issue. They won't need to change any code, to my knowledge. It's unreasonable for them to refuse.
EDIT: I should add, if this is an application that is running on servers that your IT team controls, you can have your IT people install and run the application on an updated JVM without any assistance from the vendor. It's unlikely there will be any issues unless they are doing something goofy but they might use this as an excuse for not supporting the application. There's no need to recompile or anything, an application written using Java 1.6 should run on a 1.8 JVM.
>
> Is it true that 1.6 is at end of life and therefore no more security patches are generated?
>
>
>
Yes, in [2013](http://www.oracle.com/technetwork/java/eol-135779.html). You can buy extended support, however.
|
Personally, I think you need to hire a security consultancy company to properly cover the security issues here. Your exact security issues will likely be highly contextual, and there are additional regulatory implications in hospitals. That said, I'll try to cover your concerns.
>
> This application is in use in many large hospital systems, so finding it hard to believe that vendor wouldn't have addressed this somehow
>
>
>
Unfortunately, that's pretty common. Speak to anyone who handles IT infrastructure, networks, security, software, etc. in a hospital (or any organisation really) and they'll tell you the same thing - vendors are usually slow to fix problems, and often won't put the effort in for legacy products that are still relied upon.
>
> Is this a concern?
>
>
>
Most likely, yes. You're putting an out-of-date, insecure framework on the affected systems. Java can be invoked via the browser unless it is properly configured, which potentially allows for remote attacks against the system. Another potential attack vector against the framework is via Java web servers such as Apache Tomcat, which might expose exploitable functionality (e.g. RMI) regardless of what the application does.
>
> What questions do I ask of the vendor to determine our specific vulnerability - e.g. if they keep our system updated with critical patches, etc?
>
>
>
The problem is that, if they're using out of date Java (esp. 1.6 or earlier) then it doesn't really matter what they do to address individual vulnerabilities in their application - the underlying framework itself is broken and they can't mitigate that individually.
>
> Is the only recourse to insist that the vendor move to 1.8 before we create access into our network from outside entities?
>
>
>
The simple answer is yes. The more complex answer is that you *could* sandbox off the application server so that, if it were to be compromised, it has only a limited level of access back to your internal network, if any. However, if patient data is included (especially any PII) then you've got regulatory requirements to contend with.
The TL;DR is that the vendor should update their code to work with a more modern version of Java. There are two primary ways you can expedite this: offer to pay towards the development time required to do so, or threaten to switch off their service and use an alternative which is more up-to-date.
Another avenue you have is regulatory: assuming you're in the US, tell your HIPAA regulatory body that their software uses a vulnerable framework that risks exposing patient information. If their customer base is large, they may be required by law to update their software or otherwise mitigate any issues.
|
103,866
|
We have multiple sites with multiple subnets. We have mandated that the admins at those sites enter DNS names for all devices that exist on the network. (anything with an IP gets a name) I want to be able to audit this and make sure this has been done.
I am able to run an individual network scan on each subnet, but that takes time. I would like a way to kick off a scan that runs through all subnets once a week and gives me an output file with a list of names and IPs (preferably CSV). I have tried to do this with nmap, but getting a nice list of IPs and their corresponding names is not easy.
|
2015/10/27
|
[
"https://security.stackexchange.com/questions/103866",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/90371/"
] |
>
> Is this is a concern?
>
>
>
I won't say no but one thing to realize is that Java comes with a lot of things that aren't relevant in any context you are likely to see it used. Almost all of the security vulnerabilities in Java are client-side. That is, most are only applicable when you are using Java plug-ins on your browser. Very few of the Java vulnerabilities are server-side. A lot of IT-admin type people fail to understand this and it's possible they are over-reacting.
On the other hand, a small number of vulnerabilities (e.g. one) can allow for pretty significant exploits so there's really no good argument for not working to patch the JVM.
Overall, though, it's likely that your vendor has created vulnerabilities in their code or has used libraries with vulnerabilities. I would wager it's far more likely those issues will be exploited than JVM/JDK vulnerabilities. This is strongly suggested by the fact that they seem unable/unwilling to retest their code on a new version of Java. Backwards compatibility is very well maintained in Java. It's really unacceptable for them to respond in this way. It implies they have very weak development processes.
>
> What questions do I ask of the vendor to determine our specific vulnerability - e.g. if they keep our system updated with critical patches, etc?
>
>
>
Those are good questions to start with. You probably need to have some help, though. You might want to have someone do a static analysis of the system (their license may forbid this, however) and/or penetration testing.
>
> Is the only recourse to insist that the vendor move to 1.8 before we create access into our network from outside entities?
>
>
>
I would. It really should not be an issue. They won't need to change any code, to my knowledge. It's unreasonable for them to refuse.
EDIT: I should add, if this is an application that is running on servers that your IT team controls, you can have your IT people install and run the application on an updated JVM without any assistance from the vendor. It's unlikely there will be any issues unless they are doing something goofy but they might use this as an excuse for not supporting the application. There's no need to recompile or anything, an application written using Java 1.6 should run on a 1.8 JVM.
>
> Is it true that 1.6 is at end of life and therefore no more security patches are generated?
>
>
>
Yes, in [2013](http://www.oracle.com/technetwork/java/eol-135779.html). You can buy extended support, however.
|
>
> Is this is a concern?
>
>
>
Obviously yes, because that is quite an old version of Java [from before Oracle even bought Sun](http://www.oracle.com/us/corporate/press/018363), so your version misses critical security patches.
>
> What questions do I ask of the vendor to determine our specific
> vulnerability - e.g. if they keep our system updated with critical
> patches, etc?
>
>
>
The problem is that the whole platform may be troublesome for the reason I stated in the previous question. The ideal solution is to ask the third-party company to assess that application against the newest Java version, but that is not very practical as it will cost you a lot of both money and time.
>
> Is the only recourse to insist that the vendor move to 1.8 before we
> create access into our network from outside entities?
>
>
>
As I said before, it can be the ideal solution: but then, how much time and money will it take?
>
> Is it true that 1.6 is at end of life and therefore no more security
> patches are generated?
>
>
>
Refer to @JimmyJames' answer (additional link: [Oracle Lifetime Support Policies](http://www.oracle.com/us/support/lifetime-support/index.html))
|
1,106,239
|
I've just finished building a new control room at work. It has 32 monitors and the plan was to have a single computer powering it. The old room had a few computers with odd screens and keyboards/mice everywhere, and we decided it was time to simplify things and have a single PC, as there is a single operator most of the time.
There's not an awful lot of demanding stuff running on the machine: some SCADA packages, IP camera viewing software, Office, etc.
The issue that I'm having isn't down to performance. At least I don't think so; the computer is of a fairly high spec. It's an HP Z840 with 2 Intel Xeon E5-2670s, 4 Nvidia NVS 810s, 256GB of RAM and a 500GB SSD. The operating system is Windows 10 Enterprise 64-bit. The screens are all HP Z24n.
My slots are used as follows.
1. PCIe3x4 - None
2. PCIe3x16 - NVS 810 1
3. PCIe3x8 - None
4. PCIe3x16 - NVS 810 2
5. PCIe3x8 - NVS 810 3
6. PCIe3x16 - NVS 810 4
7. PCIe2x4 - None
I've realised after looking at the manual [1](http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA5-4045ENW.pdf) that I should have GPU 3 in slot 3. However, the behavior of the machine is strange. I connected all 32 at first and most came on with the Windows background and taskbar. About 10 had no background but had the taskbar. The mouse moved at a snail's pace and I was unable to position the screens in Nvidia Control Panel as it would crash/freeze. I unplugged the cables from GPUs 1 and 2 and managed to get 16 screens on from cards 3 and 4. When I got to screen 21, the 5th screen on GPU 2, the machine went crazy again. The mouse started to lag again, and some screens were showing as duplicates of each other.
I've had a look in task manager, I've not seen the CPU or RAM go any higher than 4% when it locks up its just nvidia control panel that is not responding.
I'm thinking it must be some sort of bandwidth problem but not sure how to prove this or fix it.
Should I be able to get 32 1920x1200 screens out of this hardware?
Is this behavior normal?
I will try moving NVS 810 3 to slot 3 and see what difference that makes; any other ideas would be appreciated.
The screens are arranged in a 8 by 4 matrix.
[](https://i.stack.imgur.com/H5rgU.jpg)
[](https://i.stack.imgur.com/VpVb9.jpg)
**update from 30/07/16**
There had been questions about whether I had reached the max horizontal limit for Windows, so I wanted to give this a test and prove it.
So I uninstalled the video card driver and removed 1 card so I only had cards in slots 2, 4 and 6. I connected up 16 screens in an 8 by 2 matrix to the cards in slots 2 and 6 and it worked OK. The PC was still struggling when using Windows display settings and Nvidia control panel. After applying the video settings it took at least a minute to settle and allow me to accept the config. I stretched a window across the whole screen matrix.
[](https://i.stack.imgur.com/S8wlP.jpg)
I then tried to put a 17th screen on and all hell broke loose again. So as you can see below I added the 17th screen in the middle of the two rows. And applied the settings. The PC took ages to settle and allow me to accept again.
[](https://i.stack.imgur.com/RyjIa.jpg)
So at this point the newly added screen is a duplicate of the bottom-left one, and Windows display settings is showing some freaky 6|17 instead of what the Nvidia control panel is showing.
[](https://i.stack.imgur.com/4NYOg.jpg)
I had a go at building the matrix up 4 x 4 and adding more in. Again I made it to 16 screens with no great shakes; still a little struggle waiting for it to settle and apply the config, but nothing major.
I connected them to the card as follows
NVS 810 1 - top 2 rows of 4
NVS 810 2 - bottom 2 rows of 4
(don't worry about the white screen, it was just an Explorer window)
[](https://i.stack.imgur.com/3RKR2.jpg)
I moved the right-side top four and connected 2 of them.
They worked 'OK', however they had black wallpapers, unlike the others. Also, when you did a left-click drag to select things, it wouldn't clear off, so I could draw blue boxes all over; at this point I knew something was up. For the heck of it I connected the next 2 and it threw all the toys out of the pram again. It merged/duplicated the top 2 middle screens.
[](https://i.stack.imgur.com/6kD9k.jpg)
**8/1/16**
Ordering 6 x AMD FirePro W600; hopefully I will have them by the end of the week and will feed back!
**8/4/16**
Installed 3 x AMD FirePro W600 and hit the same wall at 16 screens. However, it was less flaky to set up compared to the Nvidia settings; the AMD display settings never crashed and allowed Windows display settings to control the screen layout.
|
2016/07/28
|
[
"https://superuser.com/questions/1106239",
"https://superuser.com",
"https://superuser.com/users/513603/"
] |
I've been there and done that.
Even with the same hardware :-)
Never got it to work beyond 2 cards or 16 screens (4 screens per card and 4 cards also doesn't work properly) in Windows.
Worked fine with the free Nvidia drivers in Linux, but not with Nvidia's own proprietary driver. But that wasn't a solution, as we needed to run Windows-only software on these.
We concluded that the Nvidia drivers are really crappy and badly tested (if at all) for these configurations even though, in theory, it should be possible.
We ended up using 2 computers. One for the top 2 rows, one for the bottom 2 rows.
Another thing to consider: the cards can handle camera streams on that many monitors, but Windows really doesn't like streaming more than 20 or so simultaneously. It gets really choppy even though the hardware doesn't get stressed. It seems to be a limitation of the Windows video codecs or the Windows desktop manager.
Splitting over 2 computers also allowed us to prevent that happening.
|
I bet you're running into limitations on the memory bridge and other bottlenecks not typically monitored under Windows or UNIX, since it's things like the CPU and GPU that normally cap out... but since you're pushing the PCIe bus to its maximum, you're seeing it.
This is along the lines of what Tony and others who have tried this say: "things get choppy even though the hardware isn't stressed". But it is stressed, just not in a way that is monitored, i.e. by Task Manager and GPU tools.
North/south bridges and the inter-CPU communication pathways all have limitations and you're hitting them with that much bit-slinging.
For this reason I think switching between AMD/nVidia or Windows/Linux is going to make no difference.
My suggestion: break this up into 3 or 4 machines and then run something like Multiplicity so it's all seamlessly controllable from one keyboard/mouse.
|
38,978,274
|
What I am trying to do is similar to this. [Search Filtering with PHP/MySQL](https://stackoverflow.com/questions/13206530/search-filtering-with-php-mysql)
```
<?php
require 'con.php';
$minage = $_POST['data'][0];
$maxage = $_POST['data'][1];
$gender = $_POST['data'][2];
$religion = $_POST['data'][3];
$query = "SELECT CONCAT(firstname, ' ', middlename, ' ', lastname, ' ', extension_name) as fullname, TIMESTAMPDIFF(YEAR, birthday ,NOW()) as age FROM mytable";
$filter = array();
if($gender != -1){
$gender = substr($gender, 1, -1);
$filter[] = "gender = :gender";
}
if($religion != -1){
$filter[] = "religion = :religion";
}
if(count($filter) > 0){
$query .= " WHERE " . implode(' AND ', $filter);
$sql = $connection->prepare($query);
-> $sql->bindParam(':gender', $gender, PDO::PARAM_STR);
-> $sql->bindParam(':religion', $religion, PDO::PARAM_STR);
$sql->execute();
$res = $sql->fetchAll();
}else{
$sql = $connection->prepare($query);
$sql->execute();
$res = $sql->fetchAll();
}
?>
<?php foreach($res as $row): ?>
<div><?php echo $row['fullname'];?></div>
<?php endforeach; ?>
```
When I select a gender and religion on my dropdown, the result is fine.
But when I select only one, let's say gender, I received an error:
>
> number of bound variables does not match number of tokens
>
>
>
I'm a bit confused about where to place `$sql->bindParam(...);`. I guess this is the cause of my error? Or if there are more errors, or anything else that's not right, please correct me. Thank you in advance.
|
2016/08/16
|
[
"https://Stackoverflow.com/questions/38978274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5750898/"
] |
Just add the value right along with the placeholder and then send the values straight to `execute()`:
```
$filter = array();
$values = array();
if($gender != -1){
$filter[] = "gender = ?";
$values[] = $gender;
}
if($religion != -1){
$filter[] = "religion = ?";
$values[] = $religion;
}
// only add a WHERE clause when at least one filter was selected
if(count($filter) > 0){
$query .= " WHERE " . implode(' AND ', $filter);
}
$stmt = $connection->prepare($query);
$stmt->execute($values);
```
|
Binding should be conditional also:
```
if($gender != -1){
// as in the original code: strip the wrapping quotes, but only when a gender was actually selected
$gender = substr($gender, 1, -1);
$filter[] = "gender = :gender";
}
if($religion != -1){
$filter[] = "religion = :religion";
}
if(count($filter) > 0){
$query .= " WHERE " . implode(' AND ', $filter);
$sql = $connection->prepare($query);
if($gender != -1){
$sql->bindParam(':gender', $gender, PDO::PARAM_STR);
}
if($religion != -1){
$sql->bindParam(':religion', $religion, PDO::PARAM_STR);
}
$sql->execute();
$res = $sql->fetchAll();
}else{
```
That's very badly organized code, by the way; try to make it more readable.
|
2,729,079
|
I am looking for an example of a sequence of positive real numbers $(a\_k)$ with $\lim\_{k \to \infty} a\_k = 1$ such that the sequence $(p\_n)$ defined as $p\_n=a\_1 a\_2 \dots a\_n$ has limit 0 as $n \to \infty$.
Can anyone provide me with a concrete example, or maybe some hint or useful property of such a sequence?
|
2018/04/09
|
[
"https://math.stackexchange.com/questions/2729079",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/479389/"
] |
Let us consider
$$a\_k=\frac{k}{k+1}$$
then
* $a\_k \to 1$
* $\prod\_{i=1}^{k} a\_i =\frac12\cdot\frac23\cdots\frac{k-1}k\cdot\frac{k}{k+1}=\frac1{k+1}\to 0$
|
Equivalently you're looking for $b\_k = \ln (a\_k)$ such that $b\_k\to 0$ and $\sum\_k b\_k\to -\infty$.
$b\_k=-\frac 1k$ fits the bill, yielding $a\_k = e^{-1/k}$.
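Spelling this out as a quick check (using the standard estimate $\sum\_{k=1}^{n}\frac1k\ge\ln(n+1)$):
$$p\_n=\prod\_{k=1}^{n}e^{-1/k}=\exp\Bigl(-\sum\_{k=1}^{n}\frac1k\Bigr)\le e^{-\ln(n+1)}=\frac1{n+1}\to 0.$$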
|
2,729,079
|
I am looking for an example of a sequence of positive real numbers $(a\_k)$ with $\lim\_{k \to \infty} a\_k = 1$ such that the sequence $(p\_n)$ defined as $p\_n=a\_1 a\_2 \dots a\_n$ has limit 0 as $n \to \infty$.
Can anyone provide me with a concrete example, or maybe some hint or useful property of such a sequence?
|
2018/04/09
|
[
"https://math.stackexchange.com/questions/2729079",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/479389/"
] |
Let us consider
$$a\_k=\frac{k}{k+1}$$
then
* $a\_k \to 1$
* $\prod\_{i=1}^{k} a\_i =\frac12\cdot\frac23\cdots\frac{k-1}k\cdot\frac{k}{k+1}=\frac1{k+1}\to 0$
|
**Hint**. Consider a sequence such that $0<a\_k <1$ for all $k$, but which still approaches $1$. This will ensure that the product becomes smaller and smaller.
However, you're trying to balance things. If $a\_k$ converges too fast to $1$, then the product will not converge to zero (it will be decreasing, but bounded below by some $\varepsilon > 0$).
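To make the "too fast" caveat concrete (a worked illustration, not part of the original hint): take $a\_k=1-\frac1{k^2}$, which tends to $1$ quickly. The partial products telescope,
$$\prod\_{k=2}^{n}\Bigl(1-\frac1{k^2}\Bigr)=\prod\_{k=2}^{n}\frac{(k-1)(k+1)}{k^2}=\frac{n+1}{2n}\to\frac12>0,$$
so the product stays bounded away from $0$ even though $a\_k\to1$.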
|
2,729,079
|
I am looking for an example of a sequence of positive real numbers $(a\_k)$ with $\lim\_{k \to \infty} a\_k = 1$ such that the sequence $(p\_n)$ defined as $p\_n=a\_1 a\_2 \dots a\_n$ has limit 0 as $n \to \infty$.
Can anyone provide me with a concrete example, or maybe some hint or useful property of such a sequence?
|
2018/04/09
|
[
"https://math.stackexchange.com/questions/2729079",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/479389/"
] |
Let us consider
$$a\_k=\frac{k}{k+1}$$
then
* $a\_k \to 1$
* $\prod\_{i=1}^{k} a\_i =\frac12\cdot\frac23\cdots\frac{k-1}k\cdot\frac{k}{k+1}=\frac1{k+1}\to 0$
|
Any $a\_k$ such that
$0<a\_k<1$ and
$\sum (1-a\_k)$ diverges will do.
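A one-line justification of this criterion (a sketch): since $\ln a\_k\le-(1-a\_k)$ for $0<a\_k<1$,
$$\ln p\_n=\sum\_{k=1}^{n}\ln a\_k\le-\sum\_{k=1}^{n}(1-a\_k)\to-\infty,$$
hence $p\_n\to0$.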
|
2,729,079
|
I am looking for an example of a sequence of positive real numbers $(a\_k)$ with $\lim\_{k \to \infty} a\_k = 1$ such that the sequence $(p\_n)$ defined as $p\_n=a\_1 a\_2 \dots a\_n$ has limit 0 as $n \to \infty$.
Can anyone provide me with a concrete example, or maybe some hint or useful property of such a sequence?
|
2018/04/09
|
[
"https://math.stackexchange.com/questions/2729079",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/479389/"
] |
Equivalently you're looking for $b\_k = \ln (a\_k)$ such that $b\_k\to 0$ and $\sum\_k b\_k\to -\infty$.
$b\_k=-\frac 1k$ fits the bill, yielding $a\_k = e^{-1/k}$.
|
**Hint**. Consider a sequence such that $0<a\_k <1$ for all $k$, but which still approaches $1$. This will ensure that the product becomes smaller and smaller.
However, you're trying to balance things. If $a\_k$ converges too fast to $1$, then the product will not converge to zero (it will be decreasing, but bounded below by some $\varepsilon > 0$).
|
2,729,079
|
I am looking for an example of a sequence of positive real numbers $(a\_k)$ with $\lim\_{k \to \infty} a\_k = 1$ such that the sequence $(p\_n)$ defined as $p\_n=a\_1 a\_2 \dots a\_n$ has limit 0 as $n \to \infty$.
Can anyone provide me with a concrete example, or maybe some hint or useful property of such a sequence?
|
2018/04/09
|
[
"https://math.stackexchange.com/questions/2729079",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/479389/"
] |
Equivalently you're looking for $b\_k = \ln (a\_k)$ such that $b\_k\to 0$ and $\sum\_k b\_k\to -\infty$.
$b\_k=-\frac 1k$ fits the bill, yielding $a\_k = e^{-1/k}$.
|
Any $a\_k$ such that
$0<a\_k<1$ and
$\sum (1-a\_k)$ diverges will do.
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
I'm starting one of those 365 projects. And I plan on sharing on my blog how I use digiKam and RawTherapee in my "workflow" (not sure if I'm qualified to use that term :P ). There doesn't seem to be much on the internet on the subject of photography on Linux... besides a lot of people who seem satisfied with just using UFRaw + Gimp. :/
|
Get faster at post-processing. I am currently about 2 months and 1000 pictures behind in processing, and I am currently on a Eurotrip racking up more. I need to get faster at picking keepers and processing them for the web.
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
I'm starting one of those 365 projects. And I plan on sharing on my blog how I use digiKam and RawTherapee in my "workflow" (not sure if I'm qualified to use that term :P ). There doesn't seem to be much on the internet on the subject of photography on Linux... besides a lot of people who seem satisfied with just using UFRaw + Gimp. :/
|
**Build my reputation** by sticking to my guns about charging for shoots requested of me and nailing the shoots in the process. I've only been doing this for six months so it's vital people don't get the "shoots for free" reputation going around.
**Master** some general off camera lighting for said shoots, thus building my reputation of being able to deliver.
**Check my framing** for tips of elbows, feet and even spacing.
Amongst other things.
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
Learn off-camera lighting techniques.
-------------------------------------
I bought a [speed light that's way more than what I need](http://rads.stackoverflow.com/amzn/click/B0002EMY9Y) a few months ago, and recently bought a [cheapo radio trigger](http://rads.stackoverflow.com/amzn/click/B002W3IXZW) for it. My goal is to learn as much as I can with one light before looking for an older manual flash for my second light.
|
Huuu ... nice one! :-)
Well here goes:
1. Learn how to use a flash properly - maybe even a ringflash.
2. Make my own webpage to sell my photos.
3. Organize my growing photo collection better. (a new years resolution last year also)
4. Maybe get a fullframe camera! :-)
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
Get faster at post-processing. I am currently about 2 months and 1000 pictures behind in processing, and I am currently on a Eurotrip racking up more. I need to get faster at picking keepers and processing them for the web.
|
1. Take some walks to take some photos.
* No, really, plan some more time to look for possibilities instead of taking pictures along on the walk to somewhere. Be it on bike or on foot, just go somewhere to explicitly concentrate on taking photos there.
2. Analyze my pictures from the last parties with [Exposureplot](http://www.cpr.demon.nl/prog_plotf.html) and decide on ~50mm or ~85mm prime.
3. Buy a flash. Maybe. :)
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
I'm starting one of those 365 projects. And I plan on sharing on my blog how I use digiKam and RawTherapee in my "workflow" (not sure if I'm qualified to use that term :P ). There doesn't seem to be much on the internet on the subject of photography on Linux... besides a lot of people who seem satisfied with just using UFRaw + Gimp. :/
|
Same as last year, increase the ratio of keepers without lowering my standards.
The year before I had a 1:10 ratio (I deleted 90% of images I took) and I finished with about a 1:8 ratio (now deleting 87%). I want to eventually stop shooting the bad pictures through better previsualization and finding more creative ways to make things look interesting.
I'm adding two this year:
* Learn panning (might seriously affect my previous goal though ;)
* Try out high-speed photography (water drops and splashes mostly)
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
Learn off-camera lighting techniques.
-------------------------------------
I bought a [speed light that's way more than what I need](http://rads.stackoverflow.com/amzn/click/B0002EMY9Y) a few months ago, and recently bought a [cheapo radio trigger](http://rads.stackoverflow.com/amzn/click/B002W3IXZW) for it. My goal is to learn as much as I can with one light before looking for an older manual flash for my second light.
|
For 2011 I've taken a step back from commercial work to focus on personal projects, but my main resolution is to print more black and white images, as most of my work has been colour the last few years.
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
I'm starting one of those 365 projects. And I plan on sharing on my blog how I use digiKam and RawTherapee in my "workflow" (not sure if I'm qualified to use that term :P ). There doesn't seem to be much on the internet on the subject of photography on Linux... besides a lot of people who seem satisfied with just using UFRaw + Gimp. :/
|
I'm doing another project 365, just finished my first and it was really fun. And when doing it again, I'm planning to do more squares and other weird aspect ratios. Another goal is using my tripod more. And more portraits. That will do.
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
I think I'll start a 365 project again, I really need to start taking more pictures. And I really need to finish my A-Z project...
|
Huuu ... nice one! :-)
Well here goes:
1. Learn how to use a flash properly - maybe even a ringflash.
2. Make my own webpage to sell my photos.
3. Organize my growing photo collection better. (a new years resolution last year also)
4. Maybe get a fullframe camera! :-)
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
Huuu ... nice one! :-)
Well here goes:
1. Learn how to use a flash properly - maybe even a ringflash.
2. Make my own webpage to sell my photos.
3. Organize my growing photo collection better. (a new years resolution last year also)
4. Maybe get a fullframe camera! :-)
|
**Build my reputation** by sticking to my guns about charging for shoots requested of me and nailing the shoots in the process. I've only been doing this for six months so it's vital people don't get the "shoots for free" reputation going around.
**Master** some general off camera lighting for said shoots, thus building my reputation of being able to deliver.
**Check my framing** for tips of elbows, feet and even spacing.
Amongst other things.
|
6,265
|
What kind of "New years resolutions" do you have as a photographer?
|
2011/01/01
|
[
"https://photo.stackexchange.com/questions/6265",
"https://photo.stackexchange.com",
"https://photo.stackexchange.com/users/67/"
] |
Same as last year, increase the ratio of keepers without lowering my standards.
The year before I had a 1:10 ratio (I deleted 90% of images I took) and I finished with about a 1:8 ratio (now deleting 87%). I want to eventually stop shooting the bad pictures through better previsualization and finding more creative ways to make things look interesting.
I'm adding two this year:
* Learn panning (might seriously affect my previous goal though ;)
* Try out high-speed photography (water drops and splashes mostly)
|
I'm doing another project 365, just finished my first and it was really fun. And when doing it again, I'm planning to do more squares and other weird aspect ratios. Another goal is using my tripod more. And more portraits. That will do.
|
49,640,938
|
How can I re-write a conjunction of conditions (with early termination) over the same parameters?
Let's say I have 3 conditions
```
cond1 :: Maybe a -> Maybe a -> Maybe Bool
cond2 :: Maybe a -> Maybe a -> Maybe Bool
cond3 :: Maybe a -> Maybe a -> Maybe Bool
```
and
```
result = cond1 x y .&& cond2 x y .&& cond3 x y
```
where `.&&` is a short-circuiting operation for Maybe Bool
```
(.&&):: Maybe Bool -> Maybe Bool -> Maybe Bool
fa .&& fb = do a <- fa; if a then fb else return False
```
I am looking for a re-write of `result` such that it takes the list of those conditions `[cond1, cond2, cond3]` and consecutively applies them to the tuple `(x,y)` with early termination, or any other elegant suggestion.
Thank you
|
2018/04/04
|
[
"https://Stackoverflow.com/questions/49640938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9472767/"
] |
The `Maybe a` types are just a distraction - they might as well be `a`, since all we want to do with them is call cond*n* on them.
So, we have a list:
```
conditions :: [a -> b -> Maybe Bool]
```
and a tuple:
```
inputs :: (a, b)
```
Since the only thing we can do with the conditions in that list is call them with `inputs` as an argument, we might as well do that first:
```
results :: [Maybe Bool]
results = map (($ inputs) . uncurry) conditions
```
and we would like to find an f such that
```
f results :: Maybe Bool
```
meaning
```
f :: [Maybe Bool] -> Maybe Bool
```
There are some interesting functions with that type signature, for example
```
f = fmap and . sequenceA
```
but this may be a little bit less short-circuity than you'd like: if one of the conditions returns `Just False`, we will still evaluate the remainder to see if any of them yield `Nothing`. This may or may not be what you wanted; to do something that short-circuits on `Just False` as well as on `Nothing`, I don't see any approach more clever than writing a recursive function that consumes the list.
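For concreteness, such a recursive function could look like this (a minimal sketch; the name `andAll` is arbitrary, not from any library):
```
-- Stops at the first Nothing or Just False; thanks to laziness, the rest of
-- the list (and hence the remaining conditions) is never evaluated.
andAll :: [Maybe Bool] -> Maybe Bool
andAll []                 = Just True
andAll (Nothing : _)      = Nothing
andAll (Just False : _)   = Just False
andAll (Just True : rest) = andAll rest

-- e.g. andAll [Just True, Just False, undefined] evaluates to Just False
```
Applied to `results`, this matches the semantics of the chained `.&&` version while stopping at the first condition that fails or returns `Nothing`.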
|
You can get a solution similar to the one above, but with short-circuiting across the foldable structure, i.e. not having to consume the whole list.
```
(??) :: b -> b -> Bool -> b
a ?? b = \x -> if x then a else b
andM :: (Monad m) => m Bool -> m Bool -> m Bool
andM a b = a >>= \x -> (b ?? pure x) x
shortCircuitOnFalse :: (Foldable t, Monad m) => t (m Bool) -> m Bool
shortCircuitOnFalse = foldr andM (pure True)
```
So if you have a list (which is Foldable) of functions resembling conditions;
```
conditions :: [a -> b -> Maybe Bool]
```
and a tuple of inputs to those functions;
```
inputs :: (a, b)
```
then you can take the same approach as above:
```
results :: [Maybe Bool]
results = map (($ inputs) . uncurry) conditions
whatYouWant :: Maybe Bool
whatYouWant = shortCircuitOnFalse results
```
and a generalised version of `whatYouWant`:
```
gWhatYouWant
:: (Foldable t, Functor t, Monad m)
=> t (a -> b -> m Bool) -- ^ conditions
-> (a, b) -- ^ inputs
-> m Bool
gWhatYouWant cs is = shortCircuitOnFalse $ fmap (($ is) . uncurry) cs
```
|
49,640,938
|
How can I re-write a conjunction of conditions (with early termination) over the same parameters?
Let's say I have 3 conditions
```
cond1 :: Maybe a -> Maybe a -> Maybe Bool
cond2 :: Maybe a -> Maybe a -> Maybe Bool
cond3 :: Maybe a -> Maybe a -> Maybe Bool
```
and
```
result = cond1 x y .&& cond2 x y .&& cond3 x y
```
where `.&&` is a short-circuiting operation for Maybe Bool
```
(.&&):: Maybe Bool -> Maybe Bool -> Maybe Bool
fa .&& fb = do a <- fa; if a then fb else return False
```
I am looking for a re-write of `result` such that it takes the list of those conditions `[cond1, cond2, cond3]` and consecutively applies them to the tuple `(x,y)` with early termination, or any other elegant suggestion.
Thank you
|
2018/04/04
|
[
"https://Stackoverflow.com/questions/49640938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9472767/"
] |
The `Maybe a` types are just a distraction - they might as well be `a`, since all we want to do with them is call cond*n* on them.
So, we have a list:
```
conditions :: [a -> b -> Maybe Bool]
```
and a tuple:
```
inputs :: (a, b)
```
Since the only thing we can do with the conditions in that list is call them with `inputs` as an argument, we might as well do that first:
```
results :: [Maybe Bool]
results = map (($ inputs) . uncurry) conditions
```
and we would like to find an f such that
```
f results :: Maybe Bool
```
meaning
```
f :: [Maybe Bool] -> Maybe Bool
```
There are some interesting functions with that type signature, for example
```
f = fmap and . sequenceA
```
but this may be a little bit less short-circuity than you'd like: if one of the conditions returns `Just False`, we will still evaluate the remainder to see if any of them yield `Nothing`. This may or may not be what you wanted; to do something that short-circuits on `Just False` as well as on `Nothing`, I don't see any approach more clever than writing a recursive function that consumes the list.
|
`guard` is useful for short-circuiting behavior.
```
module Main where
import Control.Monad
import Debug.Trace
cond1 :: Maybe Bool
cond1 = Just True
cond2 :: Maybe Bool
cond2 = Just False
cond3 :: Maybe Bool
cond3 = Just True
-- | traceShow added to show the short-circuiting behavior
shortCircuit :: Maybe Bool
shortCircuit = foldl1 ((>>) . (guard =<<)) [ traceShow "1" cond1
, traceShow "2" cond2
, traceShow "3" cond3
]
-- λ> shortCircuit
-- "1"
-- "2"
-- Nothing
```
|
49,640,938
|
How can I re-write a conjunction of conditions (with early termination) over the same parameters?
Let's say I have 3 conditions
```
cond1 :: Maybe a -> Maybe a -> Maybe Bool
cond2 :: Maybe a -> Maybe a -> Maybe Bool
cond3 :: Maybe a -> Maybe a -> Maybe Bool
```
and
```
result = cond1 x y .&& cond2 x y .&& cond3 x y
```
where `.&&` is a short-circuiting operation for Maybe Bool
```
(.&&):: Maybe Bool -> Maybe Bool -> Maybe Bool
fa .&& fb = do a <- fa; if a then fb else return False
```
I am looking for a re-write of `result` such that it takes the list of those conditions `[cond1, cond2, cond3]` and consecutively applies them to the tuple `(x,y)` with early termination, or any other elegant suggestion.
Thank you
|
2018/04/04
|
[
"https://Stackoverflow.com/questions/49640938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9472767/"
] |
You can get a solution similar to the one above, but with short-circuiting across the foldable structure, i.e. not having to consume the whole list.
```
(??) :: b -> b -> Bool -> b
a ?? b = \x -> if x then a else b
andM :: (Monad m) => m Bool -> m Bool -> m Bool
andM a b = a >>= \x -> (b ?? pure x) x
shortCircuitOnFalse :: (Foldable t, Monad m) => t (m Bool) -> m Bool
shortCircuitOnFalse = foldr andM (pure True)
```
So if you have a list (which is Foldable) of functions resembling conditions;
```
conditions :: [a -> b -> Maybe Bool]
```
and a tuple of inputs to those functions;
```
inputs :: (a, b)
```
then you can take the same approach as above:
```
results :: [Maybe Bool]
results = map (($ inputs) . uncurry) conditions
whatYouWant :: Maybe Bool
whatYouWant = shortCircuitOnFalse results
```
and a generalised version of `whatYouWant`:
```
gWhatYouWant
:: (Foldable t, Functor t, Monad m)
=> t (a -> b -> m Bool) -- ^ conditions
-> (a, b) -- ^ inputs
-> m Bool
gWhatYouWant cs is = shortCircuitOnFalse $ fmap (($ is) . uncurry) cs
```
|
`guard` is useful for short-circuiting behavior.
```
module Main where
import Control.Monad
import Debug.Trace
cond1 :: Maybe Bool
cond1 = Just True
cond2 :: Maybe Bool
cond2 = Just False
cond3 :: Maybe Bool
cond3 = Just True
-- | traceShow added to show the short-circuiting behavior
shortCircuit :: Maybe Bool
shortCircuit = foldl1 ((>>) . (guard =<<)) [ traceShow "1" cond1
, traceShow "2" cond2
, traceShow "3" cond3
]
-- λ> shortCircuit
-- "1"
-- "2"
-- Nothing
```
|
46,483
|
I tried rebinding my keyboard with Ukelele to switch the `return` key with the `'` key. This works fine most of the time. There are just a few websites (that I've found so far) that aren't compatible with this change:
* **Facebook**: Sending IM messages no longer works.
* **Google Docs**: Does not allow you to insert new lines when editing word documents.
* **StackExchange**: Pressing `shift`+`return` at the end of a bullet list does not insert a new bullet.
How can I get this rebinding to work flawlessly everywhere in the operating system?
I'm using Chrome 18.0 beta.
These are the applications I've found which don't recognize the return key after rebinding it:
* Chrome
* Microsoft Word (specifically when a dialog is open and the OK button is the default button. Pressing return should be the same thing as clicking the OK button).
* Java apps
|
2012/03/28
|
[
"https://apple.stackexchange.com/questions/46483",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/218/"
] |
You probably need to step a level further down the tree of software and hardware, to the level of [KeyRemap4MacBook](http://pqrs.org/macosx/keyremap4macbook/) - which acts as a filter between the physical keyboard and the keyboard events reported to MacOS.
The software keyboard map is an optional thing - software can intercept keyboard events at a level that bypasses it, if they wish, and it sounds like some of what the browser does with key binding is touching on that.
A lower level remapping may bypass that problem.
|
So far, it looks like it is a bug in Chrome. When I try to do the same thing in Safari, it works just fine. Here are the results of a simple test I did.
Definitions:
* Custom keyboard: this is exactly the same as the normal keyboard. The only difference is that I used Ukelele to swap the `'` and `return` keys.
* Return key code: outputs the Javascript key code that shows up when the physical `return` key is depressed.
* Likewise, `Quote key code` refers to the key code when the key with the `'` symbol is depressed.
Results:
```
+----------+---------+-----------------+----------------+
| Keyboard | Browser | Return key code | Quote key code |
+----------+---------+-----------------+----------------+
| Normal | Chrome | 13 | 222 |
| Normal | Safari | 13 | 222 |
| Custom | Chrome | 222 | 222 |
| Custom | Safari | 222 | 13 |
+----------+---------+-----------------+----------------+
```
Notice how Chrome and Safari's behaviors differ. Since the `'` has been changed to `return`, Chrome should be sending `13` instead of `222` when that key is depressed.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
Country-based blocking is usually put in place as a result of some organisational policy whose *intention* is indeed to "block hackers". This sort of thing fails on three points:
1. Such a policy assumes that malicious people can be categorized by nationality. This is old-style, World-War-I type of thinking.
2. Geographical position is immaterial for computers; a firewall can only see *IP addresses*. Inferring geography from IP addresses relies on big tables that are never completely up-to-date.
3. As you observed, working around these blocking systems is trivial for attackers; it suffices to use a relay host outside of the blocked country, and this happens "naturally" when using Tor. Most attackers will use such relays anyway, to cover their tracks.
So the usual net effect of such a blocking is to irritate a few normal users (who might have been customers, but will not now that they are angry), without actually impeding the efforts of competent attackers.
---
On the bright side, though, "country"-based blocking is sometimes put in place to prevent thousands of mindless drones from spamming the connection logs. For instance, the sysadmin might have noticed a surge of dummy connections from some botnet, most machines of which are located in Venezuela. In that case, blocking Venezuela altogether may help prevent the clogging of log files, while having only minor impact on business (assuming that the server in question has very few honest Venezuela-based customers). Thus, it is *conceivable* that a risk/cost analysis has determined that such a large blocking would improve things.
However, in most cases, the "country blocking" is there for the show: a whole-country blocking helps sysadmins demonstrate to managers that they are doing something for security, in a way that managers readily understand. This is the usual predicament of security: when all things work well, security is invisible. It is unfortunately hard to negotiate budgets for activities that don't imply any visible result, even though the whole point of security is to avoid having visible results, e.g. a defaced Web site or a list of 16 million user passwords leaked and hitting the news.
In the case of media distribution, some distributors enforce country-based blocking because they did not have whole-world retransmission rights, and by doing a modicum of blocking effort they fulfil their legal obligations. Arguably, this case is also "for the show".
|
In my case, our expected customers come from predictable countries, and so to limit the "threat surface", other countries are blocked.
This has limited value as any determined person can do what you did and simply re-route their traffic. The side benefit, though, is that the countries we permit are those with stringent cyber-laws and we can get law enforcement help if an attack happens. So, if an attacker from a non-allowed country routes their traffic through an allowed country, we can get the police involved. It's a small thing, but it does lower the risk without any impact on business and at no cost (except for the time to enter the allowed country into the firewall's whitelist).
When I did this, the bad traffic load on our web servers dropped 90%, which is significant in terms of resource cost-savings, if nothing else.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
In my case, our expected customers come from predictable countries, and so to limit the "threat surface", other countries are blocked.
This has limited value as any determined person can do what you did and simply re-route their traffic. The side benefit, though, is that the countries we permit are those with stringent cyber-laws and we can get law enforcement help if an attack happens. So, if an attacker from a non-allowed country routes their traffic through an allowed country, we can get the police involved. It's a small thing, but it does lower the risk without any impact on business and at no cost (except for the time to enter the allowed country into the firewall's whitelist).
When I did this, the bad traffic load on our web servers dropped 90%, which is significant in terms of resource cost-savings, if nothing else.
|
It's true that if a hacker would like to get access to your page, it will not help; he can simply use a VPN or proxy.
But if you think about all the bots out there which attack every page they find to test exploits and/or passwords, you will be able to block a lot of them. This will also help you against DDoS attacks: if you block every country except the one you live in, you are able to block most of the traffic. Of course, there are more effective methods against DDoS attacks, but a filter is a reasonable and simple one.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
In my case, our expected customers come from predictable countries, and so to limit the "threat surface", other countries are blocked.
This has limited value as any determined person can do what you did and simply re-route their traffic. The side benefit, though, is that the countries we permit are those with stringent cyber-laws and we can get law enforcement help if an attack happens. So, if an attacker from a non-allowed country routes their traffic through an allowed country, we can get the police involved. It's a small thing, but it does lower the risk without any impact on business and at no cost (except for the time to enter the allowed country into the firewall's whitelist).
When I did this, the bad traffic load on our web servers dropped 90%, which is significant in terms of resource cost-savings, if nothing else.
|
Some websites block countries for business reasons. Traffic from some countries doesn't generate enough revenue to warrant the resources to serve them. Sometimes companies don't want to expand into a country until they can "do it right."
It's likely not a security issue. This can be circumvented by using Tor and VPNs. Of course, they can also block traffic from Tor and VPNs, or at least monitor it more closely.
This is a business or political decision, not technical.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
In my case, our expected customers come from predictable countries, and so to limit the "threat surface", other countries are blocked.
This has limited value as any determined person can do what you did and simply re-route their traffic. The side benefit, though, is that the countries we permit are those with stringent cyber-laws and we can get law enforcement help if an attack happens. So, if an attacker from a non-allowed country routes their traffic through an allowed country, we can get the police involved. It's a small thing, but it does lower the risk without any impact on business and at no cost (except for the time to enter the allowed country into the firewall's whitelist).
When I did this, the bad traffic load on our web servers dropped 90%, which is significant in terms of resource cost-savings, if nothing else.
|
In the case of a site like Grubhub, the reason for blocking certain countries is likely not hacking (technical interference) but rather an effort to thwart fake reviews and similar unwanted content. It is relatively common nowadays that people in poor countries are hired for posting spam, or spam-like things like fake reviews for a few pennies a pop.
Sure anyone could bypass this block, but that could then be detected in a different way, as these users would then likely connect through a proxy server that can be detected through a proxy blacklist. The point here is to set up roadblocks, rather than absolute security. Unlike a security vulnerability, fake reviews on a restaurant rating are not an immediately fatal threat to the site.
If this is done in a judicious way, targeting an actual problem that has been identified, it can absolutely be an effective way of stopping unwanted posts. To name an example, for a site I run, I found that I was consistently getting spam from Bangladesh, Pakistan and Cameroon (of all places) and zero useful traffic from those same places. I blocked, to the best of my ability, those countries from posting content, but not from reading the site. Users from these countries are now greeted with a polite message asking them to contact an e-mail address that was set up just for this purpose, if they are legitimate users. This has been effective in blocking this particular class of spam, and is an example of what I would call a well-informed and judicious use of blocking a certain country.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
Country-based blocking is usually put in place as a result of some organisational policy whose *intention* is indeed to "block hackers". This sort of thing fails on three points:
1. Such a policy assumes that malicious people can be categorized by nationality. This is old-style, World-War-I type of thinking.
2. Geographical position is immaterial for computers; a firewall can only see *IP addresses*. Inferring geography from IP addresses relies on big tables that are never completely up-to-date.
3. As you observed, working around these blocking systems is trivial for attackers; it suffices to use a relay host outside of the blocked country, and this happens "naturally" when using Tor. Most attackers will use such relays anyway, to cover their tracks.
So the usual net effect of such a blocking is to irritate a few normal users (who might have been customers, but will not now that they are angry), without actually impeding the efforts of competent attackers.
---
On the bright side, though, "country"-based blocking is sometimes put in place to prevent thousands of mindless drones from spamming the connection logs. For instance, the sysadmin might have noticed a surge of dummy connections from some botnet, most machines of which are located in Venezuela. In that case, blocking Venezuela altogether may help prevent the clogging of log files, while having only minor impact on business (assuming that the server in question has very few honest Venezuela-based customers). Thus, it is *conceivable* that a risk/cost analysis has determined that such a large blocking would improve things.
However, in most cases, the "country blocking" is there for the show: a whole-country blocking helps sysadmins demonstrate to managers that they are doing something for security, in a way that managers readily understand. This is the usual predicament of security: when all things work well, security is invisible. It is unfortunately hard to negotiate budgets for activities that don't imply any visible result, even though the whole point of security is to avoid having visible results, e.g. a defaced Web site or a list of 16 million user passwords leaked and hitting the news.
In the case of media distribution, some distributors enforce country-based blocking because they did not have whole-world retransmission rights, and by doing a modicum of blocking effort they fulfil their legal obligations. Arguably, this case is also "for the show".
|
It's true that if a hacker would like to get access to your page, it will not help; he can simply use a VPN or proxy.
But if you think about all the bots out there which attack every page they find to test exploits and/or passwords, you will be able to block a lot of them. This will also help you against DDoS attacks: if you block every country except the one you live in, you are able to block most of the traffic. Of course, there are more effective methods against DDoS attacks, but a filter is a reasonable and simple one.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
Country-based blocking is usually put in place as a result of some organisational policy whose *intention* is indeed to "block hackers". This sort of thing fails on three points:
1. Such a policy assumes that malicious people can be categorized by nationality. This is old-style, World-War-I type of thinking.
2. Geographical position is immaterial for computers; a firewall can only see *IP addresses*. Inferring geography from IP addresses relies on big tables that are never completely up-to-date.
3. As you observed, working around these blocking systems is trivial for attackers; it suffices to use a relay host outside of the blocked country, and this happens "naturally" when using Tor. Most attackers will use such relays anyway, to cover their tracks.
So the usual net effect of such a blocking is to irritate a few normal users (who might have been customers, but will not now that they are angry), without actually impeding the efforts of competent attackers.
---
On the bright side, though, "country"-based blocking is sometimes put in place to prevent thousands of mindless drones from spamming the connection logs. For instance, the sysadmin might have noticed a surge of dummy connections from some botnet, most machines of which are located in Venezuela. In that case, blocking Venezuela altogether may help prevent the clogging of log files, while having only minor impact on business (assuming that the server in question has very few honest Venezuela-based customers). Thus, it is *conceivable* that a risk/cost analysis has determined that such a large blocking would improve things.
However, in most cases, the "country blocking" is there for the show: a whole-country blocking helps sysadmins demonstrate to managers that they are doing something for security, in a way that managers readily understand. This is the usual predicament of security: when all things work well, security is invisible. It is unfortunately hard to negotiate budgets for activities that don't imply any visible result, even though the whole point of security is to avoid having visible results, e.g. a defaced Web site or a list of 16 million user passwords leaked and hitting the news.
In the case of media distribution, some distributors enforce country-based blocking because they did not have whole-world retransmission rights, and by doing a modicum of blocking effort they fulfil their legal obligations. Arguably, this case is also "for the show".
|
Some websites block countries for business reasons. Traffic from some countries doesn't generate enough revenue to warrant the resources to serve them. Sometimes companies don't want to expand into a country until they can "do it right."
It's likely not a security issue. This can be circumvented by using Tor and VPNs. Of course, they can also block traffic from Tor and VPNs, or at least monitor it more closely.
This is a business or political decision, not technical.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
Country-based blocking is usually put in place as a result of some organisational policy whose *intention* is indeed to "block hackers". This sort of thing fails on three points:
1. Such a policy assumes that malicious people can be categorized by nationality. This is old-style, World-War-I type of thinking.
2. Geographical position is immaterial for computers; a firewall can only see *IP addresses*. Inferring geography from IP addresses relies on big tables that are never completely up-to-date.
3. As you observed, working around these blocking systems is trivial for attackers; it suffices to use a relay host outside of the blocked country, and this happens "naturally" when using Tor. Most attackers will use such relays anyway, to cover their tracks.
So the usual net effect of such a blocking is to irritate a few normal users (who might have been customers, but will not now that they are angry), without actually impeding the efforts of competent attackers.
---
On the bright side, though, "country"-based blocking is sometimes put in place to prevent thousands of mindless drones from spamming the connection logs. For instance, the sysadmin might have noticed a surge of dummy connections from some botnet, most of whose machines are located in Venezuela. In that case, blocking Venezuela altogether may help prevent the clogging of log files, while having only a minor impact on business (assuming that the server in question has very few honest Venezuela-based customers). Thus, it is *conceivable* that a risk/cost analysis has determined that such a large blocking would improve things.
However, in most cases, the "country blocking" is there for the show: a whole-country blocking helps sysadmins demonstrate to managers that they are doing something for security, in a way that managers readily understand. This is the usual predicament of security: when all things work well, security is invisible. It is unfortunately hard to negotiate budgets for activities that don't imply any visible result. Even though the whole point of security is to avoid having visible results, e.g. a defaced Web site or a list of 16 million user passwords leaked and hitting the news.
In the case of media distribution, some distributors enforce country-based blocking because they do not have worldwide retransmission rights, and by making a modicum of blocking effort they fulfil their legal obligations. Arguably, this case is also "for the show".
|
In the case of a site like Grubhub, the reason for blocking certain countries is likely not hacking (technical interference) but rather an effort to thwart fake reviews and similar unwanted content. It is relatively common nowadays that people in poor countries are hired for posting spam, or spam-like things like fake reviews for a few pennies a pop.
Sure anyone could bypass this block, but that could then be detected in a different way, as these users would then likely connect through a proxy server that can be detected through a proxy blacklist. The point here is to set up roadblocks, rather than absolute security. Unlike a security vulnerability, fake reviews on a restaurant rating are not an immediately fatal threat to the site.
If this is done in a judicious way and targeting an actual problem that has been identified, it can absolutely be an effective way of stopping unwanted posts. To name an example, for a site I run, I found that I was consistently getting spam from Bangladesh, Pakistan and Cameroon (of all places) and zero useful traffic from those same places. I blocked, to the best of my ability, those countries from posting content, but not from reading the site. Users from these countries are now greeted with a polite message asking them to contact an e-mail address that was set up just for this purpose, if they were legitimate users. This has been effective in blocking this particular class of spam, and is an example of what I would call a well-informed and judicious use of blocking a certain country.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
It's true that if a hacker wants to get access to your page, this will not help; he can simply use a VPN or proxy.
But if you think about all the bots out there which attack every page they find to test exploits and/or passwords, you will be able to block a lot of them. This will also help you against DDoS attacks: if you block every country except the one you live in, you are able to block most of the traffic. Of course, there are more effective methods against DDoS attacks, but such a filter is a reasonable and simple one; a minimal sketch of what it can look like at the firewall level is shown below.
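For illustration, here is a minimal sketch of such a country-level filter using `ipset` and `iptables`. The set name `blocked_country` and the CIDR ranges are placeholders; a real setup would load a full, regularly updated list of ranges for the country from an IP-geolocation provider.
```
# Create a set that will hold the CIDR ranges attributed to the country.
ipset create blocked_country hash:net

# Add the ranges (placeholders here; real lists come from a geolocation provider).
ipset add blocked_country 203.0.113.0/24
ipset add blocked_country 198.51.100.0/24

# Drop any incoming packet whose source address is in the set.
iptables -A INPUT -m set --match-set blocked_country src -j DROP
```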
|
In the case of a site like Grubhub, the reason for blocking certain countries is likely not hacking (technical interference) but rather an effort to thwart fake reviews and similar unwanted content. It is relatively common nowadays that people in poor countries are hired for posting spam, or spam-like things like fake reviews for a few pennies a pop.
Sure anyone could bypass this block, but that could then be detected in a different way, as these users would then likely connect through a proxy server that can be detected through a proxy blacklist. The point here is to set up roadblocks, rather than absolute security. Unlike a security vulnerability, fake reviews on a restaurant rating are not an immediately fatal threat to the site.
If this is done in a judicious way and targeting an actual problem that has been identified, it can absolutely be an effective way of stopping unwanted posts. To name an example, for a site I run, I found that I was consistently getting spam from Bangladesh, Pakistan and Cameroon (of all places) and zero useful traffic from those same places. I blocked, to the best of my ability, those countries from posting content, but not from reading the site. Users from these countries are now greeted with a polite message asking them to contact an e-mail address that was set up just for this purpose, if they were legitimate users. This has been effective in blocking this particular class of spam, and is an example of what I would call a well-informed and judicious use of blocking a certain country.
|
72,230
|
I am located in Venezuela right now, and for the whole weekend have been unable to access grubhub.com and seamless.com.
Finally, I tried using the Tor Browser and got access. The same thing happened in January when I tried to access the police department's website in a New York State county when I was abroad.
Is this a measure to avoid hackers? Or do they do it to avoid spending bandwidth in countries where the website doesn't serve the population?
|
2014/11/03
|
[
"https://security.stackexchange.com/questions/72230",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/60097/"
] |
Some websites block countries for business reasons. Traffic from some countries doesn't generate enough revenue to warrant the resources to serve them. Sometimes companies don't want to expand into a country until they can "do it right."
It's likely not a security issue. This can be circumvented by using Tor and VPNs. Of course, they can also block traffic from Tor and VPNs, or at least monitor it more closely.
This is a business or political decision, not technical.
|
In the case of a site like Grubhub, the reason for blocking certain countries is likely not hacking (technical interference) but rather an effort to thwart fake reviews and similar unwanted content. It is relatively common nowadays that people in poor countries are hired for posting spam, or spam-like things like fake reviews for a few pennies a pop.
Sure anyone could bypass this block, but that could then be detected in a different way, as these users would then likely connect through a proxy server that can be detected through a proxy blacklist. The point here is to set up roadblocks, rather than absolute security. Unlike a security vulnerability, fake reviews on a restaurant rating are not an immediately fatal threat to the site.
If this is done in a judicious way and targeting an actual problem that has been identified, it can absolutely be an effective way of stopping unwanted posts. To name an example, for a site I run, I found that I was consistently getting spam from Bangladesh, Pakistan and Cameroon (of all places) and zero useful traffic from those same places. I blocked, to the best of my ability, those countries from posting content, but not from reading the site. Users from these countries are now greeted with a polite message asking them to contact an e-mail address that was set up just for this purpose, if they were legitimate users. This has been effective in blocking this particular class of spam, and is an example of what I would call a well-informed and judicious use of blocking a certain country.
|
35,552,179
|
I want to insert a database table into another one which has more columns. I need all records of it. What I tried is the following, which is not working and does not give me an error message:
```
$sql = mysqli_query($con, "SELECT * FROM table1");
while ($row = mysqli_fetch_array($sql)) {
$sql1 = mysqli_query($con, "INSERT INTO table2
(uid,
pid,
tstamp,
crdate)
VALUES ('',
'".$row['value1']."',
'".$row['value2']."',
'".$row['value3']."'");}
```
|
2016/02/22
|
[
"https://Stackoverflow.com/questions/35552179",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5210963/"
] |
You can do it directly like this; that's much faster and better than fetching the data and iterating over it.
```
insert into table2(uid,pid,tstamp,crdate)
select value1,value2,value3,value4 from table1
```
|
You forgot to close the `"`:
```
mysqli_query($con, "INSERT INTO table2
(uid,
pid,
tstamp,
crdate)
VALUES ('',
'".$row['value1']."',
'".$row['value2']."',
'".$row['value3']."');");
```
You can also do this in a single query with `INSERT ... SELECT` to reduce execution time:
```
INSERT INTO table2 (uid,pid,tstamp,crdate)
SELECT '',val1,val2,val3 FROM table1;
```
|
57,142,332
|
I'm trying to activate the device owner of my system application using the hidden API
from `DevicePolicyManager` method `dpm.setDeviceOwner(cmpName)`. This method is throwing illegalStateException. I also tried
`Settings.Global.putInt(context.getContentResolver(), Settings.Global.DEVICE_PROVISIONED, 0);` and
`Settings.Secure.putInt(context.getContentResolver(), Settings.Secure.USER_SETUP_COMPLETE, 0);`. But android studio is still throwing an error.
**Note** : I have both permission in manifest `<uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS" />` and `<uses-permission android:name="android.permission.MANAGE_PROFILE_AND_DEVICE_OWNERS" />`
|
2019/07/22
|
[
"https://Stackoverflow.com/questions/57142332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6008027/"
] |
I received that error when calling `dpm.setProfileOwner` before `dpm.setActiveAdmin`; after all, a profile owner must first be an active admin. However, you'll quickly find that, even if you issue the appropriate sequence of commands you'll then receive the error: `java.lang.IllegalStateException: Unable to set non-default profile owner post-setup`.
If you check your logcat, though, I suspect you'll also find a warning similar to one I received: `avc: denied { write } for name="com.myorg.mapp-0AMhJFjDAJrJ-KmxrLiEPA==" dev="dm-3" ino=3558 scontext=u:r:system_app:s0 tcontext=u:object_r:apk_data_file:s0 tclass=dir permissive=0`
This message is the key... The problem is that selinux rules prevent the apk from making changes directly to the /data/system directory, which is where the xml files (device\_owner\_2.xml and device\_policies.xml) that define profile ownership are located.
In short, you're out of luck. You have a few workaround options:
* Run the `dpm set-profile-owner` command from within a rooted shell. Since it is run as root, this will bypass selinux rules. This is a great option for quick tests (a minimal sketch is shown below)
* Grant your application root access to execute the command directly. This is a good option if you know your devices will be rooted and don't want to have to remember the command.
* Compile your ROM with the relevant access xml files already baked-in.
If you're building a system app (which you must be with those permissions), you're almost certainly rooted or building a ROM, so one of the above options should work.
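As a concrete illustration of the first option above, here is a minimal sketch of issuing the commands from a rooted adb shell; the component name `com.example.myapp/.MyDeviceAdminReceiver` is a placeholder for your own admin receiver.
```
# Get a root shell on the device (requires a rooted or userdebug build).
adb root
adb shell

# Inside the device shell: make the component an active admin, then set it as owner.
dpm set-active-admin com.example.myapp/.MyDeviceAdminReceiver
dpm set-device-owner com.example.myapp/.MyDeviceAdminReceiver

# Or, for a profile owner instead of a device owner:
# dpm set-profile-owner com.example.myapp/.MyDeviceAdminReceiver
```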
|
I've encountered a very similar problem using Android Q. I know it's been answered already, but I found another thing that I did that worked, based on DPM implementation in this [link](https://android.googlesource.com/platform/frameworks/base/+/1c14fbc/cmds/dpm/src/com/android/commands/dpm/Dpm.java). I implemented a platform priv-app with this method:
```
private void setDeviceOwnerAndAdmin() {
int mUserId = UserHandle.USER_SYSTEM;
try {
//Get the Stub implementation for device policy service
IDevicePolicyManager mDevicePolicyManager = IDevicePolicyManager.Stub.asInterface(
ServiceManager.getService(Context.DEVICE_POLICY_SERVICE));
//Get the admin component from DeviceAdmin class
ComponentName component = new ComponentName(mContext, DeviceAdmin.class);
//Set active system admin
mDevicePolicyManager.setActiveAdmin(component, true /*refreshing*/, mUserId);
//Set the device owner for this component
if (!mDevicePolicyManager.setDeviceOwner(component, "OwnerName", mUserId)) {
throw new RuntimeException(
"Can't set package " + component + " as device owner.");
}
//Set provisioning state
mDevicePolicyManager.setUserProvisioningState(
DevicePolicyManager.STATE_USER_SETUP_FINALIZED, mUserId);
} catch (Exception e) {
Log.e(TAG, "Error at setting Owner and Admin", e);
}
}
```
Then the exception occurred with the message
>
> Cannot set the device owner if the device is already set-up
>
>
>
I then added `<uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS" />` and `<uses-permission android:name="android.permission.MANAGE_PROFILE_AND_DEVICE_OWNERS" />` to the Manifest.
Also, added the priv-app package to `/frameworks/base/data/etc/privapp-permissions-platform.xml` with the right permissions.
After all that, I still had the same exception message, until I figured out that the `frameworks/base/packages/SettingsProvider/res/values/defaults.xml` had the value `<bool name="def_user_setup_complete">true</bool>`. That was preventing me from adding a device owner, so I changed this value to `false` and it worked.
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
First, I can give you the answer for *one* table:
The trouble with all these `INTO OUTFILE` or `--tab=tmpfile` (and `-T/path/to/directory`) answers is that they require running **mysqldump** *on the same server* as the MySQL server, and having those access rights.
My solution was simply to use `mysql` (*not* `mysqldump`) with the `-B` parameter, inline the SELECT statement with `-e`, then massage the ASCII output with `sed`, and wind up with CSV including a header field row:
Example:
```
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
```
>
> "id","login","password","folder","email"
> "8","mariana","xxxxxxxxxx","mariana",""
> "3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki@squaredesign.com"
> "4","miedziak","xxxxxxxxxx","miedziak","miedziak@mail.com"
> "5","Sarko","xxxxxxxxx","Sarko",""
> "6","Logitrans
> Poland","xxxxxxxxxxxxxx","LogitransPoland",""
> "7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
> "9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
> "11","Brandfathers and
> Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
> "12","Imagine
> Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
> "13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
> "101","tmp","xxxxxxxxxxxxxxxxxxxxx","\_","WOBC-14.squaredesign.atlassian.net@yoMama.com"
>
>
>
Add a `> outfile.csv` at the end of that one-liner, to get your CSV file for that table.
Next, get a list of *all* your tables with
```
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
```
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
```
Between the `do` and `; done` insert the long command I wrote in Part 1 above, but substitute your tablename with `$tb` instead.
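Putting the two parts together, a minimal sketch of the assembled loop might look like this; the credentials, `dbname`, and output file names are placeholders, and the `sed` expression is the one from Part 1.
```
#!/bin/bash
# Dump every table of `dbname` to its own CSV file, one file per table.
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    mysql -B -u username -ppassword dbname -e "SELECT * FROM \`$tb\`;" \
      | sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" \
      > "$tb.csv"
done
```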
|
This command will create two files in */path/to/directory*: *table\_name.sql* and *table\_name.txt*.
The SQL file will contain the table creation schema, and the txt file will contain the records of the *table\_name* table with fields delimited by a comma.
```
mysqldump -u username -p -t -T/path/to/directory dbname table_name --fields-terminated-by=','
```
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
`mysqldump` has options for CSV formatting:
```
--fields-terminated-by=name
Fields in the output file are terminated by the given
--lines-terminated-by=name
Lines in the output file are terminated by the given
```
The `name` should contain one of the following:
* for `--fields-terminated-by`: `\t` or `"\""`
* for `--fields-enclosed-by=name`: the character that fields in the output file are enclosed by
* for `--lines-terminated-by`: `\r`, `\n`, or `\r\n`
Naturally you should mysqldump each table individually.
I suggest you gather all table names in a text file. Then, iterate through all tables running mysqldump. Here is a script that will dump and gzip 10 tables at a time:
```
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT CONCAT(table_schema,'.',table_name)"
SQLSTMT="${SQLSTMT} FROM information_schema.tables WHERE table_schema NOT IN "
SQLSTMT="${SQLSTMT} ('information_schema','performance_schema','mysql')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/DBTB.txt
COMMIT_COUNT=0
COMMIT_LIMIT=10
TARGET_FOLDER=/path/to/csv/files
for DBTB in `cat /tmp/DBTB.txt`
do
DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
DUMPFILE=${DB}-${TB}.csv.gz
    mysqldump ${MYSQL_CONN} -T ${TARGET_FOLDER} --fields-terminated-by="," --fields-enclosed-by="\"" --lines-terminated-by="\r\n" ${DB} ${TB} | gzip > ${DUMPFILE} &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
wait
fi
```
|
It looks like others had this problem also, and [there is a simple Python script](https://github.com/jamesmishra/mysqldump-to-csv) now, for converting output of mysqldump into CSV files.
```
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
```
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
This worked well for me:
```
mysqldump <DBNAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
```
Or if you want to only dump a specific table:
```
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
```
I'm dumping to `/var/lib/mysql-files/` to avoid this error:
>
> mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
>
>
>
|
It looks like others had this problem also, and [there is a simple Python script](https://github.com/jamesmishra/mysqldump-to-csv) now, for converting output of mysqldump into CSV files.
```
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
```
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
First, I can give you the answer for *one* table:
The trouble with all these `INTO OUTFILE` or `--tab=tmpfile` (and `-T/path/to/directory`) answers is that they require running **mysqldump** *on the same server* as the MySQL server, and having those access rights.
My solution was simply to use `mysql` (*not* `mysqldump`) with the `-B` parameter, inline the SELECT statement with `-e`, then massage the ASCII output with `sed`, and wind up with CSV including a header field row:
Example:
```
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
```
>
> "id","login","password","folder","email"
> "8","mariana","xxxxxxxxxx","mariana",""
> "3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki@squaredesign.com"
> "4","miedziak","xxxxxxxxxx","miedziak","miedziak@mail.com"
> "5","Sarko","xxxxxxxxx","Sarko",""
> "6","Logitrans
> Poland","xxxxxxxxxxxxxx","LogitransPoland",""
> "7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
> "9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
> "11","Brandfathers and
> Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
> "12","Imagine
> Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
> "13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
> "101","tmp","xxxxxxxxxxxxxxxxxxxxx","\_","WOBC-14.squaredesign.atlassian.net@yoMama.com"
>
>
>
Add a `> outfile.csv` at the end of that one-liner, to get your CSV file for that table.
Next, get a list of *all* your tables with
```
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
```
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
```
Between the `do` and `; done` insert the long command I wrote in Part 1 above, but substitute your tablename with `$tb` instead.
|
`mysqldump` has options for CSV formatting:
```
--fields-terminated-by=name
Fields in the output file are terminated by the given
--lines-terminated-by=name
Lines in the output file are terminated by the given
```
The `name` should contain one of the following:
* for `--fields-terminated-by`: `\t` or `"\""`
* for `--fields-enclosed-by=name`: the character that fields in the output file are enclosed by
* for `--lines-terminated-by`: `\r`, `\n`, or `\r\n`
Naturally you should mysqldump each table individually.
I suggest you gather all table names in a text file. Then, iterate through all tables running mysqldump. Here is a script that will dump and gzip 10 tables at a time:
```
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT CONCAT(table_schema,'.',table_name)"
SQLSTMT="${SQLSTMT} FROM information_schema.tables WHERE table_schema NOT IN "
SQLSTMT="${SQLSTMT} ('information_schema','performance_schema','mysql')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/DBTB.txt
COMMIT_COUNT=0
COMMIT_LIMIT=10
TARGET_FOLDER=/path/to/csv/files
for DBTB in `cat /tmp/DBTB.txt`
do
DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
DUMPFILE=${DB}-${TB}.csv.gz
    mysqldump ${MYSQL_CONN} -T ${TARGET_FOLDER} --fields-terminated-by="," --fields-enclosed-by="\"" --lines-terminated-by="\r\n" ${DB} ${TB} | gzip > ${DUMPFILE} &
(( COMMIT_COUNT++ ))
if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
then
COMMIT_COUNT=0
wait
fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
wait
fi
```
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
If you are using MySQL or MariaDB, the easiest and most performant way to dump a single table to CSV is:
```
SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;
```
Now you can use other techniques to repeat this command for multiple tables; a minimal loop sketch follows the links below. See more details here:
* <https://mariadb.com/kb/en/the-mariadb-library/select-into-outfile/>
* <https://dev.mysql.com/doc/refman/5.7/en/select-into.html>
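As a hedged illustration of one such technique (the credentials, `dbname`, and the `/exportdata/` directory are placeholders, and the server must allow `INTO OUTFILE`, e.g. via an appropriate `secure_file_priv` setting), a small shell loop can repeat the statement for every table:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    mysql -u username -ppassword dbname -e \
      "SELECT * INTO OUTFILE '/exportdata/${tb}.csv'
       FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
       LINES TERMINATED BY '\n'
       FROM \`${tb}\`;"
done
```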
|
You also can do it using Data Export tool in [dbForge Studio for MySQL](http://www.devart.com/dbforge/mysql/studio/).
It will allow you to select some or all tables and export them into CSV format.
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
First, I can give you the answer for *one* table:
The trouble with all these `INTO OUTFILE` or `--tab=tmpfile` (and `-T/path/to/directory`) answers is that they require running **mysqldump** *on the same server* as the MySQL server, and having those access rights.
My solution was simply to use `mysql` (*not* `mysqldump`) with the `-B` parameter, inline the SELECT statement with `-e`, then massage the ASCII output with `sed`, and wind up with CSV including a header field row:
Example:
```
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
```
>
> "id","login","password","folder","email"
> "8","mariana","xxxxxxxxxx","mariana",""
> "3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki@squaredesign.com"
> "4","miedziak","xxxxxxxxxx","miedziak","miedziak@mail.com"
> "5","Sarko","xxxxxxxxx","Sarko",""
> "6","Logitrans
> Poland","xxxxxxxxxxxxxx","LogitransPoland",""
> "7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
> "9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
> "11","Brandfathers and
> Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
> "12","Imagine
> Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
> "13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
> "101","tmp","xxxxxxxxxxxxxxxxxxxxx","\_","WOBC-14.squaredesign.atlassian.net@yoMama.com"
>
>
>
Add a `> outfile.csv` at the end of that one-liner, to get your CSV file for that table.
Next, get a list of *all* your tables with
```
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
```
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
```
Between the `do` and `; done` insert the long command I wrote in Part 1 above, but substitute your tablename with `$tb` instead.
|
This worked well for me:
```
mysqldump <DBNAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
```
Or if you want to only dump a specific table:
```
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
```
I'm dumping to `/var/lib/mysql-files/` to avoid this error:
>
> mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
>
>
>
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
First, I can give you the answer for *one* table:
The trouble with all these `INTO OUTFILE` or `--tab=tmpfile` (and `-T/path/to/directory`) answers is that they require running **mysqldump** *on the same server* as the MySQL server, and having those access rights.
My solution was simply to use `mysql` (*not* `mysqldump`) with the `-B` parameter, inline the SELECT statement with `-e`, then massage the ASCII output with `sed`, and wind up with CSV including a header field row:
Example:
```
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
```
>
> "id","login","password","folder","email"
> "8","mariana","xxxxxxxxxx","mariana",""
> "3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki@squaredesign.com"
> "4","miedziak","xxxxxxxxxx","miedziak","miedziak@mail.com"
> "5","Sarko","xxxxxxxxx","Sarko",""
> "6","Logitrans
> Poland","xxxxxxxxxxxxxx","LogitransPoland",""
> "7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
> "9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
> "11","Brandfathers and
> Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
> "12","Imagine
> Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
> "13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
> "101","tmp","xxxxxxxxxxxxxxxxxxxxx","\_","WOBC-14.squaredesign.atlassian.net@yoMama.com"
>
>
>
Add a `> outfile.csv` at the end of that one-liner, to get your CSV file for that table.
Next, get a list of *all* your tables with
```
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
```
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
```
Between the `do` and `; done` insert the long command I wrote in Part 1 above, but substitute your tablename with `$tb` instead.
|
If you are using MySQL or MariaDB, the easiest and most performant way to dump a single table to CSV is:
```
SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;
```
Now you can use other techniques to repeat this command for multiple tables. See more details here:
* <https://mariadb.com/kb/en/the-mariadb-library/select-into-outfile/>
* <https://dev.mysql.com/doc/refman/5.7/en/select-into.html>
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
First, I can give you the answer for *one* table:
The trouble with all these `INTO OUTFILE` or `--tab=tmpfile` (and `-T/path/to/directory`) answers is that they require running **mysqldump** *on the same server* as the MySQL server, and having those access rights.
My solution was simply to use `mysql` (*not* `mysqldump`) with the `-B` parameter, inline the SELECT statement with `-e`, then massage the ASCII output with `sed`, and wind up with CSV including a header field row:
Example:
```
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
```
>
> "id","login","password","folder","email"
> "8","mariana","xxxxxxxxxx","mariana",""
> "3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki@squaredesign.com"
> "4","miedziak","xxxxxxxxxx","miedziak","miedziak@mail.com"
> "5","Sarko","xxxxxxxxx","Sarko",""
> "6","Logitrans
> Poland","xxxxxxxxxxxxxx","LogitransPoland",""
> "7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
> "9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
> "11","Brandfathers and
> Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
> "12","Imagine
> Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
> "13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
> "101","tmp","xxxxxxxxxxxxxxxxxxxxx","\_","WOBC-14.squaredesign.atlassian.net@yoMama.com"
>
>
>
Add a `> outfile.csv` at the end of that one-liner, to get your CSV file for that table.
Next, get a list of *all* your tables with
```
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
```
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
```
Between the `do` and `; done` insert the long command I wrote in Part 1 above, but substitute your tablename with `$tb` instead.
|
You also can do it using Data Export tool in [dbForge Studio for MySQL](http://www.devart.com/dbforge/mysql/studio/).
It will allow you to select some or all tables and export them into CSV format.
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
First, I can give you the answer for *one* table:
The trouble with all these `INTO OUTFILE` or `--tab=tmpfile` (and `-T/path/to/directory`) answers is that they require running **mysqldump** *on the same server* as the MySQL server, and having those access rights.
My solution was simply to use `mysql` (*not* `mysqldump`) with the `-B` parameter, inline the SELECT statement with `-e`, then massage the ASCII output with `sed`, and wind up with CSV including a header field row:
Example:
```
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
```
>
> "id","login","password","folder","email"
> "8","mariana","xxxxxxxxxx","mariana",""
> "3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki@squaredesign.com"
> "4","miedziak","xxxxxxxxxx","miedziak","miedziak@mail.com"
> "5","Sarko","xxxxxxxxx","Sarko",""
> "6","Logitrans
> Poland","xxxxxxxxxxxxxx","LogitransPoland",""
> "7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
> "9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
> "11","Brandfathers and
> Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
> "12","Imagine
> Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
> "13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
> "101","tmp","xxxxxxxxxxxxxxxxxxxxx","\_","WOBC-14.squaredesign.atlassian.net@yoMama.com"
>
>
>
Add a `> outfile.csv` at the end of that one-liner, to get your CSV file for that table.
Next, get a list of *all* your tables with
```
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
```
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
```
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
```
Between the `do` and `; done` insert the long command I wrote in Part 1 above, but substitute your tablename with `$tb` instead.
|
It looks like others had this problem also, and [there is a simple Python script](https://github.com/jamesmishra/mysqldump-to-csv) now, for converting output of mysqldump into CSV files.
```
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
```
|
12,040,816
|
I need to dump *all* tables in MySQL in CSV format.
Is there a command using `mysqldump` to *just* output every row for every table in CSV format?
|
2012/08/20
|
[
"https://Stackoverflow.com/questions/12040816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/638510/"
] |
This command will create two files in */path/to/directory*: *table\_name.sql* and *table\_name.txt*.
The SQL file will contain the table creation schema, and the txt file will contain the records of the *table\_name* table with fields delimited by a comma.
```
mysqldump -u username -p -t -T/path/to/directory dbname table_name --fields-terminated-by=','
```
|
It looks like others had this problem also, and [there is a simple Python script](https://github.com/jamesmishra/mysqldump-to-csv) now, for converting output of mysqldump into CSV files.
```
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
```
|
2,536
|
I am a Java dev and trying to prepare for interviews in C++. Can someone please review the below solution for me?
>
> Assume that you are given the head and tail pointers of a doubly linked list where each node can also have a single child pointer to another similar doubly linked list. There are no cycles in this structure outside of the traditional double links. Write a procedure in C++ that flattens this structure into a single list.
>
>
>
\*\*I am ignoring that it is a doubly linked list for now to make the posted code more simple, since the "doubly" part doesn't seem to be of any help to me for this question.
```
void LL::flatten(Node *head,Node *tail)
{
if(!head||!tail) return;
while(!head->down &&!(head==tail))
{
head = head->next;
}
// Flattening is complete
if(head==tail && !head->down) return;
tail->next = head->down;
head->down = null;
//getTail returns the last node of the linkedlist
flatten(head->next, getTail(tail->next));
}
class Node
{
public:
Node()
{
next = prev = down = null;
data = -1;
}
private:
Node * next;
Node * down;
Node * prev;
int data;
};
```
I have some questions:
* Is the "doubly" part required. What am I missing?
* I have read it is better non-recursively. How is that? Since we have only pointers on the stack and the actual objects/list structure are on the heap, this should not take huge space on the stack, apart from usual recursion overheads.
* Does someone have a non-recursive way of doing it in C++ or Java. The code on this forum is in C# and I am having trouble understanding it with enumerations and all the iterations
* The `getTail(tail->next)` function is \$O(n)\$. Is there a way of not using this function ?. Even If I attach the 'down' linked list to the next node instead of last node, I will need the last node of the 'down' LL so that I can attach it to the next node of parent.
Though I know there might be many solution already posted elsewhere, I am looking for review for my code as well, this will help me understand my gaps while coding in c++ (at least up to interview standards).
|
2011/05/21
|
[
"https://codereview.stackexchange.com/questions/2536",
"https://codereview.stackexchange.com",
"https://codereview.stackexchange.com/users/4465/"
] |
You could write this code in Java first; it would not be much different from a C++ implementation. It seems that your problem is not the language, but rather the algorithm: e.g. you only update the next pointer, not the prev pointer. The following must be true for every link between two nodes: `node->next->prev == node`.
Let me try to answer your questions:
* The "doubly" part is not required to write something that can be flattened, but it is explicitly asked for so you must maintain it. It will allow you to traverse the list in two directions, instead of just one. This can be valuable in many situations.
* The question states that the child pointer is to a list similar to the first list. This means that it can also have child pointers of its own. A recursive solution would be easy to write and easy to understand.
* Why do you want a non-recursive algorithm? This is not stated in the question.
* I believe you can use any language you like in this forum.
|
```
if(!head||!tail) return;
```
Should it really happen that head or tail can be null? Is it really right to ignore a situation where head is null and tail isn't? I'm suspicious of what this test is doing.
Don't do this:
```
while(!head->down &&!(head==tail))
```
Do this:
```
while(!head->down && head != tail)
```
Fewer symbols make the code easier to read.
As for the algorithm:
In order to implement the iterative algorithm you need the doubly-linked lists. i.e. by ignoring those you've made yourself unable to find the non-recursive algorithm.
Since you want to practice for an interview, I'm rot13ing the following hints, so you can use as much or as little as you want:
1. Guvax guebhtu n cebprff bs jevgvat n shapgvba gung sbyybjf gur hasynggrarq punva jvgubhg synggravat vg. Gura nqq synggravat gb gung cebprff.
2. Ng rnpu abqr lbh pna znxr bar bs guerr pubvprf: Zbir Arkg, Zbir Qbja, Zbir Onpx Hc.
3. Jura lbh zbir qbja, gur cerivbhf cbvagre ba gur abqr jvyy or hahfrq. Frg gung cbvagre fb lbh pna pbzr onpx gb gur cerivbhf yriry.
4. Zbivat hc vf gevpxl, ohg lbh fubhyq xabj gur gnvy naq urnq bs gur ybjre yvfg nf jryy nf gur abqr sebz juvpu lbh npprffrq gur ybjre yvfg. Pnershyyl zbir gubfr cbvagref nebhaq!
|
61,132,089
|
I have a method to update people's attribute, and it will rescue `ActiveRecord::RecordNotFound` if the people cannot be found. The method is:
```
def update
@people= People.find(params[:id])
if @people.update(people_params)
render json: { success: 'Success' }
else
render :edit
end
rescue ActiveRecord::RecordNotFound => e
render json: { error: 'Failed') }
end
```
And I want to test the situation when the record not found, here's my test for now:
```
let(:people) { create(:people) }
let(:people_id) { people.id }
let(:user) { people}
# Other tests...
context 'when person not found' do
let(:exception) { ActiveRecord::RecordNotFound }
  # What should I write so that the record will not be found?
before { allow(People).to receive(:find).and_raise(exception) }
it 'responds with json containing the error message' do
expect(JSON.parse(response.body)).to eq({error:'Error'})
end
end
```
I want my test executed under the condition that the record is not found, but I don't know how to do it. I tried to set `let(people) {nil}` but it did not work. Is there any way to do that? Thanks!
|
2020/04/09
|
[
"https://Stackoverflow.com/questions/61132089",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8898054/"
] |
This is not a good solution to begin with. In Rails you want to use `rescue_from` to handle common errors on the controller level.
```
class ApplicationController
rescue_from ActiveRecord::RecordNotFound, with: :not_found
def not_found
respond_to do |format|
format.json { head :404 }
end
end
end
```
This lets you use inheritance to DRY your code.
```
render json: { error: 'Failed') }
```
Is a huge anti-pattern. If the request failed you should tell the client by sending the correct HTTP status code. Don't reinvent the wheel. Especially not when your solution is a square wheel. If your JS relies on monkeying around with a json response to see if the request was a success or not you're doing it wrong.
If you want to test that your controller handles a missing resource correctly you would do:
```
let(:people) { create(:people) }
let(:people_id) { people.id }
let(:user) { people}
it "returns the correct response code if the person cannot be found" do
get '/people/notarealid'
expect(response).to have_http_status :not_found
end
```
This does not use any stubbing and actually tests the implementation.
|
You can try:
```
let!(:error_failed) { { error: 'Failed' } }

context 'when people is not found by params' do
  it 'returns 404 and renders the failed json' do
    null_object = double.as_null_object
    allow(People).to receive(:find).with(params[:id]).and_raise(ActiveRecord::RecordNotFound.new(null_object))
    put :update, format: :json, .....
    expect(response.body).to eq(error_failed.to_json)
    expect(response.status).to .....
  end
end
```
|
38,557,112
|
I am trying to develop a basic program that takes your name and provides the output in standard format. The problem is that I want the user to have an option of not adding the middle name.
For example: Carl Mia Austin gives me C. M. Austin, but I want that even if the input is Carl Austin it should give me C. Austin, without asking the user whether they have a middle name or not.
So, is there a way or function which could automatically detect that?
```
#include <stdio.h>
int main(void) {
char first[32], middle[20], last[20];
printf("Enter full name: ");
scanf("%s %s %s", first, middle, last);
printf("Standard name: ");
printf("%c. %c. %s\n", first[0], middle[0], last);
return 0;
}
```
|
2016/07/24
|
[
"https://Stackoverflow.com/questions/38557112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6398110/"
] |
As currently written, `scanf("%s %s %s", first, middle, last);` expects 3 parts to be typed and will wait until the user types them.
You want to read a line of input with `fgets()` and scan that for name parts with `sscanf` and count how many parts were converted:
```
#include <stdio.h>
int main(void) {
char first[32], middle[32], last[32];
char line[32];
printf("Enter full name: ");
fflush(stdout); // make sure prompt is output
if (fgets(line, sizeof line, stdin)) {
// split the line into parts.
// all buffers have the same length, no need to protect the `%s` formats
*first = *middle = *last = '\0';
switch (sscanf(line, "%s %s %[^\n]", first, middle, last)) {
case EOF: // empty line, unlikely but possible if stdin contains '\0'
case 0: // no name was input
printf("No name\n");
break;
case 1: // name has a single part, like Superman
printf("Standard name: %s\n", first);
strcpy(last, first);
*first = '\0';
break;
case 2: // name has 2 parts
printf("Standard name: %c. %s\n", first[0], middle);
strcpy(last, middle);
*middle = '\0';
break;
case 3: // name has 3 or more parts
printf("Standard name: %c. %c. %s\n", first[0], middle[0], last);
break;
}
}
return 0;
}
```
Note that names can be a bit more versatile in real life: think of foreign names with multibyte characters, or even simply `William Henry Gates III`, also known as Bill Gates. The above code handles the latter, but not this one: `Éléonore de Provence`, the young wife of Henry III, King of England, 1223 - 1291.
|
You could use `isspace` and look for spaces in the name:
```
#include <stdio.h>
#include <ctype.h>

int main(void)
{
    char first[32], last[32];
    int count = 0;
    int i = 0;

    printf("Enter full name: ");
    scanf(" %31[^\n]", first);               /* read the whole line, at most 31 chars */

    for (i = 0; first[i] != '\0'; i++) {
        if (isspace((unsigned char)first[i]))
            count++;
    }
    if (count == 1) {                        /* exactly two parts: first and last name */
        int read = 0;
        int k = 0;
        for (int j = 0; j < i; j++) {
            if (isspace((unsigned char)first[j])) {
                read++;
                continue;                    /* skip the separating space itself */
            }
            if (read > 0) {
                last[k] = first[j];
                k++;
            }
        }
        last[k] = '\0';
        printf("Standard name: ");
        printf("%c. %s\n", first[0], last);
    }
    return 0;
}
```
Test
```
Enter full name: Carl Austin
Standard name: C. Austin
```
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
According to my (admittedly brief) reading of server/protocol.c and server/core.c, you cannot.
It always defaults to DefaultType (text/plain by default) if that header is not present.
|
If all you are trying to do is prep a very specific test case server-side, you can always "cheat" by pre-baking output in a text file and having netcat listen for connections on some port.
I use that trick when I want to be 100% sure of each byte that the server sends.
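For instance, a minimal sketch of that trick might look like the following; the port, the response bytes, and the exact `nc` flags are assumptions (some netcat variants want `nc -l -p 8080` instead of `nc -l 8080`):
```
# response.txt contains the exact bytes the fake "server" should send,
# e.g. a status line, a Location header, and deliberately NO Content-Type header.
printf 'HTTP/1.1 302 Redirect\r\nLocation: http://example.com/\r\nContent-Length: 0\r\n\r\n' > response.txt

# Serve it once on port 8080; every byte sent is exactly what is in the file.
nc -l 8080 < response.txt
```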
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
According to my (admittedly brief) reading of server/protocol.c and server/core.c, you cannot.
It always defaults to DefaultType (text/plain by default) if that header is not present.
|
[RemoveType](http://httpd.apache.org/docs/2.2/mod/mod_mime.html#removetype) will stop sending a content type with the resource.
Addendum
```
<Files defaulttypenone.txt>
DefaultType None
</Files>
<Files removetype.txt>
RemoveType .txt
</Files>
<Files forcetype.txt>
ForceType None
</Files>
```
I tested these three solutions on my own server and none of them worked. They all returned text/plain.
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
Even if we delete the Content-Type header from the response via the "Header unset Content-Type" directive, Apache regenerates the Content-Type header from another field of the internal request structure. Therefore, we first force that other field to a reserved value, in order to prevent the header regeneration, and then we remove the Content-Type via the "Header unset" directive.
For apache2.2:
```
Header set Content-Type none
Header unset Content-Type
```
For apache2.4:
```
Header set Content-Type ""
Header unset Content-Type
```
|
As I read [the Apache docs in question](http://httpd.apache.org/docs/2.2/mod/mod_headers.html), what you want may actually be
```
Header unset Content-Type
```
Hope this does it!
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
If all you are trying to do is prep a very specific test case server-side, you can always "cheat" by pre-baking output in a text file and having netcat listen for connections on some port.
I use that trick when I want to be 100% sure of each byte that the server sends.
|
As I read [the Apache docs in question](http://httpd.apache.org/docs/2.2/mod/mod_headers.html), what you want may actually be
```
Header unset Content-Type
```
Hope this does it!
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
According to my (admittedly brief) reading of server/protocol.c and server/core.c, you cannot.
It always defaults to DefaultType (text/plain by default) if that header is not present.
|
Even if we delete the Content-Type header from the response via the "Header unset Content-Type" directive, Apache regenerates the Content-Type header from another field of the internal request structure. Therefore, we first force that other field to a reserved value, in order to prevent the header regeneration, and then we remove the Content-Type via the "Header unset" directive.
For apache2.2:
```
Header set Content-Type none
Header unset Content-Type
```
For apache2.4:
```
Header set Content-Type ""
Header unset Content-Type
```
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
[RemoveType](http://httpd.apache.org/docs/2.2/mod/mod_mime.html#removetype) will stop sending a content type with the resource.
Addendum
```
<Files defaulttypenone.txt>
DefaultType None
</Files>
<Files removetype.txt>
RemoveType .txt
</Files>
<Files forcetype.txt>
ForceType None
</Files>
```
I tested these three solutions on my own server and none of them worked. They all returned text/plain.
|
You can try with the directive:
```
ResponseHeader unset Content-Type
```
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
[RemoveType](http://httpd.apache.org/docs/2.2/mod/mod_mime.html#removetype) will stop sending a content type with the resource.
Addendum
```
<Files defaulttypenone.txt>
DefaultType None
</Files>
<Files removetype.txt>
RemoveType .txt
</Files>
<Files forcetype.txt>
ForceType None
</Files>
```
I tested these three solutions on my own server and none of them worked. They all returned text/plain.
|
As I read [the Apache docs in question](http://httpd.apache.org/docs/2.2/mod/mod_headers.html), what you want may actually be
```
Header unset Content-Type
```
Hope this does it!
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
Even if we delete the Content-Type header from the response via the "Header unset Content-Type" directive, Apache regenerates the Content-Type header from another field of the internal request structure. Therefore, we first force that other field to a reserved value, in order to prevent the header regeneration, and then we remove the Content-Type via the "Header unset" directive.
For apache2.2:
```
Header set Content-Type none
Header unset Content-Type
```
For apache2.4:
```
Header set Content-Type ""
Header unset Content-Type
```
|
You can try with the directive:
```
ResponseHeader unset Content-Type
```
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
According to my (admittedly brief) reading of server/protocol.c and server/core.c, you cannot.
It always defaults to DefaultType (text/plain by default) if that header is not present.
|
You can try with the directive:
```
ResponseHeader unset Content-Type
```
|
2,428,563
|
I have a CGI script that prints the following on stdout:
```
print "Status: 302 Redirect\n";
print "Server: Apache-Coyote/1.1\n";
print "Location: $redirect\n";
print "Content-Length: 0\n";
print "Date: $date\n\n";
```
Where $redirect and $date are reasonable values. What Apache2 actually sends also includes a Content-Type: header (text/plain). I've commented out the DefaultType in the server configuration file.
I'm trying to debug a downstream problem that arises when no Content-Type: header is sent. So what magic incantation do I have to perform to *prevent* Apache2 from adding the content type header?
|
2010/03/11
|
[
"https://Stackoverflow.com/questions/2428563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1055/"
] |
According to my (admittedly brief) reading of server/protocol.c and server/core.c, you cannot.
It always defaults to DefaultType (text/plain by default) if that header is not present.
|
As I read [the Apache docs in question](http://httpd.apache.org/docs/2.2/mod/mod_headers.html), what you want may actually be
```
Header unset Content-Type
```
Hope this does it!
|
24,436,021
|
I'm making a simple server that listens for clients which is going to read the clients requests, do some calculations, send a response back to the client and the close again ASAP (somewhat similar to HTTP).
There might be many connections every second, so I want to make it as fast and efficient as possible.
So far, the best way I can think of doing this, is shown as an example below:
```
private static ManualResetEvent gate = new ManualResetEvent(false);
static async void ListenToClient(TcpListener listener)
{
Console.WriteLine("Waiting for connection");
TcpClient client = await listener.AcceptTcpClientAsync();
Console.WriteLine("Connection accepted & establised");
gate.Set(); //Unblocks the mainthread
Stream stream = client.GetStream();
byte[] requestBuffer = new byte[1024];
int size = await stream.ReadAsync(requestBuffer, 0, requestBuffer.Length);
//PSEUDO CODE: Do some calculations
byte[] responseBuffer = Encoding.ASCII.GetBytes("Ok");
await stream.WriteAsync(responseBuffer, 0, responseBuffer.Length);
stream.Close();
client.Close();
}
static void Main(string[] args)
{
TcpListener listener = new TcpListener(IPAddress.Any, 8888);
listener.Start();
while (true)
{
gate.Reset();
ListenToClient(listener);
gate.WaitOne(); //Blocks the main thread and waits until the gate.Set() is called
}
}
```
*Note: for this example and simplicity, I haven't made any error handling like try-catch and I know the response here will always be "Ok"*
The code here simply waits for a connection; when it reaches `await listener.AcceptTcpClientAsync()`, it jumps back to the while loop and waits until a connection is made and gate.Set() is called, so it can listen for new connections again. This allows multiple clients at the same time (especially if the calculations can take a long time).
But should I use stream.ReadAsync() or stream.Read() instead? I'm curious whether it even matters, because I'm already in an asynchronous function which will not block the main thread.
So my final questions are:
1. Is this the best/right way to accomplish this task (also by using ManualResetEvent class)?
2. Would there be any difference in this scenario to use async or non async operations when reading and writing to the stream? (because I'm not blocking the mainthread)
3. If it lags, and takes 1-2 seconds to send/receive the data, would it still matter to choose between async and nonasync operations?
---
**UPDATE FOR NEW IMPROVEMENTS**
Due to an answer, I have updated my code to this:
```
private static ManualResetEvent gate = new ManualResetEvent(false);
static async Task ListenToClient(TcpListener listener)
{
//Same Code
}
static void Main(string[] args)
{
TcpListener listener = new TcpListener(IPAddress.Any, 8888);
listener.Start();
while (true)
{
gate.Reset();
Task task = ListenToClient(listener);
task.ContinueWith((Task paramTask) =>
{
//Inspect the paramTask
});
gate.WaitOne(); //Blocks the main thread and waits until the gate.Set() is called
}
}
```
|
2014/06/26
|
[
"https://Stackoverflow.com/questions/24436021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2202062/"
] |
>
> Is this the best/right way to accomplish this task (also by using ManualResetEvent class)?
>
>
>
No. You start an async operation, then immediately wait for it. For some reason I often see this crazy dance. Just make it synchronous:
```
while (true) {
var clientSocket = Accept();
ProcessClientAsync(clientSocket);
}
```
So simple.
>
> Would there be any difference in this scenario to use async or non async operations when reading and writing to the stream?
>
>
>
Using async IO with sockets is good if you have very many clients. For a few dozens at a time you can just use sync IO with threads. Async IO is about not blocking threads (which each use 1MB of stack space).
If you decide to use async IO, `ProcessClientAsync` should be an async function like you have it now.
If you decide for sync IO, start `ProcessClientAsync` on a new thread to be able to process multiple clients simultaneously.
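For the sync-IO route, a minimal sketch could look like this (`ProcessClient` is a hypothetical synchronous counterpart of your handler, not part of your original code):
```
while (true)
{
    TcpClient clientSocket = listener.AcceptTcpClient();    // blocks until a client connects
    new Thread(() => ProcessClient(clientSocket)).Start();  // handle each client on its own thread
}
```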
>
> If it lags, and takes 1-2 seconds to send/receive the data, would it still matter to choose between async and nonasync operations?
>
>
>
As long as you process individual clients independently, you are fine. The choice between sync and async only comes into play at high scale (more than dozens of connections open at the same time).
It is a common mistake to overcomplicate things by going async without the need for it. Basically all tutorials make this mistake.
|
Right off the bat I see two common `async` mistakes:
```
async void
```
Don't do this. The only reason the compiler even *supports* `async void` is for handling existing event-driven interfaces. This isn't one of those, so here it's an anti-pattern. `async void` effectively results in losing any way of ever responding to that task or doing anything with it, such as handling an error.
Speaking of responding to tasks...
```
ListenToClient(listener);
```
You're spawning a task, but never examining its state. What will you do if there's an exception within that task? It's not caught anyway, it'll just be silently ignored. At the very least you should provide a top-level callback for the task once it's completed. Even something as simple as this:
```
ListenToClient(listener).ContinueWith(t =>
{
// t is the task. Examine it for errors, cancelations, etc.
// Respond to error conditions here.
});
```
|
70,476,055
|
This works perfectly
```py
def get_count(sentence):
return sum(1 for letter in sentence if letter in ('aeiou'))
```
But when I apply the in operator to an array like this, it fails
```py
# Incorrect answer for "aeiou": 0 should equal 5
def get_count(sentence):
return sum(1 for letter in sentence if letter in ['aeiou'])
```
|
2021/12/24
|
[
"https://Stackoverflow.com/questions/70476055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15358017/"
] |
In the first snippet, `('aeiou')` is a string surrounded by parentheses (which have no syntactic function in this case). The `in` is applied to the string, which is itself an iterable, and thus checks if `letter` is one of the characters in that string.
In the second snippet, `['aeoiu']` is a list with a single element, the string `'aeiou'`. The `in` operator applies to the list, and checks if `letter` is one of the elements in the list. It obviously isn't (since `letter` is a single character), and thus this condition always evaluates to `False`.
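A quick interactive check (illustrative) makes the difference visible:
```py
>>> 'a' in ('aeiou')   # membership test on a string checks its characters
True
>>> 'a' in ['aeiou']   # membership test on a list checks its whole elements
False
>>> 'aeiou' in ['aeiou']
True
```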
|
`('aeiou')` equals `'aeiou'`; brackets like this are redundant.
The second time, you are checking whether the characters from your `sentence` are in `['aeiou']`. This list contains a single string of 5 characters, so `letter in ['aeiou']` would never return `True`.
|
70,476,055
|
This works perfectly
```py
def get_count(sentence):
return sum(1 for letter in sentence if letter in ('aeiou'))
```
But when I apply the in operator to an array like this, it fails
```py
# Incorrect answer for "aeiou": 0 should equal 5
def get_count(sentence):
return sum(1 for letter in sentence if letter in ['aeiou'])
```
|
2021/12/24
|
[
"https://Stackoverflow.com/questions/70476055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15358017/"
] |
It's not so much the behavior of `in` that's different, as the syntax being unexpected for beginners. `()` is an empty tuple. But that's the only time that parentheses make a tuple. In all other cases, parentheses are needed for grouping, but it's the comma that makes a tuple.
* `('aeiou')` is not a tuple. It's a string with superfluous parentheses. `('aeiou',)` is a tuple, and so is `'aeiou',`.
* `['aeiou']` is unambiguously a list with one element.
Letters are length-one strings in python: there are no character objects as such.
* In the first case, you are searching for each letter in a string, which is a container of length-one strings, so you occasionally find one.
* In the second case, you are searching for each letter in a list, which is a container of not-length-one strings, so you never find a match.
Compare the second case to `list('aeiou')`. Instead of making a list of one element as `['aeiou']`, this creates a list containing all the letters in the input string.
Unrelated fun fact: `sum` defaults to a starting value of `0` (the `int`), and `bool` is a subclass of `int` that always equals zero or one. You can therefore rewrite your counter in the following ways:
```
sum(letter in 'aeiou' for letter in sentence)
sum(map('aeiou'.__contains__, sentence))
```
|
`('aeiou')` equals `'aeiou'`; brackets like this are redundant.
The second time, you are checking whether the characters from your `sentence` are in `['aeiou']`. This list contains a single string of 5 characters, so `letter in ['aeiou']` would never return `True`.
|
70,476,055
|
This works perfectly
```py
def get_count(sentence):
return sum(1 for letter in sentence if letter in ('aeiou'))
```
But when I apply the in operator to an array like this, it fails
```py
# Incorrect answer for "aeiou": 0 should equal 5
def get_count(sentence):
return sum(1 for letter in sentence if letter in ['aeiou'])
```
|
2021/12/24
|
[
"https://Stackoverflow.com/questions/70476055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15358017/"
] |
`('aeiou')` is a string. You can verify this with
```py
>>> x = ('aeiou')
>>> type(x)
<class 'str'>
```
`['aeiou']` on the other hand is a list containing one element which in turn is a string.
```py
>>> x = ['aeiou']
>>> type(x)
<class 'list'>
>>> len(x)
1
>>> type(x[0])
<class 'str'>
```
When you write `if letter in ('aeiou')`, here you are testing if the letter is a substring of the string `'aeiou'`. However, when you write `if letter in ['aeiou']`, you are testing if the letter is an element in the list the contains one element, the string `'aeiou'` which is always `False` for `letter` that is a single character.
|
`('aeiou')` equals `'aeiou'`; brackets like this are redundant.
The second time, you are checking whether the characters from your `sentence` are in `['aeiou']`. This list contains a single string of 5 characters, so `letter in ['aeiou']` would never return `True`.
|
64,087,730
|
I need to calculate a sum of `computed` properties that starts with `calculateSum` string.
I'm not sure how to do that since I can't get their names using `this.computed`
So my method/attempt is:
```
getSubTotal(){
var computed_names = [];
var computed_names_filtered = computed_names.filter(x => {return x.startsWith('calculateSum')})
return _.sum(computed_names_filtered.map(x => eval(x+'()'))
}
```
Do you know how to do that?
|
2020/09/27
|
[
"https://Stackoverflow.com/questions/64087730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2607447/"
] |
I assume the question is loosely-worded in the sense that "random" is not meant in a probability sense; that is, the intent is not to select a set of intervals (that total a given number of hours in length) with a mechanism that ensures all possible sets of such intervals have an equal likelihood of being selected. Rather, I understand that a set of intervals is to be chosen (e.g., for testing purposes) in a way that incorporates elements of randomness.
I have assumed the intervals are to be non-overlapping and the number of intervals is to be specified. I don't understand what "with ideally weekdays" means so I have disregarded that.
---
The heart of the approach I will propose is the following method.
```
def rnd_lengths(tot_secs, target_nbr)
max_secs = 2 * tot_secs/target_nbr - 1
arr = []
loop do
break(arr) if tot_secs.zero?
l = [(0.5 + max_secs * rand).round, tot_secs].min
arr << l
tot_secs -= l
end
end
```
The method generates an array of integers (lengths of intervals), measured in seconds, ideally having `target_nbr` elements. `tot_secs` is the required combined length of the "random" intervals (e.g., 150\*3600).
Each element of the array is drawn randomly from a uniform distribution that ranges from zero to `max_secs` (to be computed). This is done sequentially until `tot_secs` is reached. Should the last random value cause the total to exceed `tot_secs`, it is reduced to make the total equal `tot_secs`.
Suppose `tot_secs` equals `100` and we wish to generate `4` random intervals (`target_nbr = 4`). That means the average length of the intervals would be `25`. As we are using a uniform distribution having an average of `(1 + max_secs)/2`, we may derive the value of `max_secs` from the expression
```
target_nbr * (1 + max_secs)/2 = tot_secs
```
which is
```
max_secs = 2 * tot_secs/target_nbr - 1
```
the first line of the method. For the example I mentioned, this would be
```
max_secs = 2 * 100/4 - 1
#=> 49
```
Let's try it.
```
rnd_lengths(100, 4)
#=> [49, 36, 15]
```
As you see the array that is returned sums to `100`, as required, but it contains only `3` elements. That's why I named the argument `target_nbr`, as there is no assurance the array returned will have that number of elements. What to do? Try again!
```
rnd_lengths(100, 4)
#=> [14, 17, 26, 37, 6]
```
Still not `4` elements, so keep trying:
```
rnd_lengths(100, 4)
#=> [11, 37, 39, 13]
```
Success! It may take a few tries to get the correct number of elements, but for parameters likely to be used, and the nature of the probability distribution employed, I wouldn't expect that to be a problem.
Let's put this in a method.
```
def rdm_intervals(tot_secs, nbr_intervals)
loop do
arr = rnd_lengths(tot_secs, nbr_intervals)
break(arr) if arr.size == nbr_intervals
end
end
intervals = rdm_intervals(100, 4)
#=> [29, 26, 7, 38]
```
---
We can compute random gaps between intervals in the same way. Suppose the intervals fall within a range of 175 seconds (the number of seconds between the start time and end time). Then:
```
gaps = rdm_intervals(175-100, 5)
#=> [26, 5, 19, 4, 21]
```
As seen, the gaps sum to `75`, as required. We can disregard the last element.
---
We can now form the intervals. The first interval begins at `26` seconds and ends at `26+29 #=> 55` seconds. The second interval begins at `55+5 #=> 60` seconds and ends at `60+26 #=> 86` seconds, and so on. We therefore find the intervals (each in ranges of seconds from zero) to be:
```
[26..55, 60..86, 105..112, 116..154]
```
Note that `175 - 154 = 21`, the last element of `gaps`.
---
If one is uncomfortable with the fact that the last elements of `intervals` and `gaps` are generally constrained in size, one could of course randomly reposition those elements within their respective arrays.
One might not care if the number of intervals is exactly `target_nbr`. It would be simpler and faster to just use the first array of interval lengths produced. That's fine, but we still need the above methods to compute the random gaps, as their number must equal the number of intervals plus one:
```
gaps = rdm_intervals(175-100, intervals.size + 1)
```
---
We can now use these two methods to construct a method that will return the desired result. The argument `tot_secs` of this method equals total number of seconds spanned by the array intervals returned (e.g., `3600 * 150`). The method returns an array containing `nbr_intervals` non-overlapping ranges of `Time` objects that fall between the given start and end dates.
```
require 'date'
```
```
def construct_intervals(start_date_str, end_date_str, tot_secs, nbr_intervals)
start_time = Date.strptime(start_date_str, '%Y-%m-%d').to_time
secs_in_period = Date.strptime(end_date_str, '%Y-%m-%d').to_time - start_time
intervals = rdm_intervals(tot_secs, nbr_intervals)
gaps = rdm_intervals(secs_in_period - tot_secs, nbr_intervals+1)
nbr_intervals.times.with_object([]) do |_,arr|
start_time += gaps.shift
end_time = start_time + intervals.shift
arr << (start_time..end_time)
start_time = end_time
end
end
```
See [Date::strptime](https://ruby-doc.org/stdlib-2.7.0/libdoc/date/rdoc/Date.html#method-c-strptime).
---
Let's try an example.
```
start_date_str = '2020-01-01'
end_date_str = '2020-01-31'
tot_secs = 3600*150
#=> 540000
```
```
construct_intervals(start_date_str, end_date_str, tot_secs, 4)
#=> [2020-01-06 18:05:04 -0800..2020-01-09 03:48:00 -0800,
# 2020-01-09 06:44:16 -0800..2020-01-11 23:33:44 -0800,
# 2020-01-20 20:30:21 -0800..2020-01-21 17:27:44 -0800,
# 2020-01-27 19:08:38 -0800..2020-01-28 01:38:51 -0800]
```
```
construct_intervals(start_date_str, end_date_str, tot_secs, 8)
#=> [2020-01-03 18:43:36 -0800..2020-01-04 10:49:14 -0800,
# 2020-01-08 07:55:44 -0800..2020-01-08 08:17:18 -0800,
# 2020-01-11 00:54:36 -0800..2020-01-11 23:00:53 -0800,
# 2020-01-14 05:20:14 -0800..2020-01-14 22:48:45 -0800,
# 2020-01-16 18:28:28 -0800..2020-01-17 22:50:24 -0800,
# 2020-01-22 02:59:31 -0800..2020-01-22 22:33:08 -0800,
# 2020-01-23 00:36:59 -0800..2020-01-24 12:15:37 -0800,
# 2020-01-29 11:22:21 -0800..2020-01-29 21:46:10 -0800]
```
See [Date::strptime](https://ruby-doc.org/stdlib-2.7.0/libdoc/date/rdoc/Date.html#method-c-strptime)
|
```
START -xxx----xxx--x----xxxxx---xx--xx---xx-xx-x-xxx-- END
```
We need to fill a timespan with alternating periods of ON and OFF. This can be
denoted by a list of timestamps. Let's say that the period always starts with
an OFF period for simplicity's sake.
From the start/end of the timespan and the total seconds in ON state, we
gather useful facts:
* the timespan's total size in seconds `total_seconds`
* the second totals of both the ON (`on_total_seconds`) and the OFF (`off_total_seconds`) periods
Once we know these, a workable algorithm looks more or less like this - pardon
the functions without implementation:
```rb
# this can be a parameter as well
MIN_PERIODS = 10
MAX_PERIODS = 100
def fill_periods(start_date, end_date, on_total_seconds = 150*60*60)
total_seconds = get_total_seconds(start_date, end_date)
off_total_seconds = total_seconds - on_total_seconds
# establish two buckets to pull from alternately in populating our array of durations
on_bucket = on_total_seconds
off_bucket = off_total_seconds
result = []
# populate `result` with durations in seconds. `result` will sum to `total_seconds`
while on_bucket > 0 || off_bucket > 0 do
off_slice = rand(off_total_seconds / MAX_PERIODS / 2, off_total_seconds / MIN_PERIODS / 2).to_i
off_bucket -= [off_slice, off_bucket].min
on_slice = rand(on_total_seconds / MAX_PERIODS / 2, on_total_seconds / MIN_PERIODS / 2).to_i
on_bucket -= [on_slice, on_bucket].min
# randomness being random, we're going to hit 0 in one bucket before the
# other. when this happens, just add this (off, on) pair to the last one.
if off_slice == 0 || on_slice == 0
last_off, last_on = result.pop(2)
result << last_off + off_slice << last_on + on_slice
else
result << off_slice << on_slice
end
end
# build up an array of datetimes by progressively adding seconds to the last timestamp.
datetimes = result.each_with_object([start_date]) do |period, memo|
memo << add_seconds(memo.last, period)
end
# we want a list of datetime pairs denoting ON periods. since we know our
# timespan starts with OFF, we start our list of pairs with the second element.
datetimes.slice(1..-1).each_slice(2).to_a
end
```
|
58,608,443
|
Is it possible to select only one of the SVG paths created from a geoJSON file using D3? In my case the paths are areas in a map. The user can choose an area from a dropdown list. I want to use the selected value from this list to select the path with the matching attribute and color it differently. The name of the area is one of the attributes in the geoJSON file.
Can I qualify the `d3.select("path")` further by adding some kind of filter?
This is what the code looks like ...
```
d3.json(polygonFile, function(json) {
for (var g = 0; g < json.features.length; g++) {
if(json.features[g].properties.NAME == selectedAreaName) {
d3.select("path") //THIS IS WHERE I NEED TO ADD THE FILTER ...
.transition()
.duration(600)
.style("filter", "brightness(0.7)")
}
}
});
```
|
2019/10/29
|
[
"https://Stackoverflow.com/questions/58608443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12291521/"
] |
I think it's @keyframes not @-keyframes
<https://developer.mozilla.org/en-US/docs/Web/CSS/@keyframes>
|
I used just @keyframes + autoprefixer for different browsers:
```
@keyframes blinker {
50% {
opacity: 0;
}
}
```
More on auto-prefixer: <https://www.npmjs.com/package/gulp-autoprefixer>
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
Version with numpy:
```
import time
import numpy as np
import pyaudio
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100 # sampling rate, Hz, must be integer
duration = 5.0 # in seconds, may be float
f = 440.0 # sine frequency, Hz, may be float
# generate samples, note conversion to float32 array
samples = (np.sin(2 * np.pi * np.arange(fs * duration) * f / fs)).astype(np.float32)
# per @yahweh comment explicitly convert to bytes sequence
output_bytes = (volume * samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
```
Version without numpy:
```
import array
import math
import time
import pyaudio
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100 # sampling rate, Hz, must be integer
duration = 5.0 # in seconds, may be float
f = 440.0 # sine frequency, Hz, may be float
# generate samples, note conversion to float32 array
num_samples = int(fs * duration)
samples = [volume * math.sin(2 * math.pi * k * f / fs) for k in range(0, num_samples)]
# per @yahweh comment explicitly convert to bytes sequence
output_bytes = array.array('f', samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
```
|
Today for Python 3.5+ the best way is to install the packages recommended by the developer.
<http://people.csail.mit.edu/hubert/pyaudio/>
For Debian do
```
sudo apt-get install python3-all-dev portaudio19-dev
```
before trying to install pyaudio
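With those system packages in place, pyaudio itself can then typically be installed with pip (assuming Python 3):
```
pip3 install pyaudio
```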
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
One of the more consistent and easy-to-install ways to deal with sound in Python is the Pygame multimedia libraries.
I'd recommend using it - there is the pygame.sndarray submodule that allows you to manipulate numbers in a data vector so that they become a high-level sound object that can be played in the pygame.mixer module.
The documentation on the pygame.org site should be enough for using the sndarray module.
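A minimal sketch of that approach (assuming pygame and numpy are installed; the parameter values are only illustrative):
```
import numpy as np
import pygame

fs = 44100                          # sampling rate, Hz
pygame.mixer.pre_init(fs, -16, 1)   # 16-bit signed samples, mono
pygame.init()

f = 440.0        # sine frequency, Hz
duration = 2.0   # seconds
volume = 0.5     # relative volume, 0.0 .. 1.0

t = np.arange(int(fs * duration))
samples = (volume * 32767 * np.sin(2 * np.pi * f * t / fs)).astype(np.int16)

sound = pygame.sndarray.make_sound(samples)   # data vector -> Sound object
sound.play()
pygame.time.wait(int(duration * 1000))        # block until playback finishes
```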
|
In the [bregman lab toolbox](http://bregman.dartmouth.edu/~mcasey/bregman/bregman_tutorials/) you have a set of functions that do exactly what you want. This Python module is a little bit buggy, but you can adapt its code to get your own functions.
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
One of the more consistent and easy-to-install ways to deal with sound in Python is the Pygame multimedia libraries.
I'd recommend using it - there is the pygame.sndarray submodule that allows you to manipulate numbers in a data vector so that they become a high-level sound object that can be played in the pygame.mixer module.
The documentation on the pygame.org site should be enough for using the sndarray module.
|
The script from ivan\_onys produces a signal that is four times shorter than intended. If a TypeError is returned when volume is a float, try adding .tobytes() to the following line instead.
```
stream.write((volume*samples).tobytes())
```
@mm\_ float32 = 32 bits, and 8 bits = 1 byte, so float32 = 4 bytes. When samples are passed to stream.write as float32, byte count (duration) is divided by 4. Writing samples back .tobytes() corrects for quartering the sample count when writing to float32.
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
Version with numpy:
```
import time
import numpy as np
import pyaudio
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100 # sampling rate, Hz, must be integer
duration = 5.0 # in seconds, may be float
f = 440.0 # sine frequency, Hz, may be float
# generate samples, note conversion to float32 array
samples = (np.sin(2 * np.pi * np.arange(fs * duration) * f / fs)).astype(np.float32)
# per @yahweh comment explicitly convert to bytes sequence
output_bytes = (volume * samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
```
Version without numpy:
```
import array
import math
import time
import pyaudio
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100 # sampling rate, Hz, must be integer
duration = 5.0 # in seconds, may be float
f = 440.0 # sine frequency, Hz, may be float
# generate samples, note conversion to float32 array
num_samples = int(fs * duration)
samples = [volume * math.sin(2 * math.pi * k * f / fs) for k in range(0, num_samples)]
# per @yahweh comment explicitly convert to bytes sequence
output_bytes = array.array('f', samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
```
|
In the [bregman lab toolbox](http://bregman.dartmouth.edu/~mcasey/bregman/bregman_tutorials/) you have a set of functions that do exactly what you want. This Python module is a little bit buggy, but you can adapt its code to get your own functions.
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
ivan-onys gave an excellent answer, but there is a little addition to it:
this script will produce a sound 4 times shorter than expected, because the PyAudio write method needs string data of float32; when you pass a numpy array to this method, it converts the whole array as a single entity to a string. Therefore you have to convert the data in the numpy array to the byte sequence yourself, like this:
```
samples = (np.sin(2*np.pi*np.arange(fs*duration)*f/fs)).astype(np.float32).tobytes()
```
and you have to change this line as well:
```
stream.write(samples)
```
|
Today for Python 3.5+ the best way is to install the packages recommended by the developer.
<http://people.csail.mit.edu/hubert/pyaudio/>
For Debian do
```
sudo apt-get install python3-all-dev portaudio19-dev
```
before trying to install pyaudio
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
ivan-onys gave an excellent answer, but there is a little addition to it:
this script will produce a sound 4 times shorter than expected, because the PyAudio write method needs string data of float32; when you pass a numpy array to this method, it converts the whole array as a single entity to a string. Therefore you have to convert the data in the numpy array to the byte sequence yourself, like this:
```
samples = (np.sin(2*np.pi*np.arange(fs*duration)*f/fs)).astype(np.float32).tobytes()
```
and you have to change this line as well:
```
stream.write(samples)
```
|
In the [bregman lab toolbox](http://bregman.dartmouth.edu/~mcasey/bregman/bregman_tutorials/) you have a set of functions that do exactly what you want. This Python module is a little bit buggy, but you can adapt its code to get your own functions.
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
Version with numpy:
```
import time
import numpy as np
import pyaudio
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100 # sampling rate, Hz, must be integer
duration = 5.0 # in seconds, may be float
f = 440.0 # sine frequency, Hz, may be float
# generate samples, note conversion to float32 array
samples = (np.sin(2 * np.pi * np.arange(fs * duration) * f / fs)).astype(np.float32)
# per @yahweh comment explicitly convert to bytes sequence
output_bytes = (volume * samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
```
Version without numpy:
```
import array
import math
import time
import pyaudio
p = pyaudio.PyAudio()
volume = 0.5 # range [0.0, 1.0]
fs = 44100 # sampling rate, Hz, must be integer
duration = 5.0 # in seconds, may be float
f = 440.0 # sine frequency, Hz, may be float
# generate samples, note conversion to float32 array
num_samples = int(fs * duration)
samples = [volume * math.sin(2 * math.pi * k * f / fs) for k in range(0, num_samples)]
# per @yahweh comment explicitly convert to bytes sequence
output_bytes = array.array('f', samples).tobytes()
# for paFloat32 sample values must be in range [-1.0, 1.0]
stream = p.open(format=pyaudio.paFloat32,
channels=1,
rate=fs,
output=True)
# play. May repeat with different volume values (if done interactively)
start_time = time.time()
stream.write(output_bytes)
print("Played sound for {:.2f} seconds".format(time.time() - start_time))
stream.stop_stream()
stream.close()
p.terminate()
```
|
ivan-onys gave an excellent answer, but there is a little addition to it:
this script will produce a sound 4 times shorter than expected, because the PyAudio write method needs string data of float32; when you pass a numpy array to this method, it converts the whole array as a single entity to a string. Therefore you have to convert the data in the numpy array to the byte sequence yourself, like this:
```
samples = (np.sin(2*np.pi*np.arange(fs*duration)*f/fs)).astype(np.float32).tobytes()
```
and you have to change this line as well:
```
stream.write(samples)
```
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
ivan-onys gave an excellent answer, but there is a little addition to it:
this script will produce a sound 4 times shorter than expected, because the PyAudio write method needs string data of float32; when you pass a numpy array to this method, it converts the whole array as a single entity to a string. Therefore you have to convert the data in the numpy array to the byte sequence yourself, like this:
```
samples = (np.sin(2*np.pi*np.arange(fs*duration)*f/fs)).astype(np.float32).tobytes()
```
and you have to change this line as well:
```
stream.write(samples)
```
|
The script from ivan\_onys produces a signal that is four times shorter than intended. If a TypeError is returned when volume is a float, try adding .tobytes() to the following line instead.
```
stream.write((volume*samples).tobytes())
```
@mm\_ float32 = 32 bits, and 8 bits = 1 byte, so float32 = 4 bytes. When samples are passed to stream.write as float32, byte count (duration) is divided by 4. Writing samples back .tobytes() corrects for quartering the sample count when writing to float32.
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
ivan-onys gave an excellent answer, but there is a little addition to it:
this script will produce a sound 4 times shorter than expected, because the PyAudio write method needs string data of float32; when you pass a numpy array to this method, it converts the whole array as a single entity to a string. Therefore you have to convert the data in the numpy array to the byte sequence yourself, like this:
```
samples = (np.sin(2*np.pi*np.arange(fs*duration)*f/fs)).astype(np.float32).tobytes()
```
and you have to change this line as well:
```
stream.write(samples)
```
|
One of the more consistent and easy-to-install ways to deal with sound in Python is the Pygame multimedia libraries.
I'd recommend using it - there is the pygame.sndarray submodule that allows you to manipulate numbers in a data vector so that they become a high-level sound object that can be played in the pygame.mixer module.
The documentation on the pygame.org site should be enough for using the sndarray module.
|
8,299,303
|
I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this?
|
2011/11/28
|
[
"https://Stackoverflow.com/questions/8299303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/180783/"
] |
Today for Python 3.5+ the best way is to install the packages recommended by the developer.
<http://people.csail.mit.edu/hubert/pyaudio/>
For Debian do
```
sudo apt-get install python3-all-dev portaudio19-dev
```
before trying to install pyaudio
|
The script from ivan\_onys produces a signal that is four times shorter than intended. If a TypeError is returned when volume is a float, try adding .tobytes() to the following line instead.
```
stream.write((volume*samples).tobytes())
```
@mm\_ float32 = 32 bits, and 8 bits = 1 byte, so float32 = 4 bytes. When samples are passed to stream.write as float32, byte count (duration) is divided by 4. Writing samples back .tobytes() corrects for quartering the sample count when writing to float32.
|
67,479,873
|
I am new to ReactJS, the following error reported in console:
`Warning: Functions are not valid as a React child. This may happen if you return a Component instead of <Component /> from render. Or maybe you meant to call this function rather than return it.`
What is wrong?
```
function Test({children}) {
return (
<div>
{children}
</div>
);
}
export default function App() {
return (
<div className="App">
<Test> {() => (<h1>Title</h1>)}</Test>
</div>
);
}
```
|
2021/05/11
|
[
"https://Stackoverflow.com/questions/67479873",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14031054/"
] |
Part of the render props pattern is that the child component calls the children as a function in order to render it.
If you expect to have either normal `ReactNode`s or a function, you'll need to check to see if the children is a function to determine how to use it.
If you expect children will always be a function you can just call it without the type check first, though there will be an error if you pass in something that isn't callable.
```js
function Test({children}) {
return (
<div>
{typeof children==='function'? children() : children}
</div>
);
}
function App() {
return (
<div className="App">
<Test>{() => (<h1>Title</h1>)}</Test>
</div>
);
}
ReactDOM.render(<App />, document.getElementById('root'))
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script>
<div id="root" />
```
In addition, you need to make sure there's no extra whitespace before the curly braces to set up the render props function, otherwise it is interpreted as ['', function(){}] which won't work:
```
<Test> {() => (<h1>Title</h1>)}</Test>
```
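For contrast, the version without the leading space (as used in the working snippet above) is:
```
<Test>{() => (<h1>Title</h1>)}</Test>
```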
|
```
export default function App() {
return (
<div className="App">
<Test><h1>Title</h1></Test> // Change hehre
</div>
);
}
```
|
48,180,831
|
In Html:
```
<form [formGroup]="myform">
<table>
<tr *ngFor="let item of items">
<td>
<input type="text" [(ngModel)]="item.name" formControlName="item.name"/>
</td>
</tr>
</table>
<button (click)="addRow()">ADD Row </button>
</form>
```
Initially I have one data so I bind it in the text box. When I click Add button, I need to add empty text box row.
In component.ts
```
ngOninit(){
this.myform= formBuilder.group([
item.name : new FormControl('',Validators.Required);
]);
}
```
I have no idea how to use formArray in my scenario.
|
2018/01/10
|
[
"https://Stackoverflow.com/questions/48180831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295908/"
] |
You maybe forgot `f(S)` for previous calculation. Try it before using your command:
```
1> f(S).
ok
2> S = lists:concat(A) ++ " " ++ [254,874] ++ "\n".
```
Moreover, you can use `$[` or `$]` to indicate `"[" "]"` in ASCII
```
3> $[.
91
4> $].
93
5> S = lists:concat(A) ++ " " ++ [91,254,874,93] ++ "\n".
```
|
found the answer
```
A = [254,876].
lists:flatten(io_lib:format("~p",[A])).
```
this gives exact result
"[254,876]"
|
48,180,831
|
In Html:
```
<form [formGroup]="myform">
<table>
<tr *ngFor="let item of items">
<td>
<input type="text" [(ngModel)]="item.name" formControlName="item.name"/>
</td>
</tr>
</table>
<button (click)="addRow()">ADD Row </button>
</form>
```
Initially I have one data so I bind it in the text box. When I click Add button, I need to add empty text box row.
In component.ts
```
ngOninit(){
this.myform= formBuilder.group([
item.name : new FormControl('',Validators.Required);
]);
}
```
I have no idea how to use formArray in my scenario.
|
2018/01/10
|
[
"https://Stackoverflow.com/questions/48180831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295908/"
] |
You maybe forgot `f(S)` for previous calculation. Try it before using your command:
```
1> f(S).
ok
2> S = lists:concat(A) ++ " " ++ [254,874] ++ "\n".
```
Moreover, you can use `$[` or `$]` to indicate `"[" "]"` in ASCII
```
3> $[.
91
4> $].
93
5> S = lists:concat(A) ++ " " ++ [91,254,874,93] ++ "\n".
```
|
To convert list of integers to string
for your case I would have done:
```
[A, B] = [254,876],
C = "[" ++ integer_to_list(A) ++ "," ++ integer_to_list(B) ++ "]".
```
for a more generic case:
```
-module(l2s).
-compile(export_all).
list_to_string([H|List]) ->
list_to_string(List, "[" ++ integer_to_list(H)).
list_to_string([], String) -> String ++ "]";
list_to_string([H | List], String) ->
list_to_string(List, String ++ "," ++ integer_to_list(H)).
```
Test:
```
Eshell V7.3 (abort with ^G)
1> A = [1,2,3,4,5].
[1,2,3,4,5]
2> l2s:list_to_string(A).
"[1,2,3,4,5]"
```
|
48,180,831
|
In Html:
```
<form [formGroup]="myform">
<table>
<tr *ngFor="let item of items">
<td>
<input type="text" [(ngModel)]="item.name" formControlName="item.name"/>
</td>
</tr>
</table>
<button (click)="addRow()">ADD Row </button>
</form>
```
Initially I have one data so I bind it in the text box. When I click Add button, I need to add empty text box row.
In component.ts
```
ngOninit(){
this.myform= formBuilder.group([
item.name : new FormControl('',Validators.Required);
]);
}
```
I have no idea how to use formArray in my scenario.
|
2018/01/10
|
[
"https://Stackoverflow.com/questions/48180831",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8295908/"
] |
You maybe forgot `f(S)` for previous calculation. Try it before using your command:
```
1> f(S).
ok
2> S = lists:concat(A) ++ " " ++ [254,874] ++ "\n".
```
Moreover, you can use `$[` or `$]` to indicate `"[" "]"` in ASCII
```
3> $[.
91
4> $].
93
5> S = lists:concat(A) ++ " " ++ [91,254,874,93] ++ "\n".
```
|
```
"["++lists:concat(lists:join(",",A))++"]".
```
```
"[1,2,3,4]"
```
|
42,099,974
|
I found [this solution](https://productforums.google.com/forum/#!topic/docs/VShFogMNDyQ) but am struggling to get it to work on my sheet.
The user who submitted that question had 3 header rows and wanted the script to only work on row 4 and down. I have 1 header, and as such need the script to work on row 2 and down.
I've got it leaving row 1 alone - but it ONLY hides rows 2 and 3. I can't figure out where I'm going wrong.
```
function onOpen() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var menuItems=[{name: 'HideRows', functionName: 'hideRows'}];
ss.addMenu('Hide Rows', menuItems);
};
function hideRows() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var s = ss.getSheetByName("Responses");
var v = s.getRange("B:B").getValues();
var today = new Date();
var m = today.getMonth();
for(var i=3;i<v.length;i++)
if(v[i][0]=="" || v[i][0].getMonth()>=m) break;
if(i>1) s.hideRows(2,i-1)
};
```
ETA: Here's a link to my sheet/script: <https://docs.google.com/spreadsheets/d/1PkB1_hlJoI-iFYTAN8to_ES9R8QyUxEgPsWtSTUmj8U/edit?usp=sharing>
|
2017/02/07
|
[
"https://Stackoverflow.com/questions/42099974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3869183/"
] |
That is correct, you only have access to basic `NSManagedObject`s during migration.
>
> **Three-Stage Migration**
>
>
> The migration process itself is in three stages. It uses a copy of the source and destination models in which the validation rules are disabled and the class of all entities is changed to NSManagedObject.
>
>
>
From: [Core Data Model Versioning and Data Migration Guide](https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/CoreDataVersioning/Articles/vmMigrationProcess.html)
|
[Dave's answer](https://stackoverflow.com/a/42104283/2547229) clarified that during a migration, Core Data objects are only available as `NSManagedObject` instances. You don't get to use their entity classes.
Worse, if you're using a tool like [mogenerator](https://stackoverflow.com/a/42104283/2547229), then any handy logic that you've extended the entity classes with is inaccessible.
Poor solutions
==============
Working with an `NSManagedObject` directly feels dangerous to me. Using `[managedObject valueForKey:@"someKey"]` is verbose, but worse, there's no compiler checking that you've got your key name correct, so you might be asking for something `managedObject` doesn't have. There's also no compiler checking of the returned type either – it could be anything that you can put in to a managed object.
Slightly better is `[managedObject valueForKey: NSStringFromSelector(@selector(someKey))]`, which is safer, but horribly verbose and awkward, and still isn't *that* safe – lots of things might implement the method `someKey`.
You might also declare your keys as literals:
```
NSString *const someKey = @"someKey";
[managedObject valueForKey: someKey];
```
Again – this is slightly safer, less error prone and verbose, but you've got no guarantee still that *this* managed object has `someKey`.
None of these approaches will give us access to custom logic, either.
Better solution
===============
What I did instead of this was to define a `protocol` for the properties that *I know* my managed object has. Here's are examples for entities `Choice` and `ChoiceType`.
```
@protocol ChoiceProtocol
- (NSSet<id <ChoiceTypeProtocol> > *)choiceTypes;
- (NSNumber *)selected;
- (NSNumber *)order;
@end
@protocol ChoiceTypeProtocol
- (NSNumber *)selected;
- (NSString *)name;
- (NSString *)textCustom;
- (NSNumber *)order;
@end
```
Now in my migration code, instead of having a custom migration function similar to:
```
- (NSString *)migratedRepresentationOfChoice:(NSManagedObject *)choice;
```
I have:
```
- (NSString *)migratedRepresentationOfChoice:(id <ChoiceProtocol>)choice;
```
In the body of this function I can use `choice` exactly as I would any regular object. I get code completion as I type, I get the right syntax highlighting, the compiler will complain if I call a non existent method.
I also get *return type* checking, so the compiler will complain if I use the `NSNumber` property `selected` as an `NSString` or `NSSet`. And it'll also be helpful and suggest `NSNumber` methods as completions while I type.
How about the logic?
====================
This approach doesn't provide the logic you've added to entity classes.
In Swift you might be able to use a protocol extension to achieve that.
I was retiring these entities (hence the migration), so I moved the entity logic functions *I needed* into helper functions of my custom migration. As an example, `choice.orderedChoiceTypes` becomes `[self choiceOrderedChoiceTypes:choice]`.
In future I will probably avoid ever adding logic to `NSManagedObject` entities. I think it's probably a better plan to put any such logic in domain objects that you build from your managed objects. Further, I will probably avoid defining entity classes and instead *only* access `NSManagedObject` instances through a protocol as during the migration. It seems clean, simple, removes magic, and has benefits for testing – not just for migrations.
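As a small illustration of that protocol-only access (a sketch; `managedObject` stands for a plain `NSManagedObject` obtained during the migration, and the names follow the protocols above):
```
// Sketch: no entity subclass involved - only the protocol view of the object
id <ChoiceProtocol> choice = (id <ChoiceProtocol>)managedObject;
NSString *migrated = [self migratedRepresentationOfChoice:choice];
```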
|
13,064,882
|
I have a button click event that does some server-side work and finally opens a new tab. But when that happens, the parent tab in Internet Explorer becomes very bizarre. Then I found that the Document Mode is changed to Quirks mode, which makes the whole website move to the left instead of the centre, and I lose some styling as well. I tried the code below but it still happens.
```
<meta http-equiv="X-UA-Compatible" content="IE=edge"/>
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
```
Can anyone tell me why that is happening and how to solve this? Thanks.
My code sample.
```
Sub btnTeachersView_Click
........ server side code ........
Response.Write("<script>")
Response.Write("window.open('../abc.aspx','_blank')")
Response.Write("</script>")
End Sub
```
NOTE: It needs to be on the server side because there are other jobs that need to be done first, and my client wants it in a new tab, so I can't just redirect to that page.
|
2012/10/25
|
[
"https://Stackoverflow.com/questions/13064882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1029608/"
] |
Try to wrap your `<script></script>` tags in the correct `<html></html>` tags with a doctype etc. I think the lack of a proper DTD might be the problem that triggers Quirks mode.
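For example, the emitted markup could look something like this (the structure is only illustrative):
```
<!DOCTYPE html>
<html>
<head><title></title></head>
<body>
<script>window.open('../abc.aspx', '_blank');</script>
</body>
</html>
```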
|
I had this exact same problem. The only workaround I could find was to navigate to a blank location then set the value to location that you desire. Like so:
```
var newWindow = window.open('', '_blank');
newWindow.location = '../abc.aspx';
```
And in VB:
```
Response.Write("<script>")
Response.Write("var newWindow = window.open('','_blank');")
Response.Write("newWindow.location = '../abc.aspx';")
Response.Write("</script>")
```
|
72,875,121
|
I am pretty new to JS and I am looking to learn it. Recently I came across a library called bounce.js, which is an animation library. It requires NPM to install, but why? I don't want to use NPM (or any package manager), and they haven't provided a min.js file to import directly in a script tag. Why? Similarly, Tailwind requires NPM. And since NPM is required, it means I need Vercel to deploy and all that stuff.
2) As I use Django, I don't know how to install NPM modules in my templates.
Please help me clear up this mess.
|
2022/07/05
|
[
"https://Stackoverflow.com/questions/72875121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19489880/"
] |
To get an image of a pivot table, you need to have a line like the below:
```
workbook.getWorksheet("Sheet1").getPivotTable("My Pivot Table").getLayout().getRange().getImage();
```
Basically, you can specify the pivot table that you want using getPivotTable(id) and then you need to get the layout and the range of that layout. Then finally, you can use the getImage method. Hope that helps!
|
Your conditional formatting rule highlights the values which are equal to zero. You can just loop through the values of the range (K:R), see if they're zero, and if so, set the cells to the color you used in the conditional formatting. If you do it this way, the colors should be maintained when you create an image. You can see code to do that below:
```
function main(workbook: ExcelScript.Workbook) {
let sh: ExcelScript.Worksheet = workbook.getWorksheet("Sheet1")
let range: ExcelScript.Range = sh.getRange("K:R")
let vals: string[][] = range.getValues() as string[][]
let rowCount:number = range.getRowCount()
let colCount:number = range.getColumnCount()
for (let i = 0; i < rowCount; i++){
for (let j = 0; j < colCount; j++){
if (vals[i][j] as unknown === 0) {
let rang: ExcelScript.Range = sh.getRangeByIndexes(i,j,1,1)
rang.getFormat().getFont().setColor("#9C0006");
rang.getFormat().getFill().setColor("#FFC7CE");
}
}
}
}
```
|
18,765,845
|
I'm new to Angular. I want to get the result of a SQL request from a PHP script into Angular, but I see only a big bulleted list and I don't know what the problem is.
My html code:
```
.
.
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.8/angular.js">
</script>
<script src="../js/proc_list.js"></script>
.
.
.
<div align="center" id="prod_list" ng-controller="proc_list">
<ul>
<li ng-repeat="processor in proc">
{{processor.manufacturer}}
<p>{{processor.description}}</p>
{{processor.price}}
</li>
</ul>
</div>
```
Controller code:
```
function proc_list($scope, $http){
$http.post('../phps/get_proc_list.php').success(function(data){
$scope.proc = data;
});
}
```
PHP code:
```
$received_data = file_get_contents("php://input");
$objData = json_decode($received_data);
require_once('login.php');
.
.
//Connect to database and send query
.
.
$data_requested = json_encode($data);
echo $data_requested;
```
I tried to do something similar to this link: <http://www.cleverweb.nl/javascript/a-simple-search-with-angularjs-and-php/>
but it's still not working. Does anybody have any idea?
Thank you
|
2013/09/12
|
[
"https://Stackoverflow.com/questions/18765845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2623568/"
] |
You have to turn off clipping by adding `clip_on=False` to your plot command:
```
import numpy as np
import matplotlib.pyplot as plt
plt.plot(range(10), marker='o', ms=20, clip_on=False)
axes = plt.gca()
axes.spines['right'].set_color('none')
axes.spines['top'].set_color('none')
axes.xaxis.set_ticks_position('bottom')
axes.spines['bottom'].set_position(('axes', -0.05))
axes.yaxis.set_ticks_position('left')
axes.spines['left'].set_position(('axes', -0.05))
axes.tick_params(axis='x', direction='out')
axes.tick_params(axis='y', direction='out')
plt.show()
```
which produces:

|
```
from pylab import *
plot(range(10), marker='o', ms=20)
#customize axes
axes = gca()
axes.spines['right'].set_color('none')
axes.spines['top'].set_color('none')
axes.xaxis.set_ticks_position('bottom')
axes.spines['bottom'].set_position(('axes', -0.05))
axes.yaxis.set_ticks_position('left')
axes.spines['left'].set_position(('axes', -0.05))
axes.tick_params(axis='x', direction='out')
axes.tick_params(axis='y', direction='out')
axes.spines['bottom'].set_smart_bounds(True)
axes.spines['left'].set_smart_bounds(True)
axes.set_xlim(-0.5, 9.5)
axes.set_ylim(-0.5, 9.5)
axes.set_xticks(range(10))
axes.set_yticks(range(10))
```

|
6,691,224
|
I have a C code which reads 1 line at a time, from a file opened in text mode using
```
fgets(buf,200,fin);
```
The input file which fgets() reads lines from is a command line argument to the program.
Now fgets leaves the newline character included in the string copied to buf.
Somewhere down the line in the code I check
```
length = strlen(buf);
```
For some input files, which I guess are edited in a \*nix environment, the newline character is just `'\n'`
But some other test case input files (which I guess are edited/created under a Windows environment) have 2 characters indicating a newline - `'\r''\n'`
I want to remove the newline character and want to put a '\0' as the string terminator character. So I have to either do -
```
if(len == (N+1))
{
if(buf[length-1] == '\n')
{
buf[length-2] = '\0'; //for a `\r\n` newline
}
}
```
or
```
if(len == (N))
{
if(buf[length-1] == '\n')
{
buf[length-1] = '\0'; //for a `\n` newline
}
}
```
Since the text files are passed as command line arguments to the program, I have no control over how they are edited/composed and hence cannot filter them with some tool to make the newlines consistent.
How can I handle this situation?
Is there any fgets-equivalent function in the standard C library (no extensions) which can handle these inconsistent newline characters and return a string without them?
|
2011/07/14
|
[
"https://Stackoverflow.com/questions/6691224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759376/"
] |
I like to update `length` at the same time
```
if (length > 0 && buf[length - 1] == '\n') buf[--length] = 0;
if (length > 0 && buf[length - 1] == '\r') buf[--length] = 0;
```
or, to remove **all** trailing whitespace
```
/* remember to #include <ctype.h> */
while ((length > 0) && isspace((unsigned char)buf[length - 1])) {
buf[--length] = 0;
}
```
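If you only need to drop the line terminator (rather than all trailing whitespace), a compact idiom that stays within the standard C library is `strcspn`, which truncates at the first `'\r'` or `'\n'`, whichever comes first, regardless of the line-ending style. A minimal sketch (the file name is just a placeholder):
```
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[200];
    FILE *fin = fopen("input.txt", "r");   /* placeholder file name */
    if (fin == NULL) return 1;

    while (fgets(buf, sizeof buf, fin) != NULL) {
        /* cut the string at the first '\r' or '\n', if any */
        buf[strcspn(buf, "\r\n")] = '\0';
        size_t length = strlen(buf);       /* length now excludes the line ending */
        printf("%zu: %s\n", length, buf);
    }
    fclose(fin);
    return 0;
}
```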
|
If you are troubled by the different line endings (`\n` and `\r\n`) on different machines, one way to neutralize them would be to use the `dos2unix` command (assuming you are working on Linux and have files edited in a Windows environment). That command would replace all Windows-style line endings with Linux-style line endings. The reverse `unix2dos` also exists. You can call these utilities from within the C program (with `system`, for example) and then process the line like you are currently doing. This would reduce the burden on your program.
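A minimal sketch of driving that from C, assuming `dos2unix` is installed on the system and using a placeholder file name, might look like:
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Assumes dos2unix is on the PATH; "input.txt" is a placeholder. */
    if (system("dos2unix input.txt") != 0) {
        fprintf(stderr, "dos2unix failed or is not installed\n");
        return 1;
    }

    FILE *fin = fopen("input.txt", "r");
    if (fin == NULL) return 1;

    char buf[200];
    while (fgets(buf, sizeof buf, fin) != NULL) {
        /* lines now end in '\n' only, so a single check is enough */
        size_t length = strlen(buf);
        if (length > 0 && buf[length - 1] == '\n') buf[--length] = '\0';
    }
    fclose(fin);
    return 0;
}
```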
|
6,691,224
|
I have C code which reads 1 line at a time from a file opened in text mode using
```
fgets(buf,200,fin);
```
The input file which fgets() reads lines from is a command line argument to the program.
Now fgets leaves the newline character included in the string copied to buf.
Somewhere down the line in the code I check
```
length = strlen(buf);
```
For some input files, which I guess were edited in a \*nix environment, the newline character is just `'\n'`.
But some other test-case input files (which I guess were edited/created under a Windows environment) have 2 characters indicating a newline - `'\r''\n'`.
I want to remove the newline character(s) and put a '\0' as the string terminator. So I have to do either -
```
if(len == (N+1))
{
if(buf[length-1] == '\n')
{
buf[length-2] = '\0'; //for a `\r\n` newline
}
}
```
or
```
if(len == (N))
{
if(buf[length-1] == '\n')
{
buf[length-1] = '\0'; //for a `\n` newline
}
}
```
Since the text files are passed as command line arguments to the program, I have no control over how they are edited/composed and hence cannot filter them with some tool to make the newlines consistent.
How can I handle this situation?
Is there any fgets-equivalent function in the standard C library (no extensions) which can handle these inconsistent newline characters and return a string without them?
|
2011/07/14
|
[
"https://Stackoverflow.com/questions/6691224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759376/"
] |
I like to update `length` at the same time
```
if (length > 0 && buf[length - 1] == '\n') buf[--length] = 0;
if (length > 0 && buf[length - 1] == '\r') buf[--length] = 0;
```
or, to remove **all** trailing whitespace
```
/* remember to #include <ctype.h> */
while ((length > 0) && isspace((unsigned char)buf[length - 1])) {
buf[--length] = 0;
}
```
|
I think your best (and easiest) option is to write your own strlen function:
```
size_t zstrlen(char *line)
{
char *s = line;
while (*s && *s != '\r' && *s != '\n') s++;
*s = '\0';
return (s - line);
}
```
Now, to calculate the length of the string excluding the newline character(s) and eliminating it(/them) you simply do:
```
fgets(buf,200,fin);
length = zstrlen(buf);
```
It works for Unix style ('\n'), Windows style ('\r\n') and old Mac style ('\r').
Note that there are faster (but non-portable) implementations of strlen that you can adapt to your needs.
Hope it helps,
RD:
|
6,691,224
|
I have C code which reads 1 line at a time from a file opened in text mode using
```
fgets(buf,200,fin);
```
The input file which fgets() reads lines from is a command line argument to the program.
Now fgets leaves the newline character included in the string copied to buf.
Somewhere down the line in the code I check
```
length = strlen(buf);
```
For some input files, which I guess were edited in a \*nix environment, the newline character is just `'\n'`.
But some other test-case input files (which I guess were edited/created under a Windows environment) have 2 characters indicating a newline - `'\r''\n'`.
I want to remove the newline character(s) and put a '\0' as the string terminator. So I have to do either -
```
if(len == (N+1))
{
if(buf[length-1] == '\n')
{
buf[length-2] = '\0'; //for a `\r\n` newline
}
}
```
or
```
if(len == (N))
{
if(buf[length-1] == '\n')
{
buf[length-1] = '\0'; //for a `\n` newline
}
}
```
Since the text files are passed as command line arguments to the program, I have no control over how they are edited/composed and hence cannot filter them with some tool to make the newlines consistent.
How can I handle this situation?
Is there any fgets-equivalent function in the standard C library (no extensions) which can handle these inconsistent newline characters and return a string without them?
|
2011/07/14
|
[
"https://Stackoverflow.com/questions/6691224",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759376/"
] |
I think your best (and easiest) option is to write your own strlen function:
```
size_t zstrlen(char *line)
{
char *s = line;
while (*s && *s != '\r' && *s != '\n') s++;
*s = '\0';
return (s - line);
}
```
Now, to calculate the length of the string excluding the newline character(s) and eliminating it(/them) you simply do:
```
fgets(buf,200,fin);
length = zstrlen(buf);
```
It works for Unix style ('\n'), Windows style ('\r\n') and old Mac style ('\r').
Note that there are faster (but non-portable) implementations of strlen that you can adapt to your needs.
Hope it helps,
RD:
|
If you are troubled by the different line endings (`\n` and `\r\n`) on different machines, one way to neutralize them would be to use the `dos2unix` command (assuming you are working on Linux and have files edited in a Windows environment). That command would replace all Windows-style line endings with Linux-style line endings. The reverse `unix2dos` also exists. You can call these utilities from within the C program (with `system`, for example) and then process the line like you are currently doing. This would reduce the burden on your program.
|
1,759,307
|
I created a project as a Class Library. Now I need to make it into a WCF service library. I can create a WCF project, but I would like to avoid all that fuss with TFS. I've set up the App.config and added the /client:"wcfTestClient.exe" line to the command line arguments, but there still seems to be something else missing that keeps it from launching the hosting.
|
2009/11/18
|
[
"https://Stackoverflow.com/questions/1759307",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133910/"
] |
I discovered the following doing the opposite of what you are trying to achieve, i.e. changing a service library to a console application.
Some of the settings in the .csproj files cannot be edited from the settings screens within VS. To convert a class library to a WCF Service Library you need to add the following to your project file.
Add the following to the first `PropertyGroup` [these are the GUIDs for a C# WCF project]:
```
<ProjectTypeGuids>{3D9AD99F-2412-4246-B90B-4EAA41C64699};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
```
See here for further information on [ProjectTypeGuids](https://stackoverflow.com/questions/2911565)
You may also need to add the following line immediately below:
```
<StartArguments>/client:"WcfTestClient.exe"</StartArguments>
```
But ultimately it's the ProjectTypeGuids that you need to manually insert to get VS to recognise the project as a WCF Service Library project.
|
This is what I had to do to convert my class library to WCF REST application.
1) Modify the .csproj file and add the two lines below to the first PropertyGroup element in the .csproj file.
```
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
<UseIISExpress>false</UseIISExpress>
```
2) Add the following line immediately below `<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />` to import the Microsoft.WebApplication.targets file
```
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />
```
3) Add the following code to the end of the file before the `</Project>` tag.
```
<ProjectExtensions>
<VisualStudio>
<FlavorProperties GUID="{349c5851-65df-11da-9384-00065b846f21}">
<WebProjectProperties>
<UseIIS>False</UseIIS>
<AutoAssignPort>True</AutoAssignPort>
<DevelopmentServerPort>50178</DevelopmentServerPort>
<DevelopmentServerVPath>/</DevelopmentServerVPath>
<IISUrl>
</IISUrl>
<NTLMAuthentication>False</NTLMAuthentication>
<UseCustomServer>False</UseCustomServer>
<CustomServerUrl>
</CustomServerUrl>
<SaveServerSettingsInUserFile>False</SaveServerSettingsInUserFile>
</WebProjectProperties>
</FlavorProperties>
</VisualStudio>
</ProjectExtensions>
```
4) Save the .csproj file and **Reload the project.**
5) Add a Web.Config file to the project and add the bare minimal code below. You can add more later per your requirements.
```
<?xml version="1.0"?>
<configuration>
<system.web>
<compilation debug="true" targetFramework="4.0" />
</system.web>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true">
<add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
</modules>
</system.webServer>
<system.serviceModel>
<serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
<standardEndpoints>
<webHttpEndpoint>
<!--
Configure the WCF REST service base address via the global.asax.cs file and the default endpoint
via the attributes on the <standardEndpoint> element below
-->
<standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
</webHttpEndpoint>
</standardEndpoints>
</system.serviceModel>
</configuration>
```
6) Add a Global.asax file. Below is a sample file.
```
public class Global : HttpApplication
{
void Application_Start(object sender, EventArgs e)
{
RegisterRoutes();
}
private void RegisterRoutes()
{
// Edit the base address of Service1 by replacing the "Service1" string below
RouteTable.Routes.Add(new ServiceRoute("YourService", new WebServiceHostFactory(), typeof(YourServiceClass)));
}
}
```
7) Finally, in the project's properties under the **Build** tab, if the **Output path** is set to `bin\Debug`, modify it to `bin\`.
|