| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
3,686,820
|
I can't seem to understand this question at all. It does not make sense to me.
The question is
>
> Given $\left|\vec a\right| = 3, \left|\vec b\right| = 5$ and $\left|\vec a+\vec b\right| = 7$. Determine $\left|\vec a-\vec b\right|$.
>
>
>
I have tried finding $\left|\vec a+\vec b\right|$ using the cosine rule, such that $\left|\vec a+\vec b\right| = 7 = 3^2 + 5^2 - 2\cos\theta$
Which failed, as I clearly am unable to picture this question correctly in my head. If someone could explain this question (or maybe help me sketch it) that'd be very helpful; thanks in advance.
|
2020/05/22
|
[
"https://math.stackexchange.com/questions/3686820",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/791513/"
] |
$0 = 0^2 = (a + b + c)^2 = a^2 + b^2 + c^2 + 2ab + 2ac + 2bc$
$a^2 + b^2 + c^2 = -2(ab + ac + bc)$
$b^2 - ac = -b(a + c) - ac = -(ab + ac + bc)$ as $b = -(a + c)$
Hence $a^2 + b^2 + c^2 = 2(b^2 - ac)$, so the answer is 2.
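Both answers rely on the condition $a + b + c = 0$; the claimed value of the ratio $\frac{a^2+b^2+c^2}{b^2-ac}$ can be sanity-checked numerically (a quick sketch, not part of either original answer):

```python
# Sanity check: if a + b + c = 0, then (a^2 + b^2 + c^2) / (b^2 - a*c) = 2,
# provided the denominator is non-zero.
def ratio(a, b):
    c = -(a + b)  # enforce the constraint a + b + c = 0
    return (a**2 + b**2 + c**2) / (b**2 - a * c)

print(ratio(3.0, 5.0))   # 2.0
print(ratio(-1.0, 7.0))  # 2.0
```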
|
With $b = \lambda a, c = \mu a$ (so that $1 + \lambda + \mu = 0$):
$$
\frac{1+\lambda^2+\mu^2}{\lambda^2-\mu}=\frac{1+\lambda^2+(1+\lambda)^2}{\lambda^2+\lambda+1} = 2
$$
|
32,122,648
|
I have a question about a UNIX script. I have a file as below:
```
A|B|C|D|
E|F|G|
H|I|J|K
L|M|N|
O|P|Q|
```
I want to select records from this file with condition as 'only records with no 4th value' will be picked up. The result file should look like
```
E|F|G|
L|M|N|
O|P|Q|
```
Can someone please help me with this.
Also: I've got one more problem with this: what if the line E|F|G| has a space after the last pipe (|)? It won't select the line. Do we need to trim this?
|
2015/08/20
|
[
"https://Stackoverflow.com/questions/32122648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/844365/"
] |
You can use awk:
```
awk -F '|' '$4 == ""' file
E|F|G|
L|M|N|
O|P|Q|
```
**Breakdown:**
```
-F '|' # sets input field separator as |
$4 == "" # selects only records that have 4th column empty
```
If there are spaces around the `|` characters, you can also use:
```
awk -F ' *\\| *' '$4 == ""' file
```
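For illustration, the same selection logic can be sketched outside of awk, in Python (the file contents are hard-coded here; in practice you would read them from the file):

```python
# Keep only the lines whose 4th |-separated field is empty,
# trimming stray spaces so a line like "O|P|Q| " is still selected.
lines = ["A|B|C|D|", "E|F|G|", "H|I|J|K", "L|M|N|", "O|P|Q| "]

def no_fourth_value(line):
    fields = line.split("|")
    return len(fields) <= 3 or fields[3].strip() == ""

selected = [line for line in lines if no_fourth_value(line)]
print("\n".join(selected))
```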
|
awk -F '|' '!length($4)' file
E|F|G|
L|M|N|
O|P|Q|
|
434,402
|
From what I learned in tensor calculus so far, coordinate transformations are supposed to preserve the metric of the space. (Here I used GR notation, but the metric doesn't have to be the spacetime metric.)
$$\Lambda^\rho{}\_\mu\,\Lambda^\sigma{}\_\nu\,g\_{\rho\sigma}=g\_{\mu\nu}$$
So, to find all possible $\Lambda$'s, I thought that I just have to use the rule described above. That is to find all $\Lambda$'s that give back the exact same metric.
However, for the transformations between Cartesian and polar coordinates in a 2-d plane, the metric looks very different in the two coordinate systems, and yet they are equivalent. Is it because going from Cartesian to polar is not a linear transformation or something? And does the set of transformations that I get from the method above not contain the Cartesian-to-polar transformation? If so, then what kind of transformations are they?
|
2018/10/14
|
[
"https://physics.stackexchange.com/questions/434402",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/209292/"
] |
You seem a bit confused. A general coordinate transformation is just any differentiable, bijective function with a differentiable inverse (called a diffeomorphism) between open sets in $\mathbb{R}^n$. So you can't really list them all; any function that satisfies these conditions will work. Under a transformation $x'^\mu = x'^\mu(x)$, the metric changes as
$$g'\_{\mu\nu} = \frac{\partial x^\alpha}{\partial x'^\mu} \frac{\partial x^\beta}{\partial x'^\nu} g\_{\alpha\beta},$$
where the matrices $g\_{\mu\nu}$ and $g'\_{\mu\nu}$ will not in general be the same. A coordinate transformation preserves the metric in the sense that the abstract tensor is coordinate-independent, but its components do depend on the coordinates.
There is a special class of diffeomorphisms, called isometries, that do leave the components of the metric invariant:
$$g\_{\mu\nu} = \frac{\partial x^\alpha}{\partial x'^\mu} \frac{\partial x^\beta}{\partial x'^\nu} g\_{\alpha\beta}.$$
(Pay attention to the primes!) We think of them as the symmetries of our space. In Euclidean space they are rotations and translations; in Minkowski spacetime some rotations are replaced by Lorentz boosts. In a more general situation you might have fewer isometries: a black hole only has rotational and time-translation symmetry, but not spatial translations or boosts. A space might not have any isometries at all.
|
It is actually not the metric tensor, considered as the ensemble of its components, which is supposed to be preserved under coordinate transformations, but the invariant line element $ds$, or rather the square of the line element:
$$ds^2 = g\_{ik}dx^i dx^k$$
Cartesian coordinates and their polar counterparts are very useful to demonstrate that: the square of the invariant line element in 2-dimensional Cartesian coordinates is:
$$ds^2 = dx^2 + dy^2$$
A look at this formula shows us that the only non-zero components of the metric tensor are $$g\_{11}= 1 = g\_{22}$$ where for simplicity we have identified $x\equiv x^1$ and $y\equiv x^2$.
If we now go over to polar coordinates, i.e. use $(r,\phi)$ instead of $(x=r\cos\phi,y=r\sin\phi)$, the components of the metric tensor undergo the following transformation:
$$ g\_{\bar{i}\bar{k}} = \frac{\partial x^j}{\partial x^\bar{i}}\frac{\partial x^m}{\partial x^\bar{k}}g\_{jm} \equiv \Lambda^j\_\bar{i} \Lambda^m\_\bar{k} g\_{jm} $$
$$ g\_{\bar{1}\bar{1}} = \frac{\partial x^1}{\partial x^\bar{1}}\frac{\partial x^1}{\partial x^\bar{1}}g\_{11} +2\frac{\partial x^1}{\partial x^\bar{1}}\frac{\partial x^2}{\partial x^\bar{1}}g\_{12} + \frac{\partial x^2}{\partial x^\bar{1}}\frac{\partial x^2}{\partial x^\bar{1}}g\_{22}= \cos^2\phi g\_{11} + 2\frac{\partial x^1}{\partial x^\bar{1}}\frac{\partial x^2}{\partial x^\bar{1}} \cdot 0 + \sin^2\phi g\_{22} = 1$$
and
$$ g\_{\bar{2}\bar{2}} = \frac{\partial x^1}{\partial x^\bar{2}}\frac{\partial x^1}{\partial x^\bar{2}}g\_{11} +2\frac{\partial x^1}{\partial x^\bar{2}}\frac{\partial x^2}{\partial x^\bar{2}}g\_{12} + \frac{\partial x^2}{\partial x^\bar{2}}\frac{\partial x^2}{\partial x^\bar{2}}g\_{22}= r^2 \sin^2\phi\, g\_{11} + 2\frac{\partial x^1}{\partial x^\bar{2}}\frac{\partial x^2}{\partial x^\bar{2}} \cdot 0 + r^2 \cos^2\phi\, g\_{22}= r^2 $$
It can equally easily be checked in the same way that the component $g\_{\bar{1}\bar{2}}=0$.
Therefore in polar coordinates the square of the invariant line element is:
$$ds^2 = 1\cdot dr^2 + r^2 d\phi^2$$ and as the name of $ds^2$ suggests, it does not change under the coordinate transformation:
$$ds^2 = dx^2 + dy^2 = 1\cdot dr^2 + r^2 d\phi^2.$$
You can of course consider the metric tensor in a coordinate-independent way, i.e.:
$$g= g\_{ik} e^i \otimes e^k$$
One is free to choose the coordinates, e.g. Cartesian coordinates or polar coordinates or whatever you like, but the components of the metric tensor $g\_{ik}$ depend on the chosen coordinates; in any holonomic coordinate basis one would get:
$$g = g\_{ik} dx^i \otimes dx^k=ds^2$$
The coordinate-independent definition of the metric tensor turns out to be equivalent to the square of the invariant line element. $g\equiv ds^2$ does not change, but the coefficients $g\_{ik}$, the components of $g$, change according to the formula given above.
EDIT: Of course for most spaces described by a metric there are coordinate transformations which keep the components of the metric tensor invariant: for N-dimensional Euclidean space these are the rotations belonging to the group O(N) together with translations; for Minkowski space-time these are the Lorentz transformations and the corresponding translations. For other spaces the search for the invariance group of the metric tensor is a problem of differential geometry.
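The transformation rule for the components, $g\_{\bar{i}\bar{k}} = \Lambda^j\_\bar{i}\Lambda^m\_\bar{k} g\_{jm}$, can be checked numerically for the Cartesian-to-polar case via the Jacobian; a minimal sketch (the sample point $(r,\phi)$ is arbitrary):

```python
import math

# Jacobian of (x, y) = (r*cos(phi), r*sin(phi)) with respect to (r, phi),
# evaluated at a sample point.
r, phi = 2.0, 0.7
J = [[math.cos(phi), -r * math.sin(phi)],   # dx/dr, dx/dphi
     [math.sin(phi),  r * math.cos(phi)]]   # dy/dr, dy/dphi

# The Cartesian metric is the identity, so g'_ab = sum_i J[i][a] * J[i][b].
g_polar = [[sum(J[i][a] * J[i][b] for i in range(2)) for b in range(2)]
           for a in range(2)]

print(g_polar)  # numerically [[1, 0], [0, r**2]], as derived above
```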
|
6,388,388
|
I set up a javascript alert() handler in a WebChromeClient for an embedded WebView:
```
@Override
public boolean onJsAlert(WebView view, String url, String message, final android.webkit.JsResult result)
{
Log.d("alert", message);
Toast.makeText(activity.getApplicationContext(), message, 3000).show();
return true;
};
```
Unfortunately, this only shows a popup toast once, then the WebView stops responding to any events. I can't even use my menu command to load another page. I don't see errors in LogCat, what could be the problem here?
|
2011/06/17
|
[
"https://Stackoverflow.com/questions/6388388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778234/"
] |
You need to invoke `cancel()` or `confirm()` on the `JsResult result` parameter.
|
Add this; calling `result.confirm()` tells the WebView that the alert has been handled:
```
public boolean onJsAlert(WebView view, String url, String message, JsResult result) {
result.confirm();
Toast.makeText(getApplicationContext(), message, Toast.LENGTH_LONG).show();
return true;
}
```
|
6,372
|
I am trying to automate the editors' work by automating excerpts.
My solution works but there are few problems with it:
1. If a post has images/broken html at the beginning it breaks the layout.
2. Substring cuts words.
Is there a better solution to automate excerpts or improve my existing code?
```
<?php if(!empty($post->post_excerpt)) {
the_excerpt();
} else {
echo "<p>".substr(get_the_content(), 0, 160)."...</p>";
}
?>
```
|
2011/01/05
|
[
"https://wordpress.stackexchange.com/questions/6372",
"https://wordpress.stackexchange.com",
"https://wordpress.stackexchange.com/users/2313/"
] |
The excerpt filter by default cuts your post by a word count, which I think is probably preferable to a character-based substr function like you're doing, and it strips out tags and images as well while doing it.
You can set the number of words to excerpt with the filter **[excerpt\_length](http://codex.wordpress.org/Plugin_API/Filter_Reference/excerpt_length)** (it defaults to 55 words; this function from the Codex shows how to change it to 20):
```
function new_excerpt_length($length) {
return 20;
}
add_filter('excerpt_length', 'new_excerpt_length', 999);
```
If you need to use a character-length based cutoff as in your example, you could fix broken tags and such just by applying an appropriate filter to your output, like this:
```
$content_to_excerpt = strip_tags( strip_shortcodes( get_the_content() ) );
echo "<p>". substr( apply_filters('the_excerpt', $content_to_excerpt), 0, 160)."...</p>";
```
Note that you're stripping tags and applying the filters before truncating the excerpt, so as not to leave an open tag in your excerpt that will screw up the rest of your layout.
There are a number of great themes out there that deal with excerpts in creative ways; I advise you to take a look at how they do it. Here are a few good blog posts from people who have thought through the issue:
* [More Tags or Excerpts](http://wptheming.com/2010/11/more-tags-or-excerpts/)
(wptheming)
* [Replacing WordPress content with an excerpt without editing theme files](http://justintadlock.com/archives/2008/08/24/replacing-wordpress-content-with-an-excerpt-without-editing-theme-files) (justintadlock)
Also, for a really random way of dealing with excerpts, look at the [Kirby theme](http://themeshaper.com/kirby/) - it tries to implement something like Microsoft Word's autosummarize feature by using css to show only headers and lists (from what I remember).
|
You don't really need to do this. The the\_excerpt() tag automatically checks for an excerpt, and if none exists it uses the first 55 words of the post's content (with all tags stripped). This excerpt length can be controlled by hooking into [the excerpt\_length filter](http://codex.wordpress.org/Function_Reference/the_excerpt#Control_Excerpt_Length_using_Filters).
If you're trying to include html (images, links, etc) in the automatically-generated excerpt, that's kinda tricky. Obviously, breaking a string at an arbitrary number of words or characters might also break any (x)html.
WordPress has an internal function that can help with this called force\_balance\_tags(). It's located in the /wp-includes/formatting.php file. This function fairly reliably adds closing tags to open tags in a string. But it's not a cure-all... it doesn't fix incomplete tags (tags that may be cut off in the middle). So it'd be up to you to figure out how to split the string between tags in the first place.
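For illustration only, the "strip tags first, then cut at a word boundary" idea can be sketched like this (plain Python; `make_excerpt` and its regex-based tag stripping are hypothetical stand-ins, not WordPress functions):

```python
import re

def make_excerpt(html, num_words=55, more="..."):
    """Word-count excerpt: strip markup first so no tag can be cut in half."""
    text = re.sub(r"<[^>]+>", "", html)  # crude tag stripping, illustration only
    words = text.split()
    if len(words) <= num_words:
        return " ".join(words)
    return " ".join(words[:num_words]) + more

print(make_excerpt("<p>Hello <b>wide</b> world of excerpts</p>", num_words=3))
# Hello wide world...
```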
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
You simply drag the Mapbox-ios-sdk project file from Finder to the files pane in Xcode.
And then click the project in Xcode files pane, Target-->Build Settings. Search for "User Header Search Paths". Specify where the MapBox sdk is located.
What I do is I put the MapBox-iOS-sdk in my project directory. And I set the path as `$(SRCROOT)` and make sure to set it as recursive.
While you're at it also make sure -ObjC and -all\_load are set in Other linker flags.
That only helps you reference the .h files. To link, you also need libMapBox.a under Build Phases-->Link Binary with Libraries.
If there is a MapBox.bundle (as in the latest development branch) in the group and files pane, you want to drag that into Target->Build phases->Copy bundle resources as well. (The add button doesn't work for me.)
|
I think the best way is to look at [mapbox-ios-example](https://github.com/mapbox/mapbox-ios-example) provided by MapBox and try to replicate all dependencies into your own project.
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
You simply drag the Mapbox-ios-sdk project file from Finder to the files pane in Xcode.
And then click the project in Xcode files pane, Target-->Build Settings. Search for "User Header Search Paths". Specify where the MapBox sdk is located.
What I do is I put the MapBox-iOS-sdk in my project directory. And I set the path as `$(SRCROOT)` and make sure to set it as recursive.
While you're at it also make sure -ObjC and -all\_load are set in Other linker flags.
That only helps you reference the .h files. To link, you also need libMapBox.a under Build Phases-->Link Binary with Libraries.
If there is a MapBox.bundle (as in the latest development branch) in the group and files pane, you want to drag that into Target->Build phases->Copy bundle resources as well. (The add button doesn't work for me.)
|
A bit late but I did it like it was explained here: <http://mapbox.com/mapbox-ios-sdk/#binary>.
Not messing around with git, just dragging things into your project, easy!
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
You simply drag the Mapbox-ios-sdk project file from Finder to the files pane in Xcode.
And then click the project in Xcode files pane, Target-->Build Settings. Search for "User Header Search Paths". Specify where the MapBox sdk is located.
What I do is I put the MapBox-iOS-sdk in my project directory. And I set the path as `$(SRCROOT)` and make sure to set it as recursive.
While you're at it also make sure -ObjC and -all\_load are set in Other linker flags.
That only helps you reference the .h files. To link, you also need libMapBox.a under Build Phases-->Link Binary with Libraries.
If there is a MapBox.bundle (as in the latest development branch) in the group and files pane, you want to drag that into Target->Build phases->Copy bundle resources as well. (The add button doesn't work for me.)
|
I think the problem here is that he couldn't find a specific 'file' titled "MapBox.framework" inside the folder of resources downloaded from MapBox; what you actually need to do is copy the whole folder titled "MapBox.framework" into the Frameworks section. I think the confusion was that the folder doesn't show the yellow framework icon until you copy it into Xcode's Frameworks section.
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
I think the best way is to look at [mapbox-ios-example](https://github.com/mapbox/mapbox-ios-example) provided by MapBox and try to replicate all dependencies into your own project.
|
A bit late but I did it like it was explained here: <http://mapbox.com/mapbox-ios-sdk/#binary>.
Not messing around with git, just dragging things into your project, easy!
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
I think the best way is to look at [mapbox-ios-example](https://github.com/mapbox/mapbox-ios-example) provided by MapBox and try to replicate all dependencies into your own project.
|
I think the problem here is that he couldn't find a specific 'file' titled "MapBox.framework" inside the folder of resources downloaded from MapBox; what you actually need to do is copy the whole folder titled "MapBox.framework" into the Frameworks section. I think the confusion was that the folder doesn't show the yellow framework icon until you copy it into Xcode's Frameworks section.
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
Just try:
```
#import <Mapbox/Mapbox.h>
```
instead of just importing Mapbox.h as suggested here:
<https://www.mapbox.com/blog/ios-sdk-framework>
|
I think the best way is to look at [mapbox-ios-example](https://github.com/mapbox/mapbox-ios-example) provided by MapBox and try to replicate all dependencies into your own project.
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
Just try:
```
#import <Mapbox/Mapbox.h>
```
instead of just importing Mapbox.h as suggested here:
<https://www.mapbox.com/blog/ios-sdk-framework>
|
A bit late but I did it like it was explained here: <http://mapbox.com/mapbox-ios-sdk/#binary>.
Not messing around with git, just dragging things into your project, easy!
|
12,983,260
|
I downloaded MapBox example from github using the following
git clone --recursive <https://github.com/mapbox/mapbox-ios-example.git>
Which downloaded it including all dependencies. Now I'm trying to create a separate project and include the MapBox SDK as it was in that example. I tried creating a workspace, then creating a single-view project, then adding a new file and selecting the .xcodeproj for the MapBox SDK, but it didn't work when I tried importing the `MapBox.h` file. I've never tried importing third-party APIs before and I'm a bit unsure how to do that correctly. Any idea how I can accomplish that?
Thanks in Advance
|
2012/10/19
|
[
"https://Stackoverflow.com/questions/12983260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/519274/"
] |
Just try:
```
#import <Mapbox/Mapbox.h>
```
instead of just importing Mapbox.h as suggested here:
<https://www.mapbox.com/blog/ios-sdk-framework>
|
I think the problem here is that he couldn't find a specific 'file' titled "MapBox.framework" inside the folder of resources downloaded from MapBox; what you actually need to do is copy the whole folder titled "MapBox.framework" into the Frameworks section. I think the confusion was that the folder doesn't show the yellow framework icon until you copy it into Xcode's Frameworks section.
|
36,550,651
|
I'm trying some activities in AngularJS and wondering if it's possible to dynamically create a table using only an ng-repeat for table headers, an ng-repeat for rows, and an ng-repeat for fields in rows?
Essentially I'd like to say "for each property that exists in an instance of an object, print a new `<th>`; for each object that exists in myArray, print a new `<tr>`; and for each property of each object in each row, print a new `<td>`".
Here's my controller:
```
var app=angular.module("app04",[]);
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
})
```
Here is the body:
```
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<theader>
<tr>
<th ng-repeat="object in con1.objectArray[0]">
{{Object.getOwnPropertyName(object)}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
I was instructed to write out the headers, since it's a very basic tutorial (I'm only on the 4th video), but it seems more convenient and better for re-usability to try a small thought challenge and see if it would be possible to do something like what I'm trying above.
The problem is that Object.getOwnPropertyNames and Object.keys don't seem to work in this JavaScript, so I was wondering if I was doing this incorrectly, or if there is a better way of doing it. I was also wondering about the community's thoughts on dynamically creating everything, given that I know all objects will contain the same properties?
|
2016/04/11
|
[
"https://Stackoverflow.com/questions/36550651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654055/"
] |
Simply change your view to use (key,value) for iterating through object properties:
```
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<thead>
<tr>
<th ng-repeat="(key,value) in con1.objectArray[0]">
{{key}}</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
|
You are nearly there, this is one way:
Clean up your controller and separate the header and data objects:
```
var app=angular.module("app04",[]);
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.tableHeaders = ["header1", "header2", "header3"]; // ... etc.
this.objectArray=[
{name:"Jane Doe", email:"Jane@gmail.com", phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},
{name:"John Doe", email:"John@gmail.com", phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
})
<div ng-controller="Controller1 as con1">
<table>
<thead>
<tr>
<th ng-repeat="object in con1.tableHeaders">
{{object}}</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
```
|
36,550,651
|
I'm trying some activities in AngularJS and wondering if it's possible to dynamically create a table using only an ng-repeat for table headers, an ng-repeat for rows, and an ng-repeat for fields in rows?
Essentially I'd like to say "for each property that exists in an instance of an object, print a new `<th>`; for each object that exists in myArray, print a new `<tr>`; and for each property of each object in each row, print a new `<td>`".
Here's my controller:
```
var app=angular.module("app04",[]);
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
})
```
Here is the body:
```
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<theader>
<tr>
<th ng-repeat="object in con1.objectArray[0]">
{{Object.getOwnPropertyName(object)}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
I was instructed to write out the headers, since it's a very basic tutorial (I'm only on the 4th video), but it seems more convenient and better for re-usability to try a small thought challenge and see if it would be possible to do something like what I'm trying above.
The problem is that Object.getOwnPropertyNames and Object.keys don't seem to work in this JavaScript, so I was wondering if I was doing this incorrectly, or if there is a better way of doing it. I was also wondering about the community's thoughts on dynamically creating everything, given that I know all objects will contain the same properties?
|
2016/04/11
|
[
"https://Stackoverflow.com/questions/36550651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654055/"
] |
One way you could do it is like this:
```
var app=angular.module("app04",[]);
app.controller("Controller1",["$scope", function($scope){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
}]);
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<thead>
<tr>
<th ng-repeat="(key,value) in con1.objectArray[0]">
{{key}}</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
But I will now quote `ng-repeat` documentation from [this link](https://docs.angularjs.org/api/ng/directive/ngRepeat):
>
> The JavaScript specification does not define the order of keys returned for an object, so Angular relies on the order returned by the browser when running for key in myObj. Browsers generally follow the strategy of providing keys in the order in which they were defined, although there are exceptions when keys are deleted and reinstated. See the MDN page on delete for more info.
>
>
>
Which basically means that the order of columns in your header is not guaranteed to be the same as the order of the data columns you expect:
```
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
```
For example if you define your `con1.objectArray[0]` like this:
```
{
email:"Jane@gmail.com",
name:"Jane Doe",
phoneModel:"LG Optimus S",
status:"sad",
purchaseDate:"2015-12-01"
}
```
On most browsers the column order in the `thead` will be different from the expected one: `email` will be the first column, then `name`, etc.
But if you know **that all your objects will be defined in the same order and you did not delete properties or do anything else that can affect the order of the properties in the object** you can do something like this:
```
<table>
<thead>
<tr>
<th ng-repeat="(key,val) in con1.objectArray[0]">{{key}}</th>
</tr>
</thead>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td ng-repeat="(key,val) in object">{{object[key]}}</td>
</tr>
</tbody>
</table>
```
Which is IMO better than the first example as it will work in all browsers provided that you follow the constraint in bold text.
But the safest approach is that you simply define columns (property names) in the controller in an array which guarantees order on all browsers:
```
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
},{email:"John@gmail.com", name:"John Doe",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
this.columns = Object.getOwnPropertyNames(this.objectArray[0]); // or you can do it manually with array ['name', 'email', ...]
});
```
And then in HTML
```
<div ng-controller="Controller1 as con1">
<table border="1">
<theader>
<tr>
<th ng-repeat="col in con1.columns">{{col}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td ng-repeat="col in con1.columns">{{object[col]}}</td>
</tr>
</tbody>
</table>
</div>
```
|
You are nearly there, this is one way:
Clear up your controller and separate the header and data objects:
```
var app=angular.module("app04",[]);
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.tableHeaders = ["header1", "header2", "header3" /* ... etc. */];
this.objectArray=[
{name:"Jane Doe", email:"Jane@gmail.com", phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},
{name:"John Doe", email:"John@gmail.com", phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
})
```
And the corresponding view:
```
<div ng-controller="Controller1 as con1">
<table>
<theader>
<tr>
<th ng-repeat="object in con1.tableHeaders">
{{object}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
```
|
36,550,651
|
I'm trying some activities in AngularJS and wondering if it's possible to dynamically create a table using only an ng-repeat for table headers, an ng-repeat for rows, and an ng-repeat for fields in rows?
Essentially I'd like to say "for each property that exists in an instance of an object, print a new `<th>`; for each object that exists in myArray, print a new `<tr>`; and for each property that exists in each instance of each object, in each row, print a new `<td>`."
Here's my controller:
```
var app=angular.module("app04",[]);
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
})
```
Here is the body:
```
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<theader>
<tr>
<th ng-repeat="object in con1.objectArray[0]">
{{Object.getOwnPropertyName(object)}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
I was instructed to write out the headers, since it's a very basic tutorial (I'm only on the 4th video), but it seems more convenient and better for re-usability to try a small thought challenge and see if it would be possible to do something like what I'm trying above.
The problem is that Object.getOwnPropertyName and Object.keys don't seem to be working with this JavaScript, so I was wondering if I was doing this incorrectly, or if there is a better way of doing it. I was also wondering about the community's thoughts on dynamically creating everything in a situation where I know all objects will contain the same properties?
|
2016/04/11
|
[
"https://Stackoverflow.com/questions/36550651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654055/"
] |
The one way you could do it is like this:
```
var app=angular.module("app04",[]);
app.controller("Controller1",["$scope", function($scope){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
}]);
```
And in the markup:
```
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<theader>
<tr>
<th ng-repeat="(key,value) in con1.objectArray[0]">
{{key}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
But I will now quote `ng-repeat` documentation from [this link](https://docs.angularjs.org/api/ng/directive/ngRepeat):
>
> The JavaScript specification does not define the order of keys returned for an object, so Angular relies on the order returned by the browser when running for key in myObj. Browsers generally follow the strategy of providing keys in the order in which they were defined, although there are exceptions when keys are deleted and reinstated. See the MDN page on delete for more info.
>
>
>
Which basically means that the order of columns in your header is not guaranteed to be the same as the order of the data columns you expect:
```
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
```
For example if you define your `con1.objectArray[0]` like this:
```
{
email:"Jane@gmail.com",
name:"Jane Doe",
phoneModel:"LG Optimus S",
status:"sad",
purchaseDate:"2015-12-01"
}
```
On most browsers the column order in the `thead` will be different from the expected one: `email` will be the first column, then `name`, and so on.
But if you know **that all your objects will be defined in the same order and you did not delete properties or do anything else that can affect the order of the properties in the object** you can do something like this:
```
<table>
<theader>
<tr>
<th ng-repeat="(key,val) in con1.objectArray[0]">{{key}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td ng-repeat="(key,val) in object">{{object[key]}}</td>
</tr>
</tbody>
</table>
```
Which is IMO better than the first example as it will work in all browsers provided that you follow the constraint in bold text.
But the safest approach is that you simply define columns (property names) in the controller in an array which guarantees order on all browsers:
```
app.controller("Controller1",function(){
this.name="ABCDEFGH";
this.objectArray=[{name:"Jane Doe", email:"Jane@gmail.com",
phoneModel:"LG Optimus S", status:"sad",purchaseDate:"2015-12-01"
},{name:"John Doe", email:"John@gmail.com",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
},{email:"John@gmail.com", name:"John Doe",
phoneModel:"iphone 6s", status:"happy",purchaseDate:"2016-12-05"
}];
this.columns = Object.getOwnPropertyNames(this.objectArray[0]); // or you can do it manually with array ['name', 'email', ...]
});
```
And then in HTML
```
<div ng-controller="Controller1 as con1">
<table border="1">
<theader>
<tr>
<th ng-repeat="col in con1.columns">{{col}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td ng-repeat="col in con1.columns">{{object[col]}}</td>
</tr>
</tbody>
</table>
</div>
```
|
Simply change your view to use (key,value) for iterating through object properties:
```
<body>
<h1>Hello Angular!</h1>
<div ng-controller="Controller1 as con1">
<table>
<theader>
<tr>
<th ng-repeat="(key,value) in con1.objectArray[0]">
{{key}}</th>
</tr>
</theader>
<tbody>
<tr ng-repeat="object in con1.objectArray">
<td>{{object.name}}</td>
<td>{{object.email}}</td>
<td>{{object.phoneModel}}</td>
<td>{{object.status}}</td>
<td>{{object.purchaseDate}}</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
```
|
11,776,662
|
I'm trying to make a loader using CSS animation and transforms instead of a GIF. Unfortunately, the following code drives Firefox's (and sometimes Chrome's and Safari's) CPU usage on my Mac OSX from <10% to >90%.
```
i.icon-repeat {
display:none;
-webkit-animation: Rotate 1s infinite linear;
-moz-animation: Rotate 1s infinite linear; //**this is the offending line**
animation: Rotate 1s infinite linear;
}
@-webkit-keyframes Rotate {
from {-webkit-transform:rotate(0deg);}
to {-webkit-transform:rotate(360deg);}
}
@-keyframes Rotate {
from {transform:rotate(0deg);}
to {transform:rotate(360deg);}
}
@-moz-keyframes Rotate {
from {-moz-transform:rotate(0deg);}
to {-moz-transform:rotate(360deg);}
}
```
Note, that without the `infinite linear` rotation or the `-moz-` vendor prefix, the "loader gif"-like behavior is lost. That is, the icon doesn't continuously rotate.
Perhaps this is just a bug or maybe I'm doing something wrong?
|
2012/08/02
|
[
"https://Stackoverflow.com/questions/11776662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/702275/"
] |
First, which version of Firefox are you using? It might be a bug, but CSS3 animations are known to use a lot of CPU for a fraction of a second. However, they are much faster than their jQuery counterparts.
It's not @-keyframes. It's @keyframes.
On a side note, I guess it's better that you use something new rather than the rotating image.
|
Could be a bug. But as with many of these vendor prefixed things, it's still very much a work in progress. For more reliable results across the board, I'd recommend using JavaScript - perhaps jQuery.
|
11,776,662
|
I'm trying to make a loader using CSS animation and transforms instead of a GIF. Unfortunately, the following code drives Firefox's (and sometimes Chrome's and Safari's) CPU usage on my Mac OSX from <10% to >90%.
```
i.icon-repeat {
display:none;
-webkit-animation: Rotate 1s infinite linear;
-moz-animation: Rotate 1s infinite linear; //**this is the offending line**
animation: Rotate 1s infinite linear;
}
@-webkit-keyframes Rotate {
from {-webkit-transform:rotate(0deg);}
to {-webkit-transform:rotate(360deg);}
}
@-keyframes Rotate {
from {transform:rotate(0deg);}
to {transform:rotate(360deg);}
}
@-moz-keyframes Rotate {
from {-moz-transform:rotate(0deg);}
to {-moz-transform:rotate(360deg);}
}
```
Note, that without the `infinite linear` rotation or the `-moz-` vendor prefix, the "loader gif"-like behavior is lost. That is, the icon doesn't continuously rotate.
Perhaps this is just a bug or maybe I'm doing something wrong?
|
2012/08/02
|
[
"https://Stackoverflow.com/questions/11776662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/702275/"
] |
I fixed my own problem. Instead of toggling the visibility of the icon, I simply added it to and then removed it from the DOM. The key thing I hadn't known about CSS animations is that the animation consumes CPU whether the element is `display:none` or `display:inline`.
So instead of that, do this (combined with the CSS in my question above):
```
var icon = document.createElement("i"); //create the icon
icon.className = "icon-repeat";
document.body.appendChild(icon); //icon append to the DOM
function removeElement(el) { // generic function to remove element could be used elsewhere besides this example
el.parentNode.removeChild(el);
}
removeElement(icon); //triggers the icon's removal from the DOM
```
|
Could be a bug. But as with many of these vendor prefixed things, it's still very much a work in progress. For more reliable results across the board, I'd recommend using JavaScript - perhaps jQuery.
|
11,776,662
|
I'm trying to make a loader using CSS animation and transforms instead of a GIF. Unfortunately, the following code drives Firefox's (and sometimes Chrome's and Safari's) CPU usage on my Mac OSX from <10% to >90%.
```
i.icon-repeat {
display:none;
-webkit-animation: Rotate 1s infinite linear;
-moz-animation: Rotate 1s infinite linear; //**this is the offending line**
animation: Rotate 1s infinite linear;
}
@-webkit-keyframes Rotate {
from {-webkit-transform:rotate(0deg);}
to {-webkit-transform:rotate(360deg);}
}
@-keyframes Rotate {
from {transform:rotate(0deg);}
to {transform:rotate(360deg);}
}
@-moz-keyframes Rotate {
from {-moz-transform:rotate(0deg);}
to {-moz-transform:rotate(360deg);}
}
```
Note, that without the `infinite linear` rotation or the `-moz-` vendor prefix, the "loader gif"-like behavior is lost. That is, the icon doesn't continuously rotate.
Perhaps this is just a bug or maybe I'm doing something wrong?
|
2012/08/02
|
[
"https://Stackoverflow.com/questions/11776662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/702275/"
] |
I fixed my own problem. Instead of toggling the visibility of the icon, I simply added it to and then removed it from the DOM. The key thing I hadn't known about CSS animations is that the animation consumes CPU whether the element is `display:none` or `display:inline`.
So instead of that, do this (combined with the CSS in my question above):
```
var icon = document.createElement("i"); //create the icon
icon.className = "icon-repeat";
document.body.appendChild(icon); //icon append to the DOM
function removeElement(el) { // generic function to remove element could be used elsewhere besides this example
el.parentNode.removeChild(el);
}
removeElement(icon); //triggers the icon's removal from the DOM
```
|
First, which version of Firefox are you using? It might be a bug, but CSS3 animations are known to use a lot of CPU for a fraction of a second. However, they are much faster than their jQuery counterparts.
It's not @-keyframes. It's @keyframes.
On a side note, I guess it's better that you use something new rather than the rotating image.
|
66,428,050
|
I was hoping something like this would work to get all but the last entry of a group:
```
from io import StringIO
import pandas as pd
df = pd.read_table(StringIO("""A B
1 a
1 b
2 c
3 z
3 z
3 z"""), sep="\s+")
g = df.groupby("A")
g.head(g.size() - 1)
```
I'd like to do it with vectorized functions or be told why it is not possible :)
|
2021/03/01
|
[
"https://Stackoverflow.com/questions/66428050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/992687/"
] |
Check `duplicated`
```
out = df[df.duplicated('A',keep='last')]
Out[50]:
A B
0 1 a
3 3 z
4 3 z
```
Or `tail`
```
df.drop(g.tail(1).index)
Out[54]:
A B
0 1 a
3 3 z
4 3 z
```
|
Easy way along your train of thought, try `lambda`:
```
df.groupby('A').apply(lambda x: x.iloc[:-1])
```
Less easy way, use `transform`:
```
g = df.groupby('A')
df[g['A'].transform('size')-1 > g.cumcount()]
```
But easiest and fastest:
```
df[df.duplicated('A', keep='last')]
```
|
3,691,900
|
For my data structures class, the first project requires a text file of songs to be parsed.
An example of input is:
ARTIST="unknown"
TITLE="Rockabye Baby"
LYRICS="Rockabye baby in the treetops
When the wind blows your cradle will rock
When the bow breaks your cradle will fall
Down will come baby cradle and all
"
I'm wondering the best way to extract the Artist, Title and Lyrics to their respective string fields in a Song class. My first reaction was to use a Scanner, take in the first character, and based on the letter, use skip() to advance the required characters and read the text between the quotation marks.
If I use this, I'm losing out on buffering the input. The full song text file has over 422K lines of text. Can the Scanner handle this even without buffering?
|
2010/09/11
|
[
"https://Stackoverflow.com/questions/3691900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/214892/"
] |
For something like this, you should probably just use Regular Expressions. The Matcher class supports buffered input.
The find method takes an offset, so you can just parse them at each offset.
<http://download.oracle.com/javase/1.4.2/docs/api/java/util/regex/Matcher.html>
Regex is a whole world unto itself. If you've never used them before, start here <http://download.oracle.com/javase/tutorial/essential/regex/> and be prepared. The effort is *so* very worth the time required.
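As a sketch of that regex approach (the class name and pattern here are my own illustration, not from the question): a pattern like `(\w+)="([^"]*)"` captures each `KEY="value"` field, and since `[^"]` also matches line terminators, the multi-line LYRICS value is handled without any extra flags.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SongParser {

    // KEY="value"; the class [^"] matches newlines too, so multi-line
    // lyrics inside the quotes are captured without Pattern.DOTALL.
    private static final Pattern FIELD = Pattern.compile("(\\w+)=\"([^\"]*)\"");

    /** Collects every KEY="value" pair found in one song record. */
    public static Map<String, String> extract(String record) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = FIELD.matcher(record);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }

    public static void main(String[] args) {
        String record = "ARTIST=\"unknown\"\n"
                + "TITLE=\"Rockabye Baby\"\n"
                + "LYRICS=\"Rockabye baby in the treetops\n"
                + "When the wind blows your cradle will rock\n\"";
        Map<String, String> song = extract(record);
        System.out.println(song.get("ARTIST")); // unknown
        System.out.println(song.get("TITLE"));  // Rockabye Baby
    }
}
```

For the 422K-line file you would still read via a `BufferedReader` (or load the whole file into one `String`) before matching, since `Matcher` operates on a `CharSequence` rather than reading from a stream directly.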
|
If the source data can be parsed using one token look ahead, [`StreamTokenizer`](http://download.oracle.com/javase/6/docs/api/java/io/StreamTokenizer.html) may be a choice. Here is an [example](https://stackoverflow.com/questions/2082174) that compares `StreamTokenizer` and `Scanner`.
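To get a feel for the `StreamTokenizer` route, here is a minimal sketch (class and method names are mine). One caveat worth noting: `StreamTokenizer`'s quoted strings stop at a line terminator, so the multi-line LYRICS field would need custom handling; this sketch only parses single-line `KEY="value"` fields.

```java
import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;

public class TokenDemo {

    /**
     * Parses one single-line KEY="value" field into {key, value}.
     * The default syntax table already treats '"' as a quote character
     * and letters as word characters, so no extra setup is needed.
     */
    public static String[] readField(String line) throws IOException {
        StreamTokenizer st = new StreamTokenizer(new StringReader(line));
        st.nextToken();                 // TT_WORD: the key
        String key = st.sval;
        st.nextToken();                 // the '=' (an ordinary character)
        st.nextToken();                 // quoted token: the value
        String value = st.sval;
        return new String[] { key, value };
    }

    public static void main(String[] args) throws IOException {
        String[] kv = readField("TITLE=\"Rockabye Baby\"");
        System.out.println(kv[0] + " -> " + kv[1]); // TITLE -> Rockabye Baby
    }
}
```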
|
3,691,900
|
For my data structures class, the first project requires a text file of songs to be parsed.
An example of input is:
ARTIST="unknown"
TITLE="Rockabye Baby"
LYRICS="Rockabye baby in the treetops
When the wind blows your cradle will rock
When the bow breaks your cradle will fall
Down will come baby cradle and all
"
I'm wondering the best way to extract the Artist, Title and Lyrics to their respective string fields in a Song class. My first reaction was to use a Scanner, take in the first character, and based on the letter, use skip() to advance the required characters and read the text between the quotation marks.
If I use this, I'm losing out on buffering the input. The full song text file has over 422K lines of text. Can the Scanner handle this even without buffering?
|
2010/09/11
|
[
"https://Stackoverflow.com/questions/3691900",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/214892/"
] |
For something like this, you should probably just use Regular Expressions. The Matcher class supports buffered input.
The find method takes an offset, so you can just parse them at each offset.
<http://download.oracle.com/javase/1.4.2/docs/api/java/util/regex/Matcher.html>
Regex is a whole world unto itself. If you've never used them before, start here <http://download.oracle.com/javase/tutorial/essential/regex/> and be prepared. The effort is *so* very worth the time required.
|
In this case, you could use a [CSV reader](http://h2database.com/javadoc/org/h2/tools/Csv.html), with the field separator '=' and the field delimiter '"' (double quote). It's not perfect, as you get one row for ARTIST, TITLE, and LYRICS.
|
29,317,431
|
How to delete the last 5 characters from the string?
```
procedure TForm1.Button15Click(Sender: TObject);
var
str:string;
begin
str:='012345678911234567892223456789';
showmessage(str);
end;
```
Thanks in advance
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29317431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3276608/"
] |
The absolute easiest way, with the least amount of overhead:
```
str := 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
ShowMessage(str);
SetLength(str, Length(str) - 5);
ShowMessage(str);
```
This involves no allocation of a temporary string, no access to anything in the RTL that wastes CPU time, and is extremely fast and efficient.
|
One way would be
```
str:= copy (str, 1, length (str) - 5)
```
Another would be
```
delete (str, length (str) - 4, 5)
```
|
29,317,431
|
How to delete the last 5 characters from the string?
```
procedure TForm1.Button15Click(Sender: TObject);
var
str:string;
begin
str:='012345678911234567892223456789';
showmessage(str);
end;
```
Thanks in advance
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29317431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3276608/"
] |
Using stringhelper routines (not available in D7 though):
```
ShowMessage(str.Substring(0,str.Length-5));
```
In D7 using the StrUtils unit:
```
ShowMessage(LeftStr(str,Length(str)-5));
```
|
One way would be
```
str:= copy (str, 1, length (str) - 5)
```
Another would be
```
delete (str, length (str) - 4, 5)
```
|
29,317,431
|
How to delete the last 5 characters from the string?
```
procedure TForm1.Button15Click(Sender: TObject);
var
str:string;
begin
str:='012345678911234567892223456789';
showmessage(str);
end;
```
Thanks in advance
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29317431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3276608/"
] |
The absolute easiest way, with the least amount of overhead:
```
str := 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
ShowMessage(str);
SetLength(str, Length(str) - 5);
ShowMessage(str);
```
This involves no allocation of a temporary string, no access to anything in the RTL that wastes CPU time, and is extremely fast and efficient.
|
Using stringhelper routines (not available in D7 though):
```
ShowMessage(str.Substring(0,str.Length-5));
```
In D7 using the StrUtils unit:
```
ShowMessage(LeftStr(str,Length(str)-5));
```
|
67,981,095
|
I am new to Spring, and although I can convert domain entities to a `List<Entity>`, I cannot do the conversion properly for `Optional<Entity>`. I have the following methods in my repository and service:
***EmployeeRepository:***
```
@Query(value = "SELECT ...")
Optional<Employee> findByUuid(@Param(value = "uuid") final UUID uuid);
```
***EmployeeService:***
```
@Override
@LogExecution
@Transactional(readOnly = true)
public Optional<EmployeeDTO> findByUuid(UUID uuid) {
Optional<Employee> employee = employeeRepository.findByUuid(uuid);
return employee
.stream()
.map(EmployeeDTO::new)
// .orElse(null);
//.findFirst(); /// ???
}
```
My questions:
**1.** How should I convert `Optional<Employee>` to `Optional<EmployeeDTO>` properly?
**2.** Does `Spring JPA` collect the fields in the `SELECT` clause and map them in the service method to the corresponding `DTO` by matching their names? If so, does it maintain the naming e.g. `employee_name` to `employeeName` in database table and domain model class?
|
2021/06/15
|
[
"https://Stackoverflow.com/questions/67981095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
The mapping between the output of `employeeRepository#findByUuid`, which is `Optional<Employee>`, and the method's return type `Optional<EmployeeDTO>` is *1:1*, so no `Stream` is involved; there is no need to call `stream()`.
All you need is to map the fields of `Employee` into `EmployeeDTO` properly. Handling the case where the `Optional` returned from `employeeRepository#findByUuid` is actually empty can be left to the calls chained on the `Optional`. There is no need for `orElse` or `findFirst` calls.
Assuming the following classes both with all-args constructor and getters:
```
class Employee {
private final long id;
private final String firstName;
private final String lastName;
}
```
```
class EmployeeDTO {
private final long id;
private final String name;
private final String surname;
}
```
... you can do the following. Nothing more than a way to create an `EmployeeDTO` from `Employee`'s fields is needed. If the `Optional` returned from the `employeeRepository` is empty, no mapping happens and an empty `Optional` is returned.
```
@Override
@LogExecution
@Transactional(readOnly = true)
public Optional<EmployeeDTO> findByUuid(UUID uuid) {
return employeeRepository
.findByUuid(uuid) // Optional<Employee>
.map(emp -> new EmployeeDTO( // Optional<EmployeeDTO>
emp.getId(), // .. id -> id
emp.getFirstName(), // .. firstName -> name
emp.getLastName())); // .. lastName -> surname
}
```
Note: For `Employee` -> `EmployeeDTO` mapping I recommend picking one of these:
* Create a constructor accepting `Employee` in `EmployeeDTO` allowing to map with `.map(EmployeeDTO::new)` (drawback: creates a dependency).
* Just map with getters/setters.
* Use a mapping framework such as [MapStruct](https://mapstruct.org/) or any other.
|
There are multiple options to map your entity to a DTO.
1. Using projections: Your repository can directly return a DTO by using projections. This might be the best option if you don't need the entity at all. You can find everything about projections here <https://docs.spring.io/spring-data/jpa/docs/current/reference/html/#projections>
2. Using a library like [mapstruct](https://mapstruct.org/) or [modelmapper](http://modelmapper.org/) to generate your mapping code
3. Add a constructor or static factory method to your DTO. Something like
```java
class EmployeeDTO {
// fields here ...
public static EmployeeDTO ofEntity(Employee entity) {
var dto = new EmployeeDTO();
// set fields
return dto;
}
}
```
And call `employee.map(EmployeeDTO::ofEntity)` in your service.
|
919,030
|
Prove that the sequence {(2n+1)/n} is Cauchy.
I understand the definition of a Cauchy sequence; however, I'm not sure how to find the necessary value of N to complete the proof.
I know that you can simply prove that the sequence is Cauchy by noting that it converges to 2. But for this specific problem we are asked to use strictly the definition of a Cauchy sequence in writing the proof.
Thanks for the help in advance.
|
2014/09/04
|
[
"https://math.stackexchange.com/questions/919030",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/173692/"
] |
**Hint:** Without loss of generality, suppose $m>n$. Then
$$\Bigg\vert \frac{2n+1}{n}- \frac{2m+1}{m}\Bigg\vert= \Bigg\vert \frac{m-n}{mn}\Bigg\vert\le\Bigg\vert \frac{m}{mn}\Bigg\vert=\frac{1}{n}$$
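To turn the hint into a complete definition-based proof (this is one standard choice of $N$): given $\varepsilon>0$, choose an integer $N>\frac{1}{\varepsilon}$. Then for all $m>n\ge N$,
$$\Bigg\vert \frac{2n+1}{n}- \frac{2m+1}{m}\Bigg\vert\le\frac{1}{n}\le\frac{1}{N}<\varepsilon,$$
so the sequence is Cauchy.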
|
Assuming that you're considering this sequence as a sequence in $\mathbb{R}$, the result follows from the fact that $\{ \frac{2n+1}{n}\}\_{n=1}^\infty$ is a convergent sequence (it converges to $2$) together with the fact that any convergent sequence is necessarily Cauchy.
|
919,030
|
Prove that the sequence {(2n+1)/n} is Cauchy.
I understand the definition of a Cauchy sequence; however, I'm not sure how to find the necessary value of N to complete the proof.
I know that you can simply prove that the sequence is Cauchy by noting that it converges to 2. But for this specific problem we are asked to use strictly the definition of a Cauchy sequence in writing the proof.
Thanks for the help in advance.
|
2014/09/04
|
[
"https://math.stackexchange.com/questions/919030",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/173692/"
] |
There is a theorem: "a sequence is convergent iff it is a Cauchy sequence",
so it suffices to prove that the given sequence is convergent, which implies that it is Cauchy: $$a\_n=2+\frac{1}{n}\to 2,$$ i.e. the sequence is convergent and hence a Cauchy sequence.
|
Assuming that you're considering this sequence as a sequence in $\mathbb{R}$, the result follows from the fact that $\{ \frac{2n+1}{n}\}\_{n=1}^\infty$ is a convergent sequence (it converges to $2$) together with the fact that any convergent sequence is necessarily Cauchy.
|
919,030
|
Prove that the sequence {(2n+1)/n} is Cauchy.
I understand the definition of a Cauchy sequence; however, I'm not sure how to find the necessary value of N to complete the proof.
I know that you can simply prove that the sequence is Cauchy by noting that it converges to 2. But for this specific problem we are asked to use strictly the definition of a Cauchy sequence in writing the proof.
Thanks for the help in advance.
|
2014/09/04
|
[
"https://math.stackexchange.com/questions/919030",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/173692/"
] |
**Hint:** Without loss of generality, suppose $m>n$. Then
$$\Bigg\vert \frac{2n+1}{n}- \frac{2m+1}{m}\Bigg\vert= \Bigg\vert \frac{m-n}{mn}\Bigg\vert\le\Bigg\vert \frac{m}{mn}\Bigg\vert=\frac{1}{n}$$
|
There is a theorem: "a sequence is convergent iff it is a Cauchy sequence",
so it suffices to prove that the given sequence is convergent, which implies that it is Cauchy: $$a\_n=2+\frac{1}{n}\to 2,$$ i.e. the sequence is convergent and hence a Cauchy sequence.
|
30,446,531
|
For a sample dataframe:
```
df1 <- structure(list(i.d = structure(1:9, .Label = c("a", "b", "c",
"d", "e", "f", "g", "h", "i"), class = "factor"), group = c(1L,
1L, 2L, 1L, 3L, 3L, 2L, 2L, 1L), cat = c(0L, 0L, 1L, 1L, 0L,
0L, 1L, 0L, NA)), .Names = c("i.d", "group", "cat"), class = "data.frame", row.names = c(NA,
-9L))
```
I wish to add an additional column to my dataframe ("pc.cat") which records the percentage of 1s in column cat BY the group ID variable.
For example, there are four values in group 1 (i.d's a, b, d and i). Value 'i' is NA so this can be ignored for now. Only one of the three values left is one, so the percentage would read 33.33 (to 2 dp). This value will be populated into column 'pc.cat' next to all the rows with '1' in the group (even the NA columns). The process would then be repeated for the other groups (2 and 3).
If anyone could help me with the code for this I would greatly appreciate it.
|
2015/05/25
|
[
"https://Stackoverflow.com/questions/30446531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073425/"
] |
This can be accomplished with the `ave` function:
```
df1$pc.cat <- ave(df1$cat, df1$group, FUN=function(x) 100*mean(na.omit(x)))
df1
# i.d group cat pc.cat
# 1 a 1 0 33.33333
# 2 b 1 0 33.33333
# 3 c 2 1 66.66667
# 4 d 1 1 33.33333
# 5 e 3 0 0.00000
# 6 f 3 0 0.00000
# 7 g 2 1 66.66667
# 8 h 2 0 66.66667
# 9 i 1 NA 33.33333
```
|
With data.table:
```
library(data.table)
DT <- data.table(df1)
# NA values are excluded from both numerator and denominator
DT[, list(pc.cat = 100 * sum(na.omit(cat)) / sum(!is.na(cat))), by = "group"]
```
|
30,446,531
|
For a sample dataframe:
```
df1 <- structure(list(i.d = structure(1:9, .Label = c("a", "b", "c",
"d", "e", "f", "g", "h", "i"), class = "factor"), group = c(1L,
1L, 2L, 1L, 3L, 3L, 2L, 2L, 1L), cat = c(0L, 0L, 1L, 1L, 0L,
0L, 1L, 0L, NA)), .Names = c("i.d", "group", "cat"), class = "data.frame", row.names = c(NA,
-9L))
```
I wish to add an additional column to my dataframe ("pc.cat") which records the percentage of 1s in column cat BY the group ID variable.
For example, there are four values in group 1 (i.d's a, b, d and i). Value 'i' is NA so this can be ignored for now. Only one of the three values left is one, so the percentage would read 33.33 (to 2 dp). This value will be populated into column 'pc.cat' next to all the rows with '1' in the group (even the NA columns). The process would then be repeated for the other groups (2 and 3).
If anyone could help me with the code for this I would greatly appreciate it.
|
2015/05/25
|
[
"https://Stackoverflow.com/questions/30446531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073425/"
] |
This can be accomplished with the `ave` function:
```
df1$pc.cat <- ave(df1$cat, df1$group, FUN=function(x) 100*mean(na.omit(x)))
df1
# i.d group cat pc.cat
# 1 a 1 0 33.33333
# 2 b 1 0 33.33333
# 3 c 2 1 66.66667
# 4 d 1 1 33.33333
# 5 e 3 0 0.00000
# 6 f 3 0 0.00000
# 7 g 2 1 66.66667
# 8 h 2 0 66.66667
# 9 i 1 NA 33.33333
```
|
```
library(data.table)
setDT(df1)
df1[!is.na(cat), 100 * mean(cat), by = group]
```
|
30,446,531
|
For a sample dataframe:
```
df1 <- structure(list(i.d = structure(1:9, .Label = c("a", "b", "c",
"d", "e", "f", "g", "h", "i"), class = "factor"), group = c(1L,
1L, 2L, 1L, 3L, 3L, 2L, 2L, 1L), cat = c(0L, 0L, 1L, 1L, 0L,
0L, 1L, 0L, NA)), .Names = c("i.d", "group", "cat"), class = "data.frame", row.names = c(NA,
-9L))
```
I wish to add an additional column to my dataframe ("pc.cat") which records the percentage of 1s in column cat BY the group ID variable.
For example, there are four values in group 1 (i.d's a, b, d and i). Value 'i' is NA so this can be ignored for now. Only one of the three values left is one, so the percentage would read 33.33 (to 2 dp). This value will be populated into column 'pc.cat' next to all the rows with '1' in the group (even the NA columns). The process would then be repeated for the other groups (2 and 3).
If anyone could help me with the code for this I would greatly appreciate it.
|
2015/05/25
|
[
"https://Stackoverflow.com/questions/30446531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073425/"
] |
```
library(data.table)
setDT(df1)
df1[!is.na(cat), 100 * mean(cat), by = group]
```
|
With data.table:
```
library(data.table)
DT <- data.table(df1)
# NA values are excluded from both numerator and denominator
DT[, list(pc.cat = 100 * sum(na.omit(cat)) / sum(!is.na(cat))), by = "group"]
```
|
26,030
|
I often hear people say
>
> 你给我放聪明点
>
>
>
or
>
> 你给我放老实点
>
>
>
I understand 放 usually means to place or to release, but in this context it is really confusing. Do people mean to place things clever or honest in the speaker?
Another example is 放肆. The 放 is hard to understand.
|
2017/08/16
|
[
"https://chinese.stackexchange.com/questions/26030",
"https://chinese.stackexchange.com",
"https://chinese.stackexchange.com/users/17398/"
] |
放老实点 roughly means "behave yourself" (literally, "put yourself in an honest way"). Similarly, 放聪明点 means "act smart / wise up".
放肆 describes behavior that crosses a line people cannot tolerate; calling someone 放肆 is a way of talking down to them.
放 is such a common word that it is used in many circumstances, and a dictionary will list a lot of different meanings for it.
For example, 放 means something different in each of 放鞭炮, 放牛, 放纵, 放荡 and 绽放.
|
'放 usually means to place or to release' — your understanding is right, but 放肆 cannot be split apart, because 放肆 is a single unit. For example, 你太放肆了 translates into English as "you are really out of line" (seriously lacking in courtesy).
In 你给我放聪明点 or 你给我放老实点, 放 expresses a rebuke.
|
13,895,605
|
Title says it all. I have some code which is included below and I am wondering how I would go about obtaining the statistics/information related to the threads (i.e. how many different threads are running, names of the different threads). For consistency's sake, imagine the code is run using `22 33 44 55` as command line arguments.
I am also wondering what the purpose of the try blocks are in this particular example. I understand what try blocks do in general, but specifically what do the try blocks do for the threads.
```
public class SimpleThreads {
//Display a message, preceded by the name of the current thread
static void threadMessage(String message) {
long threadName = Thread.currentThread().getId();
System.out.format("id is %d: %s%n", threadName, message);
}
private static class MessageLoop implements Runnable {
String info[];
MessageLoop(String x[]) {
info = x;
}
public void run() {
try {
for (int i = 1; i < info.length; i++) {
//Pause for 4 seconds
Thread.sleep(4000);
//Print a message
threadMessage(info[i]);
}
} catch (InterruptedException e) {
threadMessage("I wasn't done!");
}
}
}
public static void main(String args[])throws InterruptedException {
//Delay, in milliseconds before we interrupt MessageLoop
//thread (default one minute).
long extent = 1000 * 60;//one minute
String[] nargs = {"33","ONE", "TWO"};
if (args.length != 0) nargs = args;
else System.out.println("assumed: java SimpleThreads 33 ONE TWO");
try {
extent = Long.parseLong(nargs[0]) * 1000;
} catch (NumberFormatException e) {
System.err.println("First Argument must be an integer.");
System.exit(1);
}
threadMessage("Starting MessageLoop thread");
long startTime = System.currentTimeMillis();
Thread t = new Thread(new MessageLoop(nargs));
t.start();
threadMessage("Waiting for MessageLoop thread to finish");
//loop until MessageLoop thread exits
int seconds = 0;
while (t.isAlive()) {
threadMessage("Seconds: " + seconds++);
//Wait maximum of 1 second for MessageLoop thread to
//finish.
t.join(1000);
if (((System.currentTimeMillis() - startTime) > extent) &&
t.isAlive()) {
threadMessage("Tired of waiting!");
t.interrupt();
//Shouldn't be long now -- wait indefinitely
t.join();
}
}
threadMessage("All done!");
}
}
```
|
2012/12/15
|
[
"https://Stackoverflow.com/questions/13895605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1906808/"
] |
You can use [VisualVM](http://visualvm.java.net/threads.html) for thread monitoring; it is included in JDK 6 update 7 and later. You can find VisualVM in the JDK's `bin` folder.
>
> VisualVM presents data for local and remote applications in a tab
> specific for that application. Application tabs are displayed in the
> main window to the right of the Applications window. You can have
> multiple application tabs open at one time. Each application tab
> contains sub-tabs that display different types of information about
> the application.**VisualVM displays real-time, high-level data on
> thread activity in the Threads tab.**
>
>
>

|
For the first issue:
Consider using [VisualVM](http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/index.html) to monitor those threads, or just use your IDE's debugger (Eclipse has such a function, imo).
```
I am also wondering what the purpose of the try blocks are in this particular example.
```
An `InterruptedException` occurs if `Thread.interrupt()` is called while a thread is sleeping. The `Thread.sleep()` call is then interrupted and the thread jumps into the catch block.
In your example your thread sleeps for 4 seconds. If another thread invokes `Thread.interrupt()` on your sleeping one, it will then execute `threadMessage("I wasn't done!");`.
So, as you might have understood by now, the catch block handles the exception thrown by the `sleep()` method, not an exception thrown by the thread itself. `sleep()` throws a checked exception which you are forced to catch.
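If you want the thread statistics programmatically rather than through VisualVM or a debugger, the JDK's `Thread.getAllStackTraces()` returns a snapshot of every live thread in the JVM. A minimal sketch (the class and method names here are illustrative, not part of your code):

```java
import java.util.Map;

public class ThreadLister {
    // Print id, name, state and daemon flag of every live thread,
    // then return how many there were in the snapshot.
    static int listThreads() {
        Map<Thread, StackTraceElement[]> threads = Thread.getAllStackTraces();
        for (Thread t : threads.keySet()) {
            System.out.format("id=%d name=%s state=%s daemon=%b%n",
                    t.getId(), t.getName(), t.getState(), t.isDaemon());
        }
        return threads.size();
    }

    public static void main(String[] args) {
        System.out.println("live threads: " + listThreads());
    }
}
```

Calling `listThreads()` while your `MessageLoop` thread is alive should show at least the `main` thread and the worker thread, plus JVM housekeeping threads.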
|
13,895,605
|
Title says it all. I have some code which is included below and I am wondering how I would go about obtaining the statistics/information related to the threads (i.e. how many different threads are running, names of the different threads). For consistency's sake, imagine the code is run using `22 33 44 55` as command line arguments.
I am also wondering what the purpose of the try blocks are in this particular example. I understand what try blocks do in general, but specifically what do the try blocks do for the threads.
```
public class SimpleThreads {
//Display a message, preceded by the name of the current thread
static void threadMessage(String message) {
long threadName = Thread.currentThread().getId();
System.out.format("id is %d: %s%n", threadName, message);
}
private static class MessageLoop implements Runnable {
String info[];
MessageLoop(String x[]) {
info = x;
}
public void run() {
try {
for (int i = 1; i < info.length; i++) {
//Pause for 4 seconds
Thread.sleep(4000);
//Print a message
threadMessage(info[i]);
}
} catch (InterruptedException e) {
threadMessage("I wasn't done!");
}
}
}
public static void main(String args[])throws InterruptedException {
//Delay, in milliseconds before we interrupt MessageLoop
//thread (default one minute).
long extent = 1000 * 60;//one minute
String[] nargs = {"33","ONE", "TWO"};
if (args.length != 0) nargs = args;
else System.out.println("assumed: java SimpleThreads 33 ONE TWO");
try {
extent = Long.parseLong(nargs[0]) * 1000;
} catch (NumberFormatException e) {
System.err.println("First Argument must be an integer.");
System.exit(1);
}
threadMessage("Starting MessageLoop thread");
long startTime = System.currentTimeMillis();
Thread t = new Thread(new MessageLoop(nargs));
t.start();
threadMessage("Waiting for MessageLoop thread to finish");
//loop until MessageLoop thread exits
int seconds = 0;
while (t.isAlive()) {
threadMessage("Seconds: " + seconds++);
//Wait maximum of 1 second for MessageLoop thread to
//finish.
t.join(1000);
if (((System.currentTimeMillis() - startTime) > extent) &&
t.isAlive()) {
threadMessage("Tired of waiting!");
t.interrupt();
//Shouldn't be long now -- wait indefinitely
t.join();
}
}
threadMessage("All done!");
}
}
```
|
2012/12/15
|
[
"https://Stackoverflow.com/questions/13895605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1906808/"
] |
You can use [VisualVM](http://visualvm.java.net/threads.html) for thread monitoring; it is included in JDK 6 update 7 and later. You can find VisualVM in the JDK's `bin` folder.
>
> VisualVM presents data for local and remote applications in a tab
> specific for that application. Application tabs are displayed in the
> main window to the right of the Applications window. You can have
> multiple application tabs open at one time. Each application tab
> contains sub-tabs that display different types of information about
> the application.**VisualVM displays real-time, high-level data on
> thread activity in the Threads tab.**
>
>
>

|
If you are not able to use tools like VisualVM (which is very useful, IMHO), you can also dump the thread stack in Java, e.g. to your logfile. I use such dumps in my server programs when certain thresholds are crossed, and I found taking such snapshots as part of the program very helpful: it gives you some hints about what happened before the system crashed, when it is too late to use profilers (deadlock, OutOfMemory, slowdown etc.). Have a look here for the code: [Trigger complete stack dump programmatically?](https://stackoverflow.com/questions/12965836/trigger-complete-stack-dump-programmatically/12970158#12970158)
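As a sketch of what such a programmatic dump can look like (the class name `StackDump` is just illustrative), the `java.lang.management` API can produce a jstack-style dump without any external tool:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class StackDump {
    // Build a jstack-like dump of all live threads as one string,
    // suitable for writing to a logfile when a threshold is crossed.
    static String dumpAllThreads() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        // false, false: skip monitor/synchronizer details for portability
        for (java.lang.management.ThreadInfo info : bean.dumpAllThreads(false, false)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dumpAllThreads());
    }
}
```

Each `ThreadInfo` line includes the thread's name, id, state, and stack trace, much like the output of `jstack <pid>`.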
|
13,895,605
|
Title says it all. I have some code which is included below and I am wondering how I would go about obtaining the statistics/information related to the threads (i.e. how many different threads are running, names of the different threads). For consistency's sake, imagine the code is run using `22 33 44 55` as command line arguments.
I am also wondering what the purpose of the try blocks are in this particular example. I understand what try blocks do in general, but specifically what do the try blocks do for the threads.
```
public class SimpleThreads {
//Display a message, preceded by the name of the current thread
static void threadMessage(String message) {
long threadName = Thread.currentThread().getId();
System.out.format("id is %d: %s%n", threadName, message);
}
private static class MessageLoop implements Runnable {
String info[];
MessageLoop(String x[]) {
info = x;
}
public void run() {
try {
for (int i = 1; i < info.length; i++) {
//Pause for 4 seconds
Thread.sleep(4000);
//Print a message
threadMessage(info[i]);
}
} catch (InterruptedException e) {
threadMessage("I wasn't done!");
}
}
}
public static void main(String args[])throws InterruptedException {
//Delay, in milliseconds before we interrupt MessageLoop
//thread (default one minute).
long extent = 1000 * 60;//one minute
String[] nargs = {"33","ONE", "TWO"};
if (args.length != 0) nargs = args;
else System.out.println("assumed: java SimpleThreads 33 ONE TWO");
try {
extent = Long.parseLong(nargs[0]) * 1000;
} catch (NumberFormatException e) {
System.err.println("First Argument must be an integer.");
System.exit(1);
}
threadMessage("Starting MessageLoop thread");
long startTime = System.currentTimeMillis();
Thread t = new Thread(new MessageLoop(nargs));
t.start();
threadMessage("Waiting for MessageLoop thread to finish");
//loop until MessageLoop thread exits
int seconds = 0;
while (t.isAlive()) {
threadMessage("Seconds: " + seconds++);
//Wait maximum of 1 second for MessageLoop thread to
//finish.
t.join(1000);
if (((System.currentTimeMillis() - startTime) > extent) &&
t.isAlive()) {
threadMessage("Tired of waiting!");
t.interrupt();
//Shouldn't be long now -- wait indefinitely
t.join();
}
}
threadMessage("All done!");
}
}
```
|
2012/12/15
|
[
"https://Stackoverflow.com/questions/13895605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1906808/"
] |
For the first issue:
Consider using [VisualVM](http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/index.html) to monitor those threads, or just use your IDE's debugger (Eclipse has such a function, imo).
```
I am also wondering what the purpose of the try blocks are in this particular example.
```
An `InterruptedException` occurs if `Thread.interrupt()` is called while a thread is sleeping. The `Thread.sleep()` call is then interrupted and the thread jumps into the catch block.
In your example your thread sleeps for 4 seconds. If another thread invokes `Thread.interrupt()` on your sleeping one, it will then execute `threadMessage("I wasn't done!");`.
So, as you might have understood by now, the catch block handles the exception thrown by the `sleep()` method, not an exception thrown by the thread itself. `sleep()` throws a checked exception which you are forced to catch.
|
If you are not able to use tools like VisualVM (which is very useful, IMHO), you can also dump the thread stack in Java, e.g. to your logfile. I use such dumps in my server programs when certain thresholds are crossed, and I found taking such snapshots as part of the program very helpful: it gives you some hints about what happened before the system crashed, when it is too late to use profilers (deadlock, OutOfMemory, slowdown etc.). Have a look here for the code: [Trigger complete stack dump programmatically?](https://stackoverflow.com/questions/12965836/trigger-complete-stack-dump-programmatically/12970158#12970158)
|
3,334,991
|
Let $(X\_j)\_{j\in J}$ be a family of nonempty toplogical spaces. Set $X=\prod\_{j\in J} X\_j$. Now let $A\_j\subseteq X\_j$ be closed.
It is stated that $A\_j\times\prod\_{i\neq j} X\_i\subseteq X$ is closed.
How can we see this? Or in particular how can we deduce that the complement $(A\_j\times\prod\_{i\neq j} X\_i)^c\subseteq X$ is open?
Because taking the complement of an infinite product gets messy, doesn't it?
Is it: $\bigcup\_{j\in J} A\_j^c\times\prod\_{i\neq j} X\_i$ Where this would be indeed an open set.
Oh, I think I finally understand why intersections are involved most of the time when such closed sets in the product topology are expressed via set equalities.
Would this be correct?
Are there other things good to know about product sets?
|
2019/08/26
|
[
"https://math.stackexchange.com/questions/3334991",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/439383/"
] |
The best thing to keep in mind about the product topology, imo, is the following “universal” property. “The product topology is the coarsest topology such that all the projections are continuous.”
The “coarsest” part is hard to wrap your head around, but it is certainly true that the projection maps are all continuous. So in particular, if $\pi\_j$ is the projection to $X\_j$, it should be true that $\pi\_j^{-1}(A\_j)$ is closed. But this is exactly
$$
A\_j \times \prod\_{i\neq j} X\_i,
$$
so sets of this form are closed. Similarly, preimages of open sets $U\_j\subseteq X\_j$ are of the form
$$
U\_j \times \prod\_{i\neq j} X\_i
$$
and so things of this form are open. These are often called “open (or closed) cylinders.”
The “coarsest” part of the definition means we want the product topology to have as few open (or closed) sets as possible while keeping the $\pi\_j$ all continuous. So we need cylinder sets to be open, and any arbitrary union or finite intersection of them should also be open. But this is all. Consider a product of open sets in each component:
$$
\prod\_i U\_i, \quad U\_i \subseteq X\_i.
$$
Any finite intersection or union of cylinder sets will still contain the full $X\_i$ in all but finitely many components. So the product above can only be open if $U\_i \subsetneq X\_i$ for finitely many $i$, and $U\_i = X\_i$ otherwise.
Addendum:
Complements in the product topology can be messy, but in this case are not:
$$(A\_j \times \prod\_{i \neq j} X\_i)^C = A\_j^C \times \prod\_{i \neq j} X\_i,$$
which is clearly open since it is $\pi\_j^{-1}(A\_j^C)$. "The complement of a closed cylinder is an open cylinder." We can check it manually: both sides are the set of all tuples $(a\_i)\_i$ such that $a\_j \not\in A\_j$.
|
Note that by definition $X:= \prod\_{j \in J} X\_j$ is the cartesian product of the $X\_j$ with the topology given by the coarsest topology s.t. the projections $\pi\_j: X \rightarrow X\_j$ are continuous $\forall j \in J$.
Then $A\_j \times \prod\_{j \in J, j \neq i} X\_j = \pi^{-1}\_j (A\_j)$ and since $A\_j$ is closed in $X\_j$, its preimage is closed in $\prod\_{j \in J} X\_j$.
---
In general, I would recommend to (at least when there are infinite products involved) rely on the universal constructions i.e. to work with morphisms.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
Try `--allow-unrelated-histories`
Like max630 commented, or as explained here [Git refusing to merge unrelated histories](https://stackoverflow.com/questions/37937984/git-refusing-to-merge-unrelated-histories)
|
Execute the following command:
```
git pull origin master --allow-unrelated-histories
```
A vim window for the merge commit message will open. Add a merge message, then:
1. Press ESC
2. Type `:` (Shift + ';')
3. Type `wq` and press Enter.
And you are good to go.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
If there is not substantial history on one end (aka if it is just a single readme commit on the github end), I often find it easier to manually copy the readme to my local repo and do a `git push -f` to make my version the new root commit.
I find it is slightly less complicated, doesn't require remembering an obscure flag, and keeps the history a bit cleaner.
|
When I used `--allow-unrelated-histories`, the command generated too many conflicts — there were conflicts even in files I hadn't worked on. To get past the error `"refusing to merge unrelated histories"`, I used the following rebase command instead:
```
git pull --rebase=preserve --allow-unrelated-histories
```
After this, commit the uncommitted changes with a commit message. Finally, run the following command:
```
git rebase --continue
```
After this, my working copy was up-to-date with the remote copy and I was able to push my changes as before. No more unrelated histories error while pulling.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
On your branch - say master, pull and allow unrelated histories
```
git pull origin master --allow-unrelated-histories
```
Worked for me.
|
Execute the following command:
```
git pull origin master --allow-unrelated-histories
```
A vim window for the merge commit message will open. Add a merge message, then:
1. Press ESC
2. Type `:` (Shift + ';')
3. Type `wq` and press Enter.
And you are good to go.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
While I'm all for unblocking people's work issues, I don't think `push --force` or `--allow-unrelated-histories` should be taught to new users as general solutions, because they can cause real havoc in a repository when used without understanding why things aren't working in the first place.
When you have a situation like this where you started with a local repository, and want to make a remote on GitHub to share your work with, there is something to watch out for.
When you create the new online repository, there's an option "Initialize this repository with a README". If you read the fine print, it says "Skip this step if you’re importing an existing repository."
You may have checked that box. Or similarly, you made an add/commit online before you attempted an initial push. What happens is you create a unique commit history in each place and they can't be reconciled without the special allowance mentioned in Nevermore's answer (because git doesn't want you to operate that way). You can follow some of the advice mentioned here, or more simply just don't check that option next time you want to link some local files to a brand new remote; keeping the remote clean for that initial push.
Reference: my first experience with git + hub was to run into this same problem and do a lot of learning to understand what had happened and why.
|
Execute the following command:
```
git pull origin master --allow-unrelated-histories
```
A vim window for the merge commit message will open. Add a merge message, then:
1. Press ESC
2. Type `:` (Shift + ';')
3. Type `wq` and press Enter.
And you are good to go.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
```
git checkout master
git merge origin/master --allow-unrelated-histories
```
Resolve conflict, then
```
git add -A .
git commit -m "Upload"
git push
```
|
Execute the following command:
```
git pull origin master --allow-unrelated-histories
```
A vim window for the merge commit message will open. Add a merge message, then:
1. Press ESC
2. Type `:` (Shift + ';')
3. Type `wq` and press Enter.
And you are good to go.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
On your branch - say master, pull and allow unrelated histories
```
git pull origin master --allow-unrelated-histories
```
Worked for me.
|
When I used `--allow-unrelated-histories`, the command generated too many conflicts — there were conflicts even in files I hadn't worked on. To get past the error `"refusing to merge unrelated histories"`, I used the following rebase command instead:
```
git pull --rebase=preserve --allow-unrelated-histories
```
After this, commit the uncommitted changes with a commit message. Finally, run the following command:
```
git rebase --continue
```
After this, my working copy was up-to-date with the remote copy and I was able to push my changes as before. No more unrelated histories error while pulling.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
While I'm all for unblocking people's work issues, I don't think `push --force` or `--allow-unrelated-histories` should be taught to new users as general solutions, because they can cause real havoc in a repository when used without understanding why things aren't working in the first place.
When you have a situation like this where you started with a local repository, and want to make a remote on GitHub to share your work with, there is something to watch out for.
When you create the new online repository, there's an option "Initialize this repository with a README". If you read the fine print, it says "Skip this step if you’re importing an existing repository."
You may have checked that box. Or similarly, you made an add/commit online before you attempted an initial push. What happens is you create a unique commit history in each place and they can't be reconciled without the special allowance mentioned in Nevermore's answer (because git doesn't want you to operate that way). You can follow some of the advice mentioned here, or more simply just don't check that option next time you want to link some local files to a brand new remote; keeping the remote clean for that initial push.
Reference: my first experience with git + hub was to run into this same problem and do a lot of learning to understand what had happened and why.
|
When I used `--allow-unrelated-histories`, the command generated too many conflicts — there were conflicts even in files I hadn't worked on. To get past the error `"refusing to merge unrelated histories"`, I used the following rebase command instead:
```
git pull --rebase=preserve --allow-unrelated-histories
```
After this, commit the uncommitted changes with a commit message. Finally, run the following command:
```
git rebase --continue
```
After this, my working copy was up-to-date with the remote copy and I was able to push my changes as before. No more unrelated histories error while pulling.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
Try `--allow-unrelated-histories`
Like max630 commented, or as explained here [Git refusing to merge unrelated histories](https://stackoverflow.com/questions/37937984/git-refusing-to-merge-unrelated-histories)
|
If there is no substantial history on one end (i.e., if it is just a single README commit on the GitHub end), I often find it easier to manually copy the README into my local repo and do a `git push -f` to make my version the new root commit.
I find it is slightly less complicated, doesn't require remembering an obscure flag, and keeps the history a bit cleaner.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
Try `--allow-unrelated-histories`
Like max630 commented, or as explained here [Git refusing to merge unrelated histories](https://stackoverflow.com/questions/37937984/git-refusing-to-merge-unrelated-histories)
|
When I used `--allow-unrelated-histories`, this command generated too many conflicts. There were conflicts in files which I hadn't even worked on. To get past the error `"refusing to merge unrelated histories"`, I used the following rebase command:
```
git pull --rebase=preserve --allow-unrelated-histories
```
After this, commit the uncommitted changes with a commit message. Finally, run the following command:
```
git rebase --continue
```
After this, my working copy was up-to-date with the remote copy and I was able to push my changes as before. No more unrelated histories error while pulling.
|
38,255,655
|
I'm learning git, and I'm following the Git community book.
Previously (long time ago) I made a public repository on Github, with some files. Now I set up a local Git repository on my current computer, and committed some files. Then I added a remote pointing to my Github page:
```
[root@osboxes c]# git remote add learnc https://github.com/michaelklachko/Learning-C
```
That seemed to be successful:
```
[root@osboxes c]# git remote show learnc
* remote learnc
Fetch URL: https://github.com/michaelklachko/Learning-C
Push URL: https://github.com/michaelklachko/Learning-C
HEAD branch: master
Remote branch:
master tracked
Local ref configured for 'git push':
master pushes to master (local out of date)
```
Now I want to download the files from my Github repo to my computer. I did this:
```
[root@osboxes c]# git fetch learnc
[root@osboxes c]# git merge learnc/master
warning: refname 'learnc/master' is ambiguous.
Already up-to-date.
```
However, I don't see any new files in my local directory. How can I get them?
I also tried to do this:
```
[root@osboxes c]# git pull learnc master
From https://github.com/michaelklachko/Learning-C
* branch master -> FETCH_HEAD
fatal: refusing to merge unrelated histories
```
BTW, locally I'm on master branch (there are no other branches):
```
[root@osboxes c]# git status
On branch master
nothing to commit, working directory clean
```
|
2016/07/07
|
[
"https://Stackoverflow.com/questions/38255655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2722504/"
] |
Try `--allow-unrelated-histories`
Like max630 commented, or as explained here [Git refusing to merge unrelated histories](https://stackoverflow.com/questions/37937984/git-refusing-to-merge-unrelated-histories)
|
While I'm all for unblocking people's work issues, I don't think "push --force" or "--allow-unrelated-histories" should be taught to new users as general solutions, because they can cause real havoc in a repository when one uses them without understanding why things aren't working in the first place.
When you have a situation like this where you started with a local repository, and want to make a remote on GitHub to share your work with, there is something to watch out for.
When you create the new online repository, there's an option "Initialize this repository with a README". If you read the fine print, it says "Skip this step if you’re importing an existing repository."
You may have checked that box. Or similarly, you made an add/commit online before you attempted an initial push. What happens is you create a unique commit history in each place and they can't be reconciled without the special allowance mentioned in Nevermore's answer (because git doesn't want you to operate that way). You can follow some of the advice mentioned here, or more simply just don't check that option next time you want to link some local files to a brand new remote; keeping the remote clean for that initial push.
Reference: my first experience with git + hub was to run into this same problem and do a lot of learning to understand what had happened and why.
|
3,051,037
|
I am hoping that someone could review my solution. Thanks!
>
> for A (a matrix), $A:V\rightarrow V$ and every $v$ in $V$, the vectors $v, A(v), A^2(v), ..., A^k(v)$ are linearly dependent where $k$ is a natural number $\leq \dim(V)$.
>
>
> show that $A, A^2, ..., A^k$ are linearly dependent
>
>
>
Thanks for the feedback in advance!
|
2018/12/24
|
[
"https://math.stackexchange.com/questions/3051037",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/399104/"
] |
This isn't true as written. Consider the identity matrix in $\mathbb R^3$. For every $v$, $\{v, Iv\}$ is linearly dependent but $\{I\}$ isn't.
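To make the counterexample concrete, here is a quick numeric check (a sketch using NumPy; the specific vector $v$ is arbitrary):

```python
import numpy as np

# Counterexample sketch: A = I on R^3.
I = np.eye(3)
v = np.array([1.0, 2.0, 3.0])

# {v, I v} is linearly dependent: I v = v, so the 3x2 matrix [v | I v]
# has rank 1 (fewer than its 2 columns).
pair_rank = np.linalg.matrix_rank(np.column_stack([v, I @ v]))
print(pair_rank)  # 1

# {I} by itself is linearly independent: c * I = 0 forces c = 0,
# since I is not the zero matrix.
print(bool(np.any(I != 0)))  # True
```

Any nonzero $v$ gives the same rank-1 result, which is exactly the gap between the pointwise hypothesis (true for every $v$) and the matrix conclusion (false here).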
|
I think the fact that zero coefficients give a zero sum of the $A^k$ cannot prove that there is no other linear combination of the $A^k(v)$ equal to zero. You only carried the same coefficients over from $c\_1A + c\_2A^2 + ... + c\_kA^k$, but did not prove that this is forced.
|
3,051,037
|
I am hoping that someone could review my solution. Thanks!
>
> for A (a matrix), $A:V\rightarrow V$ and every $v$ in $V$, the vectors $v, A(v), A^2(v), ..., A^k(v)$ are linearly dependent where $k$ is a natural number $\leq \dim(V)$.
>
>
> show that $A, A^2, ..., A^k$ are linearly dependent
>
>
>
Thanks for the feedback in advance!
|
2018/12/24
|
[
"https://math.stackexchange.com/questions/3051037",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/399104/"
] |
This isn't true as written. Consider the identity matrix in $\mathbb R^3$. For every $v$, $\{v, Iv\}$ is linearly dependent but $\{I\}$ isn't.
|
Say $c\_0I+c\_1A+c\_2A^2+...+c\_kA^k=O$.
Post-multiplying by $v$, we have $c\_0v+c\_1Av+c\_2A^2v+...+c\_kA^kv=\bf0$. Since $v,Av,A^2v,...,A^kv$ are linearly dependent, there is a non-trivial tuple $(c\_0,c\_1,c\_2,...,c\_k)$ which satisfies the above relation. Therefore, the best you can say is that $I,A,A^2,...,A^k$ are linearly dependent.
|
5,627,819
|
When a subclass inherits **main()** from a superclass, is it possible to determine the actual class invoked on the command-line? For example, consider the following two classes, in which **main** is implemented by **A** and inherited by **B**:
```
public class A {
public static void main(String[] args) throws Exception {
// Replace with <some magic here> to determine the class
// invoked on the command-line
final Class<? extends A> c = A.class;
System.out.println("Invoked class: " + c.getName());
final A instance = c.newInstance();
// Do something with instance here...
}
}
public class B extends A {
}
```
We can invoke **B** successfully (i.e., **B** does 'inherit' **main** - at least in whatever sense static methods can be inherited), but I have not found a method to determine the actual class invoked by the user:
```
$ java -cp . A
Invoked class: A
$ java -cp . B
Invoked class: A
```
The closest I've come is to require that the subclass implement **main()** and call a helper method in the superclass, which then reads the thread stack to determine the calling class:
```
public class AByStack {
public static void run(String[] args) throws Exception {
// Read the thread stack to find the calling class
final Class<? extends AByStack> c = (Class<? extends AByStack>)
Class.forName(Thread.currentThread().getStackTrace()[2].getClassName());
System.out.println("Invoked class: " + c.getName());
final AByStack instance = c.newInstance();
// Do something with instance here...
}
public static void main(String[] args) throws Exception {
run(args);
}
}
public class BByStack extends AByStack {
public static void main(String[] args) throws Exception {
// Call the master 'run' method
run(args);
}
}
```
This method works:
```
$ java -cp . AByStack
Invoked class: AByStack
$ java -cp . BByStack
Invoked class: BByStack
```
But I'd really like to eliminate the requirement that subclasses implement **main()** (yes, call me picky...). I don't mind if it requires some ugly code, since it will be implemented once and buried in the base class, and I'm mostly interested in Sun/Oracle VMs, so I'd be willing to consider using a private sun.misc class or something similar.
But I do want to avoid platform-dependencies. For example, on Linux, we can look at */proc/self/cmdline*, but that's of course not portable to Windows (I'm not sure about Mac OS - I don't have my Mac with me at the moment to test this trick). And I think JNI and JVMTI are out for the same reason. I might be wrong about JVMTI, but it looks to me like it would require a C wrapper. If not, perhaps we could use that interface somehow.
This question was asked years ago at <http://www.coderanch.com/t/375326/java/java/Getting-command-line-class>. The best answer there required a static initializer block in each subclass - a different, but similar requirement on the subclass author to the **main** calling **run()** solution I demonstrated. But I haven't seen more recent discussions; I'm hopeful that current VMs might allow access to information that wasn't available at the time of that discussion.
|
2011/04/11
|
[
"https://Stackoverflow.com/questions/5627819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/670234/"
] |
My guess is the database you are using does not like you having a table named "user" which is what you get when you don't provide a specific name for your table. I know that Postgres does not allow for a "user" table because "user" is a keyword. I'm not sure for MySQL. Try adding the javax.persistence.Table annotation after your @Entity annotation in the User class:
```
@Entity
@Table(name = "my_user")
public class User extends Model {
...
}
```
where you give it whatever name you want that is not "user". Alternatively (but not tested) you may be able to wrap the "user" name in quotes:
```
@Entity
@Table(name = "\"user\"")
public class User extends Model {
...
}
```
|
Be careful with column names.
```
2012-08-08T12:58:29+00:00 app[web.1]: 12:58:29,513 ERROR ~ Unsuccessful: create table users (id int8 not null, email varchar(255), enabled bool not null, name varchar(255), user varchar(255), primary key (id))
2012-08-08T12:58:29+00:00 app[web.1]: 12:58:29,514 ERROR ~ ERROR: syntax error at or near "user"
```
That was my case; changing the column "user" to "userId" solved my issue.
|
9,749,655
|
Recently I read a paper that proposed an algorithm for mining Maximum Contiguous patterns from DNA data. The proposed method, which sounds pretty interesting, used the following MapReduce model:
map->map->reduce->reduce. That is, the first map phase is executed and its output is the input to the second map phase. The second map phase's output is the input to the first reduce phase. The output of the first reduce phase is the input to the second reduce phase, and finally the results are flushed to HDFS.
Although it seems like an interesting method, the paper didn't mention how they implemented it. My question is: how do you implement this sort of MapReduce chaining?
|
2012/03/17
|
[
"https://Stackoverflow.com/questions/9749655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1129408/"
] |
In Hadoop, as far as I know, you cannot do this as of now.
One approach can be to use [ChainMapper](http://hadoop.apache.org/common/docs/r0.20.1/api/org/apache/hadoop/mapred/lib/ChainMapper.html) to do the map->map->reduce part. Then, send the result of this job to another job, and set the mapper to [IdentityMapper](http://hadoop.apache.org/common/docs/r0.20.2/api/org/apache/hadoop/mapred/lib/IdentityMapper.html) and the reducer to the second phase reducer that you have.
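To make the data flow concrete, here is a hypothetical plain-Python sketch of the map->map->reduce->reduce pipeline (no Hadoop involved; the function names and the toy token-counting task are made up for illustration):

```python
from collections import defaultdict

def map1(record):                 # first map: emit (token, 1) pairs
    for token in record.split():
        yield token, 1

def map2(pair):                   # second map: normalize keys before the shuffle
    token, count = pair
    yield token.lower(), count

def shuffle(pairs):               # group values by key, like Hadoop's shuffle
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce1(key, values):         # first reduce: sum counts per token
    yield key, sum(values)

def reduce2(key, values):         # second reduce: keep tokens seen at least twice
    total = sum(values)
    if total >= 2:
        yield key, total

records = ["ACGT acgt TTGA", "ttga GGCC"]
stage1 = [out for rec in records for out in map1(rec)]
stage2 = [out for pair in stage1 for out in map2(pair)]
stage3 = [out for key, vals in shuffle(stage2) for out in reduce1(key, vals)]
result = [out for key, vals in shuffle(stage3) for out in reduce2(key, vals)]
print(result)  # [('acgt', 2), ('ttga', 2)]
```

In Hadoop terms, `map1` and `map2` would run inside one job (e.g. chained via `ChainMapper`) ending in `reduce1`, and `reduce2` would run as a second job fed by the first job's output.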
|
Please read about Apache Tez. Any combination such as M->M->R->R->R is supported there.
|
9,749,655
|
Recently I read a paper that proposed an algorithm for mining Maximum Contiguous patterns from DNA data. The proposed method, which sounds pretty interesting, used the following MapReduce model:
map->map->reduce->reduce. That is, the first map phase is executed and its output is the input to the second map phase. The second map phase's output is the input to the first reduce phase. The output of the first reduce phase is the input to the second reduce phase, and finally the results are flushed to HDFS.
Although it seems like an interesting method, the paper didn't mention how they implemented it. My question is: how do you implement this sort of MapReduce chaining?
|
2012/03/17
|
[
"https://Stackoverflow.com/questions/9749655",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1129408/"
] |
I think there are two methods to deal with your case:
1. Integrate the code of the two map functions into a single map task with two phases, and handle the two reduces the same way.
2. Divide the map-map-reduce-reduce process into two Hadoop jobs: the two maps become the first job (with the second map task recast as that job's reduce task), and the two reduces become the second job (with the first reduce task recast as that job's map task). You could use [Oozie](http://oozie.apache.org/) to manage the Hadoop workflow if you submit jobs that depend on others.
|
Please read about Apache Tez. Any combination such as M->M->R->R->R is supported there.
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
I know of three main languages used commonly in the greater programming and tech community: Textile, Markdown, and reStructuredText. All three can be learned in a couple of hours or "winged" with the cheat sheet nearby.
### Textile
* Used by Redmine and the Ruby community
* 113 questions currently tagged on Stack Overflow
* The most similar to HTML, but least readable source
* Easiest nested lists of all three languages
* Not understandable to non-programmers or the HTML-ignorant
* Ideal for complex short documents (with links, nested lists, code, custom HTML); for example: short documentation, how-tos, blog or CMS content
* [Syntax reference](http://redcloth.org/hobix.com/textile/)
### Markdown
* Doesn't seem to have a home language "community" but...
* 1274 questions tagged on Stack Overflow\*
* Emphasizes source code readability, similar to email traditions
* Straightforward HTML embedding (you just type the tags out)
* No way to make tables besides embedding HTML
* You know it already if you know Stack Overflow
* Easy to learn if you already know reStructuredText
* Automatic email address obfuscation for the format <address@example.com> (with angle brackets)
* [Syntax reference](http://daringfireball.net/projects/markdown/syntax)
### reStructuredText (A.K.A. ReST)
* Popular in the Python community
* 285 questions tagged on Stack Overflow
* A bit persnickety about whitespace and alignment if you ask me
* Lists (especially nested lists) and paragraphs always seem to get in fights
* Readable by non-programmers
* Only format which can build a table of contents (via an extension in the Python reference implementation)
* Directly converts to other formats like PDF and XML
* Ideal for large documents with lots of prose (e.g. an alternative to docbook for a user manual)
* [Syntax reference](http://docutils.sourceforge.net/docs/user/rst/quickref.html)
|
You might also consider [asciidoc](http://www.methods.co.nz/asciidoc/)
* relatively readable markup
* straightforward command-line use
* some might perceive it as relatively 'picky' (vs. flexible) with respect to syntax
* docbook and (x)html output
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
I know of three main languages used commonly in the greater programming and tech community: Textile, Markdown, and reStructuredText. All three can be learned in a couple of hours or "winged" with the cheat sheet nearby.
### Textile
* Used by Redmine and the Ruby community
* 113 questions currently tagged on Stack Overflow
* The most similar to HTML, but least readable source
* Easiest nested lists of all three languages
* Not understandable to non-programmers or the HTML-ignorant
* Ideal for complex short documents (with links, nested lists, code, custom HTML); for example: short documentation, how-tos, blog or CMS content
* [Syntax reference](http://redcloth.org/hobix.com/textile/)
### Markdown
* Doesn't seem to have a home language "community" but...
* 1274 questions tagged on Stack Overflow\*
* Emphasizes source code readability, similar to email traditions
* Straightforward HTML embedding (you just type the tags out)
* No way to make tables besides embedding HTML
* You know it already if you know Stack Overflow
* Easy to learn if you already know reStructuredText
* Automatic email address obfuscation for the format <address@example.com> (with angle brackets)
* [Syntax reference](http://daringfireball.net/projects/markdown/syntax)
### reStructuredText (A.K.A. ReST)
* Popular in the Python community
* 285 questions tagged on Stack Overflow
* A bit persnickety about whitespace and alignment if you ask me
* Lists (especially nested lists) and paragraphs always seem to get in fights
* Readable by non-programmers
* Only format which can build a table of contents (via an extension in the Python reference implementation)
* Directly converts to other formats like PDF and XML
* Ideal for large documents with lots of prose (e.g. an alternative to docbook for a user manual)
* [Syntax reference](http://docutils.sourceforge.net/docs/user/rst/quickref.html)
|
For documentation?
How about [doxygen](http://www.doxygen.nl/)?
I've used it for some C/C++ projects that I needed to document.
You can even 'abuse' it, just as the doxygen author does for doxygen's own documentation.
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
I know of three main languages used commonly in the greater programming and tech community: Textile, Markdown, and reStructuredText. All three can be learned in a couple of hours or "winged" with the cheat sheet nearby.
### Textile
* Used by Redmine and the Ruby community
* 113 questions currently tagged on Stack Overflow
* The most similar to HTML, but least readable source
* Easiest nested lists of all three languages
* Not understandable to non-programmers or the HTML-ignorant
* Ideal for complex short documents (with links, nested lists, code, custom HTML); for example: short documentation, how-tos, blog or CMS content
* [Syntax reference](http://redcloth.org/hobix.com/textile/)
### Markdown
* Doesn't seem to have a home language "community" but...
* 1274 questions tagged on Stack Overflow\*
* Emphasizes source code readability, similar to email traditions
* Straightforward HTML embedding (you just type the tags out)
* No way to make tables besides embedding HTML
* You know it already if you know Stack Overflow
* Easy to learn if you already know reStructuredText
* Automatic email address obfuscation for the format <address@example.com> (with angle brackets)
* [Syntax reference](http://daringfireball.net/projects/markdown/syntax)
### reStructuredText (A.K.A. ReST)
* Popular in the Python community
* 285 questions tagged on Stack Overflow
* A bit persnickety about whitespace and alignment if you ask me
* Lists (especially nested lists) and paragraphs always seem to get in fights
* Readable by non-programmers
* Only format which can build a table of contents (via an extension in the Python reference implementation)
* Directly converts to other formats like PDF and XML
* Ideal for large documents with lots of prose (e.g. an alternative to docbook for a user manual)
* [Syntax reference](http://docutils.sourceforge.net/docs/user/rst/quickref.html)
|
I am attempting to cover all the various lightweight markup languages here:
<http://www.subspacefield.org/~travis/static_blog_generators.html>
As you can see, it started with "how can I make a secure blog" - i.e. one that generates static HTML, and I found myself ensnared in markup languages, templating systems, etc.
**Update**
I refocused only on LWMLs with Python implementations, and they're here:
<http://www.subspacefield.org/~travis/python_lightweight_markup_languages.html>
So far I've tried markdown and ReST, and I like the latter better for anything but
HTML snippets embedded in other pages. Tables, cross refs, indirect links, etc...
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
I know of three main languages used commonly in the greater programming and tech community: Textile, Markdown, and reStructuredText. All three can be learned in a couple of hours or "winged" with the cheat sheet nearby.
### Textile
* Used by Redmine and the Ruby community
* 113 questions currently tagged on Stack Overflow
* The most similar to HTML, but least readable source
* Easiest nested lists of all three languages
* Not understandable to non-programmers or the HTML-ignorant
* Ideal for complex short documents (with links, nested lists, code, custom HTML); for example: short documentation, how-tos, blog or CMS content
* [Syntax reference](http://redcloth.org/hobix.com/textile/)
### Markdown
* Doesn't seem to have a home language "community" but...
* 1274 questions tagged on Stack Overflow\*
* Emphasizes source code readability, similar to email traditions
* Straightforward HTML embedding (you just type the tags out)
* No way to make tables besides embedding HTML
* You know it already if you know Stack Overflow
* Easy to learn if you already know reStructuredText
* Automatic email address obfuscation for the format <address@example.com> (with angle brackets)
* [Syntax reference](http://daringfireball.net/projects/markdown/syntax)
### reStructuredText (A.K.A. ReST)
* Popular in the Python community
* 285 questions tagged on Stack Overflow
* A bit persnickety about whitespace and alignment if you ask me
* Lists (especially nested lists) and paragraphs always seem to get in fights
* Readable by non-programmers
* Only format which can build a table of contents (via an extension in the Python reference implementation)
* Directly converts to other formats like PDF and XML
* Ideal for large documents with lots of prose (e.g. an alternative to docbook for a user manual)
* [Syntax reference](http://docutils.sourceforge.net/docs/user/rst/quickref.html)
|
The Wikipedia page on [lightweight markup languages](http://en.wikipedia.org/wiki/Lightweight_markup_language) has a good comparison between the various options, as well as showing syntax for common uses (headings, bold, italics, etc.)
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
You might also consider [asciidoc](http://www.methods.co.nz/asciidoc/)
* relatively readable markup
* straightforward command-line use
* some might perceive it as relatively 'picky' (vs. flexible) with respect to syntax
* docbook and (x)html output
|
For documentation?
How about [doxygen](http://www.doxygen.nl/)?
I've used it for some C/C++ projects that I needed to document.
You can even 'abuse' it, just as the doxygen author does for doxygen's own documentation.
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
You might also consider [asciidoc](http://www.methods.co.nz/asciidoc/)
* relatively readable markup
* straightforward command-line use
* some might perceive it as relatively 'picky' (vs. flexible) with respect to syntax
* docbook and (x)html output
|
I am attempting to cover all the various lightweight markup languages here:
<http://www.subspacefield.org/~travis/static_blog_generators.html>
As you can see, it started with "how can I make a secure blog" - i.e. one that generates static HTML, and I found myself ensnared in markup languages, templating systems, etc.
**Update**
I refocused only on LWMLs with Python implementations, and they're here:
<http://www.subspacefield.org/~travis/python_lightweight_markup_languages.html>
So far I've tried markdown and ReST, and I like the latter better for anything but
HTML snippets embedded in other pages. Tables, cross refs, indirect links, etc...
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
You might also consider [asciidoc](http://www.methods.co.nz/asciidoc/)
* relatively readable markup
* straightforward command-line use
* some might perceive it as relatively 'picky' (vs. flexible) with respect to syntax
* docbook and (x)html output
|
The Wikipedia page on [lightweight markup languages](http://en.wikipedia.org/wiki/Lightweight_markup_language) has a good comparison between the various options, as well as showing syntax for common uses (headings, bold, italics, etc.)
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
I am attempting to cover all the various lightweight markup languages here:
<http://www.subspacefield.org/~travis/static_blog_generators.html>
As you can see, it started with "how can I make a secure blog" - i.e. one that generates static HTML, and I found myself ensnared in markup languages, templating systems, etc.
**Update**
I refocused only on LWMLs with Python implementations, and they're here:
<http://www.subspacefield.org/~travis/python_lightweight_markup_languages.html>
So far I've tried markdown and ReST, and I like the latter better for anything but
HTML snippets embedded in other pages. Tables, cross refs, indirect links, etc...
|
For documentation?
How about [doxygen](http://www.doxygen.nl/)?
I've used it for some C/C++ projects that I needed to document.
You can even 'abuse' it the way the doxygen author does for doxygen's own documentation.
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
The Wikipedia page on [lightweight markup languages](http://en.wikipedia.org/wiki/Lightweight_markup_language) has a good comparison between the various options, as well as showing syntax for common uses (headings, bold, italics, etc.)
|
For documentation?
How about [doxygen](http://www.doxygen.nl/)?
I've used it for some C/C++ projects that I needed to document.
You can even 'abuse' it the way the doxygen author does for doxygen's own documentation.
|
659,227
|
Please identify the most popular lightweight markup languages and compare their strengths and weaknesses. These languages should be general-purpose markup for technical prose, such as for documentation (for example, Haml doesn't count).
See also: [Markdown versus ReStructuredText](https://stackoverflow.com/questions/34276/markdown-versus-restructuredtext)
|
2009/03/18
|
[
"https://Stackoverflow.com/questions/659227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2938/"
] |
The Wikipedia page on [lightweight markup languages](http://en.wikipedia.org/wiki/Lightweight_markup_language) has a good comparison between the various options, as well as showing syntax for common uses (headings, bold, italics, etc.)
|
I am attempting to cover all the various lightweight markup languages here:
<http://www.subspacefield.org/~travis/static_blog_generators.html>
As you can see, it started with "how can I make a secure blog" - i.e. one that generates static HTML, and I found myself ensnared in markup languages, templating systems, etc.
**Update**
I refocused only on LWMLs with python implementations, and they're here:
<http://www.subspacefield.org/~travis/python_lightweight_markup_languages.html>
So far I've tried markdown and ReST, and I like the latter better for anything but
HTML snippets embedded in other pages. Tables, cross refs, indirect links, etc...
|
46,026,208
|
I have the below anonymous block where I am using a cursor to generate the result set and save it into a `TEST_REPORT.csv` file. But I am getting an error on the line where I am using spool:
```
PLS-00103: Encountered the symbol "H" when expecting one of the following:
:= . ( @ % ;
```
I believe I cannot use spool in PL/SQL, but I'm not sure, so I have tried the code below. I also cannot use UTL\_FILE or UTL\_FILE\_DIR logic because of security restrictions on Production. As employees work in different departments, I want to generate a separate csv file for each employee with respect to their department.
Is there any way I can break this code apart and use spool to generate the csv files, or any other logic? Also, if it is not possible to use spool in an anonymous block, can I use it during execution of this code to generate the files?
If spool is not possible, can I organize my query result so that it is easy to export into a single csv file after executing this anonymous block, and then manually split that single file into multiple files by employee and department?
Generate-And-Run.sql file
```
SET SERVEROUTPUT ON;
set verify off
SET LONG 100000
SET lines 1000
SET sqlformat SELECT;
SPOOL C:\Loop-Flattener.sql;
PROMPT VAR V_A VARCHAR2(64);
BEGIN
FOR TARGET_POINTER IN (select ID,
name,
ST_ID
from TEST_REPORT
where rownum <5)
LOOP
DBMS_OUTPUT.PUT_LINE('DEFINE TARGET = '''||TARGET_POINTER.ID||''';');
DBMS_OUTPUT.PUT_LINE('EXEC :V_A := '''||TARGET_POINTER.ID||'''; ');
DBMS_OUTPUT.PUT_LINE('@@Target-Csv-Generator.sql;');
END LOOP;
END;
/
SPOOL OFF;
```
|
2017/09/03
|
[
"https://Stackoverflow.com/questions/46026208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4508605/"
] |
As OldProgrammer mentioned, you cannot call `SPOOL` within `PL/SQL`. But you can print to a spooled file from within `PL/SQL`, and can pre-compile/flatten the loop into a delegating intermediate worker script that repeatedly calls the csv generator.
**Update**. In response to the specifics you provided, the following should work for you.
Example Test data:
```
INSERT INTO CSS_BOUTIQUE VALUES ('EUR', 'Belgium', 'a0Hb0000006LLdQ');
INSERT INTO CSS_BOUTIQUE VALUES ('EUR', 'Portugal', 'a0HB0000006LLOG');
INSERT INTO CSS_BOUTIQUE VALUES ('EUR', 'Portugal', 'a0HB0000006LLYu');
INSERT INTO CSS_BOUTIQUE VALUES ('ASIA', 'Korea', 'e0HB0000016MEIi');
INSERT INTO CSS_BOUTIQUE VALUES ('ASIA', 'China', 'e0HB0000026MEIi');
INSERT INTO CSS_BOUTIQUE VALUES ('ASIA', 'Japan', 'e0HB0000036MEIi');
INSERT INTO CSS_BOUTIQUE VALUES ('SA', 'Chile', 's0HB0000016MEIi');
INSERT INTO CSS_BOUTIQUE VALUES ('SA', 'Argentina', 's0HB0000026MEIi');
INSERT INTO CSS_BOUTIQUE VALUES ('SA', 'Equador', 's0HB0000036MEIi');
```
Then create the following two scripts:
**Generate-And-Run.sql** This script will create a flat, pseudo-loop by generating an intermediate script filled with iterative commands to set new variable names and call a reusable csv-generator.
```
SET ECHO OFF;
SET FEEDBACK OFF;
SET HEAD OFF;
SET LIN 256;
SET TRIMSPOOL ON;
SET WRAP OFF;
SET PAGES 0;
SET TERM OFF;
SET SERVEROUTPUT ON;
SPOOL Loop-Flattener.sql;
PROMPT VAR V_ZONE_NAME VARCHAR2(64);
BEGIN
FOR TARGET_POINTER IN (SELECT DISTINCT ZONE FROM CSS_BOUTIQUE)
LOOP
DBMS_OUTPUT.PUT_LINE('DEFINE TARGET = '''||TARGET_POINTER.ZONE||''';');
DBMS_OUTPUT.PUT_LINE('EXEC :V_ZONE_NAME := '''||TARGET_POINTER.ZONE||'''; ');
DBMS_OUTPUT.PUT_LINE('@@Target-Csv-Generator.sql;');
END LOOP;
END;
/
SPOOL OFF;
@@Loop-Flattener.sql;
```
**Target-Csv-Generator.sql**: This script will do the work of generating a single csv. Please note, the report-name here is a simple `REPORT_FOR...` without any additional path to help ensure it gets created in the working directory.
```
SPOOL REPORT_FOR_&&TARGET..csv;
PROMPT zone,market, boutique_id;
select zone||','||
market||','||
boutique_id
from CSS_BOUTIQUE
where rownum <5 and ZONE = :V_ZONE_NAME;
SPOOL OFF;
```
**Then run it:**
Place the above two scripts into the directory where you want your CSVs to be created, then **Start SQLPlus in that directory**
```
SQL*Plus: Release 12.2.0.1.0 Production on Sun Sep 10 14:38:13 2017
SQL> @@Generate-And-Run
```
Now, the working-directory has three new files:
```
REPORT_FOR_EUR.csv
REPORT_FOR_SA.csv
REPORT_FOR_ASIA.csv
```
And each only has the data for its zone. For example:
```
cat REPORT_FOR_ASIA.csv
```
Yields:
```
zone,market, boutique_id
ASIA,Korea,e0HB0000016MEIi
ASIA,China,e0HB0000026MEIi
ASIA,Japan,e0HB0000036MEIi
```
|
You cannot call spool within a PL/SQL block. Spool is a SQL*Plus command, not PL/SQL. Move the statement to before the DECLARE statement.
|
25,343,681
|
I have the following 2 different datetime uses:
```
date=request.GET.get('date','')
if date:
date = datetime.strptime(date, "%m/%d/%Y")
print date
else:
date = datetime.date.today()
```
It seems the imports needed are:
```
from datetime import datetime
date = datetime.strptime(date, "%m/%d/%Y")
```
and
```
import datetime
date = datetime.date.today()
```
I can't have both:
```
from datetime import datetime
import datetime
```
or one overrides the other.
If I have one, I get the error:
object has no attribute today
How can I use both these datetime functions?
|
2014/08/16
|
[
"https://Stackoverflow.com/questions/25343681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
You can alias the import names to ensure they're used differently. This is one of the reasons why datetime gets its fair share of criticism in the Python community.
What about:
```
from datetime import datetime as dt
import datetime
```
These will represent two separate things, as shown by `dir(dt)` and `dir(datetime)`.
|
Removing `.date.` from your code should work:
```py
from datetime import datetime
print datetime.strptime("12/31/2000", "%m/%d/%Y")
print datetime.today()
```
Output:
```
2000-12-31 00:00:00
2014-08-16 22:36:28.593481
```
|
25,343,681
|
I have the following 2 different datetime uses:
```
date=request.GET.get('date','')
if date:
date = datetime.strptime(date, "%m/%d/%Y")
print date
else:
date = datetime.date.today()
```
It seems the imports needed are:
```
from datetime import datetime
date = datetime.strptime(date, "%m/%d/%Y")
```
and
```
import datetime
date = datetime.date.today()
```
I can't have both:
```
from datetime import datetime
import datetime
```
or one overrides the other.
If I have one, I get the error:
object has no attribute today
How can I use both these datetime functions?
|
2014/08/16
|
[
"https://Stackoverflow.com/questions/25343681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
Removing `.date.` from your code should work:
```py
from datetime import datetime
print datetime.strptime("12/31/2000", "%m/%d/%Y")
print datetime.today()
```
Output:
```
2000-12-31 00:00:00
2014-08-16 22:36:28.593481
```
|
In the case of datetime, you should always import the module itself, precisely to avoid this confusion.
```
import datetime
date = datetime.datetime.strptime(date, "%m/%d/%Y")
date = datetime.date.today()
```
|
25,343,681
|
I have the following 2 different datetime uses:
```
date=request.GET.get('date','')
if date:
date = datetime.strptime(date, "%m/%d/%Y")
print date
else:
date = datetime.date.today()
```
It seems the imports needed are:
```
from datetime import datetime
date = datetime.strptime(date, "%m/%d/%Y")
```
and
```
import datetime
date = datetime.date.today()
```
I can't have both:
```
from datetime import datetime
import datetime
```
or one overrides the other.
If I have one, I get the error:
object has no attribute today
How can I use both these datetime functions?
|
2014/08/16
|
[
"https://Stackoverflow.com/questions/25343681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/84885/"
] |
You can alias the import names to ensure they're used differently. This is one of the reasons why datetime gets its fair share of criticism in the Python community.
What about:
```
from datetime import datetime as dt
import datetime
```
These will represent two separate things, as shown by `dir(dt)` and `dir(datetime)`.
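As a quick sanity check, the two aliased names can then be used side by side (a minimal sketch, using a fixed date string for the parse):

```python
from datetime import datetime as dt
import datetime

# The class alias handles parsing...
parsed = dt.strptime("12/31/2000", "%m/%d/%Y")
# ...while the module name still reaches date.today()
today = datetime.date.today()

print(parsed)  # 2000-12-31 00:00:00
print(today)
```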
|
In the case of datetime, you should always import the module itself, precisely to avoid this confusion.
```
import datetime
date = datetime.datetime.strptime(date, "%m/%d/%Y")
date = datetime.date.today()
```
|
8,003,788
|
By ordered I mean term1 will always come before term2 in the document.
I have two documents:
1. "By ordered I mean term1 will always come before term2 in the document"
2. "By ordered I mean term2 will always come before term1 in the document"
if I make the query:
```
"term1 term2"~Integer.MAX_VALUE
```
my results is: 2 documents
How can I query to have one result (only if term1 comes before term2):
"By ordered I mean term1 will always come before term2 in the document"
Any Ideas?
|
2011/11/04
|
[
"https://Stackoverflow.com/questions/8003788",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1028852/"
] |
You would need to use [SpanNearQuery](http://lucene.apache.org/java/3_0_0/api/all/org/apache/lucene/search/spans/SpanNearQuery.html)
This matches spans that are near one another. One can specify the slop, the maximum number of intervening unmatched positions, *as well as whether matches are required to be in-order*.
However, the dismax and edismax query parsers in Solr use the phrase query, and hence the match is always unordered.
[SurroundQueryParser](http://wiki.apache.org/solr/SurroundQueryParser) supports the span queries, with an option to query both ordered and unordered terms via a slight change in the syntax.
Note that this query parser is available only on trunk.
More info @ <http://www.lucidimagination.com/blog/2009/02/22/exploring-query-parsers/>
|
Look up span query. <http://www.lucidimagination.com/blog/2009/07/18/the-spanquery/>
|
71,499,942
|
I hope I'm not taking the wrong approach here but I feel like I'm on the right track and this shouldn't be too complicated. I want to take a simple function of x and y on the screen and return a color applied to each pixel of a webGL canvas.
For example: `f(x,y) -> rgb(x/canvasWidth,y/canvasHeight)` where x and y are positions on the canvas and color is that of the pixel.
My first thought is to take [this example](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/Tutorial/Adding_2D_content_to_a_WebGL_context) and modify it so that the rectangle fills the screen and the color is as described. I think this is achieved by modifying the vertex shader so that the rectangle covers the canvas and fragment shader to implement the color but I'm not sure how to apply the vertex shader based on window size or get my x and y variables in the context of the fragment shader.
Here's the [shader code](https://github.com/mdn/webgl-examples/blob/gh-pages/tutorial/sample2/webgl-demo.js#L19) for the tutorial I'm going off of. I haven't tried much besides manually changing the constant color in the fragment shader and mutating the square by changing the values in the `initBuffers` method.
|
2022/03/16
|
[
"https://Stackoverflow.com/questions/71499942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2334254/"
] |
It is your selection with `xpath`: you select the `<div>`, which does not have an `href` attribute. Select its first `<a>` as well, like `.//div[@class="jobfeed-wrapper multiple-wrapper"]/a`, and it will work:
```
links = driver.find_elements(by=By.XPATH, value='.//div[@class="jobfeed-wrapper multiple-wrapper"]/a')
for link in links:
print(link.get_attribute('href'))
```
#### Example
Instead of `time` use `WebDriverWait` to check if specific elements are available.
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
url = 'https://www.hirist.com/c/filter/mobile-applications-jobs-in-cochin%20kochi_trivandrum%20thiruvananthapuram-5-70_75-0-0-1-0-0-0-0-2.html?ref=homepagecat'
driver = webdriver.Chrome(ChromeDriverManager().install())
driver.maximize_window()
driver.get(url)
wait = WebDriverWait(driver, 10)
links = wait.until(EC.presence_of_all_elements_located((By.XPATH, './/div[@class="jobfeed-wrapper multiple-wrapper"]/a')))
for link in links:
print(link.get_attribute('href'))
```
#### Output
```
https://www.hirist.com/j/xforia-technologies-android-developer-javakotlin-10-15-yrs-1011605.html?ref=cl&jobpos=1&jobversion=2
https://www.hirist.com/j/firminiq-system-ios-developer-swiftobjective-c-3-10-yrs-1011762.html?ref=cl&jobpos=2&jobversion=2
https://www.hirist.com/j/firminiq-system-android-developer-kotlin-3-10-yrs-1011761.html?ref=cl&jobpos=3&jobversion=2
https://www.hirist.com/j/react-native-developer-mobile-app-designing-3-5-yrs-1009438.html?ref=cl&jobpos=4&jobversion=2
https://www.hirist.com/j/flutter-developer-iosandroid-apps-2-3-yrs-1008214.html?ref=cl&jobpos=5&jobversion=2
https://www.hirist.com/j/accubits-technologies-react-native-developer-ios-android-platforms-3-7-yrs-1003520.html?ref=cl&jobpos=6&jobversion=2
https://www.hirist.com/j/appincubator-react-native-developer-iosandroid-platform-2-7-yrs-1001957.html?ref=cl&jobpos=7&jobversion=2
```
|
You didn't declare the path to chromedriver on your computer. Check where the chromedriver is, then try
```
driver = webdriver.Chrome(executable_path=CHROME_DRIVER_PATH)
```
|
232,860
|
This seems too easy but still can't get it work. I have:
```
-X-1.5
```
and need:
```
-X1.5
```
Using:
```
echo -X-1.5 | tr -d '-'
```
gives:
```
X1.5
```
which is close, but not close enough.
Any pointers are welcome,
|
2015/09/29
|
[
"https://unix.stackexchange.com/questions/232860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63970/"
] |
If you know that the character to delete is at position 2 (starting counting from 0), you can do this:
```
VAR="-X-1.5"
echo "${VAR:0:2}${VAR:3}"
```
which will give `-X1.5`.
If it were the first instance of a character that you wanted to delete, but you didn't know its position in advance, you could do this:
```
VAR="X-1.5-oh"
echo "${VAR/-/}"
```
which will give `X1.5-oh`.
|
Since the part to remove is not at the beginning or the end, and this is not a complete removal, a lot of generic shortcuts are ruled out in favor of something more specific that matches a known pattern, so it really depends on what the pattern will be.
If, for example, the pattern is always going to be `-X` followed by a `-` which you wish to remove, you can do:
```
$ echo "-X-1.5" | sed 's/-X-/-X/g'
```
Results:
```
-X1.5
```
* searches for `-X-`
* replaces with `-X`
* `/g` performs the replacement globally, i.e. on all such occurrences, in case you have `-X-1.5-X-2.5-X-3.5` etc
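To illustrate that last point, here is the same substitution run against a line with several occurrences (a quick sketch):

```shell
printf '%s\n' "-X-1.5-X-2.5-X-3.5" | sed 's/-X-/-X/g'
# -X1.5-X2.5-X3.5
```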
|
232,860
|
This seems too easy but still can't get it work. I have:
```
-X-1.5
```
and need:
```
-X1.5
```
Using:
```
echo -X-1.5 | tr -d '-'
```
gives:
```
X1.5
```
which is close, but not close enough.
Any pointers are welcome,
|
2015/09/29
|
[
"https://unix.stackexchange.com/questions/232860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63970/"
] |
Another sed solution
```
echo "-X-1.5" | sed 's/-//2'
```
Substitute the second occurrence of - with nothing.
|
If you know that the character to delete is at position 2 (starting counting from 0), you can do this:
```
VAR="-X-1.5"
echo "${VAR:0:2}${VAR:3}"
```
which will give `-X1.5`.
If it were the first instance of a character that you wanted to delete, but you didn't know its position in advance, you could do this:
```
VAR="X-1.5-oh"
echo "${VAR/-/}"
```
which will give `X1.5-oh`.
|
232,860
|
This seems too easy but still can't get it work. I have:
```
-X-1.5
```
and need:
```
-X1.5
```
Using:
```
echo -X-1.5 | tr -d '-'
```
gives:
```
X1.5
```
which is close, but not close enough.
Any pointers are welcome,
|
2015/09/29
|
[
"https://unix.stackexchange.com/questions/232860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63970/"
] |
Another sed solution
```
echo "-X-1.5" | sed 's/-//2'
```
Substitute the second occurrence of - with nothing.
|
Since the part to remove happens not to be at the beginning, end, nor is this a complete removal, it rules out a lot of generic shortcuts in favor of something more specific to match a known pattern, so it really depends on what the pattern will be..
If for example the pattern is always going to be a `-X` followed by `-` which you wish to remove, you can, for example:
```
$ echo "-X-1.5" | sed 's/-X-/-X/g'
```
Results:
```
-X1.5
```
* searches for `-X-`
* replaces with `-X`
* `/g` to perform it greedily meaning all such occurrences, in case you have `-X-1.5-X-2.5-X-3.5` etc
|
2,226,819
|
I'm applying an XSLT to an HTML file (already filtered and tidied to make it parseable as XML).
My code looks like this:
```
TransformerFactory transformerFactory = TransformerFactory.newInstance();
this.xslt = transformerFactory.newTransformer(xsltSource);
xslt.transform(sanitizedXHTML, result);
```
However, I receive error for every doctype found like this:
>
> ERROR: 'Server returned HTTP response code: 503 for URL: <http://www.w3.org/TR/html4/loose.dtd>'
>
>
>
I have no issue accessing the dtds from my browser.
I have little control over the HTML being parsed, and can't rip out the DOCTYPE since I need it for the entities.
Any help is welcome.
**EDIT:**
I tried to disable DTD validation like this:
```
private Source getSource(StreamSource sanitizedXHTML) throws ParsingException {
SAXParserFactory spf = SAXParserFactory.newInstance();
spf.setNamespaceAware(false);
spf.setValidating(false); // Turn off validation
XMLReader rdr;
try {
rdr = spf.newSAXParser().getXMLReader();
} catch (SAXException e) {
throw new ParsingException(e);
} catch (ParserConfigurationException e) {
throw new ParsingException(e);
}
InputSource inputSrc = new InputSource(sanitizedXHTML.getInputStream());
return new SAXSource(rdr, inputSrc);
}
```
and then just calling it...
```
Source source = getSource(sanitizedXHTML);
xslt.transform(source, result);
```
The error persists.
**EDIT 2:**
I wrote an entity resolver and got the HTML 4.01 Transitional DTD onto my local disk. However, I get this error now:
>
> ERROR: 'The declaration for the entity "HTML.Version" must end with '>'.'
>
>
>
The DTD is *as is*, downloaded from w3.org
|
2010/02/09
|
[
"https://Stackoverflow.com/questions/2226819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/168562/"
] |
I have some suggestions in an [answer](https://stackoverflow.com/questions/1772440/error-received-when-attempting-to-load-xslt-generated-xhtml/1786193#1786193) to a related question.
In particular, when parsing the XML document, you might want to turn DTD validation off, to prevent the parser from trying to fetch the DTD. Alternatively, you might use your own entity resolver to return a local copy of the DTD instead of fetching it over the network.
---
**Edit:** Just calling `setValidating(false)` on the SAX Parser Factory might not be enough to prevent the parser from loading the external DTD. The parser may need the DTD for other purposes, such as entity definitions. (Perhaps you could change your HTML sanitization/preprocessing phase to replace all entity references with the equivalent numeric character entity references, eliminating the need for the DTD?)
I don't think there is a [standard SAX feature flag](http://www.saxproject.org/apidoc/org/xml/sax/package-summary.html#package_description) which would ensure that external DTD loading is completely disabled, so you might have to use something specific to your parser. So if you are using Xerces, for example, you might want to look up [Xerces-specific features](http://xerces.apache.org/xerces-j/features.html) and call `setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)` just to be sure.
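For the entity-resolver route mentioned above, a minimal sketch could look like the following. The class name and the empty-DTD shortcut are illustrative; if you rely on the DTD's entity definitions, return a stream over your local copy instead of an empty one:

```java
import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;
import java.io.StringReader;

// Illustrative resolver: intercepts w3.org DTD lookups so the parser
// never goes over the network. Swap the empty stream for a local copy
// of the DTD if you need its entity definitions.
public class LocalDtdResolver implements EntityResolver {
    @Override
    public InputSource resolveEntity(String publicId, String systemId) {
        if (systemId != null && systemId.contains("w3.org")) {
            return new InputSource(new StringReader("")); // skip the fetch
        }
        return null; // fall back to default resolution for everything else
    }
}
```

You would install it on the reader with `rdr.setEntityResolver(new LocalDtdResolver())` before parsing.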
|
Assuming you want the DTD loaded (for your entities), you will need to use a resolver. The basic problem you are encountering is that the W3C limits access to the URLs for the DTDs for performance reasons (serving them to every parser on the web gains them nothing).
Now you should be working with a local copy of the DTD and using a catalog to handle this. You should take a look at the Apache Commons [Resolver](http://projects.apache.org/projects/xml_commons_resolver.html). If you don't know how to use a catalog, they're well documented in [Norm Walsh's article](http://xml.apache.org/commons/components/resolver/resolver-article.html)
Of course, you will have problems if you do validate. That's an SGML DTD and you are trying to use it for XML. This will not work (probably)
|
9,538,420
|
I have a newbie question. If I have some global variables that are shared by two or more classes, how can I have them in a separate file so that any class can read and update them? Is this possible without using interfaces?
|
2012/03/02
|
[
"https://Stackoverflow.com/questions/9538420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1064089/"
] |
Yes, since interface variables are all implicitly `static` (and `final`), each of these variables has only one instance in the JVM. Being `final`, though, they cannot be reassigned, only mutated if they refer to mutable objects.
However, a better way to do it [in my opinion] would probably be having them declared in some [singleton](http://en.wikipedia.org/wiki/Singleton_pattern) class and using it.
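A minimal sketch of that singleton approach (class and field names are illustrative):

```java
// Illustrative singleton holding shared, mutable application state.
public final class GlobalState {
    private static final GlobalState INSTANCE = new GlobalState();

    private int sharedCounter; // example of a shared variable

    private GlobalState() {} // no outside instantiation

    public static GlobalState getInstance() { return INSTANCE; }

    public synchronized int getSharedCounter() { return sharedCounter; }
    public synchronized void setSharedCounter(int value) { sharedCounter = value; }
}
```

Any class can then read and update the shared value through `GlobalState.getInstance()`.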
|
The best way to do this is to have your shared application state accessible via interface methods, then have an implementing class that holds the variables, and pass this instance of the class to your other classes during construction (which they accept as an instance of the interface).
This is better than using a static class or singleton since it allows you to mock out the functionality of the shared state for testing, improves general code reusability, and allows you to change the implementation and configuration of the shared state without impacting any of the code using it.
E.g.
```
// Session interface for all application shared state.
public interface ApplicationSession
{
public int getMaxUserLimit();
}
// A backing for the interface (simple in memory version, maybe future versions use a database, who knows).
public class SomeApplicationSession implements ApplicationSession
{
private volatile int maxUserLimit = 0;
public void setMaxUserLimit(int limit) { this.maxUserLimit = limit; }
public int getMaxUserLimit() { return maxUserLimit; }
}
// ClassA uses the supplied session.
public class MyClassA
{
private ApplicationSession session;
public MyClassA(ApplicationSession session)
{
this.session = session;
}
}
// usage...
public class MyMain
{
public static void main(String[] args)
{
// Create / get session (ultimately possibly from a factory).
ApplicationSession session = new SomeApplicationSession();
MyClassA myClassA = new MyClassA(session);
// do stuff..
}
}
```
|
52,522,613
|
There are n strings which need to map to another string.
```
Ex : Bacardi_old - > Facundo
Smirnoff_old -> Pyotr
Seagram_old -> Joseph
This keep on ..... may be around 1000
```
There are some strings which need to map to the same string.
```
Ex : Bacardi_new -> Facundo
Smirnoff_new -> Facundo
Seagram_new -> Facundo
```
Requirement: as in the cases below.
Case 1: when the brand name is the input, the owner name is the output.
```
input : Bacardi_old
output: Facundo
```
Case 2: when the owner name is the input, the brand names are the output.
```
input : Facundo
output : Bacardi_old, Bacardi_new ,Smirnoff_new ,Seagram_new
```
My approach:
1. I have a map as below:
```
std::map<std::string,std::vector<std::string>> Mymap;
```
2. Should I create two maps, one for the unique mapping and another for the duplicates:
```
std::map<std::string,std::string>Mymap
std::map<std::string,std::vector<std::string>>Mymap
```
Is the second option better than the first in all aspects?
Please suggest the best approach.
Note: I am stuck with C++11. No Boost library.
|
2018/09/26
|
[
"https://Stackoverflow.com/questions/52522613",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10420024/"
] |
The best approach depends on your needs. Are you interested in access speed or insertion speed? Or are you interested in reducing the used memory space?
The first solution that you propose (a map with key=brand and value=owner) uses less memory but requires a full scan to perform a search by owner.
The second solution:
* a map with key=brand and value=owner
* a map with key=owner and value=list of brands
is faster for both search by owner and search by brand. However, it requires more memory and you also need to perform 2 insertions for each new pair.
|
Best is highly relative :)
You can use [std::multimap](https://en.cppreference.com/w/cpp/container/multimap) for the same.
```
std::multimap<std::string,std::string> my_map;
my_map.insert(std::make_pair("owner_name", "brand_name"));
```
Now you can search based on `key` or `value` depending upon what you need.
|
61,760
|
I ran across this advertisement for an English language school on the subway this morning:
>
> English fit for travel, as if you have your teacher with you.
>
>
>
This sounds somehow wrong to me. Shouldn't the subjunctive use the past form of *have* in this case, as in 'As if you **had** your teacher with you'?
|
2015/07/14
|
[
"https://ell.stackexchange.com/questions/61760",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/21365/"
] |
This is a perfect example of something that is grammatically correct, yet highly unidiomatic. In other words, a native would never word it like this.
Here are some options that I would personally prefer as a native:
>
> **English for travel:** It's like traveling with your teacher.
>
>
> **English for tourists:** It's like having your teacher as a tour
> guide.
>
>
> **English for travel:** It's like taking your teacher on the road.
>
>
>
(*I don't really like any of these either - the whole premise seems corny to me, but they're a lot more palatable than the original to my ear at least*).
|
This is a great question. I had to think about it for a while! AmE native, btw.
"As if you **had** your teacher with you" is *definitely* grammatically correct and sounds natural when spoken. If you're erring on the side of caution, certainly go with the "had" version.
If we venture off the grammatically correct path and wander a bit into the forest of sounds-okay-when-spoken-but-maybe-don't-say-it-to-a-pedantic-grammarian-or-put-it-in-a-term-paper, the "have" version might be acceptable as well. It somewhat accents the present-tense aspect of the phrase. It might connote the feeling "this course is so good it'll feel like your teacher is with you all the time, very much in the present".
Note that this feeling is by no means a hard and fast rule and different English speakers, especially the purists among us, might beg strongly to differ. However, in spoken American English, many good speakers might not bat an eye at the "have" version. Whether the English language company that wrote the ad put this much thought behind it or not is another question entirely.
|
61,760
|
I ran across this advertisement for an English language school on the subway this morning:
>
> English fit for travel, as if you have your teacher with you.
>
>
>
This sounds somehow wrong to me. Shouldn't the subjunctive use the past form of *have* in this case, as in 'As if you **had** your teacher with you'?
|
2015/07/14
|
[
"https://ell.stackexchange.com/questions/61760",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/21365/"
] |
It is grammatically *incorrect*, according to standard English. If I were in charge of that school (but I am not, so I wrote 'were' just then) I would fire the folks who came up with that sentence.
Whether one calls it the subjunctive or irrealis is another thing. English does not have a lot of things that Latin has, or had.
|
This is a great question. I had to think about it for a while! AmE native, btw.
"As if you **had** your teacher with you" is *definitely* grammatically correct and sounds natural when spoken. If you're erring on the side of caution, certainly go with the "had" version.
If we venture off the grammatically correct path and wander a bit into the forest of sounds-okay-when-spoken-but-maybe-don't-say-it-to-a-pedantic-grammarian-or-put-it-in-a-term-paper, the "have" version might be acceptable as well. It somewhat accents the present-tense aspect of the phrase. It might connote the feeling "this course is so good it'll feel like your teacher is with you all the time, very much in the present".
Note that this feeling is by no means a hard and fast rule and different English speakers, especially the purists among us, might beg strongly to differ. However, in spoken American English, many good speakers might not bat an eye at the "have" version. Whether the English language company that wrote the ad put this much thought behind it or not is another question entirely.
|
61,760
|
I ran across this advertisement for an English language school on the subway this morning:
>
> English fit for travel, as if you have your teacher with you.
>
>
>
This sounds somehow wrong to me. Shouldn't the subjunctive use the past form of *have* in this case, as in 'As if you **had** your teacher with you'?
|
2015/07/14
|
[
"https://ell.stackexchange.com/questions/61760",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/21365/"
] |
This is a perfect example of something that is grammatically correct, yet highly unidiomatic. In other words, a native would never word it like this.
Here are some options that I would personally prefer as a native:
>
> **English for travel:** It's like traveling with your teacher.
>
>
> **English for tourists:** It's like having your teacher as a tour
> guide.
>
>
> **English for travel:** It's like taking your teacher on the road.
>
>
>
(*I don't really like any of these either - the whole premise seems corny to me, but they're a lot more palatable than the original to my ear at least*).
|
It is grammatically *incorrect*, according to standard English. If I were in charge of that school (but I am not, so I wrote 'were' just then) I would fire the folks who came up with that sentence.
Whether one calls it the subjunctive or irrealis is another thing. English does not have a lot of things that Latin has, or had.
|
49,590,870
|
I have an index page which contains a form. I fill in the form and send the data to a URL, which captures the input, searches the database, and returns JSON.
How do I get this JSON and put it on another HTML page using JavaScript?
Form of index:
```
<form action="{{ url_for('returnOne') }}", method="GET">
<p>Nome: <input type="text" name="nome" /></p>
<input type="submit" value="Pesquisar">
</form>
```
My function that returns JSON:
```
@app.route('/userQuery', methods=['GET'])
def returnOne():
dao = Dao()
nome = request.args.get('nome')
return jsonify(json.loads(dao.select(nome)))
```
|
2018/03/31
|
[
"https://Stackoverflow.com/questions/49590870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9579597/"
] |
Your HTML page after you submit the form. let's call it `response.html`
```
<!DOCTYPE html>
<html>
<head>
<title>hello</title>
</head>
<body>
<p id="test"></p>
<p id="test1"></p>
<script>
var b = JSON.parse('{{ a | tojson | safe}}');
document.getElementById('test').innerHTML = b.test;
document.getElementById('test1').innerHTML = b.test1;
console.log(b);
</script>
</body>
</html>
```
Your Flask function that sends the JSON and renders `response.html`
```
@app.route('/userQuery', methods=["POST", "GET"])
def returnOne():
a = {
"test": "abcd",
"test1": "efg"
}
return render_template("response.html", a=a)
```
|
Use an Ajax request. [This plugin](http://jquery.malsup.com/form/#getting-started) transforms the submit button in your HTML form into an Ajax submit
```
<script>
$(document).ready(function() {
// bind 'myForm' and provide a simple callback function
$('#myForm').ajaxForm(function(response) {
var result = jQuery.parseJSON(response);
        $('#someDiv').html(result.res);
});
});
</script>
```
Use the correct form Id
|
71,559,974
|
I am trying to use a function to identify the convexity of data, instead of using visual observations from plots.
Here is the data:
```
x1 x2 x3 y
1 2.302585 0.8340324 -0.181482974 1.455921
2 2.397895 0.8745914 -0.133998493 1.504507
3 2.484907 0.9102351 -0.094052368 1.546571
4 2.564949 0.9419387 -0.059815044 1.583520
5 2.639057 0.9704218 -0.030024476 1.616374
6 2.708050 0.9962289 -0.003778236 1.645887
7 2.772589 1.0197814 0.019588330 1.672631
8 2.833213 1.0414115 0.040577028 1.697047
9 2.890372 1.0613851 0.059574781 1.719478
10 2.944439 1.0799183 0.076885390 1.740201
11 2.995732 1.0971887 0.092751181 1.759435
12 3.044522 1.1133441 0.107368148 1.777364
13 3.091042 1.1285084 0.120896759 1.794136
14 3.135494 1.1427868 0.133469846 1.809879
15 3.178054 1.1562690 0.145198448 1.824699
16 3.218876 1.1690322 0.156176206 1.838688
17 3.258097 1.1811431 0.166482733 1.851924
18 3.295837 1.1926601 0.176186204 1.864476
19 3.332205 1.2036341 0.185345397 1.876404
20 3.367296 1.2141100 0.194011296 1.887760
21 3.401197 1.2241275 0.202228379 1.898591
22 3.433987 1.2337220 0.210035646 1.908938
23 3.465736 1.2429250 0.217467466 1.918838
24 3.496508 1.2517646 0.224554260 1.928325
25 3.526361 1.2602663 0.231323069 1.937428
26 3.555348 1.2684530 0.237798021 1.946174
27 3.583519 1.2763453 0.244000729 1.954587
28 3.610918 1.2839620 0.249950617 1.962690
29 3.637586 1.2913203 0.255665198 1.970502
30 3.663562 1.2984358 0.261160310 1.978041
31 3.688879 1.3053227 0.266450321 1.985326
32 3.713572 1.3119942 0.271548296 1.992370
33 3.737670 1.3184623 0.276466149 1.999187
34 3.761200 1.3247381 0.281214769 2.005792
35 3.784190 1.3308318 0.285804134 2.012195
36 3.806662 1.3367528 0.290243403 2.018407
37 3.828641 1.3425100 0.294541006 2.024440
38 3.850148 1.3481115 0.298704714 2.030301
39 3.871201 1.3535648 0.302741703 2.036000
40 3.891820 1.3588770 0.306658617 2.041545
41 3.912023 1.3640546 0.310461612 2.046944
42 3.931826 1.3691039 0.314156406 2.052203
43 3.951244 1.3740304 0.317748315 2.057329
44 3.970292 1.3788396 0.321242292 2.062327
45 3.988984 1.3835366 0.324642955 2.067205
46 4.007333 1.3881260 0.327954621 2.071967
47 4.025352 1.3926123 0.331181324 2.076617
48 4.043051 1.3969997 0.334326845 2.081162
49 4.060443 1.4012921 0.337394728 2.085605
50 4.077537 1.4054932 0.340388301 2.089950
51 4.094345 1.4096066 0.343310691 2.094201
52 4.110874 1.4136356 0.346164843 2.098362
53 4.127134 1.4175833 0.348953529 2.102437
54 4.143135 1.4214527 0.351679364 2.106428
55 4.158883 1.4252465 0.354344815 2.110339
56 4.174387 1.4289676 0.356952216 2.114173
57 4.189655 1.4326183 0.359503770 2.117932
58 4.204693 1.4362012 0.362001567 2.121620
59 4.219508 1.4397185 0.364447583 2.125238
60 4.234107 1.4431723 0.366843695 2.128789
61 4.248495 1.4465649 0.369191683 2.132275
62 4.262680 1.4498980 0.371493238 2.135699
63 4.276666 1.4531738 0.373749966 2.139062
64 4.290459 1.4563938 0.375963396 2.142367
65 4.304065 1.4595599 0.378134984 2.145615
66 4.317488 1.4626738 0.380266116 2.148808
67 4.330733 1.4657369 0.382358113 2.151948
68 4.343805 1.4687508 0.384412236 2.155036
69 4.356709 1.4717169 0.386429689 2.158074
70 4.369448 1.4746367 0.388411622 2.161063
71 4.382027 1.4775113 0.390359131 2.164005
72 4.394449 1.4803422 0.392273270 2.166901
73 4.406719 1.4831305 0.394155042 2.169753
74 4.418841 1.4858774 0.396005410 2.172561
75 4.430817 1.4885839 0.397825296 2.175327
76 4.442651 1.4912513 0.399615585 2.178052
77 4.454347 1.4938805 0.401377124 2.180737
78 4.465908 1.4964726 0.403110725 2.183384
79 4.477337 1.4990284 0.404817171 2.185992
80 4.488636 1.5015490 0.406497210 2.188564
81 4.499810 1.5040351 0.408151563 2.191099
82 4.510860 1.5064877 0.409780924 2.193600
83 4.521789 1.5089076 0.411385958 2.196066
84 4.532599 1.5112956 0.412967306 2.198499
85 4.543295 1.5136525 0.414525586 2.200900
86 4.553877 1.5159789 0.416061392 2.203269
87 4.564348 1.5182757 0.417575296 2.205607
88 4.574711 1.5205435 0.419067852 2.207914
89 4.584967 1.5227830 0.420539590 2.210192
90 4.595120 1.5249948 0.421991025 2.212442
91 4.605170 1.5271796 0.423422652 2.214663
92 4.615121 1.5293380 0.424834950 2.216856
93 4.624973 1.5314705 0.426228380 2.219022
94 4.634729 1.5335777 0.427603389 2.221162
95 4.644391 1.5356602 0.428960408 2.223277
96 4.653960 1.5377185 0.430299854 2.225366
97 4.663439 1.5397532 0.431622130 2.227430
98 4.672829 1.5417646 0.432927627 2.229470
99 4.682131 1.5437534 0.434216722 2.231486
100 4.691348 1.5457199 0.435489780 2.233479
101 4.700480 1.5476647 0.436747155 2.235450
102 4.709530 1.5495882 0.437989191 2.237398
103 4.718499 1.5514907 0.439216219 2.239325
104 4.727388 1.5533728 0.440428562 2.241230
105 4.736198 1.5552348 0.441626530 2.243114
106 4.744932 1.5570771 0.442810428 2.244977
107 4.753590 1.5589002 0.443980548 2.246820
108 4.762174 1.5607043 0.445137176 2.248644
109 4.770685 1.5624898 0.446280589 2.250448
110 4.779123 1.5642572 0.447411053 2.252233
111 4.787492 1.5660066 0.448528831 2.254000
112 4.795791 1.5677386 0.449634176 2.255748
113 4.804021 1.5694533 0.450727333 2.257478
114 4.812184 1.5711511 0.451808541 2.259191
115 4.820282 1.5728323 0.452878034 2.260886
116 4.828314 1.5744973 0.453936037 2.262564
117 4.836282 1.5761462 0.454982769 2.264225
118 4.844187 1.5777794 0.456018445 2.265870
119 4.852030 1.5793972 0.457043273 2.267499
120 4.859812 1.5809998 0.458057455 2.269112
121 4.867534 1.5825875 0.459061188 2.270709
122 4.875197 1.5841606 0.460054665 2.272291
123 4.882802 1.5857192 0.461038071 2.273858
124 4.890349 1.5872637 0.462011589 2.275411
125 4.897840 1.5887943 0.462975396 2.276948
126 4.905275 1.5903111 0.463929665 2.278471
127 4.912655 1.5918145 0.464874564 2.279981
128 4.919981 1.5933047 0.465810258 2.281476
129 4.927254 1.5947818 0.466736906 2.282958
130 4.934474 1.5962461 0.467654665 2.284426
131 4.941642 1.5976978 0.468563687 2.285881
132 4.948760 1.5991370 0.469464120 2.287323
133 4.955827 1.6005641 0.470356109 2.288752
134 4.962845 1.6019791 0.471239796 2.290169
135 4.969813 1.6033823 0.472115319 2.291573
136 4.976734 1.6047738 0.472982813 2.292966
137 4.983607 1.6061539 0.473842408 2.294346
138 4.990433 1.6075226 0.474694234 2.295714
139 4.997212 1.6088802 0.475538416 2.297071
140 5.003946 1.6102269 0.476375077 2.298416
141 5.010635 1.6115627 0.477204337 2.299750
142 5.017280 1.6128879 0.478026312 2.301072
143 5.023881 1.6142026 0.478841118 2.302384
144 5.030438 1.6155070 0.479648865 2.303685
145 5.036953 1.6168013 0.480449664 2.304975
146 5.043425 1.6180854 0.481243622 2.306255
147 5.049856 1.6193597 0.482030842 2.307524
148 5.056246 1.6206243 0.482811428 2.308784
149 5.062595 1.6218792 0.483585480 2.310033
150 5.068904 1.6231247 0.484353094 2.311272
151 5.075174 1.6243608 0.485114368 2.312502
152 5.081404 1.6255877 0.485869395 2.313721
153 5.087596 1.6268055 0.486618267 2.314932
154 5.093750 1.6280143 0.487361074 2.316133
155 5.099866 1.6292143 0.488097904 2.317324
156 5.105945 1.6304056 0.488828843 2.318507
157 5.111988 1.6315883 0.489553975 2.319681
158 5.117994 1.6327625 0.490273383 2.320845
159 5.123964 1.6339284 0.490987149 2.322001
160 5.129899 1.6350859 0.491695351 2.323149
161 5.135798 1.6362353 0.492398067 2.324287
162 5.141664 1.6373767 0.493095373 2.325418
163 5.147494 1.6385101 0.493787345 2.326540
164 5.153292 1.6396357 0.494474056 2.327654
165 5.159055 1.6407535 0.495155576 2.328760
166 5.164786 1.6418637 0.495831977 2.329858
167 5.170484 1.6429663 0.496503328 2.330948
168 5.176150 1.6440615 0.497169696 2.332030
169 5.181784 1.6451493 0.497831147 2.333104
170 5.187386 1.6462299 0.498487747 2.334171
171 5.192957 1.6473033 0.499139560 2.335231
172 5.198497 1.6483696 0.499786649 2.336283
173 5.204007 1.6494288 0.500429074 2.337327
174 5.209486 1.6504812 0.501066896 2.338365
175 5.214936 1.6515268 0.501700175 2.339395
176 5.220356 1.6525656 0.502328968 2.340419
177 5.225747 1.6535977 0.502953333 2.341435
178 5.231109 1.6546232 0.503573326 2.342445
179 5.236442 1.6556423 0.504189002 2.343448
180 5.241747 1.6566548 0.504800414 2.344444
181 5.247024 1.6576611 0.505407616 2.345433
182 5.252273 1.6586610 0.506010660 2.346416
183 5.257495 1.6596547 0.506609598 2.347393
184 5.262690 1.6606423 0.507204479 2.348363
185 5.267858 1.6616239 0.507795352 2.349326
186 5.273000 1.6625994 0.508382267 2.350284
187 5.278115 1.6635690 0.508965271 2.351235
188 5.283204 1.6645327 0.509544412 2.352181
189 5.288267 1.6654906 0.510119734 2.353120
190 5.293305 1.6664428 0.510691284 2.354053
191 5.298317 1.6673893 0.511259105 2.354981
192 5.303305 1.6683302 0.511823242 2.355902
193 5.308268 1.6692655 0.512383738 2.356818
194 5.313206 1.6701954 0.512940635 2.357728
195 5.318120 1.6711199 0.513493974 2.358633
196 5.323010 1.6720389 0.514043797 2.359532
197 5.327876 1.6729527 0.514590144 2.360425
198 5.332719 1.6738612 0.515133054 2.361314
199 5.337538 1.6747645 0.515672566 2.362196
200 5.342334 1.6756627 0.516208719 2.363074
201 5.347108 1.6765558 0.516741550 2.363946
202 5.351858 1.6774438 0.517271096 2.364813
203 5.356586 1.6783269 0.517797394 2.365674
204 5.361292 1.6792050 0.518320480 2.366531
205 5.365976 1.6800783 0.518840389 2.367383
206 5.370638 1.6809467 0.519357155 2.368229
207 5.375278 1.6818104 0.519870814 2.369071
208 5.379897 1.6826693 0.520381398 2.369908
209 5.384495 1.6835235 0.520888942 2.370740
210 5.389072 1.6843731 0.521393476 2.371567
211 5.393628 1.6852182 0.521895035 2.372390
212 5.398163 1.6860587 0.522393649 2.373208
213 5.402677 1.6868946 0.522889349 2.374021
214 5.407172 1.6877262 0.523382166 2.374830
215 5.411646 1.6885533 0.523872131 2.375634
216 5.416100 1.6893761 0.524359273 2.376433
217 5.420535 1.6901945 0.524843622 2.377229
218 5.424950 1.6910087 0.525325207 2.378019
219 5.429346 1.6918186 0.525804055 2.378806
220 5.433722 1.6926244 0.526280195 2.379588
221 5.438079 1.6934259 0.526753655 2.380366
222 5.442418 1.6942234 0.527224461 2.381140
223 5.446737 1.6950168 0.527692642 2.381909
224 5.451038 1.6958061 0.528158222 2.382675
225 5.455321 1.6965915 0.528621229 2.383436
226 5.459586 1.6973729 0.529081687 2.384193
227 5.463832 1.6981503 0.529539623 2.384947
228 5.468060 1.6989239 0.529995061 2.385696
229 5.472271 1.6996936 0.530448026 2.386441
230 5.476464 1.7004596 0.530898541 2.387183
231 5.480639 1.7012217 0.531346632 2.387920
232 5.484797 1.7019801 0.531792321 2.388654
233 5.488938 1.7027347 0.532235632 2.389384
234 5.493061 1.7034857 0.532676587 2.390110
235 5.497168 1.7042331 0.533115210 2.390833
236 5.501258 1.7049768 0.533551522 2.391552
237 5.505332 1.7057170 0.533985546 2.392267
238 5.509388 1.7064536 0.534417303 2.392979
239 5.513429 1.7071867 0.534846815 2.393687
240 5.517453 1.7079163 0.535274102 2.394391
241 5.521461 1.7086425 0.535699186 2.395092
242 5.525453 1.7093652 0.536122088 2.395790
243 5.529429 1.7100846 0.536542826 2.396484
244 5.533389 1.7108006 0.536961422 2.397175
245 5.537334 1.7115132 0.537377895 2.397862
246 5.541264 1.7122226 0.537792264 2.398546
247 5.545177 1.7129286 0.538204550 2.399227
248 5.549076 1.7136314 0.538614769 2.399904
249 5.552960 1.7143310 0.539022943 2.400578
250 5.556828 1.7150275 0.539429088 2.401249
```
If we plot the data,
```
plot(y~x1, type = "l")
plot(y~x2, type = "l")
plot(y~x3, type = "l")
```
We will see the first plot looks concave; the second plot looks straight; and the third plot looks a little convex. Is there any function in R that can test this? Namely, is there a function that can identify the convexity of data (instead of relying on visual inspection)?
Thanks!
|
2022/03/21
|
[
"https://Stackoverflow.com/questions/71559974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10173383/"
] |
Approach 1:
Modifying a measure of convexity from [this paper](https://users.cs.cf.ac.uk/Paul.Rosin/resources/papers/convexity-BMVC.pdf), measure how much longer (proportionally) the path length along the points is than the convex hull along the same points.
A base R solution:
```
path_length <- function(x, y) {
sum(sqrt(diff(x)^2 + diff(y)^2))
}
conv_meas <- function(x, y) {
ch <- chull(c(x, range(x)[2:1]), c(y, rep(min(y), 2)))
ch <- ch[ch <= length(x)]
pl <- path_length(x[ch], y[ch])
(path_length(x, y) - pl)/pl
}
conv_meas(df$x1, df$y)
#> [1] 1.053814e-11
conv_meas(df$x2, df$y)
#> [1] 1.407418e-09
conv_meas(df$x3, df$y)
#> [1] 0.002536735
```
A value of exactly zero indicates the points are perfectly convex. The further from zero, the less convex they are.
Approach 2:
Calculate the convex hull for both up and down being the "inside" of the curve and take the ratio of the path lengths.
```
conv_meas <- function(x, y) {
yrng <- range(y)
xhull <- c(x, range(x)[2:1])
chDown <- chull(xhull, c(y, rep(yrng[1], 2)))
chUp <- chull(xhull, c(y, rep(yrng[2], 2)))
chDown <- sort(chDown[chDown <= length(x)])
chUp <- sort(chUp[chUp <= length(x)])
path_length(x[chDown], y[chDown])/path_length(x[chUp], y[chUp])
}
conv_meas(df$x1, df$y)
#> [1] 1.003598
conv_meas(df$x2, df$y)
#> [1] 1.000438
conv_meas(df$x3, df$y)
#> [1] 0.9974697
set.seed(123)
conv_meas(1:100, log(1:100) + runif(100))
#> [1] 1.007082
conv_meas(1:100, exp((1:100 - 50)/20) + runif(100))
#> [1] 0.9871246
conv_meas(1:100, -(1:100))
#> [1] 1
```
A value < 1 indicates convex (concave up), while a value > 1 indicates concave (concave down). A value exactly 1 indicates a line (or that it is no more convex than concave).
|
You can use the `CVXR` package to identify convexity of data. I will give an example of how to approach this. You can use this link for extra info (<https://cran.r-project.org/web/packages/CVXR/vignettes/cvxr_intro.html>):
```
library(kableExtra)
set.seed(123)
n <- 100
p <- 10
beta <- -4:5 # beta is just -4 through 5.
X <- matrix(rnorm(n * p), nrow=n)
colnames(X) <- paste0("beta_", beta)
Y <- X %*% beta + rnorm(n)
ls.model <- lm(Y ~ 0 + X) # There is no intercept in our model above
m <- data.frame(ls.est = coef(ls.model))
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
# load packages
suppressWarnings(library(CVXR, warn.conflicts=FALSE))
betaHat <- Variable(p)
objective <- Minimize(sum((Y - X %*% betaHat)^2))
problem <- Problem(objective)
result <- solve(problem)
m <- cbind(coef(ls.model), result$getValue(betaHat))
colnames(m) <- c("lm est.", "CVXR est.")
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
kbl(m)
```
Output:
[](https://i.stack.imgur.com/srSEl.png)
|
55,952,391
|
So I want to create a XML-file, and one of the attributes that I have to add is called "from". When I want to add that I obviously get a "keyword can't be an expression" error. What do I have to do to make this work?
```
routes = ET.Element("routes")
for i,f in df.iterrows():
flow = ET.SubElement(routes, "flow", id=str(i), from=f["source"], to = f["sink"])
```
The output should like this:
```
<routes>
<flow id="0" from="A" to="B"></flow>
<flow id="1" from="B" to="C"></flow>
...
</routes>
```
|
2019/05/02
|
[
"https://Stackoverflow.com/questions/55952391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10738077/"
] |
The problem is that DevOps is using non-standard headers for some reason. This causes the built-in Postman flow to not work. The way I figured out to work around this is to input the auth URL directly into your browser. You can then extract the auth code from the callback URL.
Then, with that code, make a POST to the token endpoint in Postman:
<https://app.vssps.visualstudio.com/oauth2/token>
Use x-www-form-urlencoded body and fill out the following keys
```
"client_assertion_type", "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"
"client_assertion", your app secret
"grant_type", "urn:ietf:params:oauth:grant-type:jwt-bearer"
"assertion", your auth code from previous step
"redirect_uri", your callbackurl
```
|
Azure DevOps supports oAuth. If you follow the steps in the link below to get an oAuth token, and type the token directly in Postman, you'll get a successful response:
<https://github.com/Microsoft/azure-devops-auth-samples/tree/master/OAuthWebSample>
[](https://i.stack.imgur.com/Pr4SH.png)
|
17,758,337
|
I'm extending `BasicObject`, and I'd like to implement the `Object#class` method. I'm not sure how to do this other than hard-coding the return value like this:
```
class MyObject < BasicObject
def class
::Kernel::eval "::MyObject"
end
end
MyObject.new.class
#=> MyObject
```
With this solution, any class that inherits from `MyObject` would have to overwrite this method in order to keep it accurate. Is there a better way?
|
2013/07/20
|
[
"https://Stackoverflow.com/questions/17758337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1795402/"
] |
It's not possible to implement this method in Ruby. You need access to three things that you cannot access from Ruby:
* the class pointer
* the superclass pointer
* the virtual class flag (include classes and singleton classes are virtual classes)
`Object#class` works something like this:
```
class Object
def class
c = self.__class__
c = c.__superclass__ while c.__virtual__?
c
end
end
```
There's also `Class#superclass`:
```
class Class
def superclass
c = self.__superclass__
c = c.__superclass__ while c.__virtual__?
c
end
end
```
And since there is no way to get the class pointer (remember: the `class` method does *not* return it) and there is no way to get the superclass pointer (remember: the `superclass` method doesn't return it either) and there is no way for you to check whether a class is virtual, it is impossible to write this method in Ruby.
You need some sort of privileged access to the runtime internals to do that.
|
Since you are refusing to inherit from `Object`, you are missing almost all basic methods, including `class` or `ancestors`. You would need to hard-code everything. Besides the minimal syntax, everything is gone. Think of it as a tabula rasa: there is no other method on which you can build your methods.
Ruby is designed to have all objects inherit from `Object`, and going against that rarely yields useful results.
|
17,758,337
|
I'm extending `BasicObject`, and I'd like to implement the `Object#class` method. I'm not sure how to do this other than hard-coding the return value like this:
```
class MyObject < BasicObject
def class
::Kernel::eval "::MyObject"
end
end
MyObject.new.class
#=> MyObject
```
With this solution, any class that inherits from `MyObject` would have to overwrite this method in order to keep it accurate. Is there a better way?
|
2013/07/20
|
[
"https://Stackoverflow.com/questions/17758337",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1795402/"
] |
It's not possible to implement this method in Ruby. You need access to three things that you cannot access from Ruby:
* the class pointer
* the superclass pointer
* the virtual class flag (include classes and singleton classes are virtual classes)
`Object#class` works something like this:
```
class Object
def class
c = self.__class__
c = c.__superclass__ while c.__virtual__?
c
end
end
```
There's also `Class#superclass`:
```
class Class
def superclass
c = self.__superclass__
c = c.__superclass__ while c.__virtual__?
c
end
end
```
And since there is no way to get the class pointer (remember: the `class` method does *not* return it) and there is no way to get the superclass pointer (remember: the `superclass` method doesn't return it either) and there is no way for you to check whether a class is virtual, it is impossible to write this method in Ruby.
You need some sort of privileged access to the runtime internals to do that.
|
I was directed towards this possible duplicate question: [How do I get the class of a BasicObject instance?](https://stackoverflow.com/questions/9197586/how-do-i-get-the-class-of-a-basicobject-instance)
The solution provided to that question solves my problem. Thanks!
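For completeness, the trick from that question can be sketched like this: the singleton class of any object — even a `BasicObject` — has the object's real class as its superclass, so no `Object` methods are needed.

```ruby
class Bare < BasicObject; end

obj = Bare.new
# Opening the singleton class works even without Kernel/Object methods;
# its superclass is the object's actual class.
klass = (class << obj; self; end).superclass
# klass is Bare, and the result stays accurate for subclasses automatically.
```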
|
7,737
|
How do I generate bokeh that simulates shallow depth of field given a perfect depth map?
Here's what I tried:
1. Generated a sample scene using blender and corresponding depth map
[](https://i.stack.imgur.com/AC8vN.png)
2. Using Python, generated what sigma (standard deviation) to use at each pixel for gaussian blurring: This one makes it look like the far end of the pink cube is perfectly focused.
```
sigma_map = np.array(np.round(abs(np.array(depth, dtype='float32') - preferred_focus_depth)).clip(0,255), dtype='uint8')
```
[](https://i.stack.imgur.com/Q25t3.png)
*black = 0, no blur; white = 255, large blur*
3. Applied gaussian blurring (using the sigma\_map) at every pixel and this is what I got
[](https://i.stack.imgur.com/Ll8N6.png)
Flaws observed: the averaging kernels near the edges pick up the nearby pink color, hence the pink bleed. **How do I avoid this?**
Note: It's not completely blurred look closely at the zoomed in image on the right. The pixels that I wanted to remain untouched are still untouched (even though it appears they were blurred as well because of the blurring near the edges)
This [blog](https://ai.googleblog.com/2017/10/portrait-mode-on-pixel-2-and-pixel-2-xl.html?m=1) on Portrait mode on Pixel 2 says, "Actually applying the blur is conceptually the simplest part", **is there any simpler way of doing it then?**
|
2018/06/22
|
[
"https://computergraphics.stackexchange.com/questions/7737",
"https://computergraphics.stackexchange.com",
"https://computergraphics.stackexchange.com/users/8748/"
] |
The blog post that you talked about is not about generating bokeh for a computer-generated image. It is instead about generating a believable depth of field effect from an image captured by a smartphone camera, as the effect is desired for portraits to make the subject stand out. It generally works by splitting the image into two parts. One part is the subject, which should not be blurred. The other part is the background, which should be blurred, but the pixels of the subject should have no influence and would thus have a weight of zero. Slightly more sophisticated algorithms may make the blur kernel size variable across an image to create a smoother transition, but the subject's pixels would still be ignored.
In games and CGI, post process depth of field is done differently. There are multiple ways for it. A simple google search shows multiple. You could do depth of field like you have, and simply ignore the pixels which should not be blurred to remove the ghosting. However, this is not true to how depth of field occurs inside of a camera. So, in order to get a good algorithm to simulate depth of field, I will first try to explain how you get the depth of field effect in an actual camera.
Let us say, that we have an object, a single lens and an image sensor. We look at them from the side.
[](https://i.stack.imgur.com/aud9f.png)
Light scatters in many different directions when hitting the object. To simplify we only take one point. From this point, I will show the light that would hit the lens as a gray solid.
[](https://i.stack.imgur.com/FK7bL.png)
The lens bends the light towards a single point. That is the focal point.
[](https://i.stack.imgur.com/ZD9BY.png)
You can see that the focal point is in front of the sensor. This causes the light from that single point on the object, to fall on a large portion of the sensor. If the focal point is behind the sensor, the same would happen. But, if the focal point lies on the sensor, there would only be one point on the sensor where the light falls on. This area where the light falls on the sensor, is called the circle of confusion (CoC). This is also the cause of the depth of field effect.
One important thing to know, is that when the point on the object moves around, the focal point also moves around. If the point moves closer to the lens, the focal point moves closer to the lens, which would alter the size of the circle of confusion. This gives you the gradual change in blurriness that you have with depth of field, since the circle of confusion gradually becomes larger when an object moves away from the area in focus.
If we add in an aperture we simply block out a part of the light, which causes the circle of confusion to be smaller. This allows us to determine how much light to let in (to make the image brighter or darker), and how strong the depth of field effect is.
[](https://i.stack.imgur.com/sxmta.png)
From this, we can conclude a few things on how we should implement depth of field.
* Every point whose light enters the camera lens, has a circle of confusion.
* The circle of confusion is the area where the light, from that point, falls on.
* The circle of confusion changes size when the point moves closer to or further away from the lens, or when the aperture changes size.
* The area of the circle of confusion has no falloff. Every point inside the circle of confusion has the same weight. This results in a box blur.
* If a point is in focus, its circle of confusion would be either zero or something so small, that it is not noticed.
The algorithm is quite simple. For each pixel in the image, we use the depth to calculate the circle of confusion. Then, we add the colour of that pixel to all the other pixels that lie in the circle of confusion. However, if that other pixel is closer to the camera (a lower depth value), it would block the light from our pixel, and we should not add the colour of our pixel to the blocking pixel.
In pseudo code it would look something like this;
```
// for each pixel in the all-focus image
for(x = 0; x < width; x++){
for(y = 0; y < height; y++){
// How much blur? Zero or close to it if it should be in focus.
float CoCRadius = calcCoCRadius(depth(x, y));
// For every pixel in the square with the size of CoCRadius.
for(i = x - CoCRadius; i <= x + CoCRadius; i++){
for(j = y - CoCRadius; j <= y + CoCRadius; j++){
// Discard pixels outside of the shape. Square -> Circle
if(isInBokehShape(i - x, j - y)){
// Discard pixels that would block it.
if(depth(i, j) >= depth(x, y){
// Add the colour.
newColour[i, j] += colour(x, y);
// Add the weight, for normalizing.
weight[i, j] += 1.0f;
}
}
}
}
}
}
//Normalize the values with the weight.
for(x = 0; x < width; x++){
for(y = 0; y < height; y++){
newColour[x, y] /= weight[x, y];
}
}
```
This would give you a depth of field effect that is quite true, and does not give you the ghosting effect. $calcCoCRadius$ is a method that calculates the radius in pixels of the circle of confusion. It could be anything, but if you want it as true to an actual camera, then you can use the formula from [this Wikipedia page](https://en.wikipedia.org/wiki/Circle_of_confusion). $isInBokehShape$ is a method which returns a boolean on whether the point is inside the bokeh shape. With the two for loops we just get a square, but if we want to get a circle shaped bokeh, we need to discard the pixels that would lie outside of the circle. This discarding is also the reason why we keep track of the weights and normalize afterwards with the weight, since we do not know how many pixels we need to discard. The weight is also useful if you do want a falloff in the circle of confusion, for example if you want a gaussian blur.
Another important thing to know, is that the implementation listed here, is not very optimal. The problem is basically that you write to multiple different pixels. This causes race conditions. Instead, it is better to rework the algorithm so that you only need one write. Basically, you loop through every pixel and you then you check all other pixels (or pixels inside of a maximum CoC radius) whether their CoC radius is large enough that they contribute to the pixel's colour. This way, you can also put the normalizing of the values inside the first loop over all the pixels instead of having to do that later.
This should be a fairly good way of implementing depth of field as a post-processing effect. There are also other ways, but this is one.
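A direct (unoptimized) Python translation of the pseudocode, as a sketch — assuming numpy arrays for the colour and depth buffers and a caller-supplied CoC-radius function:

```python
import numpy as np

def depth_of_field(colour, depth, coc_radius_of):
    """Scatter-style DoF: every pixel splats its colour over its circle
    of confusion, skipping contributions blocked by closer pixels.
    colour: (H, W, 3) floats; depth: (H, W) floats, smaller = closer.
    coc_radius_of: maps a depth value to a CoC radius in pixels."""
    h, w, _ = colour.shape
    acc = np.zeros_like(colour)
    weight = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            r = int(round(coc_radius_of(depth[y, x])))
            for j in range(max(0, y - r), min(h, y + r + 1)):
                for i in range(max(0, x - r), min(w, x + r + 1)):
                    # Circle-shaped bokeh; discard occluded contributions.
                    if (i - x) ** 2 + (j - y) ** 2 <= r * r and depth[j, i] >= depth[y, x]:
                        acc[j, i] += colour[y, x]
                        weight[j, i] += 1.0
    # Normalize by the accumulated weight (guard against empty pixels).
    return acc / np.maximum(weight, 1e-8)[..., None]
```

With a CoC radius of zero everywhere, every pixel splats only onto itself and the image comes back unchanged, which is a handy sanity check.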
|
Does the effect need to be physically accurate? I've implemented this using a cheat before. @bram0101's answer is great. If you need accuracy, then by all means, implement that. I've been able to get something convincing to the eye by simply doing a threshold of the luminance of the image and placing a semi-transparent polygon or circle centered at every point that's above the threshold.
You also need to take into account distance from the focal plane. The distance can be used to control the size of the bokeh, with the size being 0 at the focal plane, and some maximum size at some farther distance.
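A minimal sketch of that cheat in Python/numpy (hypothetical function and parameter names; `img` is assumed to be a single-channel luminance image in [0, 1]):

```python
import numpy as np

def cheap_bokeh(img, threshold=0.8, radius=3, alpha=0.5):
    """Splat a semi-transparent disc at every pixel brighter than threshold.
    img: (H, W) float luminance in [0, 1]. Returns a blended copy."""
    h, w = img.shape
    out = img.copy()
    # Precompute a disc mask of the given radius.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (yy ** 2 + xx ** 2) <= radius ** 2
    for y, x in zip(*np.nonzero(img > threshold)):
        # Clamp the disc to the image borders.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        m = disc[y0 - y + radius:y1 - y + radius, x0 - x + radius:x1 - x + radius]
        # Alpha-blend the highlight's brightness over the disc area.
        view = out[y0:y1, x0:x1]
        view[m] = (1 - alpha) * view[m] + alpha * img[y, x]
    return out
```

Scaling `radius` with the distance from the focal plane, as described above, gives the size-zero-at-focus behaviour.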
|
65,937,191
|
There are a few menus that are only visible in developer mode.
I need them to be visible in non-developer mode as well. How can I do it?
**My findings:**
I have a few XML views in a common folder that contain only the views, without any `<menuitem>` elements, and later, in another folder, I list all the menu items from the common folder as well as the current folder's menu items in the order I want.
**Why do I need to place the menu items in the other folder?**
If I place the menu items in the common folder, Odoo renders them first, as per the default menu sequences. But I need them later, so I combined all the menu items in the order I want in the current folder.
It works in developer mode without any issues, but in non-developer mode it doesn't.
I have also checked whether any groups are causing this, but none are.
I hope that makes sense.
|
2021/01/28
|
[
"https://Stackoverflow.com/questions/65937191",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14071544/"
] |
Simply run a [truncate](https://docs.snowflake.com/en/sql-reference/sql/truncate-table.html#truncate-table) query:
```
truncate table if exists table_name;
```
If what you are asking is how to create an ETL that combines an AWS Glue job with running a query on Snowflake, that depends on the ETL tool you use and what it can do. That, however, is a question for the ETL tool you are using.
I can suggest using [Apache Airflow](https://airflow.apache.org/), which can schedule Glue jobs and run Snowflake queries.
|
I am using AWS Glue as an ETL tool. I don't know how to fire a Snowflake DDL statement from AWS Glue jobs.
Thanks,
Ram.
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
Simple enough in Perl. The secret is to put Perl in "paragraph mode" by setting the input record separator (`$/`) to an empty string. Then we only print records if they contain a letter.
```
#!/usr/bin/perl
use strict;
use warnings;
# Paragraph mode
local $/ = '';
# Read from STDIN a record (i.e. paragraph) at a time
while (<>) {
# Only print records that include a letter
print if /[a-z]/i;
}
```
This is written as a Unix filter, i.e. it reads from `STDIN` and writes to `STDOUT`. So if it's in a file called `filter`, you can call it like this:
```
$ filter < your_input_file > your_output_file
```
Alternatively this is a simple command line script in Perl (`-00` is the command line option to put Perl into paragraph mode):
```
$ perl -00 -ne'print if /[a-z]/' < your_input_file > your_output_file
```
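For readers more comfortable in Python, the same paragraph-mode filter can be sketched like this (a hypothetical equivalent, not part of the original answer): split the input on blank lines and keep only paragraphs containing a letter.

```python
import re

def paragraphs_with_letters(text):
    """Split input into blank-line-separated paragraphs and keep only
    those that contain at least one ASCII letter."""
    paragraphs = re.split(r"\n\s*\n", text.strip("\n"))
    return [p for p in paragraphs if re.search(r"[a-zA-Z]", p)]
```

This mirrors Perl's `$/ = ''` behaviour closely enough for simple inputs, though Perl's paragraph mode also collapses runs of blank lines between records.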
|
If there's exactly one blank line after each paragraph you can use a long `awk` oneliner (three patterns, so probably not a oneliner actually):
```
$ echo '1
19:22
abcde
2
19:23
3
19:24
abbff
4
19:25
abbc
' | awk '/[^[:space:]]/ { accum = accum $0 "\n" } /^[[:space:]]*$/ { if(on) print accum $0; on = 0; accum = "" } /[[:alpha:]]/ { on = 1 }'
1
19:22
abcde
3
19:24
abbff
4
19:25
abbc
```
The idea is to accumulate non-blank lines, setting a flag once an alphabetic character is found; on a blank input line, flush the whole accumulated paragraph if that flag is set, then reset the accumulator to the empty string and the flag to zero.
(Note that if the last line of input is not necessarily empty, you might need to add an `END` block that checks whether there is an unflushed paragraph and flushes it as needed.)
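A line-by-line Python translation of that accumulate-and-flush idea (including the final flush for an unterminated last paragraph) might look like this; the function name is mine, for illustration only:

```python
def filter_paragraphs(lines):
    """Keep only blank-line-separated paragraphs that contain a letter,
    mirroring the awk accumulate-and-flush logic described above."""
    out, buf, has_alpha = [], [], False
    for line in lines:
        if line.strip():                      # accumulate non-blank lines
            buf.append(line)
            has_alpha = has_alpha or any(c.isalpha() for c in line)
        else:                                 # blank line: flush if flagged
            if has_alpha:
                out.extend(buf)
                out.append(line)
            buf, has_alpha = [], False
    if buf and has_alpha:                     # final flush (the "END block")
        out.extend(buf)
    return out
```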
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
sed is for doing s/old/new/ on individual lines, **that is all**. For anything else you should be using awk:
```
$ awk -v RS= -v ORS='\n\n' '/[[:alpha:]]/' file
1
19:22
abcde
3
19:24
abbff
4
19:25
abbc
```
The above is simply this:
* `RS=` tells awk the input records are separated by blank lines.
* `ORS='\n\n'` tells awk the output records must also be separated by blank lines.
* `/[[:alpha:]]/` searches for and prints records that contain alphabetic characters.
|
If there's exactly one blank line after each paragraph you can use a long `awk` oneliner (three patterns, so probably not a oneliner actually):
```
$ echo '1
19:22
abcde
2
19:23
3
19:24
abbff
4
19:25
abbc
' | awk '/[^[:space:]]/ { accum = accum $0 "\n" } /^[[:space:]]*$/ { if(on) print accum $0; on = 0; accum = "" } /[[:alpha:]]/ { on = 1 }'
1
19:22
abcde
3
19:24
abbff
4
19:25
abbc
```
The idea is to accumulate non-blank lines, setting a flag once an alphabetic character is found; on a blank input line, flush the whole accumulated paragraph if that flag is set, then reset the accumulator to the empty string and the flag to zero.
(Note that if the last line of input is not necessarily empty, you might need to add an `END` block that checks whether there is an unflushed paragraph and flushes it as needed.)
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
sed is for doing s/old/new/ on individual lines, **that is all**. For anything else you should be using awk:
```
$ awk -v RS= -v ORS='\n\n' '/[[:alpha:]]/' file
1
19:22
abcde
3
19:24
abbff
4
19:25
abbc
```
The above is simply this:
* `RS=` tells awk the input records are separated by blank lines.
* `ORS='\n\n'` tells awk the output records must also be separated by blank lines.
* `/[[:alpha:]]/` searches for and prints records that contain alphabetic characters.
|
Simple enough in Perl. The secret is to put Perl in "paragraph mode" by setting the input record separator (`$/`) to an empty string. Then we only print records if they contain a letter.
```
#!/usr/bin/perl
use strict;
use warnings;
# Paragraph mode
local $/ = '';
# Read from STDIN a record (i.e. paragraph) at a time
while (<>) {
# Only print records that include a letter
print if /[a-z]/i;
}
```
This is written as a Unix filter, i.e. it reads from `STDIN` and writes to `STDOUT`. So if it's in a file called `filter`, you can call it like this:
```
$ filter < your_input_file > your_output_file
```
Alternatively this is a simple command line script in Perl (`-00` is the command line option to put Perl into paragraph mode):
```
$ perl -00 -ne'print if /[a-z]/' < your_input_file > your_output_file
```
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
Simple enough in Perl. The secret is to put Perl in "paragraph mode" by setting the input record separator (`$/`) to an empty string. Then we only print records if they contain a letter.
```
#!/usr/bin/perl
use strict;
use warnings;
# Paragraph mode
local $/ = '';
# Read from STDIN a record (i.e. paragraph) at a time
while (<>) {
# Only print records that include a letter
print if /[a-z]/i;
}
```
This is written as a Unix filter, i.e. it reads from `STDIN` and writes to `STDOUT`. So if it's in a file called `filter`, you can call it like this:
```
$ filter < your_input_file > your_output_file
```
Alternatively this is a simple command line script in Perl (`-00` is the command line option to put Perl into paragraph mode):
```
$ perl -00 -ne'print if /[a-z]/' < your_input_file > your_output_file
```
|
This might work for you (GNU sed):
```
sed ':a;$!{N;/^$/M!ba};/[[:alpha:]]/!d' file
```
Gather up lines delimited by an empty line or end-of-file and delete the latest collection if it does not contain an alpha character.
This presupposes that the file format is fixed as in the example. To be more accurate use:
```
sed -r ':a;$!{N;/^$/M!ba};/^[1-9][0-9]*\n[0-9]{2}:[0-9]{2}\n[[:alpha:]]+\n?$/!d' file
```
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
Simple enough in Perl. The secret is to put Perl in "paragraph mode" by setting the input record separator (`$/`) to an empty string. Then we only print records if they contain a letter.
```
#!/usr/bin/perl
use strict;
use warnings;
# Paragraph mode
local $/ = '';
# Read from STDIN a record (i.e. paragraph) at a time
while (<>) {
# Only print records that include a letter
print if /[a-z]/i;
}
```
This is written as a Unix filter, i.e. it reads from `STDIN` and writes to `STDOUT`. So if it's in a file called `filter`, you can call it like this:
```
$ filter < your_input_file > your_output_file
```
Alternatively this is a simple command line script in Perl (`-00` is the command line option to put Perl into paragraph mode):
```
$ perl -00 -ne'print if /[a-z]/' < your_input_file > your_output_file
```
|
Similar to the solution of [Ed Morton](https://stackoverflow.com/a/51362559/8344060) but with the following assumptions:
* The text blocks consist of 2 or 3 lines.
* If there is a third line, it contains characters from any alphabet.
In essence, under these conditions we only need to check for a third field:
```
awk 'BEGIN{RS="";ORS="\n\n";FS="\n"}(NF>2)' file
```
or similar without `BEGIN`:
```
awk -v RS= -v ORS='\n\n' -F '\n' '(NF>2)' file
```
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
sed is for doing s/old/new/ on individual lines, **that is all**. For anything else you should be using awk:
```
$ awk -v RS= -v ORS='\n\n' '/[[:alpha:]]/' file
1
19:22
abcde
3
19:24
abbff
4
19:25
abbc
```
The above is simply this:
* `RS=` tells awk the input records are separated by blank lines.
* `ORS='\n\n'` tells awk the output records must also be separated by blank lines.
* `/[[:alpha:]]/` searches for and prints records that contain alphabetic characters.
|
This might work for you (GNU sed):
```
sed ':a;$!{N;/^$/M!ba};/[[:alpha:]]/!d' file
```
Gather up lines delimited by an empty line or end-of-file and delete the latest collection if it does not contain an alpha character.
This presupposes that the file format is fixed as in the example. To be more accurate use:
```
sed -r ':a;$!{N;/^$/M!ba};/^[1-9][0-9]*\n[0-9]{2}:[0-9]{2}\n[[:alpha:]]+\n?$/!d' file
```
|
51,359,254
|
I am using the instaFlights Search API of Sabre. I am unable to get a response for any IATA 3-letter code other than "JFK" and "LAX". Whenever I pass any other IATA code in the request, like "DXB", I receive an error.
Please note that I am working in the test environment and testing the API on the link given below.
[link of API test page](https://developer.sabre.com/io-docs)
The error that I receive every time I request with any other IATA code:
[](https://i.stack.imgur.com/bectU.png)
|
2018/07/16
|
[
"https://Stackoverflow.com/questions/51359254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8874135/"
] |
sed is for doing s/old/new/ on individual lines, **that is all**. For anything else you should be using awk:
```
$ awk -v RS= -v ORS='\n\n' '/[[:alpha:]]/' file
1
19:22
abcde
3
19:24
abbff
4
19:25
abbc
```
The above is simply this:
* `RS=` tells awk the input records are separated by blank lines.
* `ORS='\n\n'` tells awk the output records must also be separated by blank lines.
* `/[[:alpha:]]/` searches for and prints records that contain alphabetic characters.
|
Similar to the solution of [Ed Morton](https://stackoverflow.com/a/51362559/8344060) but with the following assumptions:
* The text blocks consist of 2 or 3 lines.
* If there is a third line, it contains characters from any alphabet.
In essence, under these conditions we only need to check for a third field:
```
awk 'BEGIN{RS="";ORS="\n\n";FS="\n"}(NF>2)' file
```
or similar without `BEGIN`:
```
awk -v RS= -v ORS='\n\n' -F '\n' '(NF>2)' file
```
|
175,148
|
I currently have 10 years' worth of January raster datasets (working on just one month at the moment) of cloud fraction cover from MODIS, and am trying to create 3 new separate rasters which will give me the mean, maximum and minimum values across all 10 years.
I am currently processing this in QGIS and have so far only been able to obtain the mean layer by using the Raster Calculator: first summing all 10 layers (years) for January and subsequently dividing by 10 to obtain a mean-value layer for the 10 years.
I can't seem to find similar solutions here for how to obtain the maximum and minimum values.
|
2015/12/29
|
[
"https://gis.stackexchange.com/questions/175148",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/54179/"
] |
You should probably use per-cell calculators like [r.series](https://grass.osgeo.org/grass64/manuals/r.series.html) in the QGIS processing toolbox (GRASS commands), which can give you mean, max, min, as well as other statistics on your datasets.
|
Starting from QGIS 3.16 you can perform cell-based statistics natively using the [Cell Statistics](https://qgis.org/en/site/forusers/visualchangelog316/index.html#new-cell-statistics-algorithm) tool. It includes mean, minimum, and maximum, as well as other statistical methods.
[](https://i.stack.imgur.com/xDnyy.png)
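Under the hood, per-cell statistics across a raster stack are just element-wise reductions over co-located cells. A tiny pure-Python sketch of the idea (nested lists stand in for the raster bands QGIS would read from disk; the names are mine, for illustration only):

```python
def cell_stats(rasters):
    """Per-cell mean, min and max across a stack of same-sized rasters.
    `rasters` is a list of 2-D grids (lists of rows of numbers)."""
    mean_r, min_r, max_r = [], [], []
    for rows in zip(*rasters):                 # same row index in every raster
        mean_r.append([sum(c) / len(c) for c in zip(*rows)])   # c = one cell's values
        min_r.append([min(c) for c in zip(*rows)])
        max_r.append([max(c) for c in zip(*rows)])
    return mean_r, min_r, max_r
```

Real tools do the same reduction per pixel, just on disk-backed bands and with NoData handling.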
|
175,148
|
I currently have 10 years' worth of January raster datasets (working on just one month at the moment) of cloud fraction cover from MODIS, and am trying to create 3 new separate rasters which will give me the mean, maximum and minimum values across all 10 years.
I am currently processing this in QGIS and have so far only been able to obtain the mean layer by using the Raster Calculator: first summing all 10 layers (years) for January and subsequently dividing by 10 to obtain a mean-value layer for the 10 years.
I can't seem to find similar solutions here for how to obtain the maximum and minimum values.
|
2015/12/29
|
[
"https://gis.stackexchange.com/questions/175148",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/54179/"
] |
You should probably use per-cell calculators like [r.series](https://grass.osgeo.org/grass64/manuals/r.series.html) in the QGIS processing toolbox (GRASS commands), which can give you mean, max, min, as well as other statistics on your datasets.
|
Another option could be **Statistics for rasters** from SAGA in QGIS 3.x.
[](https://i.stack.imgur.com/XXHlU.png)
|
175,148
|
I currently have 10 years' worth of January raster datasets (working on just one month at the moment) of cloud fraction cover from MODIS, and am trying to create 3 new separate rasters which will give me the mean, maximum and minimum values across all 10 years.
I am currently processing this in QGIS and have so far only been able to obtain the mean layer by using the Raster Calculator: first summing all 10 layers (years) for January and subsequently dividing by 10 to obtain a mean-value layer for the 10 years.
I can't seem to find similar solutions here for how to obtain the maximum and minimum values.
|
2015/12/29
|
[
"https://gis.stackexchange.com/questions/175148",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/54179/"
] |
Starting from QGIS 3.16 you can perform cell-based statistics natively using the [Cell Statistics](https://qgis.org/en/site/forusers/visualchangelog316/index.html#new-cell-statistics-algorithm) tool. It includes mean, minimum, and maximum, as well as other statistical methods.
[](https://i.stack.imgur.com/xDnyy.png)
|
Another option could be **Statistics for rasters** from SAGA in QGIS 3.x.
[](https://i.stack.imgur.com/XXHlU.png)
|
11,395,667
|
What is the minimum configuration to run the JVM?
The computer I work has the following settings:
`MS Windows XP Professional SP3`
`Intel Celeron 2.26GHz CPU, 959MB RAM, VIA/S3G UniChrome Pro IGP`
I use a Java-based program and I think it is causing this latency, besides having to use Internet Explorer to access the intranet (which only allows the use of IE). So I wonder what the minimum configuration to run the JVM is, so that my computer can be replaced.
Thanks in advance.
*Sorry for my bad English.*
|
2012/07/09
|
[
"https://Stackoverflow.com/questions/11395667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/939803/"
] |
It depends on what system and what version of the JDK you use.
See <http://www.java.com/en/download/help/sysreq.xml>
For Windows XP: 128 MB.
|
The more RAM the better; the processor and GPU are largely irrelevant. My workstation has 8 GB of RAM and it's OK for running a Tomcat server, a database server, Firefox with dev add-ons (it can sometimes use around 1 GB of RAM), Eclipse and some other tools.
The more RAM you have, the more you can give to the apps executed in the JVM (the `-Xmx` and `-Xms` options).
|
11,395,667
|
What is the minimum configuration to run the JVM?
The computer I work has the following settings:
`MS Windows XP Professional SP3`
`Intel Celeron 2.26GHz CPU, 959MB RAM, VIA/S3G UniChrome Pro IGP`
I use a Java-based program and I think it is causing this latency, besides having to use Internet Explorer to access the intranet (which only allows the use of IE). So I wonder what the minimum configuration to run the JVM is, so that my computer can be replaced.
Thanks in advance.
*Sorry for my bad English.*
|
2012/07/09
|
[
"https://Stackoverflow.com/questions/11395667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/939803/"
] |
It depends on what system and what version of the JDK you use.
See <http://www.java.com/en/download/help/sysreq.xml>
For Windows XP: 128 MB.
|
Well, first you must consider the version of the JVM you want to run; as with any other software, the later the version, the more resources it "eats". Therefore, supposing you're on JVM 7, consider this table from the Oracle website:
[Java System requirements](http://www.java.com/en/download/help/sysreq.xml)
If you have any other questions, feel free to ask.
|
13,402,260
|
The same question as here:
[question](https://stackoverflow.com/questions/6996375/how-to-filter-datagridview-using-a-textbox-in-c)
But in my case my dataGridView1.DataSource is null.
I use the dataGridView1.Rows.Add function to add rows to the table.
Is it possible to filter a column of the dataGridView using a textbox without a DataSource?
|
2012/11/15
|
[
"https://Stackoverflow.com/questions/13402260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/242388/"
] |
It may be possible to filter a datagrid that doesn't have a datasource, but I suspect it isn't.
Regardless, an easier solution would be to just give the grid a datasource. Rather than programmatically adding rows to the datagrid, create a `DataTable` and add rows to it, then set the grid's data source to that table. Now you can use standard filtering methods.
|
Instead of adding rows to the datagridview, create a `DataTable`, add rows to it and bind it to the datagridview. Now you can use the TextBox to search.
```
DataTable table = new DataTable();
table.Columns.Add("Column_Name1", typeof(String));
table.Columns.Add("Column_Name2", typeof(String));
......
foreach (var element in list)
table.Rows.Add(element.Column_Name1, element.Column_Name2, ...);
dataGridView1.DataSource = table;
table.DefaultView.RowFilter = "Column_Name1 LIKE '%" + TextBox.Text + "%'";
```
|
13,402,260
|
The same question as here:
[question](https://stackoverflow.com/questions/6996375/how-to-filter-datagridview-using-a-textbox-in-c)
But in my case my dataGridView1.DataSource is null.
I use the dataGridView1.Rows.Add function to add rows to the table.
Is it possible to filter a column of the dataGridView using a textbox without a DataSource?
|
2012/11/15
|
[
"https://Stackoverflow.com/questions/13402260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/242388/"
] |
Or you can use this code:
```
if (textBox1.Text != string.Empty)
{
foreach (DataGridViewRow row in dataGridView1.Rows)
{
            if (row.Cells[column_index].Value.ToString().Trim().Contains(textBox1.Text.Trim()))
{
row.Visible = true;
}
else
row.Visible = false;
}
}
```
|
It may be possible to filter a datagrid that doesn't have a datasource, but I suspect it isn't.
Regardless, an easier solution would be to just give the grid a datasource. Rather than programmatically adding rows to the datagrid, create a `DataTable` and add rows to it, then set the grid's data source to that table. Now you can use standard filtering methods.
|
13,402,260
|
The same question as here:
[question](https://stackoverflow.com/questions/6996375/how-to-filter-datagridview-using-a-textbox-in-c)
But in my case my dataGridView1.DataSource is null.
I use the dataGridView1.Rows.Add function to add rows to the table.
Is it possible to filter a column of the dataGridView using a textbox without a DataSource?
|
2012/11/15
|
[
"https://Stackoverflow.com/questions/13402260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/242388/"
] |
It may be possible to filter a datagrid that doesn't have a datasource, but I suspect it isn't.
Regardless, an easier solution would be to just give the grid a datasource. Rather than programmatically adding rows to the datagrid, create a `DataTable` and add rows to it, then set the grid's data source to that table. Now you can use standard filtering methods.
|
You can also try LINQ:
```
private void filter()
{
if (this.txtsearch.Text != string.Empty)
this.dataGridView1.Rows.OfType<DataGridViewRow>().Where(r => r.Cells["column_name"].Value.ToString() == this.txtsearch.Text.Trim()).ToList().ForEach(row => { if (!row.IsNewRow) row.Visible = false; });
else
this.dataGridView1.Rows.OfType<DataGridViewRow>().ToList().ForEach(row => { if (!row.IsNewRow) row.Visible = true; });
}
```
|
13,402,260
|
The same question as here:
[question](https://stackoverflow.com/questions/6996375/how-to-filter-datagridview-using-a-textbox-in-c)
But in my case my dataGridView1.DataSource is null.
I use the dataGridView1.Rows.Add function to add rows to the table.
Is it possible to filter a column of the dataGridView using a textbox without a DataSource?
|
2012/11/15
|
[
"https://Stackoverflow.com/questions/13402260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/242388/"
] |
It may be possible to filter a datagrid that doesn't have a datasource, but I suspect it isn't.
Regardless, an easier solution would be to just give the grid a datasource. Rather than programmatically adding rows to the datagrid, create a `DataTable` and add rows to it, then set the grid's data source to that table. Now you can use standard filtering methods.
|
What @Rakesh Roa M did was good, but I took it a step further: firstly to make it work in VB.NET, and also to make it case-insensitive.
Like you, I'm adding content programmatically as well.
It's certainly not the best way to do this if you have thousands of entries in a DataGridView, due to its inefficiency, but if the record set is in the hundreds this should work fine.
```
If TextBox_filter.Text <> String.Empty Then
For Each row As DataGridViewRow In DataGridView1.Rows
If row.Cells("your_column_name").Value.ToString().ToUpper().Contains(TextBox_filter.Text.ToUpper().Trim()) Then
row.Visible = True
Else
row.Visible = False
End If
Next
End If
```
|