qid int64 1 74.7M | question stringlengths 15 58.3k | date stringlengths 10 10 | metadata list | response_j stringlengths 4 30.2k | response_k stringlengths 11 36.5k |
|---|---|---|---|---|---|
56,104 | I want to install **Oracle's JRE** and update to the latest version with the Software Updater when a new one is released. Is there an Ubuntu package provided by Canonical or Oracle?
Before the release of Java 7, I followed [this way](https://askubuntu.com/questions/5459/how-can-i-install-the-latest-version-of-sun-java-jre) to install **Java 6**.
But it doesn't work for **Java 7**. There is no package `sun-java7-xxx`. How can I install **Java 7**? | 2011/08/07 | [
"https://askubuntu.com/questions/56104",
"https://askubuntu.com",
"https://askubuntu.com/users/5717/"
] | This worked for my development needs of being able to run/compile 1.6 or 1.7. Previously I was just running 1.6 from an Ubuntu apt package.
1. Download [1.7 gzip](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html).
2. Extract to a folder
3. Update JAVA\_HOME and PATH in your bash profile
```
JAVA_HOME=/opt/java/jdk1.7.0_25
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH
``` | OS: Ubuntu 18.04 LTS
I am surprised no one has mentioned `conda`. Link:
<https://docs.conda.io/en/latest/miniconda.html>
I installed java in one of my conda environments and used the `java` command without problems. |
56,104 | I want to install **Oracle's JRE** and update to the latest version with the Software Updater when a new one is released. Is there an Ubuntu package provided by Canonical or Oracle?
Before the release of Java 7, I followed [this way](https://askubuntu.com/questions/5459/how-can-i-install-the-latest-version-of-sun-java-jre) to install **Java 6**.
But it doesn't work for **Java 7**. There is no package `sun-java7-xxx`. How can I install **Java 7**? | 2011/08/07 | [
"https://askubuntu.com/questions/56104",
"https://askubuntu.com",
"https://askubuntu.com/users/5717/"
] | >
> **Note:** The WebUpd8 team's PPA has been discontinued effective April 16, 2019, so it no longer contains any Java files. More information can be found on the [PPA's page on Launchpad](https://launchpad.net/~webupd8team/+archive/ubuntu/java). This method no longer works and is kept here for historical reasons.
>
>
>
I appreciate all the previous answers. I want to add this answer to simplify things, using the PPA from [www.webupd8.org](http://www.webupd8.org), which makes installation take **2-5 minutes**.
This installation includes:
```
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
```
That's all!!
Now to check the Java version
```
java -version
```
The output will be like
```
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) Server VM (build 23.25-b01, mixed mode)
```
When a new version comes out, you can simply update to it with this command:
```
sudo update-java-alternatives -s java-7-oracle
```
**Setting up environment variables**
```
sudo apt-get install oracle-java7-set-default
```
For more, check out [Install Oracle Java 7 in Ubuntu via PPA Repository](http://www.webupd8.org/2012/01/install-oracle-java-jdk-7-in-ubuntu-via.html). | Get the JDK from Oracle/Sun; download the Java JDK at:
<http://www.oracle.com/technetwork/java/javase/overview/index.html>
Please download or move the downloaded file to your home directory, `~`, for ease.
Note:
* Don't worry about what JDK to download for JEE.
* Do not copy the prompt `user@host:~$`.
* Hit enter after each command.
Run in a terminal:
```
user@host:~$ sudo mkdir -p /usr/lib/jvm/
user@host:~$ sudo mv jdk-7u4-linux-i586.tar.gz /usr/lib/jvm/
user@host:~$ cd /usr/lib/jvm/
user@host:~$ sudo tar zxvf jdk-7u4-linux-i586.tar.gz
```
Now enable Java (by running individually):
```
user@host:~$ sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0_04/bin/java" 1
user@host:~$ sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0_04/bin/javac" 1
user@host:~$ sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0_04/bin/javaws" 1
```
Close all browsers.
Create a Mozilla plugins folder in your home directory:
```
user@host:~$ mkdir ~/.mozilla/plugins/
```
Create a symbolic link to your Mozilla plugins folder. For 64-bit systems, replace ‘i386’ with ‘amd64’:
```
user@host:~$ ln -s /usr/lib/jvm/jdk1.7.0_04/jre/lib/i386/libnpjp2.so ~/.mozilla/plugins/
```
Testing:
```
user@host:~$ java -version
```
Output:
```
java version "1.7.0_04"
Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
Java HotSpot(TM) Server VM (build 23.0-b21, mixed mode)
```
Testing:
```
user@host:~$ javac -version
```
Output:
```
javac 1.7.0_04
```
Verify JRE at <http://java.com/en/download/installed.jsp>. |
27,318,857 | So I'm developing a website using PHP, MySQL and JavaScript, and also 'sha512' to encrypt passwords of members using the code:
```
$password = filter_input(INPUT_POST, 'p', FILTER_SANITIZE_STRING);
$random_salt = hash('sha512', uniqid(mt_rand(1, mt_getrandmax()), true));
$password = hash('sha512', $password . $random_salt);
```
The `p` value is coming from:
```
function formhash(form) {
var password = randomString();
var p = document.createElement("input");
form.appendChild(p);
p.name = "p";
p.type = "hidden";
p.value = hex_sha512(password.value);
password.value = "";
form.submit();
}
function randomString() {
var text = "";
var possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
for( var i=0; i < 9; i++ )
text += possible.charAt(Math.floor(Math.random() * possible.length));
return text;
}
```
My idea here is to reset a user's password by having them enter their email, generating 8 random characters, and then sending those directly to their email.
The problem I'm facing now is how to get the actual password (not encrypted) that has been generated so it can be automatically sent to the email of the member who requested to reset their password? | 2014/12/05 | [
"https://Stackoverflow.com/questions/27318857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4329172/"
] | Good question.
**First, you should never send users their passwords in plaintext.** It's considered a bad security practice for a few reasons. First, if anyone gets access to the email, they have the password and can hijack the user account. Second, hashing is a one-way form of encryption that turns the password into gibberish. The big value in hashing is that the same password will always be turned into the same gibberish, every time. This means you can do password matching without ever storing the raw password. The reason you're supposed to hash a password rather than use 2-way encryption like AES-256 is that 2-way encryption requires the creation, management, and securing of encryption keys, which can be hard. Hashing is just easier and more secure for the vast majority of developers.
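To illustrate the "same password always produces the same gibberish" point, here is a small Python sketch (a hypothetical helper, not the PHP code above) of verifying a password against a stored salted digest without ever storing the raw password:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # SHA-512 over salt + password; deterministic for a fixed salt.
    return hashlib.sha512(salt + password.encode("utf-8")).hexdigest()

salt = os.urandom(16)                    # fresh random salt per user
stored = hash_password("hunter2", salt)  # this digest (plus the salt) is all you store

# Verification: re-hash the submitted password with the stored salt.
assert hash_password("hunter2", salt) == stored   # correct password matches
assert hash_password("hunter3", salt) != stored   # wrong password does not
```

Only the salt and the digest are persisted; the raw password never needs to be written anywhere.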
**So how should you implement password reset if you can't send the raw password?**
You send the user an email with a link to a secure reset page AND a one-time use reset token that expires within a certain window. This way, if someone gets access to the email, the risk is limited to that short window.
There are a variety of ways to build this yourself but an easy approach to getting a one-time use token you don't have to store or manage is to offload user management to a microservice like Stormpath where it takes care of all the user management for you-- password reset, password storage, user profiles, authentication, etc.
For password reset here's what it would look like:
User initiates password reset work on a web page
1. You make API call to stormpath with user's email address or username
2. Stormpath sends out a reset email to the user (your "from" address, custom HTML, etc) with a link + token. The reset token is unique, one-time use, and expires if not used within 24 hours
3. User clicks on the link and lands on the reset page
4. You pull the token from the URL and check Stormpath for token verification
5. User submits new password
6. Stormpath sends out reset success message (your "from" address, custom HTML, etc)
You can build your own UIs in this flow so the user never knows Stormpath exists.
Now, you don't have to manage, store, or secure any passwords or reset tokens in your database.
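Independent of any particular service, the token flow described above can be sketched in a few lines of Python (the names, in-memory storage, and 24-hour window are illustrative assumptions, not Stormpath's API):

```python
import secrets
import time

RESET_TTL = 24 * 60 * 60  # tokens expire after 24 hours

# token -> (user_id, issue_time); in practice this lives in your datastore
pending_resets = {}

def issue_reset_token(user_id: str) -> str:
    token = secrets.token_urlsafe(32)    # unguessable, URL-safe
    pending_resets[token] = (user_id, time.time())
    return token                         # embed this in the emailed link

def redeem_reset_token(token: str):
    """Return the user_id if the token is valid, else None. One-time use."""
    entry = pending_resets.pop(token, None)  # pop => single use
    if entry is None:
        return None
    user_id, issued = entry
    if time.time() - issued > RESET_TTL:
        return None                      # expired
    return user_id
```

A second redeem of the same token fails because the first `pop` already removed it.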
Here are some links to the community-managed PHP SDK.
<http://docs.stormpath.com/php/quickstart/>
<http://docs.stormpath.com/php/product-guide/>
Full Disclosure - I work at Stormpath | >
> and also 'sha512' to encrypt passwords
>
>
>
You're not encrypting them, you're hashing them. A hash is a one-way function. You can't take the result of a hash function and get the original. There are many possible original chunks of data that can result in the same hash.
The whole point of hashing in this context is to be able to check passwords without ever actually storing the user's password. You shouldn't send the user their password in e-mail, as e-mail is sent over the internet unencrypted. If you must have the original pre-hashed data for some reason, you must store it before you hash it. |
10,661,807 | Loading textures from viewDidLoad works fine. But if I try to load them from the `GLKViewController` update I get an error. I do this because I want to swap in a new background texture without changing view.
This was working before the last upgrade. Maybe I was being lucky with timings. I suspect that it is failing because some thread is busy or something?
Here is the error in full.
**Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x10b5b510 {GLKTextureLoaderGLErrorKey=1282, GLKTextureLoaderErrorKey=OpenGL error}**
So the question is, can I safely load a texture from the `GLKViewController` update function? Or do I need to rethink my approach and reload the whole view or something?
Here is my function:
```
-(void) LoadTexture:(NSString *)texture textureInfo:(GLKTextureInfo**)textureInfo
{
NSString *path = [[NSBundle mainBundle] pathForResource:texture ofType:@"png"];
NSError *error = nil;
(*textureInfo) = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&error];
NSLog(@"path %@", path);
if(!(*textureInfo))
{
NSLog(@"Failed to load texture %@ %@", texture, error);
}
else
{
NSLog(@"LOADED Texture %@ !!! YAY!!! ", texture);
}
}
```
Thanks,
David | 2012/05/19 | [
"https://Stackoverflow.com/questions/10661807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1046229/"
] | I had a problem like this, and the workaround was loading the texture without the GLKTextureLoader.
Here is some code for loading the texture without GLKTextureLoader:
```
bool lPowerOfTwo = false;
UIImage *image = [UIImage imageNamed:@"texture.png"];
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGRect bounds=CGRectMake( 0, 0, width, height );
CGContextScaleCTM(context, 1, -1);
bounds.size.height = bounds.size.height*-1;
CGContextDrawImage(context, bounds, image.CGImage);
GLuint lTextId;
glGenTextures(1, &lTextId);
glBindTexture(GL_TEXTURE_2D, lTextId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
if(!lPowerOfTwo)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glGenerateMipmap(GL_TEXTURE_2D);
}else
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
}
CGContextRelease(context);
free(imageData);
```
The lTextId variable has the OpenGL texture id.
Note: if the texture dimensions are not a power of two, the texture will be shown black if GL\_TEXTURE\_WRAP\_S and \_T are not set to GL\_CLAMP\_TO\_EDGE | I had a similar problem to you. What I did to fix the problem was to have a class which held all of the textures I wanted to use for the whole game. In `viewDidLoad:` I initialised the class and loaded all of the textures. When I needed to use any of the textures, they were already loaded and the problem didn't occur.
eg. In `viewDidLoad`
```
GameTextures *textures = [[GameTextures alloc] init];
[textures LoadAll];
```
LoadAll would load all the textures for later use
Then when you need to use a texture
```
[myBackground setTexture: textures.backgroundTexture2];
```
Hope this helped :) |
10,661,807 | Loading textures from viewDidLoad works fine. But if I try to load them from the `GLKViewController` update I get an error. I do this because I want to swap in a new background texture without changing view.
This was working before the last upgrade. Maybe I was being lucky with timings. I suspect that it is failing because some thread is busy or something?
Here is the error in full.
**Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x10b5b510 {GLKTextureLoaderGLErrorKey=1282, GLKTextureLoaderErrorKey=OpenGL error}**
So the question is, can I safely load a texture from the `GLKViewController` update function? Or do I need to rethink my approach and reload the whole view or something?
Here is my function:
```
-(void) LoadTexture:(NSString *)texture textureInfo:(GLKTextureInfo**)textureInfo
{
NSString *path = [[NSBundle mainBundle] pathForResource:texture ofType:@"png"];
NSError *error = nil;
(*textureInfo) = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&error];
NSLog(@"path %@", path);
if(!(*textureInfo))
{
NSLog(@"Failed to load texture %@ %@", texture, error);
}
else
{
NSLog(@"LOADED Texture %@ !!! YAY!!! ", texture);
}
}
```
Thanks,
David | 2012/05/19 | [
"https://Stackoverflow.com/questions/10661807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1046229/"
] | I had a problem like this, and the workaround was loading the texture without the GLKTextureLoader.
Here is some code for loading the texture without GLKTextureLoader:
```
bool lPowerOfTwo = false;
UIImage *image = [UIImage imageNamed:@"texture.png"];
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGRect bounds=CGRectMake( 0, 0, width, height );
CGContextScaleCTM(context, 1, -1);
bounds.size.height = bounds.size.height*-1;
CGContextDrawImage(context, bounds, image.CGImage);
GLuint lTextId;
glGenTextures(1, &lTextId);
glBindTexture(GL_TEXTURE_2D, lTextId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
if(!lPowerOfTwo)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glGenerateMipmap(GL_TEXTURE_2D);
}else
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
}
CGContextRelease(context);
free(imageData);
```
The lTextId variable has the OpenGL texture id.
Note: if the texture dimensions are not a power of two, the texture will be shown black if GL\_TEXTURE\_WRAP\_S and \_T are not set to GL\_CLAMP\_TO\_EDGE | I was seeing this same behavior, which was caused by an unrelated error. Fix the error and the texture should load properly. See this thread: [GLKTextureLoader fails when loading a certain texture the first time, but succeeds the second time](https://stackoverflow.com/questions/8611063/glktextureloader-fails-when-loading-a-certain-texture-the-first-time-but-succee) |
10,661,807 | Loading textures from viewDidLoad works fine. But if I try to load them from the `GLKViewController` update I get an error. I do this because I want to swap in a new background texture without changing view.
This was working before the last upgrade. Maybe I was being lucky with timings. I suspect that it is failing because some thread is busy or something?
Here is the error in full.
**Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x10b5b510 {GLKTextureLoaderGLErrorKey=1282, GLKTextureLoaderErrorKey=OpenGL error}**
So the question is, can I safely load a texture from the `GLKViewController` update function? Or do I need to rethink my approach and reload the whole view or something?
Here is my function:
```
-(void) LoadTexture:(NSString *)texture textureInfo:(GLKTextureInfo**)textureInfo
{
NSString *path = [[NSBundle mainBundle] pathForResource:texture ofType:@"png"];
NSError *error = nil;
(*textureInfo) = [GLKTextureLoader textureWithContentsOfFile:path options:nil error:&error];
NSLog(@"path %@", path);
if(!(*textureInfo))
{
NSLog(@"Failed to load texture %@ %@", texture, error);
}
else
{
NSLog(@"LOADED Texture %@ !!! YAY!!! ", texture);
}
}
```
Thanks,
David | 2012/05/19 | [
"https://Stackoverflow.com/questions/10661807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1046229/"
] | I had a problem like this, and the workaround was loading the texture without the GLKTextureLoader.
Here is some code for loading the texture without GLKTextureLoader:
```
bool lPowerOfTwo = false;
UIImage *image = [UIImage imageNamed:@"texture.png"];
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGRect bounds=CGRectMake( 0, 0, width, height );
CGContextScaleCTM(context, 1, -1);
bounds.size.height = bounds.size.height*-1;
CGContextDrawImage(context, bounds, image.CGImage);
GLuint lTextId;
glGenTextures(1, &lTextId);
glBindTexture(GL_TEXTURE_2D, lTextId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
if(!lPowerOfTwo)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
glGenerateMipmap(GL_TEXTURE_2D);
}else
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
}
CGContextRelease(context);
free(imageData);
```
The lTextId variable has the OpenGL texture id.
Note: if the texture dimensions are not a power of two, the texture will be shown black if GL\_TEXTURE\_WRAP\_S and \_T are not set to GL\_CLAMP\_TO\_EDGE | I had almost the same error:
>
> Error Domain=GLKTextureLoaderErrorDomain Code=8 "(null)" UserInfo={GLKTextureLoaderGLErrorKey=1282, GLKTextureLoaderErrorKey=OpenGLES Error.}
>
>
>
It is caused by switching between programs. An OpenGL ES error breakpoint is hit if you try to call glUniform1i with a program which is not currently in use.
Fixed by using the correct program, which avoids triggering any error breakpoint. |
58,702,568 | I want to start by saying I am very new (1 week) into learning C#, so I sincerely apologize if this question is obvious. I do understand the reason for the exception: `diverse.Length - 2` has to be 0 or greater. However, I am required to have the formula in my code so as to get the numbered positions of the last 2 characters of an ever-changing string (diverse).
Below, are 3 sets of code....
Firstly my working method.
```
static void Main(string[] args)
{
try
{
// Sample data - inputs 3 ints.
Console.WriteLine(Solution1(6, 1, 1));
Console.WriteLine(Solution1(1, 3, 1));
Console.WriteLine(Solution1(0, 1, 8));
Console.WriteLine(Solution1(5, 2, 4));
}
catch (Exception e)
{
Console.WriteLine(e.Message);
}
}
static string Solution1(int A, int B, int C)
{
string a = "a"; // a/b/c are added to string diverse when needed.
string b = "b";
string c = "c";
string diverse = "";
int totalLength = A + B + C; // Length of all 3 arrays
for(int i = 1; i <= totalLength; i++)
{
if (A >= B && A >= C && A > 0) { diverse = diverse + a; A = A - 1; }
if (B >= A && B >= C && B > 0) { diverse = diverse + b; B = B - 1; }
if (C >= A && C >= B && C > 0) { diverse = diverse + c; C = C - 1; }
}
return diverse;
}
```
*What I am trying to do is add an additional check to my code. This check will take the printed letters and check to see if 2 of the same letter have previously been printed. If so, it will not print a 3rd. To do this I made a solution that would find the last 2 characters of the string (as I mentioned above) and compare it to the conditional check in the if statement.*
Below is the code with this additional check, that I need to get working...
```
static string Solution1(int A, int B, int C)
{
string a = "a"; // a/b/c are added to string diverse when needed.
string b = "b";
string c = "c";
string diverse = "";
int totalLength = A + B + C; // Length of all 3 arrays
for (int i = 1; i <= totalLength; i++)
{
// Finds the last 2 characters in the diverse string.
int LastTwoChars = diverse.Length - 2;
string twoCharCheck = diverse.Substring(LastTwoChars, 2);
if (A > 0 && B < 2 && C < 2 && twoCharCheck != "aa")
{
diverse = diverse + a; A = A - 1;
}
if (B > 0 && A < 2 && C < 2 && twoCharCheck != "bb")
{
diverse = diverse + b; B = B - 1;
}
if (C > 0 && B < 2 && A < 2 && twoCharCheck != "cc")
{
diverse = diverse + c; C = C - 1;
}
}
return diverse;
}
``` | 2019/11/04 | [
"https://Stackoverflow.com/questions/58702568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6441355/"
] | You can have the string check only after you have at least 2 characters:
```cs
int LastTwoChars = diverse.Length - 2;
string twoCharCheck = LastTwoChars >= 0 ? diverse.Substring(LastTwoChars, 2) : string.Empty; // identical with if (LastTwoChars >= 0) twoCharCheck = diverse.Substring(LastTwoChars, 2); else twoCharCheck = string.Empty
if (A > 0 && B < 2 && C < 2 && (LastTwoChars < 0 || twoCharCheck != "aa")) // check the end only if you have at least 2 chars;
{
...
}
....
```
As you can see the code gets ugly very fast, so I would propose using a helper method:
```cs
// Returns the last two chars of the input, or a caller-supplied fallback.
// Note: "default" is a reserved keyword in C#, so the parameter is renamed.
private static string LastTwoCharsOrDefault(string input, string fallback)
{
    var lastTwoCharsIdx = input.Length - 2;
    if (lastTwoCharsIdx >= 0)
    {
        return input.Substring(lastTwoCharsIdx, 2);
    }
    // we don't have at least 2 chars, so just return the fallback
    return fallback;
}
```
Then you can change your code like this:
```cs
if (A > 0 && B < 2 && C < 2 && !string.Equals("aa", LastTwoCharsOrDefault(diverse, string.Empty), StringComparison.OrdinalIgnoreCase))
...
```
You can find more about string comparison [here](https://stackoverflow.com/questions/44288/differences-in-string-compare-methods-in-c-sharp). | If I read your code right, you initialize Diverse as: `string diverse = "";`
and then execute `string twoCharCheck = diverse.Substring(LastTwoChars, 2);`
`LastTwoChars` is going to be -2 the first time through the loop. |
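As a cross-language aside, the same "last two characters or a fallback" guard is trivial with negative-index slicing; a Python sketch (illustrative only, not part of the C# fix):

```python
def last_two_or_default(s: str, default: str = "") -> str:
    # Slicing never raises: "a"[-2:] == "a" and ""[-2:] == "".
    return s[-2:] if len(s) >= 2 else default

assert last_two_or_default("") == ""        # too short: fallback
assert last_two_or_default("a") == ""       # too short: fallback
assert last_two_or_default("abba") == "ba"  # last two characters
```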
589,962 | Hi, I tried to delete/remove gstreamer from my Ubuntu. In the process I think I deleted some GNOME plugin or some other important plugin. Now every time I restart my computer, the icon bar resets. (The vertical bar on the left-hand side.)
What package should I install to make this as before ? Or is there a reset settings or packages anywhere ?
I'm new to Linux so please forgive my wrong lingo. I'm using Ubuntu 14.04.
As seen below I have pinned some icons I want i.e. eclipse and chrome to the launcher.
Now when I restart the computer, all the default stuff comes back, i.e. Firefox, LibreOffice, etc.
Edit :
I tried the command **setsid unity**
After trying this command I could see a lot of error messages on the command line, and it continued to do something, i.e. it did not quit the command execution. The following file contains the errors and the output.
<https://www.dropbox.com/s/bp4mmntgz5gi8s3/keep.txt?dl=0> | 2015/02/25 | [
"https://askubuntu.com/questions/589962",
"https://askubuntu.com",
"https://askubuntu.com/users/373742/"
Since `unity --reset` is deprecated and hasn't worked since 12.10, we have to reset Unity (and Compiz) manually if for some reason we don't want to install `unity-tweak-tool`.
First let's make sure we have the tools to do the job.
`sudo apt-get install dconf-tools`
after installation is complete
If you want a listing of your current Compiz settings (before returning them to defaults) you can issue the command `dconf dump /org/compiz/`
we will issue the command
`dconf reset -f /org/compiz/`
To restart unity and apply the changes immediately issue the command
`setsid unity` or reboot.
Sources:
<http://www.webupd8.org/2012/10/how-to-reset-compiz-and-unity-in-ubuntu.html>
<https://github.com/phanimahesh/unity-revamp> | I struggled a lot with this. And finally found in a blog somewhere that I might need to update Dconf.
So I downloaded the latest version of dconf (dconf-0.22.0) and manually configured and installed it.
This solved the problem. The main reason this was happening is because, it was not able to store the settings anywhere i.e. it was storing the settings in the memory. As result they always got deleted on restart. Hope this helps others. |
589,962 | Hi, I tried to delete/remove gstreamer from my Ubuntu. In the process I think I deleted some GNOME plugin or some other important plugin. Now every time I restart my computer, the icon bar resets. (The vertical bar on the left-hand side.)
What package should I install to make this as before ? Or is there a reset settings or packages anywhere ?
I'm new to Linux so please forgive my wrong lingo. I'm using Ubuntu 14.04.
As seen below I have pinned some icons I want i.e. eclipse and chrome to the launcher.
Now when I restart the computer, all the default stuff comes back, i.e. Firefox, LibreOffice, etc.
Edit :
I tried the command **setsid unity**
After trying this command I could see a lot of error messages on the command line, and it continued to do something, i.e. it did not quit the command execution. The following file contains the errors and the output.
<https://www.dropbox.com/s/bp4mmntgz5gi8s3/keep.txt?dl=0> | 2015/02/25 | [
"https://askubuntu.com/questions/589962",
"https://askubuntu.com",
"https://askubuntu.com/users/373742/"
Since `unity --reset` is deprecated and hasn't worked since 12.10, we have to reset Unity (and Compiz) manually if for some reason we don't want to install `unity-tweak-tool`.
First let's make sure we have the tools to do the job.
`sudo apt-get install dconf-tools`
after installation is complete
If you want a listing of your current Compiz settings (before returning them to defaults) you can issue the command `dconf dump /org/compiz/`
we will issue the command
`dconf reset -f /org/compiz/`
To restart unity and apply the changes immediately issue the command
`setsid unity` or reboot.
Sources:
<http://www.webupd8.org/2012/10/how-to-reset-compiz-and-unity-in-ubuntu.html>
<https://github.com/phanimahesh/unity-revamp> | While the OP real issue might differ from mine, the OP title fits well enough and google guided me here while searching for solution.
To rephrase the problem: **all unity settings lost after restart**. Not only sidebar icons, but also desktop wallpaper, custom keyboard shortcuts, etc.
The problem was very simple: a conf file `~/.config/dconf/user` somehow got **owned by root** during system upgrade.
Solution is obvious: change owner back to your user and re-login (no restart needed). To be more clear:
```
sudo chown $USER:$USER ~/.config/dconf/user
``` |
42,616,572 | This is a follow-up to another question I had, and it's in regards to the
**this->next = NULL** pointer inside the HashNode constructor below.
My question is: I can't see why **htable[hash\_val]->next** does not equal NULL and instead actually has a memory address, even though **this->next = NULL** is written in the constructor.
Can anybody tell me why **htable[hash\_val]->next** doesn't equal NULL and has an address associated with it? I can't seem to find the reason after looking for a while. I can see that **htable[hash\_val]** will have a value, but I would think **htable[hash\_val]->next** would be NULL. Thanks.
```
#include<iostream>
using namespace std;
const int TABLE_SIZE = 128;
class HashNode
{
public:
int key;
int value;
HashNode* next;
HashNode(int key, int value)
{
this->key = key;
this->value = value;
this->next = NULL; // shouldn't htable[hash_val]->next = NULL
}
};
class HashMap
{
private:
HashNode** htable;
public:
HashMap()
{
htable = new HashNode*[TABLE_SIZE];
for (int i = 0; i < TABLE_SIZE; i++)
htable[i] = NULL;
}
int HashFunc(int key)
{
return key % TABLE_SIZE;
}
/*
* Insert Element at a key
*/
void Insert(int key, int value)
{
int hash_val = HashFunc(key);
HashNode* prev = NULL;
HashNode* entry = htable[hash_val];
while (entry != NULL)
{
prev = entry;
entry = entry->next;
}
if (entry == NULL)
{
entry = new HashNode(key, value);
if (prev == NULL)
{
htable[hash_val] = entry;
}
else
{
prev->next = entry;
}
}
}
void testnull(int key){
int hash_val = HashFunc(key);
cout<<htable[hash_val]->next; // outputs an address , not NULL
}
int Search(int key)
{
bool flag = false;
int hash_val = HashFunc(key);
HashNode* entry = htable[hash_val];
while (entry != NULL)
{
if (entry->key == key)
{
cout<<entry->value<<" ";
flag = true;
}
entry = entry->next;
}
if (!flag)
return -1;
}
};
int main() {
HashMap hash;
hash.Insert(3,7);
hash.Insert(3,8);
hash.testnull(3);
// your code goes here
return 0;
}
``` | 2017/03/06 | [
"https://Stackoverflow.com/questions/42616572",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7659675/"
] | As mentioned before, according to the FAQ you can have 100,000 simultaneous connections per database. If you need more you can use their contact feature and you'll be helped on an individual basis (they might have to shard your db over multiple servers).
As for your loop. Also according to the Firebase FAQ, there is an approximate limit of around 1000 small writes per second. There is no mention of any limit on reads.
As far as the whole searching use case goes. Searching for certain substrings (comparable to an SQL LIKE query, assuming that's what you want) is impossible in a Firebase query unless you're looking for values that start with a certain substring. If you want better search functionality I would recommend looking into search APIs for Firebase. I've seen ElasticSearch mentioned a couple of times in relation to this, but I've never used it so you'll have to do a bit of research.
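That "starts with a certain substring" case is usually expressed as an ordered range query; a Python sketch of the idea (the sentinel character mirrors the common startAt/endAt trick, and this is not actual Firebase API code):

```python
def prefix_matches(values, prefix):
    # Range query: keep v where prefix <= v <= prefix + a high Unicode
    # sentinel, so only strings beginning with the prefix fall in range.
    hi = prefix + "\uf8ff"
    return [v for v in sorted(values) if prefix <= v <= hi]

assert prefix_matches(["apple", "app", "banana"], "app") == ["app", "apple"]
```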
Hope this helps.
Here are the appropriate links:
[Firebase Database limits](https://firebase.google.com/support/faq/#database-limits)
[Firebase Simultaneous connections](https://firebase.google.com/pricing/#faq-simultaneous)
[Firebase LIKE query](https://stackoverflow.com/questions/22506531/how-to-perform-sql-like-operation-on-firebase) | As far as I know, the Firebase client opens a socket connection to the Firebase database. Google says the limit on such connections is 10K simultaneous connections, so if your project exceeds that, open a support ticket to Firebase and they will take care of the limit. Hope that answers your question. |
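To illustrate why the "starts with" case is special, here is a minimal plain-Java sketch (no Firebase SDK involved; the data and the `\uf8ff` sentinel only mirror the startAt/endAt range trick commonly used with Firebase). An ordered index can answer a prefix query as a single range scan, while a mid-string substring cannot be expressed that way:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class PrefixScanDemo {
    public static void main(String[] args) {
        // An ordered index of values, standing in for data ordered by a child key.
        TreeMap<String, Integer> index = new TreeMap<>();
        index.put("alice", 1);
        index.put("alicia", 2);
        index.put("bob", 3);

        // A "starts with" query is a range scan: [prefix, prefix + '\uf8ff').
        // '\uf8ff' is a very high code point, so it upper-bounds every key
        // that begins with the prefix.
        String prefix = "ali";
        SortedMap<String, Integer> hits = index.subMap(prefix, prefix + '\uf8ff');
        System.out.println(hits.keySet()); // [alice, alicia]

        // A mid-string substring such as "lic" cannot be expressed as one
        // contiguous range, which is why external search services are
        // usually recommended for LIKE-style queries.
    }
}
```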
14,133,894 | I have a LinkButton in a GridView. The GridView has a client event, rowselect. When I click the LinkButton, the client event (rowselect) is fired. I want to stop the rowselect client event from firing when I click the LinkButton.
Any solution?
For Example.
GridView has 3 rows, two columns
```
Column 1 ---- Column 2 [LinkButton]
AAAAAA ---- test.aspx?ID=001
BBBBBB ---- test.aspx?ID=002
CCCCCC ---- test.aspx?ID=003
Client Side JavaScript (fired on GridView row selection)
function DisplayData(gridview row)
{
//get data from selected row.
}
```
If I select the first column of a row, the client-side event is invoked, which is fine. But when I click the LinkButton, it should redirect to test.aspx without firing the client-side event. | 2013/01/03 | [
"https://Stackoverflow.com/questions/14133894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1179789/"
] | The culprit is here:
```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
}
```
you increment `topOfS` twice.
Note that the whole code can be replaced by:
```
public class StringReverseChar {
static String mStr=".gnihtemos od ot gniyrT ma I";
public static void main(String ar[]) {
System.out.println(new StringBuilder(mStr).reverse());
}
}
``` | ```
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
```
Why are you incrementing twice? |
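The pre- vs post-increment distinction behind this bug can be demonstrated with a minimal, hypothetical Java sketch (names invented for illustration):

```java
public class IncrementDemo {
    public static void main(String[] args) {
        char[] stack = new char[4];
        int top = -1;

        // Pre-increment: top becomes 0 first, then stack[0] is written.
        stack[++top] = 'a';
        System.out.println(stack[top]); // a

        // Post-increment: stack[top] is read first, THEN top is bumped.
        // Doing both in one push (as in the original pushChar) advances top
        // twice per call, leaving unwritten gaps in the array.
        char read = stack[top++];
        System.out.println(read); // a
        System.out.println(top);  // 1 (an extra increment per push)
    }
}
```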
14,133,894 | I have a LinkButton in a GridView. The GridView has a client event, rowselect. When I click the LinkButton, the client event (rowselect) is fired. I want to stop the rowselect client event from firing when I click the LinkButton.
Any solution?
For Example.
GridView has 3 rows, two columns
```
Column 1 ---- Column 2 [LinkButton]
AAAAAA ---- test.aspx?ID=001
BBBBBB ---- test.aspx?ID=002
CCCCCC ---- test.aspx?ID=003
Client Side JavaScript (fired on GridView row selection)
function DisplayData(gridview row)
{
//get data from selected row.
}
```
If I select the first column of a row, the client-side event is invoked, which is fine. But when I click the LinkButton, it should redirect to test.aspx without firing the client-side event. | 2013/01/03 | [
"https://Stackoverflow.com/questions/14133894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1179789/"
] | The culprit is here:
```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
}
```
you increment `topOfS` twice.
Note that the whole code can be replaced by:
```
public class StringReverseChar {
static String mStr=".gnihtemos od ot gniyrT ma I";
public static void main(String ar[]) {
System.out.println(new StringBuilder(mStr).reverse());
}
}
``` | ```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
}
```
Should be:
```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS]);
}
```
For one. If you make that change, do you still have a problem? Can't tell if that was debugging code, or something that was always there. |
14,133,894 | I have a LinkButton in a GridView. The GridView has a client event, rowselect. When I click the LinkButton, the client event (rowselect) is fired. I want to stop the rowselect client event from firing when I click the LinkButton.
Any solution?
For Example.
GridView has 3 rows, two columns
```
Column 1 ---- Column 2 [LinkButton]
AAAAAA ---- test.aspx?ID=001
BBBBBB ---- test.aspx?ID=002
CCCCCC ---- test.aspx?ID=003
Client Side JavaScript (fired on GridView row selection)
function DisplayData(gridview row)
{
//get data from selected row.
}
```
If I select the first column of a row, the client-side event is invoked, which is fine. But when I click the LinkButton, it should redirect to test.aspx without firing the client-side event. | 2013/01/03 | [
"https://Stackoverflow.com/questions/14133894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1179789/"
] | **-** First of all you should **never re-invent the wheel, until and unless it's necessary.**
**-** Use `StringBuilder` **(Not Thread Safe)** or `StringBuffer` **(Thread Safe)** method `reverse()`.
**Eg:**
```
String mStr = ".gnihtemos od ot gniyrT ma I";
String reStr = new StringBuilder(mStr).reverse().toString();
```
**Probs in your code:**
**-** You **incremented it twice.**
```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
}
``` | ```
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
```
Why are you incrementing twice? |
14,133,894 | I have a LinkButton in a GridView. The GridView has a client event, rowselect. When I click the LinkButton, the client event (rowselect) is fired. I want to stop the rowselect client event from firing when I click the LinkButton.
Any solution?
For Example.
GridView has 3 rows, two columns
```
Column 1 ---- Column 2 [LinkButton]
AAAAAA ---- test.aspx?ID=001
BBBBBB ---- test.aspx?ID=002
CCCCCC ---- test.aspx?ID=003
Client Side JavaScript (fired on GridView row selection)
function DisplayData(gridview row)
{
//get data from selected row.
}
```
If I select the first column of a row, the client-side event is invoked, which is fine. But when I click the LinkButton, it should redirect to test.aspx without firing the client-side event. | 2013/01/03 | [
"https://Stackoverflow.com/questions/14133894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1179789/"
] | **-** First of all you should **never re-invent the wheel, until and unless it's necessary.**
**-** Use `StringBuilder` **(Not Thread Safe)** or `StringBuffer` **(Thread Safe)** method `reverse()`.
**Eg:**
```
String mStr = ".gnihtemos od ot gniyrT ma I";
String reStr = new StringBuilder(mStr).reverse().toString();
```
**Probs in your code:**
**-** You **incremented it twice.**
```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
}
``` | ```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS++]);
}
```
Should be:
```
private void pushChar(char c) {
myReverse[++topOfS]=c;
System.out.print(myReverse[topOfS]);
}
```
For one. If you make that change, do you still have a problem? Can't tell if that was debugging code, or something that was always there. |
13,251,328 | Hy everyone.
I have a simple problem here.
in my order class i have OrderStatus field, which is an enum in the database. (Can be "Under process" or Dispatched)
My problem is when im using update.jspx i want a field:select dropdown list, where the admin can change this value.
Because these values can not be read out from database, i was thinking of creating a static arraylist inside order.java like this:
```
public static List<String> StatusList;
static{
ArrayList<String> tmp = new ArrayList<String>();
tmp.add("Under process");
tmp.add("Dispatched");
StatusList = Collections.unmodifiableList(tmp);
}
public List<String> getStatusList() {
return StatusList;
}
```
How can i read out these value using field:select tag, and set them as orderStatus?
```
<field:select field="orderStatus" id="c_photostore_Porder_orderStatus" items="${porders}" itemValue="orderStatusList" path="/porders"/>
```
if i could call a method from update.jspx would be fine also i think, but i know the syntax only in webflow, not in standard roo. | 2012/11/06 | [
"https://Stackoverflow.com/questions/13251328",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/911862/"
] | You can place the List in a ServletContext or request attribute and access it on the JSP via `${applicationScope.StatusList}` or `${requestScope.StatusList}`.
You can also apply the solution described in [this post](https://stackoverflow.com/questions/6395621/how-to-call-a-static-method-in-jsp-el). | Thank you very much!
For newcomers:
use it in jspx like this:
```
items="${applicationScope.StatusList}"
```
Implement **ServletContextAware** in the class.
Save the list to the ServletContext (in the setServletContext method).
I couldn't find a solution to make itemvalue work; is there any way to get field:select without it being editable (so a drop-down list, without edit)? |
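For reference, the static unmodifiable-list pattern from the question can be sketched in isolation (plain Java; class and constant names are hypothetical). Because the list is read-only, it is safe to share through a ServletContext/application-scope attribute:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class OrderStatuses {
    // Built once, then exposed read-only.
    public static final List<String> STATUS_LIST;

    static {
        List<String> tmp = new ArrayList<>();
        tmp.add("Under process");
        tmp.add("Dispatched");
        STATUS_LIST = Collections.unmodifiableList(tmp);
    }

    public static void main(String[] args) {
        System.out.println(STATUS_LIST); // [Under process, Dispatched]
        try {
            STATUS_LIST.add("Cancelled");
        } catch (UnsupportedOperationException e) {
            // The unmodifiable view rejects writes, so concurrent JSP
            // requests cannot corrupt the shared list.
            System.out.println("read-only");
        }
    }
}
```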
4,454,693 | How can I clear all the data of a text file using a TextWriter object?
My file name is: user\_name.txt
Thanks. | 2010/12/15 | [
"https://Stackoverflow.com/questions/4454693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/498727/"
] | `FileMode.Truncate` will remove all text from the file and then open the file for writing.
```
FileInfo fi = new FileInfo(@"C:\path\myfile.txt");
using(TextWriter tw = new StreamWriter(fi.Open(FileMode.Truncate)))
{
tw.Write("write my new text here");
}
``` | Would
```
File.Create(filename).Close();
```
work for you too? |
261,017 | In attempting to determine the impact of using Text or LongTextArea types in an object, I was surprised by the values displayed in the Setup > Storage Usage page: the sizes for each object type were equal.
To test each type I created 3 Objects :
1. ShortText : an object that contains a single custom field of type Text (255)
2. LongTextAreaObject : an object that contains a single field of type LongTextArea(131072)
3. LongTextAreaObjectEmpty : an object that is exactly the same as LongTextAreaObject that I am using as a control object where I will only populate the name and nothing else
Using DataLoader the following values were loaded (9999 records for each object type):
1. ShortText had a Name field set to 'a' and the custom Text field set to a concatenated value of 255 'a' chars
2. LongTextAreaObject had a Name field set to 'a' and the custom LongTextArea field set to a concatenated value of 65470 'a' chars
3. LongTextAreaObjectEmpty had a Name field set to 'a' and no values in the custom LongTextArea field
For all 3 objects the Setup > Storage Usage page showed a figure of 19.5MB. [](https://i.stack.imgur.com/tet8e.jpg)
I would have expected that the ShortText object containing the Text type field would have been smaller due to the amount of data being stored.
Is there an explanation of how the 19.5MB figure is reached? Are Text and LongTextArea somehow stored in the same data type under the covers, e.g. a CLOB (which would be an odd decision for a 255-length text field)? | 2019/05/03 | [
"https://salesforce.stackexchange.com/questions/261017",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/68344/"
] | Data Storage is not calculated from the number of fields or the content of the fields; it is calculated from the number of records. Custom object records take 2 KB each. It doesn't matter if they have 1 field or 200 long text fields populated.
Salesforce uses a simplistic method for calculating storage usage. Most standard and custom objects take 2 KB per record, but some special objects like Person Accounts take 4 KB (as each is both an account and a contact).
So in your case, 9,999 records take
```
(9999 * 2) / 1024 = 19.53 MB
```
SRC: <https://help.salesforce.com/articleView?id=000193871&type=1> | What you are seeing in Data Storage is not based on the size of the field for respective object. The storage reflects the size of overall record that is stored in that object. [Salesforce record size overview](https://help.salesforce.com/articleView?id=000193871&type=1)
mentions that any record is roughly around 2KB in size (with some exceptions).
In your case the storage is approximately around 19.5MB based on 9999 records for each object. |
3,523,350 | We just installed Sharepoint Foundation 2010 and we're preparing to set it up for our knowledge management project.
I'm reading a lot over the Web and there seems to be options to categorize Wiki pages in Sharepoint, with the use of keywords and/or something called a "Term Store".
The problem is I can't find any of this in our installation of Sharepoint Foundation 2010. My user is part of the Admin groups, but still I see lots of options greyed out and fields displayed on screenshots over the Web but not in our installation.
I'm a bit clueless since I don't find any info on my problem over the Web.
Thanks in advance.
Mark | 2010/08/19 | [
"https://Stackoverflow.com/questions/3523350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/425420/"
] | If you go to the **Document Library Settings** for the library that houses your wiki pages, you should see **Wiki Categories** as one of the metadata fields. This is where the categories/keywords are stored for a particular page. If you created an enterprise wiki site, on each wiki page you should be able to see on the right hand side an area called **Categories**. This is the term store you're talking about. If you have the wiki page in edit mode, you should be able to add categories (keywords, tags, whatever you want to call it) to the wiki page.
**If you don't see categories on the page...**
Categories, like the ratings area that should be above it, is a **web part**. Pop the page into edit mode, go to *insert > web part* and then under the **Content Rollup** area, insert a new **Categories** web part. | We are running SharePoint 2010 Foundation and I just found I could add a "Category" column by doing these steps.
1. "View all Site Content".
2. Open the wiki in the list of "Document Libraries".
3. Under "Settings", choose "Document Library Settings".
4. Under "Columns", choose "Add from existing site columns".
5. In "Available site columns", select "Category" and click "Add >" to move the choice to "Columns to add".
6. Click "Ok" to save.
7. Open the wiki, edit a page, and "Category" will appear under the Wiki Content input box to receive your input.
8. Save the wiki page and the Category input should be under the wiki page content.
9. Searching the site using any of the words or phrases in the Category returns the wiki page! |
3,523,350 | We just installed Sharepoint Foundation 2010 and we're preparing to set it up for our knowledge management project.
I'm reading a lot over the Web and there seems to be options to categorize Wiki pages in Sharepoint, with the use of keywords and/or something called a "Term Store".
The problem is I can't find any of this in our installation of Sharepoint Foundation 2010. My user is part of the Admin groups, but still I see lots of options greyed out and fields displayed on screenshots over the Web but not in our installation.
I'm a bit clueless since I don't find any info on my problem over the Web.
Thanks in advance.
Mark | 2010/08/19 | [
"https://Stackoverflow.com/questions/3523350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/425420/"
] | I've found a way using custom columns: I created a custom Category column under the following:
Site Settings --> Site Content Types --> Wiki Page
Afterwards, I can create a new site page, and when I edit its properties, I can add categories.
1. "View all Site Content".
2. Open the wiki in the list of "Document Libraries".
3. Under "Settings", choose "Document Library Settings".
4. Under "Columns", choose "Add from existing site columns".
5. In "Available site columns", select "Category" and click "Add >" to move the choice to "Columns to add".
6. Click "Ok" to save.
7. Open the wiki, edit a page, and "Category" will appear under the Wiki Content input box to receive your input.
8. Save the wiki page and the Category input should be under the wiki page content.
9. Searching the site using any of the words or phrases in the Category returns the wiki page! |
16,790 | I have two list fields with select widgets and the second select depends on the first one.
How would you do this? Is there a module already doing this at least in code? | 2011/12/05 | [
"https://drupal.stackexchange.com/questions/16790",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/-1/"
] | You can use the [Conditional fields](http://drupal.org/project/conditional_fields) module. | A solution could be creating a combo field that includes multiple select. |
16,790 | I have two list fields with select widgets and the second select depends on the first one.
How would you do this? Is there a module already doing this at least in code? | 2011/12/05 | [
"https://drupal.stackexchange.com/questions/16790",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/-1/"
] | I recently had your doubt, in my search I came upon a very interesting module, [Reference field option limit](http://drupal.org/project/reference_option_limit), which depends on [Entity Reference](http://drupal.org/project/entityreference) and is only available for Drupal 7.
It solved my problem; I hope it will be useful to you. | A solution could be creating a combo field that includes multiple select. |
16,790 | I have two list fields with select widgets and the second select depends on the first one.
How would you do this? Is there a module already doing this at least in code? | 2011/12/05 | [
"https://drupal.stackexchange.com/questions/16790",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/-1/"
] | This is quite complex but doable. I have two fields: one is called `parent_menu_name`, an entity reference field into menus, and a `parent_menu_link`, which is simply a string -- menu tree entries are not entities (custom menu links are entities, but not all tree entries are custom menu links).
I started with moving the code to a class for encapsulation:
```
function group_content_submenu_field_widget_single_element_form_alter(array &$element, FormStateInterface $form_state, array $context) {
\Drupal::classResolver(GroupContentSubmenuAlter::class)
->widget($element, $form_state, $context);
}
```
This calls my `widget` method:
```
public function widget(array &$element, FormStateInterface $form_state, array $context) {
/** @var \Drupal\Core\Field\FieldItemListInterface $items */
$items = $context['items'];
$gcm = $items->getEntity();
if ($gcm instanceof GroupContentMenuInterface && $gcm->hasField('parent_menu_link') && $gcm->hasField('parent_menu_name')) {
$prefix = sprintf('gcs-%s-%d-',
$gcm->getFieldDefinition('parent_menu_link')->getUniqueIdentifier(),
$context['delta']
);
$parent_menu_link_wrapper_id = $prefix . 'link';
$parent_menu_name_wrapper_id = $prefix . 'name';
switch ($items->getFieldDefinition()->getName()) {
case 'parent_menu_name':
$element['#prefix'] = "<div id='$parent_menu_name_wrapper_id'>";
$element['#suffix'] = '</div>';
$element['#after_build'][] = [static::class, 'afterBuildName'];
$element['#ajax'] = [
'callback' => [static::class, 'ajax'],
'parent_menu_link_wrapper_id' => $parent_menu_link_wrapper_id,
];
break;
case 'parent_menu_link':
$options = $this->getParentSelectOptions($gcm, $form_state);
$element['value']['#type'] = 'select';
$element['value']['#options'] = $options;
$element['value']['#size'] = min(count($options), 50);
$element['value']['#prefix'] = "<div id='$parent_menu_link_wrapper_id'>";
$element['value']['#suffix'] = '</div>';
$element['value']['#states']['invisible'][] = ["#$parent_menu_name_wrapper_id select" => ['value' => '_none']];
$element['value']['#after_build'][] = [static::class, 'afterBuildLink'];
break;
}
}
}
```
it adds HTML wrappers around both widgets. Initially `#states` hides the parent menu link widget using the wrapper ID of the parent menu name widget. Then `#ajax` on the parent menu name widget replaces the entire parent menu link widget using the wrapper ID of it. So the frontend is done, now we need to tie them together on the backend. Let's see the method generating the options for the second widget:
```
public function getParentSelectOptions(GroupContentMenuInterface $gcm, FormStateInterface $form_state): array {
$menu_name = $gcm->parent_menu_name->target_id;
// This is set in ::afterBuildName so this will only be set in form
// rebuild which is exactly when we need it. The form parents do not
// change from one form build to the next (hopefully).
if ($menu_name_form_parents = $form_state->get(self::PARENT_MENU_NAME_FORM_PARENTS)) {
$menu_name = $form_state->getValue($menu_name_form_parents);
}
if ($menu_name && $menu_name !== '_none' && ($menu = Menu::load($menu_name))) {
return $this->menuParentFormSelector->getParentSelectOptions('', [$menu_name => $menu->label()]);
}
return [];
}
```
As you can see it reads the value of the first widget after an AJAX submit thanks to this small after build method:
```
public static function afterBuildName(array $element, FormStateInterface $form_state): array {
$parents = $element['#parents'];
$parents[] = 0;
$parents[] = 'target_id';
$form_state->set(self::PARENT_MENU_NAME_FORM_PARENTS, $parents);
return $element;
}
```
so when the parent menu name widget is built, it stores `#parents` into `$form_state` and when the form is rebuilt, the parent menu link uses this information to find the parent menu name value.
The AJAX callback is simple:
```
public static function ajax(array &$form, FormStateInterface $form_state) {
$triggering_element = $form_state->getTriggeringElement();
// The right element to replace was set in ::afterBuildLink.
$parent_menu_link_widget = NestedArray::getValue($form, $form_state->get(self::PARENT_MENU_LINK_ARRAY_PARENTS));
$parent_menu_link_wrapper_id = $triggering_element['#ajax']['parent_menu_link_wrapper_id'];
return (new AjaxResponse())
->addCommand(new ReplaceCommand('#' . $parent_menu_link_wrapper_id, $parent_menu_link_widget));
}
```
Remember, this AJAX callback is called by the parent menu name widget and needs to replace the parent menu link widget. The frontend is handled already, this is where another `#after_build` method comes to help us: the parent menu link widget conveniently stores its parent in `$form_state` -- but since we are retrieving from `$form` this needs to be `#array_parents`:
```
public static function afterBuildLink(array $element, FormStateInterface $form_state): array {
$form_state->set(self::PARENT_MENU_LINK_ARRAY_PARENTS, $element['#array_parents']);
return $element;
}
```
To recap, the lifecycle of an AJAX form is:
1. Form gets built.
2. The form and the form state is stored in the database, keyed by the build id.
3. On an AJAX request, the form and form state is restored
4. The form is rebuilt.
5. The form and the form state is stored in the database, keyed by a new build id.
6. The AJAX response is sent. This includes response provided by the callback and the new build id.
Both of our widgets store their respective locations in step #1 and then the *other* widget retrieves this in #4. | A solution could be creating a combo field that includes multiple select. |
16,790 | I have two list fields with select widgets and the second select depends on the first one.
How would you do this? Is there a module already doing this at least in code? | 2011/12/05 | [
"https://drupal.stackexchange.com/questions/16790",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/-1/"
] | You can use the [Conditional fields](http://drupal.org/project/conditional_fields) module. | I recently had your doubt, in my search I came upon a very interesting module, [Reference field option limit](http://drupal.org/project/reference_option_limit), which depends on [Entity Reference](http://drupal.org/project/entityreference) and is only available for Drupal 7.
It solved my problem; I hope it will be useful to you. |
16,790 | I have two list fields with select widgets and the second select depends on the first one.
How would you do this? Is there a module already doing this at least in code? | 2011/12/05 | [
"https://drupal.stackexchange.com/questions/16790",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/-1/"
] | You can use the [Conditional fields](http://drupal.org/project/conditional_fields) module. | This is quite complex but doable. I have two fields: one is called `parent_menu_name`, an entity reference field into menus, and a `parent_menu_link`, which is simply a string -- menu tree entries are not entities (custom menu links are entities, but not all tree entries are custom menu links).
I started with moving the code to a class for encapsulation:
```
function group_content_submenu_field_widget_single_element_form_alter(array &$element, FormStateInterface $form_state, array $context) {
\Drupal::classResolver(GroupContentSubmenuAlter::class)
->widget($element, $form_state, $context);
}
```
This calls my `widget` method:
```
public function widget(array &$element, FormStateInterface $form_state, array $context) {
/** @var \Drupal\Core\Field\FieldItemListInterface $items */
$items = $context['items'];
$gcm = $items->getEntity();
if ($gcm instanceof GroupContentMenuInterface && $gcm->hasField('parent_menu_link') && $gcm->hasField('parent_menu_name')) {
$prefix = sprintf('gcs-%s-%d-',
$gcm->getFieldDefinition('parent_menu_link')->getUniqueIdentifier(),
$context['delta']
);
$parent_menu_link_wrapper_id = $prefix . 'link';
$parent_menu_name_wrapper_id = $prefix . 'name';
switch ($items->getFieldDefinition()->getName()) {
case 'parent_menu_name':
$element['#prefix'] = "<div id='$parent_menu_name_wrapper_id'>";
$element['#suffix'] = '</div>';
$element['#after_build'][] = [static::class, 'afterBuildName'];
$element['#ajax'] = [
'callback' => [static::class, 'ajax'],
'parent_menu_link_wrapper_id' => $parent_menu_link_wrapper_id,
];
break;
case 'parent_menu_link':
$options = $this->getParentSelectOptions($gcm, $form_state);
$element['value']['#type'] = 'select';
$element['value']['#options'] = $options;
$element['value']['#size'] = min(count($options), 50);
$element['value']['#prefix'] = "<div id='$parent_menu_link_wrapper_id'>";
$element['value']['#suffix'] = '</div>';
$element['value']['#states']['invisible'][] = ["#$parent_menu_name_wrapper_id select" => ['value' => '_none']];
$element['value']['#after_build'][] = [static::class, 'afterBuildLink'];
break;
}
}
}
```
it adds HTML wrappers around both widgets. Initially `#states` hides the parent menu link widget using the wrapper ID of the parent menu name widget. Then `#ajax` on the parent menu name widget replaces the entire parent menu link widget using the wrapper ID of it. So the frontend is done, now we need to tie them together on the backend. Let's see the method generating the options for the second widget:
```
public function getParentSelectOptions(GroupContentMenuInterface $gcm, FormStateInterface $form_state): array {
$menu_name = $gcm->parent_menu_name->target_id;
// This is set in ::afterBuildName so this will only be set in form
// rebuild which is exactly when we need it. The form parents do not
// change from one form build to the next (hopefully).
if ($menu_name_form_parents = $form_state->get(self::PARENT_MENU_NAME_FORM_PARENTS)) {
$menu_name = $form_state->getValue($menu_name_form_parents);
}
if ($menu_name && $menu_name !== '_none' && ($menu = Menu::load($menu_name))) {
return $this->menuParentFormSelector->getParentSelectOptions('', [$menu_name => $menu->label()]);
}
return [];
}
```
As you can see it reads the value of the first widget after an AJAX submit thanks to this small after build method:
```
public static function afterBuildName(array $element, FormStateInterface $form_state): array {
$parents = $element['#parents'];
$parents[] = 0;
$parents[] = 'target_id';
$form_state->set(self::PARENT_MENU_NAME_FORM_PARENTS, $parents);
return $element;
}
```
so when the parent menu name widget is built, it stores `#parents` into `$form_state` and when the form is rebuilt, the parent menu link uses this information to find the parent menu name value.
The AJAX callback is simple:
```
public static function ajax(array &$form, FormStateInterface $form_state) {
$triggering_element = $form_state->getTriggeringElement();
// The right element to replace was set in ::afterBuildLink.
$parent_menu_link_widget = NestedArray::getValue($form, $form_state->get(self::PARENT_MENU_LINK_ARRAY_PARENTS));
$parent_menu_link_wrapper_id = $triggering_element['#ajax']['parent_menu_link_wrapper_id'];
return (new AjaxResponse())
->addCommand(new ReplaceCommand('#' . $parent_menu_link_wrapper_id, $parent_menu_link_widget));
}
```
Remember, this AJAX callback is called by the parent menu name widget and needs to replace the parent menu link widget. The frontend is handled already, this is where another `#after_build` method comes to help us: the parent menu link widget conveniently stores its parent in `$form_state` -- but since we are retrieving from `$form` this needs to be `#array_parents`:
```
public static function afterBuildLink(array $element, FormStateInterface $form_state): array {
$form_state->set(self::PARENT_MENU_LINK_ARRAY_PARENTS, $element['#array_parents']);
return $element;
}
```
To recap, the lifecycle of an AJAX form is:
1. Form gets built.
2. The form and the form state is stored in the database, keyed by the build id.
3. On an AJAX request, the form and form state is restored
4. The form is rebuilt.
5. The form and the form state is stored in the database, keyed by a new build id.
6. The AJAX response is sent. This includes response provided by the callback and the new build id.
Both of our widgets store their respective locations in step #1 and then the *other* widget retrieves this in #4. |
6,417 | I have to address my future German Au Pair family in an E-mail and until now have only been in contact with the woman. However the last E-mail was signed with both names, so I now feel I should address the husband as well in my reply. It has been rather informal contact, she has already initiated 'duzen' so I would be using 'Liebe'.
Could somebody tell me, do I say 'Liebe (Female X) und lieber (Male X),' or is there a way I should address them both together? | 2013/03/19 | [
"https://german.stackexchange.com/questions/6417",
"https://german.stackexchange.com",
"https://german.stackexchange.com/users/2656/"
] | >
> Liebe Angela,
>
> lieber Peer,
>
> ...
>
>
>
or
>
> Liebe Angela und lieber Peer,
>
>
>
both sound perfect to me, if they both signed their last email with `Angela und Peer`.
If you address both the parents *and their children*\* (!), you can use
>
> Liebe Familie Müller,
>
>
>
Only an official sender (like the tax office) would use
>
> Eheleute Müller
>
>
>
and only in the address field. Otherwise this is obsolete.
---
\*see the [definition of "Familie"](https://german.stackexchange.com/questions/6426/was-versteht-man-im-deutschen-unter-einer-familie) | If the wife has offered you to communicate on a first-name basis then, as you correctly perceive, it is O.K. for you to take her up on that offer.
However, the husband may be a bit taken aback by being addressed as "Lieber Hans" by a perfect stranger. On the other hand, addressing them as "Sehr geehrte(r) Herr und Frau Schmidt" may strike them as needlessly stiff, even robotic.
A good compromise in my opinion would be to find some middle ground between politeness and informality. I would suggest "Liebe Eheleute Schmidt". Literally, "Eheleute" is "married people", i.e., husband and wife. Then continue addressing them both as "Sie", not "Du", until the husband, too, asks you to call him Hans.
Then again, as you say the husband has co-signed their last mail to you, so it could be appropriate to address them "Dear Petra, dear Hans" after all. This may be a situation where your intuition should be your guide. In any case they are hiring you as an au pair, not a German teacher, and you can be 100 percent sure that any minor language gaffes from you will be forgiven. In fact, they will probably preserve them for the family chronicles :) |
6,417 | I have to address my future German Au Pair family in an E-mail and until now have only been in contact with the woman. However the last E-mail was signed with both names, so I now feel I should address the husband as well in my reply. It has been rather informal contact, she has already initiated 'duzen' so I would be using 'Liebe'.
Could somebody tell me, do I say 'Liebe (Female X) und lieber (Male X),' or is there a way I should address them both together? | 2013/03/19 | [
"https://german.stackexchange.com/questions/6417",
"https://german.stackexchange.com",
"https://german.stackexchange.com/users/2656/"
] | The least problematic variant, balancing formality and familiarity when you are writing to strangers but expect a somewhat closer relationship in the future, would be addressing them by their last name and using 'Liebe ...'
Examples:
>
> Liebe Beate Müller, lieber Hans Müller,
>
> Liebe Familie Müller,\*
>
>
>
Only if they have already offered the ['Du' form](https://german.stackexchange.com/questions/77/how-can-a-native-english-speaker-know-when-it-is-appropriate-to-use-the-polite) - as was the case here - can we also use it in the letter, i.e. address them by their first names only and use 'Du' (plural *'Ihr', 'Euch'*) in the text.
In all other cases, when you have not yet agreed on the 'Du' form, the formal 'Sie' should be used.
\*The word *"Familie"* is generally used for a married couple with children. It should therefore only be used when writing to a family, i.e. when there are also children living in the home. This will likely be the case for an Au Pair position. | If the wife has offered to communicate with you on a first-name basis then, as you correctly perceive, it is O.K. for you to take her up on that offer.
However, the husband may be a bit taken aback by being addressed as "Lieber Hans" by a perfect stranger. On the other hand, addressing them as "Sehr geehrte(r) Herr und Frau Schmidt" may strike them as needlessly stiff, even robotic.
A good compromise in my opinion would be to find some middle ground between politeness and informality. I would suggest "Liebe Eheleute Schmidt". Literally, "Eheleute" is "married people", i.e., husband and wife. Then continue addressing them both as "Sie", not "Du", until the husband, too, asks you to call him Hans.
Then again, as you say the husband has co-signed their last mail to you, so it could be appropriate to address them "Dear Petra, dear Hans" after all. This may be a situation where your intuition should be your guide. In any case they are hiring you as an au pair, not a German teacher, and you can be 100 percent sure that any minor language gaffes from you will be forgiven. In fact, they will probably preserve them for the family chronicles :) |
6,417 | I have to address my future German Au Pair family in an E-mail and until now have only been in contact with the woman. However the last E-mail was signed with both names, so I now feel I should address the husband as well in my reply. It has been rather informal contact, she has already initiated 'duzen' so I would be using 'Liebe'.
Could somebody tell me, do I say 'Liebe (Female X) und lieber (Male X),' or is there a way I should address them both together? | 2013/03/19 | [
"https://german.stackexchange.com/questions/6417",
"https://german.stackexchange.com",
"https://german.stackexchange.com/users/2656/"
] | The least problematic variant, balancing formality and familiarity when you are writing to strangers but expect a somewhat closer relationship in the future, would be addressing them by their last name and using 'Liebe ...'
Examples:
>
> Liebe Beate Müller, lieber Hans Müller,
>
> Liebe Familie Müller,\*
>
>
>
Only if they have already offered the ['Du' form](https://german.stackexchange.com/questions/77/how-can-a-native-english-speaker-know-when-it-is-appropriate-to-use-the-polite) - as was the case here - can we also use it in the letter, i.e. address them by their first names only and use 'Du' (plural *'Ihr', 'Euch'*) in the text.
In all other cases, when you have not yet agreed on the 'Du' form, the formal 'Sie' should be used.
\*The word *"Familie"* is generally used for a married couple with children. It should therefore only be used when writing to a family, i.e. when there are also children living in the home. This will likely be the case for an Au Pair position. | >
> Liebe Angela,
>
> lieber Peer,
>
> ...
>
>
>
or
>
> Liebe Angela und lieber Peer,
>
>
>
both sound perfect to me, if they both signed their last email with `Angela und Peer`.
If you address both the parents *and their children*\* (!), you can use
>
> Liebe Familie Müller,
>
>
>
Only an official sender (like the tax office) would use
>
> Eheleute Müller
>
>
>
and only in the address field. Otherwise this is obsolete.
---
\*see the [definition of "Familie"](https://german.stackexchange.com/questions/6426/was-versteht-man-im-deutschen-unter-einer-familie) |
154,740 | I got a problem with one page number in the toc: the one of the 'References' generated by biblatex. I am always using the command `\addcontentsline{toc}{chapter}{title}` to add stuff to the TOC, and in all other cases it worked fine (list of tables etc.).
Problem description: The page number in the toc is always the last page of the References. This means if the References are from page 80 to page 84, then in the TOC page 84 is used (instead of page 80 where the References begin).
edit: KOMA script provides `bibliography=totoc` which solves the issue. However, if anyone could explain why `\addcontentsline{toc}{chapter}{title}` works for other lists but not for the library, I would be thankful.
I cannot hand out my lib-file (named Lib1 in the code), but you could simply import one of yours which is a few pages long to see the problem.
Here is my preamble. As I don't know what the reason is, I have kept most of the definitions and macros generated from helpful users of these forums in. I only removed stuff which I was certain that it is harmless (e.g. table definitions etc.). The appendices have been made by @karlkoeller. But I don't think they are the problem because they start after the bibliography.
Thanks for any help!
```
\documentclass[a4paper, 12pt, headsepline, headings=small, numbers=noendperiod]{scrreprt}
\usepackage{float}
\usepackage[onehalfspacing]{setspace}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{mathptmx}
\usepackage[a4paper,showframe]{geometry}
\geometry{left=2cm,right=5cm,top=2cm,bottom=2cm,foot=1.5cm}
\usepackage{csquotes}
\usepackage{environ}
\usepackage[backend=bibtex8,
style=authoryear-icomp,
dashed=false,
autocite=footnote,
maxcitenames=3,
mincitenames=1,
maxbibnames=100,
firstinits=true,
sorting=nyvt
]{biblatex}
\bibliography{Lib1}
\usepackage{chngcntr}
\counterwithout{figure}{chapter}
\counterwithout{table}{chapter}
\counterwithout{equation}{chapter}
\counterwithout{footnote}{chapter}
\renewcommand{\floatpagefraction}{0.85}
\renewcommand\bottomfraction{0.65}
\setcounter{topnumber}{1}
\newcommand\appendicesname{Appendix}
\newcommand\listofloaname{List of Appendices}
\newcommand*{\listofappendices}{\listoftoc{loa}}
\setuptoc{loa}{totoc}
\makeatletter
\g@addto@macro\appendix{%
\counterwithin{table}{chapter}
\renewcommand*{\tableformat}{\tablename~\thetable}
\let\oldaddcontentsline\addcontentsline
\newcommand\hackedaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{#3}}
%
%\newcommand\hackedtableaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{\tableformat}}
%
\newcommand\hackedtableaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{#3}}
%
\let\oldpart\part
\renewcommand*\part[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldpart{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldchapter\chapter
\renewcommand*\chapter[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldchapter{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsection\section
\renewcommand*\section[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsubsection\subsection
\renewcommand*\subsection[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsubsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsubsubsection\subsubsection
\renewcommand*\subsubsection[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsubsubsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldtable\table
\renewcommand*\table{%
\let\addcontentsline\hackedtableaddcontentsline%
\oldtable%
}
\let\oldendtable\endtable
\renewcommand*\endtable{%
\oldendtable%
\let\addcontentsline\oldaddcontentsline%
}
}
\makeatother
\newlength\myindention
\setlength\myindention{1em}
\let\oldcaption\caption
\renewcommand*\caption[2][]{%
\oldcaption[#1]{#1\\\hspace*{\myindention}#2}%
}
% Biblatex:
\DeclareNameAlias{sortname}{last-first}
\DeclareNameAlias{default}{last-first}
\DeclareFieldFormat{title}{#1}
\DeclareFieldFormat
[article,inbook,incollection,inproceedings,patent,thesis,unpublished]
{title}{#1}
\DeclareFieldFormat{journaltitle}{#1}
\renewbibmacro*{cite:labelyear+extrayear}{%
\iffieldundef{labelyear}
{}
{\printtext[bibhyperref]{%
\printtext[parens]{%
\printfield{labelyear}%
\printfield{extrayear}}}}}
\renewbibmacro*{date+extrayear}{%
\iffieldundef{\thefield{datelabelsource}year}
{}
{%\printtext[parens]{%
\setunit{\addcomma\space}%
\iffieldsequal{year}{\thefield{datelabelsource}year}
{\printdateextralabel}%
{\printfield{labelyear}%
\printfield{extrayear}}}}%}%
\renewbibmacro*{publisher+location+date}{%
%
\printlist{publisher}%
\setunit*{\addcomma\space}%
%\printlist{publisher}%
%\setunit*{\addperiod\space}%
%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro*{institution+location+date}{%
\printlist{institution}%
\setunit*{\addcomma\space}%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro*{organization+location+date}{%
\printlist{organization}%
\setunit*{\addcomma\space}%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro{in:}{%
\ifentrytype{article}{}{\printtext{\bibstring{in}\intitlepunct}}}
\renewbibmacro*{volume+number+eid}{%
\printfield{volume}%
\setunit*{\addspace}%
\printfield[parens]{number}%
\setunit{\addcomma\space}%
\printfield{eid}}
\begin{document}
\tableofcontents
\listoftables
\addcontentsline{toc}{chapter}{List of tables}
\nocite{*}
\printbibliography[title={References}]
\addcontentsline{toc}{chapter}{References}
\listofappendices
\begin{appendix}
\setlength{\abovecaptionskip}{-8pt}
\chapter{Appendix Chapter}
\end{appendix}
\end{document}
``` | 2014/01/18 | [
"https://tex.stackexchange.com/questions/154740",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/38283/"
] | the `\printbibliography` command causes the entire bibliography to print, so issuing
`\addcontentsline` *after* it will of course record the number of the last page.
i assume that the bibliography will start on a new right-hand page. so if you issue
the command `\cleardoublepage`, then `\addcontentsline`, and *then* the `\printbibliography`,
the entry in the contents should come out with the correct page number. if the document is one-sided, then `\clearpage` is sufficient.
(you have to run latex twice, of course.)
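a minimal sketch of that ordering, assuming the same `References` title used in the question (illustrative only, not the only possible placement):

```latex
\cleardoublepage % or \clearpage in a one-sided document
\addcontentsline{toc}{chapter}{References}% now records the first page of the bibliography
\printbibliography[title={References}]
```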
**edit:** it's been asked whether, if it's necessary to add a contents line for a chapter,
should the `\addcontentsline` always be added before `\chapter`. the answer is,
not usually.
in most document classes, `\chapter` issues a `\clearpage` (or `\cleardoublepage`)
command, so if the `\addcontentsline` line precedes `\chapter`, the page number
will be too low by one (or two). thus if you issue the `\chapter` command yourself,
the `\addcontents` line should be placed right after it. this will rarely be necessary,
of course, unless you're using a document class that doesn't automatically add a contents
line for `\chapter` (as `book` does not for `\chapter*`).
`\printbibliography` is a special case -- the command includes the `\chapter*` but goes
right ahead and processes the bibliography, without leaving a "hook" to allow an author
to say "i want this chapter title in the table of contents". there are some packages that
make this adjustment; the approach taken here does not use a separate package. | The Koma-Script classes already have facilities for adding standard material to the table of contents.
If you call the class like
```
\documentclass[
a4paper,
12pt,
headsepline,
headings=small,
numbers=noendperiod,
listof=totoc
]{scrreprt}
```
you won't need any `\addcontentsline` for `\listoftables`. For the references, use `biblatex` facilities:
```
\printbibliography[title={References},heading=bibintoc]
```
Note that the Koma-Script classes have their method for defining new floats. If you load also `\usepackage{scrhack}` before `float`, you'll avoid an annoying warning.
---
By the way, wouldn't the following be easier for adding the items in the appendix to the list of appendices?
```
\makeatletter
\g@addto@macro\appendix{%
\counterwithin{table}{chapter}%
\renewcommand*{\tableformat}{\tablename~\thetable}%
\let\oldaddcontentsline\addcontentsline
\renewcommand\addcontentsline[1]{\oldaddcontentsline{loa}}%
}
\makeatother
``` |
154,740 | I got a problem with one page number in the toc: the one of the 'References' generated by biblatex. I am always using the command `\addcontentsline{toc}{chapter}{title}` to add stuff to the TOC, and in all other cases it worked fine (list of tables etc.).
Problem description: The page number in the toc is always the last page of the References. This means if the References are from page 80 to page 84, then in the TOC page 84 is used (instead of page 80 where the References begin).
edit: KOMA script provides `bibliography=totoc` which solves the issue. However, if anyone could explain why `\addcontentsline{toc}{chapter}{title}` works for other lists but not for the library, I would be thankful.
I cannot hand out my lib-file (named Lib1 in the code), but you could simply import one of yours which is a few pages long to see the problem.
Here is my preamble. As I don't know what the reason is, I have kept most of the definitions and macros generated from helpful users of these forums in. I only removed stuff which I was certain that it is harmless (e.g. table definitions etc.). The appendices have been made by @karlkoeller. But I don't think they are the problem because they start after the bibliography.
Thanks for any help!
```
\documentclass[a4paper, 12pt, headsepline, headings=small, numbers=noendperiod]{scrreprt}
\usepackage{float}
\usepackage[onehalfspacing]{setspace}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{mathptmx}
\usepackage[a4paper,showframe]{geometry}
\geometry{left=2cm,right=5cm,top=2cm,bottom=2cm,foot=1.5cm}
\usepackage{csquotes}
\usepackage{environ}
\usepackage[backend=bibtex8,
style=authoryear-icomp,
dashed=false,
autocite=footnote,
maxcitenames=3,
mincitenames=1,
maxbibnames=100,
firstinits=true,
sorting=nyvt
]{biblatex}
\bibliography{Lib1}
\usepackage{chngcntr}
\counterwithout{figure}{chapter}
\counterwithout{table}{chapter}
\counterwithout{equation}{chapter}
\counterwithout{footnote}{chapter}
\renewcommand{\floatpagefraction}{0.85}
\renewcommand\bottomfraction{0.65}
\setcounter{topnumber}{1}
\newcommand\appendicesname{Appendix}
\newcommand\listofloaname{List of Appendices}
\newcommand*{\listofappendices}{\listoftoc{loa}}
\setuptoc{loa}{totoc}
\makeatletter
\g@addto@macro\appendix{%
\counterwithin{table}{chapter}
\renewcommand*{\tableformat}{\tablename~\thetable}
\let\oldaddcontentsline\addcontentsline
\newcommand\hackedaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{#3}}
%
%\newcommand\hackedtableaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{\tableformat}}
%
\newcommand\hackedtableaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{#3}}
%
\let\oldpart\part
\renewcommand*\part[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldpart{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldchapter\chapter
\renewcommand*\chapter[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldchapter{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsection\section
\renewcommand*\section[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsubsection\subsection
\renewcommand*\subsection[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsubsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsubsubsection\subsubsection
\renewcommand*\subsubsection[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsubsubsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldtable\table
\renewcommand*\table{%
\let\addcontentsline\hackedtableaddcontentsline%
\oldtable%
}
\let\oldendtable\endtable
\renewcommand*\endtable{%
\oldendtable%
\let\addcontentsline\oldaddcontentsline%
}
}
\makeatother
\newlength\myindention
\setlength\myindention{1em}
\let\oldcaption\caption
\renewcommand*\caption[2][]{%
\oldcaption[#1]{#1\\\hspace*{\myindention}#2}%
}
% Biblatex:
\DeclareNameAlias{sortname}{last-first}
\DeclareNameAlias{default}{last-first}
\DeclareFieldFormat{title}{#1}
\DeclareFieldFormat
[article,inbook,incollection,inproceedings,patent,thesis,unpublished]
{title}{#1}
\DeclareFieldFormat{journaltitle}{#1}
\renewbibmacro*{cite:labelyear+extrayear}{%
\iffieldundef{labelyear}
{}
{\printtext[bibhyperref]{%
\printtext[parens]{%
\printfield{labelyear}%
\printfield{extrayear}}}}}
\renewbibmacro*{date+extrayear}{%
\iffieldundef{\thefield{datelabelsource}year}
{}
{%\printtext[parens]{%
\setunit{\addcomma\space}%
\iffieldsequal{year}{\thefield{datelabelsource}year}
{\printdateextralabel}%
{\printfield{labelyear}%
\printfield{extrayear}}}}%}%
\renewbibmacro*{publisher+location+date}{%
%
\printlist{publisher}%
\setunit*{\addcomma\space}%
%\printlist{publisher}%
%\setunit*{\addperiod\space}%
%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro*{institution+location+date}{%
\printlist{institution}%
\setunit*{\addcomma\space}%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro*{organization+location+date}{%
\printlist{organization}%
\setunit*{\addcomma\space}%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro{in:}{%
\ifentrytype{article}{}{\printtext{\bibstring{in}\intitlepunct}}}
\renewbibmacro*{volume+number+eid}{%
\printfield{volume}%
\setunit*{\addspace}%
\printfield[parens]{number}%
\setunit{\addcomma\space}%
\printfield{eid}}
\begin{document}
\tableofcontents
\listoftables
\addcontentsline{toc}{chapter}{List of tables}
\nocite{*}
\printbibliography[title={References}]
\addcontentsline{toc}{chapter}{References}
\listofappendices
\begin{appendix}
\setlength{\abovecaptionskip}{-8pt}
\chapter{Appendix Chapter}
\end{appendix}
\end{document}
``` | 2014/01/18 | [
"https://tex.stackexchange.com/questions/154740",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/38283/"
] | the `\printbibliography` command causes the entire bibliography to print, so issuing
`\addcontentsline` *after* it will of course record the number of the last page.
i assume that the bibliography will start on a new right-hand page. so if you issue
the command `\cleardoublepage`, then `\addcontentsline`, and *then* the `\printbibliography`,
the entry in the contents should come out with the correct page number. if the document is one-sided, then `\clearpage` is sufficient.
(you have to run latex twice, of course.)
**edit:** it's been asked whether, if it's necessary to add a contents line for a chapter,
should the `\addcontentsline` always be added before `\chapter`. the answer is,
not usually.
in most document classes, `\chapter` issues a `\clearpage` (or `\cleardoublepage`)
command, so if the `\addcontentsline` line precedes `\chapter`, the page number
will be too low by one (or two). thus if you issue the `\chapter` command yourself,
the `\addcontents` line should be placed right after it. this will rarely be necessary,
of course, unless you're using a document class that doesn't automatically add a contents
line for `\chapter` (as `book` does not for `\chapter*`).
`\printbibliography` is a special case -- the command includes the `\chapter*` but goes
right ahead and processes the bibliography, without leaving a "hook" to allow an author
to say "i want this chapter title in the table of contents". there are some packages that
make this adjustment; the approach taken here does not use a separate package. | You can just use `\clearpage` then `\addcontentstoline` then `\listoftables` .... like this.. it worked for me.. |
154,740 | I got a problem with one page number in the toc: the one of the 'References' generated by biblatex. I am always using the command `\addcontentsline{toc}{chapter}{title}` to add stuff to the TOC, and in all other cases it worked fine (list of tables etc.).
Problem description: The page number in the toc is always the last page of the References. This means if the References are from page 80 to page 84, then in the TOC page 84 is used (instead of page 80 where the References begin).
edit: KOMA script provides `bibliography=totoc` which solves the issue. However, if anyone could explain why `\addcontentsline{toc}{chapter}{title}` works for other lists but not for the library, I would be thankful.
I cannot hand out my lib-file (named Lib1 in the code), but you could simply import one of yours which is a few pages long to see the problem.
Here is my preamble. As I don't know what the reason is, I have kept most of the definitions and macros generated from helpful users of these forums in. I only removed stuff which I was certain that it is harmless (e.g. table definitions etc.). The appendices have been made by @karlkoeller. But I don't think they are the problem because they start after the bibliography.
Thanks for any help!
```
\documentclass[a4paper, 12pt, headsepline, headings=small, numbers=noendperiod]{scrreprt}
\usepackage{float}
\usepackage[onehalfspacing]{setspace}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{mathptmx}
\usepackage[a4paper,showframe]{geometry}
\geometry{left=2cm,right=5cm,top=2cm,bottom=2cm,foot=1.5cm}
\usepackage{csquotes}
\usepackage{environ}
\usepackage[backend=bibtex8,
style=authoryear-icomp,
dashed=false,
autocite=footnote,
maxcitenames=3,
mincitenames=1,
maxbibnames=100,
firstinits=true,
sorting=nyvt
]{biblatex}
\bibliography{Lib1}
\usepackage{chngcntr}
\counterwithout{figure}{chapter}
\counterwithout{table}{chapter}
\counterwithout{equation}{chapter}
\counterwithout{footnote}{chapter}
\renewcommand{\floatpagefraction}{0.85}
\renewcommand\bottomfraction{0.65}
\setcounter{topnumber}{1}
\newcommand\appendicesname{Appendix}
\newcommand\listofloaname{List of Appendices}
\newcommand*{\listofappendices}{\listoftoc{loa}}
\setuptoc{loa}{totoc}
\makeatletter
\g@addto@macro\appendix{%
\counterwithin{table}{chapter}
\renewcommand*{\tableformat}{\tablename~\thetable}
\let\oldaddcontentsline\addcontentsline
\newcommand\hackedaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{#3}}
%
%\newcommand\hackedtableaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{\tableformat}}
%
\newcommand\hackedtableaddcontentsline[3]{\oldaddcontentsline{loa}{#2}{#3}}
%
\let\oldpart\part
\renewcommand*\part[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldpart{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldchapter\chapter
\renewcommand*\chapter[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldchapter{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsection\section
\renewcommand*\section[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsubsection\subsection
\renewcommand*\subsection[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsubsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldsubsubsection\subsubsection
\renewcommand*\subsubsection[1]{%
\let\addcontentsline\hackedaddcontentsline%
\oldsubsubsection{#1}%
\let\addcontentsline\oldaddcontentsline%
}
\let\oldtable\table
\renewcommand*\table{%
\let\addcontentsline\hackedtableaddcontentsline%
\oldtable%
}
\let\oldendtable\endtable
\renewcommand*\endtable{%
\oldendtable%
\let\addcontentsline\oldaddcontentsline%
}
}
\makeatother
\newlength\myindention
\setlength\myindention{1em}
\let\oldcaption\caption
\renewcommand*\caption[2][]{%
\oldcaption[#1]{#1\\\hspace*{\myindention}#2}%
}
% Biblatex:
\DeclareNameAlias{sortname}{last-first}
\DeclareNameAlias{default}{last-first}
\DeclareFieldFormat{title}{#1}
\DeclareFieldFormat
[article,inbook,incollection,inproceedings,patent,thesis,unpublished]
{title}{#1}
\DeclareFieldFormat{journaltitle}{#1}
\renewbibmacro*{cite:labelyear+extrayear}{%
\iffieldundef{labelyear}
{}
{\printtext[bibhyperref]{%
\printtext[parens]{%
\printfield{labelyear}%
\printfield{extrayear}}}}}
\renewbibmacro*{date+extrayear}{%
\iffieldundef{\thefield{datelabelsource}year}
{}
{%\printtext[parens]{%
\setunit{\addcomma\space}%
\iffieldsequal{year}{\thefield{datelabelsource}year}
{\printdateextralabel}%
{\printfield{labelyear}%
\printfield{extrayear}}}}%}%
\renewbibmacro*{publisher+location+date}{%
%
\printlist{publisher}%
\setunit*{\addcomma\space}%
%\printlist{publisher}%
%\setunit*{\addperiod\space}%
%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro*{institution+location+date}{%
\printlist{institution}%
\setunit*{\addcomma\space}%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro*{organization+location+date}{%
\printlist{organization}%
\setunit*{\addcomma\space}%
\printlist{location}%
\setunit*{\addcomma\space}
\usebibmacro{date}%
\newunit}
\renewbibmacro{in:}{%
\ifentrytype{article}{}{\printtext{\bibstring{in}\intitlepunct}}}
\renewbibmacro*{volume+number+eid}{%
\printfield{volume}%
\setunit*{\addspace}%
\printfield[parens]{number}%
\setunit{\addcomma\space}%
\printfield{eid}}
\begin{document}
\tableofcontents
\listoftables
\addcontentsline{toc}{chapter}{List of tables}
\nocite{*}
\printbibliography[title={References}]
\addcontentsline{toc}{chapter}{References}
\listofappendices
\begin{appendix}
\setlength{\abovecaptionskip}{-8pt}
\chapter{Appendix Chapter}
\end{appendix}
\end{document}
``` | 2014/01/18 | [
"https://tex.stackexchange.com/questions/154740",
"https://tex.stackexchange.com",
"https://tex.stackexchange.com/users/38283/"
] | The Koma-Script classes already have facilities for adding standard material to the table of contents.
If you call the class like
```
\documentclass[
a4paper,
12pt,
headsepline,
headings=small,
numbers=noendperiod,
listof=totoc
]{scrreprt}
```
you won't need any `\addcontentsline` for `\listoftables`. For the references, use `biblatex` facilities:
```
\printbibliography[title={References},heading=bibintoc]
```
Note that the Koma-Script classes have their method for defining new floats. If you load also `\usepackage{scrhack}` before `float`, you'll avoid an annoying warning.
---
By the way, wouldn't the following be easier for adding the items in the appendix to the list of appendices?
```
\makeatletter
\g@addto@macro\appendix{%
\counterwithin{table}{chapter}%
\renewcommand*{\tableformat}{\tablename~\thetable}%
\let\oldaddcontentsline\addcontentsline
\renewcommand\addcontentsline[1]{\oldaddcontentsline{loa}}%
}
\makeatother
``` | You can just use `\clearpage` then `\addcontentstoline` then `\listoftables` .... like this.. it worked for me.. |
3,171,996 | A friend has given me a math problem to solve; the problem is as follows:
>
> Find the fist value of $n$ ,for which:
>
> $(11)^n$ contains $(n+2)$ digits, where $n \in \mathbb{N}$?
>
>
>
I have used trial and error with a calculator to find the answer,
and found that it is:
$$n=25$$
But how can this problem be solved analytically? Any help...
For convenience in understanding this problem, consider this example:
$$11^3 = 1331 \to 4 \text{ digits} \implies (n+1) \text{ digits for } n=3$$ | 2019/04/02 | [
"https://math.stackexchange.com/questions/3171996",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/520626/"
] | To have $n+2$ digits the number must be greater than $10^{n+1}$ (and less than $10^{n+2}$, but we don't care about that). So we need
$$11^n \gt 10^{n+1}\\n \log(11) \gt (n+1) \log (10)\\1-\frac 1{n+1} \gt \frac {\log (10)}{\log (11)}$$
Since $\frac{\log 10}{\log 11} \approx 0.96025$, this requires $\frac 1{n+1} \lt 0.03975$, i.e. $n+1 \gt 25.16$, so the smallest such $n$ is $25$. | $10^k$ has $k+1$ digits.
$10^{k+1}$ has $k +2$ digits.
And $10^{k+1} - 1$ has $k+1$ digits.
so if $M$ has $n+2$ digits then $10^{n+1} \le M < 10^{n+2}$ and $n+1 \le \log M < n+2$.
So to solve $11^n$ having $n+2$ digits we need
$n+1 \le \log 11^n < n+2$.
So we need $n+1 \le \log 11^n = n \log 11$ or $\frac {n+1}{n} = 1+\frac 1n \le \log 11$ or $\frac 1n \le \log 11 - 1$ or $n \ge \frac 1{\log 11 - 1}$.
So we need the smallest possible integer $n \ge \frac 1{\log 11-1}\approx \frac 1{1.041392685158225040750199971243-1} \approx \frac 1{.041392685158225040750199971243}\approx 24.16$.
So $n =25$.
$11^{25}$ has $27$ digits. |
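As a quick sanity check of this result, a brute-force search (a Python sketch, not part of the original answers) confirms that $n=25$ is indeed the first such value:

```python
def first_n_with_extra_digits(extra=2):
    """Smallest n such that 11**n has n + extra decimal digits."""
    n = 1
    while len(str(11 ** n)) != n + extra:
        n += 1
    return n

print(first_n_with_extra_digits())  # -> 25
print(len(str(11 ** 25)))           # -> 27, i.e. 25 + 2 digits
```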
5,097,416 | In Matlab, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV? | 2011/02/23 | [
"https://Stackoverflow.com/questions/5097416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/595285/"
] | Using `cvReduce` has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
```
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
```
More information is available in the [OpenCV documentation](http://opencv.willowgarage.com/documentation/c/core_operations_on_arrays.html?highlight=reduce#cvReduce). | For an 8 bit greyscale image, the following should work (I think).
It shouldn't be too hard to expand to different image types.
```
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
uint result[image->width];
memset(result, 0, sizeof(result)); // zero the whole uint array (sizeof(uchar) * width only cleared a quarter of it)
for (int col = 0; col < image->width; col++) {
for (int row = 0; row < image->height; row++) {
result[col] += imageData[row * imgStep + col];
}
}
// your desired vector is in result
``` |
5,097,416 | In Matlab, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV? | 2011/02/23 | [
"https://Stackoverflow.com/questions/5097416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/595285/"
] | cvSum respects ROI, so if you move a 1 px wide window over the whole image, you can calculate the sum of each column.
My c++ got a little rusty so I won't provide a code example, though the last time I did this I used OpenCVSharp and it worked fine. However, I'm not sure how efficient this method is.
My math skills are getting rusty too, but shouldn't it be possible to sum all elements in columns in a matrix by multiplying it by a vector of 1s? | For an 8 bit greyscale image, the following should work (I think).
It shouldn't be too hard to expand to different image types.
```
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
uint result[image->width];
memset(result, 0, sizeof(result)); // zero the whole uint array (sizeof(uchar) * width only cleared a quarter of it)
for (int col = 0; col < image->width; col++) {
for (int row = 0; row < image->height; row++) {
result[col] += imageData[row * imgStep + col];
}
}
// your desired vector is in result
``` |
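The "multiply by a vector of 1s" idea suggested in the previous answer does give column sums: a 1×h row of ones times an h×w matrix is exactly the row of column totals. A minimal pure-Python sketch (illustrative only; `col_sums_via_ones` is a made-up name, not an OpenCV function):

```python
def col_sums_via_ones(A):
    """Column sums of matrix A as a (1 x h) ones row times the (h x w) matrix."""
    h, w = len(A), len(A[0])
    ones = [1] * h
    # (ones @ A)[c] = sum over rows r of ones[r] * A[r][c]
    return [sum(ones[r] * A[r][c] for r in range(h)) for c in range(w)]

print(col_sums_via_ones([[1, 2], [3, 4], [5, 6]]))  # -> [9, 12]
```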
5,097,416 | In Matlab, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV? | 2011/02/23 | [
"https://Stackoverflow.com/questions/5097416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/595285/"
] | **EDIT after 3 years:**
The proper function for this is [cv::reduce](https://docs.opencv.org/3.4.1/d2/de8/group__core__array.html#ga4b78072a303f29d9031d56e5638da78e).
>
> Reduces a matrix to a vector.
>
>
> The function reduce reduces the matrix to a vector by treating the
> matrix rows/columns as a set of 1D vectors and performing the
> specified operation on the vectors until a single row/column is
> obtained. For example, the function can be used to compute horizontal
> and vertical projections of a raster image. In case of REDUCE\_MAX and
> REDUCE\_MIN , the output image should have the same type as the source
> one. In case of REDUCE\_SUM and REDUCE\_AVG , the output may have a
> larger element bit-depth to preserve accuracy. And multi-channel
> arrays are also supported in these two reduction modes.
>
>
>
**OLD**:
I've used the ROI method: move an ROI with the height of the image and width 1 from left to right and calculate the means.
```
Mat src = imread(filename, 0);
vector<int> graph( src.cols );
for (int c=0; c<src.cols-1; c++)
{
Mat roi = src( Rect( c,0,1,src.rows ) );
graph[c] = int(mean(roi)[0]);
}
Mat mgraph( 260, src.cols+10, CV_8UC3);
for (int c=0; c<src.cols-1; c++)
{
line( mgraph, Point(c+5,0), Point(c+5,graph[c]), Scalar(255,0,0), 1, CV_AA);
}
imshow("mgraph", mgraph);
imshow("source", src);
```


**EDIT:**
Just out of curiosity, I've tried resize to height 1 and the result was almost the same:
```
Mat test;
cv::resize(src,test,Size( src.cols,1 ));
Mat mgraph1( 260, src.cols+10, CV_8UC3);
for(int c=0; c<test.cols; c++)
{
graph[c] = test.at<uchar>(0,c);
}
for (int c=0; c<src.cols-1; c++)
{
line( mgraph1, Point(c+5,0), Point(c+5,graph[c]), Scalar(255,255,0), 1, CV_AA);
}
imshow("mgraph1", mgraph1);
```
 | For an 8 bit greyscale image, the following should work (I think).
It shouldn't be too hard to expand to different image types.
```
int imgStep = image->widthStep;
uchar* imageData = (uchar*)image->imageData;
uint result[image->width];
memset(result, 0, sizeof(uint) * image->width); /* result holds uints, so clear sizeof(uint) bytes per element */
for (int col = 0; col < image->width; col++) {
for (int row = 0; row < image->height; row++) {
result[col] += imageData[row * imgStep + col];
}
}
// your desired vector is in result
``` |
5,097,416 | In Matlab, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV? | 2011/02/23 | [
"https://Stackoverflow.com/questions/5097416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/595285/"
] | Using `cvReduce` has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
```
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
```
More information is available in the [OpenCV documentation](http://opencv.willowgarage.com/documentation/c/core_operations_on_arrays.html?highlight=reduce#cvReduce). | cvSum respects ROI, so if you move a 1 px wide window over the whole image, you can calculate the sum of each column.
My c++ got a little rusty so I won't provide a code example, though the last time I did this I used OpenCVSharp and it worked fine. However, I'm not sure how efficient this method is.
My math skills are getting rusty too, but shouldn't it be possible to sum all elements in columns in a matrix by multiplying it by a vector of 1s? |
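That last idea checks out: summing each column is exactly a left-multiplication by a row vector of ones. A pure-Python sketch (no OpenCV; the matrix values are made up for illustration):

```python
# Pure-Python check of the "multiply by a vector of 1s" idea:
# the column sums of a matrix equal ones_row (1 x rows) times the matrix.

def column_sums(matrix):
    """Sum each column directly."""
    return [sum(row[c] for row in matrix) for c in range(len(matrix[0]))]

def ones_times_matrix(matrix):
    """Left-multiply by a row vector of ones: result[c] = sum_r 1 * matrix[r][c]."""
    rows, cols = len(matrix), len(matrix[0])
    ones = [1] * rows
    return [sum(ones[r] * matrix[r][c] for r in range(rows)) for c in range(cols)]

image = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
print(column_sums(image))        # [12, 15, 18]
print(ones_times_matrix(image))  # [12, 15, 18] -- same result
```

With a real image you would use a library reduction (e.g. `cv::reduce`) rather than Python loops, but the algebra is the same.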
5,097,416 | In Matlab, if A is a matrix, sum(A) treats the columns of A as vectors, returning a row vector of the sums of each column.
sum(Image); How could it be done with OpenCV? | 2011/02/23 | [
"https://Stackoverflow.com/questions/5097416",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/595285/"
] | Using `cvReduce` has worked for me. For example, if you need to store the column-wise sum of a matrix as a row matrix you could do this:
```
CvMat * MyMat = cvCreateMat(height, width, CV_64FC1);
// Fill in MyMat with some data...
CvMat * ColSum = cvCreateMat(1, MyMat->width, CV_64FC1);
cvReduce(MyMat, ColSum, 0, CV_REDUCE_SUM);
```
More information is available in the [OpenCV documentation](http://opencv.willowgarage.com/documentation/c/core_operations_on_arrays.html?highlight=reduce#cvReduce). | **EDIT after 3 years:**
The proper function for this is [cv::reduce](https://docs.opencv.org/3.4.1/d2/de8/group__core__array.html#ga4b78072a303f29d9031d56e5638da78e).
>
> Reduces a matrix to a vector.
>
>
> The function reduce reduces the matrix to a vector by treating the
> matrix rows/columns as a set of 1D vectors and performing the
> specified operation on the vectors until a single row/column is
> obtained. For example, the function can be used to compute horizontal
> and vertical projections of a raster image. In case of REDUCE\_MAX and
> REDUCE\_MIN , the output image should have the same type as the source
> one. In case of REDUCE\_SUM and REDUCE\_AVG , the output may have a
> larger element bit-depth to preserve accuracy. And multi-channel
> arrays are also supported in these two reduction modes.
>
>
>
**OLD**:
I've used the ROI method: move an ROI with the height of the image and width 1 from left to right and calculate the means.
```
Mat src = imread(filename, 0);
vector<int> graph( src.cols );
for (int c=0; c<src.cols-1; c++)
{
Mat roi = src( Rect( c,0,1,src.rows ) );
graph[c] = int(mean(roi)[0]);
}
Mat mgraph( 260, src.cols+10, CV_8UC3);
for (int c=0; c<src.cols-1; c++)
{
line( mgraph, Point(c+5,0), Point(c+5,graph[c]), Scalar(255,0,0), 1, CV_AA);
}
imshow("mgraph", mgraph);
imshow("source", src);
```


**EDIT:**
Just out of curiosity, I've tried resize to height 1 and the result was almost the same:
```
Mat test;
cv::resize(src,test,Size( src.cols,1 ));
Mat mgraph1( 260, src.cols+10, CV_8UC3);
for(int c=0; c<test.cols; c++)
{
graph[c] = test.at<uchar>(0,c);
}
for (int c=0; c<src.cols-1; c++)
{
line( mgraph1, Point(c+5,0), Point(c+5,graph[c]), Scalar(255,255,0), 1, CV_AA);
}
imshow("mgraph1", mgraph1);
```
 |
49,788,994 | IE's ignore-zoom setting doesn't work; my code is below. Why doesn't it work? I got the error message (selenium.common.exceptions.SessionNotCreatedException: Message: Unexpected error launching Internet Explorer. Browser zoom level was set to 125%. It should be set to 100%)
```
from selenium.webdriver import Ie
from selenium.webdriver.ie.options import Options
opts = Options()
opts.ignore_protected_mode_settings = True
driver = Ie(options=opts)
``` | 2018/04/12 | [
"https://Stackoverflow.com/questions/49788994",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8531684/"
] | **No**, while working with **InternetExplorerDriver** you shouldn't ignore the browser zoom settings.
As per the Official Documentation of *InternetExplorerDriver* the [**`Required Configuration`**](https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver#required-configuration) mentions the following about **Browser Zoom Level**
```
The browser zoom level must be set to 100% so that the native mouse events can be set to the correct coordinates.
```
As the browser zoom level is set to **125%** hence you see the error. As a solution you must set the browser zoom level back to **100%**.
---
Update
------
Though you haven't replied/commented on my answer, which was constructed as per your question, I can observe from your question update that you are trying to set the property **ignore\_protected\_mode\_settings** to **True**. To achieve that you need to use an instance of the **DesiredCapabilities()** class and configure the *WebDriver* instance as follows:
```
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
cap = DesiredCapabilities().INTERNETEXPLORER
cap['ignoreZoomSetting'] = True
browser = webdriver.Ie(capabilities=cap, executable_path=r'C:\path\to\IEDriverServer.exe')
browser.get('http://google.com/')
browser.quit()
``` | I faced the same issue. The option `ignore_zoom_level` solved it.
```py
from selenium import webdriver
from selenium.webdriver.ie.options import Options
ie_options = Options()
ie_options.ignore_zoom_level = True
ie_driver = webdriver.Ie(options=ie_options)
```
See also: <https://www.selenium.dev/documentation/en/driver_idiosyncrasies/driver_specific_capabilities/#internet-explorer> |
30,810 | Related to [this question](https://gis.stackexchange.com/questions/8418/large-shapefile-to-raster) asked here on gis.se and [this thread](http://postgis.17.n6.nabble.com/Rasterize-a-vector-td4997893.html) on postgis-user, has anyone worked out a good solution for in-database rasterization of a vector layer using PostGIS? It sounds like the necessary functions are available and it is possible to get a valid output, but not one that is readable by GDAL et al. because it has irregular tiles.
To be more specific, I would like to rasterize two distinct vector maps of the US, counties and NBCD mapping zones, to a particular resolution and both aligned with an existing raster. It is possible to do this with gdal\_rasterize, but with some trial and error as I recall -- mainly because the extents of the three inputs do not match, I expect.
Any thoughts? | 2012/08/02 | [
"https://gis.stackexchange.com/questions/30810",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/6650/"
] | Using the API wrong. `arcpy.da`'s second argument is a list of fields, not a where clause. Did you mean:
```
cursor = arcpy.da.UpdateCursor(featureClass,
['*'],
"{0} = '{1}'".format("PropCode",
hotelDict["hotelId"]))
``` | ArcGIS expects field names to be bounded by the double quote character: `"` Of course, this is also the same character in Python that separates strings. To make Python not end a string when it encounters a double quote, you need to use the backslash escape character: `\`. So your cursor expression will look like this:
```
rows = arcpy.UpdateCursor(featureClass, "\"PropCode\" = '"+hotelDict["hotelId"]+"'")
```
With this, the second and third double quotes are ignored by Python, and your expression is successfully passed into ArcGIS. |
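The string mechanics here can be verified without ArcGIS at all. A plain-Python sketch (the `hotel_dict` value is hypothetical) showing that the escaped-quote form and `str.format` produce the same where clause:

```python
# Plain-Python sketch: building the ArcGIS-style where clause two ways.
# The field name must end up wrapped in literal double quotes; the value in single quotes.
hotel_dict = {"hotelId": "H123"}  # hypothetical value for illustration

# 1) Backslash-escaped double quotes, as in the answer above
clause_escaped = "\"PropCode\" = '" + hotel_dict["hotelId"] + "'"

# 2) The same clause via str.format, avoiding the escapes entirely
clause_format = "{0} = '{1}'".format('"PropCode"', hotel_dict["hotelId"])

print(clause_escaped)  # "PropCode" = 'H123'
print(clause_format)   # "PropCode" = 'H123'
```

Either string can then be passed as the where clause; the escaping only exists to keep Python from ending the string literal early.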
39,530 | >
> “Poor Janine,” Holly said, and Veronica caught a mocking look that
> passed between her and James. It implied she didn’t really have any
> sympathy for Janine at all. “She was really upset,” Veronica said,
> putting enough ice into her voice to warn Holly off the topic. But
> James and Holly’s covert exchange made her angry. Good, she thought,
> they can have each other.
>
>
>
Hi! What does the character mean by "They can have each other"? She's going out with James but she doesn't really like him so she's not jealous.
Thanks! | 2014/11/14 | [
"https://ell.stackexchange.com/questions/39530",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/9156/"
] | She is "washing her hands" of those two. If they want each other, fine. They can have each other. They can strike up whatever sort of relationship that they want to. | Adding to @TRomano's answer...
It also implies that for the same reasons that she is upset with them, they will be a fitting punishment for each other, as in: "*They can go ahead and choke on each other!*" |
28,305,627 | In the fiddle - <http://jsfiddle.net/660m7g7k/>
```
<textarea id="input">
[
{
name: "Tyorry",
age: 22
}, {
name: "greg",
age: 44
}, {
name: "aff",
age: 99
}, {
name: "ben",
age: 20
}
]
```
```
var x=document.getElementById("input").value;
alert(x[0]);
```
There is JSON data, array of objects basically. I have 2 questions.
1) Is this JSON data in JSON.stringify format or JSON.parse format? Since JSON.parse is erroring out and JSON.stringify is working properly.
2) I am getting the JSON data from the textarea, but x[0] or x[3] returns blank. Basically I want to loop through the array items (which are objects) and get the values, name and age. | 2015/02/03 | [
"https://Stackoverflow.com/questions/28305627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4513872/"
] | The value in a text area is always a `string`. So if you want it as an object you'll want to use `JSON.parse()` to get it. If `JSON.parse()` is failing then your JSON is in an invalid format.
To check if your JSON is valid, try using something like <http://jsonlint.com/>. The JSON provided in the fiddle is invalid. | In plain JS use this
```
var x=document.getElementById("input").value;
var y = eval(x);
alert('hi '+y[0].name+ ' are you '+ y[0].age+' years old');
```
[Plunker](http://jsfiddle.net/747zr8zm/) |
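As both answers hint, the root cause is that the fiddle's data is a JS object literal, not valid JSON (object keys must be double-quoted strings). The same distinction can be demonstrated with Python's stdlib `json` module (an analogy; the original code is JavaScript):

```python
# Strict JSON parsers reject the fiddle's data because object keys must be
# double-quoted strings. Demonstrated with Python's stdlib json module.
import json

relaxed = '[{name: "Tyorry", age: 22}]'     # JS object-literal style: NOT valid JSON
strict = '[{"name": "Tyorry", "age": 22}]'  # valid JSON

try:
    json.loads(relaxed)
    parsed_relaxed = True
except json.JSONDecodeError:
    parsed_relaxed = False

people = json.loads(strict)
print(parsed_relaxed)                        # False -- same reason JSON.parse errors out
print(people[0]["name"], people[0]["age"])   # Tyorry 22
```

Quoting the keys in the textarea (and making the trailing structure valid) lets `JSON.parse` succeed, with no need for `eval`.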
21,693,547 | I wrote a Gradle plugin, its version is specified in its build script.
Is it possible for this plugin to be aware of its own version when someone is using it? (i.e. when its `apply(Project project)` method is called) | 2014/02/11 | [
"https://Stackoverflow.com/questions/21693547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/378979/"
] | For my plugins, I embed a field called Implementation-Version into the MANIFEST.MF file during the build. Then I read that field at runtime by accessing the package like this:
```
def pkg = MyPlugin.class.getPackage()
return pkg.implementationVersion
```
Or using a helper class like: <https://github.com/nebula-plugins/nebula-core/blob/master/src/main/groovy/nebula/core/ClassHelper.groovy#L16> to grab arbitrary field from the manifest. | You can also find the version by doing this:
```
def selfVersion = project.buildscript.configurations.classpath.resolvedConfiguration.resolvedArtifacts.collect {
it.moduleVersion.id }.findAll { it.name == '<name of plugin>' }.first().version
``` |
68,150,570 | I'm trying to run a simple test with JavaScript as below.
```
import React from 'react';
import Customization from 'components/onboarding/customization';
import '@testing-library/jest-dom';
import { render, screen, fireEvent } from '@testing-library/react';
describe('customization render', () => {
it('should render the Hero page with no issue', () => {
render(<Customization />);
const heading = screen.getByText(
/All the Moodmap at one place!/i
);
expect(heading).toBeInTheDocument();
});
it("should call onCLick method on click", () => {
const mockOnClick = jest.fn()
const {container} = render(<Customization />);
const button = getByTestId(container, 'alreadyDownloaded');
fireEvent.click(button);
expect(mockOnClick).toHaveBeenCalledTimes(1)
// const mockOnClick = jest.fn()
// const utils = render(<Customization onClick={mockOnClick} />)
// fireEvent.click(screen.getByText(/already downloaded ⟶/i))
// expect(mockOnClick).toHaveBeenCalledTimes(1)
})
});
```
When running the tests I'm getting this error
```
No google analytics trackingId defined
8 | debug: process.env.NODE_ENV !== 'production',
9 | plugins: [
> 10 | googleAnalyticsPlugin({
| ^
11 | trackingId: process.env.NEXT_PUBLIC_GA_TRACKING_ID,
12 | }),
```
How do I make this error go away - surely it shouldn't require Google Analytics code given the above, it's not in production when running the test?
### Update
So I need to make sure the `.env` file is being loaded!
In my `package.json` I've got this Jest setup:
```
"jest": {
"testMatch": [
"**/?(*.)(spec|test).?(m)js?(x)"
],
"moduleNameMapper": {
"\\.(css|less|scss)$": "identity-obj-proxy"
},
"moduleDirectories": [
"node_modules",
"src"
],
"rootDir": "src",
"moduleFileExtensions": [
"js",
"jsx",
"mjs"
],
"transform": {
"^.+\\.m?jsx?$": "babel-jest"
},
"coverageThreshold": {
"global": {
"branches": 80,
"functions": 80,
"lines": 80,
"statements": -10
}
}
},
```
### Updated code to use jest.setup - can't get env to load
So
```
import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';
import "@testing-library/jest-dom";
configure({
adapter: new Adapter()
});
module.exports = {
testMatch: [
"**/?(*.)(spec|test).?(m)js?(x)"
],
moduleNameMapper: {
"\\.(css|less|scss)$": "identity-obj-proxy"
},
moduleDirectories: [
"node_modules",
"src"
],
rootDir: "src",
moduleFileExtensions: [
"js",
"jsx",
"mjs"
],
transform: {
"^.+\\.m?jsx?$": "babel-jest"
},
coverageThreshold: {
"global": {
"branches": 80,
"functions": 80,
"lines": 80,
"statements": -10
}
},
setupFiles: ["../<rootDir>/.config.env.test"]
};
```
The environment variable files is here:
```
process.env.NEXT_PUBLIC_GA_TRACKING_ID=xxx
```
And this is the code that is not loading the environment variables properly.
```
import Analytics from 'analytics';
import googleAnalyticsPlugin from '@analytics/google-analytics';
import Router from 'next/router';
// Initialize analytics and plugins
// Documentation: https://getanalytics.io
const analytics = Analytics({
debug: process.env.NODE_ENV !== 'production',
plugins: [
googleAnalyticsPlugin({
trackingId: process.env.NEXT_PUBLIC_GA_TRACKING_ID,
}),
],
});
``` | 2021/06/27 | [
"https://Stackoverflow.com/questions/68150570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7208058/"
] | During your tests you can [leverage `loadEnvConfig` from `@next/env`](https://nextjs.org/docs/basic-features/environment-variables#test-environment-variables) to make sure your environment variables are loaded the same way Next.js does.
First, set up a `.env.test` to be used during the tests.
```
NEXT_PUBLIC_GA_TRACKING_ID=ga-test-id
```
Next, create a Jest global setup file if you don't have one yet, and reference it in your `jest.config.js`.
```js
// jest.config.js
module.exports = {
//...
setupFilesAfterEnv: ['./jest.setup.js'],
};
```
Then add the following code into your Jest global setup file.
```js
// jest.setup.js
import { loadEnvConfig } from '@next/env'
loadEnvConfig(process.cwd())
``` | This message means that the `trackingId` is not defined. As you can see, it is read from `process.env`. You need to create this file in the root of your project and call it `.env`. Note that the dot is at the beginning of the filename. The content of the file should be as follows:
```
NEXT_PUBLIC_GA_TRACKING_ID=insert-key-here
```
If your env file is not being read by jest you can do the following:
```js
// In jest.config.js :
module.exports = {
....
setupFiles: ["<rootDir>/test/setup-tests.ts"]
}
// The file test/setup-tests.ts:
import dotenv from 'dotenv';
dotenv.config({path: './config.env.test'});
```
You can also check [this article](https://medium.com/better-things-digital/using-dotenv-with-jest-7e735b34e55f) for more details. |
72,180,057 | I am new to solidity and I am running code on Remix.
It doesn't matter what version of compiler I specify, I keep on getting the same error.
Can someone help me out? What does "Compiler version ^0.8.0 does not satisfy the r semver requirement" exactly mean?
Here is my code:
```
// SPDX-License-Identifier: UNLICENSED
pragma solidity ^0.8.0;

contract Storage {
    struct People {
        uint256 favoriteNumber;
        string name;
    }

    mapping(string => uint256) public nameToFavoriteNumber;
    People[] public people;

    function addPerson(uint _personFavoriteNumber, string memory _personName) public {
        people.push(People({favoriteNumber: _personFavoriteNumber, name: _personName}));
        nameToFavoriteNumber[_personName] = _personFavoriteNumber;
    }
}
```
[](https://i.stack.imgur.com/6PRHh.png) | 2022/05/10 | [
"https://Stackoverflow.com/questions/72180057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12078893/"
] | I had the same issue a couple of times. In Remix, I added a ".0" to the compiler version like so:
```
pragma solidity ^0.8.4.0;
```
I ran into this in Visual Studio Code also, but I just ignored it and everything worked fine. I hope this helps! | It works in Remix as well, but I have worked on contracts that compiled without that trailing ".0"; now even those show this error.
```
pragma solidity ^0.8.8.0;
``` |
3,522,678 | I have been examining many different examples and I found no objective justification for the chosen bounds in any of them, as if the choice was an intuitive process. Is it really just that? For the lower bound, do we "begin with" 0 and start looking for a bound that is as far as possible from 0 while still satisfying the inequality? What about the upper bound? It just seems to me that it is very easy to choose the wrong bounds, given the commonly counterintuitive nature of limits for beginners. Is there a way to be sure whether a choice is right and not fall into that trap? (In case it makes a difference, the examples I have examined were applied to sequences)
Any help is welcome, thank you. | 2020/01/26 | [
"https://math.stackexchange.com/questions/3522678",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/628799/"
] | ### The Squeeze Theorem
So just for context, here's the definition of the Squeeze Theorem my answer works with. There are more general versions of the theorem, which correspond to "convergence" in metric or topological spaces, but we'll just use a version for the real numbers to keep things simple.
>
> **Squeeze Theorem**: Let $f : X\to\mathbb{R}$, and let $f^{-} : X\to\mathbb{R}$ and $f^{+} : X\to\mathbb{R}$, be functions defined on some set $X$, and let $a \in \overline{X}$ be a limit point of $X$ (but not necessarily in $X$ itself). Then if
> $$
> \forall x\in X : f^{-}(x) \le f(x) \le f^{+}(x)
> $$
> And if additionally,
> $$
> \lim\_{x\to a}f^{-}(x) = \lim\_{x\to a}f^{+}(x) = L
> $$
> Then
> $$
> \lim\_{x\to a}f(x) = L
> $$
>
>
>
The theorem can apply to functions $\mathbb{R}\to\mathbb{R}$, if you take $x \in X \subseteq \mathbb{R}$ to be a point on some interval of the real numbers, and $f$ to be a real-valued function.
Or, it can also apply equally to number sequences, by taking $X = \mathbb{N}$ to be the set of natural numbers, taking $x = n \in X$ to be some natural number, and taking $f(n) = s\_n$ to be the function which maps each sequence index $n = \{1, 2, 3, ...\}$ to the $n$th sequence element $s\_n = \{s\_1, s\_2, s\_3, ...\}$. In this case, the point $a=\infty$ is not in $\mathbb{N}$, but is still one of the limit points of $\mathbb{N}$.
Now, to get to your question: it does not matter how ***fast*** the squeeze happens, only that there is a ***pinch***. At the beginning the lower and upper bounding sequences can be as "lower" and "upper" as you like, as long as they stay below/above $f(x)$ and both converge to $L$. What matters is that eventually, $f^{-}(x)$ and $f^{+}(x)$ will "squeeze" $f(x)$ to the same point, because $f$ is between $f^{-}$ and $f^{+}$ at every single step of the way, but $f^{-}$ and $f^{+}$ are converging to the same limit $L$, so in essence eventually there is "nowhere else to go" for $f$ except for also to the same limit $L$.
---
### An example with functions
In cases where $|a| < \infty$, this squeezing action can be visualized readily. Here is an example with $a = 0$:
[](https://i.stack.imgur.com/9btfH.png)
There is also an [interactive graph](http://desmos.com/calculator/n42uqqollt) you can use to explore this example further.
In this case, $f(x) = 3x^3\sin\left(1/x^2\right)$ is the black curve, which is undefined at $x=0$. We can take $X = (-1, 1)$ or some other interval around $x=0$ (since we're interested in what happens as $x\to 0$, so it really doesn't matter what these functions do far away from $x=0$).
1. If you take the lower and upper limiting functions to be $f^{-}(x) = -3|x|^3$, the solid blue curve, and $f^{+}(x) = 3|x|^3$, the solid red curve, then the conditions of the squeeze theorem are satisfied:
\begin{array}{cc}
\forall{x}\in X : -3|x|^3 \le 3x^3\sin\left(1/x^2\right) \le 3|x|^3 & \checkmark \\
\lim\_{x\to 0}\, -3|x|^3 = \lim\_{x\to 0}\, 3|x|^3 = -3|0|^3 = 3|0|^3 = 0 & \checkmark
\end{array}
Therefore, by the Squeeze Theorem, the black curve also converges to $L = 0$:
$$
\lim\_{x\to 0}\; 3x^3\sin\left(1/x^2\right) = 0
$$
However, you may notice something about the solid red and black curves - they actually come close enough to the black curve to just barely touch it at several points. In a sense, they are the "tightest" possible lower and upper bounds you can place on $f$.
Many illustrations of the Squeeze Theorem use "tight" bounds of this sort, perhaps because visually it emphasizes the "squeezing" action for which the theorem is named. But it is not necessary for the bounds to be exactly tight, for the theorem to work!
2. We could also take the lower and upper limiting functions to be $g^{-}(x) = -3x^2$, the dashed blue curve, and $g^{+}(x) = 3x^2$, the dashed red curve. Notice that the bound is no longer as tight, but the conditions of the squeeze theorem are satisfied all the same!
\begin{array}{cc}
\forall{x}\in X : -3x^2 \le 3x^3\sin\left(1/x^2\right) \le 3x^2 & \checkmark \\
\lim\_{x\to 0}\, -3x^2 = \lim\_{x\to 0}\, 3x^2 = -3(0)^2 = 3(0)^2 = 0 & \checkmark
\end{array}
These functions would work just as well for establishing the limit of $f(x)$ as $x\to 0$ as the first pair.
---
### An example with sequences
Here's another example, this time with a discrete sequence. Consider the infinite sequence:
$$
a\_n = \frac{(-1)^n}{n!} = \left(-1, \frac{1}{2}, -\frac{1}{6}, \frac{1}{24}, -\frac{1}{120}, \cdots \right)
$$
We can use the Squeeze Theorem to show that $a\_n \to 0$ as $n \to \infty$, by finding two sequences $a^{\;-}\_{n}$ and $a^{\;+}\_n$, where both (1) $a^{\;-}\_n \le a\_n \le a^{\;+}\_n$ for all $n$, and (2) both $a^{\;-}\_n \to 0$ and $a^{\;+}\_n \to 0$ as $n \to 0$. But just like with the continuous example, the bounding sequences don't have to be "tight" limits - they just have to bound, and to converge to the same limit. Therefore you could use any of the following as the Squeeze Theorem lower/upper bounds:
\begin{array}{cll}
\text{(1)} & a^{\;-}\_n = -1/n! \;&\; a^{\;+}\_n = 1/n! \\
\text{(2)} & a^{\;-}\_n = -1/2^{n-1} \;&\; a^{\;+}\_n = 1/2^{n-1} \\
\text{(3)} & a^{\;-}\_n = -1/n \;&\; a^{\;+}\_n = 1/n \\
\end{array}
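A quick numeric sanity check (a pure-Python sketch, not part of the proof) confirms that each of the three pairs really brackets $a\_n$ for the first several terms:

```python
# Numeric sanity check: each candidate pair brackets a_n = (-1)^n / n! for n = 1..12.
from math import factorial

def a(n):
    return (-1) ** n / factorial(n)

pairs = [
    (lambda n: -1 / factorial(n), lambda n: 1 / factorial(n)),   # pair (1), the tight one
    (lambda n: -1 / 2 ** (n - 1), lambda n: 1 / 2 ** (n - 1)),   # pair (2)
    (lambda n: -1 / n,            lambda n: 1 / n),              # pair (3)
]

for lower, upper in pairs:
    assert all(lower(n) <= a(n) <= upper(n) for n in range(1, 13))

print("all three pairs bound a_n")  # and each pair tends to 0, so each squeezes a_n to 0
```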
Only the sequences in item $\text{(1)}$ bound the sequence $a\_n$ "tightly". Nevertheless, any of these sequences would work to establish the limit of $a\_n$ through the Squeeze Theorem, because they all bound $a\_n$, and they all converge to $0$. | But, we have $f(x)\le g(x)\le h(x)$ with $f$ and $h$ having the same limit. So it really is a "tight squeeze". |
4,630,032 | Here is another one of these weird things. I have this code and a file.
```
use strict;
use warnings;
my $file = "test.txt";
my @arr;
open (LOGFILE, $file);
while (my $line = <LOGFILE>)
{
#print $line;
@arr = split("\n", $line);
}
close LOGFILE;
print $arr[1];
```
test.txt contains
>
> \ntest1 \ntest2 \ntest3
>
>
>
Here is the error I get:
>
> Use of uninitialized value in print at
> test.pl line 15.
>
>
>
Did anyone encounter a similar problem in the past? | 2011/01/07 | [
"https://Stackoverflow.com/questions/4630032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/318146/"
] | `split` takes a regex (I believe your string is coerced into a regex). Maybe something like `split(/\\n/, $line)`?
```
use strict;
use warnings;
my $file = "test.txt";
my @arr;
open (LOGFILE, $file);
while (my $line = <LOGFILE>)
{
print $line;
@arr = split(/\\n/, $line);
}
close LOGFILE;
print $arr[1];
``` | You could use:
```
@arr = split /\Q\n/, $line;
``` |
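The underlying issue - the file contains the literal two-character sequence backslash + `n`, not real newlines - can also be illustrated outside Perl. A Python sketch of the same split (an analogy, not the original code):

```python
# Python analogue of the Perl fix: the file holds the literal two-character
# sequence backslash + n, so we split on that sequence, not on real newlines.
import re

line = r"\ntest1 \ntest2 \ntest3"  # raw string: contains backslash-n, no newlines

naive = line.split("\n")           # splits on real newlines -> nothing to split
fixed = re.split(r"\\n", line)     # splits on the literal \n sequence

print(naive)  # one element: the whole string, untouched
print(fixed)  # ['', 'test1 ', 'test2 ', 'test3']
```

This is exactly why `split("\n", $line)` found nothing to split while `split /\\n/` (or `/\Q\n/`) works.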
79,211 | Wondering what unknown gotchas would be involved in selling my home and, with the profits, buying a smaller one for cash to avoid mortgage payments. Some info: I live in California, U.S.A.; I will begin drawing SS in about 8 months and don't foresee uncovered medical expenses due to Medicare; my wife is (don't tell anyone) ~61 and can begin drawing SS at 62 - but not Medicare. So far, I know I have to pay: medical for my wife, house and car insurance, property taxes, day-to-day living expenses. Bottom line, is it advantageous to follow through with the above plan? I have ~200K available after moving into the new home, and monthly income from SS will be ~2400 (3900 after she retires).
Thanks in advance for your thoughts and comments. | 2017/04/29 | [
"https://money.stackexchange.com/questions/79211",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/55336/"
] | I am going to answer the question you didn't ask. The timing of Social Security.
[](https://i.stack.imgur.com/EyQnT.jpg)
Your benefit at 66 will be $2400, $3900 for the 2 of you. If you delay one year, you will see a bit of COLA (cost of living adjustment) as well as an 8% bump. That's $312/mo. ($3744/yr) The "cost" of this is not collecting that $46800. This, in effect, is an 8% return on that $46,800. Of course, per the chart, the 8% bump is for each year delayed until age 70.
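The 8% figure can be checked with quick arithmetic, using the combined $3900/mo quoted above (a sketch, not financial advice):

```python
# Quick check of the delayed-claiming arithmetic quoted above.
combined_monthly = 3900            # combined benefit at 66, from the answer
bump_rate = 0.08                   # delayed-retirement credit per year of waiting

extra_monthly = combined_monthly * bump_rate   # ~312 per month
extra_yearly = extra_monthly * 12              # ~3744 per year
forgone = combined_monthly * 12                # 46800: one year of benefits not collected

# The extra yearly benefit divided by the forgone year is the "8% return".
print(round(extra_monthly), round(extra_yearly), forgone)  # 312 3744 46800
print(round(extra_yearly / forgone, 4))                    # 0.08
```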
For many people who have a bit more saved, this strategy will help avoid the taxation of Social Security benefits, a convoluted process where, if 1/2 your SS benefit plus other income exceeds $32K, the benefit starts to become taxable. I wrote about this, with a chart to illustrate, at [The Phantom Couple’s Tax Rate Zone](http://rothmania.net/the-phantom-couples-tax-rate-zone/). In your case, it looks like you'll stay under that level, unless you take a large 401(k) withdrawal for whatever reason.
Aside from this suggestion, the plan looks very sound. I've often written that a house shouldn't be considered an investment, up until the moment you downsize as you propose to do. | Assuming you WANT to move into a smaller, more manageable house, the only real financial pitfall I could see would be any capital gains you might have to account for if you're selling your current property for more than you paid for it.
Since you're willing to move into a smaller house, and have plenty of reserve funds, there shouldn't be too many other financial hurdles or potential pitfalls.
That said though, when it comes to moving, there are so many personal factors that often weigh more than the financial ones. Do you want a smaller house? Do you have the capability to handle a move? Are you at a point in your life where you want to dedicate time to getting settled for retirement? If the answer to all of those questions is yes, then it makes perfect sense in my opinion. |
44,161,662 | I know that in MVVM, we want to propagate user input from the *view* to the *view model* via data binding, and give the reflected view state in the *view model* to the *model*, where we write the business logic code, and update the user with the result via events.
However, does it mean that every change in the view must be done outside of the xaml.cs file?
Take for example a WPF application for [sliding puzzle](https://upload.wikimedia.org/wikipedia/commons/9/91/15-puzzle.svg):
If we want to write an algorithm to solve the puzzle, we'll put the code in the model.
However, assume we want to update the grid after the user clicked the down key.
Checking if such move is possible, redrawing the board or giving the player any feedback (if the move is legit or no) should be done in the view? (the xaml.cs file)
More generally, are there "rules of thumb" to decide what to handle where? | 2017/05/24 | [
"https://Stackoverflow.com/questions/44161662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4468210/"
] | Quick recap for the MVVM layers (or "rules of thumb"):
* **Model**: Contains only the data used by the view models. As an example, consider business objects coming from the database as "models".
* **View**: Connection between the user and the view model. You can use multiple views for the same view model. If the view model changes and updates, the view should show the changes.
* **ViewModel**: Contains the "business logic" between the view and the model. As such, commands, possible actions and algorithms are stored here. The view model dictates what is possible and what is not.
The communication between layers needs (that's the part necessary for MVVM) to be *interchangeable*, meaning the view model can be used with different compatible views and the model can be used by different compatible view models. To cut down on the dependencies between the multiple layers, the layers *should not* communicate directly with each other. We use commands, events and direct bindings.
>
> However, does it mean that every change in the view must be done outside of the xaml.cs file? [...] However, assume we want to update the grid after the user clicked the down key. Checking if such move is possible, redrawing the board or giving the player any feedback (if the move is legit or no) should be done in the view? (the xaml.cs file)
>
>
>
No. The view model should explicitly tell the view what is possible and what is not. The view shows that the action is possible or not: it does not **decide** if it is possible. Such decision is in the business logic, so in the view model.
As a thought experiment, take what was said about *interchangeable* views. If you switch out the view `foo` for another view `bar` to show your puzzle and you had put the "what's possible" decision in the view, you would have to rewrite the decision tree/algorithm in the new view `bar`, thus duplicating code/logic.
When the decision is higher up, the view reflects what the view model is telling it. If the view model wants the view to "refresh" or to tell the user "hey, that's an illegal move", the view model will do so via [commands](https://msdn.microsoft.com/en-us/magazine/dn237302.aspx) and events. After receiving such an event, the view can then decide what to do with it:
* Show an error message about the illegal move
* Show a tooltip that the move is illegal
* Flash and shake the window with a beep to show that the move is illegal
* Many more implementations...
I do hope I answered your question as thoroughly as possible. | My 10 cents:
If my experience has taught me anything, it's that it's almost impossible to fit all problems with the same general solution.
In the case of MVVM, some things I've learned (the hard way):
1. It's easy for the view model to devolve into God classes (ie, mix of purely view-related logic + some business logic + etc..)
2. Depending on the application tiers, sometimes it makes sense for logic to work on view models; other times, it's better for logic to work on the models instead.
3. Whichever layer I/you/anyone think certain classes/logic should go in will most likely have to change as development progresses.
Instead, my approach is usually:
1. Prepare
* Model (for serialization, very little logic),
* View Model (with property change bindings for view) and
* View (thin layer, binds almost directly to View Model)
2. Write the majority of the application logic in the View Model.
* Easier to have logic in here, so view bindings can work
* This is the stage where the View Model layer bloats up
3. When the application is finally working, begin refactoring
1. For rich-client applications, I find my Model classes to be almost purely data
2. The View Model will most likely be refactored into 2 layers: MVM (Model-View-Model) and VVM (View-View Model)
* MVM: This is where common, business-related logic/objects sit
* MVM Objects contain truly common properties that any view can bind to
* VVM: This is almost a 1-to-1 replication of a WPF view
* These objects are typically never shared outside its own view
* The separation into MVM and VVM helps prevent a single view model class from catering to every view's needs (i.e. a whole bunch of `Is(Selected|Checked|etc)` and `*Command` members that may be used exclusively by one view).
* (For some people, VVM logic could probably be part of the View. But for me, I often find myself eventually wishing I had separated them in the first place, for testing. So now I do.)
3. As the application evolves, properties/methods can be either pushed from the MVM into the VVM, or vice versa.
* The application's hierarchy is almost never truly static.
* Even when you build the best version of the application possible, the client will simply want more.
**Having the know-how to refactor an existing architecture to accommodate new requirements > Designing an architecture that is flexible enough for any future requirements**
*having said all that, for many applications that aren't too complex, a slightly-bulgy View Model is usually good enough.* |
24,913,044 | I have 3 `UITextField`s (location, address, zip). I hide 2 of the fields in `viewDidLoad`.
```
Addres1.hidden = YES;
Zip1.hidden = YES;
```
What I need is: when I enter more than 1 number in the Location field, I want to show the address and zip text fields.
I tried this:
```
-(BOOL)textFieldShouldBeginEditing:(UITextField *)textField
{
NSString *myString=Location.text;
NSInteger myInt = [myString intValue];
if (myInt >= 2) {
Addres1.hidden = NO;
Zip1.hidden = NO;
}else{
Addres1.hidden = YES;
Zip1.hidden = YES;
}
return YES;
}
```
But it's not working. Please tell me what I'm doing wrong in my code. | 2014/07/23 | [
"https://Stackoverflow.com/questions/24913044",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | ```
NSInteger myInt = [myString length];
```
instead of
```
NSInteger myInt = [myString intValue];
```
and write your logic in this method:
```
- (void) textFieldDidChange:(UITextField*)textField
```
If you want to compare the input against specific numbers, simply create an array that contains all the numbers you want to compare against.
```
NSMutableArray *numberOfArray = [[NSMutableArray alloc] init];
for (int i = 0; i < 100; i++)
{
    [numberOfArray addObject:@(i)]; // box the int: NSArray stores objects, not primitives
}
```
Then, inside `- (void)textFieldDidChange:(UITextField *)textField`:
```
if ([numberOfArray containsObject:@(myInt)]) // compare against a boxed NSNumber
{
    // do whatever you want here
}
``` | ```
-(BOOL)textFieldShouldBeginEditing:(UITextField *)textField
{
}
```
is only called when the text field becomes active, so it fires only once. You have to write your logic in
```
- (BOOL)textField:(UITextField *)textField shouldChangeCharactersInRange:(NSRange)range replacementString:(NSString *)string
{
}
```
as this delegate method is called for every character you enter in the text field. |
2,773,817 | Okay, I have what I think is a simple question.. or just a case of me being a C# beginner.
I have an array of custom objects (clsScriptItem) that I am populating from a database. Once the items are loaded, I want to back them up to a "backup" array so I can revert the information after changing the main array. However, when I use CopyTo to copy the array and then alter the original array, the backup array is also altered... I thought CopyTo merely copied values + structure from one array to another.
```
private void backupItems()
{
//lastSavedItems and items are both of type clsScriptItem[]
//(declaration not shown)
lastSavedItems = new clsScriptItem[items.Length];
items.CopyTo(lastSavedItems, 0);
//items[0].nexts[0] is 2
//lastSavedItems[0].nexts[0] is 2
items[0].nexts[0] = "-1";
//items[0].nexts[0] is -1
//lastSavedItems[0].nexts[0] is also -1
}
```
**How do I backup this data without having the two arrays be 'linked'??**
UPDATE :
I have updated the backup function to this
```
private void backupItems()
{
lastSavedItems = new clsScriptItem[items.Length];
for (int i = 0; i < items.Length; i++)
lastSavedItems[i] = (clsScriptItem)items[i].Clone();
items[0].nexts[0] = "-1";
}
```
And I have updated my class thusly:
```
public class clsScriptItem : ICloneable
{
//other declarations...
object ICloneable.Clone() { return Clone(); }
public clsScriptItem Clone()
{
return ((clsScriptItem)MemberwiseClone());
}
}
```
**...and the same thing is happening. Any thoughts?** | 2010/05/05 | [
"https://Stackoverflow.com/questions/2773817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83809/"
] | They aren't linked; you are using references to objects (the second array contains 'pointers' to elements, and they are the same 'pointers' as in the first array), so either declare clsScriptItem as a `struct`, or implement the `ICloneable` interface and use
```
for(int i = 0; i < items.Length; i++)
lastSavedItems[i] = (clsScriptItem)items[i].Clone();
``` | This is a classic shallow copy vs deep copy problem. When you copy an array of reference types it is merely the references that are copied, not the objects those references point to. So your end result is two arrays that contain references to the same objects.
So when you copy an array that looks like this
```
Original
----
| |
| 0 |---> Cls
| |
----
| |
| 1 |---> Cls
| |
----
```
You get this
```
Original Copy
---- ----
| | | |
| 0 |---> Cls <--- | 0 |
| | | |
---- ----
| | | |
| 1 |---> Cls <--- | 1 |
| | | |
---- ----
```
If you change the referenced object in any way, it will look like you've changed the contents of both arrays, when really all you've done is change the single object pointed to by both arrays. |
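The same shallow-vs-deep distinction can be demonstrated in a few lines of Python (an illustration of the concept, not the original C# code):

```python
import copy

class Item:
    """Stand-in for clsScriptItem: holds a mutable list, like `nexts`."""
    def __init__(self, nexts):
        self.nexts = nexts

items = [Item(["2"]), Item(["5"])]

shallow = list(items)         # like Array.CopyTo: new array, same objects
deep = copy.deepcopy(items)   # independent clone of every object

items[0].nexts[0] = "-1"

print(shallow[0].nexts[0])  # "-1": the shallow copy sees the change
print(deep[0].nexts[0])     # "2":  the deep copy is unaffected
```

Note that a memberwise (shallow) clone of each element still shares the inner `nexts` list, which is why a `MemberwiseClone`-based copy shows the same symptom.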
2,773,817 | Okay, I have what I think is a simple question.. or just a case of me being a C# beginner.
I have an array of custom objects (clsScriptItem) that I am populating from a database. Once the items are loaded, I want to back them up to a "backup" array so I can revert the information after changing the main array. However, when I use CopyTo to copy the array and then alter the original array, the backup array is also altered... I thought CopyTo merely copied values + structure from one array to another.
```
private void backupItems()
{
//lastSavedItems and items are both of type clsScriptItem[]
//(declaration not shown)
lastSavedItems = new clsScriptItem[items.Length];
items.CopyTo(lastSavedItems, 0);
//items[0].nexts[0] is 2
//lastSavedItems[0].nexts[0] is 2
items[0].nexts[0] = "-1";
//items[0].nexts[0] is -1
//lastSavedItems[0].nexts[0] is also -1
}
```
**How do I backup this data without having the two arrays be 'linked'??**
UPDATE :
I have updated the backup function to this
```
private void backupItems()
{
lastSavedItems = new clsScriptItem[items.Length];
for (int i = 0; i < items.Length; i++)
lastSavedItems[i] = (clsScriptItem)items[i].Clone();
items[0].nexts[0] = "-1";
}
```
And I have updated my class thusly:
```
public class clsScriptItem : ICloneable
{
//other declarations...
object ICloneable.Clone() { return Clone(); }
public clsScriptItem Clone()
{
return ((clsScriptItem)MemberwiseClone());
}
}
```
**...and the same thing is happening. Any thoughts?** | 2010/05/05 | [
"https://Stackoverflow.com/questions/2773817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83809/"
] | Try this (note: `MyType` must be marked `[Serializable]` for `BinaryFormatter` to serialize it):
```
public static MyType[] DeepClone(MyType[] obj)
{
using (MemoryStream ms = new MemoryStream())
{
BinaryFormatter formatter = new BinaryFormatter();
formatter.Serialize(ms, obj);
ms.Position = 0;
return (MyType[])formatter.Deserialize(ms);
}
}
``` | They aren't linked; you are using references to objects (the second array contains 'pointers' to elements, and they are the same 'pointers' as in the first array), so either declare clsScriptItem as a `struct`, or implement the `ICloneable` interface and use
```
for(int i = 0; i < items.Length; i++)
lastSavedItems[i] = (clsScriptItem)items[i].Clone();
``` |
2,773,817 | Okay, I have what I think is a simple question.. or just a case of me being a C# beginner.
I have an array of custom objects (clsScriptItem) that I am populating from a database. Once the items are loaded, I want to back them up to a "backup" array so I can revert the information after changing the main array. However, when I use CopyTo to copy the array and then alter the original array, the backup array is also altered... I thought CopyTo merely copied values + structure from one array to another.
```
private void backupItems()
{
//lastSavedItems and items are both of type clsScriptItem[]
//(declaration not shown)
lastSavedItems = new clsScriptItem[items.Length];
items.CopyTo(lastSavedItems, 0);
//items[0].nexts[0] is 2
//lastSavedItems[0].nexts[0] is 2
items[0].nexts[0] = "-1";
//items[0].nexts[0] is -1
//lastSavedItems[0].nexts[0] is also -1
}
```
**How do I backup this data without having the two arrays be 'linked'??**
UPDATE :
I have updated the backup function to this
```
private void backupItems()
{
lastSavedItems = new clsScriptItem[items.Length];
for (int i = 0; i < items.Length; i++)
lastSavedItems[i] = (clsScriptItem)items[i].Clone();
items[0].nexts[0] = "-1";
}
```
And I have updated my class thusly:
```
public class clsScriptItem : ICloneable
{
//other declarations...
object ICloneable.Clone() { return Clone(); }
public clsScriptItem Clone()
{
return ((clsScriptItem)MemberwiseClone());
}
}
```
**...and the same thing is happening. Any thoughts?** | 2010/05/05 | [
"https://Stackoverflow.com/questions/2773817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83809/"
] | Try this (note: `MyType` must be marked `[Serializable]` for `BinaryFormatter` to serialize it):
```
public static MyType[] DeepClone(MyType[] obj)
{
using (MemoryStream ms = new MemoryStream())
{
BinaryFormatter formatter = new BinaryFormatter();
formatter.Serialize(ms, obj);
ms.Position = 0;
return (MyType[])formatter.Deserialize(ms);
}
}
``` | This is a classic shallow copy vs deep copy problem. When you copy an array of reference types it is merely the references that are copied, not the objects those references point to. So your end result is two arrays that contain references to the same objects.
So when you copy an array that looks like this
```
Original
----
| |
| 0 |---> Cls
| |
----
| |
| 1 |---> Cls
| |
----
```
You get this
```
Original Copy
---- ----
| | | |
| 0 |---> Cls <--- | 0 |
| | | |
---- ----
| | | |
| 1 |---> Cls <--- | 1 |
| | | |
---- ----
```
If you change the referenced object in any way, it will look like you've changed the contents of both arrays, when really all you've done is change the single object pointed to by both arrays. |
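The serialize/deserialize trick from the answer above can be illustrated in Python with `pickle`: round-tripping the object graph through bytes yields a fully independent deep clone (assumption: every object in the graph is serializable, which is what the `[Serializable]` attribute guarantees in the C# version):

```python
import pickle

def deep_clone(obj):
    # Serialize to bytes, then deserialize: every reachable object is rebuilt.
    return pickle.loads(pickle.dumps(obj))

original = [{"nexts": ["2"]}, {"nexts": ["5"]}]
backup = deep_clone(original)

original[0]["nexts"][0] = "-1"

print(backup[0]["nexts"][0])  # still "2": the backup shares nothing with the original
```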
21,090,556 | I am using QuaZIP 0.5.1 with Qt 5.1.1 for C++ on Ubuntu 12.04 x86\_64.
My program reads a large gzipped binary file, usually 1GB of uncompressed data or more, and makes some computations on it. It is not computationally intensive, and most of the time is spent on I/O. So if I can find a way to report how much of the file has been read, I can show it on a progress bar, and even provide an ETA estimate.
I open the file with:
```
QuaGzipFile gzip(fileName);
if (!gzip.open(QIODevice::ReadOnly))
{
// report error
return;
}
```
But there is no functionality in QuaGzipFile to find the file size nor the current position.
I do not need the size and position of the uncompressed stream; the size and position of the compressed stream are fine, because a rough estimation of progress is enough.
Currently, I can find **size of compressed file**, using `QFile(fileName).size()`. Also, I can easily find **current position in uncompressed stream**, by keeping sum of return values of `gzip.read()`. But these two numbers do not match.
I can alter the QuaZIP library, and access internal zlib-related stuff, if it helps. | 2014/01/13 | [
"https://Stackoverflow.com/questions/21090556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1446210/"
] | There is no reliable way to determine total size of uncompressed stream. See [this answer](https://stackoverflow.com/a/9727599/344347) for details and possible workarounds.
However, there is a way to get position in compressed stream:
```
QFile file(fileName);
file.open(QFile::ReadOnly);
QuaGzipFile gzip;
gzip.open(file.handle(), QuaGzipFile::ReadOnly);
while(true) {
QByteArray buf = gzip.read(1000);
//process buf
if (buf.isEmpty()) { break; }
QFile temp_file_object;
temp_file_object.open(file.handle(), QFile::ReadOnly);
double progress = 100.0 * temp_file_object.pos() / file.size();
qDebug() << qRound(progress) << "%";
}
```
The idea is to open the file manually and use the file descriptor to get the position. QFile cannot track external position changes, so `file.pos()` will always be 0. So we create `temp_file_object` from the file descriptor, forcing QFile to request the file position. I could use some lower-level API (such as `lseek()`) to get the file position, but I think my way is more cross-platform.
Note that this method is not very accurate and can give progress values bigger than the true one, because zlib can internally read and decode more data than you have already consumed. | Using an ugly hack on zlib, I was able to find the position in the compressed stream.
First, I copied definition of `gz_stream` from gzio.c (from zlib-1.2.3.4 source), to the end of quagzipfile.cpp. Then I reimplemented the virtual function `qint64 QIODevice::pos() const`:
```
qint64 QuaGzipFile::pos() const
{
gz_stream *s = (gz_stream *)d->gzd;
return ftello64(s->file);
}
```
Since quagzipfile.cpp and quagzipfile.h seem to be independent from other QuaZIP library files, maybe it is better to copy the functionality I need from these files and avoid this hack?
The current version of program is something like this:
```
QFile infile(fileName);
if (!infile.open(QIODevice::ReadOnly))
return;
qint64 fileSize = infile.size(); // size() is a member function call
infile.close();
QuaGzipFile gzip(fileName);
if (!gzip.open(QIODevice::ReadOnly))
return;
qint64 nread;
char buffer[bufferSize];
while ((nread = gzip.read(buffer, bufferSize)) > 0) // read() takes a char*, not a pointer to the array
{
// use buffer
int percent = 100.0 * gzip.pos() / fileSize;
// report percent
}
gzip.close();
``` |
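The underlying idea in both approaches (the compressed-stream position is just the wrapped raw file's offset) can be prototyped quickly with Python's standard `gzip` module. This is an illustration only, with a made-up file and sizes, not QuaZIP itself:

```python
import gzip
import os
import tempfile

# Write a small gzip file to demonstrate with (about 1 MB of highly
# compressible data; any .gz file would work the same way).
path = os.path.join(tempfile.mkdtemp(), "data.gz")
with gzip.open(path, "wb") as f:
    f.write(b"x" * 1_000_000)

total = os.path.getsize(path)  # size of the *compressed* file
progress = 0.0

# GzipFile wraps a raw file object, so the compressed-stream position is
# simply the underlying file's tell(): the same trick as handing a raw
# file handle to the gzip reader and asking the raw file for its position.
with open(path, "rb") as raw:
    with gzip.GzipFile(fileobj=raw) as gz:
        while gz.read(64 * 1024):
            progress = 100.0 * raw.tell() / total
            # progress may briefly overshoot the "true" value, because
            # zlib reads ahead in the compressed stream

print(f"compressed size: {total} bytes, final progress: {progress:.0f}%")
```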
21,090,556 | I am using QuaZIP 0.5.1 with Qt 5.1.1 for C++ on Ubuntu 12.04 x86\_64.
My program reads a large gzipped binary file, usually 1GB of uncompressed data or more, and makes some computations on it. It is not computationally intensive, and most of the time is spent on I/O. So if I can find a way to report how much of the file has been read, I can show it on a progress bar, and even provide an ETA estimate.
I open the file with:
```
QuaGzipFile gzip(fileName);
if (!gzip.open(QIODevice::ReadOnly))
{
// report error
return;
}
```
But there is no functionality in QuaGzipFile to find the file size nor the current position.
I do not need the size and position of the uncompressed stream; the size and position of the compressed stream are fine, because a rough estimation of progress is enough.
Currently, I can find **size of compressed file**, using `QFile(fileName).size()`. Also, I can easily find **current position in uncompressed stream**, by keeping sum of return values of `gzip.read()`. But these two numbers do not match.
I can alter the QuaZIP library, and access internal zlib-related stuff, if it helps. | 2014/01/13 | [
"https://Stackoverflow.com/questions/21090556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1446210/"
] | In zlib 1.2.4 and greater you can use the `gzoffset()` function to get the current position in the compressed file. The current version of zlib is 1.2.8. | Using an ugly hack to zlib, I was able to find position in compressed stream.
First, I copied definition of `gz_stream` from gzio.c (from zlib-1.2.3.4 source), to the end of quagzipfile.cpp. Then I reimplemented the virtual function `qint64 QIODevice::pos() const`:
```
qint64 QuaGzipFile::pos() const
{
gz_stream *s = (gz_stream *)d->gzd;
return ftello64(s->file);
}
```
Since quagzipfile.cpp and quagzipfile.h seem to be independent from other QuaZIP library files, maybe it is better to copy the functionality I need from these files and avoid this hack?
The current version of program is something like this:
```
QFile infile(fileName);
if (!infile.open(QIODevice::ReadOnly))
return;
qint64 fileSize = infile.size;
infile.close();
QuaGzipFile gzip(fileName);
if (!gzip.open(QIODevice::ReadOnly))
return;
qint64 nread;
char buffer[bufferSize];
while ((nread = gzip.read(&buffer, bufferSize)) > 0)
{
// use buffer
int percent = 100.0 * gzip.pos() / fileSize;
// report percent
}
gzip.close();
``` |
21,090,556 | I am using QuaZIP 0.5.1 with Qt 5.1.1 for C++ on Ubuntu 12.04 x86\_64.
My program reads a large gzipped binary file, usually 1GB of uncompressed data or more, and makes some computations on it. It is not computationally intensive, and most of the time is spent on I/O. So if I can find a way to report how much of the file has been read, I can show it on a progress bar, and even provide an ETA estimate.
I open the file with:
```
QuaGzipFile gzip(fileName);
if (!gzip.open(QIODevice::ReadOnly))
{
// report error
return;
}
```
But there is no functionality in QuaGzipFile to find the file size nor the current position.
I do not need the size and position of the uncompressed stream; the size and position of the compressed stream are fine, because a rough estimation of progress is enough.
Currently, I can find **size of compressed file**, using `QFile(fileName).size()`. Also, I can easily find **current position in uncompressed stream**, by keeping sum of return values of `gzip.read()`. But these two numbers do not match.
I can alter the QuaZIP library, and access internal zlib-related stuff, if it helps. | 2014/01/13 | [
"https://Stackoverflow.com/questions/21090556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1446210/"
] | There is no reliable way to determine total size of uncompressed stream. See [this answer](https://stackoverflow.com/a/9727599/344347) for details and possible workarounds.
However, there is a way to get position in compressed stream:
```
QFile file(fileName);
file.open(QFile::ReadOnly);
QuaGzipFile gzip;
gzip.open(file.handle(), QuaGzipFile::ReadOnly);
while(true) {
QByteArray buf = gzip.read(1000);
//process buf
if (buf.isEmpty()) { break; }
QFile temp_file_object;
temp_file_object.open(file.handle(), QFile::ReadOnly);
double progress = 100.0 * temp_file_object.pos() / file.size();
qDebug() << qRound(progress) << "%";
}
```
The idea is to open the file manually and use the file descriptor to get the position. QFile cannot track external position changes, so `file.pos()` will always be 0. So we create `temp_file_object` from the file descriptor, forcing QFile to request the file position. I could use some lower-level API (such as `lseek()`) to get the file position, but I think my way is more cross-platform.
Note that this method is not very accurate and can give progress values bigger than the true one, because zlib can internally read and decode more data than you have already consumed. | In zlib 1.2.4 and greater you can use the `gzoffset()` function to get the current position in the compressed file. The current version of zlib is 1.2.8. |
23,810,950 | Is this
```
<input type="button" value="..."
onclick="javascript: {ddwrt:GenFireServerEvent('__commit;__redirect={/Pages/Home.aspx}' ) }"
/>
```
the same (functionally) as
```
. . .
<script type="text/javascript">
function runIt() {
ddwrt:GenFireServerEvent('__commit;__redirect={/Pages/Home.aspx}' );
}
</script>
<body>
<input type="button" value="..."
onclick="runIt();" />
</body>
</html>
```
I don't really understand
1. what role the term "javascript:" in the `onclick` event description serves. I mean, isn't it the default that what is in the onclick will be javascript?
2. what role the outer curly braces serve in the `..."javascript: {}"`.
3. I recognize that "`ddwrt:`" is a namespace, but I am not aware of how to specify a namespace within a javascript function, which itself is located within a `<script>` block. | 2014/05/22 | [
"https://Stackoverflow.com/questions/23810950",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1106424/"
] | 1. It is a [label](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/label), and completely useless in this context
2. They create a [block](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/block), also useless in this context
3. No, it is another label and also useless | In this particular case (Sharepoint) this is NOT javascript, this is parsed by Sharepoint and translated in something like:
```
onclick="javascript: __doPostBack('ctl00$ctl37$g_c251e0c4_cd3d_4fc0_9028_ab565452bedd','__cancel;__redirect={https://....}')"
```
have a look at the resulting source code.
That's why you can't call GenFireServerEvent in your javascript code. |
48,757,747 | Let's consider a file called `test1.py` and containing the following code:
```
def init_foo():
global foo
foo=10
```
Let's consider another file called `test2.py` and containing the following:
```
import test1
test1.init_foo()
print(foo)
```
Provided that `test1` is on the pythonpath (and gets imported correctly) I will now receive the following error message:
`NameError: name 'foo' is not defined`
Can anyone explain to me why the variable `foo` is not declared as a `global` in the scope of `test2.py` while it is run? And could you provide a workaround for that problem?
Thx! | 2018/02/13 | [
"https://Stackoverflow.com/questions/48757747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4961888/"
] | For this you need to use [`selModel`](https://docs.sencha.com/extjs/5.1.4/api/Ext.grid.Panel.html#cfg-selModel) config for [`grid`](https://docs.sencha.com/extjs/5.1.4/api/Ext.grid.Panel.html) using [`CheckboxModel`](https://docs.sencha.com/extjs/5.1.4/api/Ext.selection.CheckboxModel.html).
* **selModel**: an Ext.selection.Model instance or config object, or the selection model class's alias string. In the latter case, its type property determines which type of selection model this config applies to.
* **CheckboxModel**: a selection model that renders a column of checkboxes that can be toggled to select or deselect rows. The default mode for this selection model is MULTI.
In this **[FIDDLE](https://fiddle.sencha.com/#view/editor&fiddle/2d1r)**, I have created a demo using two grids. In the first grid you can select records with the `ctrl`/`shift` keys, and in the second grid you can select directly on row click. I hope this will help/guide you to achieve your requirement.
**CODE SNIPPET**
```
Ext.application({
name: 'Fiddle',
launch: function () {
//define user store
Ext.define('User', {
extend: 'Ext.data.Store',
alias: 'store.users',
fields: ['name', 'email', 'phone'],
data: [{
name: 'Lisa',
email: 'lisa@simpsons.com',
phone: '555-111-1224'
}, {
name: 'Bart',
email: 'bart@simpsons.com',
phone: '555-222-1234'
}, {
name: 'Homer',
email: 'homer@simpsons.com',
phone: '555-222-1244'
}, {
name: 'Marge',
email: 'marge@simpsons.com',
phone: '555-222-1254'
}, {
name: 'AMargeia',
email: 'marge@simpsons.com',
phone: '555-222-1254'
}]
});
//Define custom grid
Ext.define('MyGrid', {
extend: 'Ext.grid.Panel',
alias: 'widget.mygrid',
store: {
type: 'users'
},
columns: [{
text: 'Name',
flex: 1,
dataIndex: 'name'
}, {
text: 'Email',
dataIndex: 'email',
flex: 1
}, {
text: 'Phone',
flex: 1,
dataIndex: 'phone'
}]
});
//create panel with 2 grid
Ext.create({
xtype: 'panel',
renderTo: Ext.getBody(),
items: [{
//select multiple records by using ctrl key and by selecting the checkbox with mouse in extjs grid
xtype: 'mygrid',
title: 'multi selection example by using ctrl/shif key',
/*
* selModel
* A Ext.selection.Model instance or config object,
* or the selection model class's alias string.
*/
selModel: {
/* selType
* A selection model that renders a column of checkboxes
* that can be toggled to select or deselect rows.
* The default mode for this selection model is MULTI.
*/
selType: 'checkboxmodel'
}
}, {
//select multi record by row click
xtype: 'mygrid',
margin: '20 0 0 0',
title: 'multi selection example on rowclick',
/*
* selModel
* A Ext.selection.Model instance or config object,
* or the selection model class's alias string.
*/
selModel: {
/* selType
* A selection model that renders a column of checkboxes
* that can be toggled to select or deselect rows.
* The default mode for this selection model is MULTI.
*/
selType: 'checkboxmodel',
/* mode
* "SIMPLE" - Allows simple selection of multiple items one-by-one.
* Each click in grid will either select or deselect an item.
*/
mode: 'SIMPLE'
}
}]
});
}
});
I achieved this by adding keyup/keydown listeners. Please find the fiddle where I updated the code.
<https://fiddle.sencha.com/#view/editor&fiddle/2d98> |
618,735 | I would like to start the genealogy program [Gramps](https://gramps-project.org/) with a language (English) other than my locale one (Spanish). I successfully tried to run `Gramps` in terminal via
```
LANG=en_GB gramps
```
I would like, now, to add this command in the .desktop file in `/usr/share/applications/` to be able to start `Gramps` in the English language, but I cannot get it to run like this
```
EXEC=LANG=en_GB gramps
```
What can I do?
**Edit:**
For those interested: the suggestion by Jacob down below helped me to start gramps in the given language English via the .desktop file. In addition, I have used the following two commands so that `gramps` in terminal starts in English as well:
```
echo 'LANGUAGE=en_GB PATH=/usr/bin/gramps:$PATH' >> ~/.bash_profile
source ~/.bash_profile
echo "alias gramps='LANGUAGE=en_GB /usr/bin/gramps'" >> ~/.bashrc
source ~/.bashrc
```
Logout and login! | 2015/05/05 | [
"https://askubuntu.com/questions/618735",
"https://askubuntu.com",
"https://askubuntu.com/users/69710/"
] | I installed Gramp and tried it here, and this should really work:
```
Exec=/bin/bash -c "LANGUAGE=en_GB gramps"
```
`LANGUAGE=` takes precedence over `LANG=`
*Note*
Make sure you run the application from the *local* `.desktop` file: After editing the local one, make *sure* you log out / in before running it again. | A more generic way, compared to playing with a .desktop file, ~/.bashrc, etc., is to create the file **~/bin/gramps** and give it this contents:
```
#!/bin/sh
export LANGUAGE=en_GB
exec /usr/bin/gramps "$@"
```
Also run `chmod +x ~/bin/gramps`. Then, next time you log in, English will be the display language however you start gramps. |
618,735 | I would like to start the genealogy program [Gramps](https://gramps-project.org/) with a language (English) other than my locale one (Spanish). I successfully tried to run `Gramps` in terminal via
```
LANG=en_GB gramps
```
I would like, now, to add this command in the .desktop file in `/usr/share/applications/` to be able to start `Gramps` in the English language, but I cannot get it to run like this
```
EXEC=LANG=en_GB gramps
```
What can I do?
**Edit:**
For those interested: the suggestion by Jacob down below helped me to start gramps in the given language English via the .desktop file. In addition, I have used the following two commands so that `gramps` in terminal starts in English as well:
```
echo 'LANGUAGE=en_GB PATH=/usr/bin/gramps:$PATH' >> ~/.bash_profile
source ~/.bash_profile
echo "alias gramps='LANGUAGE=en_GB /usr/bin/gramps'" >> ~/.bashrc
source ~/.bashrc
```
Logout and login! | 2015/05/05 | [
"https://askubuntu.com/questions/618735",
"https://askubuntu.com",
"https://askubuntu.com/users/69710/"
] | I installed Gramp and tried it here, and this should really work:
```
Exec=/bin/bash -c "LANGUAGE=en_GB gramps"
```
`LANGUAGE=` takes precedence over `LANG=`
*Note*
Make sure you run the application from the *local* `.desktop` file: After editing the local one, make *sure* you log out / in before running it again. | **My workaround:**
```
[Desktop Entry]
Encoding=UTF-8
Name=PhotoFiltre Studio X
Comment=PlayOnLinux
Type=Application
Exec=env LC_ALL="pl_PL.UTF8" /usr/share/playonlinux/playonlinux --run "PhotoFiltre Studio X" %F
Icon=/home/gajowy/.PlayOnLinux//icones/full_size/PhotoFiltre Studio X
Name[fr_FR]=PhotoFiltre Studio X
StartupWMClass=pfstudiox.exe
Categories=Graphics;RasterGraphics;
``` |
618,735 | I would like to start the genealogy program [Gramps](https://gramps-project.org/) with a language (English) other than my locale one (Spanish). I successfully tried to run `Gramps` in terminal via
```
LANG=en_GB gramps
```
I would like, now, to add this command in the .desktop file in `/usr/share/applications/` to be able to start `Gramps` in the English language, but I cannot get it to run like this
```
EXEC=LANG=en_GB gramps
```
What can I do?
**Edit:**
For those interested: the suggestion by Jacob down below helped me to start gramps in the given language English via the .desktop file. In addition, I have used the following two commands so that `gramps` in terminal starts in English as well:
```
echo 'LANGUAGE=en_GB PATH=/usr/bin/gramps:$PATH' >> ~/.bash_profile
source ~/.bash_profile
echo 'alias gramps='LANGUAGE=en_GB /usr/bin/gramps'' >> ~/.bashrc
source ~/.bashrc
```
Logout and login! | 2015/05/05 | [
"https://askubuntu.com/questions/618735",
"https://askubuntu.com",
"https://askubuntu.com/users/69710/"
] | A more generic way, compared to playing with a .desktop file, ~/.bashrc, etc., is to create the file **~/bin/gramps** and give it this contents:
```
#!/bin/sh
export LANGUAGE=en_GB
exec /usr/bin/gramps "$@"
```
Also run `chmod +x ~/bin/gramps`. Then, next time you log in, English will be the display language however you start gramps. | **My workaround:**
```
[Desktop Entry]
Encoding=UTF-8
Name=PhotoFiltre Studio X
Comment=PlayOnLinux
Type=Application
**Exec=env LC_ALL="pl_PL.UTF8" /usr/share/playonlinux/playonlinux --run "PhotoFiltre Studio X" %F**
Icon=/home/gajowy/.PlayOnLinux//icones/full_size/PhotoFiltre Studio X
Name[fr_FR]=PhotoFiltre Studio X
StartupWMClass=pfstudiox.exe
Categories=Graphics;RasterGraphics;
``` |
2,482,907 | I use `document.getElementById("text").value.length` to get the string length through JavaScript, and `mb_strlen($_POST['text'])` to get the string length in PHP, and the two differ considerably. Carriage returns are converted in JavaScript before getting the string length, but I guess some characters are not being counted.
For example,
>
> [b]15. Umieszczanie obrazka z logo na stronie zespołu[/b]
>
>
>
This block of text is counted as 57 in JavaScript and 58 in PHP. When the text gets long, the difference increases. Is there any way to overcome this? | 2010/03/20 | [
"https://Stackoverflow.com/questions/2482907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247430/"
] | I have found an mb\_strlen equivalent function for Javascript, maybe this might be useful for someone else:
```
function mb_strlen(str) {
var len = 0;
for(var i = 0; i < str.length; i++) {
len += str.charCodeAt(i) < 0 || str.charCodeAt(i) > 255 ? 2 : 1;
}
return len;
}
```
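To see why this byte-oriented heuristic returns 2 for characters like "ł", here is the same arithmetic in Python (illustration only; Python is just used as a calculator here):

```python
s = "zespołu"  # contains "ł" (U+0142), from the question's sample text

chars = len(s)                       # code points; what JS .length reports here
utf8_bytes = len(s.encode("utf-8"))  # what a byte-oriented strlen would see

# the heuristic from the JS function above: code points above U+00FF count twice
approx = sum(2 if ord(c) > 0xFF else 1 for c in s)

print(chars, utf8_bytes, approx)  # 7 8 8
```

(The heuristic matches the real UTF-8 byte length only up to U+07FF; beyond that, UTF-8 uses three or four bytes per character.)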
Thanks to all that tried to help! | I notice that there is a non-standard character in there (the ł) - I'm not sure how PHP counts non-standard - but it could be counting that as two. What happens if you run the test without that character? |
2,482,907 | I use `document.getElementById("text").value.length` to get the string length through JavaScript, and `mb_strlen($_POST['text'])` to get the string length in PHP, and the two differ considerably. Carriage returns are converted in JavaScript before getting the string length, but I guess some characters are not being counted.
For example,
>
> [b]15. Umieszczanie obrazka z logo na stronie zespołu[/b]
>
>
>
This block of text is counted as 57 in JavaScript and 58 in PHP. When the text gets long, the difference increases. Is there any way to overcome this? | 2010/03/20 | [
"https://Stackoverflow.com/questions/2482907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247430/"
] | If you're trying to get the length of an UTF-8 encoded string in PHP, you should specify the encoding in the second parameter of `mb_strlen`, like so:
```
mb_strlen($_POST['text'], 'UTF-8')
```
Also, don't forget to call `stripslashes` on the POST-var. | I notice that there is a non-standard character in there (the ł) - I'm not sure how PHP counts non-standard - but it could be counting that as two. What happens if you run the test without that character? |
2,482,907 | I use `document.getElementById("text").value.length` to get the string length through JavaScript, and `mb_strlen($_POST['text'])` to get the string length in PHP, and the two differ considerably. Carriage returns are converted in JavaScript before getting the string length, but I guess some characters are not being counted.
For example,
>
> [b]15. Umieszczanie obrazka z logo na stronie zespołu[/b]
>
>
>
This block of text is counted as 57 in JavaScript and 58 in PHP. When the text gets long, the difference increases. Is there any way to overcome this? | 2010/03/20 | [
"https://Stackoverflow.com/questions/2482907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247430/"
] | I have found an mb\_strlen equivalent function for Javascript, maybe this might be useful for someone else:
```
function mb_strlen(str) {
var len = 0;
for(var i = 0; i < str.length; i++) {
len += str.charCodeAt(i) < 0 || str.charCodeAt(i) > 255 ? 2 : 1;
}
return len;
}
```
Thanks to all that tried to help! | This should do the trick
```
function mb_strlen (s) {
return ~-encodeURI(s).split(/%..|./).length;
}
``` |
2,482,907 | I use `document.getElementById("text").value.length` to get the string length through JavaScript, and `mb_strlen($_POST['text'])` to get the string length in PHP, and the two differ considerably. Carriage returns are converted in JavaScript before getting the string length, but I guess some characters are not being counted.
For example,
>
> [b]15. Umieszczanie obrazka z logo na stronie zespołu[/b]
>
>
>
This block of text is counted as 57 in JavaScript and 58 in PHP. When the text gets long, the difference increases. Is there any way to overcome this? | 2010/03/20 | [
"https://Stackoverflow.com/questions/2482907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247430/"
] | I have found an mb\_strlen equivalent function for Javascript, maybe this might be useful for someone else:
```
function mb_strlen(str) {
var len = 0;
for(var i = 0; i < str.length; i++) {
len += str.charCodeAt(i) < 0 || str.charCodeAt(i) > 255 ? 2 : 1;
}
return len;
}
```
Thanks to all that tried to help! | Just type more than one line in your text area and you'll see the difference getting bigger and bigger...
This comes from the fact that JavaScript's value.length doesn't count the extra end-of-line character, while all PHP length functions take it into account.
Just do:
```
// In case you're using CKEditor
// id is the id of the text area
var value = eval('CKEDITOR.instances.'+id+'.getData();');
// String length without the CRLF
var taille = value.length;
// get number of line
var nb_lines = (value.match(/\n/g) || []).length;
// Now, this value is the same you'll get with strlen in PHP
taille = taille + nb_lines;
``` |
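Related to the answer above: a textarea's `.value` uses bare `\n` line endings while browsers POST `\r\n`, so the server-side count grows by one per newline. A sketch of the adjustment (Python, for illustration only):

```python
def js_length_to_posted_length(value: str) -> int:
    # A textarea's .value uses LF-only newlines; browsers POST CRLF,
    # so the posted count grows by one per newline.
    return len(value) + value.count("\n")

text = "line one\nline two\nline three"
print(len(text), js_length_to_posted_length(text))  # 28 30
```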
2,482,907 | I use `document.getElementById("text").value.length` to get the string length through JavaScript, and `mb_strlen($_POST['text'])` to get the string length in PHP, and the two differ considerably. Carriage returns are converted in JavaScript before getting the string length, but I guess some characters are not being counted.
For example,
>
> [b]15. Umieszczanie obrazka z logo na stronie zespołu[/b]
>
>
>
This block of text is counted as 57 in JavaScript and 58 in PHP. When the text gets long, the difference increases. Is there any way to overcome this? | 2010/03/20 | [
"https://Stackoverflow.com/questions/2482907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247430/"
] | If you're trying to get the length of an UTF-8 encoded string in PHP, you should specify the encoding in the second parameter of `mb_strlen`, like so:
```
mb_strlen($_POST['text'], 'UTF-8')
```
Also, don't forget to call `stripslashes` on the POST-var. | This should do the trick
```
function mb_strlen (s) {
return ~-encodeURI(s).split(/%..|./).length;
}
``` |
2,482,907 | I use `document.getElementById("text").value.length` to get the string length through JavaScript, and `mb_strlen($_POST['text'])` to get the string length in PHP, and the two differ considerably. Carriage returns are converted in JavaScript before getting the string length, but I guess some characters are not being counted.
For example,
>
> [b]15. Umieszczanie obrazka z logo na stronie zespołu[/b]
>
>
>
This block of text is counted as 57 in JavaScript and 58 in PHP. When the text gets long, the difference increases. Is there any way to overcome this? | 2010/03/20 | [
"https://Stackoverflow.com/questions/2482907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247430/"
] | If you're trying to get the length of an UTF-8 encoded string in PHP, you should specify the encoding in the second parameter of `mb_strlen`, like so:
```
mb_strlen($_POST['text'], 'UTF-8')
```
Also, don't forget to call `stripslashes` on the POST-var. | Just type more than one line in your text area and you'll see the difference getting bigger and bigger...
This comes from the fact that JavaScript's value.length doesn't count the extra end-of-line character, while all PHP length functions take it into account.
Just do:
```
// In case you're using CKEditor
// id is the id of the text area
var value = eval('CKEDITOR.instances.'+id+'.getData();');
// String length without the CRLF
var taille = value.length;
// get number of line
var nb_lines = (value.match(/\n/g) || []).length;
// Now, this value is the same you'll get with strlen in PHP
taille = taille + nb_lines;
``` |
144,974 | Is there any way to specify a callback function for sorting?
Just like
```
filter_condition_callback
```
As
```
'sort_callback' => array($this, 'sortingfun'),
``` | 2016/11/09 | [
"https://magento.stackexchange.com/questions/144974",
"https://magento.stackexchange.com",
"https://magento.stackexchange.com/users/8009/"
] | I faced the same problem today. I solved it by adding joins to the `_prepareCollection` which I added to the `filter_condition_callback`.
Then added the table.column name for the `'index'` in `$this->addColumn`. (In my case: `catalog_product_entity.sku`).
In the \_prepareCollection I added:
```
protected function _prepareCollection()
{
$collection = Mage::getResourceModel($this->_getCollectionClass());
// add to make skus sortable
$collection->getSelect()->join(
'sales_flat_order_item',
'main_table.entity_id=sales_flat_order_item.order_id',
array('product_id')
)
->join(
'catalog_product_entity',
'sales_flat_order_item.product_id=catalog_product_entity.entity_id',
array('sku')
);
$this->setCollection($collection);
return parent::_prepareCollection();
}
```
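Stepping back from Magento's API for a moment, the underlying idea of a sort callback is just a function that tells the sorter how to order rows; a generic Python sketch (not Magento code):

```python
# Rows as a grid collection might hold them (hypothetical sample data).
rows = [{"sku": "B-2"}, {"sku": "A-10"}, {"sku": "A-2"}]

def sort_key(row):
    # The "callback": extract the value the grid column should order by.
    return row["sku"]

ordered = sorted(rows, key=sort_key)
print([r["sku"] for r in ordered])  # ['A-10', 'A-2', 'B-2']
```

In Magento the equivalent hook is the SQL `ORDER BY` that `_setCollectionOrder` builds, which is why the joins above have to expose the column to the collection first.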
This link helped me too:
[Column isn't Sorting in Custom Admin Report](https://magento.stackexchange.com/questions/4061/column-isnt-sorting-in-custom-admin-report) | To fix sorting, you need to override the **\_setCollectionOrder** method;
for example, here is my solution:
```
protected function _setCollectionOrder($column)
{
if (!$dir = $column->getDir()) {
return $this;
}
if ($column->getIndex() == 'orders_count') {
$collection = $this->getCollection();
$collection->getSelect()
->order("orders_count " . strtoupper($column->getDir()));
return $this;
}
if ($column->getIndex() == 'orders_total') {
$collection = $this->getCollection();
$collection->getSelect()
->order("orders_total " . strtoupper($column->getDir()));
return $this;
}
return parent::_setCollectionOrder($column);
}
``` |
23,766 | When talking about options to tackle volatile cryptocurrency prices, Ethereum's white paper has the following discussion:
>
> Such a contract would have significant potential in crypto-commerce.
> One of the main problems cited about cryptocurrency is the fact that
> it's volatile; although many users and merchants may want the security
> and convenience of dealing with cryptographic assets, they may not
> wish to face that prospect of losing 23% of the value of their funds
> in a single day. **Up until now, the most commonly proposed solution
> has been issuer-backed assets; the idea is that an issuer creates a
> sub-currency in which they have the right to issue and revoke units,
> and provide one unit of the currency to anyone who provides them
> (offline) with one unit of a specified underlying asset (eg. gold,
> USD). The issuer then promises to provide one unit of the underlying
> asset to anyone who sends back one unit of the crypto-asset. This
> mechanism allows any non-cryptographic asset to be "uplifted" into a
> cryptographic asset, provided that the issuer can be trusted.**
>
>
>
Can anyone elaborate on the concept of "issuer-backed assets", especially the words in bold above? Does it mean the issuer raises funds in USD/gold/etc. and repays them in Ether? Is there any link to ICOs here?
Confused here. Thanks a lot! | 2017/08/06 | [
"https://ethereum.stackexchange.com/questions/23766",
"https://ethereum.stackexchange.com",
"https://ethereum.stackexchange.com/users/16720/"
] | You are creating a new contract instance every time. There is no way for each contract instance to be aware of how many other ones have been created, so the value will always be 1.
What you want to do is have a parent contract which is able to create the child contract that you would like to count. Here is some sample code:
```
contract Parent {
uint counter;
function createChild() {
Child child = new Child();
++counter;
}
}
```
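The essence of the pattern is that only the single parent performs every creation, so only it can keep the count. It can be sketched in ordinary Python terms (illustration only, not Solidity):

```python
class Child:
    """Stands in for the deployed child contract."""
    pass

class Parent:
    """Factory that owns the counter, mirroring the Solidity sketch above."""
    def __init__(self):
        self.counter = 0

    def create_child(self):
        self.counter += 1   # only the factory sees every creation
        return Child()

factory = Parent()
factory.create_child()
factory.create_child()
print(factory.counter)  # 2
```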
And the Child contract could be anything you want. I think this answers your question. | Each time you instantiate/deploy your contract, it is a new contract with new storage, so it is correctly reporting that it has been created once. Each contract has its own independent version of `counter`.
You could have a single Counter contract that each of the contracts you deploy calls as it is created. That would be able to maintain a single counter that increments each time. I don't know how you could protect that from being called by any third-party, though, and messing up your count.
Or, perhaps best, you should make a "Factory" contract that only takes instruction from your own account. When you call it, it deploys your contract itself and keeps count for you. |
31,550,249 | I'm trying to write a simple app to send (and possibly receive) emails from my gmail account. I managed to do it while hardcoding my account information in my source code, but now I wanted to enter them in GUI fields and read information from there. Here is the code:
```
import sys
import smtplib
from PyQt4 import QtCore, QtGui
from Notifier_Main import Ui_Notifier_Main_GUI
class MainGUI(QtGui.QWidget, Ui_Notifier_Main_GUI):
def __init__(self):
QtGui.QWidget.__init__(self)
self.setupUi(self)
self.sendButton.clicked.connect(self.send)
def send(self):
fromaddr = self.senderEmailLineEdit.text()
toaddrs = self.receiverEmailLineEdit.text()
msg = self.msgTextEdit.toPlainText()
username = self.senderEmailLineEdit.text()
server = smtplib.SMTP("smtp.gmail.com:587")
server.starttls()
server.login(username, 'password')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
main_gui = MainGUI()
main_gui.show()
sys.exit(app.exec_())
```
When I run it I get this long ass error:
```
C:\Python27\python.exe "E:/Python Projekti/Notifier/src/main.py"
Traceback (most recent call last):
File "E:/Python Projekti/Notifier/src/main.py", line 20, in send
server.sendmail(fromaddr, toaddrs, msg)
File "C:\Python27\lib\smtplib.py", line 728, in sendmail
(code, resp) = self.mail(from_addr, esmtp_opts)
File "C:\Python27\lib\smtplib.py", line 480, in mail
self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist))
File "C:\Python27\lib\smtplib.py", line 141, in quoteaddr
m = email.utils.parseaddr(addr)[1]
File "C:\Python27\lib\email\utils.py", line 214, in parseaddr
addrs = _AddressList(addr).addresslist
File "C:\Python27\lib\email\_parseaddr.py", line 457, in __init__
self.addresslist = self.getaddrlist()
File "C:\Python27\lib\email\_parseaddr.py", line 218, in getaddrlist
ad = self.getaddress()
File "C:\Python27\lib\email\_parseaddr.py", line 228, in getaddress
self.gotonext()
File "C:\Python27\lib\email\_parseaddr.py", line 204, in gotonext
if self.field[self.pos] in self.LWS + '\n\r':
TypeError: 'in <string>' requires string as left operand, not QString
```
I tried googling that type error, and found some link about some spyderlib but since I'm pretty new at all this, I couldn't figure out what to do with that. | 2015/07/21 | [
"https://Stackoverflow.com/questions/31550249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3853902/"
] | Most requests to Qt elements that have text will return QStrings, the simple string container Qt uses. Most other libraries are going to expect regular python strings, so casting using str() may be necessary. All of:
```
fromaddr = self.senderEmailLineEdit.text()
toaddrs = self.receiverEmailLineEdit.text()
msg = self.msgTextEdit.toPlainText()
username = self.senderEmailLineEdit.text()
```
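The failure is easy to simulate without Qt: any object that merely renders as a string must be cast with `str()` before being handed to stdlib code that expects real strings. (`QStringLike` below is a hypothetical stand-in for PyQt4's QString, for illustration only.)

```python
class QStringLike:
    # Minimal stand-in for PyQt4's QString: renders as text,
    # but is not an instance of str.
    def __init__(self, text):
        self._text = text

    def __str__(self):
        return self._text

addr = QStringLike("user@example.com")
fromaddr = str(addr)   # cast once, at the boundary
assert isinstance(fromaddr, str)
print(fromaddr)  # user@example.com
```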
are QString objects. | Try casting the variable to string using the builtin function str() |
31,550,249 | I'm trying to write a simple app to send (and possibly receive) emails from my gmail account. I managed to do it while hardcoding my account information in my source code, but now I wanted to enter them in GUI fields and read information from there. Here is the code:
```
import sys
import smtplib
from PyQt4 import QtCore, QtGui
from Notifier_Main import Ui_Notifier_Main_GUI
class MainGUI(QtGui.QWidget, Ui_Notifier_Main_GUI):
def __init__(self):
QtGui.QWidget.__init__(self)
self.setupUi(self)
self.sendButton.clicked.connect(self.send)
def send(self):
fromaddr = self.senderEmailLineEdit.text()
toaddrs = self.receiverEmailLineEdit.text()
msg = self.msgTextEdit.toPlainText()
username = self.senderEmailLineEdit.text()
server = smtplib.SMTP("smtp.gmail.com:587")
server.starttls()
server.login(username, 'password')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
if __name__ == "__main__":
app = QtGui.QApplication(sys.argv)
main_gui = MainGUI()
main_gui.show()
sys.exit(app.exec_())
```
When I run it I get this long ass error:
```
C:\Python27\python.exe "E:/Python Projekti/Notifier/src/main.py"
Traceback (most recent call last):
File "E:/Python Projekti/Notifier/src/main.py", line 20, in send
server.sendmail(fromaddr, toaddrs, msg)
File "C:\Python27\lib\smtplib.py", line 728, in sendmail
(code, resp) = self.mail(from_addr, esmtp_opts)
File "C:\Python27\lib\smtplib.py", line 480, in mail
self.putcmd("mail", "FROM:%s%s" % (quoteaddr(sender), optionlist))
File "C:\Python27\lib\smtplib.py", line 141, in quoteaddr
m = email.utils.parseaddr(addr)[1]
File "C:\Python27\lib\email\utils.py", line 214, in parseaddr
addrs = _AddressList(addr).addresslist
File "C:\Python27\lib\email\_parseaddr.py", line 457, in __init__
self.addresslist = self.getaddrlist()
File "C:\Python27\lib\email\_parseaddr.py", line 218, in getaddrlist
ad = self.getaddress()
File "C:\Python27\lib\email\_parseaddr.py", line 228, in getaddress
self.gotonext()
File "C:\Python27\lib\email\_parseaddr.py", line 204, in gotonext
if self.field[self.pos] in self.LWS + '\n\r':
TypeError: 'in <string>' requires string as left operand, not QString
```
I tried googling that type error, and found some link about some spyderlib but since I'm pretty new at all this, I couldn't figure out what to do with that. | 2015/07/21 | [
"https://Stackoverflow.com/questions/31550249",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3853902/"
] | Most requests to Qt elements that have text will return QStrings, the simple string container Qt uses. Most other libraries are going to expect regular python strings, so casting using str() may be necessary. All of:
```
fromaddr = self.senderEmailLineEdit.text()
toaddrs = self.receiverEmailLineEdit.text()
msg = self.msgTextEdit.toPlainText()
username = self.senderEmailLineEdit.text()
```
are QString objects. | just use this:
```
fromaddr = str(self.senderEmailLineEdit.text())
toaddrs = str(self.receiverEmailLineEdit.text())
msg = str(self.msgTextEdit.toPlainText())
username = str(self.senderEmailLineEdit.text())
``` |
37,544,649 | I have a custom popup window defined by a layout. I have to give x,y coordinates for the popup window to appear after an `a_btn` click. These can be different locations on different phones.
But I want the popup window always shown directly above, and touching, the `a_btn`.
How can I implement this? Help me.
My code for showing the popup window :
```
a_btn.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
LayoutInflater lInflater = (LayoutInflater) getActivity().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View popup_view = lInflater.inflate(R.layout.popup_a, null);
final PopupWindow popup = new PopupWindow(popup_view,FrameLayout.LayoutParams.WRAP_CONTENT,FrameLayout.LayoutParams.WRAP_CONTENT,true);
popup.setFocusable(true);
popup.setBackgroundDrawable(new ColorDrawable());
popup.showAtLocation(relative, Gravity.NO_GRAVITY, coordinateTop, 100);
//popup.showAsDropDown(location_popup_view, 2, 2);
}
});
``` | 2016/05/31 | [
"https://Stackoverflow.com/questions/37544649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4976267/"
] | If you want a counter in the toolbar, try [ActionBarMenuItemCounter](https://github.com/cvoronin/ActionBarMenuItemCounter); it worked for me:
```
private Drawable buildCounterDrawable(int count, int backgroundImageId) {
LayoutInflater inflater = LayoutInflater.from(this);
View view = inflater.inflate(R.layout.counter_menuitem_layout, null);
view.setBackgroundResource(backgroundImageId);
if (count == 0) {
View counterTextPanel = view.findViewById(R.id.counterValuePanel);
counterTextPanel.setVisibility(View.GONE);
} else {
TextView textView = (TextView) view.findViewById(R.id.count);
textView.setText("" + count);
}
view.measure(
View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED),
View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
view.layout(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
view.setDrawingCacheEnabled(true);
view.setDrawingCacheQuality(View.DRAWING_CACHE_QUALITY_HIGH);
Bitmap bitmap = Bitmap.createBitmap(view.getDrawingCache());
view.setDrawingCacheEnabled(false);
return new BitmapDrawable(getResources(), bitmap);
}
``` | I would save your shopping cart as an image and implement your menu layout as follows:
```
<menu xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto">
<item
android:id="@+id/icon_shopping_cart"
android:icon="@drawable/shopping_cart"
android:title="@string/shopping_icon_description"
android:showAsAction="always" />
</menu>
```
And inflate it your activity / fragment:
```
@Override
public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
super.onCreateOptionsMenu(menu, inflater);
inflater.inflate(R.menu.menu_layout, menu);
}
``` |
37,544,649 | I have a custom popup window defined by a layout. I have to give x,y coordinates for the popup window to appear after an `a_btn` click. These can be different locations on different phones.
But I want the popup window always shown directly above, and touching, the `a_btn`.
How can I implement this? Help me.
My code for showing the popup window :
```
a_btn.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
LayoutInflater lInflater = (LayoutInflater) getActivity().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View popup_view = lInflater.inflate(R.layout.popup_a, null);
final PopupWindow popup = new PopupWindow(popup_view,FrameLayout.LayoutParams.WRAP_CONTENT,FrameLayout.LayoutParams.WRAP_CONTENT,true);
popup.setFocusable(true);
popup.setBackgroundDrawable(new ColorDrawable());
popup.showAtLocation(relative, Gravity.NO_GRAVITY, coordinateTop, 100);
//popup.showAsDropDown(location_popup_view, 2, 2);
}
});
``` | 2016/05/31 | [
"https://Stackoverflow.com/questions/37544649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4976267/"
] | Your setup is correct; the only thing you still need is below:
```
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.your_menu_file, menu);
final MenuItem item = menu.findItem(R.id.icon_shopping_cart);
TextView cartCount = (TextView) item.getActionView().findViewById(R.id.counter);
cartCount.setText("10");
return true;
}
```
And remove this line
`toolbar.inflateMenu(R.id.shopping_cart);` | I would save your shopping cart as an image and implement your menu layout as follows:
```
<menu xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto">
<item
android:id="@+id/icon_shopping_cart"
android:icon="@drawable/shopping_cart"
android:title="@string/shopping_icon_description"
android:showAsAction="always" />
</menu>
```
And inflate it your activity / fragment:
```
@Override
public void onCreateOptionsMenu(Menu menu, MenuInflater inflater) {
super.onCreateOptionsMenu(menu, inflater);
inflater.inflate(R.menu.menu_layout, menu);
}
``` |
37,544,649 | I have a custom popup window defined by a layout. I have to give x,y coordinates for the popup window to appear after an `a_btn` click. These can be different locations on different phones.
But I want the popup window always shown directly above, and touching, the `a_btn`.
How can I implement this? Help me.
My code for showing the popup window :
```
a_btn.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
LayoutInflater lInflater = (LayoutInflater) getActivity().getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View popup_view = lInflater.inflate(R.layout.popup_a, null);
final PopupWindow popup = new PopupWindow(popup_view,FrameLayout.LayoutParams.WRAP_CONTENT,FrameLayout.LayoutParams.WRAP_CONTENT,true);
popup.setFocusable(true);
popup.setBackgroundDrawable(new ColorDrawable());
popup.showAtLocation(relative, Gravity.NO_GRAVITY, coordinateTop, 100);
//popup.showAsDropDown(location_popup_view, 2, 2);
}
});
``` | 2016/05/31 | [
"https://Stackoverflow.com/questions/37544649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4976267/"
] | If you want a counter in the toolbar, try [ActionBarMenuItemCounter](https://github.com/cvoronin/ActionBarMenuItemCounter); it worked for me:
```
private Drawable buildCounterDrawable(int count, int backgroundImageId) {
LayoutInflater inflater = LayoutInflater.from(this);
View view = inflater.inflate(R.layout.counter_menuitem_layout, null);
view.setBackgroundResource(backgroundImageId);
if (count == 0) {
View counterTextPanel = view.findViewById(R.id.counterValuePanel);
counterTextPanel.setVisibility(View.GONE);
} else {
TextView textView = (TextView) view.findViewById(R.id.count);
textView.setText("" + count);
}
view.measure(
View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED),
View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
view.layout(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
view.setDrawingCacheEnabled(true);
view.setDrawingCacheQuality(View.DRAWING_CACHE_QUALITY_HIGH);
Bitmap bitmap = Bitmap.createBitmap(view.getDrawingCache());
view.setDrawingCacheEnabled(false);
return new BitmapDrawable(getResources(), bitmap);
}
``` | Your setup is correct; the only thing you still need is below:
```
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.your_menu_file, menu);
final MenuItem item = menu.findItem(R.id.icon_shopping_cart);
TextView cartCount = (TextView) item.getActionView().findViewById(R.id.counter);
cartCount.setText("10");
return true;
}
```
And remove this line
`toolbar.inflateMenu(R.id.shopping_cart);` |
47,794,944 | I have an aspx page, call it Scheduler.aspx, that has an update panel with a repeater; within the repeater ItemTemplate I have a ModalPopupExtender that contains an iframe to another aspx page, call this Update.aspx.
In the form\_load of the Update.aspx page, the code checks for updates from another system and alerts the user that some updates have happened.
What I'm finding is that when the Scheduler.aspx page loads, the function in Update.aspx is fired, so I get a number of alerts; I don't want this until I go into Update.aspx within the modal.
The function is inside `if (!Page.IsPostBack)`, but is there any way I can tell whether the page is loading for real or just because the parent page is loading?
Scheduler.aspx example markup:
```
<asp:UpdatePanel ID="updMon" runat="server" UpdateMode="Conditional" ChildrenAsTriggers="true">
<ContentTemplate>
<asp:Repeater ID="rptMon" runat="server" OnItemCreated="rptMon_ItemCreated">
<HeaderTemplate>
</HeaderTemplate>
<ItemTemplate>
<table style="width:100%;" class='<%# sTableClass(DataBinder.Eval(Container.DataItem, "DSC_ID").ToString()) %>'>
<tr>
<th>
<asp:LinkButton style="color:#717171" runat="server" id="LinkButton1" href="#">
<%# DataBinder.Eval(Container.DataItem, "DSC_DELNAME") %></asp:LinkButton>
</th>
</tr>
<tr>...</tr>
</table>
<cc1:ModalPopupExtender ID="mpe1" runat="server" PopupControlID="pnlMon1" TargetControlID="LinkButton1" ></cc1:ModalPopupExtender>
<asp:Panel ID="pnlMon1" runat="server" CssClass="pnlBackGround" align="center" style = "display:none" >
<iframe id="iFrm1" class="iframeStyle" src='<%# "Update.aspx?id=" + Eval("DSC_ID").ToString() %>' runat="server"></iframe>
<div id="divClose" style="position:relative;top:-60px;width:200px;left:450px;">
<asp:Button ID="btnCloseEdit1" runat="server" class="btn btn-primary" Text="Close" OnCommand="btnCloseEdit1_Command" CommandArgument='<%# Eval("DSC_ID").ToString() %>' />
</div>
</asp:Panel>
</ItemTemplate>
<FooterTemplate></FooterTemplate>
</asp:Repeater>
</ContentTemplate>
```
the call to the update check in Update.aspx is like this
```
protected void Page_Load(object sender, EventArgs e)
{
if (!Page.IsPostBack)
{
if (Request.QueryString["id"] != null)
{
sID = Request.QueryString["id"].ToString();
CheckUpdates(sID);
}
    }
}
``` | 2017/12/13 | [
"https://Stackoverflow.com/questions/47794944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238038/"
] | Finally, I found the solution in [Angular HttpInterceptor documentation usage notes](https://angular.io/api/common/http/HttpInterceptor#usage-notes):
>
> To use the same instance of HttpInterceptors for the entire app,
> import the HttpClientModule only in your AppModule, and add the
> interceptors to the root application injector . If you import
> HttpClientModule multiple times across different modules (for example,
> in lazy loading modules), each import creates a new copy of the
> HttpClientModule, which overwrites the interceptors provided in the
> root module.
>
>
>
I was importing HttpClientModule in a lazy loaded module that was making the requests. After resolving this issue, everything works like a charm. | With version 4.3, Angular added a new service, `HttpClient`.
With version 5, Angular deprecated the old service `Http`.
The interceptor only works with `HttpClient`.
You can be sure that the libraries whose requests are not intercepted use the old `Http`. Note that `Http` will probably be removed in Angular 6!
If you want to make sure every call is intercepted by your interceptor, you need to upgrade your dependencies to their latest versions. |
3,971 | Has anyone had any experience with [Serenity Editor](http://www.serenity-software.com/) editing software? What's the difference between the Standard version or the Word add-in? What kind of results can it get?
If nobody has used it, from the description does it look like a worthwhile tool to investigate? Why or why not? | 2011/09/15 | [
"https://writers.stackexchange.com/questions/3971",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/2343/"
] | Ok, I gave it a try.
I was puzzled during setup getting a message that my screen resolution is too high. Because it was a big WTF too me, I ignored the message and continued. Also does the WTF.
After launching the standalone application you see a fixed-size window, not resizable. I do not know if this is some stupid limitation of the evaluation version, but this is just ridiculous. After the app has analyzed your code, three sub windows show up within the main window, one behind the other.
The Draft Output window shows your text with numbers annotated. The Usage Output (error list) shows these numbers and the issue description. Now it would be handy to arrange these two windows side-by-side, but that is impossible, because of the fixed-sized main window. For me as software developer this is one giant WTF!
Luckily you can export the outputs and watch them in the editor of your choice side-by-side.
The findings themselves are interesting and helpful--at least for me as a non-native speaker. The error list is not in ascending order, instead sorted in categories. Looks like the idea is to get through the error list and look up the corresponding number in the text, not the other way around.
To get an impression I show you a section of my current novel and the analysis of the tool:
(Caveat: I believe in *Write, don't edit* and this is work in progress. So don't expect anything polished.)
Original text:
>
> The wind breathes through my wings. I’m gliding on a cushion of relief. I’ve got outside in time. Here I’m one of many crows flying around. I angle my wings and turn around. I overlook the inner yard of the temple. Witches all over the place. Can they also can fly? It would reduce my chances tremendously.
>
>
> Which chances anyway? They are so many and I am only one. They are witches and I’m just a bird.
>
>
> »Don’t let me regret having chosen you.«
>
>
> The beast.
>
>
> »Oh, don’t tell me, you forgotten me. That would hurt so much; itching my highly strung heart.«
>
>
> Aren’t you too sensitive? I doubt, that demons have hearts.
>
>
> »And I doubt, that ravens have brains. At least the one I know of.«
>
>
> If you are so smart, beast, then tell me how I, a bird, can kill the archpriest. A witch with magic, charms and stuff.
>
>
> »Learn to think. Then learn to be more than a lousy bird.«
>
>
> Wait! You cast me into this body.
>
>
> »Yes, learn to use it.«
>
>
>
Draft Output:
>
> <344>The wind breathes through my wings. <345>I'm gliding on a cushion
>
> of relief. <346>I've got outside in time. <347>Here I'm one of many crows
>
> flying around. <348>I angle my wings and turn around. <349>I overlook the
>
> inner yard of the temple. <350>Witches all over the place. <351>Can they
>
> also can fly? <352>It would reduce my chances tremendously.
>
> <353>Which chances anyway? <354>They are so many and I am only one.
>
> <355>They are witches and I'm just a bird.
>
> <356>The beast.
>
> <357>Aren't you too sensitive? <358>I doubt, that demons have hearts.
>
> <359>If you are so smart, beast, then tell me how I, a bird, can kill
>
> the archpriest. <360>A witch with magic, charms and stuff.
>
> <361>Wait! <362>You casted me into this body.
>
>
>
Usage Output (some numbers are not listed, reason unknown, I put them in order):
>
> <345> I'm
>
> CONTRACTION [k]
>
> <346> I've
>
> CONTRACTION [k]
>
> <347> I'm
>
> CONTRACTION [k]
>
> <347> around
>
> COMMONLY MISUSED TERM; use "about" before a time [M]
>
> <348> around
>
> COMMONLY MISUSED TERM; use "about" before a time [M]
>
> <350> Witches
>
> POSSIBLE OFFENSIVE TERM? [o]
>
> <350> all over the place
>
> SLANG EXPRESSION [L]
>
> <351> also can
>
> UNIDIOMATIC PHRASE; can also? [N]
>
> <352> tremendously
>
> POSSIBLE EMPTY INTENSIFIER [E]
>
> <354> only
>
> COMMONLY MISUSED TERM: place right before word(s) it modifies [M]
>
> <355> witches
>
> POSSIBLE OFFENSIVE TERM? [o]
>
> <355> I'm
>
> CONTRACTION [k]
>
> <357> Aren't
>
> CONTRACTION [k]
>
> <359> smart
>
> COMMONLY MISUSED TERM; intelligent? [M]
>
> <359> tell . . . how
>
> COMMONLY MISUSED TERM; "that" unless "how" = "in what way" [M]
>
> <360> witch
>
> POSSIBLE OFFENSIVE TERM? [o]
>
> <360> and stuff
>
> INFORMAL OR COLLOQUIAL USAGE unless "stuff" is a verb [I]
>
>
> | Looking at the examples they give, it looks like a good idea. However, many of the errors they detect could have been caught by anyone with a good knowledge of English, which comes merely from reading a lot of books, and not necessarily from getting an English degree.
So on [this](http://www.serenity-software.com/pages/new_FAQs.html) page, one of the errors they give is:
>
> Good things come to them who waits.
>
>
>
Now I saw the error immediately, without having to look at their solution. Another example is:
>
> My birthday was June 31, 1986.
>
>
>
The date is incorrect (June has only 30 days) - I couldn't spot this one myself.
So overall, not bad software, but it might be less useful to those who are already in the habit of reading. The key point is the price: do you think $55 is a fair price?
They have a ten-day trial; you can try that to see how it works in practice. Of course, you will need a fairly long piece of work to see how good the software is in practice.
As to which version you buy: if you normally use MS Word, then you should buy the plug-in. If you use an alternative like OpenOffice, or any other free tool, then buy the standalone version.
Just giving my personal opinion, I'm not too sure of software that claims to replace what a human can do, at least in creative fields (remember Clippy?). So be sure to use the trial before you buy the full version, and do share your results here.
**Edit:** OK, I actually tried the software, and it was worse than I thought. It looks like a 1990s shareware program designed by a student. The UI is weird: you have to press two oddly named buttons, Draft and Usage, to get your analysis. The analysis itself appears in a small, non-resizable window.
Even if you ignore the clunky UI, the actual program isn't that great. I entered about 1200 words from my draft, and it threw out hundreds of suggestions, most of them useless. For example, in one case it told me the word 'assistance' was pretentious (really?) and that I should replace it with 'help'. But assistance was the right word in that scene. Then it wanted to replace hoarder with boarder.
Now you can say the writer should be able to take what he/she wants from the output, ignoring what they don't like. All well. Except that it was throwing up hundreds of suggestions for just 1200 words. If I entered my whole book, I would be bogged down for weeks trying to understand the software's cryptic messages.
And finally, the price. Currently it is $55, which is more than even Scrivener. I'm not sure this software justifies the high price for the value it offers.
3,971 | Has anyone had any experience with [Serenity Editor](http://www.serenity-software.com/) editing software? What's the difference between the Standard version or the Word add-in? What kind of results can it get?
If nobody has used it, from the description does it look like a worthwhile tool to investigate? Why or why not? | 2011/09/15 | [
"https://writers.stackexchange.com/questions/3971",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/2343/"
] | Ok, I gave it a try.
I was puzzled during setup by a message saying that my screen resolution is too high. Since that was a big WTF to me, I ignored the message and continued. The WTFs did not stop there.
After launching the standalone application you see a fixed-size window, not resizable. I do not know if this is some stupid limitation of the evaluation version, but it is just ridiculous. After the app has analyzed your text, three sub-windows show up within the main window, one behind the other.
The Draft Output window shows your text with numbers annotated. The Usage Output (error list) shows these numbers and the issue descriptions. It would be handy to arrange these two windows side by side, but that is impossible because of the fixed-size main window. For me as a software developer this is one giant WTF!
Luckily you can export the outputs and watch them in the editor of your choice side-by-side.
The findings themselves are interesting and helpful--at least for me as a non-native speaker. The error list is not in ascending order; instead it is sorted by category. It looks like the idea is to work through the error list and look up the corresponding number in the text, not the other way around.
To get an impression I show you a section of my current novel and the analysis of the tool:
(Caveat: I believe in *Write, don't edit* and this is work in progress. So don't expect anything polished.)
Original text:
>
> The wind breathes through my wings. I’m gliding on a cushion of relief. I’ve got outside in time. Here I’m one of many crows flying around. I angle my wings and turn around. I overlook the inner yard of the temple. Witches all over the place. Can they also can fly? It would reduce my chances tremendously.
>
>
> Which chances anyway? They are so many and I am only one. They are witches and I’m just a bird.
>
>
> »Don’t let me regret having chosen you.«
>
>
> The beast.
>
>
> »Oh, don’t tell me, you forgotten me. That would hurt so much; itching my highly strung heart.«
>
>
> Aren’t you too sensitive? I doubt, that demons have hearts.
>
>
> »And I doubt, that ravens have brains. At least the one I know of.«
>
>
> If you are so smart, beast, then tell me how I, a bird, can kill the archpriest. A witch with magic, charms and stuff.
>
>
> »Learn to think. Then learn to be more than a lousy bird.«
>
>
> Wait! You cast me into this body.
>
>
> »Yes, learn to use it.«
>
>
>
Draft Output:
>
> <344>The wind breathes through my wings. <345>I'm gliding on a cushion
>
> of relief. <346>I've got outside in time. <347>Here I'm one of many crows
>
> flying around. <348>I angle my wings and turn around. <349>I overlook the
>
> inner yard of the temple. <350>Witches all over the place. <351>Can they
>
> also can fly? <352>It would reduce my chances tremendously.
>
> <353>Which chances anyway? <354>They are so many and I am only one.
>
> <355>They are witches and I'm just a bird.
>
> <356>The beast.
>
> <357>Aren't you too sensitive? <358>I doubt, that demons have hearts.
>
> <359>If you are so smart, beast, then tell me how I, a bird, can kill
>
> the archpriest. <360>A witch with magic, charms and stuff.
>
> <361>Wait! <362>You casted me into this body.
>
>
>
Usage Output (some numbers are not listed, reason unknown, I put them in order):
>
> <345> I'm
>
> CONTRACTION [k]
>
> <346> I've
>
> CONTRACTION [k]
>
> <347> I'm
>
> CONTRACTION [k]
>
> <347> around
>
> COMMONLY MISUSED TERM; use "about" before a time [M]
>
> <348> around
>
> COMMONLY MISUSED TERM; use "about" before a time [M]
>
> <350> Witches
>
> POSSIBLE OFFENSIVE TERM? [o]
>
> <350> all over the place
>
> SLANG EXPRESSION [L]
>
> <351> also can
>
> UNIDIOMATIC PHRASE; can also? [N]
>
> <352> tremendously
>
> POSSIBLE EMPTY INTENSIFIER [E]
>
> <354> only
>
> COMMONLY MISUSED TERM: place right before word(s) it modifies [M]
>
> <355> witches
>
> POSSIBLE OFFENSIVE TERM? [o]
>
> <355> I'm
>
> CONTRACTION [k]
>
> <357> Aren't
>
> CONTRACTION [k]
>
> <359> smart
>
> COMMONLY MISUSED TERM; intelligent? [M]
>
> <359> tell . . . how
>
> COMMONLY MISUSED TERM; "that" unless "how" = "in what way" [M]
>
> <360> witch
>
> POSSIBLE OFFENSIVE TERM? [o]
>
> <360> and stuff
>
> INFORMAL OR COLLOQUIAL USAGE unless "stuff" is a verb [I]
>
>
> | On the Serenity web page, I note the following, offered as a reason for buying the software:
>
> Inexperienced writers usually cannot identify problems in punctuation, spelling, word choice, phrasing, and style simply by looking over their work.
>
>
>
I think that if you cannot identify problems like these, perhaps you haven't yet acquired enough skill to write a novel in English. You become an experienced writer not by using software, but by reading and writing. Mostly reading. But writing a lot. And rewriting. And rewriting. |
743,858 | When you create an index on a column or number of columns in MS SQL Server (I'm using version 2005), you can specify that the index on each column be either ascending or descending. I'm having a hard time understanding why this choice is even here. Using binary sort techniques, wouldn't a lookup be just as fast either way? What difference does it make which order I choose? | 2009/04/13 | [
"https://Stackoverflow.com/questions/743858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8409/"
] | This primarily matters when used with composite indexes:
```
CREATE INDEX ix_index ON mytable (col1, col2 DESC);
```
can be used for either:
```
SELECT *
FROM mytable
ORDER BY
col1, col2 DESC
```
or:
```
SELECT *
FROM mytable
ORDER BY
col1 DESC, col2
```
, but not for:
```
SELECT *
FROM mytable
ORDER BY
col1, col2
```
An index on a single column can be efficiently used for sorting in both ways.
See the article in my blog for details:
* [**Descending indexes**](http://explainextended.com/2009/04/27/descending-indexes/)
**Update:**
In fact, this can matter even for a single column index, though it's not so obvious.
Imagine an index on a column of a clustered table:
```
CREATE TABLE mytable (
pk INT NOT NULL PRIMARY KEY,
col1 INT NOT NULL
)
CREATE INDEX ix_mytable_col1 ON mytable (col1)
```
The index on `col1` keeps ordered values of `col1` along with the references to rows.
Since the table is clustered, the references to rows are actually the values of the `pk`. They are also ordered within each value of `col1`.
This means that the leaves of the index are actually ordered on `(col1, pk)`, and this query:
```
SELECT col1, pk
FROM mytable
ORDER BY
col1, pk
```
needs no sorting.
If we create the index as follows:
```
CREATE INDEX ix_mytable_col1_desc ON mytable (col1 DESC)
```
, then the values of `col1` will be sorted descending, but the values of `pk` within each value of `col1` will be sorted ascending.
This means that the following query:
```
SELECT col1, pk
FROM mytable
ORDER BY
col1, pk DESC
```
can be served by `ix_mytable_col1_desc` but not by `ix_mytable_col1`.
In other words, the columns that constitute a `CLUSTERED INDEX` on any table are always the trailing columns of any other index on that table. | The sort order matters when you want to retrieve lots of sorted data, not individual records.
Note that (as you are suggesting with your question) the sort order is typically far less significant than what columns you are indexing (the system can read the index in reverse if the order is opposite what it wants). I rarely give index sort order any thought, whereas I agonize over the columns covered by the index.
@Quassnoi provides a [great example](https://stackoverflow.com/questions/743858/sql-server-indexes-ascending-or-descending-what-difference-does-it-make/743870#743870) of when it *does* matter. |
743,858 | When you create an index on a column or number of columns in MS SQL Server (I'm using version 2005), you can specify that the index on each column be either ascending or descending. I'm having a hard time understanding why this choice is even here. Using binary sort techniques, wouldn't a lookup be just as fast either way? What difference does it make which order I choose? | 2009/04/13 | [
"https://Stackoverflow.com/questions/743858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8409/"
] | For a true single column index it makes little difference from the Query Optimiser's point of view.
For the table definition
```
CREATE TABLE T1( [ID] [int] IDENTITY NOT NULL,
[Filler] [char](8000) NULL,
PRIMARY KEY CLUSTERED ([ID] ASC))
```
The Query
```
SELECT TOP 10 *
FROM T1
ORDER BY ID DESC
```
Uses an ordered scan with scan direction `BACKWARD` as can be seen in the Execution Plan. There is a slight difference however in that currently only `FORWARD` scans can be parallelised.

However **it can make a big difference in terms of logical fragmentation**. If the index is created with keys descending but new rows are appended with ascending key values then you can end up with every page out of logical order. This can severely impact the size of the IO reads when scanning the table and it is not in cache.
See the fragmentation results
```
avg_fragmentation avg_fragment
name page_count _in_percent fragment_count _size_in_pages
------ ------------ ------------------- ---------------- ---------------
T1 1000 0.4 5 200
T2 1000 99.9 1000 1
```
for the script below
```
/*Uses T1 definition from above*/
SET NOCOUNT ON;
CREATE TABLE T2( [ID] [int] IDENTITY NOT NULL,
[Filler] [char](8000) NULL,
PRIMARY KEY CLUSTERED ([ID] DESC))
BEGIN TRAN
GO
INSERT INTO T1 DEFAULT VALUES
GO 1000
INSERT INTO T2 DEFAULT VALUES
GO 1000
COMMIT
SELECT object_name(object_id) AS name,
page_count,
avg_fragmentation_in_percent,
fragment_count,
avg_fragment_size_in_pages
FROM
sys.dm_db_index_physical_stats(db_id(), object_id('T1'), 1, NULL, 'DETAILED')
WHERE index_level = 0
UNION ALL
SELECT object_name(object_id) AS name,
page_count,
avg_fragmentation_in_percent,
fragment_count,
avg_fragment_size_in_pages
FROM
sys.dm_db_index_physical_stats(db_id(), object_id('T2'), 1, NULL, 'DETAILED')
WHERE index_level = 0
```
It's possible to use the spatial results tab to verify the supposition that this is because the later pages have ascending key values in both cases.
```
SELECT page_id,
[ID],
geometry::Point(page_id, [ID], 0).STBuffer(4)
FROM T1
CROSS APPLY sys.fn_PhysLocCracker( %% physloc %% )
UNION ALL
SELECT page_id,
[ID],
geometry::Point(page_id, [ID], 0).STBuffer(4)
FROM T2
CROSS APPLY sys.fn_PhysLocCracker( %% physloc %% )
```
 | The sort order matters when you want to retrieve lots of sorted data, not individual records.
Note that (as you are suggesting with your question) the sort order is typically far less significant than what columns you are indexing (the system can read the index in reverse if the order is opposite what it wants). I rarely give index sort order any thought, whereas I agonize over the columns covered by the index.
@Quassnoi provides a [great example](https://stackoverflow.com/questions/743858/sql-server-indexes-ascending-or-descending-what-difference-does-it-make/743870#743870) of when it *does* matter. |
743,858 | When you create an index on a column or number of columns in MS SQL Server (I'm using version 2005), you can specify that the index on each column be either ascending or descending. I'm having a hard time understanding why this choice is even here. Using binary sort techniques, wouldn't a lookup be just as fast either way? What difference does it make which order I choose? | 2009/04/13 | [
"https://Stackoverflow.com/questions/743858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8409/"
] | This primarily matters when used with composite indexes:
```
CREATE INDEX ix_index ON mytable (col1, col2 DESC);
```
can be used for either:
```
SELECT *
FROM mytable
ORDER BY
col1, col2 DESC
```
or:
```
SELECT *
FROM mytable
ORDER BY
col1 DESC, col2
```
, but not for:
```
SELECT *
FROM mytable
ORDER BY
col1, col2
```
An index on a single column can be efficiently used for sorting in both ways.
See the article in my blog for details:
* [**Descending indexes**](http://explainextended.com/2009/04/27/descending-indexes/)
**Update:**
In fact, this can matter even for a single column index, though it's not so obvious.
Imagine an index on a column of a clustered table:
```
CREATE TABLE mytable (
pk INT NOT NULL PRIMARY KEY,
col1 INT NOT NULL
)
CREATE INDEX ix_mytable_col1 ON mytable (col1)
```
The index on `col1` keeps ordered values of `col1` along with the references to rows.
Since the table is clustered, the references to rows are actually the values of the `pk`. They are also ordered within each value of `col1`.
This means that the leaves of the index are actually ordered on `(col1, pk)`, and this query:
```
SELECT col1, pk
FROM mytable
ORDER BY
col1, pk
```
needs no sorting.
If we create the index as follows:
```
CREATE INDEX ix_mytable_col1_desc ON mytable (col1 DESC)
```
, then the values of `col1` will be sorted descending, but the values of `pk` within each value of `col1` will be sorted ascending.
This means that the following query:
```
SELECT col1, pk
FROM mytable
ORDER BY
col1, pk DESC
```
can be served by `ix_mytable_col1_desc` but not by `ix_mytable_col1`.
In other words, the columns that constitute a `CLUSTERED INDEX` on any table are always the trailing columns of any other index on that table. | For a true single column index it makes little difference from the Query Optimiser's point of view.
For the table definition
```
CREATE TABLE T1( [ID] [int] IDENTITY NOT NULL,
[Filler] [char](8000) NULL,
PRIMARY KEY CLUSTERED ([ID] ASC))
```
The Query
```
SELECT TOP 10 *
FROM T1
ORDER BY ID DESC
```
Uses an ordered scan with scan direction `BACKWARD` as can be seen in the Execution Plan. There is a slight difference however in that currently only `FORWARD` scans can be parallelised.

However **it can make a big difference in terms of logical fragmentation**. If the index is created with keys descending but new rows are appended with ascending key values then you can end up with every page out of logical order. This can severely impact the size of the IO reads when scanning the table and it is not in cache.
See the fragmentation results
```
avg_fragmentation avg_fragment
name page_count _in_percent fragment_count _size_in_pages
------ ------------ ------------------- ---------------- ---------------
T1 1000 0.4 5 200
T2 1000 99.9 1000 1
```
for the script below
```
/*Uses T1 definition from above*/
SET NOCOUNT ON;
CREATE TABLE T2( [ID] [int] IDENTITY NOT NULL,
[Filler] [char](8000) NULL,
PRIMARY KEY CLUSTERED ([ID] DESC))
BEGIN TRAN
GO
INSERT INTO T1 DEFAULT VALUES
GO 1000
INSERT INTO T2 DEFAULT VALUES
GO 1000
COMMIT
SELECT object_name(object_id) AS name,
page_count,
avg_fragmentation_in_percent,
fragment_count,
avg_fragment_size_in_pages
FROM
sys.dm_db_index_physical_stats(db_id(), object_id('T1'), 1, NULL, 'DETAILED')
WHERE index_level = 0
UNION ALL
SELECT object_name(object_id) AS name,
page_count,
avg_fragmentation_in_percent,
fragment_count,
avg_fragment_size_in_pages
FROM
sys.dm_db_index_physical_stats(db_id(), object_id('T2'), 1, NULL, 'DETAILED')
WHERE index_level = 0
```
It's possible to use the spatial results tab to verify the supposition that this is because the later pages have ascending key values in both cases.
```
SELECT page_id,
[ID],
geometry::Point(page_id, [ID], 0).STBuffer(4)
FROM T1
CROSS APPLY sys.fn_PhysLocCracker( %% physloc %% )
UNION ALL
SELECT page_id,
[ID],
geometry::Point(page_id, [ID], 0).STBuffer(4)
FROM T2
CROSS APPLY sys.fn_PhysLocCracker( %% physloc %% )
```
 |
1,092,790 | >
> Let A be a square matrix which is diagonalizable over field $\mathbb{F}$ and the sum of the entries of any column is the same number $a\in\mathbb{F}$. **Show that $a$ is eigenvalue of the matrix A.**
>
>
>
**My try:**
Let A be the arbitrary matrix $\displaystyle \left(\begin{matrix} a\_{11}&a\_{12}&\cdots&a\_{1(n-1)}&a\_{1n} \\ a\_{21}&a\_{22}&\cdots&a\_{2(n-1)}&a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\a\_{(n-1)1}&a\_{(n-1)2}&\cdots&a\_{(n-1)(n-1)}&a\_{(n-1)n} \\ a\_{n1}&a\_{n2}&\cdots&a\_{n(n-1)}&a\_{nn}\end{matrix}\right)$.
Now $\displaystyle p\_A(x)=\det(xI-A)=\det\left(\begin{matrix} x-a\_{11}&-a\_{12}&\cdots&-a\_{1(n-1)}&-a\_{1n} \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)$.
Multiplying a row by a scalar and adding it to another row doesn't change the value of the determinant, hence we can add every row $2\le j\le n$ to the first row and get $$p\_A(x)=\det\left(\begin{matrix} x-\sum\_{i=1}^na\_{i1}&x-\sum\_{i=1}^na\_{i2}&\cdots&x-\sum\_{i=1}^na\_{i(n-1)}&x-\sum\_{i=1}^na\_{in} \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)$$
But we know that $\displaystyle \forall 1 \le k \le n: \ \sum\_{i=1}^n a\_{ik}=a$, hence $$p\_A(x)=\det\left(\begin{matrix} x-a&x-a&\cdots&x-a&x-a \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)= \\ =(x-a)\cdot\det\left(\begin{matrix} 1&1&\cdots&1&1 \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)$$ thus $a$ is an eigenvalue of the matrix A.
I didn't use the fact that A is diagonalizable and don't know why it is needed.
Any help/hint will be appreciated, thank you! | 2015/01/06 | [
"https://math.stackexchange.com/questions/1092790",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/111334/"
] | $A$ and $A^T$ have the same eigenvalues.
Take $X=\left( \begin{matrix}
1\\
1\\
...\\
1
\end{matrix} \right)$
$A^T X=aX$, $a$ being the sum of each column of $A$, thus the sum of each row of $A^T$.
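A quick sanity check of this argument in plain Python (my own illustrative example, not from the original answer), using a made-up $3\times3$ matrix whose columns each sum to $a=6$:

```python
# Made-up 3x3 example whose columns each sum to a = 6 (hypothetical data).
A = [[1, 2, 3],
     [2, 3, 1],
     [3, 1, 2]]
a = 6

# Every column sums to a, i.e. with X = (1,1,1)^T we have A^T X = a X.
col_sums = [sum(A[i][j] for i in range(3)) for j in range(3)]
assert col_sums == [a, a, a]

# Since a is an eigenvalue of A^T, it is one of A as well: det(A - a*I) = 0.
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

B = [[A[i][j] - (a if i == j else 0) for j in range(3)] for i in range(3)]
print(det3(B))  # 0
```

The determinant of $A - aI$ coming out as $0$ confirms that $a$ is indeed an eigenvalue of $A$ for this example.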
Then $a$ is an eigenvalue of $A^T$, and of $A$. | i will avoid the use of the fact that $A$ and $A^T$ have the same eigenvalues, thereby avoiding any appeal to determinants explicitly. but we do use the fact that one-sided invertibility implies the other, that is $AB = I$ iff $BA = I.$
if the sum of every column is $a,$ then $A$ has a left eigenvector corresponding to $a.$ that is, there is $u = (1,1,\cdots, 1)^T \neq 0$ such that $u^TA = au^T.$ therefore $A - aI$ is not invertible, which in turn implies that $a$ is an eigenvalue of $A$ and that $A$ has a (right) eigenvector $x \neq 0$ such that $Ax = ax$. |
1,092,790 | >
> Let A be a square matrix which is diagonalizable over field $\mathbb{F}$ and the sum of the entries of any column is the same number $a\in\mathbb{F}$. **Show that $a$ is eigenvalue of the matrix A.**
>
>
>
**My try:**
Let A be the arbitrary matrix $\displaystyle \left(\begin{matrix} a\_{11}&a\_{12}&\cdots&a\_{1(n-1)}&a\_{1n} \\ a\_{21}&a\_{22}&\cdots&a\_{2(n-1)}&a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\a\_{(n-1)1}&a\_{(n-1)2}&\cdots&a\_{(n-1)(n-1)}&a\_{(n-1)n} \\ a\_{n1}&a\_{n2}&\cdots&a\_{n(n-1)}&a\_{nn}\end{matrix}\right)$.
Now $\displaystyle p\_A(x)=\det(xI-A)=\det\left(\begin{matrix} x-a\_{11}&-a\_{12}&\cdots&-a\_{1(n-1)}&-a\_{1n} \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)$.
Multiplying a row by a scalar and adding it to another row doesn't change the value of the determinant, hence we can add every row $2\le j\le n$ to the first row and get $$p\_A(x)=\det\left(\begin{matrix} x-\sum\_{i=1}^na\_{i1}&x-\sum\_{i=1}^na\_{i2}&\cdots&x-\sum\_{i=1}^na\_{i(n-1)}&x-\sum\_{i=1}^na\_{in} \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)$$
But we know that $\displaystyle \forall 1 \le k \le n: \ \sum\_{i=1}^n a\_{ik}=a$, hence $$p\_A(x)=\det\left(\begin{matrix} x-a&x-a&\cdots&x-a&x-a \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)= \\ =(x-a)\cdot\det\left(\begin{matrix} 1&1&\cdots&1&1 \\ -a\_{21}&x-a\_{22}&\cdots&-a\_{2(n-1)}&-a\_{2n} \\ \vdots&\vdots & \ddots&\vdots&\vdots \\-a\_{(n-1)1}&-a\_{(n-1)2}&\cdots&x-a\_{(n-1)(n-1)}&-a\_{(n-1)n} \\ -a\_{n1}&-a\_{n2}&\cdots&-a\_{n(n-1)}&-a\_{nn}\end{matrix}\right)$$ thus $a$ is an eigenvalue of the matrix A.
I didn't use the fact that A is diagonalizable and don't know why it is needed.
Any help/hint will be appreciated, thank you! | 2015/01/06 | [
"https://math.stackexchange.com/questions/1092790",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/111334/"
] | $A$ and $A^T$ have the same eigenvalues.
Take $X=\left( \begin{matrix}
1\\
1\\
...\\
1
\end{matrix} \right)$
$A^T X=aX$, $a$ being the sum of each column of $A$, thus the sum of each row of $A^T$.
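As a quick numeric illustration (plain Python; the $2\times2$ matrix is my own made-up example), with both columns summing to $a=5$:

```python
# Made-up 2x2 matrix whose columns both sum to a = 5.
A = [[2, 1],
     [3, 4]]
a = A[0][0] + A[1][0]                      # common column sum: 5
assert A[0][1] + A[1][1] == a              # second column sums to a as well

# With X = (1,1)^T, the entries of A^T X are exactly the column sums of A,
# so A^T X = (a, a)^T = a X.
ATX = [A[0][0] + A[1][0], A[0][1] + A[1][1]]
assert ATX == [a, a]

# Hence det(A - a*I) = 0, so a is an eigenvalue of A too.
det = (A[0][0] - a) * (A[1][1] - a) - A[0][1] * A[1][0]
print(det)  # 0
```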
Then $a$ is an eigenvalue of $A^T$, and of $A$. | There is nothing wrong with the proof, although you are probably using a gun to kill a fly.
Indeed, you don't need $A$ to be diagonalizable.
We can easily construct a matrix $A$ with this property which is not diagonalizable. Consider
$$
A=\left[\begin{array}{rrr}0&-1&-1\\1&1&0\\0&1&2\end{array}\right].
$$
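A quick plain-Python check of this matrix (verification code of my own, not part of the original answer): each column sums to $1$, and $N=A-I$ is nilpotent yet nonzero, so $1$ is the only eigenvalue while $A\neq I$, which rules out diagonalizability (a diagonalizable matrix whose only eigenvalue is $1$ would equal $I$).

```python
# Sanity check for the 3x3 matrix above, using only plain Python lists.
A = [[0, -1, -1],
     [1,  1,  0],
     [0,  1,  2]]

# Every column sums to 1.
assert [sum(A[i][j] for i in range(3)) for j in range(3)] == [1, 1, 1]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# N = A - I is nilpotent: N^3 = 0, consistent with (x-1)^3 as char. polynomial.
N = [[A[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
N3 = matmul(matmul(N, N), N)
assert N3 == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

# But N != 0: if A were diagonalizable with sole eigenvalue 1, A would equal I.
assert N != [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print("ok")
```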
The column sums are all equal to $1$, which is the only eigenvalue of $A$ with the algebraic multiplicity $3$. However, $A$ is not diagonalizable. |
49,591,242 | I'm using VS Code in a Typescript project that uses Jest for testing. For some reason, VS Code thinks that the Jest globals are not available:
[](https://i.stack.imgur.com/68tun.jpg)
I have the Jest typedefs installed in my dev dependencies.
```
"devDependencies": {
// ...truncated
"@types/jest": "^20",
"jest": "^20.0.4",
"ts-jest": "^20.0.7",
"ts-node": "^5.0.0",
"typescript": "~2.4.0"
}
``` | 2018/03/31 | [
"https://Stackoverflow.com/questions/49591242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2076152/"
] | The correct answer here is that TypeScript requires type declarations for Jest before the Jest global objects are visible to IntelliSense.
Add this triple-slash directive to the beginning of your test file:
```
/// <reference types="jest" />
``` | I upgraded my version of Typescript to 2.8 and this problem went away. I'm going to assume it was some sort of cache issue. |
49,591,242 | I'm using VS Code in a Typescript project that uses Jest for testing. For some reason, VS Code thinks that the Jest globals are not available:
[](https://i.stack.imgur.com/68tun.jpg)
I have the Jest typedefs installed in my dev dependencies.
```
"devDependencies": {
// ...truncated
"@types/jest": "^20",
"jest": "^20.0.4",
"ts-jest": "^20.0.7",
"ts-node": "^5.0.0",
"typescript": "~2.4.0"
}
``` | 2018/03/31 | [
"https://Stackoverflow.com/questions/49591242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2076152/"
] | I've similarly struggled with this problem a number of times despite having `@types/jest` in my `devDependencies` too.
I created [`jest-without-globals`](https://github.com/agilgur5/jest-without-globals) as a *very* tiny wrapper to support importing Jest's features instead of relying on globals, thereby ensuring the variables exist.
It's written in TypeScript as well, ensuring that it's typed properly when imported and that you don't need to do anything other than an import to make the types function.
[Per the Usage docs](https://github.com/agilgur5/jest-without-globals#usage), it's straightforward to use:
>
>
> ```
> import { describe, it, expect } from 'jest-without-globals'
>
> describe('describe should create a section', () => {
> it('it should checkmark', () => {
> expect('').toBe('')
> })
> })
>
> ```
>
> All of the functions available in [Jest's API](https://jestjs.io/docs/en/api), as well as `jest` and `expect`, can be imported from `jest-without-globals`.
>
>
> | I upgraded my version of Typescript to 2.8 and this problem went away. I'm going to assume it was some sort of cache issue. |
49,591,242 | I'm using VS Code in a Typescript project that uses Jest for testing. For some reason, VS Code thinks that the Jest globals are not available:
[](https://i.stack.imgur.com/68tun.jpg)
I have the Jest typedefs installed in my dev dependencies.
```
"devDependencies": {
// ...truncated
"@types/jest": "^20",
"jest": "^20.0.4",
"ts-jest": "^20.0.7",
"ts-node": "^5.0.0",
"typescript": "~2.4.0"
}
``` | 2018/03/31 | [
"https://Stackoverflow.com/questions/49591242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2076152/"
] | The correct answer here is that TypeScript requires type declarations for Jest before the Jest global objects are visible to IntelliSense.
Add this triple-slash directive to the beginning of your test file:
```
/// <reference types="jest" />
``` | I've similarly struggled with this problem a number of times despite having `@types/jest` in my `devDependencies` too.
I created [`jest-without-globals`](https://github.com/agilgur5/jest-without-globals) as a *very* tiny wrapper to support importing Jest's features instead of relying on globals, thereby ensuring the variables exist.
It's written in TypeScript as well, ensuring that it's typed properly when imported and that you don't need to do anything other than an import to make the types function.
[Per the Usage docs](https://github.com/agilgur5/jest-without-globals#usage), it's straightforward to use:
>
>
> ```
> import { describe, it, expect } from 'jest-without-globals'
>
> describe('describe should create a section', () => {
> it('it should checkmark', () => {
> expect('').toBe('')
> })
> })
>
> ```
>
> All of the functions available in [Jest's API](https://jestjs.io/docs/en/api), as well as `jest` and `expect`, can be imported from `jest-without-globals`.
>
>
> |
5,768 | Using Exp:resso's store addon, is it possible to do buy one get one free type sales?
I see the promo codes, but they don't seem to have an option to require more than one item in the cart, or anything like that.
"https://expressionengine.stackexchange.com/questions/5768",
"https://expressionengine.stackexchange.com",
"https://expressionengine.stackexchange.com/users/727/"
] | There's no way to do this using promo codes and Store. You would be able to create an extension to do this using our custom hooks.
The ability to create discounts like this is high on our [feature request list](https://exp-resso.com/store/support) and will be addressed in a future version (though there is no specific timeframe, so if you need it now it's best to go with an extension). | From recent memory of trying this. As of the current version this isn't available. Best bet would be a custom extension. |
16,935,259 | I'm developing an Android application and I have a problem:
I have this method:
```
// User has introduced an incorrect password.
private void invalidPassword()
{
// R.id.string value for alert dialog title.
int dialogTitle = 0;
// R.id.string value for alert dialog message.
int dialogMessage = 0;
boolean hasReachedMaxAttempts;
clearWidgets();
numIntents++;
hasReachedMaxAttempts = (numIntents > maxNumIntents);
// Max attempts reached
if (hasReachedMaxAttempts)
{
dialogTitle = R.string.dialog_title_error;
dialogMessage = R.string.dialog_message_max_attempts_reached;
}
else
{
dialogTitle = R.string.dialog_title_error;
dialogMessage = R.string.dialog_message_incorrect_password;
}
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setMessage(dialogMessage)
.setTitle(dialogTitle);
builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener()
{
public void onClick(DialogInterface dialog, int id)
{
// TODO: User clicked OK button
if (hasReachedMaxAttempts)
{
}
else
{
}
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
```
**How can I make visible `boolean hasReachedMaxAttempts;` inside `onClick`?** | 2013/06/05 | [
"https://Stackoverflow.com/questions/16935259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/68571/"
] | You need that variable to be final:
```
final boolean hasReachedMaxAttemptsFinal = hasReachedMaxAttempts;
AlertDialog.Builder builder = new AlertDialog.Builder(this);
if (hasReachedMaxAttemptsFinal)
Declare your `boolean hasReachedMaxAttempts` variable at class level; as a field it is visible inside the anonymous class without needing to be `final`, and it should get the task done |
16,935,259 | I'm developing an Android application and I have a problem:
I have this method:
```
// User has introduced an incorrect password.
private void invalidPassword()
{
// R.id.string value for alert dialog title.
int dialogTitle = 0;
// R.id.string value for alert dialog message.
int dialogMessage = 0;
boolean hasReachedMaxAttempts;
clearWidgets();
numIntents++;
hasReachedMaxAttempts = (numIntents > maxNumIntents);
// Max attempts reached
if (hasReachedMaxAttempts)
{
dialogTitle = R.string.dialog_title_error;
dialogMessage = R.string.dialog_message_max_attempts_reached;
}
else
{
dialogTitle = R.string.dialog_title_error;
dialogMessage = R.string.dialog_message_incorrect_password;
}
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setMessage(dialogMessage)
.setTitle(dialogTitle);
builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener()
{
public void onClick(DialogInterface dialog, int id)
{
// TODO: User clicked OK button
if (hasReachedMaxAttempts)
{
}
else
{
}
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
```
**How can I make visible `boolean hasReachedMaxAttempts;` inside `onClick`?** | 2013/06/05 | [
"https://Stackoverflow.com/questions/16935259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/68571/"
] | You need that variable to be final:
```
final boolean hasReachedMaxAttemptsFinal = hasReachedMaxAttempts;
AlertDialog.Builder builder = new AlertDialog.Builder(this);
if (hasReachedMaxAttemptsFinal)
``` | It is visible, but it needs to be declared `final`.
```
final boolean hasReachedMaxAttempts = (numIntents > maxNumIntents);
``` |
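Both answers above come down to the same Java rule: an anonymous inner class may only capture local variables that are `final` (or, from Java 8 on, "effectively final"), while fields are exempt from that restriction. A minimal sketch with hypothetical names and no Android dependency:

```java
// Minimal sketch (hypothetical names): an anonymous inner class can only
// capture local variables that are final (or effectively final since Java 8),
// so we snapshot the computed flag into a final local before building it.
public class CaptureDemo {

    interface ClickHandler {
        String onClick();
    }

    static ClickHandler buildHandler(int attempts, int maxAttempts) {
        // Snapshot taken once; the anonymous class reads this frozen copy.
        final boolean reachedMax = attempts > maxAttempts;
        return new ClickHandler() {
            @Override
            public String onClick() {
                return reachedMax ? "max attempts reached" : "try again";
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(buildHandler(4, 3).onClick());
        System.out.println(buildHandler(1, 3).onClick());
    }
}
```

The anonymous class keeps reading the frozen `reachedMax` snapshot, which is exactly what the `hasReachedMaxAttemptsFinal` copy in the accepted answer achieves.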
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | I made the mistake by importing like this.
```
import firebase from 'firebase'
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
This worked fine for a few days but when I tried to sign in with [custom tokens](https://firebase.google.com/docs/auth/web/custom-auth) my auth object was not changed. I had to refresh the page for it to update so I could make certain calls to the database which were protected by my own auth credentials rules.
```
".read": "$uid === auth.uid || auth.isAdmin === true || auth.isTeacher === true",
```
When I changed my imports to this it worked again.
```
import firebase from 'firebase/app';
import 'firebase/auth';
import 'firebase/database';
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
Then whenever I need to use Firebase in a certain module I import this (notice the import from firebase/app instead of firebase):
```
import firebase from 'firebase/app';
```
And talk to certain services like so:
```
firebase.auth().onAuthStateChanged((user) => {
if (user) {
// Authenticated.
} else {
// Logged out.
}
});
firebase.database().ref('myref').once('value').then((snapshot) => {
// do stuff with the snapshot
});
``` | For a small fraction of the people here, this issue might be caused by trying to initialize fb admin in the same script that you used to initialize fb on the front end. If anybody is initializing firebase twice in the same script (once for admin and once for the frontend), then you need to initialize firebase admin in a different script than the front end, and do not import anything that is exported in the backend script on the frontend (and vice versa). |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | If you don't have control over where Firebase will be instantiated, you can do something like this:
```
try {
let firApp = firebase.app(applicationName);
return firApp;
} catch (error) {
return firebase.initializeApp({
credential: firebase.credential.cert(firebaseCredentials),
databaseURL: firebaseUrl
}, applicationName);
}
```
Firebase will try to get the application; if it doesn't exist, then you can initialize it freely. | I made the mistake by importing like this.
```
import firebase from 'firebase'
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
This worked fine for a few days but when I tried to sign in with [custom tokens](https://firebase.google.com/docs/auth/web/custom-auth) my auth object was not changed. I had to refresh the page for it to update so I could make certain calls to the database which were protected by my own auth credentials rules.
```
".read": "$uid === auth.uid || auth.isAdmin === true || auth.isTeacher === true",
```
When I changed my imports to this it worked again.
```
import firebase from 'firebase/app';
import 'firebase/auth';
import 'firebase/database';
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
Then whenever I need to use Firebase in a certain module I import this (notice the import from firebase/app instead of firebase):
```
import firebase from 'firebase/app';
```
And talk to certain services like so:
```
firebase.auth().onAuthStateChanged((user) => {
if (user) {
// Authenticated.
} else {
// Logged out.
}
});
firebase.database().ref('myref').once('value').then((snapshot) => {
// do stuff with the snapshot
});
``` |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | This is an issue I ran into as well when upgrading to the new version of Firebase. You might want two separate firebase apps initialized, as explained in other answers, but I just wanted to use the refs in two different locations in my app and I was getting the same error.
What you need to do for this situation is to create a firebase module for your app that only initializes firebase once, then you import or require it elsewhere in your app.
This is pretty simple; here is mine: modules/firebase.js
```
import firebase from 'firebase';
var firebaseConfig = {
apiKey: "some-api-key",
authDomain: "some-app.firebaseapp.com",
databaseURL: "https://some-app.firebaseio.com",
storageBucket: "some-app.appspot.com",
};
var FbApp = firebase.initializeApp(firebaseConfig);
module.exports.FBApp = FbApp.database(); // this doesn't have to be database only
```
And then elsewhere in your application you simply:
```
import { FBApp } from '/your/module/location'
var messagesRef = FBApp.ref("messages/");
``` | I made the mistake by importing like this.
```
import firebase from 'firebase'
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
This worked fine for a few days but when I tried to sign in with [custom tokens](https://firebase.google.com/docs/auth/web/custom-auth) my auth object was not changed. I had to refresh the page for it to update so I could make certain calls to the database which were protected by my own auth credentials rules.
```
".read": "$uid === auth.uid || auth.isAdmin === true || auth.isTeacher === true",
```
When I changed my imports to this it worked again.
```
import firebase from 'firebase/app';
import 'firebase/auth';
import 'firebase/database';
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
Then whenever I need to use Firebase in a certain module I import this (notice the import from firebase/app instead of firebase):
```
import firebase from 'firebase/app';
```
And talk to certain services like so:
```
firebase.auth().onAuthStateChanged((user) => {
if (user) {
// Authenticated.
} else {
// Logged out.
}
});
firebase.database().ref('myref').once('value').then((snapshot) => {
// do stuff with the snapshot
});
``` |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | You need to name your different instances (Apps as Firebase calls them); by default you're working with the `[DEFAULT]` App, because that's the most common use case, but when you need to work with multiple Apps then you have to add a name when initialising:
```
// Initialize the "[DEFAULT]" App
var mainApp = firebase.initializeApp({ ... });
// Initialize a "Secondary" App
var secondaryApp = firebase.initializeApp({ ... }, "Secondary");
...
mainApp.database().ref("path/to/data").set(value);
secondaryApp.database().ref("path/to/data").set(anotherValue);
```
You can find more example scenarios in the updated [Initialize multiple apps](https://firebase.google.com/docs/web/setup#initialize_multiple_apps) section of the Add Firebase to your JavaScript Project guide. | For a small fraction of the people here, this issue might be caused by trying to initialize fb admin in the same script that you used to initialize fb on the front end. If anybody is initializing firebase twice in the same script (once for admin and once for the frontend), then you need to initialize firebase admin in a different script than the front end, and do not import anything that is exported in the backend script on the frontend (and vice versa). |
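Both error messages in the question fall out of one mechanism: the SDK keeps a registry of apps keyed by name, with `"[DEFAULT]"` as the implicit name. The following plain-JavaScript mini-registry is a hypothetical sketch (no SDK involved) that reproduces the behaviour the answers describe, including the try/catch "get or initialize" idiom:

```javascript
// Hypothetical mini-registry mimicking the SDK's app-naming rules:
// apps are stored by name, "[DEFAULT]" is the implicit name, re-initializing
// an existing name throws, and looking up a missing name throws.
const apps = new Map();

function initializeApp(config, name = "[DEFAULT]") {
  if (apps.has(name)) {
    throw new Error(`Firebase App named '${name}' already exists.`);
  }
  const app = { name, config };
  apps.set(name, app);
  return app;
}

function getApp(name = "[DEFAULT]") {
  if (!apps.has(name)) {
    throw new Error(`No Firebase App '${name}' has been created.`);
  }
  return apps.get(name);
}

// The "get it if it exists, otherwise create it" idiom from one answer above:
function getAppOrInit(config, name = "[DEFAULT]") {
  try {
    return getApp(name);
  } catch (e) {
    return initializeApp(config, name);
  }
}
```

Under this model, a second unnamed `initializeApp` reproduces the "already exists" error, while passing a distinct name creates an independent app, which is the fix the named-instances answer proposes.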
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | You need to name your different instances (Apps as Firebase calls them); by default you're working with the `[DEFAULT]` App, because that's the most common use case, but when you need to work with multiple Apps then you have to add a name when initialising:
```
// Initialize the "[DEFAULT]" App
var mainApp = firebase.initializeApp({ ... });
// Initialize a "Secondary" App
var secondaryApp = firebase.initializeApp({ ... }, "Secondary");
...
mainApp.database().ref("path/to/data").set(value);
secondaryApp.database().ref("path/to/data").set(anotherValue);
```
You can find more example scenarios in the updated [Initialize multiple apps](https://firebase.google.com/docs/web/setup#initialize_multiple_apps) section of the Add Firebase to your JavaScript Project guide. | If you don't have control over where Firebase will be instantiated, you can do something like this:
```
try {
let firApp = firebase.app(applicationName);
return firApp;
} catch (error) {
return firebase.initializeApp({
credential: firebase.credential.cert(firebaseCredentials),
databaseURL: firebaseUrl
}, applicationName);
}
```
Firebase will try to get the application; if it doesn't exist, then you can initialize it freely. |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | This is an issue I ran into as well when upgrading to the new version of Firebase. You might want two separate firebase apps initialized, as explained in other answers, but I just wanted to use the refs in two different locations in my app and I was getting the same error.
What you need to do for this situation is to create a firebase module for your app that only initializes firebase once, then you import or require it elsewhere in your app.
This is pretty simple; here is mine: modules/firebase.js
```
import firebase from 'firebase';
var firebaseConfig = {
apiKey: "some-api-key",
authDomain: "some-app.firebaseapp.com",
databaseURL: "https://some-app.firebaseio.com",
storageBucket: "some-app.appspot.com",
};
var FbApp = firebase.initializeApp(firebaseConfig);
module.exports.FBApp = FbApp.database(); // this doesn't have to be database only
```
And then elsewhere in your application you simply:
```
import { FBApp } from '/your/module/location'
var messagesRef = FBApp.ref("messages/");
``` | You need to name your different instances (Apps as Firebase calls them); by default you're working with the `[DEFAULT]` App, because that's the most common use case, but when you need to work with multiple Apps then you have to add a name when initialising:
```
// Initialize the "[DEFAULT]" App
var mainApp = firebase.initializeApp({ ... });
// Initialize a "Secondary" App
var secondaryApp = firebase.initializeApp({ ... }, "Secondary");
...
mainApp.database().ref("path/to/data").set(value);
secondaryApp.database().ref("path/to/data").set(anotherValue);
```
You can find more example scenarios in the updated [Initialize multiple apps](https://firebase.google.com/docs/web/setup#initialize_multiple_apps) section of the Add Firebase to your JavaScript Project guide. |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | If you don't have control over where Firebase will be instantiated, you can do something like this:
```
try {
let firApp = firebase.app(applicationName);
return firApp;
} catch (error) {
return firebase.initializeApp({
credential: firebase.credential.cert(firebaseCredentials),
databaseURL: firebaseUrl
}, applicationName);
}
```
Firebase will try to get the application; if it doesn't exist, then you can initialize it freely. | To make multiple instances using `firebase.initializeApp()` (it is a factory function, not a constructor), you need a second parameter:
```
firebase.initializeApp( {}, "second parameter" );
```
Compare it to the old way to generate multiple instances where
```
new Firebase.Context()
```
is the second parameter:
```
new Firebase('', new Firebase.Context() );
``` |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | You need to name your different instances (Apps as Firebase calls them); by default you're working with the `[DEFAULT]` App, because that's the most common use case, but when you need to work with multiple Apps then you have to add a name when initialising:
```
// Initialize the "[DEFAULT]" App
var mainApp = firebase.initializeApp({ ... });
// Initialize a "Secondary" App
var secondaryApp = firebase.initializeApp({ ... }, "Secondary");
...
mainApp.database().ref("path/to/data").set(value);
secondaryApp.database().ref("path/to/data").set(anotherValue);
```
You can find more example scenarios in the updated [Initialize multiple apps](https://firebase.google.com/docs/web/setup#initialize_multiple_apps) section of the Add Firebase to your JavaScript Project guide. | I made the mistake by importing like this.
```
import firebase from 'firebase'
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
This worked fine for a few days but when I tried to sign in with [custom tokens](https://firebase.google.com/docs/auth/web/custom-auth) my auth object was not changed. I had to refresh the page for it to update so I could make certain calls to the database which were protected by my own auth credentials rules.
```
".read": "$uid === auth.uid || auth.isAdmin === true || auth.isTeacher === true",
```
When I changed my imports to this it worked again.
```
import firebase from 'firebase/app';
import 'firebase/auth';
import 'firebase/database';
const firebaseConfig = {
apiKey: 'key',
authDomain: 'domain',
databaseURL: 'url',
storageBucket: ''
};
firebase.initializeApp(firebaseConfig);
```
Then whenever I need to use Firebase in a certain module I import this (notice the import from firebase/app instead of firebase):
```
import firebase from 'firebase/app';
```
And talk to certain services like so:
```
firebase.auth().onAuthStateChanged((user) => {
if (user) {
// Authenticated.
} else {
// Logged out.
}
});
firebase.database().ref('myref').once('value').then((snapshot) => {
// do stuff with the snapshot
});
``` |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | If you don't have control over where Firebase will be instantiated, you can do something like this:
```
try {
let firApp = firebase.app(applicationName);
return firApp;
} catch (error) {
return firebase.initializeApp({
credential: firebase.credential.cert(firebaseCredentials),
databaseURL: firebaseUrl
}, applicationName);
}
```
Firebase will try to get the application; if it doesn't exist, then you can initialize it freely. | For a small fraction of the people here, this issue might be caused by trying to initialize fb admin in the same script that you used to initialize fb on the front end. If anybody is initializing firebase twice in the same script (once for admin and once for the frontend), then you need to initialize firebase admin in a different script than the front end, and do not import anything that is exported in the backend script on the frontend (and vice versa). |
37,337,080 | I have upgraded to the new API and don't know how to initialize Firebase references in two separate files:
```
/* CASE 1 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - initialize again
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:535 Uncaught Error: Firebase App named '[DEFAULT]' already exists.
>
>
>
```
/* CASE 2 */
// 1st file
var config = {/* ... */};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();
// 2nd file - don't initialize
var rootRef = firebase.database().ref();
```
>
> RESULT: bundle.js:529 Uncaught Error: No Firebase App '[DEFAULT]' has been created - call Firebase App.initializeApp().
>
>
>
Before the new API I just called
```
var myFirebaseRef = new Firebase("https://<YOUR-FIREBASE-APP>.firebaseio.com/");
```
in each file, and it worked okay. | 2016/05/20 | [
"https://Stackoverflow.com/questions/37337080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5850108/"
] | This is an issue I ran into as well when upgrading to the new version of Firebase. You might want two separate firebase apps initialized, as explained in other answers, but I just wanted to use the refs in two different locations in my app and I was getting the same error.
What you need to do for this situation is to create a firebase module for your app that only initializes firebase once, then you import or require it elsewhere in your app.
This is pretty simple; here is mine: modules/firebase.js
```
import firebase from 'firebase';
var firebaseConfig = {
apiKey: "some-api-key",
authDomain: "some-app.firebaseapp.com",
databaseURL: "https://some-app.firebaseio.com",
storageBucket: "some-app.appspot.com",
};
var FbApp = firebase.initializeApp(firebaseConfig);
module.exports.FBApp = FbApp.database(); // this doesn't have to be database only
```
And then elsewhere in your application you simply:
```
import { FBApp } from '/your/module/location'
var messagesRef = FBApp.ref("messages/");
``` | For a small fraction of the people here, this issue might be caused by trying to initialize fb admin in the same script that you used to initialize fb on the front end. If anybody is initializing firebase twice in the same script (once for admin and once for the frontend), then you need to initialize firebase admin in a different script than the front end, and do not import anything that is exported in the backend script on the frontend (and vice versa). |
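The module-based answer above works because Node caches a required module, so its `initializeApp` call runs only once no matter how many files import it. The same guarantee can be made explicit with a small idempotent wrapper; this is a hypothetical sketch that stubs the app object instead of calling the real SDK:

```javascript
// Hypothetical sketch: wrap an initializer so it runs exactly once and every
// later caller gets the same handle — the property the module pattern relies on.
function once(init) {
  let instance;
  let done = false;
  return function get() {
    if (!done) {
      instance = init(); // runs on the first call only
      done = true;
    }
    return instance;
  };
}

// Usage sketch: in real code `init` would call firebase.initializeApp(config).
let initCalls = 0;
const getApp = once(() => {
  initCalls += 1;
  return { name: "[DEFAULT]" };
});
```

Every module that calls `getApp()` receives the same handle, so the "already exists" error can never be triggered by a second import.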
67,984,815 | Well, I was working on my project and suddenly, when I created a new route, I got this problem where the route exists but it shows 404!! So I tried to delete an existing route that is working, but when I delete it, that route still works!! I had this problem previously, but back then I just deleted that route, made another route again, and it worked fine; this time it does not work!!
Here is some of my code ->
```js
$("#resetBtn").click(function () {
$.ajax({
url: "/reset/website/data",
type: "POST",
data: {
_token: $('meta[name="csrf"]').attr("content"),
},
success: function (data) {
console.log(data);
activateNotificationSuccess("Successfully reseted.");
},
error: function (error) {
console.log(error);
activateNotificationFail("Something went very wrong !!");
},
});
});
```
```
Route::post('/reset/website/data', [ColorNImageController::class, 'reset']);
```
Here is the problem if I make it a GET request and navigate directly using my browser ->
[](https://i.stack.imgur.com/WwjPw.png)
Does anyone have a solution? | 2021/06/15 | [
"https://Stackoverflow.com/questions/67984815",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15969810/"
] | When you create the container, you can specify the partition key, which is a path to a property in your JSON document. My guess is that when you (or someone else) created the container, `/id` was chosen as the partition key.
Considering that the partition key for a container can't be changed, what you need to do is create a new container. When you create the new container, ensure that the partition key path is set to `/Classes/Subjects/TypeId`. | You can form a partition key by concatenating multiple property values into a single artificial partitionKey property.
Please follow the steps given in below page:
<https://www.c-sharpcorner.com/article/understanding-partitioning-and-partition-key-in-azure-cosmos-db/> |
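The second answer's synthetic-key idea can be sketched in a few lines: derive one artificial property from several document fields before inserting, and choose that property's path (e.g. `/partitionKey`) as the container's partition key. Helper and property names here are hypothetical:

```javascript
// Hypothetical helper: build a single "synthetic" partition key value by
// concatenating several properties of the document into one artificial field.
function withSyntheticPartitionKey(doc, props, sep = "-") {
  const pk = props.map((p) => String(doc[p])).join(sep);
  // Keep the original fields and add the artificial partitionKey property.
  return { ...doc, partitionKey: pk };
}
```

Documents written this way all carry a `partitionKey` value derived deterministically from the chosen fields, so queries that know those field values can target a single logical partition.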
897,054 | $$f(x,y) = \begin{cases} \dfrac{\sin(xy)}{xy} & \text{if $x y \ne 0$} \\ 1 & \text{if $xy=0$} \end{cases}$$
All ideas are appreciated.
I think this is non-continuous; I tried to show it by converting to polar coordinates.
Looking for more ideas and interesting observations | 2014/08/14 | [
"https://math.stackexchange.com/questions/897054",
"https://math.stackexchange.com",
"https://math.stackexchange.com/users/167429/"
] | I assume that $f(x)$ should read as $f(x,y)$. Define the function $g:\mathbb R^2\to\mathbb R$ as $g(x,y)=xy$ for all $(x,y)\in\mathbb R^2$. This function is clearly continuous. Moreover, define another function $h:\mathbb R\to\mathbb R$ as
\begin{align\*}h(z)=\begin{cases}\dfrac{\sin z}{z}&\text{if $z\neq0$,}\\1&\text{if $z=0$.}\end{cases}\end{align\*}
This function is continuous, as it is well known that $\lim\_{z\to0}(\sin z)/z=1$. Now observe that $f=h\circ g$, and recall that a composition of continuous functions is continuous. | If $y$ is presumed constant (as you write $f(x)$), the function is indeed continuous, which can be proven by showing that
$$\lim\_{x\to0} \frac{\sin x}x = 1$$
(or trivially if $y=0$)
and noting that $f = \left(x\mapsto \frac{\sin x}{x}\right) \circ \left(x\mapsto xy\right)$.
If $y$ is also a variable (i.e. you meant $f(x,y)$), check that $(x,y) \mapsto xy$ is continuous (**it is**) and note again that
$$f = \left(x\mapsto \frac{\sin x}{x}\right) \circ \left((x,y) \mapsto xy\right)$$
is continuous as the composition of two continuous functions. |
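As a quick numerical sanity check (not a proof) of the argument above, one can evaluate $f$ along a few paths approaching the origin and confirm the values stay near $1$:

```python
import math

# f as defined in the question: sin(xy)/(xy) away from the axes, 1 on them.
def f(x, y):
    return math.sin(x * y) / (x * y) if x * y != 0 else 1.0

# Approach (0, 0) along several straight-line paths, including an axis.
for t in (1e-2, 1e-4, 1e-6):
    for x, y in ((t, t), (t, -t), (t, 2 * t), (t, 0.0)):
        assert abs(f(x, y) - 1.0) < 1e-3
print("f stays within 1e-3 of 1 near the origin")
```

This is consistent with the composition argument: $xy \to 0$ along every path, and $h$ is continuous at $0$.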
181,932 | I wish to use process substitution to direct a list of files (produced, for example, by `ls` or `find`) to a particular application for opening/viewing. While piping such a list to `xargs` is suitable for a script or binary, this action fails if the object of `xargs` is a shell alias, as noted in [other questions on this site](https://unix.stackexchange.com/q/141367/14960). [The particular application I have in mind is `feh`]
Given this limitation of `xargs` vis-a-vis bash aliases, I am instead attempting to use process substitution in the form `[script/binary/alias] <(find . -iname '*')`. This construction gives the desired effect if the list of files is directed to certain shell commands such as `cat` (or `less`, if an input redirection, `<`, is prepended to the process substitution statement); however, it notably fails if the input (a list of paths to files) is instead directed to an application (e.g., `feh`, `gimp`) for opening/viewing. The error accompanying this failure, in the particular case that `feh` is the recipient of the process substitution statement, is "`feh: No loadable images specified.`" Prepending an input redirection operator to the process substitution statement does not alleviate the problem (unlike the case for other commands, e.g., `less`).
Thus, my question concerns whether process substitution can be employed for the stated purpose (that is, opening a list of files), and if so, what the appropriate syntax might be. A specific requirement is compatibility with bash aliases (specified in `~/.bash_aliases` or a similar configuration file). | 2015/01/30 | [
"https://unix.stackexchange.com/questions/181932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14960/"
] | You can pipe the output of `ps` and sort numerically on the memory column (`%MEM` is the fourth field; `-n` makes `sort` compare numbers rather than strings, so that `10.2` sorts above `9.9`):
```
ps aux | sort -rn -k4
``` | It would be useful if you could specify your OS (e.g. Ubuntu 12.04) and system(e.g. desktop/server) to help identify potential tools. For instance I can run 'gnome-system-monitor' which gives me a GUI display where I can sort by any of user, cpu, memory and other fields. |
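As a side note on the `sort` invocation above: without `-n`, `sort` compares the `%MEM` column as strings, which misorders multi-digit values. A small self-contained demonstration (sample numbers only, no `ps` involved):

```shell
# Lexical vs. numeric descending sort on %MEM-like values.
printf '%s\n' 9.9 10.2 0.5 | sort -r    # lexical: puts 9.9 before 10.2
printf '%s\n' 9.9 10.2 0.5 | sort -rn   # numeric: puts 10.2 first
```

On procps-based systems, `ps` can also sort for you without an external `sort`, e.g. `ps -eo pid,comm,%mem --sort=-%mem`.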
60,631,229 | I want to bind the loading state only to the button of the item I added to the cart. This works well when I'm not in a loop, but inside the loop nothing happens. I'm not sure whether adding the index like this is the right way to bind only the clicked item; if I don't use the index, every button in the loop gets bound, which is not what I want in my case.
```
:loading="isLoading[index]"
```
Here is the Vue template:
```
<div class="container column is-9">
  <div class="section">
    <div class="columns is-multiline">
      <div class="column is-3" v-for="(product, index) in computedProducts">
        <div class="card">
          <div class="card-image">
            <figure class="image is-4by3">
              <img src="" alt="Placeholder image">
            </figure>
          </div>
          <div class="card-content">
            <div class="content">
              <div class="media-content">
                <p class="title is-4">{{product.name}}</p>
                <p class="subtitle is-6">Description</p>
                <p>{{product.price}}</p>
              </div>
            </div>
            <div class="content">
              <b-button class="is-primary" @click="addToCart(product)" :loading="isLoading[index]"><i class="fas fa-shopping-cart"></i> Ajouter au panier</b-button>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
```
Here is the data:
```
data () {
    return {
        products: [],
        isLoading: false,
    }
},
```
Here is my add-to-cart method, where I change the state of `isLoading`:
```
addToCart(product) {
    this.isLoading = true
    axios.post('cart/add-to-cart/', {
        data: product,
    }).then(r => {
        this.isLoading = false
    }).catch(e => {
        this.isLoading = false
    });
}
``` | 2020/03/11 | [
"https://Stackoverflow.com/questions/60631229",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7528834/"
] | You can use:
```
input_str = 'i am going to eat ma and tae will also go'  # example input
words = input_str.split()
s = set()        # words seen so far
result = set()   # words whose reverse appeared earlier
for w in words:
    r = w[::-1]
    if r in s:
        result.add(r)
    else:
        s.add(w)
print(list(result))
```
output:
```
['am', 'eat']
```
This is an O(n) time solution: first get the words and iterate through them; each new word is added to a set, and if a word's reverse is already in the set, that reverse is added to the result. | ```
input_str = 'i am going to eat ma and tae will also go'
words_list = input_str.split()
new_words_list = [word[::-1] for word in words_list]  # every word reversed
data = []
for i in words_list:
    if len(i) > 1 and i in new_words_list:
        data.append(i)
print(data)
```
**Output:**
```
['am', 'eat', 'ma', 'tae']
``` |