Dataset columns: qid (int64), question (string), date (string), metadata (list), response_j (string), response_k (string).
3,336,726
I have the sequence $a_n = n^3\big(\frac{3}{4}\big)^n$ and I need to find whether it converges or not, and the limit. I took the common ratio, which is $\frac{3 (n+1)^3}{4 n^3}$, and since $\big|\frac{3}{4}\big|<1$ it converges. I don't know how to find the limit from here.
2019/08/28
[ "https://math.stackexchange.com/questions/3336726", "https://math.stackexchange.com", "https://math.stackexchange.com/users/680718/" ]
There are a number of ways to approach this problem. As others have mentioned, the convergence of $\displaystyle \sum^\infty n^3(3/4)^n$ implies $n^3(3/4)^n \to 0$ as $n \to \infty$, and it looks like this is what you're trying to exploit.

> **Exercise**: If $\displaystyle \lim_{n \to \infty} a_n \neq 0$, prove that $\displaystyle \sum_{n=1}^\infty a_n$ cannot converge.

I'm a fan of logarithms myself, so here is another approach. We have $\ln \Big( n^3(3/4)^n \Big) = 3\ln(n) + n \ln(3/4)$. Notice that $3/4 < 1$, so its natural log is negative; we can rearrange the expression to emphasize this:

$$3\ln(n) + n \ln(3/4) \ = \ 3 \ln(n) - n \ln(4/3)$$

At this point, we note that the linear term grows faster than the logarithmic term, so $\ln(a_n) \to - \infty$; and since $a_n = e^{\ln(a_n)}$ and $e^x \to 0$ as $x \to -\infty$, we can conclude that $a_n \to 0$. If you feel that the "grows faster" argument is hand-wavy, you can make it rigorous by taking a derivative: $\frac{d}{dx}\big(3\ln(x) - x\ln(4/3)\big) = \frac{3}{x} - \ln(4/3)$ is negative for all $x$ beyond $x \approx 10.4$ and tends to $-\ln(4/3) < 0$, so the expression decreases without bound toward $- \infty$.
> I can see that he proves how the sequence converges but I can't understand how he finds the limit.

You seem to mix up (the convergence of) the sequence and the series.

> I took the common ratio which is $\frac{3}{4}$ and since $|\frac{3}{4}|<1$ it converges.

If by *"it converges"* you mean that the series $\sum a_n$ converges, because you seem to have applied the [ratio test](https://en.wikipedia.org/wiki/Ratio_test), then - a fortiori - you also have $a_n \to 0$, since:

$$\sum a_n \;\mbox{ converges } \implies a_n \to 0$$
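Both answers can be sanity-checked numerically. The short Python sketch below (purely illustrative, not from either answer) evaluates $a_n = n^3(3/4)^n$ and shows the terms peaking near $n = 10$ before collapsing toward the limit $0$:

```python
def a(n: int) -> float:
    """n-th term of the sequence a_n = n^3 * (3/4)^n."""
    return n ** 3 * (3 / 4) ** n

# The cubic factor dominates at first, so the terms grow up to about n = 10...
print(a(10))  # ≈ 56.3, near the largest term
# ...but the geometric factor (3/4)^n eventually wins and drives a_n to 0.
print(a(100))  # ≈ 3.2e-07
# The ratio of consecutive terms tends to the "common ratio" 3/4 from the question.
print(a(1001) / a(1000))  # ≈ 0.752
```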
42,854,365
Here I am trying to add a new row, but it's crashing with the error "attempt to insert row 3 into section 2, but there are only 3 rows in section 2 after the update". Initially `emerCount` is 2 and I am trying to add the 3rd row. ``` var index = NSIndexPath() if (emerCount < 3) { emerCount += 1 index = NSIndexPath(row: emerCount, section: 2) self.profileTableView?.insertRows(at: [index as IndexPath], with: .automatic) self.profileTableView?.reloadData() } ```
2017/03/17
[ "https://Stackoverflow.com/questions/42854365", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4263791/" ]
Try connecting to the instance via the serial console: <https://cloud.google.com/compute/docs/instances/interacting-with-serial-console>. From there you should be able to repair the firewall rules.
pjhsea, I tried your steps and it connects to the serial port, but it asks for a passphrase, and I was not sure where I had set one. Accessing the serial port from the **gcloud** console always asks for a passphrase, so I used the steps below instead to connect to the serial port and change the firewall settings on my Ubuntu VM:

1. Go to Compute Engine -> VM Instances from the left side menu.
2. Click on the name of the VM you cannot connect to using SSH. This opens the VM instance details page.
3. On the details tab, under the VM name, there are two dropdowns for remote access: SSH and "Connect to serial console". Click on the serial console dropdown and select serial port 1 (console).

This connects to the serial console without any passphrase, and I was able to change my firewall rules with the command `ufw allow 22`. Now you should be able to connect using SSH.
59,296,801
When I run the following command, I expect the exit code to be 0 since my `combined` container runs a test that successfully exits with an exit code of 0. ``` docker-compose up --build --exit-code-from combined ``` Unfortunately, I consistently receive an exit code of 137 even when the tests in my `combined` container run successfully and I exit that container with an exit code of 0 (more details on how that happens are specified below). Below is my docker-compose version: ``` docker-compose version 1.25.0, build 0a186604 ``` According to this [post](https://success.docker.com/article/what-causes-a-container-to-exit-with-code-137), the exit code of 137 can be due to two main issues. 1. The container received a `docker stop` and the app is not gracefully handling SIGTERM 2. The container has run out of memory (OOM). *I know the 137 exit code is not because my container has run out of memory.* When I run `docker inspect <container-id>`, I can see that "OOMKilled" is false as shown in the snippet below. I also have 6GB of memory allocated to the Docker Engine which is plenty for my application. ``` [ { "Id": "db4a48c8e4bab69edff479b59d7697362762a8083db2b2088c58945fcb005625", "Created": "2019-12-12T01:43:16.9813461Z", "Path": "/scripts/init.sh", "Args": [], "State": { "Status": "exited", "Running": false, "Paused": false, "Restarting": false, "OOMKilled": false, <---- shows container did not run out of memory "Dead": false, "Pid": 0, "ExitCode": 137, "Error": "", "StartedAt": "2019-12-12T01:44:01.346592Z", "FinishedAt": "2019-12-12T01:44:11.5407553Z" }, ``` *My container doesn't exit from a `docker stop` so I don't think the first reason is relevant to my situation either.* **How my Docker containers are set up** I have two Docker containers: 1. **b-db** - contains my database 2. **b-combined** - contains my web application and a series of tests, which run once the container is up and running. I'm using a docker-compose.yml file to start both containers. 
``` version: '3' services: db: build: context: . dockerfile: ./docker/db/Dockerfile container_name: b-db restart: unless-stopped volumes: - dbdata:/data/db ports: - "27017:27017" networks: - app-network combined: build: context: . dockerfile: ./docker/combined/Dockerfile container_name: b-combined restart: unless-stopped env_file: .env ports: - "5000:5000" - "8080:8080" networks: - app-network depends_on: - db networks: app-network: driver: bridge volumes: dbdata: node_modules: ``` Below is the Dockerfile for the `combined` service in `docker-compose.yml`. ``` FROM cypress/included:3.4.1 WORKDIR /usr/src/app COPY package*.json ./ RUN npm install COPY . . EXPOSE 5000 RUN npm install -g history-server nodemon RUN npm run build-test EXPOSE 8080 COPY ./docker/combined/init.sh /scripts/init.sh RUN ["chmod", "+x", "/scripts/init.sh"] ENTRYPOINT [ "/scripts/init.sh" ] ``` Below is what is in my `init.sh` file. ``` #!/bin/bash # Start front end server history-server dist -p 8080 & front_pid=$! # Start back end server that interacts with DB nodemon -L server & back_pid=$! # Run tests NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome # Error code of the test test_exit_code=$? echo "TEST ENDED WITH EXIT CODE OF: $test_exit_code" # End front and backend server kill -9 $front_pid kill -9 $back_pid # Exit with the error code of the test echo "EXITING SCRIPT WITH EXIT CODE OF: $test_exit_code" exit "$test_exit_code" ``` Below is the Dockerfile for my `db` service. All its doing is copying some local data into the Docker container and then initialising the database with this data. ``` FROM mongo:3.6.14-xenial COPY ./dump/ /tmp/dump/ COPY mongo_restore.sh /docker-entrypoint-initdb.d/ RUN chmod 777 /docker-entrypoint-initdb.d/mongo_restore.sh ``` Below is what is in `mongo_restore.sh`. 
``` #!/bin/bash # Creates db using copied data mongorestore /tmp/dump ``` Below are the last few lines of output when I run `docker-compose up --build --exit-code-from combined; echo $?`. ``` ... b-combined | user disconnected b-combined | Mongoose disconnected b-combined | Mongoose disconnected through Heroku app shutdown b-combined | TEST ENDED WITH EXIT CODE OF: 0 =========================== b-combined | EXITING SCRIPT WITH EXIT CODE OF: 0 ===================================== Aborting on container exit... Stopping b-combined ... done 137 ``` **What is confusing as you can see above, is that the test and script ended with exit code of 0 since all my tests passed successfully but the container still exited with an exit code of 137.** **What is even more confusing is that when I comment out the following line (which runs my Cypress integration tests) from my `init.sh` file, the container exits with a 0 exit code as shown below.** ``` NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome ``` Below is the output I receive when I comment out / remove the above line from `init.sh`, which is a command that runs my Cypress integration tests. ``` ... b-combined | TEST ENDED WITH EXIT CODE OF: 0 =========================== b-combined | EXITING SCRIPT WITH EXIT CODE OF: 0 ===================================== Aborting on container exit... Stopping b-combined ... done 0 ``` **How do I get docker-compose to return me a zero exit code when my tests run successfully and a non-zero exit code when they fail?** **EDIT:** After running the following docker-compose command in debug mode, I noticed that b-db seems to have some trouble shutting down and potentially is receiving a SIGKILL signal from Docker because of that. ``` docker-compose --log-level DEBUG up --build --exit-code-from combined; echo $? ``` Is this indeed the case according to the following output? ``` ... b-combined exited with code 0 Aborting on container exit... 
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Db-property%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 3819 http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Db-property%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 4039 http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/attach?logs=0&stdout=1&stderr=1&stream=1 HTTP/1.1" 101 0 http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None Stopping b-combined ... Stopping b-db ... 
Pending: {<Container: b-db (0626d6)>, <Container: b-combined (196f3e)>} Starting producer thread for <Container: b-combined (196f3e)> http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None Pending: {<Container: b-db (0626d6)>} [the previous "Pending" line repeats roughly one hundred times] http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/wait HTTP/1.1" 200 32 http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/stop?t=10 HTTP/1.1" 204 0 http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561bStopping b-combined ... done Finished processing: <Container: b-combined (196f3e)> Pending: {<Container: b-db (0626d6)>} Starting producer thread for <Container: b-db (0626d6)> http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None Pending: set() [the previous line repeats several times] http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None http://localhost:None "POST /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/stop?t=10 HTTP/1.1" 204 0 http://localhost:None "POST /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/wait HTTP/1.1" 200 30 Stopping b-db ... 
done Pending: set() http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None 137 ```
2019/12/12
[ "https://Stackoverflow.com/questions/59296801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7489488/" ]
Docker exit code 137 means the container was killed with SIGKILL (128 + 9), which most often happens when Docker doesn't have enough RAM to finish the work. Unfortunately Docker consumes a lot of RAM. Go to the Docker Desktop app > Preferences > Resources > Advanced and increase the MEMORY - doubling it is a good start.
The key line in the output is: `Aborting on container exit...` From the [docker-compose docs](https://docs.docker.com/compose/reference/up/):

> **--abort-on-container-exit** Stops all containers if any container was stopped.

Are you running docker-compose with this flag? If that is the case, think about what it means: once `b-combined` is finished, it simply exits, which means container `b-db` will be forced to stop as well. Even though `b-combined` returned exit code 0, the forced shutdown of `b-db` was likely not handled gracefully by mongodb.

EDIT: I just realized you have `--exit-code-from` in the command line, which implies `--abort-on-container-exit`.

**Solution**: `b-db` needs more time to exit gracefully. Running `docker-compose up --timeout 600` avoids the error.
54,120
I am looking for a way to compute the similarity between two item names using integer encoding or one-hot encoding, for example "lane connector" vs. "a truck crane". I have 100,000 item names, each consisting of 2~3 words as above. Each item also has a size (36mm, 12M, 2400\*1200, ...) and a unit (ea, m2, m3, hr, ...). I want to turn each (item name, size, unit) into a vector, and to do this I need to convert the text to numbers in some way. All I have found is word2vec material, but my case has no context corpus, so I don't think it is possible to learn any context from my data. [![Example Image of dataset](https://i.stack.imgur.com/6JAKa.png)](https://i.stack.imgur.com/6JAKa.png)
2019/06/20
[ "https://datascience.stackexchange.com/questions/54120", "https://datascience.stackexchange.com", "https://datascience.stackexchange.com/users/76321/" ]
I'm not sure if it's possible with this data set. Word2Vec generates word embeddings based on the principle of word association within a sentence, so I don't think you can apply Word2Vec to this dataset, which doesn't appear to have any such associations - except in some places where you could match (perform clustering on) parameters like:

1. the units
2. the size/dimension of the item name

I'd be interested to see a solution for this type of problem.
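To illustrate the clustering direction mentioned above, here is a tiny Python sketch (the item rows are made-up examples echoing the question, not real data) that groups item names sharing the same (size, unit) pair:

```python
from collections import defaultdict

# Hypothetical rows in the spirit of the question's (item name, size, unit) data
items = [
    ("lane connector", "36mm", "ea"),
    ("a truck crane", "12M", "hr"),
    ("steel pipe", "36mm", "ea"),
]

# Bucket item names by their structured fields; names in the same bucket
# are candidates for being compared or clustered together.
groups = defaultdict(list)
for name, size, unit in items:
    groups[(size, unit)].append(name)

print(groups[("36mm", "ea")])  # ['lane connector', 'steel pipe']
```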
Okay, so what I understand is you just have a list of words and want to get word vectors for those. You are correct that you cannot train a word2vec model, as it requires a corpus. But what you can do is use a pre-trained model (word2vec or GloVe). I suggest you use word2vec, as gensim has a pretty simple implementation. You can download Google's pre-trained model [here](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit). And then you can use the following code to build `word_embed` for a given `word_list`. ``` import gensim model = gensim.models.KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True) vocab = model.vocab.keys() word_embed = {} for word in word_list: if word in vocab: word_embed[word] = model[word] ``` Also, you'll have to apply some pre-processing to your word list so that you can get the maximum number of matches from the pre-trained embeddings (like removing articles such as "the", etc.). And if a word is still not found in the pre-trained embeddings, you can either initialize it randomly or take an average of the embeddings.
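Once every word has a vector (from the pre-trained model above or elsewhere), a common way to compare two multi-word item names is to average the word vectors of each name and take the cosine similarity. Below is a self-contained sketch with tiny made-up 3-dimensional vectors standing in for the real 300-dimensional embeddings (the numbers are illustrative only); unknown words such as "a" are simply skipped, just as out-of-vocabulary words are skipped above:

```python
from math import sqrt

# Toy stand-ins for pre-trained embeddings (real word2vec vectors are 300-dim).
word_embed = {
    "lane":      [0.9, 0.1, 0.0],
    "connector": [0.8, 0.3, 0.1],
    "truck":     [0.1, 0.9, 0.2],
    "crane":     [0.2, 0.8, 0.3],
}

def item_vector(name):
    """Average the embeddings of the known words in an item name
    (assumes at least one word of the name is in word_embed)."""
    vecs = [word_embed[w] for w in name.lower().split() if w in word_embed]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

sim = cosine(item_vector("lane connector"), item_vector("a truck crane"))
print(sim)  # well below 1.0: the two item names share no words
```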
13,137,463
I have tried: ``` const ascii = "abcdefghijklmnopqrstuvwxyz" const letter_goodness []float32 = { .0817,.0149,.0278,.0425,.1270,.0223,.0202, .0609,.0697,.0015,.0077,.0402,.0241,.0675, .0751,.0193,.0009,.0599,.0633,.0906,.0276, .0098,.0236,.0015,.0197,.0007 } const letter_goodness = { .0817,.0149,.0278,.0425,.1270,.0223,.0202, .0609,.0697,.0015,.0077,.0402,.0241,.0675, .0751,.0193,.0009,.0599,.0633,.0906,.0276, .0098,.0236,.0015,.0197,.0007 } const letter_goodness = []float32 { .0817,.0149,.0278,.0425,.1270,.0223,.0202, .0609,.0697,.0015,.0077,.0402,.0241,.0675, .0751,.0193,.0009,.0599,.0633,.0906,.0276, .0098,.0236,.0015,.0197,.0007 } ``` The first declaration and initialization works fine, but the second, third, and fourth don't. How can I declare and initialize a const array of floats?
2012/10/30
[ "https://Stackoverflow.com/questions/13137463", "https://Stackoverflow.com", "https://Stackoverflow.com/users/205270/" ]
An array isn't immutable by nature; you can't make it constant. The nearest you can get is: ``` var letter_goodness = [...]float32 {.0817, .0149, .0278, .0425, .1270, .0223, .0202, .0609, .0697, .0015, .0077, .0402, .0241, .0675, .0751, .0193, .0009, .0599, .0633, .0906, .0276, .0098, .0236, .0015, .0197, .0007 } ``` Note the `[...]` instead of `[]`: it ensures you get a (fixed size) array instead of a slice. So the values aren't fixed but the size is. As pointed out by [@jimt](https://stackoverflow.com/users/357705/jimt), the `[...]T` syntax is sugar for `[123]T`. It creates a fixed size array, but lets the compiler figure out how many elements are in it.
From [Effective Go](https://golang.org/doc/effective_go.html#constants): > > Constants in Go are just that—constant. They are created at compile time, even when defined as locals in functions, and can only be numbers, characters (runes), strings or booleans. Because of the compile-time restriction, the expressions that define them must be constant expressions, evaluatable by the compiler. For instance, `1<<3` is a constant expression, while `math.Sin(math.Pi/4)` is not because the function call to `math.Sin` needs to happen at run time. > > > Slices and arrays are always evaluated during runtime: ``` var TestSlice = []float32 {.03, .02} var TestArray = [2]float32 {.03, .02} var TestArray2 = [...]float32 {.03, .02} ``` `[...]` tells the compiler to figure out the length of the array itself. Slices wrap arrays and are easier to work with in most cases. Instead of using constants, just make the variables inaccessible to other packages by using a lower case first letter: ``` var ThisIsPublic = [2]float32 {.03, .02} var thisIsPrivate = [2]float32 {.03, .02} ``` `thisIsPrivate` is available only in the package it is defined in. If you need read access from outside, you can write a simple getter function (see [Getters in golang](https://golang.org/doc/effective_go.html#Getters)).
23,349,855
For a lab we are required to read in from binary files using low level io (open/lseek/close not fopen/fseek/fclose) and manipulate the data. My question is how do I read or write structs using these methods. The struct is as follows ``` typedef struct Entry { char title[33]; char artist[17]; int val; int cost; } Entry_T; ``` I originally planned on creating a buffer of `sizeof(Entry_T)` and read the struct simply, but I don't think that's possible using low level I/O. Am I supposed to create 4 buffers and fill them sequentially, use one buffer and reallocate it for the right sizes, or is it something else entirely. An example of writing would be helpful as well, but I think I may be able to figure it out after I see a read example.
2014/04/28
[ "https://Stackoverflow.com/questions/23349855", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2929589/" ]
The low-level functions might be OS specific. However, they generally map as follows: ``` fopen() -> open() fread() -> read() fwrite() -> write() fclose() -> close() ``` Note that while the `fopen()` set of functions uses a `FILE *` as a token to represent the file, the `open()` set of functions uses an integer (`int`). Using `read()` and `write()`, you may read and write entire structures. So, for: ``` typedef struct Entry { char title[33]; char artist[17]; int val; int cost; } Entry_T; ``` you may elect to read, or write, as follows: ``` { int fd = (-1); Entry_T entry; ... fd = open(...); ... read(fd, &entry, sizeof(entry)); ... write(fd, &entry, sizeof(entry)); ... if((-1) != fd) close(fd); } ```
Here you go: ``` #include <fcntl.h> /* open, O_RDONLY */ #include <unistd.h> /* read, close */ Entry_T t; int fd = open("file_name", O_RDONLY); if (fd == -1) { /* handle the error */ } read(fd, &t, sizeof(t)); close(fd); ```
33,217,241
I have a LinearLayout that I want to move up when a Snackbar appears. I have seen many examples of how to do this with a FloatingActionButton, but what about a regular view?
2015/10/19
[ "https://Stackoverflow.com/questions/33217241", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1129332/" ]
Based on @Travis Castillo's answer. This fixes problems such as:

* moving the entire layout up, which caused the views at the top to disappear;
* not pushing the layout up when Snackbars are shown immediately after each other.

So here is the fixed code for the `MoveUpwardBehavior` class: ```java import android.content.Context; import android.support.annotation.Keep; import android.support.design.widget.CoordinatorLayout; import android.support.design.widget.Snackbar; import android.support.v4.view.ViewCompat; import android.util.AttributeSet; import android.view.View; @Keep public class MoveUpwardBehavior extends CoordinatorLayout.Behavior<View> { public MoveUpwardBehavior() { super(); } public MoveUpwardBehavior(Context context, AttributeSet attrs) { super(context, attrs); } @Override public boolean layoutDependsOn(CoordinatorLayout parent, View child, View dependency) { return dependency instanceof Snackbar.SnackbarLayout; } @Override public boolean onDependentViewChanged(CoordinatorLayout parent, View child, View dependency) { float translationY = Math.min(0, ViewCompat.getTranslationY(dependency) - dependency.getHeight()); //Dismiss the last Snackbar immediately to prevent conflicts when Snackbars are shown immediately after each other ViewCompat.animate(child).cancel(); //Move the entire child layout up, which hides the views at the top ViewCompat.setTranslationY(child, translationY); //Set top padding on the child layout so the hidden views reappear //If you set padding on the child in XML, you have to preserve it here via child.getPaddingLeft(), etc. child.setPadding(0, -Math.round(translationY), 0, 0); return true; } @Override public void onDependentViewRemoved(CoordinatorLayout parent, View child, View dependency) { //Reset the padding and translationY to their defaults child.setPadding(0, 0, 0, 0); ViewCompat.animate(child).translationY(0).start(); } } ``` This code pushes up what the user sees on screen, and the user still has access to all the views in your layout while the Snackbar is showing. 
If you want the Snackbar to cover the views instead of pushing them up, while still leaving all of them accessible, you need to change the method `onDependentViewChanged`: ```java @Override public boolean onDependentViewChanged(CoordinatorLayout parent, View child, View dependency) { float translationY = Math.min(0, ViewCompat.getTranslationY(dependency) - dependency.getHeight()); //Dismiss the last Snackbar immediately to prevent conflicts when Snackbars are shown immediately after each other ViewCompat.animate(child).cancel(); //Pad from the bottom instead of pushing the layout up and padding from the top //If you set padding on the child in XML, you have to preserve it here via child.getPaddingLeft(), etc. child.setPadding(0, 0, 0, -Math.round(translationY)); return true; } ``` and the method `onDependentViewRemoved`: ```java @Override public void onDependentViewRemoved(CoordinatorLayout parent, View child, View dependency) { //Reset the padding to its default child.setPadding(0, 0, 0, 0); } ``` Unfortunately you will lose the animation when the user swipes to dismiss the `Snackbar`, and you would have to use the `ValueAnimator` class to animate the padding changes, which introduces some conflicts you would have to debug: <https://developer.android.com/reference/android/animation/ValueAnimator.html> Any comment about animating the swipe-to-dismiss of the `Snackbar` is appreciated. If you can live without that animation, this approach works too. Either way, I recommend the first approach.
@Markymark proposes a great solution, but on the first frame of the snackbar translationY == 0, on the second frame translationY == the full height, and only then does it start decreasing correctly, so the dependent layout janks on the first frame. This can be fixed by skipping the first frame, with restoring the original padding as a nice side effect.

```kotlin
class MoveUpwardBehavior(context: Context?, attrs: AttributeSet?) :
    CoordinatorLayout.Behavior<View>(context, attrs), Parcelable {

    var originalPadding = -1

    constructor(parcel: Parcel) : this(
        TODO("context"),
        TODO("attrs")
    ) {
    }

    override fun layoutDependsOn(parent: CoordinatorLayout, targetView: View, snackBar: View): Boolean {
        return snackBar is Snackbar.SnackbarLayout
    }

    /**
     * @param parent - the parent container
     * @param targetView - the view that applies the layout_behavior
     * @param snackBar
     */
    override fun onDependentViewChanged(parent: CoordinatorLayout, targetView: View, snackBar: View): Boolean {
        if (originalPadding == -1) {
            originalPadding = targetView.paddingBottom
            return true
        }
        val bottomPadding = min(0f, snackBar.translationY - snackBar.height).roundToInt()
        // println("bottomPadding: ${snackBar.translationY} ${snackBar.height}")
        // Dismiss the last Snackbar immediately to prevent conflicts when Snackbars are shown immediately after each other
        ViewCompat.animate(targetView).cancel()
        // Set bottom padding so the target ViewGroup is not hidden
        targetView.setPadding(targetView.paddingLeft, targetView.paddingTop, targetView.paddingRight, -(bottomPadding - originalPadding))
        return true
    }

    override fun onDependentViewRemoved(parent: CoordinatorLayout, targetView: View, snackBar: View) {
        // Reset the padding to its default value
        targetView.setPadding(targetView.paddingLeft, targetView.paddingTop, targetView.paddingRight, originalPadding)
        originalPadding = -1
    }

    override fun writeToParcel(parcel: Parcel, flags: Int) {
    }

    override fun describeContents(): Int {
        return 0
    }

    companion object CREATOR : Parcelable.Creator<MoveUpwardBehavior> {
        override fun createFromParcel(parcel: Parcel): MoveUpwardBehavior {
            return MoveUpwardBehavior(parcel)
        }

        override fun newArray(size: Int): Array<MoveUpwardBehavior?> {
            return arrayOfNulls(size)
        }
    }
}
```
177,007
I was wondering if I can use the AAC codec in my commercial app for free (through LGPL ffmpeg). The wiki says:

> No licenses or payments are required to be able to stream or distribute content in AAC format.[36] This reason alone makes AAC a much more attractive format to distribute content than MP3, particularly for streaming content (such as Internet radio). However, a patent license is required for all manufacturers or developers of AAC codecs. For this reason free and open source software implementations such as FFmpeg and FAAC may be distributed in source form only, in order to avoid patent infringement. (See below under Products that support AAC, Software.)

But the xSplit program had to cancel AAC for free members because they have to pay royalties per person. Is this true (that you have to pay for each person that uses AAC)? If you do have to pay, which company do you pay, and how does one apply?
2012/11/24
[ "https://softwareengineering.stackexchange.com/questions/177007", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72932/" ]
**(This answer is not legal advice. You should speak to an experienced patent attorney.)** ### What to do about AAC If I needed to encode or decode AAC, I would rely on operating system APIs where the OS or hardware vendor has licensed the AAC patents AND the patents for similar audio technologies: * **Windows**: [Microsoft Media Foundation](https://docs.microsoft.com/en-us/windows/win32/medfound/microsoft-media-foundation-sdk) * **macOS**, **iOS**: [Apple Core Audio Essentials](https://developer.apple.com/library/archive/documentation/MusicAudio/Conceptual/CoreAudioOverview/CoreAudioEssentials/CoreAudioEssentials.html) * **Android**: the [android.media](https://developer.android.com/reference/android/media/package-summary) Java/Kotlin API or the [libmediandk](https://developer.android.com/ndk/reference/group/media) native API (requires that your Android OEM has licensed the needed patents) Another option is to purchase [Fluendo's patent-compliant package](https://fluendo.com/en/products/enterprise/fluendo-ffmpeg/) for FFmpeg or GStreamer, which supports Windows, MacOS, GNU/Linux, Android, and iOS. The Fluendo package might support platforms supported by GStreamer like the BSDs and OpenSolaris. ### The problem with patents In general, it is not possible for you to prove that any software you write or depend on does not infringe upon any patents you haven't licensed. When you license a patent, you did not pay for the "right" to actually use a technology. You have only earned the right not to be sued by the licensor for infringement of the patents you licensed. This is why we see cases like [Alcatel-Lucent v. Microsoft Corp.](https://en.wikipedia.org/wiki/Alcatel-Lucent_v._Microsoft_Corp.), where Microsoft had licensed Fraunhofer's MP3 patents but Alcatel-Lucent still argued that Windows Media Player's MP3 support infringed upon two Alcatel-Lucent patents for perceptual audio coding. 
### FFmpeg's patent situation Any code in FFmpeg--even the [Fraunhofer FDK AAC library](https://en.wikipedia.org/wiki/Fraunhofer_FDK_AAC)--could be infringing upon yet-unknown patents. To minimize their own liability, the FFmpeg developers cannot advise you on which patents you may need to license. Patent owners have no incentive to warn or sue open source projects for infringement. Instead, they wait for wealthy companies to integrate these infringing technologies in lucrative commercial products. FFmpeg [warns on their legal page](https://www.ffmpeg.org/legal.html) that MPEG LA does this.
Android apps like TuneIn Radio use the FFmpeg decoder, and I cannot believe that such a popular app is paying a per-download licence fee for that. I note that the BBC's rather nifty iPlayer Radio app uses HLS to deliver the audio directly to the media player. This is how it should be done.
51,664,591
In my application, I am using third party authentication to log a user in and then set a token in his localstorage. I'm writing a service to cache the profile information, which takes that user's auth token and calls a `getUser()` backend method to give me back the user profile information. The issue is that there is a slight delay between the time when the token is set in localstorage and when the app is relying on the token to make the backend call upon initialization. ``` export class UserService { private userProfileSubject = new BehaviorSubject<Enduser>(new Enduser()); userProfile$ = this.userProfileSubject.asObservable(); constructor( private _adService: AdService, private _authService: AuthnService) { } setUserProfile() { const username = this._authService.getUser(); this.userProfile$ = this._adService.getUser(username).pipe( first(), map(result => result[0]), publishReplay(1), refCount() ); return this.userProfile$; } } ``` This is the synchronous method which checks the localstorage token and returns the username. ``` public getUser(): string { const jwtHelper = new JwtHelperService() const token = localStorage.getItem(environment.JWT_TOKEN_NAME); if (!token || jwtHelper.isTokenExpired(token)) { return null; } else { const t = jwtHelper.decodeToken(token); return t.username; } } ``` So `this._authService.getUser();` needs to complete before I can use it in `this._adService.getUser(username)`. I figured the way to do this would be to make the `getUser()` method return an Observable and `takeWhile` until the value is `!== null`. Or with `timer`. Been trying this for a couple hours without success. Any help is greatly appreciated. 
\_\_ Edit: This seems to work, but using `timer` strikes me as pretty hacky, and I'd rather do it another way: In `user.service.ts`: ``` setUserProfile() { timer(100).pipe( concatMap(() => { const username = this._authService.getUser(); return this._adService.getUser(username) }), map(res => res[0]) ).subscribe(profile => { this.userProfileSubject.next(profile); }); } ``` In `app.component.ts` `ngOnInit` ``` this._userService.setUserProfile(); this._userService.userProfile$.pipe( map((user: Enduser) => this._userService.setUserPermissions(user)), takeUntil(this.ngUnsubscribe) ).subscribe(); ``` Edit 2: Working Solution `isLoggedIn()` is the method in which local storage is set. Here, I'm waiting for it to be set before continuing on to fetch the user profile information. ``` this._authService.isLoggedIn().pipe( concatMap(() => { const username = this._authService.getUser(); return this._adService.getUser(username) }), map(res => res[0]) ).subscribe(profile => { this.userProfileSubject.next(profile); }); } ``` isLoggedIn: ``` isLoggedIn(state): Observable<boolean> { ... return this.http.get(url, {withCredentials: true}).pipe( map((res: any) => { const token = res.mdoc.token; if (token) { localStorage.setItem(environment.JWT_TOKEN_NAME, token); return true; } else { return false; } }) } ```
2018/08/03
[ "https://Stackoverflow.com/questions/51664591", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4059156/" ]
As stated in my comment, your problem of wanting to wait for `this._authService.getUser()` to complete doesn't make sense, because if `this._authService.getUser()` is synchronous (as stated by you), then it will always complete before the next line of code is executed. Anyway, after reading your code I think I know what you are trying to do...

1. Get a username from `this._authService.getUser()`
2. Pass the username to `this._adService.getUser()`
3. Wait for `this._adService.getUser()` to complete and pass its value to your observable stream, `userProfile$`

To achieve that, you don't need any of those fancy RxJS operators; your code can be as simple as:

```
export class UserService {
  private userProfileSubject = new BehaviorSubject<Enduser>(new Enduser());
  userProfile$ = this.userProfileSubject.asObservable();

  constructor(
    private _adService: AdService,
    private _authService: AuthnService
  ) {}

  setUserProfile() {
    const username = this._authService.getUser();

    this._adService.getUser(username).subscribe((userProfile: Enduser) => {
      this.userProfileSubject.next(userProfile);
    });
  }
}
```

Just emit to the `userProfile$` stream as I am doing above, and subscribe to that wherever you want in your app to get the user profile data. Now anywhere in your app, you can do this to get the user profile data whenever it's sent down the stream:

```
constructor(private _userService: UserService) {
  _userService.userProfile$.subscribe((userProfile: Enduser) => {
    console.log(userProfile);
  });
}
```
My implementation: ``` setUserProfile() { this.userProfile$ = this._authService.isLoggedIn(this.activatedRoute.snapshot).pipe( concatMap(() => { return this._adService.getUser(this._authService.getUser()).pipe( map(result => result[0]), publishReplay(1), refCount() ); }) ) return this.userProfile$; } } _____ // _adService.getUser() getUser(username: string): Observable<Enduser> { const usernameUrl = encodeURIComponent(username); return this.http.get(`${environment.API_URL}person/${usernameUrl}`).pipe( map((res: any) => res.data) ); } _____ // _authService.getUser() public getUser(): string { const jwtHelper = new JwtHelperService() const token = localStorage.getItem(environment.JWT_TOKEN_NAME); if (!token || jwtHelper.isTokenExpired(token)) { return null; } else { const t = jwtHelper.decodeToken(token); return t.username; } } ```
1,319,603
Is it possible to view the PHP code of a live website?
2009/08/23
[ "https://Stackoverflow.com/questions/1319603", "https://Stackoverflow.com", "https://Stackoverflow.com/users/123663/" ]
You can't do that, because the server-side script (here, a PHP script) executes on the web server, and its output is embedded inside HTML which is then sent back to your browser. So all you can view is the HTML. Just imagine: if what you asked were possible, everyone would have the source code of Facebook or Flipkart in their hands by now.
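The execution model described above can be sketched in Python (a hypothetical toy server, not the actual PHP machinery): the script runs on the server, and only its rendered HTML output ever reaches the browser.

```python
# Server-side "script": like a PHP file, it runs only on the server.
def render_page(user: str) -> str:
    db_password = "s3cret"            # never leaves the server
    query = "SELECT name FROM users"  # never leaves the server
    # Only the rendered markup is sent back to the visitor:
    return f"<html><body><p>Hello, {user}!</p></body></html>"

# What the visitor's browser actually receives:
html = render_page("alice")
print(html)

# None of the server-side source logic appears in the response:
assert "s3cret" not in html
assert "SELECT" not in html
```

This is exactly why "view source" in the browser shows only HTML: the PHP itself never travels over the wire.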
check out `php://input` and `php://filter/convert.base64-encode/resource=<filepath>`, eg. <http://level11.tasteless.eu/index.php?file=php://filter/convert.base64-encode/resource=config.easy.inc.php>
11,034,322
You have the `EntityManager.find(Class entityClass, Object primaryKey)` method to find a specific row by its primary key. But how do I find a value in a column that just has unique values and is not a primary key?
2012/06/14
[ "https://Stackoverflow.com/questions/11034322", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1008572/" ]
You can use a Query, either JPQL, Criteria, or SQL. Not sure if your concern is in obtaining cache hits similar to find(). In EclipseLink 2.4 cache indexes were added to allow you to index non-primary key fields and obtain cache hits from JPQL or Criteria. See, <http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Indexes> Prior to 2.4 you could use in-memory queries to query the cache on non-id fields.
TL;DR: Within the DSL level of [JPA](https://docs.oracle.com/javaee/7/api/javax/persistence/package-summary.html) itself, there is no such finder beyond the practices mentioned in the previous answers.

> How do I find a value in a column that just has unique values and is not a primary key?

There is no specification for querying by a custom field in the root interface `javax.persistence.EntityManager`; you need a [criteria](https://docs.oracle.com/javaee/7/tutorial/persistence-criteria.htm)-based query.

```java
CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder();
CriteriaQuery<R> criteriaQuery = criteriaBuilder.createQuery(type);
Root<R> root = criteriaQuery.from(type);
criteriaQuery.where(criteriaBuilder.equal(root.get(your_field), value));
```

You can also group your predicates together and pass them all at once:

```
andPredicates.add(criteriaBuilder.and(root.get(field).in(child)));
criteriaQuery.where(criteriaBuilder.and(andPredicates.toArray(new Predicate[]{})));
```

And fetch the result (either a single entity or a list of entities) with

```
entityManager.createQuery(suitable_criteria_query).getSingleResult();
entityManager.createQuery(suitable_criteria_query).getResultList();
```
45,793,451
I have a backend that returns me some JSON. I parse it into my class:

```
class SomeData(
    @SerializedName("user_name") val name: String,
    @SerializedName("user_city") val city: String,
    var notNullableValue: String
)
```

Use the Gson converter factory:

```
Retrofit retrofit = new Retrofit.Builder()
    .baseUrl(ENDPOINT)
    .client(okHttpClient)
    .addConverterFactory(GsonConverterFactory.create(gson))
    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
    .build();
```

And in my interface:

```
interface MyAPI {
    @GET("get_data")
    Observable<List<SomeData>> getSomeData();
}
```

Then I retrieve data from the server (with RxJava) without any error. But I expected an error, because I thought I should do something like this (to prevent a GSON converter error, because `notNullableValue` is not present in my JSON response):

```
class SomeData @JvmOverloads constructor(
    @SerializedName("user_name") val name: String,
    @SerializedName("user_city") val city: String,
    var notNullableValue: String = ""
)
```

After the data is received from the backend and parsed into my SomeData class using the constructor without a default value, the value of **notNullableValue == null**. As I understand it, a non-nullable value can be null in Kotlin?
2017/08/21
[ "https://Stackoverflow.com/questions/45793451", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5417224/" ]
Yes, that is because you're giving it a default value. Of course it will never be null; that's the whole point of a default value. Remove `=""` from the constructor and you will get an error. Edit: Found the issue. GSON uses the magic `sun.misc.Unsafe` class, which has an `allocateInstance` method that is obviously considered very `unsafe` because what it does is skip initialization (constructors/field initializers and the like) and security checks. So there is your answer as to why a Kotlin non-nullable field can be null. The offending code is in `com/google/gson/internal/ConstructorConstructor.java:223`. Some interesting details about the `Unsafe` class: <http://mishadoff.com/blog/java-magic-part-4-sun-dot-misc-dot-unsafe/>
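The "allocate without running the constructor" behaviour can be illustrated with a Python analogy (this is only an analogy; on the JVM, GSON goes through `sun.misc.Unsafe.allocateInstance`):

```python
class SomeData:
    def __init__(self, name: str):
        self.name = name
        self.not_nullable = ""  # the "default value" lives in the constructor

# Normal construction runs __init__, so the default is applied:
normal = SomeData("ben")
assert normal.not_nullable == ""

# Allocating the object while skipping __init__ -- roughly what
# Unsafe.allocateInstance does -- leaves the field unset entirely:
raw = SomeData.__new__(SomeData)
assert not hasattr(raw, "not_nullable")  # effectively "null", despite the default
```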
Try to override the constructor like this:

```
class SomeData(
    @SerializedName("user_name") val name: String,
    @SerializedName("user_city") val city: String,
    var notNullableValue: String = "") {
    constructor() : this("","","")
}
```

Now after the server response you can check that **notNullableValue** is not null: it's empty.
39,790,281
I want to generate running serial numbers like 0001, 0999, 1100, 19300, left-padded with zeros up to four characters. I have written the query below to generate such a number:

```
Select Right(Power(10, 4) + 02, 4)

Select Right(Power(10, 4) + 102, 4)

Select Right(Power(10, 4) + 10002, 4)
```

**Actual Result:** 0002 0102 0002

**Expected Result:** 0002 0102 10002

In SQL Server 2012, there is a FORMAT function available:

```
SELECT Format(1, '0002')

SELECT Format(1000, '0102')

SELECT Format(10000, '10002')
```

**Actual Output:** 0002 0102 10002

Currently I am using SQL Server 2008. How can I pad with leading zeros up to a length of 4 characters, while longer numbers come through unchanged?
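For reference, the padding rule being asked for (left-pad with zeros to four characters, but never truncate longer values) can be sketched in Python, which matches the expected results above:

```python
def pad4(n: int) -> str:
    # Left-pad to 4 characters; values longer than 4 digits pass through unchanged.
    return str(n).zfill(4)

print(pad4(2))      # 0002
print(pad4(102))    # 0102
print(pad4(10002))  # 10002
```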
2016/09/30
[ "https://Stackoverflow.com/questions/39790281", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1875682/" ]
The generator solution you are looking for would be

```
function* range(i, end=Infinity) {
  while (i <= end) {
    yield i++;
  }
}

// if (this.props.total > 1) - implicitly done by `range`
for (let page of range(1, this.props.total)) {
  active = page === +this.props.current;
}
```
For generating any range of sequential integers of length `k` starting at `n` in JavaScript the following should work: ``` Array.apply(null, Array(k)).map((x, i) => i + n); ``` While not *quite* the same as the coffeescript range functionality, its probably close enough for most uses. Also despite being significantly more verbose has one decided advantage: you don't have to remember which of `..` and `...` is exclusive and which is inclusive.
31,074,739
In order to insert GA code (and pretty much any other JS library), the code snippet is: ``` <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','//www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-XXXXXX-X', 'auto'); ga('send', 'pageview'); </script> ``` Why not: ``` <script type="text/javascript" src="//www.google-analytics.com/analytics.js" async></script> ``` at the end of the `body`?
2015/06/26
[ "https://Stackoverflow.com/questions/31074739", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1620081/" ]
Google knows that the script they have is not dependent on any other script in the page, therefore they are enforcing that the script is executed as 'non-blocking' meaning that the script content is executed ASAP, outside of the usual tag ordering within the document (it does not have any dependencies). The [implementation of the DOM script tag](https://html.spec.whatwg.org/multipage/scripting.html#script) is non-trivial and must cater for script inter-dependencies, unless explicitly stated as '[async](https://html.spec.whatwg.org/multipage/scripting.html#attr-script-async)'. In this case the external code will be executed immediately, without waiting for anything else on the page to load. [Google have documented their approach well here](https://developers.google.com/analytics/devguides/collection/analyticsjs/advanced). Basically it will improve performance on old browsers by allowing `async` script execution. Dynamically inserting a script tag mimics the behaviour of the native `async` attribute in modern browsers. You can see that the dynamic script tag is marked as `async` in their code injector function, to cater for modern browsers too. i.e. `a.async=1;`
According to Google's [docs](https://developers.google.com/analytics/devguides/collection/analyticsjs/advanced), they do recommend the simpler `<script src>` version but **only** if you're targeting modern browsers (excluding IE 9).
128,952
I currently have a client that will be adding replicated data from satellite locations at a rate of approximately 80 TB per year. With this said, in year 2 we will have 160 TB, and so on year after year. I want to do some sort of RAID 10 or RAID 6 setup. I want to keep the servers to approximately 4U high and rack-mounted. All suggestions are welcome on a replication strategy. We will want to have one instance of the data in house and the other co-located (any suggestions on co-location sites too?). The obvious hardware will be something like a rack-mount server with hot-swap trays and dual Xeon-based processors. The data is used as an archive of information; files will be small in size. I can add to or expand this question if it is too vague. Thanks for looking.
2010/04/02
[ "https://serverfault.com/questions/128952", "https://serverfault.com", "https://serverfault.com/users/90428/" ]
We are using CORAID shelves for some of our stuff. The last shelf we are setting up is a 24-port unit filled with 2 TB drives: <http://www.coraid.com/PRODUCTS/SR-Series/SR2421-EtherDrive-Storage-Appliance_2>. We have 4 shelves so far. Each takes up 4U, is certified with VMware, and has Linux & Windows drivers available (I use both currently). The cost/GB is pretty low. I deployed the first shelf in 2006 and have never had any problems with CORAID equipment.
Are you willing to build your own? Or are you looking at a vendor solution? Do you need to access the data as a single volume, or, will your system handle figuring out which server your data is on? Are you concerned with having at least two copies of your data on disparate systems or are you handling backups in addition? Does this need to act as a SAN, or is data archived and able to deal with slightly slower access times? That said, if you are looking at a vendor provided solution, their salespeople are paid very well to identify what you need and provide a quotation. If you are looking to build your own, consider ZFS. The most dense 4U solution I've used <http://www.supermicro.com/products/chassis/4U/?chs=847> If you were to build your own similar to a Lefthand solutions product or with glusterFS, you could build a cluster with redundancy on top of a number of these nodes.
55,674,130
```swift
var input = [readLine() ?? ""]
```

If I just press Enter, `input` contains `[""]`. If I do not input anything, I want `input` to be an empty list. How can I do that? This is because I want the count of the input to be zero when the input is empty.
2019/04/14
[ "https://Stackoverflow.com/questions/55674130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11211656/" ]
To add a sort of combination between GhostCat's answer and Dr Phil's answer, you could have a text file that contains the server IP that is simply read by your Java class at class-initialization time. The text file would be a *resource* (i.e. it's bundled inside in your JAR file). ``` public class Main { public static final String IP_SERVER; static { try (InputStream is = Main.class.getResourceAsStream("path/to/resource/file.txt")) { BufferedReader reader = new BufferedReader(new InputStreamReader(is)); IP_SERVER = reader.readLine(); } catch (IOException ex) { throw new UncheckedIOException(ex); } } } ``` The above code assumes your text file will have the IP information on the first line (and ignores any other lines). You will probably want to create the `InputStreamReader` with an explicit charset (the one used to encode your text file) rather than relying on the default charset (see the documentation of `InputStreamReader` for more details). To modify the text file you could use a similar approach to that described in Dr Phil's answer. The only difference here is that you're modifying a text file which means you don't need to recompile any source files. Instead of a text file, you could use a properties file (i.e. `*.properties`). You would then use the [`Properties`](https://docs.oracle.com/en/java/javase/12/docs/api/java.base/java/util/Properties.html) class to access it (again, as a *resource*).
Include that file in your project under its original package. While compiling, your class will then be picked up first instead of the one in client.jar. For this to work, your class files should come first in the compilation path, ahead of client.jar.
30,959
What are the proper assumptions of Multinomial Logistic Regression? And what are the best tests to satisfy these assumptions using SPSS 18?
2012/06/22
[ "https://stats.stackexchange.com/questions/30959", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12161/" ]
The key assumption in the MNL is that the errors are independently and identically distributed with a Gumbel extreme value distribution. The problem with *testing* this assumption is that it is made *a priori*. In standard regression you fit the least-squares curve, and measure the residual error. In a logit model, you assume that the error is already in the measurement of the point, and compute a likelihood function from that assumption. An important assumption is that the sample be exogenous. If it is choice-based, there are corrections that need to be employed. As far as assumptions on the model itself, [Train](http://elsa.berkeley.edu/books/choice2nd/Ch03_p34-75.pdf) describes three: 1. Systematic, and non-random, taste variation. 2. Proportional substitution among alternatives (a consequence of the IIA property). 3. No serial correlation in the error term (panel data). The first assumption you mostly just have to defend in the context of your problem. The third is largely the same, because the error terms are purely random. The second is testable to a certain extent, however. If you specify a nested logit model, and it turns out that the inter-nest substitution pattern is entirely flexible ($\lambda = 1$) then you could have used the MNL model, and the IIA assumption is valid. But remember that the log-likelihood function for the nested logit model has local maxima, so you should make sure that you get $\lambda =1$ consistently. As far as doing any of this in SPSS, I can't help you other than suggest you use the `mlogit` package in R instead. Sorry.
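The IIA property behind point 2 can be read off directly from the MNL choice probabilities (standard notation: $V_{ni}$ is the systematic utility of alternative $i$ for decision-maker $n$):

```latex
P_{ni} \;=\; \frac{e^{V_{ni}}}{\sum_{j} e^{V_{nj}}}
\qquad\Longrightarrow\qquad
\frac{P_{ni}}{P_{nk}} \;=\; \frac{e^{V_{ni}}}{e^{V_{nk}}} \;=\; e^{V_{ni} - V_{nk}},
```

so the odds between any two alternatives do not depend on the presence or utilities of any other alternative in the choice set.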
gmacfarlane has been very clear. But to be more precise, and assuming you are performing a cross-section analysis, the core assumption is the IIA (independence of irrelevant alternatives). You cannot force your data to fit the IIA assumption; you should test it and hope for it to be satisfied. SPSS could not handle the test until 2010 for sure. R of course does it, but it might be easier for you to migrate to Stata and implement the IIA tests provided by the `mlogit` postestimation commands. If the IIA does not hold, mixed multinomial logit or nested logit are reasonable alternatives. The first can be estimated with `gllamm`, the second with the far more parsimonious `nlogit` command.
41,500,569
I have a file with too many data objects in JSON of the following form:

```
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {},
      "geometry": {
        "type": "Polygon",
        "coordinates": [
          [
            [
              -37.880859375,
              78.81903553711727
            ],
            [
              -42.01171875,
              78.31385955743478
            ],
            [
              -37.6171875,
              78.06198918665974
            ],
            [
              -37.880859375,
              78.81903553711727
            ]
          ]
        ]
      }
    },
    {
      "type": "Feature",
      "properties": {},
      "geometry": {
        "type": "Polygon",
        "coordinates": [
          [
            [
              -37.6171875,
              78.07107600956168
            ],
            [
              -35.48583984375,
              78.42019327591201
            ],
            [
              -37.880859375,
              78.81903553711727
            ],
            [
              -37.6171875,
              78.07107600956168
            ]
          ]
        ]
      }
    }
  ]
}
```

I would like to split the large file such that each features object would have its own file containing its type object and a features (coordinates) object. So essentially, I am trying to get many of these:

```
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {},
      "geometry": {
        "type": "Polygon",
        "coordinates": [
          [
            [
              -37.6171875,
              78.07107600956168
            ],
            [
              -35.48583984375,
              78.42019327591201
            ],
            [
              -37.880859375,
              78.81903553711727
            ],
            [
              -37.6171875,
              78.07107600956168
            ]
          ]
        ]
      }
    }
  ]
}
```
2017/01/06
[ "https://Stackoverflow.com/questions/41500569", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5329401/" ]
Here's a solution requiring just one invocation of `jq` and one of `awk`, assuming the input is in a file (input.json) and that the N-th component should be written to a file /tmp/file$N.json beginning with N=1: ``` jq -c '.features = (.features[] | [.]) ' input.json | awk '{ print > "/tmp/file" NR ".json"}' ``` An alternative to `awk` here would be `split -l 1`. If you want each of the output files to be "pretty-printed", then using a shell such as bash, you could (at the cost of n additional calls to jq) write: ``` N=0 jq -c '.features = (.features[] | [.])' input.json | while read -r json ; do N=$((N+1)) jq . <<< "$json" > "/tmp/file${N}.json" done ``` Each of the additional calls to jq will be fast, so this may be acceptable.
I haven't tested this code properly. But should provide you some idea on how you can solve the problem mentioned above ```js var json = { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": {}, "geometry": { "type": "Polygon", "coordinates": [ [ [ -37.880859375, 78.81903553711727 ], [ -42.01171875, 78.31385955743478 ], [ -37.6171875, 78.06198918665974 ], [ -37.880859375, 78.81903553711727 ] ] ] } }, { "type": "Feature", "properties": {}, "geometry": { "type": "Polygon", "coordinates": [ [ [ -37.6171875, 78.07107600956168 ], [ -35.48583984375, 78.42019327591201 ], [ -37.880859375, 78.81903553711727 ], [ -37.6171875, 78.07107600956168 ] ] ] } } ] } $(document).ready(function(){ var counter = 1; json.features.forEach(function(feature){ var data = {type: json.type, features: [feature]} var newJson = JSON.stringify(data); var blob = new Blob([newJson], {type: "application/json"}); var url = URL.createObjectURL(blob); var a = document.createElement('a'); a.download = "feature_" + counter + ".json"; a.href = url; a.textContent = "Download feature_" + counter + ".json"; counter++; document.getElementById('feature').appendChild(a); document.getElementById('feature').appendChild(document.createElement('br')); }); }); ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div id="feature"></div> ```
21,768,186
From this link I would like to display a MessageBox like the one in these [UI](https://msdn.microsoft.com/en-us/library/windows/desktop/dn742478.aspx) guidelines:

> Formatting will erase all the data on this disk.

On that dialog they named the button "Format"; I am not finding that option on a MessageBox. How is that done? Is this just a custom modal Window? This seems close, but does not rename a button: [MessageBox.Show](http://msdn.microsoft.com/en-us/library/ms598707%28v=vs.110%29.aspx). I know you are not supposed to use tags in the title, but there is already a "Custom MessageBox title" question; however, it deals with Forms.
2014/02/13
[ "https://Stackoverflow.com/questions/21768186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607314/" ]
This seems close: <http://www.codeproject.com/Articles/201894/A-Customizable-WPF-MessageBox>
There are some overloads on MessageBox.Show that can make that message: ``` MessageBox.Show("Formatting will erase all data on this disk. \nTo format the disk, click OK. To quit, click Cancel.", "Format Local Disk (F:)", MessageBoxButton.OKCancel, MessageBoxImage.Exclamation); ``` The `MessageBoxButton` enum lets you choose between `OK`, `OKCancel`, `YesNo`, and `YesNoCancel`. If you want different buttons than that I'm afraid you have to use the advice of the other answers and create your own window.
323,781
How do I easily open my current vim buffers/arguments in a separate window/tab each? I know about: ``` $ vim one.txt two.txt three.txt -O ``` However if I simply start vim with: ``` $ vim one.txt two.txt three.txt ``` How can I replicate this behaviour once I've already started vim?
2011/08/16
[ "https://superuser.com/questions/323781", "https://superuser.com", "https://superuser.com/users/41606/" ]
To split all buffers use `:sba` or `:vert sba`
How to convert buffers to windows/tabs: * buffers > horizontal windows: `:ba` (buffer all) * buffers > vertical windows: `:vert ba` (vertical buffer all) * buffers > tabs: `:tab ba` (tab buffer all) How to convert window/tabs to buffers: * windows > buffers: `:on`, `:only` (current window only) * tabs > buffers: `:tabo`, `:tabonly` (current tab only) If you want to convert all windows to tabs, first convert windows to buffers, then buffers to tabs: `:on | tab ba`
64,155,219
I have the following array:

```
a = ['1','2']
```

I want to convert this array into the below format:

```
a = [1,2]
```

How can I do that?
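Assuming the snippet is Python (the syntax matches), the conversion is a one-line comprehension or `map`:

```python
a = ['1', '2']
a = [int(x) for x in a]   # equivalently: list(map(int, a))
print(a)  # [1, 2]
```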
2020/10/01
[ "https://Stackoverflow.com/questions/64155219", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14207921/" ]
You could collect the data from both input arrays/lists into a set of pairs and then recollect the pairs back to two new lists (or clear and reuse existing `names`/`IDs` lists): ```java List<String> names = Arrays.asList("ben","david","jerry","tom","ben"); List<String> IDs = Arrays.asList("123","23456","34567","123","123"); // assuming that both lists have the same size // using list to store a pair Set<List<String>> deduped = IntStream.range(0, names.size()) .mapToObj(i -> Arrays.asList(names.get(i), IDs.get(i))) .collect(Collectors.toCollection(LinkedHashSet::new)); System.out.println(deduped); System.out.println("-------"); List<String> dedupedNames = new ArrayList<>(); List<String> dedupedIDs = new ArrayList<>(); deduped.forEach(pair -> {dedupedNames.add(pair.get(0)); dedupedIDs.add(pair.get(1)); }); System.out.println(dedupedNames); System.out.println(dedupedIDs); ``` Output: ``` [[ben, 123], [david, 23456], [jerry, 34567], [tom, 123]] ------- [ben, david, jerry, tom] [123, 23456, 34567, 123] ```
You could add your names one by one to a set as long as `Set.add` returns true and if it returns false store the index of that element in a list (indices to remove). Then sort the indices list in reverse order and use `List.remove(int n)` on both your names list and id list: ``` List<String> names = ... List<String> ids = ... Set<String> set = new HashSet<>(); List<Integer> toRemove = new ArrayList<>(); for(int i = 0; i< names.size(); i ++){ if(!set.add(names.get(i))){ toRemove.add(i); } } Collections.sort(toRemove, Collections.reverseOrder()); for (int i : toRemove){ names.remove(i); ids.remove(i); } System.out.println(toRemove); System.out.println(names); System.out.println(ids); ```
29,067,033
I've got a string with some **HTML** in it. I'd like to get the first two paragraphs ``` <p>content</p><p>content 2</p> ``` What would be the easiest way to do this?
2015/03/15
[ "https://Stackoverflow.com/questions/29067033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/446835/" ]
Two examples: ``` var string = "<p>content</p><p>content 2</p>"; // EXAMPLE 1 var parser = new DOMParser().parseFromString(string, "text/html"); var paragraphs = parser.getElementsByTagName('p'); for(var i=0; i<paragraphs.length; i++){ console.log( paragraphs[i].innerHTML ); // or use `outerHTML` } // "content" // "content 2" // EXAMPLE 2 var el = document.createElement("div"); el.insertAdjacentHTML('beforeend', string); var paragraphs = el.getElementsByTagName('p'); for(var i=0; i<paragraphs.length; i++){ console.log( paragraphs[i].innerHTML ); // or use `outerHTML` } // "content" // "content 2" ``` if you want instead to really get ``` // "<p>content</p>" // "<p>content 2</p>" ``` simply replace in both examples `innerHTML` with `outerHTML` if you want to manipulate the current HTMLParagraphElement than simply use the `paragraph[i]`
Create a DOM element and put your html in it, like so: ``` var element = document.createElement('div'); element.innerHTML = "<html><p>content</p></html>" ``` You can then get the desired parts as a NodeList: ``` element.getElementsByTagName('p'); ```
1,892,181
It is stated that infinity is the notation used to denote the greatest number, and that $\infty + \infty = \infty$. When my brother and I discussed this, we had the following argument. 1. I say "infinity is not a real number", but my brother disagrees with me, and my proof is like the one which follows: $$\infty + \infty = \infty$$ and $$\infty + 1 = \infty.$$ Since infinity is not equal to zero, if infinity is a real number, then by the 'cancellation law' $$\infty + \infty = \infty + 1,$$ and therefore $$\infty = 1,$$ but that is not true. Hence it is not a real number. Is this a correct proof? 2. As the discussion went on, my brother asked "why do we say $\infty + \infty = \infty$?" He gave a proof like this: "If infinity is the greatest number, then $\infty + \infty$ is again a greatest number, so we call it infinity." But my stand is: if $p$ is the greatest number, then $p + p = 2p$; therefore $2p$ is the greatest number, so how can you call $p$ the greatest number? Again he said to consider the statement "if $\infty$ is the greatest number then $\infty + \infty = \infty$". Since infinity is undefined, the statement is true. I accept it, but I don't know whether it is exactly true. I want to know: 3. What is infinity? At last I was very confused about infinity. Please, someone explain the three questions that I asked. Many thanks in advance.
2016/08/14
[ "https://math.stackexchange.com/questions/1892181", "https://math.stackexchange.com", "https://math.stackexchange.com/users/355833/" ]
Good that you raised this question: What is infinity? The fact that the symbol $\infty$ appears so frequently in calculus textbooks in the notations like $x \to \infty$ and $n \to \infty$ seems to suggests that it is to be treated on the same footing as $1,2, 3, \pi$ etc (i.e. treated as a real number like we use the notation $x \to 1$ or $x \to a$ for a real number $a$). First and foremost, we need to get rid of the myth that the notation $x \to a$ or $x \to \infty$ has a meaning in isolation. *Sorry! this notation has no meaning in isolation*. A notation $x \to a$ always comes as a part of a bigger notation like $$\lim\_{x \to a}f(x) = L$$ or as part of the phrase $$f(x) \to L\text{ as }x \to a$$ and note that in the above notations both $L, a$ can be replaced by symbols $\infty$ or $-\infty$. Same remarks apply to the notation $n \to \infty$. *The symbol $\infty$ has a meaning in a specific context and the meaning of $\infty$ in that context is given by a specific definition for that context. There is no meaning of the symbol $\infty$ by default in absence of a context and the related definition applicable to that context.* Adding symbols $\infty, -\infty$ to the set of real numbers to form extended real number system is a device used for technical convenience (mainly to reduce typing effort and writing concise books thereby reducing their understandability). This approach does not serve any purpose for a beginner in calculus who is trying sincerely to develop concepts of calculus. It is however suitable for those experienced in the art of calculus because they can do away with some extra effort of typing. As a beginner of calculus one should first try to learn about all the contexts where the symbol $\infty$ is used and then study very deeply the definition of that context. Unless you do this $\infty$ will always remain a confusing concept. 
Unfortunately most textbooks don't try to handle $\infty$ in that manner and rather start giving rules like $\infty + \infty = \infty$. I will provide a context here for use of $\infty$ and give its definition: *Let $f$ be a real valued function defined for all real values of $x > a$ where $a$ is some specific real number. The notation $\lim\_{x \to \infty}f(x) = L$ where $L$ is a real number means the following:* *For every given real number $\epsilon > 0$ there is a real number $N > 0$ such that $|f(x) - L| < \epsilon$ for all $x > N$.* The same meaning is conveyed by the phrase *$f(x) \to L$ as $x \to \infty$*. Another context for infinity is the phrase *$f(x) \to \infty$ as $x \to a$* whose meaning I will provide next. *Let $f$ be a real valued function defined in a certain neighborhood of $a$ except possibly at $a$. The phrase "$f(x) \to \infty$ as $x \to a$" means the following:* *For every real number $N > 0$ there exists a real number $\delta > 0$ such that $f(x) > N$ for all $x$ with $0 < |x - a| < \delta$.* The same meaning is conveyed by the notation $\lim\_{x \to a}f(x) = \infty$ but in this case I prefer to use the phrase equivalent as I hate to see the operations of $+,-,\times, /, =$ applied to $\infty$. You will notice that *understanding these definitions is a challenge*. And it requires reasonable amount of effort to really understand them. Having a copy of Hardy's *A Course of Pure Mathematics* would be a great help here because it explains these things in very great detail in a manner suitable for students of age 15-16 years. Now here is an exercise. *Using both the contexts try to give the definition for the phrase $f(x) \to \infty$ as $x \to \infty$.* And if you can do this then the next step would be to provide similar definitions for the contexts in which $-\infty$ occurs. The treatment of $n \to \infty$ happens slightly differently because by convention $n$ is assumed to be a positive integer unless otherwise stated. 
If you are able to supply the definitions required in last paragraph then you will also be able to supply the definition for the context $\lim\_{n \to \infty}s\_{n} = L$ where $s\_{n}$ is a sequence (i.e a real valued function whose domain is $\mathbb{N}$).
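For what it's worth, here is one plausible way to write out the definition the exercise above asks for, combining the two contexts (this is my own sketch, not a quote from Hardy):

```latex
% "f(x) -> infinity as x -> infinity" can be defined as:
% for every real N > 0 there exists a real M > 0 such that f(x) > N for all x > M.
\forall N > 0 \;\; \exists M > 0 : \quad f(x) > N \quad \text{for all } x > M
```

Note how the $\epsilon$ of the first context has been replaced by a bound $N$, and the $\delta$-neighborhood of the second by a threshold $M$.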
If you add $\pm\infty$ to the real numbers, the result is no longer what we call a "field", and therefore the cancellation law no longer necessarily holds. You can extend the real numbers to what we call the extended real numbers, but you cannot extend all of the algebraic operations as you might like. That's the root of all such contradictions, which are common: we see arguments like this on a daily basis.
33,365,805
I would like to have a Date Picker with only one wheel, filled with days as on the Date and Time mode. [![enter image description here](https://i.stack.imgur.com/lzxkP.jpg)](https://i.stack.imgur.com/lzxkP.jpg) If I choose the Date mode, I have 3 wheels and I loose the name of the day. In fact I would like something like the date and time mode, but only with the date. I could use a UIPickerView, but I would have to fill it by myself with a very big array of data, as I want to be able go back far in time. With the Date picker, it's filled automatically, which is cool. Is there a solution ?
2015/10/27
[ "https://Stackoverflow.com/questions/33365805", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5392802/" ]
You can also use **UIDatePicker**'s **datePickerMode** property to do that. ``` datePicker.datePickerMode = .Date ``` Look into the header file and you will see that there are a few other options you can play with. The other values are; ``` Time Date DateAndTime CountDownTimer ```
As you only need to show the contents that you have marked in the green box. You can use this <https://github.com/attias/AADatePicker> and modify it a bit and you can get it as per your requirement. The changes you will need to make are within the AADatePickerView Class 1) In viewForRow method comment the following things [![enter image description here](https://i.stack.imgur.com/XCGsS.png)](https://i.stack.imgur.com/XCGsS.png) 2) Second change [![enter image description here](https://i.stack.imgur.com/QH2zH.png)](https://i.stack.imgur.com/QH2zH.png) [![enter image description here](https://i.stack.imgur.com/z7a9N.png)](https://i.stack.imgur.com/z7a9N.png)
50,638,913
I am using [jscolor](http://jscolor.com/) to pick a color with an input. I wanted to style it, but I can't get rid of the hex value of the chosen color, and I didn't find any documentation about options. **What I've tried:** ``` //It shows nothing display: none //It works but then the dot isn't properly aligned font-size:0 //does nothing class="jscolor "{valueElement:null, value:"#e6e5f4"} ``` [jsfiddle](https://jsfiddle.net/7ayLhLpc/)
2018/06/01
[ "https://Stackoverflow.com/questions/50638913", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9114752/" ]
You're going to have to multiply each element, but something like this will ease the need to std::get on each element manually and multiply, and give the compiler a good chance to optimize. ``` #include <iostream> #include <type_traits> #include <tuple> #include <iomanip> template <typename T,typename Tuple, size_t... Is> void multiply_tuple_elements(T coeff, Tuple& tuple,std::index_sequence<Is...>) { using do_= int[]; (void)do_{0, ( std::get<Is>(tuple)*=coeff ,0)...}; } int main() { double coeff = 2.0; std::tuple<double, double, double> m_size = std::make_tuple(1.0, 1.0, 1.0); multiply_tuple_elements( coeff, m_size , std::make_index_sequence<std::tuple_size<decltype(m_size)>::value>() ); std::cout << std::fixed; std::cout << std::get<0>(m_size) << std::endl; std::cout << std::get<1>(m_size) << std::endl; std::cout << std::get<2>(m_size) << std::endl; return 0; } ``` [Demo](https://ideone.com/mVHvmX)
> 
> How can I do this more efficiently? 
> 
> 

In the sense of execution performance: you cannot... Every multiplication will involve the FPU of your CPU, and every multiplication needs to be done separately *there*. If you want to get simpler code (that, once compiled, still does the multiplications separately...), you might try to implement some kind of `for_each` for tuples, possibly based on this [answer](https://stackoverflow.com/a/6894436/1312382), such that you could write a one-liner like this: ``` foreach(m_size, [](double& v) { v *= 2; }); ``` Not sure if this fits *your* definition of efficiency, though... As the types are all the same, you could switch to a `std::array<double, 3>` and then use the standard library's `std::for_each` instead. This could even be further simplified by providing range-based wrappers (unfortunately not yet existing in the standard library): 

```
template <typename C, typename F>
void for_each(C& c, F&& f)
{
    // forward the callable instead of moving it, since F is a forwarding reference
    std::for_each(c.begin(), c.end(), std::forward<F>(f));
}
```
5,388
I am a mom who has made two changes to my children's education so far. I was not satisfied with the level of education at the first school, so we switched to a more rigorous academic environment. Unfortunately, the fit was horrible at the second school, including the teachers, admin and, most importantly, friends. So we went back to the first school and are wrapping up our second year there. The learning is still not at the level we would like, but our kids are happy and have a sense of belonging. I have also made friends with other parents who seem to be like-minded individuals overall. I often think of the saying that "If you surround yourself with bright people, you become smarter." There are minimal bright kids at the school. I want the best for my kids in all aspects of their development but feel that they are getting cut short on the academics. By moving them, we would all be cutting ourselves short on the social side of things. Should I stick it out and supplement their education from home? I am a teacher by profession but feel stressed out in having to play mom and teacher, like trying to teach your kid how to drive. I don't want to make the wrong decision and have to go back a third time... that would be terrible!
2012/06/26
[ "https://parenting.stackexchange.com/questions/5388", "https://parenting.stackexchange.com", "https://parenting.stackexchange.com/users/2869/" ]
From moving between various schools myself when I was younger, I would say that changing schools is a very big upset to learning - the child takes time to make new friends, settle in, understand the new curriculum etc. If you can supplement their learning at home, I would recommend doing that - being a teacher, you will probably be in a good place here to see what areas they aren't getting at school and to build on those at home. Especially if the current school has things mostly right but just isn't quite hitting the spot in some areas, this will be the least likely to cause issues, in my opinion. If you can get buy-in from the school to discuss the extent of their curriculum with you, you should be able to build on that to get their learning to where you want it to be.
You are obviously getting a lot of answers, this is a tough one. I taught preschool for two years and middle school (in what were supposedly highly rated, academically rigorous schools) for eight. I also taught twice exceptional kids for three (these are the ones that are often the targets of school "socialization" and most often bullied - sometimes even by their former teachers). My daughter began reading at three and at the age of five was measured as reading at a 5th grade level so we've had to make some similar decisions. In addition to the choices you mention, there are a lot of partial school options such as virtual schooling and home school cooperatives. We participate in a virtual school that has community outings, Wednesday is classroom day (with an accredited teacher that is NOT me), classes are two-grade splits, but it is a full classroom with kids relatively close in age, and field trips in which we can choose to participate frequently. Such communities can be found across the US. Also, Homeschooling does not have the social implications many people think it has. Our schools are not truly in the business of "socializing" our kids and a lot less socialization occurs than most people think. Yes, they learn to share and some conflict resolution happens, but that does not complete the picture of what needs to happen. If you think you'd like to consider Homeschooling, there are a few other questions that may be useful to you too. One is about the [pros and cons of homeschooling](https://parenting.stackexchange.com/q/5593/2876) as well as one about home-schoolers and [social events/extra-curriculars](https://parenting.stackexchange.com/q/594/2876) (Pay special attention to Hedgemage's answer). If home education isn't right for you, then I would definitely suggest supplementing. However, I wouldn't suggest supplementing with the stuff they are already doing. Rather, I would suggest supplementing in the areas the school is probably not even touching. 
Geography, a second language, History enrichment, music, theater, as a few examples - or do family reading books and introduce literature from the banned book lists that you don't have a problem with. Do lots of fun challenges, your kids can make cartesian divers and toothpick bridges for fun science activities with you. Try to frequently go on "field trips" and "outings" that will take you to an educational place and have fun while you are there together. . . Whatever you do, make it fun and for the whole family or your kids are likely to resent the extra "pencil pushing" and the fun will be completely gone from learning(Having taught in private schools, I've seen this happen to great kids). Days at school are long and your kids are likely to start having a lot of homework too in the not-so-distant future. Teach as you live your life. Whatever you decide, It will be what is right for you and your kids in the end, but be careful about "over supplementing" in too formal a way.
7,240,698
I'd like to test the value of an enumeration attribute of a DOORS object. How can this be done? And where can I find DXL documentation describing basic features like this?

```
if (o."Progress" == 0) // This does NOT work
{
  // do something
}
```
2011/08/30
[ "https://Stackoverflow.com/questions/7240698", "https://Stackoverflow.com", "https://Stackoverflow.com/users/89004/" ]
For multi-valued enumerations, the best way is `if (isMember(o."Progress", "0")) {`. The possible enumerations for single and multi-enumeration variables are considered strings, so Steve's solution is the best dxl way for the single enumeration.
If you're talking about the "related number" that is assignable from the Edit Types box, then you'll need to start by getting the position of the enumeration string within the enum and then retrieve `EnumName[k].value` . I'm no expert at DXL, so the only way to find the index that I know of off the top of my head is to loop over `1 : EnumName.size` and when you get a match to the enumeration string, use that loop index value to retrieve the enumeration "related number."
34,978,260
I've been trying for the last three days (yeah) to make a image/short video tagging system for my own use but this has proven a challenge beyond me. These are the strings: ``` d:\images\tagging 1\GIFs\kung fu panda, fight.webm d:\images\tagging 1\GIFs\kung fu panda, fight (2).webm d:\images\tagging 1\GIFs\kung fu panda 2, fight.webm d:\images\tagging 1\GIFs\kung fu panda 2, fight (2).webm d:\images\tagging 1\GIFs\pulp fiction, samuel l. jackson, angry, funny.webm ``` I have four things that I've tried modifying to achieve what I want with no success: ``` (?<=d:\\images\\tagging\s1\\GIFs\\)([\w\s])+ ([a-z0-9]\s?)+ (?<=\\)[^\\]*?(?=\..*$) [^\\/:*?"<>|\r\n]+$ ``` 1 Almost there, but it doesn't extend past the first comma. 2 This does almost everything, but I haven't found a way to exclude the directory, the (#) and the extension. 3 Taken from the internet, captures the "l." and stops there, whole filename, can't use commas as I want, captures (#). 4 Taken from regexbuddy (yes I actually bought it in my desperation), captures (#) and extension. @timgeb The intention is to get the filenames without the commas, the (#) and extension, so: ``` "kung fu panda" "fight" "kung fu panda" "fight" "kung fu panda 2" "fight" "kung fu panda 2" "fight" "pulp fiction" "samuel l. jackson" "angry" "funny" ```
2016/01/24
[ "https://Stackoverflow.com/questions/34978260", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5833369/" ]
Your question isn't very clear, but I *think* you want to parse filenames. If that's the case, I wouldn't recommend using `re` as your primary tool. Instead, have a look at [`os.path`](https://docs.python.org/3/library/os.path.html#module-os.path):

```
import os.path
# Or `import ntpath` for Windows paths on non-Windows systems

# Note the raw string (r'...') so backslash sequences like \t
# aren't interpreted as escape characters.
dir_name, file_name = os.path.split(r'd:\images\tagging 1\GIFs\kung fu panda, fight (2).webm')
# dir_name = 'd:\images\tagging 1\GIFs'
# file_name = 'kung fu panda, fight (2).webm'

root, ext = os.path.splitext(file_name)
# root = 'kung fu panda, fight (2)'
# ext = '.webm'
```

Now you have a much simpler problem: removing the numbers in parentheses.
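That remaining cleanup - stripping the "(n)" duplicate counter and splitting on commas - could be sketched like this (the `parse_tags` helper name is my own, not from the question):

```python
import ntpath
import re

def parse_tags(path):
    # Keep only the base filename, then drop the extension.
    root, _ = ntpath.splitext(ntpath.basename(path))
    # Remove a trailing "(n)" duplicate counter, if present.
    root = re.sub(r'\s*\(\d+\)\s*$', '', root)
    # Split on commas and trim surrounding whitespace.
    return [tag.strip() for tag in root.split(',')]

print(parse_tags(r'd:\images\tagging 1\GIFs\kung fu panda, fight (2).webm'))
# ['kung fu panda', 'fight']
```

Using `ntpath` (as the comment above suggests) keeps this working for Windows-style paths even when run on other platforms.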
Get the basename, substitute integers in parentheses and the extension with the empty string, and strip off the whitespace. 

```
from ntpath import basename
import re
map(str.strip, re.sub(r'\(\d+\)|\.\w+$', '', basename(s)).split(','))
```

Demo (note the raw strings, so backslashes in the paths aren't treated as escapes; this is Python 2 output - on Python 3, wrap the `map(...)` call in `list(...)`):

```
>>> s = r'd:\images\tagging 1\GIFs\kung fu panda, fight.webm'
>>> map(str.strip, re.sub(r'\(\d+\)|\.\w+$', '', basename(s)).split(','))
['kung fu panda', 'fight']
>>> s = r'd:\images\tagging 1\GIFs\kung fu panda, fight (2).webm'
>>> map(str.strip, re.sub(r'\(\d+\)|\.\w+$', '', basename(s)).split(','))
['kung fu panda', 'fight']
>>> s = r'd:\images\tagging 1\GIFs\kung fu panda 2, fight.webm'
>>> map(str.strip, re.sub(r'\(\d+\)|\.\w+$', '', basename(s)).split(','))
['kung fu panda 2', 'fight']
>>> s = r'd:\images\tagging 1\GIFs\kung fu panda 2, fight (2).webm'
>>> map(str.strip, re.sub(r'\(\d+\)|\.\w+$', '', basename(s)).split(','))
['kung fu panda 2', 'fight']
>>> s = r'd:\images\tagging 1\GIFs\pulp fiction, samuel l. jackson, angry, funny.webm'
>>> map(str.strip, re.sub(r'\(\d+\)|\.\w+$', '', basename(s)).split(','))
['pulp fiction', 'samuel l. jackson', 'angry', 'funny']
```
12,849,717
I'm trying to expand this div across with width of the browser. I've read from [here](https://stackoverflow.com/questions/5590214/make-child-div-stretch-across-width-of-page "here") that you can use `{position:absolute; left: 0; right:0;}` to achieve that as in the jsfiddle here: <http://jsfiddle.net/bJbgJ/3/> But the problem is that my current `#container` has a `{position:relative;}` property, and hence if I apply `{position:absolute}` to the child div, it would only refer to `#container`. Is there a way to still extend my child div beyond the `#container`?
2012/10/11
[ "https://Stackoverflow.com/questions/12849717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1411027/" ]
You can try adding `overflow:visible` to the parent `div`, then making the child wider than the parent.
None of the answers above mention the best way of doing this: use `position:relative` on the full-width container and `position:absolute; left:0; width:100%;` on a div inside it that sits inside any centralised/offset div. As long as no other containers in between have declared `position:relative`, this will work.
31,738,762
I have a div which have 3 inputs: ``` <div class="excelPreview"> <a href="#" id="getContent" class="btn btn-primary">get</a> <a href="#" id="calculate" class="btn btn-primary">calculate</a> <div class="getThis"> Title <div>ID</div> <div>Name</div> <p>Content here....</p> <input type="text" id="num1"> + <input type="text" id="num2"> = <input type="text" id="total"> </div> </div> ``` the result of num1 is added to num2 and put in the total textbox. I use this to add ``` $( "#calculate" ).on( "click", function() { num1 = $("#num1").val(); num2 = $("#num2").val(); total = parseInt(num1) + parseInt(num2); $("#total").val(total); }); ``` I use this code to get the html contents: ``` var cont = $('.getThis').html(); ``` but it only gets the content not the values of textbox included like this Title ``` <div>ID</div> <div>Name</div> <p>Content here....</p> <input type="text" id="num1"> + <input type="text" id="num2"> = <input type="text" id="total"> ``` I want the result is should be: Title ``` <div>ID</div> <div>Name</div> <p>Content here....</p> <input type="text" id="num1" value="OfWhatIInput"> + <input type="text" id="num2" value="OfWhatIInput"> = <input type="text" id="total" value="OfWhatIsTheResult"> ```
2015/07/31
[ "https://Stackoverflow.com/questions/31738762", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4298298/" ]
You can use `clone`. `clone` will copy the whole div with all the events also. ``` $('#getContent').on('click', function() { var cont = $('.getThis').clone(true); $(".excelPreview").append(cont); }) ``` [Fiddle](https://jsfiddle.net/j61t7qww/)
```
// parse the inputs as numbers, otherwise + concatenates the strings
var sum = parseInt($('#num1').val()) + parseInt($('#num2').val());
$('#total').val(sum);
```
72,814,362
``` pragma solidity >=0.5.0 <0.6.0; contract ZombieFactory { uint dnaDigits = 16; uint dnaModulus = 10 ** dnaDigits; struct Zombie { string name; uint dna; } Zombie[] public zombies; function createZombie (string memory _name, uint _dna) public { // start here } } ``` Here I am confused because as per this post <https://ethereum.stackexchange.com/questions/1701/what-does-the-keyword-memory-do-exactly?newreg=743a8ddb20c449df924652051c14ef26> *"the local variables of struct are by-default in storage, but the function arguments are always in memory"*. So does it mean that in this code when we pass string \_name as a function argument, it will be assigned to memory or will it remain in the storage like all other state variables?
2022/06/30
[ "https://Stackoverflow.com/questions/72814362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15906729/" ]
IIUC, to get all negative values use [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing):

```
out = s[s.lt(0)]
```

If you need absolute values, use [`Series.abs`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.abs.html):

```
out = s.abs()
```

If you need to replace negative values with `0`, use [`Series.clip`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.clip.html):

```
out = s.clip(lower=0)
print (out)
mandant  brand
WELT     N24 DOKU    0.0
         N24 DOKU    0.0
Name: a, dtype: float64
```

Or to replace them with some other value, use [`Series.mask`](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mask.html):

```
out = s.mask(s.lt(0), 1)
print (out)
mandant  brand
WELT     N24 DOKU    1.0
         N24 DOKU    1.0
Name: a, dtype: float64
```
put array indexes according to your code ``` df.index.get_level_values(0).drop_duplicates()[-2:] ```
27,487,972
How do I make a .jar file out of the [Volley project](https://developer.android.com/training/volley/index.html) ([git repository](https://android.googlesource.com/platform/frameworks/volley))? I have tried to follow the instructions in [this answer](https://stackoverflow.com/a/16721116/1274911), however running `android update project -p .` in the cloned `volley` folder throws this error: ``` Error: . is not a valid project (AndroidManifest.xml not found). ```
2014/12/15
[ "https://Stackoverflow.com/questions/27487972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1274911/" ]
The build process for Volley has changed to Gradle. If you just want to use the library without building it, you can get the Jar from Maven or scroll down to the instructions for building it yourself lower in this answer. **Maven** An easier way to obtain the Jar file is to download it directly from Maven Central. You can find the latest version with this search: <http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.mcxiaoke.volley%22> At the time of writing the 1.0.19 version can be found here: <http://search.maven.org/remotecontent?filepath=com/mcxiaoke/volley/library/1.0.19/library-1.0.19.jar> --- **Gradle** The new way to do it is to build the project using Gradle. You can do this by running:

```
git clone https://android.googlesource.com/platform/frameworks/volley
cd volley
gradle build
```

This will create a file in

```
build\intermediates\bundles\release
```

Then add this file into your libs folder and add it to your project.
@Sateesh G answer is the better answer. Here are the steps I used to build volley as an aar on OS X. (Assuming you already have git, gradle, and android dev tools setup and working) ``` export ANDROID_HOME=~/.android-sdk/android-sdk-macosx git clone https://android.googlesource.com/platform/frameworks/volley cd volley ``` Configured roblectric and its dependencies ``` echo 'emulateSdk=18'>>src/test/resources/org.robolectric.Config.properties cat<<END>rules.gradle allprojects { repositories { jcenter() } } dependencies { testCompile 'junit:junit:4.12' testCompile 'org.apache.maven:maven-ant-tasks:2.1.3' testCompile 'org.mockito:mockito-core:1.9.5' testCompile 'org.robolectric:robolectric:2.4' } END ``` Build the aar ``` gradle wrapper ./gradlew clean build ``` Output files will be located under ``` build/outputs/aar ``` Finally to include the resulting aar files in your Android project; copy the volley-release.aar file to the libs subdirectory under your project and add the following to your projects build.gradle file ``` repositories { flatDir { dirs 'libs' } } compile('com.android.volley:volley-release:1.0.0@aar') ```
10,621,749
I am trying to populate a drop-down list. I have the following:

```
$('#dd_deg ').append($('<option></option>').val('hello').html('va1'));
```

What is the purpose of `.val` and `.html`? I see that `va1` is what shows in the drop-down, but what is the purpose of `.val`?
2012/05/16
[ "https://Stackoverflow.com/questions/10621749", "https://Stackoverflow.com", "https://Stackoverflow.com/users/996431/" ]
According to the docs, [.val(value)](http://api.jquery.com/val/) > > Set the value of each element in the set of matched elements. > > > And [.html(htmlString)](http://api.jquery.com/html/) > > Set the HTML contents of each element in the set of matched elements. > > > You typically use val to get/set the value of input elements, and html to get/set the inner html of any element.
`val()` sets the `value` attribute of the element, as opposed to `html(val)`, which sets the inner HTML of the element and, frankly, should be replaced with `text()` since it's a purely textual change/setting. Incidentally, this question could have been easily enough answered by a visit to the jQuery API site; in most cases you simply use the following approach:

```
http://api.jquery.com/<name of method you want information about>/
```

So, in the above question, you'd use the URL:

```
http://api.jquery.com/val/
```

References: * [`html()`](http://api.jquery.com/html/). * [`text()`](http://api.jquery.com/text/). * [`val()`](http://api.jquery.com/val/).
3,692,042
I have been asked this question by a colleague: should we always include a default constructor in a class? If so, why? If not, why not? ***Example*** ``` public class Foo { Foo() { } Foo(int x, int y) { ... } } ``` I am also interested to get some light on this from experts.
2010/09/11
[ "https://Stackoverflow.com/questions/3692042", "https://Stackoverflow.com", "https://Stackoverflow.com/users/309343/" ]
You have to keep in mind that if you don't provide an overloaded constructor, the compiler will generate a default constructor for you. That means, if you just have ``` public class Foo { } ``` The compiler will generate this as: ``` public class Foo { public Foo() { } } ``` However, as soon as you add the other constructor ``` public class Foo { public Foo(int x, int y) { // ... } } ``` The compiler will no longer automatically generate the default constructor for you. If the class was already being used in other code which relied on the presence of a default constructor, `Foo f = new Foo();`, that code would now break. If you don't want someone to be able to initialize the class without providing data you should create a default constructor which is `private` to be explicit about the fact that you are preventing instances from being constructed with no input data. There are times, however, when it is necessary to provide a default constructor (whether public or private). As was previously mentioned, some types of serialization require a default constructor. There are also times when a class has multiple parameterized constructors but also requires "lower level" initialization, in which case a private default constructor can be used which is chained in from the parameterized constructors. ``` public class Foo { private Foo() { // do some low level initialization here } public Foo(int x, int y) : this() { // ... } public Foo(int x, int y, int z) : this() { // ... } } ```
A generic type can only be instantiated with C# means (without reflection) if it has a default constructor. Also, the `new()` generic type constraint has to be specified: ``` void Construct<T>() where T : new() { var t = new T(); ... } ``` Calling this method using a type as generic type argument that has no default constructor results in a compiler error.
30,113,663
I have a rather simple question with an inkling as to what the answer is. My generalized question: What is actually going on when you declare a member variable, be it public or private, and for all permutations of variable types, e.g. static vs const vs regular variables? ``` class some_class { private: static const std::string str; public: ... } ``` I have kind of realized that in C++ there is no notion of a non-variable, that is, a non-constructed variable as I was kind of taught to believe exists with languages like Java. The same may also be true in Java, however it is not the way I was taught to think of things so I'm trying to come up with the correct way to think of these non-initialized variables. ``` public class main { public static void main(String[] args) { String str; // A kind of non-variable, or non-constructed variable (refers to null). str = new String(); // Now this variable actually refers to an object rather than null, it is a constructed variable. } } ``` Since C++ allows you to initialize member variables in the constructor through initializer lists, and I have proven to myself via use of a debugger that the variable doesn't exist before it is initialized through the initializer list (either explicitly or by default), what is, then, actually going on behind the scenes when you declare the member variable?
2015/05/08
[ "https://Stackoverflow.com/questions/30113663", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1033323/" ]
Tricky question -- it's ambiguous depending on perspective. From a pseudo-machine perspective, normally adding a non-static plain old data type to a class makes that class type bigger. The compiler also figures out how to align it and relative memory offsets to address it relative to the object in the resulting machine code. This is pseudo-machine level because at the machine level, data types don't actually exist: just raw bits and bytes, registers, instructions, things like that. When you add a non-primitive user-defined type, this recurses and the compiler generates the instructions to access the members of the member and so on. From a higher level, adding members to a class makes the member accessible from instances (objects) of the class. The constructor initializes those members, and the destructor destroys them (recursively triggering destructors of members that have non-trivial destructors, and likewise for constructors in the construction phase). Yet your example is a static member. For static members, they get stored in a data segment at the machine level and the compiler generates the code to access those static members from the data segment. Some of this might be a bit confusing. C++ shares its legacy with C which is a hardware-level language, and its static compilation and linking affects its design. So while it can go pretty high-level, a lot of its constructs are still tied to how the hardware, compiler, and linker does things, whereas in Java, the language can make some more sensible choices in favor of programmer convenience without a language design that somewhat reflects all of these things.
Yes and no. A variable of class type in Java is *really* a pointer. Unlike C and C++ pointers, it doesn't support pointer arithmetic (but that's not essential to being a pointer--for example, pointers in Pascal didn't support arithmetic either). So, when you define a variable of class type in Java: `String str;`, it's pretty much equivalent to defining a pointer in C++: `String *str;`. You can then assign a new (or existing) String object to that, as you've shown. Now, it's certainly *possible* to achieve roughly the same effect in C++ by explicitly using a pointer (or reference). There are differences though. If you use a pointer, you have to explicitly dereference that pointer to get the object to which it refers. If you use a reference, you *must* initialize the reference--and once you do so, that reference can never refer to any object other than the one with which it was initialized. There are also some special rules for `const` variables in C++. In many cases, where you're just defining a symbolic name for a value: ``` static const int size = 1234; ``` ...and you never use that variable in a way that requires it to have an address (e.g., taking its address), it usually won't be assigned an address at all. In other words, the compiler will know the value you've associated with that name, but when compilation is finished, the compiler will have substituted the value anywhere you've used that name, so the variable (as such) basically disappears (though if you have the compiler generate debugging information, it'll usually retain enough to know and display its name/type correctly). C++ does have one other case where a variable is a *little* like a Java "zombie" object that's been declared but not initialized. If you move from an object: `object x = std::move(y);`, after the move is complete the source of the move (`y` in this case) can be in a rather strange state where it exists, but *about* all you can really do with it is assign a new value to it. 
Just for example, in the case of a string, it *might* be an empty string--but it also *could* retain exactly the value it had before the move, or it *could* contain some other value (e.g., the value that the destination string held before the move). Even that, however, is a little bit different--even though you don't *know* its state, it's still an object that should maintain the invariants of its class--for example, if you move from a string, and then ask for the string's length, that length should match up with what the string actually contains--if (for example) you print it out, you don't know what string will print out, but you should *not* get an equivalent of a `NullPointerException`--if it's an empty string, it just won't print anything out. If it's a non-empty string, the length of the data that's printed out should match up with what its `.size()` indicates, and so on. The other obviously similar C++ type would be a pointer. An uninitialized pointer does not point to an object. The pointer *itself* exists though--it just doesn't refer to anything. Attempting to dereference it could give some sort of error message telling you that you've attempted to use a null pointer--but unless it has static storage duration, or you've explicitly initialized it, there's no guarantee that it'll be a null pointer either--attempting to dereference it could give a garbage value, throw an exception, or almost anything else (i.e., it's undefined behavior).
3,972,240
The subject of this question speaks for itself. I am wondering if Fluent NHibernate is ready for production code. I am especially wondering in light of some seemingly simple problems that I am having with it that I haven't yet found fully satisfactory solutions for (and the community doesn't have a solution for?) [Why is Fluent NHibernate ignoring my convention?](https://stackoverflow.com/questions/3877932/why-is-fluent-nhibernate-ignoring-my-convention) [Why is Fluent NHibernate ignoring my unique constraint on a component?](https://stackoverflow.com/questions/3901553/why-is-fluent-nhibernate-ignoring-my-unique-constraint-on-a-component) Yes, I am aware of this [old question](https://stackoverflow.com/questions/1026401) which is more than a year old; the answer seems to be kinda-sorta-maybe. Is Fluent NHibernate ready for production now?
2010/10/19
[ "https://Stackoverflow.com/questions/3972240", "https://Stackoverflow.com", "https://Stackoverflow.com/users/45914/" ]
By what metric do you measure "production ready"? How is production any more stringent than other environments? Only you can decide if it meets your needs. Your first question you have a workaround for. Fluent NHibernate is open source; if people aren't dying because of a bug (aka, there's a workaround available), it's unlikely our finite resources will be spent on it when there are more important things to be working on. Enums are a known issue, primarily because 50% of people expect them to be mapped as ints, and the others expect strings; either way, one party is going to think that the implementation is a bug. Your second question looks like a bug. Funnily enough, the Fluent NHibernate developers don't trawl Stack Overflow for possible bugs. If you don't tell us that a bug exists, we won't be able to fix it; sadly, I'm not psychic. Fluent NHibernate is past 1.0, which is quite a significant milestone for an OSS project, and is in use in hundreds of production applications. Whether that makes it "production ready" can only be decided by you. If you don't think it's production ready yet, it's open source and we're always looking for contributors.
This kind of question really should be asked over on their google group page: <http://groups.google.com/group/fluent-nhibernate>. Being an open source project that is constantly evolving with NHibernate itself, it will almost always be in a semi-flux state, especially with NH3 coming soon.
21,375,148
I have strings like these, for example: '1 hour' '5 mins' '1 day' '30 secs' '4 hours' These strings represent the time passed since something. I want to convert them to the time (DateTime) at which it happened. I tried to pass them to TimeSpan.Parse but it throws an exception... What is the best way to do something like that? Thanks
2014/01/27
[ "https://Stackoverflow.com/questions/21375148", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1612018/" ]
You may try using a *Dictionary* for all the *names* that are used: ``` public static TimeSpan ParseTimeSpan(String value) { // Expand dictionary with values you're using, e.g. // "second", "minute", "week" etc. Dictionary<String, long> seconds = new Dictionary<String, long>() { {"days", 86400}, {"day", 86400}, {"hours", 3600}, {"hour", 3600}, {"mins", 60}, {"min", 60}, {"secs", 1}, {"sec", 1} }; String[] items = value.Split(); long result = 0; for (int i = 0; i < items.Length - 1; i += 2) result += long.Parse(items[i]) * seconds[items[i + 1]]; return TimeSpan.FromSeconds(result); } ... TimeSpan result = ParseTimeSpan("1 hour 15 mins 32 secs"); ```
The second part in the string is the *units*. Dunno what format you will have, but all those listed can be split and parsed like this: ``` // text is "1 hour" or "5 mins" or "1 day" or "30 secs" or "4 hours" var item = text.Split(new char[] {' '}); var value = int.Parse(item[0]); var unit = item[1]; // get lowest units, in our case it is seconds if(unit.StartsWith("min")) value *= 60; else if(unit.StartsWith("hour")) value *= 60 * 60; else if(unit.StartsWith("day")) value *= 60 * 60 * 24; // now that we have seconds, we can convert it into a timespan var timespan = new TimeSpan(0, 0, value); ``` You could do it for *ms* or even *ticks*.
299,249
I am struggling with testing a method that uploads documents to Amazon S3, but I think this question applies to any non-trivial API/external dependency. I've only come up with three potential solutions but none seem satisfactory: 1. Do run the code, actually upload the document, check with AWS's API that it has been uploaded and delete it at the end of the test. This will make the test very slow, will cost money every time the test is run and won't always return the same result. 2. Mock S3. This is super hairy because I have no idea about that object's internals and it feels wrong because it's way too complicated. 3. Just make sure that MyObject.upload() is called with the right arguments and trust that I am using the S3 object correctly. This bothers me because there is no way to know for sure I used the S3 API correctly from the tests alone. I checked how Amazon tests their own SDK and they do mock everything. They have a 200-line helper that does the mocking. I don't feel it's practical for me to do the same. How do I solve this?
2015/10/07
[ "https://softwareengineering.stackexchange.com/questions/299249", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/199400/" ]
You need to do both. Running, uploading and deleting is an integration test. It interfaces with an external system and can therefore be expected to run slow. It should probably not be part of every single build you do locally, but it should be part of a CI build or nightly build. That offsets the slowness of those tests and still provides the value of having it tested automatically. You also need unittests that run more quickly. Since it is generally smart to not hard-depend on an external system too much (so you can swap out implementations or switch over) you should probably try and write a simple interface over S3 that you can code against. Mock that interface in unittests so you can have quick-running unittests. The first tests check that your code actually works with S3, the second tests that your code correctly calls the code that talks to S3.
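To make the second half of this concrete, here is a minimal sketch (in Python, with hypothetical names, since the question does not name a language) of coding against a thin storage interface and using an in-memory fake in the fast unit-test suite:

```python
class Storage:
    """Tiny interface our code depends on instead of the raw S3 SDK."""
    def put(self, key, data):
        raise NotImplementedError

    def get(self, key):
        raise NotImplementedError


class InMemoryStorage(Storage):
    """Fake used in unit tests: behaves like a bucket, runs instantly."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


def upload_report(storage, report_id, body):
    """Code under test: only talks to the Storage interface."""
    key = "reports/%s.txt" % report_id
    storage.put(key, body)
    return key


# Unit test: fast, no network, deterministic.
fake = InMemoryStorage()
key = upload_report(fake, 42, b"hello")
assert key == "reports/42.txt"
assert fake.get(key) == b"hello"
```

The real S3-backed implementation of the same interface is what the slower integration suite exercises.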
Adding to the previous answers, the main question is whether (and how) you want to mock the S3 API for your tests. Instead of manually mocking individual S3 responses, you can take advantage of some very sophisticated existing mocking frameworks. For instance [moto](https://github.com/spulec/moto) provides functionality that is very similar to the actual S3 API. You could also take a look at **[LocalStack](https://github.com/atlassian/localstack)**, a framework which combines existing tools and provides a fully functional local cloud environment (including S3) that facilitates integration testing. Although some of these tools are written in other languages (Python), it should be easy to spin up the test environment in an external process from your tests in, say, Java/JUnit.
2,205,455
In Solaris, how do I detect a broken socket in a send() call? I don't want to use signals. I tried SO\_NOSIGPIPE and MSG\_NOSIGNAL but both are not available in Solaris, and my program is getting killed with a "broken pipe" error. Is there any way to detect a broken pipe? Thanks!
2010/02/05
[ "https://Stackoverflow.com/questions/2205455", "https://Stackoverflow.com", "https://Stackoverflow.com/users/258479/" ]
Headings are what their name suggests: they should be used for **headings** or **titles**. Headings make text bold as well as different sizes based on their level. For any other piece of text, you have to decide whether you want to make it bold or not. Also, heading tags are good for **search engine optimization**; SEOs usually put the page titles or important keywords inside these heading tags. If you simply want something to appear bold, use the <**strong**> tags instead. What I would suggest is: **You should use headings for the titles or important keywords for SEO purposes, and you should use the other bold-type tags such as *b* or *strong* at your own discretion when you want to make something appear bold.** **Example:** ``` <h2>Amazing Laptops</h2> <p> We deal in the best <strong>quality laptops</strong> you will ever come across. </p> ``` **Bottom Line:** There is a world of difference between bold-type tags such as **strong** and heading tags. They serve different purposes; you cannot compare them.
Unfortunately there is no definitive formula for whether some piece of text should be a heading or not. If you don't understand what your client is looking for, I suggest you ask the client. We have even less idea since we haven't seen the data.
50,821,414
I thought that part of the appeal of `table-layout:fixed` was that you could set your cell widths to be whatever you want and the browser would blindly accept them. I have a situation where I have a containing div set to 900px width. In it is a table, with 4 columns, each set to 300px width. The div has a background colour and is set to overflow:visible. The result should be that the third column's right hand edge lines up with the right hand edge of the div, and the fourth column bursts out of the div. But instead all four columns show *inside* the div, at about 225px each. What can I do to alleviate this problem? Thanks!
2018/06/12
[ "https://Stackoverflow.com/questions/50821414", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1058739/" ]
tl;dr ===== * Do not use `DateUtil` whatever that is. (Perhaps Apache DateUtils library?) * Do not use terrible old date-time classes such as `java.util.Date`. * Use the modern industry-leading *java.time* classes. Code for parsing a string lacking an offset, then assigning an offset of zero for UTC itself. ``` LocalDateTime // Represents a date and a time-of-day but without any concept of time zone or offset-from-UTC. NOT a moment, NOT a point on the timeline. .parse( "201801011000" , DateTimeFormatter.ofPattern( "uuuuMMddHHmm" ) ) .atOffset( ZoneOffset.UTC ) // Assign an offset-from-UTC. Do this only if you are CERTAIN this offset was originally intended for this input but was unfortunately omitted from the text. Returns an `OffsetDateTime`. .toInstant() // Extract an `Instant` from the `OffsetDateTime`. Basically the same thing. But `Instant` is always in UTC by definition, so this type is more appropriate if your intention is to work only in UTC. On the other hand, `Instant` is a basic class, and `OffsetDateTime` is more flexible such as various formatting patterns when generating `String` object to represent its value. ``` Using *java.time* ================= The modern approach in Java uses the *java.time* classes. This industry-leading framework supplanted the terribly troublesome old date-time classes such as `Date`, `Calendar`, and `SimpleDateFormat`. `DateTimeFormatter` ------------------- Parse your input string. Define a formatting pattern to match. ``` DateTimeFormatter f = DateTimeFormatter.ofPattern( "uuuuMMddHHmm" ) ; String input = "201801011000" ; ``` `LocalDateTime` --------------- Parse as a `LocalDateTime` because your input lacks an indicator for time zone or offset-from-UTC. ``` LocalDateTime ldt = LocalDateTime.parse( input , f ) ; ``` Lacking a zone or offset means this does *not* represent a moment, is *not* a point on the timeline. 
Instead, this represents *potential* moments along a range of about 26-27 hours, the range of time zones around the globe. `OffsetDateTime` ---------------- If you know for certain that this date and time-of-day were intended to represent a moment in UTC, apply the constant `ZoneOffset.UTC` to get an `OffsetDateTime` object. ``` OffsetDateTime odt = ldt.atOffset( ZoneOffset.UTC ) ; ``` `ZonedDateTime` --------------- Your Question is vague. It sounds like you might know of a specific time zone intended for this input. If so, assign a `ZoneId` to get a `ZonedDateTime` object. Understand that an offset-from-UTC is but a mere number of hours, minutes, and seconds. Nothing more, nothing less. In contrast, a time zone is much more. A time zone is a history of past, present, and future changes to the offset used by the people of a certain region. Specify a [proper time zone name](https://en.wikipedia.org/wiki/List_of_tz_zones_by_name) in the format of `continent/region`, such as [`America/Montreal`](https://en.wikipedia.org/wiki/America/Montreal), [`Africa/Casablanca`](https://en.wikipedia.org/wiki/Africa/Casablanca), or `Pacific/Auckland`. Never use the 3-4 letter abbreviation such as `EST` or `IST` as they are *not* true time zones, not standardized, and not even unique(!). ``` ZoneId z = ZoneId.of( "Africa/Tunis" ) ; ZonedDateTime zdt = ldt.atZone( z ) ; ``` `Instant` --------- A quick way to adjust back into UTC is to extract an `Instant` object. An `Instant` is always in UTC. ``` Instant instant = zdt.toInstant() ; ``` ISO 8601 -------- Tip: Instead of using a custom format for exchanging date-time values as text, use only the standard [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) formats. The standard formats are practical, easy to parse by machine, easy to read by humans across cultures. The *java.time* classes use the ISO 8601 formats by default when parsing/generating strings.
The `ZonedDateTime::toString` method wisely extends the standard to append the name of the zone in square brackets. ``` Instant instant = Instant.parse( "2018-07-23T16:18:54Z" ) ; // `Z` on the end means UTC, pronounced “Zulu”. String output = instant.toString() ; // 2018-07-23T16:18:54Z ``` And always include the offset and time zone in your string. Omitting the offset/zone for a moment is like omitting the currency for a price: All you have left is an ambiguous number worth nothing. Actually, worse than nothing as it can cause all sorts of confusion and errors. --- About *java.time* ================= The [*java.time*](http://docs.oracle.com/javase/10/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/10/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/10/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/10/docs/api/java/text/SimpleDateFormat.html). The [*Joda-Time*](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/10/docs/api/java/time/package-summary.html) classes. To learn more, see the [*Oracle Tutorial*](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310). You may exchange *java.time* objects directly with your database. Use a [JDBC driver](https://en.wikipedia.org/wiki/JDBC_driver) compliant with [JDBC 4.2](http://openjdk.java.net/jeps/170) or later. No need for strings, no need for `java.sql.*` classes. Where to obtain the java.time classes? 
* [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), [**Java SE 10**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_10), and later + Built-in. + Part of the standard Java API with a bundled implementation. + Java 9 adds some minor features and fixes. * [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7) + Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/). * [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system)) + Later versions of Android bundle implementations of the *java.time* classes. + For earlier Android (<26), the [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/) (mentioned above). See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706). The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html).
You should use the latest classes [`java.time`](https://docs.oracle.com/javase/10/docs/api/java/time/package-summary.html) provided since Java 8. Steps are as follows: **Step 1.** Parse the `String` to a `LocalDateTime`. **Step 2.** Convert the `LocalDateTime` to a `ZonedDateTime`; then we can convert between different time zones. **Hope this helps:** **In Mirth you can write it as:** ``` String str = "201207011000"; var date_in_utc =java.time.format.DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm") .format(java.time.ZonedDateTime.of(java.time.LocalDateTime .parse(str,java.time.format.DateTimeFormatter .ofPattern("yyyyMMddHHmm")),java.time.ZoneId.of("CET")) .withZoneSameInstant(java.time.ZoneOffset.UTC)); ``` **Full Snippet:** ``` ZoneId cet = ZoneId.of("CET"); String str = "201207011000"; DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyyMMddHHmm"); LocalDateTime localtDateAndTime = LocalDateTime.parse(str, formatter); ZonedDateTime dateAndTimeInCET = ZonedDateTime.of(localtDateAndTime, cet ); System.out.println("Current date and time in a CET timezone : " + dateAndTimeInCET); ZonedDateTime utcDate = dateAndTimeInCET.withZoneSameInstant(ZoneOffset.UTC); System.out.println("Current date and time in UTC : " + utcDate); System.out.println("Current date and time in UTC : " + DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm").format(utcDate)); ```
4,511,266
**Problem** Differentiate the following inverse trig. function for $x\in (-1, 1)$: $$f(x) = \sec^{-1}\left(\sqrt{1+x^2}\right)$$ **Attempting to solve** Putting $x = \tan(\theta)$ so that $\theta \in \left(\dfrac{-\pi}{4}, \dfrac{\pi}{4}\right)$ $$\begin{align}\implies f(x) & = \sec^{-1}\left(\sqrt{1+\tan^2\theta}\right)\\&=\sec^{-1}\left(\sqrt{\sec^2\theta}\right)\\&=\sec^{-1}\left(|\sec\theta|\right)\tag{$\*$}\end{align}$$ This is equal to $\theta$ only when $\theta \in \left(0, \pi\right)$. I'm not getting it... what to do when $\theta$ doesn't lie in that interval? I'm having great difficulties in defining $\sec^{-1}\left(\sec x\right)$ for different intervals. --- **Edit** By some sort of [graphing](https://www.desmos.com/calculator/z6txfynhvc), I obtained the solution as, $$f'(x) = \begin{cases}\dfrac{1}{1 + x^2},\quad {\rm if\: } x\in(0, 1)\\\\\dfrac{-1}{1+x^2},\quad{\rm if\ }x \in(-1,0)\end{cases}$$ which implies that the above expression $(\*)$ can be defined as, $$\sec^{-1}\left(|\sec\theta|\right) = \begin{cases} \theta + C\_1, \quad {\rm if \:} \theta \in \left(0, \dfrac{\pi}{4}\right)\\\\-\theta + C\_2, \quad {\rm if \:} \theta \in\left(-\dfrac{\pi}{4},0\right) \end{cases}$$ ---though I've obtained the derivative, but still, I'm not understanding it :(
2022/08/13
[ "https://math.stackexchange.com/questions/4511266", "https://math.stackexchange.com", "https://math.stackexchange.com/users/-1/" ]
The easiest way is through implicit differentiation. Suppose that $y=\sec^{-1}{\sqrt{1+x^{2}}}$. This implies that $\sec{y}=\sqrt{1+x^{2}}$. Taking the derivative with respect to $x$, you get: $\sec{y}\tan{y}\frac{dy}{dx}=\frac{x}{\sqrt{1+x^{2}}}$, by the chain rule. Isolating $\frac{dy}{dx}$, you get $\frac{dy}{dx}=\frac{1}{\sec{y}\tan{y}}\frac{x}{\sqrt{1+x^2}}$. Note that $\tan{y}=\sqrt{\sec^{2}{y}-1}=\sqrt{x^2}=|x|$, since $\sec{y}\ge 1$ puts $y$ in $[0,\pi/2)$, where $\tan{y}\ge 0$. So, $\frac{dy}{dx}=\frac{x}{|x|(x^2+1)}=\frac{1}{x^2+1}$, for $x\in(0,1)$. You can also note that $\frac{dy}{dx}=\frac{-1}{x^2+1}$, for $x\in(-1,0)$. If you want to combine this into one formula, with some thought, you can see that you can take $\frac{x}{|x|}$ for the sign of the input, giving a final solution of $\frac{dy}{dx}=\frac{x}{|x|}\frac{1}{x^2+1}$, for $x\in(-1,1)$, $x\neq 0$.
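As a quick numerical sanity check of that final formula (not part of the derivation; it uses the identity $\sec^{-1}y=\arccos(1/y)$ for $y\ge 1$), compare a central-difference derivative of $f(x)=\arccos\!\big(1/\sqrt{1+x^2}\big)$ against $\frac{x}{|x|}\frac{1}{1+x^2}$:

```python
import math

def f(x):
    # sec^{-1}(sqrt(1+x^2)) written via arccos, valid since sqrt(1+x^2) >= 1
    return math.acos(1.0 / math.sqrt(1.0 + x * x))

def numeric_derivative(g, x, h=1e-6):
    # symmetric (central) difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (0.5, -0.5, 0.9, -0.9):
    expected = (x / abs(x)) * 1.0 / (1.0 + x * x)
    assert abs(numeric_derivative(f, x) - expected) < 1e-6
```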
We can express the same angle $\theta$ through the six inverse trigonometric functions by drawing the originating right triangle (valid for $x\ge 0$): $$\theta=\sec^{-1}\sqrt{1+x^2} = \tan^{-1}x =\cos^{-1}\frac{1}{\sqrt{1+x^2} }=\sin^{-1}\frac{x}{\sqrt{1+x^2} } \text{ etc.}$$ So we can fall back on the more familiar derivative of arctan, $$ \frac{d\theta}{dx}=\pm\frac{1}{1+x^2}, $$ where the sign flips for $x<0$, because the radical $\sqrt{1+x^2}$ in $\sec^{-1}\sqrt{1+x^2}$ hides the sign of $x$. [![enter image description here](https://i.stack.imgur.com/gMQQv.png)](https://i.stack.imgur.com/gMQQv.png)
5,875,464
I currently have a Magento store where I'm using a CMS page as the homepage. I want to integrate my wordpress blog (hosted on the same server) into this CMS page. It would show the latest blog post and preferably have the comment function available on the front page. The first thing I considered was using the Wordpress Loop on the Magento CMS page, but it doesn't seem like it allows PHP. One other thought I had was to create the homepage using modules or blocks. To be honest, I've never created a module or block so I'm not all that familiar with what is involved. The CMS page that I had created is simply an image slider/carousel (nivo-slider) and some photos with links. None of the content actually needs to be done with CMS, it just needs to be presented within my Magento theme/framework. All homepage updates will be handled by myself, so I can bypass the CMS system all together and just update modules if it turns out that the modules solution will allow me to have both the Wordpress blog and nivo-slider on the same page. Any thoughts?
2011/05/03
[ "https://Stackoverflow.com/questions/5875464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/337903/" ]
The FishPig extension now supports this functionality so you achieve this by following these steps: * Upgrade to the latest version of FishPig's WordPress Integration * Login to your Magento Admin and go to WordPress > Settings Blog / Plugins * Find the Layout option and set 'Blog as Magento Homepage' to Yes * Save the page and voila, you have it: WordPress as your Magento homepage using your Magento theme in under 2 minutes
You will be wanting the Fishpig integration: <http://www.magentocommerce.com/magento-connect/fishpig/extension/3958/fishpig_wordpress_integration> Not only is it free, it is also actively maintained. You can also do an Apache redirect for the root/home page to a wordpress page of your choosing. In that way you don't have to have the problem of adding something to the Magento homepage and maybe have something fancy in Wordpress.
16,339,585
There are 4 tables. * items ( item\_id, item\_name, item\_owner) * groups ( grp\_id, grp\_name, grp\_owner) * users (grp\_id, usr\_ref) * share (item\_id, grp\_id) My objective is to get a list of all those items where item\_owner = user\_id (say 123) or user\_id belongs to a group with which the item is shared. A basic query implementation to retrieve all items shared with a group to which a particular user\_id belongs would be ``` select i.item_id from items i left outer join share on share.item_id = i.item_id left outer join users on users.grp_id = share.grp_id left outer join groups on groups.grp_id = share.grp_id where users.usr_ref = user_id ``` And to include all other elements of which user\_id is the owner, I did something like ``` select * from items where owner = user_id or item_id in ( select i.item_id from items i left outer join share on share.item_id = i.item_id left outer join users on users.grp_id = share.grp_id left outer join groups on groups.grp_id = share.grp_id where users.usr_ref = user_id ) ``` which I suppose is a very bad implementation, as the item\_id needs to be searched every time in the result set obtained from the joins. How can I improve my SQL statement? Also, is there any other way in which I can redesign my table structure so that I can implement the query in some other way? Thanks in advance.
2013/05/02
[ "https://Stackoverflow.com/questions/16339585", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2148105/" ]
You need `INNER JOIN` in this case because you need to get an item that has a matching row in all tables. Your current query uses `LEFT JOIN`; that is why even an item that is not associated with any user could be shown in the list. Give this a try, ``` SELECT DISTINCT a.* FROM items a INNER JOIN `share` b ON a.item_ID = b.item_ID INNER JOIN groups c ON b.grp_ID = c.grp_ID INNER JOIN users d ON c.grp_ID = d.grp_ID WHERE d.usr_ref = user_ID ``` To gain more knowledge about joins, kindly visit the link below: * [Visual Representation of SQL Joins](http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html)
Perhaps I'm not understanding your question, but can you not just use `OR` with your first query: ``` select i.item_id from items i left outer join share on share.item_id = i.item_id left outer join users on users.grp_id = share.grp_id left outer join groups on groups.grp_id = share.grp_id where i.item_owner = @user_id or users.usr_ref = @user_id ```
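If it helps, that combined owned-or-shared query can be sanity-checked in seconds against toy data before touching the real schema. A sketch using Python's built-in sqlite3 (the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items   (item_id INTEGER, item_name TEXT, item_owner INTEGER);
    CREATE TABLE "groups" (grp_id INTEGER, grp_name TEXT, grp_owner INTEGER);
    CREATE TABLE users   (grp_id INTEGER, usr_ref INTEGER);
    CREATE TABLE share   (item_id INTEGER, grp_id INTEGER);

    INSERT INTO items VALUES (1, 'owned',     123);  -- owned by user 123
    INSERT INTO items VALUES (2, 'shared',    999);  -- shared with 123's group
    INSERT INTO items VALUES (3, 'unrelated', 999);  -- neither owned nor shared
    INSERT INTO "groups" VALUES (10, 'team', 999);
    INSERT INTO users VALUES (10, 123);
    INSERT INTO share VALUES (2, 10);
""")

rows = conn.execute("""
    SELECT DISTINCT i.item_id
    FROM items i
    LEFT JOIN share s ON s.item_id = i.item_id
    LEFT JOIN users u ON u.grp_id = s.grp_id
    WHERE i.item_owner = ? OR u.usr_ref = ?
""", (123, 123)).fetchall()

# user 123 should see the item they own plus the group-shared item, but not item 3
assert sorted(r[0] for r in rows) == [1, 2]
```

Note the quoted `"groups"` table name, since GROUPS is a keyword in recent SQLite versions.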
18,033,419
Recently I decided to start my third pygame game. In that game the player should fire at airplanes from the cannon at the bottom of the screen, by pointing at the airplane with the mouse. I put the code in multiple modules (multiple .py files) for easier understanding. I started by trying to get the cannon barrel to rotate towards the current mouse position. So here we go. main.py ``` import pygame,sys import screen import constants from pygame.locals import * from loop import * screen.screen = pygame.display.set_mode((800,600)) main_loop() ``` constants.py ``` import pygame pygame.init() scr = pygame.display.list_modes() resolution = (scr[0][0],scr[0][1]) angle = 0 barell = pygame.image.load("barell.png") tux = pygame.image.load("tux.png") tux2 = pygame.transform.scale(tux,(100,100)) ``` loop.py ``` import pygame,event import screen import constants import math import random import groups from faster_barell import * faster_barell = faster_barell() faster_barell.add_to_group() def main_loop(): while True: mx,my = pygame.mouse.get_pos() event.Event() screen.screen.fill([0,0,0]) groups.barell_group.draw(screen.screen) for t in groups.barell_group: dx = mx - t.rect.x dy = my - t.rect.y angle = math.atan2(dx,dy) angle2 = math.degrees(angle) constants.angle = angle2 t.image = pygame.transform.rotate(constants.barell,angle2) pygame.display.flip() ``` screen.py ``` import pygame import constants pygame.init() screen = None ``` event.py ``` import pygame,sys from pygame.locals import * class Event(object): def __init__(self): for event in pygame.event.get(): if event.type == pygame.QUIT: sys.exit() ``` faster\_barell.py ``` import pygame import constants import groups class faster_barell(pygame.sprite.Sprite): def __init__(self): pygame.sprite.Sprite.__init__(self) self.image = constants.barell self.rect = self.image.get_rect() self.rect.x = 750 self.rect.y = 550 def add_to_group(self): groups.barell_group.add(self) ``` groups.py ``` import pygame barell_group = pygame.sprite.Group() ``` Now
instead of rotating, pygame (I can't really explain how) scales the barell image. The barell image is just a blank (white) 10x30 image. Now here comes the even stranger part. When in t.image = pygame.transform.rotate(constants.barell,angle2) I change constants.barell to constants.tux2 (which is just a tux image, only for testing; it won't be in the game), everything works just fine! Here is the tux image I worked with: <http://upload.wikimedia.org/wikipedia/commons/3/3e/Tux-G2.png>. I tried to solve the problem by changing dx and dy in math.atan2 to something else (dy,dx; -dy,dx; -dx,-dy and so on). Please help: I have been trying to solve this for about 6 hours (I never ask on Stack Overflow unless I really can't do anything to get the code working).
2013/08/03
[ "https://Stackoverflow.com/questions/18033419", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2564921/" ]
C++ is a language that puts the correctness of the code in the hands of the programmer. Trying to alter that via some convoluted methods typically leads to code that is hard to use or that doesn't work very well. Forcing the hand of the programmer so that (s)he has to create an object on the heap even if that's not "right" for that particular situation is just bad. Let the programmer shoot him-/herself in the foot if he wants to. In larger projects, code should be reviewed by peers (preferably at least sometimes by more senior staff) for correctness and that it follows the coding guidelines of the project. I'm not entirely sure how "virtual destructors" relate to "safe destruction" and "shared pointers" - these are three different concepts that are not very closely related - virtual destructors are needed when a class is used as a base class to derive a new class. STL objects are not meant to be derived from [as a rule, you use templates OR inheritance, although they CAN be combined, it gets very complicated very quickly when you do], so there is no need to use virtual destructors in STL. If you have a class that is a base class, and the storage is done based on pointers or references to the base class, then you MUST have virtual destructors - or don't use inheritance. "safe destruction", I take it, means "no memory leaks" [rather than "correct destruction", which can of course also be a problem - and cause problems with memory leaks]. For a large number of situations, this means "don't use pointers to the object in the first place". I see a lot of examples here on SO where the programmer is calling `new` for absolutely no reason. `vector<X>* v = new vector<X>;` is definitely a "bad smell" (Just like fish or meat, something is wrong with the code if it smells bad). And if you are calling new, then using shared pointer, unique pointer or some other "wrapping" is a good idea. But you shouldn't force that concept - there are occasionally good reasons NOT to do that.
"shared pointer" is a concept to "automatically destroy the object when it is no longer in use", which is a useful technique to avoid memory leaks. Now that I have told you NOT to do this, here's one way to achieve it: ``` class X { private: int x; X() : x(42) {}; public: static shared_ptr<X> makeX() { return make_shared<X>(); } }; ``` Since the constructor is private, the "user" of the class can't call "new" or create an object of this kind. [You probably also want to put the copy constructor and assignment operator in private or use `delete` to prevent them from being used]. However, I still think this is a bad idea in the first place.
The answer by [Mats](https://stackoverflow.com/a/18033580/725021) is indeed wrong. The `make_shared` needs a public constructor. However, the following is valid: ```cpp class X { private: int x; X() : x( 42 ) {}; public: static std::shared_ptr<X> makeX() { return std::shared_ptr<X>( new X() ); } }; ``` I don't like to use the `new` keyword, but in this case, it is the only way.
28,172
I'd like some way to get notices of comments people make to my (questions, answers, comments), as well as changes to my reputation. Any chance that this could be added? I'm sure I could script something up to scrape the webpage, but that's a lot less friendly than having it built in :)
2009/11/02
[ "https://meta.stackexchange.com/questions/28172", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/127120/" ]
Well, you can now mark this as complete. I have written a small app that does exactly this. [Stack2RSS](https://stackapps.com/questions/1599/stack2rss-a-json-to-rss-conversion-service) -------------------------------------------------------------------------------------------- Stack2RSS takes an API request and converts it into an RSS feed that you can then subscribe to. Because the API is so flexible, stack2rss is very flexible. Answering your question, here is the feed for comments people post to you on Meta Stack Overflow: > > <http://stack2rss.stackexchange.com/meta.stackoverflow/users/127120/mentioned> > > > And for recent reputation changes: > > <http://stack2rss.stackexchange.com/meta.stackoverflow/users/127120/reputation> > > >
Native RSS feed for comments: * `https://meta.stackexchange.com/feeds/user/`user-id`/responses` * `https://stackoverflow.com/feeds/user/`user-id`/responses` * Replace `user-id` with your own id. You can find your id in the URL of your own profile. To get the feed for any domain: 1. Go to your account profile 2. Check for "user feed" at the bottom 3. add `/responses` to the URL
34,616,727
I have a blog system where the user inputs the image URL in the post content, like ``` hey how are you <img src="example.com/image.png"> ``` If the user has written something like this ``` hello how are you <img src="example.com/image.png"> ``` then I want to find this `img` `src` part and use it as the featured image. Here is what I have tried: ``` $haystack = 'how are you <img src="hey.png">'; $needle = '<img src="'; if (strpos($haystack, $needle) !== false) { echo "$needle"; } else { echo "no"; } ``` When I echo I only get: ``` <img src=" ``` I want to get the whole ``` <img src="hey.png"> ``` from that string. How can I do this?
2016/01/05
[ "https://Stackoverflow.com/questions/34616727", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
`strpos` returns the index of where the needle starts in the haystack. Look at combining the returned value with `substr` and `haystack` to get the substring you want.
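For illustration, here is the same find-then-slice idea sketched in Python (the PHP version would use `strpos` and `substr` the same way; the helper name and the assumption that the tag ends with `">` are mine, and this simplified sketch ignores extra attributes or single-quoted `src`):

```python
def extract_img_tag(haystack, needle='<img src="'):
    """Return the full <img src="..."> substring, or None if absent.
    Mirrors the strpos/substr approach: find the start index,
    find the closing '">', and slice between them."""
    start = haystack.find(needle)      # PHP: strpos($haystack, $needle)
    if start == -1:
        return None
    end = haystack.find('">', start)   # end of the src attribute
    if end == -1:
        return None
    return haystack[start:end + 2]     # PHP: substr($haystack, $start, $len)

print(extract_img_tag('how are you <img src="hey.png">'))  # <img src="hey.png">
```

In PHP the same two steps would be `$start = strpos($haystack, $needle);` followed by a `strpos` for `'">'` and a `substr` over that range.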
This is most probably way easier using a [Regular Expression (RegEx)](http://php.net/manual/en/book.pcre.php). Here is a simple example (note the non-greedy `.*?`, which keeps the match from running past the first `">`): ``` $string = 'hey how are you <img src="example.com/image.png"> blah blah'; preg_match('/<img src=".*?">/', $string, $matches); print_r($matches); ``` Which would give you an array like this: ``` Array ( [0] => <img src="example.com/image.png"> ) ```
104,735
I understand that similar questions like this one have been asked before on this site, listed below. However, I am confused about the answers. If I explain what I think I understand, can somebody please point out where I'm wrong? * [why-more-bandwidth-means-more-bit-rate-per-second](https://electronics.stackexchange.com/questions/97658/why-more-bandwidth-means-more-bit-rate-per-second) * [why-do-higher-frequencies-mean-higher-data-rates...](https://electronics.stackexchange.com/questions/84245/why-do-higher-frequencies-mean-higher-data-rates-and-why-do-we-even-need-freque) I'll start with what I do know: **Shannon Law gives the theoretical upper limit** $$C\_{noisy}=B\*log\_{2}(1+\frac{S}{N})$$ if S = N, then C = B As N→∞, C→0 As N→0, C→∞ **Nyquist Formula says approximately how many levels are needed to achieve this limit** $$C\_{noiseless}=2\*B\*log\_{2}M$$ (If you do not use enough logic levels you cannot approach the Shannon limit, but by using more and more levels you will **not** exceed the Shannon limit) --- My problem is that I'm having a hard time understanding why bandwidth relates to bit rate at all. To me it seems like the upper limit of the frequency that can be sent down the channel is the important factor. Here's a very simplified example: No noise at all, 2 logic levels (0V and 5V), no modulation, and a bandwidth of 300 Hz (30 Hz - 330 Hz). It will have a Shannon Limit of ∞, and a Nyquist Limit of 600 bps. Also assume that the channel is a perfect filter so anything outside of the bandwidth is completely dissipated. As I double the bandwidth, I double the bit rate etc. But why is this? For two-level digital transmission: with a bandwidth of 300 Hz (30 Hz - 330 Hz), the digital signal of "0V's" and "5V's" will be a (roughly) square wave. This square wave will have the harmonics below 30 Hz and above 330 Hz dissipated, so it will not be perfectly square.
If it has a fundamental frequency at the minimum 30 Hz (so the "0V's" and "5V's" are switching 30 times a second), then there will be a good amount of harmonics and a nice square wave. If it has a fundamental frequency at the max 330 Hz, the signal will be a pure sine wave as there are no higher order harmonics to make it square. However, as there is no noise the receiver will still be able to discriminate the zeros from the ones. In the first case the bit rate will be 60 bps, as the "0V's" and "5V's" are switching 30 times a second. In the second case the bit rate will be a maximum of 660 bps (if the threshold switching voltage of the receiver is exactly 2.5V), and slightly less if the threshold voltage is different. However this differs from the expected answer of 600 bps for the upper limit. In my explanation it is the upper limit of the channel frequency that matters, not the difference between the upper and lower limit (bandwidth). Can somebody please explain what I have misunderstood? Also when my logic is applied to the same example but using FSK modulation (frequency shift keying), I get the same problem. If a zero is expressed as a 30 Hz carrier frequency, a one is expressed as a 330 Hz carrier frequency, and the modulation signal is 330 Hz, then the max bit rate is 660 bps. Again, can somebody please clear up my misunderstanding? Also, why use a square wave in the first place? Why can't we just send sine waves and design the receivers to have a switching threshold voltage exactly in the middle between the max and min value of the sine wave? This way the signal would take up much less bandwidth. Thanks for reading!
2014/03/30
[ "https://electronics.stackexchange.com/questions/104735", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/35366/" ]
It's a subtle point, but your thinking is going astray when you think of a 330-Hz tone as somehow conveying 660 bits/second of information. It doesn't — and in fact, a pure tone conveys no information at all other than its presence or absence. In order to transmit *information* through a channel, you need to be able to specify an *arbitrary* sequence of signaling states that are to be transmitted, and — this is the key point — be able to distinguish those states at the other end. With your 30-330 Hz channel, you can specify 660 states per second, but it will turn out that 9% of those state sequences will violate the bandwidth limitations of the channel and will be indistinguishable from other state sequences at the far end, so you can't use them. This is why the information bandwidth turns out to be 600 b/s.
This is only a partial answer, but hopefully it gets at the main points you're misunderstanding. > > My problem is that I'm having a hard time understanding why bandwidth relates to bit rate at all. > ... > > If a zero is expressed as a 30 Hz carrier frequency, a one is expressed as a 330 Hz carrier frequency, and the modulation signal is 330 Hz, then the max bit rate is 660 bps. > > > If you switch down to 30 Hz for a zero, you need to have about 1/60 s or so to really know you got 30 Hz and not 20 Hz or 50 Hz or something. Really in this case you are just on-off keying your 330 Hz carrier, and the 30 Hz signal that's sent for 1/660 s during the zeros is just confusing things. To talk about FSK, let's take a more realistic example. Say you use 1 MHz for the zero and 1.01 MHz for the one. It turns out you need to measure the signal for about \$1/2\Delta{}f\$, in this case 1/20,000 s, to be able to reliably distinguish those two frequencies. If you just measured the signal for 1 us, you wouldn't really be able to tell the difference between a 1 MHz signal and a 1.01 MHz signal (although in an ideal, noise-free scenario you could do it, just as Shannon's formula says you can transmit infinite data with zero bandwidth when SNR goes to infinity). So in this example the bit rate you can send is about 20 kHz, corresponding to 2x the difference between your 1 and 0 frequencies, just as the Nyquist formula leads you to expect for a 2-level code.
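To make the two formulas from the question concrete, here is a small sketch plugging in the example numbers (B = 300 Hz, M = 2 levels; the SNR values are illustrative). Note that only the width B enters either formula, never the upper band edge:

```python
import math

def nyquist_limit(bandwidth_hz, levels):
    """Noiseless channel capacity: C = 2 * B * log2(M)."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_limit(bandwidth_hz, snr_linear):
    """Noisy channel capacity: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# The question's example: a 300 Hz wide channel (30-330 Hz), 2 levels.
print(nyquist_limit(300, 2))  # 600.0 bps -- depends only on the width B
print(shannon_limit(300, 1))  # 300.0 bps when S == N, i.e. C == B
print(shannon_limit(300, 3))  # 600.0 bps: an SNR of 3 doubles the capacity
```

A 1000-1300 Hz channel gives exactly the same numbers, which is the point of the answers above: capacity tracks the width of the band, not its upper edge.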
50,798,329
When you have a big POJO with loads of variables (Booleans, Ints, Strings) and you want to use the new Work Manager to start a job, you then create a Data object which gets added to the one-time work request object. What would be the best practice for building this Data object? (*It feels wrong to be writing 100 lines of code just to put an int on the builder for every variable.*) Answer ====== I ended up breaking apart my parcelable object as I thought this was the best implementation. I did not want to use the Gson lib as it would have added another layer of serialization to my object. ``` Data.Builder builder = new Data.Builder(); builder.putBoolean(KEY_BOOL_1, stateObject.bool1); builder.putBoolean(KEY_BOOL_2, stateObject.bool2); builder.putBoolean(KEY_BOOL_3, stateObject.bool3); builder.putInt(KEY_INT_1, stateObject.int1); builder.putInt(KEY_INT_2, stateObject.int2); builder.putString(KEY_STRING_1, stateObject.string1); return builder.build(); ``` UPDATE ====== The partial answer to my question is, as @CommonsWare pointed out: *The reason Parcelable is not supported is that the data is persisted.* I'm not sure of the detailed answer to why Data does not support Parcelable - [This answer](https://stackoverflow.com/questions/50366767/sending-class-object-through-data-class) explains it: > > The Data is a lightweight container which is a simple key-value map > and can only hold values of primitives & Strings along with their > String version. It is really meant for light, intermediate transfer of > data. It shouldn't be used for and is not capable of holding > Serializable or Parcelable objects. > > > Do note, the size of data is limited to 10KB when serialized. > > >
2018/06/11
[ "https://Stackoverflow.com/questions/50798329", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2959931/" ]
This solution works without using JSON, and serializes directly to byte array. ``` package com.andevapps.ontv.extension import android.os.Parcel import android.os.Parcelable import androidx.work.Data import java.io.* fun Data.Builder.putParcelable(key: String, parcelable: Parcelable): Data.Builder { val parcel = Parcel.obtain() try { parcelable.writeToParcel(parcel, 0) putByteArray(key, parcel.marshall()) } finally { parcel.recycle() } return this } fun Data.Builder.putParcelableList(key: String, list: List<Parcelable>): Data.Builder { list.forEachIndexed { i, item -> putParcelable("$key$i", item) } return this } fun Data.Builder.putSerializable(key: String, serializable: Serializable): Data.Builder { ByteArrayOutputStream().use { bos -> ObjectOutputStream(bos).use { out -> out.writeObject(serializable) out.flush() } putByteArray(key, bos.toByteArray()) } return this } @Suppress("UNCHECKED_CAST") inline fun <reified T : Parcelable> Data.getParcelable(key: String): T? { val parcel = Parcel.obtain() try { val bytes = getByteArray(key) ?: return null parcel.unmarshall(bytes, 0, bytes.size) parcel.setDataPosition(0) val creator = T::class.java.getField("CREATOR").get(null) as Parcelable.Creator<T> return creator.createFromParcel(parcel) } finally { parcel.recycle() } } inline fun <reified T : Parcelable> Data.getParcelableList(key: String): MutableList<T> { val list = mutableListOf<T>() with(keyValueMap) { while (containsKey("$key${list.size}")) { list.add(getParcelable<T>("$key${list.size}") ?: break) } } return list } @Suppress("UNCHECKED_CAST") fun <T : Serializable> Data.getSerializable(key: String): T? { val bytes = getByteArray(key) ?: return null ByteArrayInputStream(bytes).use { bis -> ObjectInputStream(bis).use { ois -> return ois.readObject() as T } } } ``` Add proguard rule ``` -keepclassmembers class * implements android.os.Parcelable { public static final android.os.Parcelable$Creator CREATOR; } ```
In Kotlin, that's how I do it. Object to JSON: ``` inline fun Any.convertToJsonString():String{ return Gson().toJson(this)?:"" } ``` To convert back to the model: ``` inline fun <reified T> JSONObject.toModel(): T? = this.run { try { Gson().fromJson<T>(this.toString(), T::class.java) } catch (e:java.lang.Exception){ e.printStackTrace() Log.e("JSONObject to model", e.message.toString() ) null } } inline fun <reified T> String.toModel(): T? = this.run { try { JSONObject(this).toModel<T>() } catch (e:java.lang.Exception){ Log.e("String to model", e.message.toString() ) null } } ```
3,748,648
Let $X=(C[0,1],||\cdot||\_\infty)$ and $(Ax)(t)=x(t)+\int\_0^1 t^2s x(s)ds$, with $A:X\to X$. How can I find $A^{-1}$? If it were not a definite integral I could have converted the equation to a differential equation, but here I cannot do that. I found the following link, but I cannot apply it because the integral in my equation is not an indefinite one. [Prove that the operator $Ax(t)= \int\_{0}^{t}x(s)ds + x(t)$ is invertible and find $A^{-1}.$ $A:C[0,1]\to C[0,1]$](https://math.stackexchange.com/questions/2267231/prove-that-the-operator-axt-int-0txsds-xt-is-invertible-and-fin)
2020/07/07
[ "https://math.stackexchange.com/questions/3748648", "https://math.stackexchange.com", "https://math.stackexchange.com/users/342943/" ]
If we multiply this by $t$ and integrate from $0$ to $1$, we get $$\int\_0^1tTx(t)dt=\int\_0^1tx(t)dt+\left(\int\_0^1t^3dt\right)\left(\int\_0^1sx(s)ds\right)=\frac{5}{4}\int\_0^1sx(s)ds$$ We change the integration variable from $t\mapsto s$ on the left and then multiply by $4t^2/5$ and get $$\frac{4}{5}t^2\int\_0^1sTx(s)ds=t^2\int\_0^1sx(s)ds=Tx(t)-x(t)$$ We can then solve for $x(t)$ as a function of $Tx(t)$ and get $$x(t)=Tx(t)-\frac{4}{5}t^2\int\_0^1sTx(s)ds$$ or, $$T^{-1}y(t)=y(t)-\frac{4}{5}t^2\int\_0^1sy(s)ds$$ To check this, $$T^{-1}Tx(t)=Tx(t)-\frac{4}{5}t^2\int\_0^1sTx(s)ds=x(t)+t^2\int\_0^1sx(s)ds-\frac{4}{5}t^2\int\_0^1s\left(x(s)+s^2\int\_0^1ux(u)du\right)$$ $$=x(t)+t^2\int\_0^1sx(s)ds-\frac{4}{5}t^2\int\_0^1sx(s)ds-\frac{4}{5}t^2\int\_0^1\int\_0^1s^3ux(u)du$$ $$=x(t)+\frac{1}{5}t^2\int\_0^1sx(s)ds-\frac{1}{5}t^2\int\_0^1ux(u)du=x(t)$$ and we can show this similarly for $TT^{-1}y(t)=y(t)$.
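As a sanity check on the derivation above, both operators can be applied to sample polynomials with exact rational arithmetic. This is an illustrative sketch (the coefficient-list representation and the helper names are my own, not part of the answer):

```python
from fractions import Fraction

def moment(p):
    """Exact value of the integral from 0 to 1 of s*p(s) ds,
    where p is a polynomial given as coefficients [a0, a1, ...]."""
    return sum(Fraction(c) / (k + 2) for k, c in enumerate(p))

def add_t2(p, coeff):
    """Return the coefficients of p(t) + coeff * t^2."""
    q = [Fraction(c) for c in p] + [Fraction(0)] * max(0, 3 - len(p))
    q[2] += coeff
    return q

def T(p):
    """(Tx)(t) = x(t) + t^2 * integral_0^1 s x(s) ds (the operator A)."""
    return add_t2(p, moment(p))

def T_inv(p):
    """Proposed inverse: y(t) - (4/5) t^2 * integral_0^1 s y(s) ds."""
    return add_t2(p, Fraction(-4, 5) * moment(p))

x = [Fraction(0), Fraction(1)]   # x(t) = t
print(T(x))                      # coefficients of t + (1/3) t^2
print(T_inv(T(x)) == [Fraction(0), Fraction(1), Fraction(0)])  # True
```

Here `moment([0, 1])` is 1/3, so `T(x)` is t + t²/3, and applying `T_inv` subtracts exactly that t²/3 back off, recovering x(t) = t.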
Hint: try to find the operator $T^{-1}$ such that $T^{-1}Tx(t) = TT^{-1}x(t) = x(t),$ for all $x(t).$
63,942,902
I had a task where I needed to compare and filter two `JSON` arrays based on the same values using one column of each array. So I used [this](https://stackoverflow.com/a/62187694/6621346) answer of [this](https://stackoverflow.com/questions/62180696/compare-2-json-arrays-to-get-matching-and-un-matching-outputs) question. However, now I need to compare two `JSON` arrays matching two or even three column values. I already tried to use one `map` inside another; however, it isn't working. The examples could be the ones in the answer I used. Compare `db.code = file.code`, `db.name = file.nm` and `db.id = file.identity` ``` var db = [ { "CODE": "A11", "NAME": "Alpha", "ID": "C10000" }, { "CODE": "B12", "NAME": "Bravo", "ID": "B20000" }, { "CODE": "C11", "NAME": "Charlie", "ID": "C30000" }, { "CODE": "D12", "NAME": "Delta", "ID": "D40000" }, { "CODE": "E12", "NAME": "Echo", "ID": "E50000" } ] var file = [ { "IDENTITY": "D40000", "NM": "Delta", "CODE": "D12" }, { "IDENTITY": "C30000", "NM": "Charlie", "CODE": "C11" } ] ```
2020/09/17
[ "https://Stackoverflow.com/questions/63942902", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6621346/" ]
See if this works for you ``` %dw 2.0 output application/json var file = [ { "IDENTITY": "D40000", "NM": "Delta", "CODE": "D12" }, { "IDENTITY": "C30000", "NM": "Charlie", "CODE": "C11" } ] var db = [ { "CODE": "A11", "NAME": "Alpha", "ID": "C10000" }, { "CODE": "B12", "NAME": "Bravo", "ID": "B20000" }, { "CODE": "C11", "NAME": "Charlie", "ID": "C30000" }, { "CODE": "D12", "NAME": "Delta", "ID": "D40000" }, { "CODE": "E12", "NAME": "Echo", "ID": "E50000" } ] --- file flatMap(v) -> ( db filter (v.IDENTITY == $.ID and v.NM == $.NAME and v.CODE == $.CODE) ) ``` I use `flatMap` instead of `map` to flatten the result; otherwise you get an array of arrays in the output. The flattened form is cleaner, unless you are expecting the possibility of multiple matches per `file` entry, in which case I'd stick with `map`.
You can make use of `filter` directly, using `contains`: ``` db filter(value) -> file contains {IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE} ``` This filters the db array based on whether the file array contains the object `{IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE}`. However, this will not work if objects in the file array have other fields that you will not use for comparison. Building on the above, you can update the filter condition to check whether an object exists in the file array (using a data selector) where the condition applies. You can use the following to check that: ``` db filter(value) -> file[?($.IDENTITY==value.ID and $.NM == value.NAME and $.CODE == value.CODE)] != null ```
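Outside DataWeave, the multi-column match both answers implement is just a filter of one array against a set of key tuples built from the other. A Python sketch of the same logic, using the example data:

```python
db = [
    {"CODE": "A11", "NAME": "Alpha",   "ID": "C10000"},
    {"CODE": "B12", "NAME": "Bravo",   "ID": "B20000"},
    {"CODE": "C11", "NAME": "Charlie", "ID": "C30000"},
    {"CODE": "D12", "NAME": "Delta",   "ID": "D40000"},
    {"CODE": "E12", "NAME": "Echo",    "ID": "E50000"},
]
file = [
    {"IDENTITY": "D40000", "NM": "Delta",   "CODE": "D12"},
    {"IDENTITY": "C30000", "NM": "Charlie", "CODE": "C11"},
]

# Build a set of (id, name, code) tuples for O(1) membership tests, then
# keep only the db records whose three columns all appear in that set.
wanted = {(f["IDENTITY"], f["NM"], f["CODE"]) for f in file}
matches = [d for d in db if (d["ID"], d["NAME"], d["CODE"]) in wanted]
print([d["CODE"] for d in matches])  # ['C11', 'D12']
```

The set-based version also sidesteps the extra-fields caveat mentioned above, because only the three compared columns ever enter the key tuples.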
5,041,499
I know this question was asked before, but I haven't seen a working answer for it. Is there any way to hide some items in a `ListView` without changing the source data? I tried to set the visibility of the item view to gone; it won't be displayed anymore, but the place reserved for this item is still there. I also set: ``` android:dividerHeight="0dp" android:divider="#FFFFFF" ``` without success.
2011/02/18
[ "https://Stackoverflow.com/questions/5041499", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408780/" ]
I tried several solutions including `setVisibility(View.GONE)` and inflating a default `null` view, but all of them have a common problem: the dividers between hidden items stack up and make an ugly visible gray space in large lists. If your `ListView` is backed by a `CursorAdapter` then the best solution is to wrap it with a `CursorWrapper`. So my solution (based on @RomanUsachev's answer [here](https://stackoverflow.com/a/17333945/191148)) is this: **FilterCursorWrapper** ``` public class FilterCursorWrapper extends CursorWrapper { private int[] index; private int count = 0; private int pos = 0; public boolean isHidden(String path) { // the logic to check whether this item should be hidden // if (some condition) // return false; // else { // return true; // } return false; } public FilterCursorWrapper(Cursor cursor, boolean doFilter, int column) { super(cursor); if (doFilter) { this.count = super.getCount(); this.index = new int[this.count]; for (int i = 0; i < this.count; i++) { super.moveToPosition(i); if (!isHidden(this.getString(column))) this.index[this.pos++] = i; } this.count = this.pos; this.pos = 0; super.moveToFirst(); } else { this.count = super.getCount(); this.index = new int[this.count]; for (int i = 0; i < this.count; i++) { this.index[i] = i; } } } @Override public boolean move(int offset) { return this.moveToPosition(this.pos + offset); } @Override public boolean moveToNext() { return this.moveToPosition(this.pos + 1); } @Override public boolean moveToPrevious() { return this.moveToPosition(this.pos - 1); } @Override public boolean moveToFirst() { return this.moveToPosition(0); } @Override public boolean moveToLast() { return this.moveToPosition(this.count - 1); } @Override public boolean moveToPosition(int position) { if (position >= this.count || position < 0) return false; this.pos = position; return super.moveToPosition(this.index[position]); } @Override public int getCount() { return this.count; } @Override public int getPosition() { return this.pos;
} } ``` when your `Cursor` is ready, feed it to `FilterCursorWrapper` with your desired column index ``` FilterCursorWrapper filterCursorWrapper = new FilterCursorWrapper(cursor, true, DATA_COLUMN_INDEX); dataAdapter.changeCursor(filterCursorWrapper); ``` and if you do filtering and sorting, don't forget to use `FilterCursorWrapper` everywhere: ``` dataAdapter.setFilterQueryProvider(new FilterQueryProvider() { @Override public Cursor runQuery(CharSequence constraint) { String selection = MediaStore.Video.Media.DATA + " LIKE '%" + constraint.toString().toLowerCase() + "%'"; return new FilterCursorWrapper(context.getContentResolver().query(videoMediaUri, columns, selection, null, null), true, DATA_COLUMN_INDEX); } }); ``` and for refreshing the list, it's sufficient to query with an empty filter: ``` dataAdapter.getFilter().filter(""); ``` and you're done: simply by changing the logic of the `isHidden` method, you control which items are shown or hidden. And the benefit is that you don't see undesired dividers stacked up. :-)
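The core trick in `FilterCursorWrapper` is the position-remapping array: visible position `i` maps to underlying position `index[i]`, and `getCount()` reports only the visible rows. That logic, sketched in Python (the row values and the `hidden` predicate are made up for illustration):

```python
def build_index(rows, hidden):
    """Map each visible position to its underlying position,
    skipping rows for which the hidden predicate is true."""
    return [i for i, row in enumerate(rows) if not hidden(row)]

rows = ["a.mp4", "b.hidden", "c.mp4", "d.hidden", "e.mp4"]
index = build_index(rows, lambda r: r.endswith(".hidden"))

print(len(index))                # 3 -> what the wrapper's getCount() reports
print([rows[i] for i in index])  # ['a.mp4', 'c.mp4', 'e.mp4']
```

Because the adapter only ever sees positions 0..len(index)-1, no gap (and no divider) is left where a hidden row used to be.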
For me a simple solution was to create my own adapter with a custom row layout, containing a wrapper of all content (e.g. LinearLayout), and set this wrapper visibility to View.GONE in the adapter's getView() method, when I did not want that item shown. No need to modify the data set or maintain two lists. When creating the ListView, I also set it to not automatically show dividers: ``` android:divider="@null" android:dividerHeight="0dp" ``` and draw my own divider (which I also set to GONE when I don't want that item). Otherwise you'd see multiple dividers or a thick gray line where multiple items were hidden.
10,687
I know correlation does not imply causation, but instead indicates the strength and direction of the relationship. Does simple linear regression imply causation? Or is an inferential statistical test (t-test, etc.) required for that?
2011/05/11
[ "https://stats.stackexchange.com/questions/10687", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/4572/" ]
There is nothing explicit in the mathematics of regression that states causal relationships, and hence one need not explicitly interpret the slope (strength and direction) nor the p-values (i.e. the probability a relation as strong as or stronger would have been observed if the relationship were zero in the population) in a causal manner. That being said, I would say regression does have a much stronger connotation that one is estimating an explicit directional relationship than does estimating the correlation between two variables. Assuming by correlation you mean [Pearson's r](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient), it typically does not have an explicit causal interpretation as the metric is symmetrical (i.e. you can switch which variable is X and which is Y and you will still have the same measure). Also the colloquialism "Correlation does not imply causation" I would suspect is so well known that, when stating two variables are correlated, the assumption is one is not making a causal statement. Estimated effects in [regression](http://en.wikipedia.org/wiki/Regression_analysis) analysis are not symmetrical though, and so by choosing which variable is on the right-hand side versus the left-hand side one is making an implicit statement unlike that of the correlation. I suspect one intends to make some causal statement in the vast majority of circumstances in which regression is used (inference vs prediction aside). Even in cases of simply stating correlations I suspect people frequently have some implied goals of causal inference in mind. Given some constraints are met [correlation can imply causation](https://stats.stackexchange.com/questions/534/under-what-conditions-does-correlation-imply-causation)!
From a semantic perspective, an alternative goal is to build evidence for a good predictive model instead of proving causation. A simple procedure for building evidence for the predictive value of a regression model is to divide your data in 2 parts, fit your regression on one part, and then test how well it predicts on the other part. The notion of [Granger causality](http://en.wikipedia.org/wiki/Granger_causality) is interesting.
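The split-and-predict procedure described above can be sketched in a few lines. This fits a simple least-squares line on one half of some synthetic data and checks the prediction error on the held-out half (the data and the error threshold are illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    return my - b * mx, b

# Synthetic data: y = 2x + 1 plus a small alternating "noise" pattern.
xs = list(range(10))
ys = [2 * x + 1 + (0.1 if x % 2 else -0.1) for x in xs]

# Fit on the first half, evaluate prediction error on the held-out half.
a, b = fit_line(xs[:5], ys[:5])
errors = [abs((a + b * x) - y) for x, y in zip(xs[5:], ys[5:])]
print(max(errors) < 0.5)  # True: the fitted line predicts unseen data well
```

Small held-out error is evidence of predictive value, which, as the answer notes, is a different claim from causation.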
55,829,349
I have an interface, an enum and a type: ``` export interface Filters { cat: Array<string>; statuses: Array<Status | TopStatus>; } export enum Status { ARCHIVED, IN_PROGRESS, COMING } export type TopStatus = Status.ARCHIVED | Status.IN_PROGRESS; ``` And in the method: ``` handleStatuses(myFilters: Filters): Array<string | TopStatus> { return [...myFilters.cat, ...myFilters.statuses]; } ``` I get error `2322`, which says it expects `string | ARCHIVED | IN_PROGRESS | COMING` while the method returns `string | ARCHIVED | IN_PROGRESS`. But it works when the method returns Array `
2019/04/24
[ "https://Stackoverflow.com/questions/55829349", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7951032/" ]
You can try to put your ajax code in a setInterval function ``` setInterval(function(){ //Here }, 3000); ``` Edit: I meant setInterval
Try this then: JS ``` $(document).ready(function () { $.ajax({ url: "http://localhost/mycharts/api/data.php", method: "GET", success: function (data) { console.log(data); var subholding = []; var TotalAccounts = []; for (var i in data) { subholding.push("" + data[i].subholding); TotalAccounts.push(data[i].TotalAccounts); } var chartdata = { labels: subholding, datasets: [ { label: 'Total Accounts', backgroundColor: [ "red", "green", "blue", "purple", "magenta", "yellow", "orange", "black" ], borderColor: 'rgba(200, 200, 200, 0.75)', hoverBackgroundColor: 'rgba(200, 200, 200, 1)', hoverBorderColor: 'rgba(200, 200, 200, 1)', data: TotalAccounts } ] }; var ctx = $("#mycanvas"); var barGraph = new Chart(ctx, { type: 'bar', data: chartdata }); function barGraph() { subholding.push("" + data[i].subholding); TotalAccounts.push(data[i].TotalAccounts); data.update(); } }, error: function (data) { console.log(data); } }); }); setInterval(function(){ $(document).ready(function () { $.ajax({ url: "http://localhost/mycharts/api/data.php", method: "GET", success: function (data) { console.log(data); var subholding = []; var TotalAccounts = []; for (var i in data) { subholding.push("" + data[i].subholding); TotalAccounts.push(data[i].TotalAccounts); } var chartdata = { labels: subholding, datasets: [ { label: 'Total Accounts', backgroundColor: [ "red", "green", "blue", "purple", "magenta", "yellow", "orange", "black" ], borderColor: 'rgba(200, 200, 200, 0.75)', hoverBackgroundColor: 'rgba(200, 200, 200, 1)', hoverBorderColor: 'rgba(200, 200, 200, 1)', data: TotalAccounts } ] }; var ctx = $("#mycanvas"); var barGraph = new Chart(ctx, { type: 'bar', data: chartdata }); function barGraph() { subholding.push("" + data[i].subholding); TotalAccounts.push(data[i].TotalAccounts); data.update(); } }, error: function (data) { console.log(data); } }); }); }, 30000); ```
33,089,808
Hey everyone, so I am trying to build a small sample printing app on Android and can't seem to print an existing PDF. There is plenty of documentation on creating a custom document with the canvas, but I already have the document. Basically I just want to be able to read in a PDF document and send it as a file output stream directly to the printer to be printed. Any help is appreciated.
2015/10/12
[ "https://Stackoverflow.com/questions/33089808", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4935793/" ]
We can simply achieve this by creating a custom `PrintDocumentAdapter` PdfDocumentAdapter.java ``` public class PdfDocumentAdapter extends PrintDocumentAdapter { Context context = null; String pathName = ""; public PdfDocumentAdapter(Context ctxt, String pathName) { context = ctxt; this.pathName = pathName; } @Override public void onLayout(PrintAttributes printAttributes, PrintAttributes printAttributes1, CancellationSignal cancellationSignal, LayoutResultCallback layoutResultCallback, Bundle bundle) { if (cancellationSignal.isCanceled()) { layoutResultCallback.onLayoutCancelled(); } else { PrintDocumentInfo.Builder builder= new PrintDocumentInfo.Builder(" file name"); builder.setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT) .setPageCount(PrintDocumentInfo.PAGE_COUNT_UNKNOWN) .build(); layoutResultCallback.onLayoutFinished(builder.build(), !printAttributes1.equals(printAttributes)); } } @Override public void onWrite(PageRange[] pageRanges, ParcelFileDescriptor parcelFileDescriptor, CancellationSignal cancellationSignal, WriteResultCallback writeResultCallback) { InputStream in=null; OutputStream out=null; try { File file = new File(pathName); in = new FileInputStream(file); out=new FileOutputStream(parcelFileDescriptor.getFileDescriptor()); byte[] buf=new byte[16384]; int size; while ((size=in.read(buf)) >= 0 && !cancellationSignal.isCanceled()) { out.write(buf, 0, size); } if (cancellationSignal.isCanceled()) { writeResultCallback.onWriteCancelled(); } else { writeResultCallback.onWriteFinished(new PageRange[] { PageRange.ALL_PAGES }); } } catch (Exception e) { writeResultCallback.onWriteFailed(e.getMessage()); Logger.logError( e); } finally { try { in.close(); out.close(); } catch (IOException e) { Logger.logError( e); } } }} ``` Now call print by using `PrintManager` ``` PrintManager printManager=(PrintManager) getActivityContext().getSystemService(Context.PRINT_SERVICE); try { PrintDocumentAdapter printAdapter = new 
PdfDocumentAdapter(Settings.sharedPref.context, filePath); printManager.print("Document", printAdapter, new PrintAttributes.Builder().build()); } catch (Exception e) { Logger.logError(e); } ```
For those interested in the Kotlin version of the [Karthik Bollisetti](https://stackoverflow.com/a/49298355/7680523) answer, here it is. The `PdfDocumentAdapter` is rewritten like this ``` class PdfDocumentAdapter(private val pathName: String) : PrintDocumentAdapter() { override fun onLayout( oldAttributes: PrintAttributes?, newAttributes: PrintAttributes, cancellationSignal: CancellationSignal?, callback: LayoutResultCallback, bundle: Bundle ) { if (cancellationSignal?.isCanceled == true) { callback.onLayoutCancelled() return } else { val builder = PrintDocumentInfo.Builder(" file name") builder.setContentType(PrintDocumentInfo.CONTENT_TYPE_DOCUMENT) .setPageCount(PrintDocumentInfo.PAGE_COUNT_UNKNOWN) .build() callback.onLayoutFinished(builder.build(), newAttributes != oldAttributes) } } override fun onWrite( pageRanges: Array<out PageRange>, destination: ParcelFileDescriptor, cancellationSignal: CancellationSignal?, callback: WriteResultCallback ) { try { // copy file from the input stream to the output stream FileInputStream(File(pathName)).use { inStream -> FileOutputStream(destination.fileDescriptor).use { outStream -> inStream.copyTo(outStream) } } if (cancellationSignal?.isCanceled == true) { callback.onWriteCancelled() } else { callback.onWriteFinished(arrayOf(PageRange.ALL_PAGES)) } } catch (e: Exception) { callback.onWriteFailed(e.message) } } } ``` Note that the second argument to `onLayoutFinished` signals whether the layout changed, so it should be `newAttributes != oldAttributes`, matching the Java version's `!printAttributes1.equals(printAttributes)`. Then call the PrintManager in your code like this ``` val printManager : PrintManager = requireContext().getSystemService(Context.PRINT_SERVICE) as PrintManager try { val printAdapter = PdfDocumentAdapter(file.absolutePath) printManager.print("Document", printAdapter, PrintAttributes.Builder().build()) } catch (e : Exception) { Timber.e(e) } ```
45,822
According to Wikipedia, a statement is either (a) a meaningful declarative sentence that is either true or false, or (b) that which a true or false declarative sentence asserts. Is the sentence "God exists" a statement? Some friends and I were discussing whether the above sentence is a statement or not. My answer to the question was no: since its truth value depends on personal opinions, it cannot be a logical statement (different persons give different truth values based on their opinions). Please give a clear answer. Thanks in advance.
2017/09/04
[ "https://philosophy.stackexchange.com/questions/45822", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/28528/" ]
The statement *God exists* is logical. But it is not necessarily true. You are confusing logic with truth. Logic is like mathematics: it is just a set of rules. It doesn't presuppose a truth value for *x* or *y*. *x=3* could be true or false. It does not depend on logic. Logic is just a method of reasoning regarding truthfulness and falsehood. But logic does not define *per se* what is true and what is false. The discipline that deals with such problems is philosophy, and from an empirical point of view, science. For example, for science, relativity is a valid theory. But it is not valid from a philosophical or metaphysical point of view, because time and space would be subjective constructs. From the point of view of the tool, logic, both values (the theory is true or false) are useful.
According to your theory of truth, the truth values of some propositions are determined by subjective conditions. In your opinion, the proposition "God exists" falls into that category, but in my opinion it doesn't. Therefore, which propositions fall into that category is itself a matter of opinion. Now, if we apply your theory of truth to the things you claim, someone might be of the opinion that the truth value of everything you say depends on opinion, and, accordingly, it would follow that there is nothing logical in what you say. For that reason, I would suggest adopting another theory of [truth](https://plato.stanford.edu/entries/truth/).
40,751,254
I'm programming a game (on a very basic level) for a school project in Java using BlueJ, and I'm trying to split one constructor, containing a lot of information, into two or three separate constructors. The initial code, before my changes, looks as follows: ``` public class Game //fields omitted.. { public Game() //initialise game { createRooms(); } private void createRooms() // initialise rooms and exits and set start room. { Room bedRoom, kitchen; bedRoom = new Room("in the bedroom"); kitchen = new Room("in the kitchen"); bedRoom.setExit("north", kitchen); kitchen.setExit("south", bedRoom); player = new Player(kitchen); } //Now, I want to separate the constructor initialising the exits from the rest. //I do so, by copying this to a new constructor below the createRooms constructor: //initial code omitted.. private void createRooms() // initialise rooms { Room bedRoom, kitchen; bedRoom = new Room("in the bedroom"); kitchen = new Room("in the kitchen"); } private void createExits() // initialise room exits and set start room. { Room bedRoom, kitchen; bedRoom.setExit("north", kitchen); kitchen.setExit("south", bedRoom); player = new Player(kitchen); } } ``` When I compile, I get the error message in the new constructor: "variable bedRoom might not have been initialised". I don't get this, since the variable was initialised in the previous constructor. Can this be solved from the information and code provided above? Thanks in advance! BR The Newbie.
2016/11/22
[ "https://Stackoverflow.com/questions/40751254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7196504/" ]
In your code, `bedRoom` is a *local variable* not an attribute, hence you need to assign a value to it when you declare it. Currently, it's uninitialized and it won't even compile, because if it did, it would raise a `NullPointerException` as soon as your code is executed. If you want to initialize variables inside the constructor so they can be seen everywhere, declare them *outside* as attributes: ``` public class Game { Room bedRoom; Room kitchen; } ``` And remove these lines from the other methods: ``` Room bedRoom, kitchen; ```
Variables `bedRoom` and `kitchen` have local scope; they don't exist outside the methods. You should declare them as class members. And `player` as well. Now, you should think twice when you put class member initialization code into a private method. Why? Because that method can be called after construction, and it's going to reset your member variables! The only reason I could think of is that you have *a lot of* member variables, and the constructor is getting really long. ``` class Game { private Room bedRoom; private Room kitchen; private Player player; public Game() { // And you should initialize class members directly in the // constructor. Most of the time. bedRoom = new Room("in the bedroom"); kitchen = new Room("in the kitchen"); player = new Player(kitchen); connectRooms(); } private void connectRooms() { bedRoom.setExit("north", kitchen); kitchen.setExit("south", bedRoom); } } ```
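Both answers hinge on the difference between local variables and class members. As a rough Python analogue (the class and names are invented for illustration, and Python's scoping differs from Java's in details), attributes set on the instance in `__init__` remain visible to other methods, while locals vanish when a method returns:

```python
class Game:
    def __init__(self):
        # Attributes on self play the role of Java class members:
        # they outlive __init__ and are visible to every method.
        self.bed_room = "in the bedroom"
        self.kitchen = "in the kitchen"
        self.connect_rooms()

    def connect_rooms(self):
        # Reads the attributes assigned in __init__ -- no redeclaration needed.
        self.exits = {"north": self.kitchen, "south": self.bed_room}

game = Game()
print(game.exits["north"])  # in the kitchen
```

Had `bed_room` been a plain local inside `__init__`, `connect_rooms` could not have seen it, which mirrors the compile error in the question.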
3,752,519
In the ribbon, I want to insert a picture or a link into a content page, but the "From Sharepoint" button is grayed out and I can only upload an image or insert a link "From Address". My field is rich text. I'm using SharePoint 2010. How can I make the link available? Thanks
2010/09/20
[ "https://Stackoverflow.com/questions/3752519", "https://Stackoverflow.com", "https://Stackoverflow.com/users/437852/" ]
Do not muck with the ribbon to make this work! It totally depends on how you want to use it. First of all, the Publishing feature must be active at the Site Collection level and on the site where you want to add your rich content. Then, if you activate the Wiki Home page feature, you will have the SharePoint option available on the page. If you want to use it in a custom list, it gets a bit more complicated. The normal Rich Text field greys out the SharePoint option, and you cannot add the Full HTML publishing field directly to a custom list. So the solution is to create a new site column based on the Full HTML field, and then add that site column to the custom list. This field is part of the publishing infrastructure, so it is only available on SharePoint Server 2010.
Usually ribbon button is grayed out if you didn't add CommandUIHandler element for it in your CustomAction XML. For more details, you can see this MSDN article: <http://msdn.microsoft.com/en-us/library/ff458385.aspx> Also, you can find useful this article (with sample code and screenshot): <http://blogs.msdn.com/b/jfrost/archive/2009/11/06/adding-custom-button-to-the-sharepoint-2010-ribbon.aspx>
47,510,689
I have an array fetched from a MySQL database. This is the array: ``` [ 0 => [ 'id' => '1997', 'lokasi_terakhir' => 'YA4121' ], 1 => [ 'id' => '1998', 'lokasi_terakhir' => 'PL2115' ], 2 => [ 'id' => '1999', 'lokasi_terakhir' => 'PL4111' ] ] ``` How can I count the `lokasi_terakhir` elements grouped by their first character? What is the best way? This is the goal: ``` [ "Y" => 1, "P" => 2 ] ``` Please advise.
2017/11/27
[ "https://Stackoverflow.com/questions/47510689", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4452417/" ]
Here are two refined methods. Which one you choose will come down to your personal preference (you won't find better methods). In the first, I am iterating the array, declaring the first character of the `lokasi_terakhir` value as the key in the `$result` declaration. If the key doesn't yet exist in the output array then it must be declared / set to `1`. After it has been instantiated, it can then be incremented -- I am using "pre-incrementation". The second method first maps a new array using the first character of the `lokasi_terakhir` value from each subarray, then counts each occurrence of each letter. ([Demonstrations Link](http://sandbox.onlinephpfunctions.com/code/e6def71b2ea02a9ca5829b4b019a7e0589d705e5)) Method #1: (foreach) ``` foreach($array as $item){ if(!isset($result[$item['lokasi_terakhir'][0]])){ $result[$item['lokasi_terakhir'][0]]=1; // instantiate }else{ ++$result[$item['lokasi_terakhir'][0]]; // increment } } var_export($result); ``` Method #2: (functional) ``` var_export(array_count_values(array_map(function($a){return $a['lokasi_terakhir'][0];},$array))); // generate array of single-character elements, then count occurrences ``` Output: (from either) ``` array ( 'Y' => 1, 'P' => 2, ) ```
You can group those items like this: ``` $array = [ 0 => [ 'id' => '1997', 'lokasi_terakhir' => 'YA4121' ], 1 => [ 'id' => '1998', 'lokasi_terakhir' => 'PL2115' ], 2 => [ 'id' => '1999', 'lokasi_terakhir' => 'PL4111' ] ]; $result = array(); foreach($array as $item) { $char = substr($item['lokasi_terakhir'], 0, 1); if(!isset($result[$char])) { $result[$char] = array(); } $result[$char][] = $item; } ```
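Both answers boil down to counting rows by the leading character of `lokasi_terakhir`. For comparison, here is the same grouping idea sketched in Python, with sample data copied from the question:

```python
from collections import Counter

# Rows mirroring the array in the question
rows = [
    {"id": "1997", "lokasi_terakhir": "YA4121"},
    {"id": "1998", "lokasi_terakhir": "PL2115"},
    {"id": "1999", "lokasi_terakhir": "PL4111"},
]

# Count occurrences of each leading character
counts = Counter(row["lokasi_terakhir"][0] for row in rows)
print(dict(counts))  # {'Y': 1, 'P': 2}
```

`Counter` plays the role of PHP's `array_count_values` in the functional variant above.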
6,840,367
When I write a SQL query, for example against a database with a table named "employees", which of these is the best practice? ``` SELECT 'name', 'surname', 'phone' WHERE 'city'='ny' FROM 'employees' ORDER BY 'name' SELECT name, surname, phone, WHERE city=ny FROM employees ORDER BY name ``` or ``` SELECT employees.name, employees.surname WHERE employees.city=ny ORDER BY employee.name ``` And why? Is there a standard for this?
2011/07/27
[ "https://Stackoverflow.com/questions/6840367", "https://Stackoverflow.com", "https://Stackoverflow.com/users/864788/" ]
``` SELECT `name`, `surname`, `phone` FROM `employees` WHERE `city`='ny' ORDER BY `name` ``` Note there's a difference between ` and ' (the first is used for the names of fields, and the other one is for strings). Although the ` symbol is only strictly [necessary](https://stackoverflow.com/questions/261455/using-backticks-around-field-names) when, e.g., the name has special characters or is an SQL keyword.
Though it's mostly a matter of personal style, some forms have their advantages. My preference: ``` SELECT e.`name`, e.`surname`, e.`phone` FROM `employees` e WHERE e.`city`= 'ny' OR e.`city` = 'wa' ORDER BY e.`name` ``` 1. Keywords in uppercase, tablenames in lowercase (if you create your tables lowercase or have set them to be case-insensitive) 2. Each keyword on a different line 3. Each table gets an alias (but without the explicit `AS`: employees AS e) 4. Always specify the table name before a column name. This way, you safely can add other tables that possibly have columns with the same name without worries. Another example: ``` SELECT e.`name`, e.`surname`, e.`phone`, u.rank FROM `employees` e [INNER] JOIN `unionreps` u ON e.ID = u.ID ``` 1. JOINs are written in ANSI-92 style, not ANSI-89 (from e,u where e.id=u.id)
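The backtick quoting and aliasing style discussed above can be tried out quickly. This sketch uses Python's `sqlite3` (SQLite also accepts MySQL-style backticks around identifiers, for compatibility) with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Backtick-quoted identifiers; single quotes reserved for string literals
cur.execute("CREATE TABLE `employees` (`name` TEXT, `surname` TEXT, `city` TEXT)")
cur.executemany(
    "INSERT INTO `employees` VALUES (?, ?, ?)",
    [("Ann", "Lee", "ny"), ("Bob", "Ray", "wa")],
)

# Table alias without AS, every column qualified by the alias
cur.execute("SELECT e.`name` FROM `employees` e WHERE e.`city` = 'ny' ORDER BY e.`name`")
print(cur.fetchall())  # [('Ann',)]
```

Qualifying every column with the table alias, as the answer recommends, keeps the query unambiguous once a join is added.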
3,263,832
I often have the case where I want to return an `Enumerable<T>` from a method or a property. To build the returned `Enumerable<T>`, I use a `List<T>` instance. After filling the list, I return it. I always thought that this was enough. But there is the possibility that the caller casts the resulting `Enumerable<T>` back into the `List<T>` and continues working with it. If at a later time I change the implementation of my method, the caller's code will fail. To avoid this, I could return list.ToArray or make a read-only list before returning it to the caller. But to me this seems like overkill. What do you think? Please note, **I will never return an internally used list** so that the caller can change my object's internal state. The question is only about a short-lived list that is built temporarily to hold the return values. ``` IEnumerable<string> GetAList() { List<string> aList = new List<string>(); aList.Add("a"); aList.Add("b"); return aList; } IEnumerable<string> GetAList() { List<string> aList = new List<string>(); aList.Add("a"); aList.Add("b"); return aList.ToArray<string>(); } ``` The examples are super-simple and in this case I would work with arrays from the beginning, but it's only to explain the question.
2010/07/16
[ "https://Stackoverflow.com/questions/3263832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/340628/" ]
I think that your problem is farfetched, because if someone improperly uses your methods (making assumptions about the internal implementation), then that is really their problem, not yours. But if you are using .NET 3.5, you can use [AsEnumerable](http://msdn.microsoft.com/en-us/library/bb335435.aspx) to completely hide the internal implementation: ``` return aList.AsEnumerable(); ``` Or simply wrap the list with yield: ``` foreach (string NextStr in aList) yield return NextStr; ```
If you change your implementation later on, and return something other than a `List<T>`, the caller code will indeed break. But the author of the caller code should know better than just casting to `List<T>` without checking that the return value actually *is* a list - you have promised nothing of the sort. As for myself, I tend to return `theList.AsEnumerable()`, just to be extra clear, but that is not necessary. The caller code will not know *anything* about what *implementation* of `IEnumerable<T>` is returned - just that *some* implementation is returned.
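The trade-off both answers discuss (hand back the internal list as-is vs. an immutable snapshot) exists in other languages too. As a rough Python analogue of the `ToArray` approach (the function name is illustrative), returning a tuple gives callers something they cannot mutate:

```python
def get_a_list():
    # Build with a mutable list internally...
    a_list = ["a", "b"]
    # ...but return an immutable snapshot, akin to ToArray / a read-only
    # wrapper in C#: the caller cannot alter what it receives.
    return tuple(a_list)

result = get_a_list()
print(result)  # ('a', 'b')
# result.append("c") would fail: tuples have no append and cannot be mutated.
```

As in the C# case, this is defensive rather than strictly necessary; a caller relying on the concrete return type is relying on something never promised.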
2,922,783
I have a folder called notes; naturally the notes will be categorized into folders, and within those folders there will also be sub-folders for sub-categories. Now my problem is I have a function that walks through 3 levels of sub-directories: ``` def obtainFiles(path): list_of_files = {} for element in os.listdir(path): # if the element is an html file then.. if element[-5:] == ".html": list_of_files[element] = path + "/" + element else: # element is a folder therefore a category category = os.path.join(path, element) # go through the category dir for element_2 in os.listdir(category): dir_level_2 = os.path.join(path,element + "/" + element_2) if element_2[-5:] == ".html": print "- found file: " + element_2 # add the file to the list of files list_of_files[element_2] = dir_level_2 elif os.path.isdir(element_2): subcategory = dir_level_2 # go through the subcategory dir for element_3 in os.listdir(subcategory): subcategory_path = subcategory + "/" + element_3 if subcategory_path[-5:] == ".html": print "- found file: " + element_3 list_of_files[element_3] = subcategory_path else: for element_4 in os.listdir(subcategory_path): print "- found file:" + element_4 ``` Note that this is still very much a work in progress. It's very ugly in my eyes... What I am trying to achieve here is to go through all the folders and sub-folders and put all the file names in a dictionary called "list_of_files", with the name as "key" and the full path as "value". The function doesn't quite work just yet, but I was wondering how one would use the os.walk function to do a similar thing? Thanks
2010/05/27
[ "https://Stackoverflow.com/questions/2922783", "https://Stackoverflow.com", "https://Stackoverflow.com/users/154688/" ]
Based on your short descriptions, something like this should work: ``` list_of_files = {} for (dirpath, dirnames, filenames) in os.walk(path): for filename in filenames: if filename.endswith('.html'): list_of_files[filename] = os.sep.join([dirpath, filename]) ```
an alternative is to use generator, building on @ig0774's code ``` import os def walk_through_files(path, file_extension='.html'): for (dirpath, dirnames, filenames) in os.walk(path): for filename in filenames: if filename.endswith(file_extension): yield os.path.join(dirpath, filename) ``` and then ``` for fname in walk_through_files(): print(fname) ```
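Putting the two answers together, here is a self-contained sketch (with a throwaway directory tree built just for the demo) that produces the `{filename: full_path}` dictionary the question asked for:

```python
import os
import tempfile

# Throwaway tree: <tmp>/category/subcategory with a few files (names invented)
root = tempfile.mkdtemp()
sub = os.path.join(root, "category", "subcategory")
os.makedirs(sub)
for name in ("a.html", "b.html", "skip.txt"):
    open(os.path.join(sub, name), "w").close()

# os.walk visits every directory level, so no nested loops are needed
list_of_files = {}
for dirpath, dirnames, filenames in os.walk(root):
    for filename in filenames:
        if filename.endswith(".html"):
            list_of_files[filename] = os.path.join(dirpath, filename)

print(sorted(list_of_files))  # ['a.html', 'b.html']
```

Because `os.walk` recurses to any depth, the same loop handles three levels of sub-folders or thirty.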
5,526,983
Recently upgrading to PHP 5.3 has resulted in a slew of deprecation errors being shown on my pages. In php.ini I have display_errors off and error_reporting = E_ALL ^ E_DEPRECATED, but the errors still show. Ideas?
2011/04/03
[ "https://Stackoverflow.com/questions/5526983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/566179/" ]
Your script could be setting the error reporting level differently. Preferably at the end of the page that's having problems, run: ``` phpinfo(); ``` It will give you the global and local values for display_errors. It's likely been turned on at some point. If you establish that it's being turned back on, you'll need to find where it's turned back on, and remove that. Searching for ini_set() within your project will probably help.
Restart PHP and execute a script like this: ``` <?php phpinfo(); ?> ``` to confirm the changes took effect.
1,995,292
I'm writing a SQLite-powered Android application, but I keep getting a NullPointerException when I call my DatabaseHelper class. The code which appears to be causing the error is below: ``` public Cursor GetAllRows() { try { return db.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_PHRASE}, null, null, null, null, null); } catch (SQLException e) { Log.e("Exception on query", e.toString()); return null; } } ``` I have gone over and over the code and see no error, although I normally miss the easy stuff! Can anyone see something wrong? If you think the error exists outside this I can post more code, however I'm fairly certain this is the block causing the error... UPDATE: Full source of the DB adapter (this is based on the notepad example, if I remember correctly): ``` package com.trapp.tts; import android.content.ContentValues; import android.content.Context; import android.database.Cursor; import android.database.SQLException; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; import android.util.Log; public class DbAdapter { public static final String KEY_PHRASE = "phrase"; public static final String KEY_ROWID = "_id"; private static final String TAG = "DbAdapter"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; /** * Database creation sql statement */ private static final String DATABASE_CREATE = "create table phrases (_id integer primary key autoincrement, " + "phrase text not null);"; private static final String DATABASE_NAME = "db"; private static final String DATABASE_TABLE = " phrases"; private static final int DATABASE_VERSION = 1; private final Context mCtx; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DATABASE_CREATE); ContentValues cv=new ContentValues(); cv.put(KEY_PHRASE, "1"); db.insert("phrases", KEY_PHRASE,
cv); cv.put(KEY_PHRASE, "2"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "3"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "4"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "5"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "6"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "7"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "8"); db.insert("phrases", KEY_PHRASE, cv); cv.put(KEY_PHRASE, "9"); db.insert("phrases", KEY_PHRASE, cv); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS notes"); onCreate(db); } } /** * Constructor - takes the context to allow the database to be * opened/created * * @param ctx the Context within which to work */ public DbAdapter(Context ctx) { this.mCtx = ctx; } /** * Open the notes database. If it cannot be opened, try to create a new * instance of the database. If it cannot be created, throw an exception to * signal the failure * * @return this (self reference, allowing this to be chained in an * initialization call) * @throws SQLException if the database could be neither opened or created */ public DbAdapter open() throws SQLException { mDbHelper = new DatabaseHelper(mCtx); mDb = mDbHelper.getWritableDatabase(); return this; } public void close() { mDbHelper.close(); } /** * Create a new note using the title and body provided. If the note is * successfully created return the new rowId for that note, otherwise return * a -1 to indicate failure. 
* * @param title the title of the note * @param body the body of the note * @return rowId or -1 if failed */ public long createPhrase(String title, String body) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_PHRASE, title); return mDb.insert(DATABASE_TABLE, null, initialValues); } /** * Delete the note with the given rowId * * @param rowId id of note to delete * @return true if deleted, false otherwise */ public boolean deletePhrase(long rowId) { return mDb.delete(DATABASE_TABLE, KEY_ROWID + "=" + rowId, null) > 0; } /** * Return a Cursor over the list of all notes in the database * * @return Cursor over all notes */ public Cursor fetchAllPhrases() { return mDb.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_PHRASE}, null, null, null, null, null); } /** * Return a Cursor positioned at the note that matches the given rowId * * @param rowId id of note to retrieve * @return Cursor positioned to matching note, if found * @throws SQLException if note could not be found/retrieved */ public Cursor fetchPhrase(long rowId) throws SQLException { Cursor mCursor = mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_PHRASE}, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } /** * Update the note using the details provided. 
The note to be updated is * specified using the rowId, and it is altered to use the title and body * values passed in * * @param rowId id of note to update * @param title value to set note title to * @param body value to set note body to * @return true if the note was successfully updated, false otherwise */ public boolean updatePhrase(long rowId, String title, String body) { ContentValues args = new ContentValues(); args.put(KEY_PHRASE, title); return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0; } } ``` Calling code: ``` private void fillData() { mCursor = dbHelper.fetchAllPhrases(); startManagingCursor(mCursor); ListAdapter adapter = new SimpleCursorAdapter( this, android.R.layout.simple_list_item_1, mCursor, new String[] {"phrase"}, new int[] {} ); setListAdapter(adapter); } ``` It seems to crash on the call to `fetchAllPhrases()`: `mCursor = dbHelper.fetchAllPhrases();`
2010/01/03
[ "https://Stackoverflow.com/questions/1995292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/122547/" ]
Genericity has the advantage of being reusable. However, write things generically only if: 1. It doesn't take much more time to do that than to do it non-generically 2. It doesn't complicate the code more than a non-generic solution 3. You know you will benefit from it later However, **know your standard library**. The case you presented is already in the STL as `std::swap`. Also, remember that when writing generically using templates, you can optimize special cases by using template specialization. However, always do it **when it's needed for performance**, not as you write it. Also, note that you have the question of run-time and compile-time performance here. Template-based solutions increase compile-time. Inline solutions *can, but need not,* decrease run-time. 'Cause *"Premature optimization and genericity is the root of all evil"*. And you can quote me on that -_-.
There are disadvantages to using templates all the time. It (can) greatly increase the compilation time of your program and can make compilation errors more difficult to understand. As taldor said, don't make your functions more generic than they need to be.
141,498
It's a bit lengthy, but I think I need to do an introduction: Two years ago I took a job and got a contract with a certain salary that at the time seemed OK. I wasn't sure how much I should ask for at the time (plus I was moving from my home country, which has different salary expectations) and, overall, I wanted the job and wanted to move here, so I accepted the offer. I got a good raise (20%) after 1 year here, and got my contract renewed. Though in the meantime I learned that other people who started in my company in the same position got an initial offer higher than my initial salary combined with the raise I got after a year. I subsequently asked for a new raise, which was initially denied because I had already got a raise a few months before. I therefore told my manager I was aware I was getting less money than my colleagues who started later (not sure if that was unprofessional?), and I asked whether that was due to them having better skills, and if that was the case, to help me understand and give me feedback on which gaps I needed to fill. They weren't able to tell me anything or point to anything that they are doing better (also because they never asked my supervisor about my performance), but agreed to give me a small raise (I'm still getting slightly less, though). The topic sort of dropped at the time, also from my side, due to a couple of other personal reasons. Now, I'm about to start my third year here. I wanted to bring up the topic once again, because ultimately I'm not happy with how much I'm getting, plus I feel my work is not appreciated enough, since in my opinion I'm doing a pretty good job. I would like a 20% raise compared to what I'm earning now, which seems a lot, but would actually be only an average salary for a professional with my experience in my industry. I'm prepared to leave eventually if I don't get it, and I got other job offers recently.
Of course I would prefer to stay where I am now, but this situation is making me feel very frustrated at the moment. Do you think my employer's reasoning is fair? (They compare the raise as a percentage of my current salary, by which measure 20% is a lot, rather than to industry average salaries, by which measure my salary is low.) Or is it my fault for having accepted such a low salary when I started?
2019/08/02
[ "https://workplace.stackexchange.com/questions/141498", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/102880/" ]
In my experience, it is always harder (if not impossible) to adjust your low starting salary by getting raises, than it is to start with a higher salary from the beginning. The biggest salary increases I managed to obtain in my career so far were exclusively by switching to another company, and starting there with a (much) bigger salary. I'm sure that, had I stayed with my first employer all these years, I wouldn't be at even half of my current salary. In order to get a big enough starting salary, you need to "win" only one negotiation with your prospective future boss. On the other hand, in order to increase your low starting salary to the amount you actually deserve (or think you deserve) usually involves securing many smaller raises by convincing your boss over and over again that your value to them has increased and thus your salary should also increase. To give you an example in terms of percentages: * The last time I switched companies, my salary increased by ~35%. * The time I switched before that it increased by ~20%. * The last time I got a raise while staying at my company, my salary was increased by 5%. * The time I got one while staying before that it was increased by a measly 3%. So, if you're aiming for a substantial raise, my advice is for you to move on to your next employer.
Sure! Just ask for it! If you have a realistic estimation of the worth of your job, then it will not be difficult to find someone that pays for it (in your current company or otherwise). Address whoever is responsible in a calm tone, explaining your perspective on the issue, but focusing on them rather than on yourself. This means talking about what you are doing for the company and how it impacts its finances (i.e. how much money you are bringing to the table), rather than just complaining about how unfair you think your situation is. But also, don't forget that negotiation skills are critical to successfully achieving this. You may want to work on those or, at least, acknowledge whether or not you currently lack them.
2,800,710
**Problem** Prove $$\lim\_{n \to \infty}\frac{\ln (n+1)}{(n+1)[\ln^2 (n+1)-\ln^2 n]}=\frac{1}{2},$$where $n=1,2,\cdots.$ **My Proof** Consider the function $f(x)=\ln^2 x.$ Notice that $f'(x)=2\cdot \dfrac{\ln x}{x}.$ By Lagrange's Mean Value Theorem, we have $$\ln^2(n+1)-\ln^2 n=f(n+1)-f(n)=f'(\xi)(n+1-n)=f'(\xi)=2\cdot \frac{\ln \xi}{\xi},$$where $n<\xi<n+1.$ Moreover, consider another function $g(x)=\dfrac{\ln x}{x}.$ Since $g'(x)=\dfrac{1-\ln x}{x^2}<0$ holds for all $x>e,$ hence $g(n+1)<g(\xi)<g(n)$ holds for every sufficiently large $n.$ Therefore, $$\frac{1}{2} \leftarrow\frac{1}{2}\cdot\dfrac{g(n+1)}{g(n)}<\dfrac{\ln (n+1)}{(n+1)[\ln^2 (n+1)-\ln^2 n]}=\frac{1}{2}\cdot\dfrac{g(n+1)}{g(\xi)}<\frac{1}{2}\cdot\dfrac{g(n+1)}{g(n+1)}=\frac{1}{2}.$$ Thus, by Squeeze Theorem, we have that the limit we want equals $\dfrac{1}{2}.$ *Am I right? The proof above is not natural to me. Any other proof ?*
2018/05/29
[ "https://math.stackexchange.com/questions/2800710", "https://math.stackexchange.com", "https://math.stackexchange.com/users/560634/" ]
You can use this $$ \lim\_{n \to \infty}\dfrac{\ln (n+1)}{(n+1)[\ln^2 (n+1)-\ln^2 n]} = $$ $$ =\lim\_{n \to \infty}\dfrac{\ln (n+1)}{(n+1)[\ln (n+1)-\ln n][\ln (n+1)+\ln n]} = $$ $$ =\lim\_{n \to \infty}\dfrac{\ln (n+1)}{\ln\left[\left(1 +\frac{1}{n}\right)^{n+1}\right][\ln (n+1)+\ln n]} = $$ $$ =\lim\_{n \to \infty}\dfrac{\ln (n+1)}{\ln (n+1)+\ln n} = \frac{1}{2} $$
The following fact is used in both @Virtuoz's solution and mine. That is > > $$\lim\_{n \to \infty}\frac{\ln(n+1)}{\ln n}=1.$$ > > > Now I give its proof, for completeness. **Proof 1** By L'Hospital's Rule, we have $$\lim\_{n \to \infty}\frac{\ln(n+1)}{\ln n}=\lim\_{n \to \infty}\frac{\dfrac{1}{n+1}}{\dfrac{1}{n}}=\lim\_{n \to \infty}\frac{n}{n+1}=\lim\_{n \to \infty}\dfrac{1}{1+\dfrac{1}{n}}=1.$$ **Proof 2** Denote $f(x)=\ln x$. By Lagrange's Mean Value Theorem, we have $$\ln(n+1)-\ln n=f(n+1)-f(n)=f'(\xi)(n+1-n)=\frac{1}{\xi},$$where $n<\xi<n+1$. Let $n \to \infty$. Then $\xi \to \infty.$ Thus, $\ln(n+1)-\ln n \to 0.$ Since also $\ln n \to \infty$, it follows that $$\lim\_{n \to \infty}\frac{\ln(n+1)}{\ln n}=\lim\_{n \to \infty}\left(\frac{\ln(n+1)-\ln n}{\ln n}+1\right)=0+1=1.$$
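As a numerical sanity check of the limit (not a proof), evaluating the sequence for growing $n$ shows it approaching $\frac{1}{2}$; a small Python sketch:

```python
import math

def a(n):
    # ln(n+1) / ((n+1) * (ln^2(n+1) - ln^2(n)))
    return math.log(n + 1) / ((n + 1) * (math.log(n + 1) ** 2 - math.log(n) ** 2))

# The values increase toward 0.5 as n grows
for n in (10, 1000, 10**6):
    print(n, a(n))
```

For $n = 10^6$ the value already agrees with $\frac{1}{2}$ to about six decimal places, consistent with both proofs above.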
85,064
I am trying to populate a second picklist with the field names of the sObject selected in the first picklist. Here is my class: ``` public with sharing class ExtractSobject{ public list<SelectOption> fields { get; set; } public String objectName { get; set; } public List<SelectOption> getSelectedobjnames() { List<Schema.SObjectType> obj = Schema.getGlobalDescribe().Values(); List<SelectOption> options = new List<SelectOption>(); options.add(new SelectOption('--Select Object--','--Select Object--')); for(Schema.SObjectType st : obj) { options.add(new SelectOption(st.getDescribe().getName(),st.getDescribe().getName())); } return options; } public String Sf{get;set;} public List<SelectOption> objFields{get; set;} public List<SelectOption> getSelectedobjFields() { SObjectType objTyp = Schema.getGlobalDescribe().get('Selectedobjnames'); DescribeSObjectResult objDef = objTyp.getDescribe(); Map<String, SObjectField> fields = objDef.fields.getMap(); Set<String> fieldSet = fields.keySet(); List<SelectOption> options = new List<SelectOption>(); options.add(new SelectOption('--Select Object--','--Select Object--')); for(String s:fieldSet) { SObjectField Sobjfields = fields.get(s); DescribeFieldResult selectedField = Sobjfields.getDescribe(); options.add(new SelectOption(selectedField.getName(),selectedField.getName())); } return options; } } ``` Page: 
``` <apex:page controller="ExtractSobject"> <apex:form > <apex:pageblock > <apex:pageblocksection > <apex:pageBlockSectionItem > <apex:outputlabel value="Select Object"/> <apex:selectList value="{!fields}" size="1"> <apex:selectoptions value="{!Selectedobjnames}"></apex:selectoptions> <apex:actionSupport event="onchange" rerender="a"/> </apex:selectList> </apex:pageBlockSectionItem> <apex:pageBlockSectionItem > <apex:outputPanel id="a"> <apex:outputLabel value="Object Fields" ></apex:outputLabel> <apex:selectList value="{!Sf}" size="1"> <apex:selectOptions value="{!SelectedobjFields}" /> </apex:selectList> </apex:outputPanel> </apex:pageBlockSectionItem> </apex:pageBlockSection> </apex:pageblock> </apex:form> </apex:page> ``` I am getting an exception > > Attempt to de-reference a null object > > >
2015/07/28
[ "https://salesforce.stackexchange.com/questions/85064", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/20643/" ]
In this line ``` SObjectType objTyp = Schema.getGlobalDescribe().get('Selectedobjnames'); DescribeSObjectResult objDef = objTyp.getDescribe(); ``` you are passing the literal string `'Selectedobjnames'`. There is no sObject with that API name, so `get()` returns `null`, and the subsequent `getDescribe()` call throws the "Attempt to de-reference a null object" exception. Pass the selected object's name instead, e.g. `Schema.getGlobalDescribe().get(objectName)`, where `objectName` holds the value chosen in the first picklist. I think that will solve your problem.
It seems time-consuming for us to read through the code and locate the issue, so instead I will provide some info on how to debug this yourself. For a VF page, enable development mode for your current user: [How to enable development mode](https://help.salesforce.com/HTViewHelpDoc?id=pages_dev_mode.htm&language=en_US). Refresh your page. Now you should get the detailed stack trace, which will hopefully help you resolve your issue.
154,225
I am trying to figure out the best way to publish my elderly father's life's work. He says he would be considered a "*fringe*" scientist or an independent researcher. Should we self-publish? The subject matter would be of great interest to those interested in earth-moon systems and the Egyptian pyramids. How should we proceed?
2020/08/21
[ "https://academia.stackexchange.com/questions/154225", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/-1/" ]
There are precedents for this. J.S. Bach assumed that his music would be forgotten after he died. It would have been, if it weren't for the efforts of Mendelssohn, Schumann and others. Nowadays Bach is considered by many to be the greatest composer who ever lived. His work is to be heard ubiquitously. > > For about 50 years after Bach’s death, his music was neglected. This > was only natural; in the days of Haydn and Mozart, no one could be > expected to take much interest in a composer who had been considered > old-fashioned even in his lifetime. > <https://www.britannica.com/biography/Johann-Sebastian-Bach/Reputation-and-influence> > > > With regard to publication by children of a parent's work, there is the case of Pierre Fermat. Fermat was a great mathematician by any measure. However, his widespread recognition by the general public was the result of his so-called Last Theorem. This came to light purely as a result of his son reproducing a handwritten note Pierre had made in a margin. > > Written in 1637, it wasn’t actually his last theorem, but nobody knew > about it until his son found it five years after Fermat died. Years > later, after all of Fermat’s other theorems had surrendered to > mathematical proof, this remarkable theorem resisted all assaults. > <https://www.famousscientists.org/pierre-de-fermat/> > > > --- Self-publishing is always an option even if it ends up being simply a treasured family keepsake. Without knowing the details (Did he visit and excavate the pyramids? Did he decipher hieroglyphics that no-one else could?) it is difficult for us to answer. I think you need to consult an expert in the field. Alternatively you need to get a publishing agent in the field of interest. They will negotiate the traps and tricks of the publishing industry for you (at a fee of course).
A few years back I was in a position where I had to submit a paper but could not use any affiliation. You could say I was between jobs, or in a job where I could not use my affiliation for independent research. I spent a few hundred dollars and registered a company. I registered with IEEE to get an email address. This is perfectly legal. In your case, the purpose of your company is to do research into the ancient ways of the Egyptians, or whatever. You are both its stakeholders. Now you are not independent researchers; you work for a company. Legally and for all practical purposes, you are no less than researchers affiliated with universities or research labs.
21,670,988
I am using the **colorbox** plugin with an iframe. The `iframe` source is another HTML page which has an image carousel, where I need to write a click event to get the id of the clicked image. The ordinary click event on the HTML page in document ready is not working; I tried with live as well. I also tried having a click event inside the colorbox onload function and failed. It's a web application with **asp.net 4.0**. Please help me out with writing this click event. The script where I am calling the colorbox: ``` $(".iframe").colorbox({ iframe: true, width: "70%", height: "80%" }); $(".iframe").colorbox({ onLoad: function () { alert('onLoad: colorbox has started to load the targeted content'); $('ul#boutiquegallery img').on('click', function () { alert('clciked img'); }); }, onComplete: function () { alert('onComplete: colorbox has displayed the loaded content'); }, //onClosed: function () { alert('onClosed: colorbox has completely closed'); } }); ``` The script that I have tried in my HTML: ``` (function ($) { $('#boutique').boutique(); //$("iframe.cboxIframe").contents().find($('ul#boutique img').live('click', function () { // var id = $(this).attr('id'); // alert(id); //})); //$("iframe.cboxIframe").load(function () { // alert('hi'); // // write your code here // $('a').click(function () { // alert('clciked'); // }); // $('img').live('click', function () { // alert('clciked img'); // }); //}); })(jQuery); $(document).ready(function () { //$('img').on('click', function () { // alert('clciked img'); //}); //$('iframe.cboxIframe').load(function () { // $('iframe.cboxIframe').contents().find('img').live({ // click: function () { // alert('clicked img'); // } // }); //}); }); ```
2014/02/10
[ "https://Stackoverflow.com/questions/21670988", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1191918/" ]
`DataView` can be used to *filter* your data, as in: ``` public DataTable Filter(DataTable table) { DataView view = new DataView(table); view.RowFilter = "Camp IS NULL"; table = view.ToTable(); return table; } ```
Use `DataView` to get a filtered result from your `DataTable`: ``` public DataTable Filter(DataTable table) { DataView view = new DataView(table); view.RowFilter = "Camp IS NULL"; return view.ToTable(); } ```
15,708
I have two networks at work: when I use my wireless connection I need IE to use one set of proxy LAN settings, and when I am plugged in I need a different set. I have been looking for a way to script the proxy settings: HTTP, FTP and Secure. I also need the "exemptions". I can't buy anything; my company is in a buying pinch. My IT guys groaned when I asked if I could install Firefox, because I was going to use Firefox for wireless and IE for LAN, but they yelled at me. Edit: I can't install anything for this. This is a "non-issue" to my IT guys. Edit: I have IE 8 installed.
2009/07/30
[ "https://superuser.com/questions/15708", "https://superuser.com", "https://superuser.com/users/1315/" ]
Absolutely! Almost all programs these days keep their settings within the registry somewhere. So if it is in the registry and you want to automate it, you are in luck. The first step is to find the registry keys that contain the specific configuration that you are going to automate. Once you have the registry keys identified, export those keys to a .REG file. Then write yourself a script which imports the .REG file from the command line. The example REG file content below is thanks to Ivo: ``` REGEDIT4 [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] "MigrateProxy"=dword:00000001 "ProxyEnable"=dword:00000001 "ProxyHttp1.1"=dword:00000000 "ProxyServer"="http://ProxyServername:80" "ProxyOverride"="<local>" ``` An example that actually puts the contents of the REG file into the registry is: ``` C:\> REGEDIT /S myregsettings.reg ``` The `/S` switch imports the file silently, so you are not prompted with the usual confirmation dialog.
Now I really hate network settings, so I can't guarantee [it works.](http://nscsysop.hypermart.net/setproxy.html) **Push the Browser Settings in the Login Script (for Internet Explorer)** Internet Explorer stores proxy settings in the registry. This makes it particularly easy to update, using a variety of methods. Even if your users do not normally use Internet Explorer as their usual browser, you will want to configure the proxy settings because many other applications key off proxy settings in Internet Explorer. In this method, you determine the proper registry key for your version of IE, export the settings to a .REG file, and then use REGEDIT in the login script to push the settings to the PC. To create the SETPXY.REG file, open Notepad and type in the following: ``` REGEDIT4 [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] "ProxyServer"="<your proxy IP address>:8080" "ProxyEnable"=dword:00000001 "ProxyOverride"="<local>" ``` Change `<your proxy IP address>` to the private IP address of your HTTP proxy, and `<local>` to 127.0.0.1 or any combination of URLs for which you want to bypass the proxy. (I advise you to experiment with this for a while to get the syntax right.) Check [this Knowledge Base post](http://support.microsoft.com/?id=819961) for more info on the registry part.
1,826,705
I have an embedded system that currently keeps track of seconds until an event is supposed to occur using a real-time clock driven by a watch crystal. Now it needs to keep track of the actual date and time. So, I need to be able to calculate the day, month, year, hour, minute and second from a start date/time and offset in seconds. Could anyone point me in the right direction for taking into account leap years, daylight savings time (DST) and other complications? --- *Hardware solutions are not an option as this feature is being added to an existing product. An RTC peripheral is integrated into the MCU chosen for the next generation device.*
2009/12/01
[ "https://Stackoverflow.com/questions/1826705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1491/" ]
I'm bored, couldn't resist trying a solution. Here's a prototype in ruby - should be clear enough to translate to C. Given `offset` and a start date stored as: `Baseyear, Baseday, Basesec` where day 0 = Jan1, you can calculate the date as ``` #initialize outputs year= Baseyear day = Baseday sec = Basesec+offset #days & seconds remaining in the current year is_leap = is_leap_year(year) days_remaining = 365+(is_leap ? 1 : 0) - day secs_remaining = SEC_PER_DAY*days_remaining #advance by year while (sec>=secs_remaining) sec-=secs_remaining year+=1 is_leap = is_leap_year(year) days_remaining = 365+(is_leap ? 1 : 0) secs_remaining = SEC_PER_DAY*days_remaining day=0 end #sec holds seconds into the current year, split into days+seconds day += sec / SEC_PER_DAY day = day.to_i #cast to int sec %= SEC_PER_DAY #lookup month for i in (0..11) dpm = DAYS_PER_MONTH[i] # =[31,28,31,30,...] if (i==1 && is_leap) dpm+=1 end if day < dpm month = i break else day-=dpm end end day+=1 #1-based hour = sec/3600 min = (sec%3600)/60 sec = sec%60 puts "%s %d, %d @ %02d:%02d:%02d" % [MONTHNAME[month],day,year, hour, min, sec] ``` It should be easy to add a check that the day is between the begin and end days for DST in the current locale, and adjust the hour accordingly.
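For comparison, here is a Python translation of the same algorithm (a sketch; the names are mine), cross-checked against the standard library's own calendar arithmetic:

```python
from datetime import datetime, timedelta

DAYS_PER_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
SEC_PER_DAY = 86400

def is_leap_year(year):
    return year % 400 == 0 or (year % 4 == 0 and year % 100 != 0)

def civil_from_offset(base_year, base_day, base_sec, offset):
    """base_day 0 = Jan 1; returns (year, month, day, hour, minute, second)."""
    year, day, sec = base_year, base_day, base_sec + offset
    # advance whole years
    while True:
        days_in_year = 366 if is_leap_year(year) else 365
        secs_remaining = SEC_PER_DAY * (days_in_year - day)
        if sec < secs_remaining:
            break
        sec -= secs_remaining
        year += 1
        day = 0
    # split the leftover seconds into days + seconds within the day
    day += sec // SEC_PER_DAY
    sec %= SEC_PER_DAY
    # look up the month
    for i, dpm in enumerate(DAYS_PER_MONTH):
        if i == 1 and is_leap_year(year):
            dpm += 1
        if day < dpm:
            month = i + 1
            break
        day -= dpm
    return (year, month, day + 1, sec // 3600, (sec % 3600) // 60, sec % 60)

# quick cross-check against datetime
d = datetime(2000, 1, 1) + timedelta(seconds=10**9)
assert civil_from_offset(2000, 0, 0, 10**9) == (d.year, d.month, d.day,
                                                d.hour, d.minute, d.second)
```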
The following function determines whether a given year is a leap year: ``` bool is_leap_year(int year) { return ((0 == year % 400) || ((0 == year % 4) && (0 != year % 100))); } ```
411,552
I'm looking for a solution for using rsync between 2 remote servers. It seems like it's not possible. Does anyone know why it's not possible? I ask because if I know the reason, maybe I can use another tool to make it possible. Update: I have a hypervisor on my primary site with n VMs running on it. I have another hypervisor on my secondary site which I want to be the backup server for my primary server. For keeping the files synced between these two, the best way I've found is using rsync. The problem is I don't want to run my code (rsync) on the VMs because I want my product to be agentless. In this case I need to add a third computer to run the code. Now I need to rsync between my primary and secondary site, which I'm stuck with because rsync doesn't work between two remote servers.
2012/07/26
[ "https://serverfault.com/questions/411552", "https://serverfault.com", "https://serverfault.com/users/79147/" ]
If server1 and server2 can connect to each other, the above solutions will work. Otherwise I don't have a solution for rsync, but an old `tar` trick will work. I'm doing it now. Prep: create an ssh key with no passphrase and install it into the `authorized_keys` file of the account at each server. Then do this: ``` $ ssh -i .ssh/my_key bob@server1 'tar -cf - sourcedir' | \ ssh -i .ssh/my_key carol@server2 'tar -xvpf - -C targetdir' ``` Note that you will end up with targetdir/sourcedir on server2. Credit: The idea came from the O'Reilly System Administration book with the armadillo on the cover. I just inserted a workstation in the middle since server1 can't talk directly to server2.
``` gw_en_2_segmentos# ssh cuenta@ip_origen 'cd /carpeta_origen;star \ -acl -artype=exustar -z \ -c -f=- *' | ssh cuenta@ip_destino \ 'cd /carpeta_destino;star \ -acl -artype=exustar -z -xv -f=-' ``` Will give you a remote-to-remote copy, with file permissions and POSIX ACLs
1,184,459
I have being trying to install Ubuntu to run alongside (or instead of) Windows without success. Sure, I download the .iso file, run Rufus to install Ubuntu onto my 29 MB USB stick. Indeed Rufus does just that, however, when I try to boot from the USB everything hangs. I did install Ubuntu onto my Mac using VBox; sure it worked, like a bag of nails, terrible. I am having no success with Windows. VBox 'fails' to start Ubuntu when I take that route and any other attempt at getting Ubuntu to run fail as well? Any help to get Ubuntu running on my PC would be greatly greatly appreciated. Thanks in anticipation, Merv
2019/10/28
[ "https://askubuntu.com/questions/1184459", "https://askubuntu.com", "https://askubuntu.com/users/1010176/" ]
Obviously it has been removed for security reasons. It popped up first in the Debian community: [#916310 - 4.6 should not be shipped in a stable release - Debian Bug report logs](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=916310) Then in [Launchpad](https://bugs.launchpad.net/ubuntu/+source/phpmyadmin/+bug/1837775) There is an Ubuntu Forums thread here: [phpmyadmin missing from repository](https://ubuntuforums.org/showthread.php?t=2428229) It seems that some Debian developers have joined the phpMyAdmin project to fix the problem in future releases.
Ubuntu "focal" 20.04 has now phpMyAdmin 4.9.2 <https://launchpad.net/ubuntu/focal/+package/phpmyadmin> Track progress for 19.10 (if some can be done) in <https://github.com/phpmyadmin/phpmyadmin/issues/15515>
29,710
"Pixel" always confuses me whenever I do web banner work. I use CorelDraw X5 for my work. I usually prefer to use inches rather than pixels, as inches are much more intuitive for me. But when I do an "inch to pixel" conversion, I get confused: * In CorelDraw: `1 inch = 300px`. * At [AuctionRepair.com](http://auctionrepair.com/pixels.html): `1 inch = 75 pixels`. * At [UnitConversion.org](http://www.unitconversion.org/typography/inchs-to-pixels-y-conversion.html): `1 inch = 96 pixels`. Now my client is asking for a 1900x1200px banner. How should I do the conversion?
2014/04/16
[ "https://graphicdesign.stackexchange.com/questions/29710", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/22097/" ]
One more confusion to complete the discussion: In website design and layout, [one "CSS pixel" is *always* equal to 1/96th of a "CSS inch"](http://docs.webplatform.org/wiki/css/data_types/length), regardless of screen resolution. This was done because so many early websites that used pixel-based measurements for layout assumed a standard screen resolution. In order that the actual size of text and other content remained consistent, web browsers find an approximately even conversion between screen pixels and layout "px" units, and adjust their interpretation of inches so there is always a consistent ratio between "px" and "in". The same goes for all units that are based on inches, such as "pt" for font -- [12pt font-size is always equal to 16px font-size](http://fiddle.jshell.net/HBLLp/). Now that screen resolutions are commonly much better than 96px-per-inch, the concept of "dots" has been borrowed from the print world to talk about the actual physical resolution of the screen. There's even a dots-per-pixel (dppx) unit for describing physical resolution proportional to the standard CSS pixel. For example, the new "Retina" Mac & iPhone screens are 2dppx resolution. For your situation, I echo the advice of others: ask your client to clarify what is needed. That said, generally if someone is asking for art with pixel measurements, they are using it for websites. Create your image at exactly the pixel dimensions they ask for, and set your image-editing software to use a resolution of 96dpi in order to lay out your rulers and text size in units you are comfortable using. That said, it might not hurt to actually create your image at twice that resolution (i.e., 192dpi), in order to have a version suitable for Retina screens if your client discovers that they want it. Then, your image software should have an easy way for you to save your image at a lower resolution & smaller file size. It's much easier to convert an image to a lower resolution than the reverse!
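These fixed ratios (96 px per CSS inch, 72 pt per inch) make the conversions simple arithmetic; a small sketch:

```python
CSS_PX_PER_INCH = 96   # CSS reference pixel density
POINTS_PER_INCH = 72   # typographic points per inch

def inches_to_px(inches):
    return inches * CSS_PX_PER_INCH

def pt_to_px(pt):
    # 1pt = 1/72 in and 1in = 96px, so 1pt = 96/72 px
    return pt * CSS_PX_PER_INCH / POINTS_PER_INCH

print(inches_to_px(1))         # 96
print(pt_to_px(12))            # 16.0, i.e. 12pt text == 16px text
print(1900 / CSS_PX_PER_INCH)  # a 1900px-wide banner is roughly 19.8 CSS inches
```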
A pixel is the smallest addressable element of a display. For example, a display monitor consists of many pixels which together form the image on your screen.
11,285,923
I have a tablix with lots of rows that span over multiple pages. I have set the Tablix property Repeat header rows on each page but this does not work. I read somewhere that this is a known bug in Report Builder 3.0. Is this true? If not, is there something else that needs to be done?
2012/07/01
[ "https://Stackoverflow.com/questions/11285923", "https://Stackoverflow.com", "https://Stackoverflow.com/users/804503/" ]
It depends on the tablix structure you are using. In a table, for example, you do not have column groups, so Reporting Services does not recognize which textboxes are the column headers and setting RepeatColumnHeaders property to True doesn't work. Instead, you need to: 1. Open Advanced Mode in the Groupings pane. (Click the arrow to the right of the Column Groups and select Advanced Mode.) * ![Screenshot](https://i.imgur.com/p0kXASk.png) 2. In the Row Groups area (not Column Groups), click on a Static group, which highlights the corresponding textbox in the tablix. Click through each Static group until it highlights the leftmost column header. This is generally the first Static group listed. 3. In the Properties window, set the `RepeatOnNewPage` property to True. * ![Screenshot](https://i.imgur.com/ysNeX8H.png) 4. Make sure that the `KeepWithGroup` property is set to `After`. The `KeepWithGroup` property specifies which group to which the static member needs to stick. If set to `After` then the static member sticks with the group after it, or below it, acting as a group header. If set to `Before`, then the static member sticks with the group before, or above it, acting as a group footer. If set to `None`, Reporting Services decides where to put the static member. Now when you view the report, the column headers repeat on each page of the tablix. [This](http://youtube.com/watch?v=WAO819-gkKw) video shows how to set it exactly as the answer described.
Open `Advanced Mode` in the Groupings pane. (Click the arrow to the right of the Column Groups and select Advanced Mode.) In the Row Groups area (not Column Groups), click on a Static group, which highlights the corresponding textbox in the tablix. Click through each Static group until it highlights the leftmost column header. This is generally the first Static group listed. In the properties grid: * set `KeepWithGroup` to `After` * set `RepeatOnNewPage` to `True` for repeating headers * set `FixedData` to `True` for keeping headers visible
1,301,568
How do I execute a stored procedure and get the return value? The code below always returns a null object. The stored procedure has been tested in the database using the same parameters as in the code, but the SubSonic call always returns null. When executed in the database via SQL, it returns the correct values. This is using SubSonic 3.0.0.3. ``` myDB db = new myDB(); StoredProcedure sp = db.GetReturnValue(myParameterValue); sp.Execute(); int? myReturnValue = (int?)sp.Output; ``` In the above code, sp.Output is always null. When executed in the database, the returned variable is a valid integer (0 or higher) and is never null. Stored procedure code below: ``` CREATE PROCEDURE [dbo].[GetReturnValue] @myVariable varchar(50) AS declare @myReturn int BEGIN set @myReturn = 5; return @myReturn; END ``` When executing the stored proc in SQL Server, the returned value is '5'.
2009/08/19
[ "https://Stackoverflow.com/questions/1301568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/158638/" ]
I copied your sproc and stepped through the SubSonic code, and `.Output` is never set anywhere. A workaround would be to use an output parameter and refer to it after executing: `sproc.OutputValues[0];`
Here's a simple way to do it: In the stored procedure, instead of using RETURN, use SELECT like this: ``` SELECT @@ROWCOUNT ``` or ``` SELECT @TheIntegerIWantToReturn ``` Then in the code use: ``` StoredProcName.ExecuteScalar() ``` This will return the single integer you SELECTED in your stored procedure.
55,310,734
How do I add more indentation to the file tree? It currently has only a little indentation, and I want to increase it, just like in NetBeans. Check the image: ![enter image description here](https://i.stack.imgur.com/hIQcO.png)
2019/03/23
[ "https://Stackoverflow.com/questions/55310734", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11195843/" ]
If you just want to change the indentation you can set these options: Press Ctrl+Shift+P -> Go to Preferences: Open Settings (JSON) ``` "workbench.tree.indent": 18, ``` You can add guidelines as well with: ``` "workbench.tree.renderIndentGuides": "always", ``` You can also change their color using: ``` "workbench.colorCustomizations": { "tree.indentGuidesStroke": "#008070" }, ```
``` { "workbench.tree.indent": 20, // just paste this line of code in setting.json file "editor.mouseWheelZoom": true // for zoom in & out font size with Ctrl+ mouse scroll } ```
35,844,791
I want to test function calls with optional arguments. Here is my code: ``` list_get() list_get(key, "city", 0) list_get(key, 'contact_no', 2, {}, policy) list_get(key, "contact_no", 0) list_get(key, "contact_no", 1, {}, policy, "") list_get(key, "contact_no", 0, 888) ``` I am not able to parametrize it due to the optional arguments, so I have written a separate test function for each API call in `pytest`. I believe there should be a better way of testing this.
2016/03/07
[ "https://Stackoverflow.com/questions/35844791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2710873/" ]
In addition to the answers from @forge and @ezequiel-muns, I suggest using some sugar from [`pyhamcrest`](https://github.com/hamcrest/PyHamcrest): ``` import pytest from hamcrest import assert_that, calling, is_not, raises @pytest.mark.parametrize('func, args, kwargs', [ [list_get, (), {}], [list_get, (key, "city", 0), {}], [list_get, (key, "contact_no", 1, {}, policy, ""), {}], [list_get, (), {'key': key}], ]) def test_func_dont_raises(func, args, kwargs): assert_that(calling(func).with_args(*args, **kwargs), is_not(raises(Exception))) ```
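If pulling in hamcrest is not desirable, the same pack-the-call idea works with plain tuple/dict unpacking; a minimal sketch with a hypothetical `list_get` stub standing in for the real function:

```python
# Hypothetical stub: the real list_get presumably takes a key, a field
# name, an index, and further optional arguments.
def list_get(key=None, field=None, index=0, extra=None, policy=None, tail=None):
    return (key, field, index)

# each case is an (args, kwargs) pair, so one list covers every arity
cases = [
    ((), {}),
    (("k", "city", 0), {}),
    (("k", "contact_no", 1, {}, None, ""), {}),
    ((), {"key": "k"}),
]

# *args/**kwargs unpacking lets a single loop exercise all of them
results = [list_get(*args, **kwargs) for args, kwargs in cases]
print(results)
```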
For future readers who come to this question trying to set up `@parametrize`d tests that generate a Cartesian set of parameters AND sometimes do not want to pass a given parameter at all (if optional), using a filter on `None` values will help: ``` import inspect import pytest def function_under_test(foo="foo-default", bar="bar-default"): values = locals() print([values[arg] for arg in inspect.getfullargspec(function_under_test).args]) @pytest.mark.parametrize("foo", [None, 1, 2]) @pytest.mark.parametrize("bar", [None, "a", "b"]) def test_optional_params(foo, bar): args = locals() filtered = {k: v for k, v in args.items() if v is not None} function_under_test(**filtered) # <-- Notice the double star ``` Sample run: ``` PASSED [ 11%]['foo-default', 'bar-default'] PASSED [ 22%][1, 'bar-default'] PASSED [ 33%][2, 'bar-default'] PASSED [ 44%]['foo-default', 'a'] PASSED [ 55%][1, 'a'] PASSED [ 66%][2, 'a'] PASSED [ 77%]['foo-default', 'b'] PASSED [ 88%][1, 'b'] PASSED [100%][2, 'b'] ```
26,935,983
I have the Thinking Sphinx gem, and I am using it to replace my current advanced search setup. I am storing each user's DOB and then converting it into an age in the User model. User.rb: ``` def age now = Time.now.utc.to_date now.year - birthday.year - ((now.month > birthday.month || (now.month == birthday.month && now.day >= birthday.day)) ? 0 : 1) end ``` I am not sure how I can implement a search based on user ages. For example, a visitor selects an age range from a drop-down box, say 22 to 27; this should return users who are between the ages of 22 and 27. Could someone provide an example of how this should look?
2014/11/14
[ "https://Stackoverflow.com/questions/26935983", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2687095/" ]
One way to accomplish this would be to add an age column alongside the DOB entered by users, and have a trigger that updates all ages once per day. You could index that easily and query using BETWEEN.
I would store the date of birth as a timestamp attribute, and then filter the Sphinx search results by a range of the first and last dates that are valid for the age/ages you're after. This avoids having your User model save the age (which would need to be updated regularly, both in the database and in Sphinx).
387
Sometimes when I am not at practice, I will visualize playing scales or a particular piece on my instrument. I know that visualization is often used by athletes, but is it often used among musicians? If so, is there advice on how it can be done most effectively?
2011/05/03
[ "https://music.stackexchange.com/questions/387", "https://music.stackexchange.com", "https://music.stackexchange.com/users/22/" ]
I would consider "visualization," as such, to be more a method of dealing with performance anxiety issues. However, there is plenty of concrete, actual, musical practice you can do inside your head. It's no replacement for real time with the physical instrument, but time spent score studying, audiating, and practicing fingerings will almost certainly transfer. Lots of wind instrumentalists, for example, find it easier to match notes with fingerings than with actual pitch. Sight singing an instrumental etude or piece while mimicking fingerings is one method that I have had quite a lot of success with.
Visualisation and Imagination are absolutely important. In my case, it is rare for me these days to watch my fingers on the fretboard as I've trained myself to not look. Sometimes, in mid performance, I can close my eyes and "see the fretboard and see my hands go to certain chords/scales/shapes" well in advance of the music and I will find myself there. Sometimes, I can do a musical follow the bouncing ball - all in my head. But this has come over about 5-6 years of practice. The idea of "seeing and hearing" your instrument in your mind is the first step in truly mastering the skill of improvisation and mastery of the instrument.
10,326,831
Is a reference considered a pointer in C++? ``` int &x = y; ``` Does `x` occupy space in memory?
2012/04/26
[ "https://Stackoverflow.com/questions/10326831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/423903/" ]
It's quite common for a reference to be implemented as a pointer under the hood. The Itanium C++ ABI specifies pointers for parameters: > > **3.1.2 Reference Parameters** > > > Reference parameters are handled by passing a pointer to the actual parameter. > > > Yes, a reference uses some memory. If its implementation is effectively a pointer, then it would be pointer sized. How a reference is implemented is implementation-defined. **Edit** As Jesse Good cited from the standard, whether a reference requires storage or not is unspecified.
*Passing* a reference as a function argument probably works by passing a pointer. Declaring a reference to an existing variable in the same scope, like in your example, probably just makes both names refer to the same place in memory. But this is implementation-dependent; different compilers may do it differently.
26,469,040
What is the difference between `TabActivity`, `TabHost`, and `TabLayout` in Android? My layout will have five tabs. Which one is best for this?
2014/10/20
[ "https://Stackoverflow.com/questions/26469040", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3143460/" ]
`TabActivity` is deprecated. You should use Fragments and FragmentManager instead. `TabHost` and `TabWidget` simply define a portion of the screen for tabs and tab content. There are ways to use them with a `TabActivity`, but it is not compulsory to do so. Note that there is now a [`FragmentTabHost`](https://developer.android.com/reference/android/support/v13/app/FragmentTabHost.html) class that you can consider as well. If you want these tabs to actually be in the same Activity and be able to swipe between them, you can also consider using a `ViewPager` with a `PagerTabStrip` on top: <https://developer.android.com/training/implementing-navigation/lateral.html>
Documentation about making swipe views / tabs: <http://developer.android.com/design/building-blocks/tabs.html> <http://developer.android.com/training/implementing-navigation/lateral.html> TabActivity: deprecated in API level 13 <http://developer.android.com/reference/android/app/TabActivity.html> TabHost: <http://developer.android.com/reference/android/widget/TabHost.html>
44,399,136
I'm developing a delivery app, so I have products and popular products in Firebase this way: * Products [![Products](https://i.stack.imgur.com/5WJ5O.png)](https://i.stack.imgur.com/5WJ5O.png) * PopularProducts (ID of the product as key and true as value) [![PopularProducts](https://i.stack.imgur.com/ZZZzj.png)](https://i.stack.imgur.com/ZZZzj.png) How can I query only the products that are popular using those children?
2017/06/06
[ "https://Stackoverflow.com/questions/44399136", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7868100/" ]
Your xticks are completely out of the range where your data lives. Remove the line which sets the xticks and your plot is fine ``` import matplotlib.pyplot as plt plt.plot([1.95E-06, 9.75E-06, 1.95E-05, 9.75E-05, 1.95E-04, 9.75E-04, 1.95E-03], [0.2,0.4,0.6,0.8,1.0,1.2,1.4]) plt.title('Red') plt.ylabel('Absorption') plt.xlabel('Concentration') plt.grid(True) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/HAuFx.png)](https://i.stack.imgur.com/HAuFx.png) If you want to use your custom ticks, you need to set them in the data range, i.e. somewhere between 0 and 0.002 and not between 1 and 7.
The first argument to `plt.xticks` should be x-coords (not tick indexes).
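As a quick sanity check of why the plot looks empty with those ticks: `plt.xticks` positions are interpreted in data coordinates, and the tick positions from the question sit far outside the x-range of the data. A plain-Python sketch using the numbers from the `plt.plot` call above (no matplotlib needed):

```python
# x-values from the example plot call, and the custom tick positions
xs = [1.95e-06, 9.75e-06, 1.95e-05, 9.75e-05, 1.95e-04, 9.75e-04, 1.95e-03]
ticks = [1, 2, 3, 4, 5, 6, 7]  # plt.xticks takes positions, not indexes

# Every tick sits far to the right of the last data point, so forcing these
# ticks stretches the visible x-axis to [1, 7] and squashes the data at x = 0
assert max(xs) < min(ticks)
print(f"data ends at x = {max(xs):g}; first custom tick is at x = {min(ticks)}")
```

Since the smallest tick is about 500 times larger than the largest data value, the curve collapses into an invisible sliver at the left edge of the axes.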
30,768,362
This code is provided as an example for use with Devise and OmniAuth; it works in [my project](https://github.com/plataformatec/devise/wiki/OmniAuth:-Overview). ``` class User < ActiveRecord::Base def self.new_with_session(params, session) super.tap do |user| if data = session["devise.facebook_data"] && session["devise.facebook_data"]["extra"]["raw_info"] user.email = data["email"] if user.email.blank? end end end end ``` I don't know why it's a single equals sign as opposed to a double equals sign, which I thought was necessary for `if` statements. My IDE, IntelliJ IDEA, agrees with my concerns.
2015/06/10
[ "https://Stackoverflow.com/questions/30768362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2877322/" ]
An assignment operator (`=`) returns the assigned value, which is then evaluated by the `if`. In Ruby, only `false` and `nil` are considered false. Everything else evaluates to `true` in a boolean context (like an `if`).
Ruby doesn't care about types in conditionals, unlike Java. As long as the value is neither `nil` nor `false`, it will pass. In your example you are actually guarding against `nil`: the `if` conditional ensures that `data` actually exists and isn't `nil`, so we can use it, assuming it's a hash. This is a common pattern in Ruby.
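For comparison (a cross-language sketch, not Ruby): Python 3.8's walrus operator supports the same assign-and-test idiom, with the caveat that Python also treats empty containers as falsy, while Ruby treats only `nil` and `false` that way:

```python
session = {"devise.facebook_data": {"extra": {"raw_info": {"email": "a@b.c"}}}}

# Assign and test in one expression, like Ruby's `if data = session[...]`
if data := session.get("devise.facebook_data"):
    assert data["extra"]["raw_info"]["email"] == "a@b.c"

# dict.get returns None for a missing key; None is falsy, so the body is skipped
if _missing := session.get("no_such_key"):
    raise AssertionError("unreachable")

print("assign-and-test worked")
```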
250,820
I tried to install TeX Live on my laptop running Windows 8.1. After unpacking, I ran "install-tl-windows.bat". It seemed OK for most of the installation, but then I got this in the terminal (the log file is the same): ``` Installing [2321/3058, time/total: 01:06:55/01:24:53]: qsymbols [136k] Installing [2322/3058, time/total: 01:06:57/01:24:55]: qtree [213k] xzdec: (stdin): Unexpected end of input tar: Only read 5840 bytes from archive C:\texlive\2015\temp\qtree.doc.tar untar: untarring C:\texlive\2015\temp\qtree.doc.tar failed (in C:\texlive\2015\texmf-dist) untarring C:\texlive\2015\temp\qtree.doc.tar failed, stopping install. Installation failed. Rerunning the installer will try to restart the installation. Or you can restart by running the installer with: install-tl.bat --profile installation.profile [EXTRA-ARGS] ``` Notice that it failed close to the end, at package 2322/3058. My texlive folder is already 4.4GB. I tried reinstalling; it redownloaded everything and failed again near the end. If there is any way to continue the installation from where it stopped, it would be great.
2015/06/17
[ "https://tex.stackexchange.com/questions/250820", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/79308/" ]
I've been facing the same problem since TeX Live 2015 was released last month (I have Windows 7 Pro 64-bit OS). I've tried installing it every week (in the hopes that there is a bug, which will be fixed in the weekly updates) but no luck. I tried installing it from different mirrors but again no luck. Finally, I was able to figure out a workaround yesterday. Here's what I did: 1. Run install-tl-advanced.bat. 2. In the TeX Live 2015 window that opens up, under Basic Information...Selected Scheme, click on Change. 3. Select basic scheme (plain and latex) and click on Ok. 4. Now Selected Scheme should show scheme-basic. 5. Under Further Customization...Installation collections, it should now show 3 collections out of 48. 6. Click on Install TeX Live. 7. This installs a basic version of TeX Live (around 88 files). 8. Once this is installed, open the TeX Live 2015 Manager and then install all the packages you want. This worked perfectly for me and TeX Live 2015 is up and running on my computer. Hope this solves your problem.
The problem turned out to be very simple - the error is generated when the computer gets locked after being idle for some time during the long installation. Staying on the computer solved it.
1,039
I am surprised by how many people automatically put on a cheesy smile when I point a camera in their direction. How can I encourage them to act more naturally, and what can I do to get better, more natural-looking portraits?
2010/07/22
[ "https://photo.stackexchange.com/questions/1039", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/191/" ]
I always tell people to make the ugliest frowning face possible and have them hold it for a while. After about 30 seconds I say "ok, now you can smile" and the smiles that come out are usually great. But you have to be quick; the smiles will revert back to the fake smiles within seconds. This works on almost anyone; it must be the novelty of frowning in front of a camera, I don't know.
I agree with Rowland. When people feel uncomfortable they try to put on a happy face or "cheesy smile". When I am doing portraits, especially with kids, I say something funny to break the ice. If you continue to talk and have a conversation, that always helps. Also, being in an open environment relaxes the subjects as well. I find that adults who don't want to have their portrait taken are the most difficult. I tell them a funny story from my past that gives them an opportunity to laugh at me a little bit, so that I am not just someone taking a picture; I am, after all, human as well.
1,584,314
Summary of the problem: For some decimal values, when we convert the type from decimal to double, a small fraction is added to the result. What makes it worse, is that there can be two "equal" decimal values that result in different double values when converted. Code sample: ``` decimal dcm = 8224055000.0000000000m; // dcm = 8224055000 double dbl = Convert.ToDouble(dcm); // dbl = 8224055000.000001 decimal dcm2 = Convert.ToDecimal(dbl); // dcm2 = 8224055000 double dbl2 = Convert.ToDouble(dcm2); // dbl2 = 8224055000.0 decimal deltaDcm = dcm2 - dcm; // deltaDcm = 0 double deltaDbl = dbl2 - dbl; // deltaDbl = -0.00000095367431640625 ``` Look at the results in the comments. Results are copied from debugger's watch. The numbers that produce this effect have far less decimal digits than the limit of the data types, so it can't be an overflow (I guess!). What makes it much more interesting is that there can be two **equal** decimal values (in the code sample above, see "dcm" and "dcm2", with "deltaDcm" equal to zero) resulting in **different** double values when converted. (In the code, "dbl" and "dbl2", which have a non-zero "deltaDbl") I guess it should be something related to difference in the bitwise representation of the numbers in the two data types, but can't figure out what! And I need to know what to do to make the conversion the way I need it to be. (like dcm2 -> dbl2)
2009/10/18
[ "https://Stackoverflow.com/questions/1584314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/187996/" ]
Interesting - although I generally don't trust normal ways of writing out floating point values when you're interested in the exact results. Here's a slightly simpler demonstration, using [`DoubleConverter.cs`](http://pobox.com/~skeet/csharp/DoubleConverter.cs) which I've used a few times before. ``` using System; class Test { static void Main() { decimal dcm1 = 8224055000.0000000000m; decimal dcm2 = 8224055000m; double dbl1 = (double) dcm1; double dbl2 = (double) dcm2; Console.WriteLine(DoubleConverter.ToExactString(dbl1)); Console.WriteLine(DoubleConverter.ToExactString(dbl2)); } } ``` Results: ``` 8224055000.00000095367431640625 8224055000 ``` Now the question is why the original value (8224055000.0000000000) which is an integer - and exactly representable as a `double` - ends up with extra data in. I strongly suspect it's due to quirks in the algorithm used to convert from `decimal` to `double`, but it's unfortunate. It also violates section 6.2.1 of the C# spec: > > For a conversion from decimal to float or double, the decimal value is rounded to the > nearest double or float value. While this conversion may lose precision, it never causes > an exception to be thrown. > > > The "nearest double value" is clearly just 8224055000... so this is a bug IMO. It's not one I'd expect to get fixed any time soon though. (It gives the same results in .NET 4.0b1 by the way.) To avoid the bug, you probably want to normalize the decimal value first, effectively "removing" the extra 0s after the decimal point. This is somewhat tricky as it involves 96-bit integer arithmetic - the .NET 4.0 `BigInteger` class may well make it easier, but that may not be an option for you.
The answer lies in the fact that `decimal` attempts to preserve the number of significant digits. Thus, `8224055000.0000000000m` has 20 significant digits and is stored as `82240550000000000000E-10`, while `8224055000m` has only 10 and is stored as `8224055000E+0`. `double`'s mantissa is (logically) 53 bits, i.e. at most 16 decimal digits. This is exactly the precision you get when you convert to `double`, and indeed the stray `1` in your example is in the 16th decimal place. The conversion isn't 1-to-1 because `double` uses base 2. Here are the binary representations of your numbers: ``` dcm: 00000000000010100000000000000000 00000000000000000000000000000100 01110101010100010010000001111110 11110010110000000110000000000000 dbl: 0.10000011111.1110101000110001000111101101100000000000000000000001 dcm2: 00000000000000000000000000000000 00000000000000000000000000000000 00000000000000000000000000000001 11101010001100010001111011011000 dbl2 (8224055000.0): 0.10000011111.1110101000110001000111101101100000000000000000000000 ``` For double, I used dots to delimit sign, exponent and mantissa fields; for decimal, see [MSDN on decimal.GetBits](http://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx), but essentially the last 96 bits are the mantissa. Note how the mantissa bits of `dcm2` and the most significant bits of `dbl2` coincide exactly (don't forget about the implicit `1` bit in `double`'s mantissa), and in fact these bits represent 8224055000. The mantissa bits of `dbl` are the same as in `dcm2` and `dbl2` but for the nasty `1` in the least significant bit. The exponent of `dcm` is 10, and the mantissa is 82240550000000000000. **Update II:** It is actually very easy to lop off trailing zeros. 
``` // There are 28 trailing zeros in this constant; // no decimal can have more than 28 trailing zeros const decimal PreciseOne = 1.0000000000000000000000000000m ; // decimal.ToString() faithfully prints trailing zeroes Assert ((8224055000.000000000m).ToString () == "8224055000.000000000") ; // Let System.Decimal.Divide() do all the work Assert ((8224055000.000000000m / PreciseOne).ToString () == "8224055000") ; Assert ((8224055000.000010000m / PreciseOne).ToString () == "8224055000.00001") ; ```
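As a side check of the "nearest double value" claim: 8224055000 is well below 2^53, so it is exactly representable in a double's 53-bit mantissa. A quick probe in Python, whose `float` is an IEEE 754 double on CPython (this illustrates the representability point only, not the .NET conversion itself):

```python
from decimal import Decimal

# 8224055000 < 2**53, so the integer fits a double's mantissa exactly
assert 8224055000 < 2**53
assert float(8224055000) == 8224055000  # the conversion introduces no error

# A correctly-rounded Decimal -> float conversion lands on the integer,
# trailing zeros and all; this is what spec section 6.2.1 asks of the cast
assert float(Decimal("8224055000.0000000000")) == 8224055000.0
print("8224055000 is exactly representable as a double")
```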
736,816
I have a question about Asymptotics involving big Omega... How do I need to approach this equation in order to prove it? $$n \cdotΩ(f(n)) = Ω(n\cdot f(n))$$ Thank you very much for your answers!
2014/04/02
[ "https://math.stackexchange.com/questions/736816", "https://math.stackexchange.com", "https://math.stackexchange.com/users/139840/" ]
Using the definition of Big-Omega: by definition, $f(n) \in \Omega(f(n))$, since $f(n) \geq 1 \cdot f(n)$ for all $n$. Now multiply both sides by $n$ to get $n f(n) \geq 1 \cdot n f(n)$. Again, we set our constant $C = 1$ and the inequality holds for all $n$; in fact, it is an equality. So $n \, \Omega(f(n)) = \Omega(n f(n))$.
$g(n) = \Omega ( f(n))$ means that $\dfrac{g(n)}{f(n)}$ is bounded (below) in a suitable sense. Formally, you also have that $\dfrac{\Omega(f(n))}{f(n)}$ is bounded. What you've written just states that $$ \frac{n \Omega(f(n))}{n f(n)} = \frac{\Omega (f(n))}{f(n)}$$ is bounded.
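Making the bounded-ratio argument fully explicit, with the usual constant-based definition of $\Omega$ (a spelled-out sketch of both directions):

```latex
\text{Suppose } g(n) \in \Omega(f(n)):\ \exists\, C > 0,\ n_0 \text{ such that } g(n) \ge C\, f(n) \text{ for all } n \ge n_0.
\\
\text{Multiplying by } n \ge 0 \text{ preserves the inequality:}\quad n\, g(n) \ \ge\ C \bigl(n\, f(n)\bigr),
\\
\text{so } n\, g(n) \in \Omega\bigl(n f(n)\bigr). \qquad
\text{Conversely, } h(n) \ge C\, n f(n) \text{ with } n \ge 1 \text{ gives } \frac{h(n)}{n} \ge C\, f(n),
\\
\text{so every } h \in \Omega(n f(n)) \text{ is } n \text{ times a function in } \Omega(f(n)).
```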
25,486,033
I have a templatized class like so: ``` template<typename T> class A { protected: std::vector<T> myVector; public: /* constructors + a bunch of member functions here */ }; ``` I would like to add just ONE member function that would work only for one given type of T. Is it possible to do that at all without having to specialize the class and reimplement all the other already existing methods? Thanks
2014/08/25
[ "https://Stackoverflow.com/questions/25486033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3975337/" ]
Yes, it's possible in C++03 with CRTP ([Curiously recurring template pattern](https://en.wikipedia.org/wiki/Curiously_recurring_template_pattern)): ``` #include <numeric> #include <vector> template<typename Derived, typename T> struct Base { }; template<typename Derived> struct Base<Derived, int> { int Sum() const { return std::accumulate(static_cast<Derived const*>(this)->myVector.begin(), static_cast<Derived const*>(this)->myVector.end(), int()); } }; template<typename T> class A : public Base<A<T>, T> { friend class Base<A<T>, T>; protected: std::vector<T> myVector; public: /* constructors + a bunch of member functions here */ }; int main() { A<int> Foo; Foo.Sum(); } ```
One approach not given yet in the answers is using the standard library `std::enable_if` to perform [SFINAE](http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error) on a base class that you inherit into the main class, which defines the appropriate member functions. Example code: ``` template<typename T, class Enable = void> class A_base; template<typename T> class A_base<T, typename std::enable_if<std::is_integral<T>::value>::type>{ public: void only_for_ints(){/* integer-based function */} }; template<typename T> class A_base<T, typename std::enable_if<!std::is_integral<T>::value>::type>{ public: // maybe specialize for non-int }; template<typename T> class A: public A_base<T>{ protected: std::vector<T> my_vector; }; ``` This approach would be better than an empty function because you are being more strict about your API, and better than a `static_cast` because it simply won't make it to the inside of the function (it won't exist) and will give you a nice error message at compile time (GCC shows "has no member named ‘only\_for\_ints’" on my machine). The downside to this method would be compile time and code bloat, but I don't think it's too hefty. (don't you dare say that the C++11 requirement is a down-side, we're in 2014 god-damnit and the next standard has even been finalized already!) Also, I noticed, you will probably have to define `my_vector` in the base class instead of the final class because you probably want to handle that data within the member function. A nice way to do that without duplicating a bunch of code is to create a base base class (good god) and inherit that class in the base class. Example: ``` template<typename T> class base_data{ protected: std::vector<T> my_vector; }; template<typename T> class A_base<T, typename std::enable_if<std::is_integral<T>::value>::type>: public base_data<T>{ public: void only_for_ints(){/* phew, finally. fiddle around with my_vector! 
*/} }; // non-integer A-base template<typename T> class A: public A_base<T>{ protected: // helper functions not available in base }; ``` That does leave a horrible looking multiple-inheritance scheme, but it is very workable and makes it easy to define members based on template parameters (for future proofing). People often don't like multiple-inheritance or how complicated/messy SFINAE looks, but I couldn't live without it now that I know of it: the speed of static code with the polymorphism of dynamic code!
10,391,866
Does the .NET framework have any classes which allow you to run (compile, interpret, or whatever) an external script file containing C# code? For instance, if I have a file `Hello.cs` containing this: ``` class Hello //This program displays Hello World { static public void Main() { System.Console.WriteLine("Hello World"); } } ``` how can I load the code above from within a WinForms app and execute it? I'm interested in the load/execute logic; the program could be anything, from a console app to another WinForms app. Does Reflection allow this?
2012/04/30
[ "https://Stackoverflow.com/questions/10391866", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1327073/" ]
Check out the following article: [C#: Writing extendable applications using on-the-fly compilation](http://blogs.msdn.com/b/abhinaba/archive/2006/02/09/528416.aspx).
I would check out the Roslyn APIs. You can do whatever you want as long as you provide valid C# or VB.NET code.
1,955,644
I am trying to understand why the grep I built is much slower than the one that comes with the system, and trying to find out what compiler options are used by the grep that comes with the system. OS Version: CentOS release 5.3 (Final) grep on system: ``` Version: grep (GNU grep) 2.5.1 Size: 88896 bytes ldd output: libpcre.so.0 => /lib64/libpcre.so.0 (0x0000003991800000) libc.so.6 => /lib64/libc.so.6 (0x0000003985a00000) /lib64/ld-linux-x86-64.so.2 (0x0000003984a00000) ``` grep built by me: ``` Version: 2.5.1 Size: 256437 bytes ldd output: libpcre.so.0 => /lib64/libpcre.so.0 (0x0000003991800000) libc.so.6 => /lib64/libc.so.6 (0x0000003985a00000) /lib64/ld-linux-x86-64.so.2 (0x0000003984a00000) ``` The system grep (330 msecs) is way faster than the grep I built (22430 msecs) when running a regex search on a large list text file. Following is the command I used to time .. ``` % time src/grep ".*asa.*" large_list.txt > /dev/null real 0m22.430s user 0m22.291s sys 0m0.080s ``` OR ``` % time bin/grep ".*asa.*" large_list.txt > /dev/null real 0m0.331s user 0m0.236s sys 0m0.081s ``` The system grep is clearly using some optimizing options that give a huge performance difference. Can somebody help me with what options the system grep may be built with? Here are the compile options for one of the source files when I build .. `gcc -DLIBDIR=\"/usr/local/lib\" -DHAVE_CONFIG_H -I. -I.. -I.. -I. -I../intl -g -O2 -MT xstrtol.o -MD -MP -MF .deps/xstrtol.Tpo -c -o xstrtol.o xstrtol.c` The output of ./configure: ``` checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for gawk... (cached) gawk checking for gcc... gcc checking for C compiler default output file name... 
a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for a BSD-compatible install... /usr/bin/install -c checking for ranlib... ranlib checking for getconf... getconf checking for CFLAGS value to request large file support... checking for LDFLAGS value to request large file support... checking for LIBS value to request large file support... checking for _FILE_OFFSET_BITS... no checking for _LARGEFILE_SOURCE... no checking for _LARGE_FILES... no checking for function prototypes... yes checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for string.h... (cached) yes checking for size_t... yes checking for ssize_t... yes checking for an ANSI C-conforming const... yes checking for inttypes.h... yes checking for unsigned long long... yes checking for ANSI C header files... (cached) yes checking for string.h... (cached) yes checking for stdlib.h... (cached) yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking for memory.h... (cached) yes checking for unistd.h... (cached) yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking wctype.h usability... 
yes checking wctype.h presence... yes checking for wctype.h... yes checking wchar.h usability... yes checking wchar.h presence... yes checking for wchar.h... yes checking for dirent.h that defines DIR... yes checking for library containing opendir... none required checking whether stat file-mode macros are broken... no checking for working alloca.h... yes checking for alloca... yes checking whether closedir returns void... no checking for stdlib.h... (cached) yes checking for unistd.h... (cached) yes checking for getpagesize... yes checking for working mmap... yes checking for btowc... yes checking for isascii... yes checking for iswctype... yes checking for mbrlen... yes checking for memmove... yes checking for setmode... no checking for strerror... yes checking for wcrtomb... yes checking for wcscoll... yes checking for wctype... yes checking whether mbrtowc and mbstate_t are properly declared... yes checking for stdlib.h... (cached) yes checking for mbstate_t... yes checking for memchr... yes checking for stpcpy... yes checking for strtoul... yes checking for atexit... yes checking for fnmatch... yes checking for stdlib.h... (cached) yes checking whether defines strtoumax as a macro... no checking for strtoumax... yes checking whether strtoul is declared... yes checking whether strtoull is declared... yes checking for strerror in -lcposix... no checking for inline... inline checking for off_t... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking argz.h usability... yes checking argz.h presence... yes checking for argz.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking nl_types.h usability... yes checking nl_types.h presence... yes checking for nl_types.h... yes checking malloc.h usability... yes checking malloc.h presence... yes checking for malloc.h... 
yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for unistd.h... (cached) yes checking for sys/param.h... (cached) yes checking for feof_unlocked... yes checking for fgets_unlocked... yes checking for getcwd... yes checking for getegid... yes checking for geteuid... yes checking for getgid... yes checking for getuid... yes checking for mempcpy... yes checking for munmap... yes checking for putenv... yes checking for setenv... yes checking for setlocale... yes checking for stpcpy... (cached) yes checking for strchr... yes checking for strcasecmp... yes checking for strdup... yes checking for strtoul... (cached) yes checking for tsearch... yes checking for __argz_count... yes checking for __argz_stringify... yes checking for __argz_next... yes checking for iconv... yes checking for iconv declaration... extern size_t iconv (iconv_t cd, char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft); checking for nl_langinfo and CODESET... yes checking for LC_MESSAGES... yes checking whether NLS is requested... yes checking whether included gettext is requested... no checking for libintl.h... (cached) yes checking for GNU gettext in libc... yes checking for dcgettext... yes checking for msgfmt... /usr/bin/msgfmt checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for bison... bison checking version of bison... 2.3, ok checking for catalogs to be installed... af be bg ca cs da de el eo es et eu fi fr ga gl he hr hu id it ja ko ky lt nb nl pl pt pt_BR ro ru rw sk sl sr sv tr uk vi zh_TW checking for dos file convention... no checking host system type... (cached) x86_64-unknown-linux-gnu checking host system type... (cached) x86_64-unknown-linux-gnu checking for DJGPP environment... no checking for environ variable separator... : checking for working re_compile_pattern... 
yes checking for getopt_long... yes configure: WARNING: Included lib/regex.c not used checking whether strerror_r is declared... yes checking for strerror_r... yes checking whether strerror_r returns char *... no checking for strerror... (cached) yes checking for strerror_r... (cached) yes checking for vprintf... yes checking for doprnt... no checking for ANSI C header files... (cached) yes checking for working malloc... yes checking for working realloc... yes checking for pcre_exec in -lpcre... yes configure: creating ./config.status config.status: creating Makefile config.status: creating lib/Makefile config.status: creating lib/posix/Makefile config.status: creating src/Makefile config.status: creating tests/Makefile config.status: creating po/Makefile.in config.status: creating intl/Makefile config.status: WARNING: intl/Makefile.in seems to ignore the --datarootdir setting config.status: creating doc/Makefile config.status: creating m4/Makefile config.status: creating vms/Makefile config.status: creating bootstrap/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: creating po/POTFILES config.status: creating po/Makefile config.status: executing stamp-h commands ``` Thanks, kumar
2009/12/23
[ "https://Stackoverflow.com/questions/1955644", "https://Stackoverflow.com", "https://Stackoverflow.com/users/237938/" ]
Why don't you just get CentOS's SRPM for the grep binary and compare their compile options to yours? I would guess that this is much more efficient than having the entire StackOverflow community blindly poke around in the dark until they hit something. EDIT: Are you using a locale with a multibyte encoding? (Note: if you have no idea what that means, then the answer is probably "Yes", since UTF-8 has been the default for most Linux distributions for several years now and indeed RedHat (and thus CentOS) were the very first to make the switch). In that case, GNU grep *is* dog slow. And this not only applies to GNU grep but to pretty much all GNU tools that do some kind of text processing. The FSF refuses to accept any patches to improve multibyte performance, unless those patches are proven to not slow down fixed-width encodings. However, since *any* patch to improve performance for multibyte encodings must *at least* contain some `if` statement somewhere, it is actually impossible to write a patch that does not at least slow down fixed-width encodings by at least the overhead of that `if` statement. Thus, UTF-8 performance of GNU tools *will* continue to suck until the end of time. Anyway, most Linux distributors don't give a rat's *bleep* what the FSF thinks and patch GNU grep anyway. The [Fedora Rawhide SRPM](ftp://Download.Fedora.RedHat.Com/pub/fedora/linux/development/source/SRPMS/grep-2.5.3-6.fc12.src.rpm) contains a patch called [`grep-2.5.3-egf-speedup.patch`](https://Savannah.GNU.Org/patch/?3803), which speeds up the UTF-8 performance of GNU grep by several orders of magnitude. (Since this patch is already from 2005, I assume that it is also used in CentOS.) This patch is also used in Mac OSX, Debian, Ubuntu, ..., pretty much nobody uses GNU grep as distributed by GNU. Text processing in a multibyte encoding will never be as fast as in a fixed-width encoding, but it should at least be comparable, not 50x (or even 1500x as some people have reported) slower. 
There's also another patch called [`dfa-optional`](https://Savannah.GNU.Org/patch/?3802), which makes grep simply use GNU libc's regex engine instead of its own, which is not only *much* faster when dealing with UTF-8 but also has far fewer bugs. So, you might want to re-run your benchmarks with `export LC_ALL=POSIX` set. If that fixes your problem, you need to apply either one of the two above-mentioned patches. More information is also available in these two RedHat bugreports: * [Bug 69900 - grep writing output very slow](https://Bugzilla.RedHat.Com/show_bug.cgi?id=69900) * [Bug 121313 - grep SLOW on multibyte LC\_CTYPE](https://Bugzilla.RedHat.Com/show_bug.cgi?id=121313) The moral of the story: despite popular belief, the Linux distributors *do* know what they are doing, at least sometimes. Don't second-guess them.
Another thing to note besides the -O options is that it looks like you are building with debugging symbols ("-g"). Debug symbols usually increase binary size and can reduce the performance of said binary. I would imagine grep is pretty stable, and you don't really need debug symbols for it.
46,268,087
I have a problem with SQL Server 2008 R2: when I try to connect to the server, it gives me the following message: > > TITLE: Connect to Server > > > Cannot connect to (local). > > > ADDITIONAL INFORMATION: > > > A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) > > (Microsoft SQL Server, Error: 2) > > > I found a couple of solutions for this, one of which told me to go to the Configuration Manager and check the instance created by SQL Server. There was none. I found only this: [![enter image description here](https://i.stack.imgur.com/qTZVV.jpg)](https://i.stack.imgur.com/qTZVV.jpg) I need to learn SQL Server for a job, and I don't know what to do. -- UPDATE -- I reinstalled SQL Server 2008 and some errors might have something to do with my issue.. [errors in reinstalling](https://i.stack.imgur.com/I8SZJ.jpg) note: the first time I installed I was basically clueless; I might have had the same errors before and simply didn't notice..
2017/09/17
[ "https://Stackoverflow.com/questions/46268087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8623435/" ]
Check the output of the command below to see whether Apache is running under the `_www` user: ``` sudo lsof -i:80 ``` Stop the built-in Apache server in Mac OS X by using this command: ``` sudo apachectl -k stop ``` Enter the administrator password. Next, run this launchctl unload command: ``` sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist ``` Check with the first command again that the built-in Apache server is completely gone: stopped and disabled.
Run `sudo apachectl start`. To make sure it is running, go to <http://localhost:80> and confirm that you see "It Works!" or something similar. ``` sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist ``` `cat /private/var/db/com.apple.xpc.launchd/disabled.plist` should produce output similar to the following, showing that httpd has been disabled from autostarting. ``` <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>com.apple.ftpd</key> <true/> <key>com.apple.mdmclient.daemon.runatboot</key> <true/> <key>org.apache.httpd</key> <true/> </dict> </plist> ```
71,497,533
Like in the title, I want to position my absolute element at the middle of the relative div's border; here is a picture of the thing I want to achieve: [picture](https://i.stack.imgur.com/6igbL.png) Here is what I have for the moment. The parent div must have a % width and the child div must have a static width in px or some different static unit. ```html <!DOCTYPE html> <html lang="en"> <head> <title>Document</title> <style> *{ padding: 0; margin: 0; box-sizing: border-box; } body{ width: 100%; height: 100vh; } .parent-div{ position: absolute; width: 30%; height: 50%; background: red; } .child-div{ position: absolute; height: 200px; width: 200px; border-radius: 50%; background: blue; top: 50%; transform: translateY(-50%); } </style> </head> <body> <div class="parent-div"> <div class="child-div"> </div> </div> </body> </html> ```
2022/03/16
[ "https://Stackoverflow.com/questions/71497533", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15165740/" ]
You can use ZIO.collectAll to convert List[Task] to Task[List], I thought it was ZIO.sequence..., maybe I'm getting confused with cats... Following example works with Zio2 ```scala package sample import zio._ object App extends ZIOAppDefault { case class Result(value: Int) extends AnyVal val data: List[Task[List[Result]]] = List( Task { List(Result(1), Result(2)) }, Task { List(Result(3)) } ) val flattenValues: Task[List[Result]] = for { values <- ZIO.collectAll { data } } yield values.flatten val app = for { values <- flattenValues _ <- Console.putStrLn { values } } yield() def run = app } ``` In particular for your sample ... and assuming that 'separate' it's just an extension method to collect some errors (returns a tuple of error list and result list), and ignoring the method 'describe' to turn an err into a throwable <https://scastie.scala-lang.org/jgoday/XKxVP2ECSFOv4chgSFCckg/7> ```scala package sample import zio._ object App extends ZIOAppDefault { class DynamoMock { def run: Task[List[RelevantReadingRow]] = Task { List( RelevantReadingRow(1), RelevantReadingRow(2), ) } } case class RelevantReadingRow(value: Int) extends AnyVal implicit class ListSeparate(list: List[RelevantReadingRow]) { def separate: (List[String], List[RelevantReadingRow]) = (Nil, list) } def getBaselinesForRequestIds(baseLineReqIds: Set[String]): Task[List[RelevantReadingRow]] = { val dynamoConnection = new DynamoMock() val subSet = baseLineReqIds.grouped(25).toList val res: List[Task[List[RelevantReadingRow]]] = for { rows <- subSet.map(reqIds => dynamoConnection .run.flatMap(e => e.toList.separate match { case (err :: _, _) => ZIO.fail(new Throwable(err)) case (Nil, relevantReadings) => ZIO.succeed(relevantReadings) })) } yield rows for { rows <- ZIO.collectAll(res) } yield rows.flatten } val app = for { values <- getBaselinesForRequestIds(Set("id1", "id2")) _ <- Console.putStrLn { values } } yield() def run = app } ```
So, here is an alternative solution based on @jgoday's answer ``` def getBaselinesForRequestIds(baseLineReqIds: Set[String]): Task[List[RelevantReadingRow]] = for { values <- ZIO.foreachPar(baseLineReqIds.grouped(25).toList) { reqIds => dynamoConnection .run( table.getAll("baseline_req_id" in reqIds) ).flatMap(e => e.toList.separate match { case (err :: _, _) => ZIO.fail(new Throwable(describe(err))) case (Nil, relevantReadings) => ZIO.succeed(relevantReadings) }) } } yield values.flatten ```
446,843
From my Ubuntu installation, I am unable to connect to a remote TCP/IP host which contains a MySQL installation: ``` viggy@ubuntu:~$ mysql -u user.name -p -h xxx.xxx.xxx.xxx -P 3306 Enter password: ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111) ``` I commented out the line below using vim in /etc/mysql/my.cnf: ``` # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 ``` Then I restarted the server: ``` sudo service mysql restart ``` But I still get the same error. This is the content of my.cnf: ``` # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. 
#bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf. # # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. 
# !includedir /etc/mysql/conf.d/ ``` (Note that I can log into my local mysql install just fine by running mysql (and it will log me in as root), and also note that I can get into mysql on the remote server by logging in via ssh and then invoking mysql), but I am unable to connect to the remote server via my terminal using the host, and I need to do it that way so that I can then use MySQL Workbench.
2012/11/08
[ "https://serverfault.com/questions/446843", "https://serverfault.com", "https://serverfault.com/users/57634/" ]
This could be a user permissions problem. What did you use for your CREATE USER? Try making a new user with ``` CREATE USER 'testuser' IDENTIFIED BY 'somepass'; ``` Leave out the normal @'localhost' part so it isn't restricted. Also have a look at /var/log/mysql and see if there are clues...
Try with `mysql -u username -h xxxx.xxxx.xxx.xxxx -P portnumber -D mysql -p` Enter password: \*\*\*\*\*\*\*\*\*\*\* Note that -P is a capital letter "P" and -D a capital "D".
462,791
Here's an interesting one that I can't figure out. I was about to call MS, but figured I'd check here first. **Scenario:** Two Exchange 2010 forests federated with GAL Sync. User Bob@domain.com had a mailbox on Exchange 2010 server. Bob now has a new mailbox on a different Exchange forest (Bob@Awesome.com). Bob wants his old email forwarded for Bob@domain.com to Bob@Awesome.com. So...easy enough right? Create a contact in the domain.com Exchange server and set the forwarding on the mailbox and for grins hide the mailbox from the address books. Done, right? **Wrong (sort of)...because (note: I have federation and GAL sync allowing free/busy across forests):** Bob is getting auto-forwarded meeting requests from Sally@domain.com who used the Scheduling assistant and typed in "bob@domain.com" and saw that he's available. He gets the calendar forward and says "Um...Sally...I'm booked at that time" to which she replies "not from what I see". Now if Bob is available on bob@awesome.com and he accepts, it shows up on his awesome.com calendar as it should. But Sally sees the request still sent to Bob@domain.com in the scheduling assistant as he is free but bob@awesome.com is coming to the meeting. SO...basically users in the domain.com organization can still see free/busy details on the old calendar for the mailbox bob@domain.com even though the mailbox is hidden from the GAL. **THE QUESTION:** Since I can't create a contact and then forward that contact....is there any way around the above? I don't think I can remove a calendar from a mailbox. I considered removing all calendar permissions but wasn't sure if that was the right path to go down or not. **OR even better: Can someone tell me how to accept email for bob@domain.com on Exchange without having a mailbox for him and then re-route it to bob@awesome.com?** UPDATE: I have figured out how to handle the calendar with removing the default permissions...it's an ok fix. 
**The BOUNTY will be for the "OR EVEN BETTER" question in bold. If it isn't possible, then that doesn't count as BOUNTY worthy. :) Thank you!**
2013/01/03
[ "https://serverfault.com/questions/462791", "https://serverfault.com", "https://serverfault.com/users/7861/" ]
The solution is to set up SMTP namespace sharing between the two Exchange servers.
Since Bob is actually part of a different Exchange forest, Sally can't see Bob's free/busy information at all in the other Exchange org (and vice versa). If I am reading what you did correctly, you essentially created an external contact for Bob for his new Exchange org(Bob@awesome.com) and hid his existing mailbox (Bob@domain.com) from the GAL (which does not hide free/busy info on his mailbox). Then you set the forwarding address on his Bob@domain.com mailbox to forward to Bob@awesome.com. When Sally goes to set up a meeting with Bob, she resolves his Bob@domain.com account and it shows that he is free which is true since, according to his @domain.com mailbox, he is. There are several different ways you can try to resolve this, but nothing really cut and dry. One of these methods may work for you based on your requirements: 1. Tell Sally that Bob doesn't live in the domain.com domain anymore, and she can't see his free/busy info to reliably schedule meetings with him. She can continue to send meeting requests to his domain.com account, but has to accept that she can't see if he is actually busy or not. This is fine if Sally and Bob are the only users involved with this problem, but doesn't work if you have 1/2 your users split between Exchange orgs and the domain.com users aren't sure which are in the awesome.com org. 2. Remove Bob's free/busy permissions on his domain.com mailbox so when Sally (or any other domain.com user) tries to schedule him, he shows up with no free/busy info and Sally can't claim that he appeared to be free on her end. From what I've gathered, you would do this either using the [Set-MailboxPermissions cmdlet](http://technet.microsoft.com/en-us/library/ff522363%28v=exchg.141%29.aspx) or by opening up Bob's domain.com mailbox directly in Outlook and setting the calendar permissions for "Default" to None. 3. Remove Bob's domain.com mailbox and only leave the external contact for Bob@awesome.com in the GAL. 
This will break the e-mail forwarding for bob@domain.com > bob@awesome.com, which may be an issue for Bob, but people will soon realize that he doesn't live at domain.com anymore when they get NDRs and ask questions, figure out his new address when they call him up and ask him about it, etc... If Bob no longer even needs to login to the domain.com domain you can remove his entire AD account. 4. If you have access to both Exchange orgs (domain.com & awesome.com), you can set them up to share Free/Busy information between them. I personally have never done this, but doing some quick googling found [this technet article from MS](http://technet.microsoft.com/en-us/library/hh310374%28v=exchg.141%29.aspx) on setting it up at a high level with links to the more detailed steps. Like many technet articles, there may be more caveats to it than what the article itself covers. At my company, we have users in 2 different locations that primarily use one or the other Exchange org for e-mail, but we do not have a unified calendar scheduling capability since we don't control the other Exchange org. We just forward messages to the other org if the user says they are primarily using that one for e-mail or don't forward if they primarily use ours. Over time our users have just gotten to remember to use our domain's e-mail or the external contact for sending mail ("Let's see... that person is at the other location so I don't e-mail their domain.com account, I use the external contact..."). It isn't easy to manage, but either somehow seems to work for them or they have just accepted the fact that they can't see the other org's scheduling info for meetings. Update for the OR even better scenario (disclaimer - untested): 1. Remove Bob@domain.com's mailbox and create a contact with the same e-mail address. 2. Configure a transport rule to redirect messages sent to Bob's domain.com contact to the bob@awesome.com contact.
12,031,947
I have a java project that includes Spring 3.0.2 and XmlSchema.jar 1.4.7 The project's pom.xml contains as a dependency: ``` <dependency> <groupId>org.apache.ws.commons.schema</groupId> <artifactId>XmlSchema</artifactId> <version>1.4.7</version> </dependency> ``` The project compiles ok but on hitting the context page it reports the following error: ``` SEVERE: StandardWrapper.Throwable org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'schemaCollection' defined in class path resource [applicationContext-jdeInterfaceService.xml]: Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: org.apache.ws.commons.schema.XmlSchemaCollection.read(Lorg/xml/sax/InputSource;)Lorg/apache/ws/commons/schema/XmlSchema; at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913) at 
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:631) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:588) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:645) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:508) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:449) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:133) at javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1266) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1185) at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:857) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:136) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307) at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.lang.NoSuchMethodError: org.apache.ws.commons.schema.XmlSchemaCollection.read(Lorg/xml/sax/InputSource;)Lorg/apache/ws/commons/schema/XmlSchema; at org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection.afterPropertiesSet(CommonsXsdSchemaCollection.java:137) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452) ... 33 more 20/08/2012 12:05:35 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Allocate exception for servlet spring-ws java.lang.NoSuchMethodError: org.apache.ws.commons.schema.XmlSchemaCollection.read(Lorg/xml/sax/InputSource;)Lorg/apache/ws/commons/schema/XmlSchema; at org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection.afterPropertiesSet(CommonsXsdSchemaCollection.java:137) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225) at 
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.web.servlet.FrameworkServlet.configureAndRefreshWebApplicationContext(FrameworkServlet.java:631) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:588) at org.springframework.web.servlet.FrameworkServlet.createWebApplicationContext(FrameworkServlet.java:645) at org.springframework.web.servlet.FrameworkServlet.initWebApplicationContext(FrameworkServlet.java:508) at org.springframework.web.servlet.FrameworkServlet.initServletBean(FrameworkServlet.java:449) at org.springframework.web.servlet.HttpServletBean.init(HttpServletBean.java:133) at javax.servlet.GenericServlet.init(GenericServlet.java:160) at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1266) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1185) at org.apache.catalina.core.StandardWrapper.allocate(StandardWrapper.java:857) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:136) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:169) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98) at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:927) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:999) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:565) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:307) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) ```
2012/08/20
[ "https://Stackoverflow.com/questions/12031947", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1481505/" ]
> > Please change your dependency to 2.0.1 or 2.0.2 > > > *legacy 1.4.7 doesn't have the method defined* ``` public XmlSchema read(Source source) { if (source instanceof SAXSource) { return read(((SAXSource)source).getInputSource()); } ``` Check the [1.4.x javadoc](http://ws.apache.org/commons/XmlSchema/apidocs/index.html) and [2.x javadoc](http://ws.apache.org/commons/xmlschema20/xmlschema-core/apidocs/index.html)
These steps won't necessarily work in WebLogic as it has its own implementation classes. After updating your dependency as shown by Anshul, you will also need to tell WebLogic to prefer that package in the weblogic-application.xml ``` <wls:prefer-application-packages> <wls:package-name>org.apache.ws.commons.schema</wls:package-name> </wls:prefer-application-packages> ```
5,051,688
I have a crawler that gathers articles from the web and stores the title and the body in a database. Until now the programmer has had to come up with a set of rules per source (usually XPath and sometimes regular expressions) to point to the article title and body sections of the web page. Now I'm trying to go one step further and have the program auto-detect the title and the body of the article. My first approach adds a weight to each element based on some common criteria. For example: ``` //@x-weight = 1.0 //h1/@x-weight * 2.0 //h2/@x-weight * 1.8 ``` There are many more rules, but you get the point. After assigning the weights based on the markup, I take into account some other aspects such as similarity to `/head/title` and the number of keywords. This approach, while producing decent results for most web pages (thanks, SEO experts :P), fails catastrophically for some others. I'm considering using an [artificial neural network](http://en.wikipedia.org/wiki/Artificial_neural_network), but I can't find enough evidence that I'll get significantly better results. Another option is to bring CSS into the game and adjust the weights by font size. The question(s): 1. Which path should I choose? 2. Am I missing something? 3. Is there a better way to do this? PS: I know that there isn't a perfect solution for a problem like this.
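The weight-based scoring described above can be sketched roughly like this (a Python sketch for illustration; the tag weights and the title-similarity bonus are made-up assumptions, not the crawler's actual rules):

```python
# Sketch of weight-based candidate scoring for article extraction.
# TAG_WEIGHTS and the similarity bonus are illustrative values only.
TAG_WEIGHTS = {"h1": 2.0, "h2": 1.8, "p": 1.0, "div": 0.8, "span": 0.5}

def title_similarity(text, page_title):
    """Crude similarity: fraction of page-title words appearing in the text."""
    title_words = set(page_title.lower().split())
    if not title_words:
        return 0.0
    return len(title_words & set(text.lower().split())) / len(title_words)

def score_candidate(tag, text, page_title):
    base = TAG_WEIGHTS.get(tag, 1.0)  # markup-based weight
    # Boost candidates that resemble the <head><title> contents.
    return base * (1.0 + title_similarity(text, page_title))

candidates = [
    ("h1", "Python crawler tips"),
    ("h2", "Related articles"),
    ("p", "Lorem ipsum dolor sit amet"),
]
page_title = "Python crawler tips - Example Site"
best = max(candidates, key=lambda c: score_candidate(c[0], c[1], page_title))
```

A real implementation would walk the DOM (e.g. with lxml) and score every candidate node this way before picking the best one.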
2011/02/19
[ "https://Stackoverflow.com/questions/5051688", "https://Stackoverflow.com", "https://Stackoverflow.com/users/624475/" ]
Your `<uses-permission>` element needs to be an immediate child of the `<manifest>` element, and your code listing above suggests that it is not. [Here is a sample project](https://github.com/commonsguy/cw-advandroid/tree/master/SystemEvents/OnBoot) demonstrating the use of `BOOT_COMPLETED`.
Adding `<category android:name="android.intent.category.HOME" />` to my manifest file solved my problem, and it works. ``` <receiver android:name=".BroadCastRecieverClass"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED"/> <category android:name="android.intent.category.HOME" /> </intent-filter> </receiver> ```
17,725,927
I have some questions about [boxplots](http://matplotlib.org/examples/pylab_examples/boxplot_demo.html) in matplotlib: **Question A**. What do the markers that I highlighted below with **Q1**, **Q2**, and **Q3** represent? I believe **Q1** is maximum and **Q3** are outliers, but what is **Q2**?                        ![enter image description here](https://i.stack.imgur.com/FBqdp.png) **Question B** How does matplotlib identify **outliers**? (i.e. how does it know that they are not the true `max` and `min` values?)
2013/07/18
[ "https://Stackoverflow.com/questions/17725927", "https://Stackoverflow.com", "https://Stackoverflow.com/users/283296/" ]
Here's a graphic that illustrates the components of the box from a [stats.stackexchange answer](https://stats.stackexchange.com/a/149178). Note that k=1.5 if you don't supply the `whis` keyword in Pandas. [![annotated box in a boxplot](https://i.stack.imgur.com/ty5wN.png)](https://i.stack.imgur.com/ty5wN.png) The boxplot function in Pandas is a wrapper for `matplotlib.pyplot.boxplot`. The [matplotlib docs](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.boxplot.html) explain the components of the boxes in detail: **Question A:** > > The box extends from the lower to upper quartile values of the data, with a line at the median. > > > i.e. a quarter of the input data values is below the box, a quarter of the data lies in each part of the box, and the remaining quarter lies above the box. **Question B:** > > whis : float, sequence, or string (default = 1.5) > > > As a float, determines the reach of the whiskers to the beyond the > first and third quartiles. In other words, where IQR is the > interquartile range (Q3-Q1), the upper whisker will extend to last > datum less than Q3 + whis\*IQR). Similarly, the lower whisker will > extend to the first datum greater than Q1 - whis\*IQR. Beyond the > whiskers, data are considered outliers and are plotted as individual > points. > > > Matplotlib (and Pandas) also gives you a lot of options to change this default definition of the whiskers: > > Set this to an unreasonably high value to force the whiskers to show > the min and max values. Alternatively, set this to an ascending > sequence of percentile (e.g., [5, 95]) to set the whiskers at specific > percentiles of the data. Finally, whis can be the string 'range' to > force the whiskers to the min and max of the data. > > >
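The quartile and whisker definitions quoted above can be checked by hand. Here is a sketch (the percentile helper uses linear interpolation, numpy's default method; matplotlib's internal computation may differ slightly in edge cases):

```python
# Hand-computing the box/whisker components for matplotlib's default whis=1.5.
def percentile(xs, p):
    """Linear-interpolation percentile over a sorted copy of xs."""
    xs = sorted(xs)
    idx = (p / 100) * (len(xs) - 1)
    lo, hi = int(idx), min(int(idx) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (idx - lo)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]  # 100 is an obvious outlier

q1, median, q3 = (percentile(data, p) for p in (25, 50, 75))
iqr = q3 - q1
whis = 1.5  # matplotlib's default

lower_fence = q1 - whis * iqr
upper_fence = q3 + whis * iqr

# Whiskers end at the most extreme data points still inside the fences;
# anything beyond is drawn as an individual point ("flier"/outlier).
lower_whisker = min(x for x in data if x >= lower_fence)
upper_whisker = max(x for x in data if x <= upper_fence)
outliers = [x for x in data if x < lower_fence or x > upper_fence]
```

So the whiskers here run from 1 to 9, and only the value 100 is plotted as an outlier, matching the `Q1 - 1.5*IQR` / `Q3 + 1.5*IQR` rule from the docs.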
Just in case this can benefit anyone else, I needed to put a legend on one of my box plot graphs so I made this little .png in Inkscape and thought I'd share it. Edit: to clarify a bit more, the whiskers end at the farthest data point within the 1.5 \* IQR interval. [![enter image description here](https://i.stack.imgur.com/Bh5pf.png)](https://i.stack.imgur.com/Bh5pf.png)
11,022,509
How can I make www.mydomain.com/folder/?id=123 ---> www.mydomain.com/folder/xCkLbgGge I want my DB query page to get its own URL, like I've seen on Twitter etc.
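One common way to produce short opaque tokens like this (not necessarily what Twitter does — that part is an assumption) is to base62-encode the numeric database id and route `/folder/<token>` back to `?id=<n>`; a sketch in Python:

```python
# Sketch: map a numeric row id to a short URL token and back (base62).
# The alphabet choice is an assumption; any fixed alphabet works.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode(n):
    """Encode a non-negative integer as a base62 token."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

def decode(token):
    """Decode a base62 token back to the integer id."""
    n = 0
    for ch in token:
        n = n * 62 + ALPHABET.index(ch)
    return n
```

On the server side you would then rewrite `/folder/<token>` to the query-string form (e.g. with .htaccess or a router) and keep the original `?id=123` logic unchanged.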
2012/06/13
[ "https://Stackoverflow.com/questions/11022509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1121487/" ]
This is known as a "slug"; WordPress made this term popular. Anyway, ultimately what you need to do is have an .htaccess file that catches all your incoming traffic and then reforms it at the server level to work with your PHP, in the sense that you will still keep the ?id=123 logic intact, but on the client side '/folder/FHJKD/' will be the viewable result. Here is an example of an .htaccess file I use a similar logic on (so does WordPress for that matter): ``` RewriteEngine On #strips the www out of the domain if there RewriteCond %{HTTP_HOST} ^www\.domain\.com$ #applies logic that changes the domain from http://mydomain.com/post/my-article #to resemble http://mydomain.com/?id=post/my-article RewriteRule ^(.*)$ http://domain.com/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php?id=$1 [QSA,L] ``` What this will do is take everything after domain.com/ and pass it as a variable to index.php; the variable in this example would be 'id'. From this you have to devise a logic that best suits your site's needs. Example: ``` <?php //the URL for the example here: http://mydomain.com/?id=post/my-article if($_GET['id']) { $myParams = explode('/', $_GET['id']); echo '<pre>'; print_r($myParams); echo '</pre>'; } ?> ``` Now the logic for this would have to go much deeper; this is only a basic-level example. But overall, and especially because you're working with a database I assume, you're going to want to make sure 
The output of the above `$myParams` via `print_r()` would be: ``` Array( [0] => post [1] => my-article ) ``` To work with it you would need to do at the very least ``` echo $myParams[0].'<br />'; ``` or you could do it like this, because most browsers will add a final / ``` <?php //the URL for the example here: http://mydomain.com/?id=post/my-article if($_GET['id']) { //breaks the variable apart, removes any empty array values and reorders the index $myParams = array_values(array_filter(explode('/', $_GET['id']))); if(count($myParams) > 1) { $sql = "SELECT * FROM post_table WHERE slug = '".mysql_real_escape_string($myParams[1])."'"; $result = mysql_query($sql); } } ?> ``` Now this admittedly is a very crude example; you would want to work some logic in there to prevent MySQL injection, and then you can apply the query like you are now, pulling your articles out using just id=123. Alternatively you could also go a completely different route and explore the wonders of MVC (Model View Controller). Something like CodeIgniter is a nice easy MVC framework to get started on. But that's up to you.
In your .htaccess, you need to add RewriteEngine On. After that, you will need some regexes to make this little beast work. I'm assuming ?id is folder.php?id=123. For example, the folder piece: RewriteRule ^folder/([a-zA-Z0-9_-]+)/([0-9]+)\.html$ folder.php?id=$2
12,531,348
In my controller.js I have this function: ``` $(MyModel.addMyButtonTag).live("click", function () { MyModel.addRecord(); }); ``` and in my model.js I have: ``` var MyModel = { addMyButtonTag: "#AddButton", addRecord: function () { //Show modal $(MyModel.addMyButtonTag).modal(); $('#simplemodal-container').css('height', '230px'); $('#simplemodal-container').css('min-height', '0'); } } ``` These jQuery handlers work well in IE8-9, but in Firefox they don't work at all. Any suggestions?
2012/09/21
[ "https://Stackoverflow.com/questions/12531348", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1468830/" ]
jQuery's `.live()` is deprecated; use `.on()` instead. The delegated equivalent of your handler is `$(document).on("click", MyModel.addMyButtonTag, function () { MyModel.addRecord(); });`. For more info take a look here: <http://api.jquery.com/live/>
Make sure your `MyModel` object is defined before your `.live()` call; include the script file with your object definition first. It worked for me: ``` <input type="button" id="AddButton" /> <script type="text/javascript"> var MyModel = { addMyButtonTag: "#AddButton", addRecord: function () { //Show modal alert("ITS OK"); } } $(MyModel.addMyButtonTag).live("click", function () { MyModel.addRecord(); }); </script> ```
5,655,454
I am trying to create an ExpandableListView inside my activity that shows types of wine on the top level, and individual bottles of wine on the second level. I am reading all of my data from a CSV file that I have created and filled with 8 specific bottles of wine which are all under one category for now. I am having a problem though: I am reading my data from the CSV file into an array, and I can report it out to the log as I read it in and it shows correctly. But once I go to try to put it into my adapter and then into the ListView, the array is filled with 8 identical Wine objects which are whatever the last one in my file is. Here is the code I am using to read the file and create an array of Wine objects. **Edit:** I changed my code to check my array right after the while loop finishes filling it and I am getting the same result. This is the newer version of the code. ``` handler = new Handler() { @Override public void handleMessage(Message msg) { Log.i(myTag, "Notify Change"); //By the time I get to here every object in the array is identical for(int i = 0; i < chrd.length; i++){ Log.i(myTag,i + " " + chrd[i].toString()); } super.handleMessage(msg); } }; Runnable r = new Runnable(){ public void run() { current = new Chardonnay(); //final int ITEMS = 15; int count = 0; try { File myFile = new File ("/sdcard/chardonnay.txt"); fis = new FileInputStream(myFile); BufferedReader reader = new BufferedReader(new InputStreamReader(fis)); String line; while ((line = reader.readLine()) != null) { String[] RowData = line.split(","); current.setName(RowData[0]); current.setPlace(RowData[1]); current.setDescription(RowData[2]); current.setYear(Integer.valueOf(RowData[3])); current.setPriceBottle(Integer.valueOf(RowData[4])); current.setPriceGlass(Integer.valueOf(RowData[5])); chrd[count] = current; Log.i(myTag, count + " " + chrd[count]); count++; } for(int i = 0; i < chrd.length; i++){ Log.i(myTag,i + " " + chrd[i]); } } catch (IOException ex) { // handle exception 
ex.printStackTrace(); } handler.sendEmptyMessage(1); try { fis.close(); } catch (IOException e) { e.printStackTrace(); } } }; Thread thread = new Thread(r); thread.start(); } ``` And here is the log output that running this creates: ``` 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 0 Wine [name=Acre, place=Central Coast, description=Seductive apple pie crust and lemon blossom aromas introduce crisp juicy flavors enriched by a creaminess resulting from surlie barrel aging, year=2008, priceBottle=25, priceGlass=7] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 1 Wine [name=Silver Palm, place=North Coast, description=Fermented in stainless steel* this wine's delicate fruit characteristics were preserved without any overbearing flavors that an oak barrel might impart, year=2009, priceBottle=30, priceGlass=10] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 2 Wine [name=Franciscan, place=Napa Valley, description=Ripe* generous aromas of apple* pear* and honey with toasty oak. Lively* rich creamy and supple with notes of vanilla on the finish, year=2009, priceBottle=30, priceGlass=-1] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 3 Wine [name=Sonoma Cutrer, place=Russian River, description=The 2nd most popular chardonnay in W&S Restaurant Poll* this wine is beautifully balanced with well integrated oak, year=2008, priceBottle=35, priceGlass=11] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 4 Wine [name=Matanzas Creek, place=Sonoma, description=92 pts WE* this wine has a silky texture with flavors of lemon cream* peach and pear which feels elegant and creamy on the palate, year=2007, priceBottle=40, priceGlass=-1] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 5 Wine [name=Silver by Mer Soleil, place=Santa Lucia Highlands, description=Combines ripe* intense peach* nectarine and tangerine fruit with touches of floral and spice, year=2007, priceBottle=40, priceGlass=-1] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 6 Wine [name=Jordan, place=Russian River, 
description=Voted Best Chardonnay by respected wine journalists who attended 2010 Critics Challenge, year=2008, priceBottle=50, priceGlass=-1] 04-13 15:45:09.390: INFO/One2OneWineMenu(6472): 7 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): Notify Change 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 0 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 1 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 2 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 3 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 4 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 5 Wine 
[name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 6 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] 04-13 15:45:09.405: INFO/One2OneWineMenu(6472): 7 Wine [name=Ramey, place=Santa Lucia Highlands, description=94 pts RP* intense and vibrant* shows full-bodied citrus* melon* and hazelnut flavors that turn subtle and offer hints of fig/tangerine, year=2007, priceBottle=90, priceGlass=-1] ``` I have tried the same logical concept but with ArrayList instead of Wine[] and it's having the same problem. I am stumped; I have never seen the contents of an array just change for no apparent reason like this. Perhaps I am overlooking something relatively simple. Does anyone have any idea what might be going on here?
2011/04/13
[ "https://Stackoverflow.com/questions/5655454", "https://Stackoverflow.com", "https://Stackoverflow.com/users/507810/" ]
You assign the same object (`current`) to every cell of `chrd`, which is why you end up with the last value in all of them. You should initialize `current` inside the loop to fix this: ``` while ((line = reader.readLine()) != null) { current = new Chardonnay(); String[] RowData = line.split(","); current.setName(RowData[0]); current.setPlace(RowData[1]); current.setDescription(RowData[2]); current.setYear(Integer.valueOf(RowData[3])); current.setPriceBottle(Integer.valueOf(RowData[4])); current.setPriceGlass(Integer.valueOf(RowData[5])); chrd[count] = current; Log.i(myTag, count + " " + chrd[count]); count++; } ```
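To see why every slot ends up holding the last row, here is a minimal, runnable sketch of the same mistake outside Android. The `Item` class and the sample strings are hypothetical stand-ins for `Chardonnay` and the CSV rows, not your real code:

```java
public class AliasDemo {
    static class Item {
        String name;
        Item(String name) { this.name = name; }
    }

    // reuse == true reproduces the bug; reuse == false is the fix
    static String[] fill(String[] rows, boolean reuse) {
        Item[] items = new Item[rows.length];
        Item current = new Item("");
        for (int i = 0; i < rows.length; i++) {
            if (!reuse) current = new Item("");   // the fix: a fresh object per row
            current.name = rows[i];               // mutates the (possibly shared) object
            items[i] = current;                   // stores a reference, not a copy
        }
        String[] names = new String[items.length];
        for (int i = 0; i < items.length; i++) names[i] = items[i].name;
        return names;
    }

    public static void main(String[] args) {
        String[] rows = {"Acre", "Silver Palm", "Ramey"};
        System.out.println(String.join(",", fill(rows, true)));   // Ramey,Ramey,Ramey
        System.out.println(String.join(",", fill(rows, false)));  // Acre,Silver Palm,Ramey
    }
}
```

The array never "changes for no reason": with `reuse == true` it holds N references to one object, so the last mutation shows through every slot, which is exactly what the log output in the question demonstrates.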
In my experience, this is how I have had to format data for ExpandableListAdapter Groups: ``` ArrayList<HashMap<String, String>> alist = new ArrayList<HashMap<String, String>>(); ... //provided there are entries in the database, iterate through them all. create a hashmap using "company" as the key and //the company as the item and add this hashmap to the array of maps. if (cursor.moveToFirst()) { do { HashMap<String, String> m = new HashMap<String, String>(); m.put("company", cursor.getString(cursor.getColumnIndex(CompanyAndProductDatabaseAdapter.company_column))); alist.add(m); } while (cursor.moveToNext()); } ```
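As a plain-Java sketch of the pattern above (the Android cursor is replaced by a string array, and the `company` key and sample rows are made up), note that a fresh `HashMap` is created on every iteration, so the group list never shares one mutable object between rows:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupDemo {
    // Builds the ArrayList<HashMap<String, String>> shape that an
    // ExpandableListAdapter typically expects for its groups.
    static List<Map<String, String>> buildGroups(String[] companies) {
        List<Map<String, String>> groups = new ArrayList<>();
        for (String c : companies) {
            Map<String, String> m = new HashMap<>(); // fresh map per row
            m.put("company", c);
            groups.add(m);
        }
        return groups;
    }

    public static void main(String[] args) {
        List<Map<String, String>> g = buildGroups(new String[]{"Acme", "Globex"});
        System.out.println(g.get(0).get("company")); // Acme
        System.out.println(g.get(1).get("company")); // Globex
    }
}
```

Creating the map inside the loop matters for the same reason the accepted fix moves `current = new Chardonnay();` inside the `while` loop: each row needs its own object.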