Dataset columns: qid (int64, 1 to 74.7M), question (string, 15 to 58.3k chars), date (string, 10 chars), metadata (list), response_j (string, 4 to 30.2k chars), response_k (string, 11 to 36.5k chars)
1,888
Hydrogen is the lightest element, so it's capable of lifting the most weight in our atmosphere (probably not the best terminology there, but you get the picture). Would hot hydrogen (in the same sense as hot air) be able to lift even more mass? Would a higher or lower density of hydrogen in a balloon lift more? If you could have a balloon which had nothing in it (a vacuum inside), would that lift more than a hydrogen balloon? Basically, what are the physics of balloons and lifting? (really not sure what to tag this, so if someone else could, that'd be great)
2010/12/13
[ "https://physics.stackexchange.com/questions/1888", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4/" ]
A vacuum balloon is a possibility, but I doubt it can provide more lift than a hydrogen balloon: you need a rigid shell to prevent implosion of the vacuum balloon, and my feeling is the shell will be too heavy. However, while vacuum balloons cannot compete on lift, they can offer better altitude control: you can bleed air in to lose altitude and pump air out to gain altitude. Together with my coauthor, I proposed some designs of vacuum balloons made of commercially available materials, such as boron carbide and aluminum honeycombs (US patent application 20070001053 (11/517915), or <http://akhmeteli.org/wp-content/uploads/2011/08/vacuum_balloons_cip.pdf>). The main problem is so-called buckling (loss of stability). It is possible, but difficult, so as far as I know nobody has built a vacuum balloon so far, although the idea of a vacuum balloon is centuries old.
Approximate mean molar mass of air: 29. Molar mass of hydrogen gas (H2): 2. So with both gases at the same temperature and pressure, you are getting $\frac{27}{29}=93\%$ of the theoretical lifting capacity (the theoretical maximum being a vacuum). You would have to heat the hydrogen to about 200 C to get to 95% efficiency. It would be much better to work on making the surrounding/supporting structure lighter than to heat the hydrogen - the heating apparatus plus its fuel is likely to weigh much more than you gain in lift.
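These numbers can be checked with a quick ideal-gas sketch (the molar masses 29 and 2 and the 293 K ambient temperature are round-number assumptions; equal pressure inside and outside the balloon is assumed):

```python
def lift_fraction(gas_molar_mass, t_gas=293.0, air_molar_mass=29.0, t_ambient=293.0):
    """Fraction of the theoretical (vacuum-balloon) lift per unit volume,
    treating both gases as ideal and at equal pressure inside and outside."""
    return 1.0 - (gas_molar_mass / air_molar_mass) * (t_ambient / t_gas)

print(f"vacuum:          {lift_fraction(0.0):.1%}")                 # 100.0% by definition
print(f"H2 at ambient:   {lift_fraction(2.0):.1%}")                 # ~93.1%
print(f"H2 at 200 C:     {lift_fraction(2.0, t_gas=473.0):.1%}")    # ~95.7%
print(f"hot air, 100 C:  {lift_fraction(29.0, t_gas=373.0):.1%}")   # ~21.4%
```

The last line also answers the hot-air part of the question: even at 100 C, hot air gives only about a fifth of the lift that room-temperature hydrogen does.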
33,082,748
I am very new to MOQ and have an issue I cannot solve. I have the following code I am testing (I am testing first one - **ValidateInputBankFile**): ``` #region Constructor private readonly IErrorRepository _errorRepository; private readonly IFileSystem _fileSystem; public IP_BankInfoDeserializer(IErrorRepository errorRepository, IFileSystem fileSystem) { _errorRepository = errorRepository; _fileSystem = fileSystem; } #endregion public IP_BankInfo ValidateInputBankFile(string sPath, App.BankType bankType) { if (!_fileSystem.FileExists((sPath))) return null; //first retrieve representative bank info var tmpInfo = DeserializeBankInfo(bankType); if (tmpInfo == null) return null;//Does not exist return tmpInfo; } public IP_BankInfo DeserializeBankInfo(App.BankType bankType) { if (!IsFileCorrect(bankType)) return null; IP_BankInfo info = new IP_BankInfo(); using (var stream = new StreamReader(Directory.GetCurrentDirectory() + Path.DirectorySeparatorChar + sFolder + Path.DirectorySeparatorChar + bankType.ToString() + ".xml")) { XmlSerializer serializer = new XmlSerializer(typeof(IP_BankInfo)); try { info = serializer.Deserialize(stream) as IP_BankInfo; } catch (Exception ex) { info = null; } } return info; } ``` This is my test method: ``` [TestMethod] public void ValidateInputBank_ExistingPath_ExistingBank() { Mock<IFileSystem> fileSystem = new Mock<IFileSystem>(); fileSystem.Setup(n => n.FileExists(null)).Returns(true); Mock<IP_BankInfoDeserializer> mocSerializer = new Mock<IP_BankInfoDeserializer>(); mocSerializer.Setup(n => n.DeserializeBankInfo(App.BankType.UniCredit)).Returns(new Models.IP_BankInfo()); var result = mocSerializer.Object.ValidateInputBankFile(null, App.BankType.UniCredit); //Assert.AreEqual(serializer.Object.ValidateInputBankFile(null, App.BankType.UniCredit), new Models.IP_BankInfo()); } ``` What I am trying to do, is to avoid call to **DeserializeBankInfo**, return new `IP_BankInfo` and so that I can check it under my final assert stage. 
The problem is that my **var result** always returns null, and I don't understand what I am doing wrong. It also fails on the following code `mocSerializer.Setup(n => n.DeserializeBankInfo(App.BankType.UniCredit)).Returns(()=>null);`, even though I am passing the correct parameters.
2015/10/12
[ "https://Stackoverflow.com/questions/33082748", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1539189/" ]
**To answer your question** > > The problem is that my var result always returns null. I don't understand what am I doing wrong? > > > `ValidateInputBankFile` is never set up, and you are using loose mocks, so it returns null. Use strict mocks by passing `MockBehavior.Strict` to the constructor of your mock, and you will get an exception telling you that the method was not set up. Apply a `Setup` returning an appropriate value for the `ValidateInputBankFile` method on that mock and it will behave correctly. **A word of advice** You are calling a method on a mocked object from within your test: ``` var result = mocSerializer.Object.ValidateInputBankFile(null, App.BankType.UniCredit); ``` As a rule of thumb, you should never call `xxx.Object.MyMethod()` like this: you are essentially just exercising the "arrange" part of the test. I feel like I need to ask what you are trying to achieve by doing so, because you're basically just *testing your test*.
Personally, it looks to me like you are attempting to test your class but at the same time mock it so that not all of the class runs. **I** see this as incorrect: how do you then know that the deserialisation code is working correctly? If your answer is another test where you mock other functionality in your class, I would reply that you need to abstract the deserialisation of the `BankInfo` out into another interface/class that you can mock and inject into your `IBankInfoValidator`. This means you are splitting your validation from the deserialisation of the object, which will help both with testing and with maintainability/extensibility in the future if your use case changes. Personally, I'd scrap your tests for now, start by abstracting out the deserialisation of your object, and then think about testing the pieces as separate classes.
4,944,156
There is the template class List. ``` template <typename Point> class List { public: template <const unsigned short N> void load ( const char *file); ... }; template <typename Point> template <const unsigned short N> void List <Point>::load ( const char *file) { } ``` How do I specialize the method load for N=2? This code is not valid... ``` template <typename Point> void List <Point> <2>::load ( const char *file) { } ``` And this code also does not work. ``` template <typename Point> void List <Point> ::load <2> ( const char *file ) { } Error 3 error C2768: 'List<Point>::load' : illegal use of explicit template arguments 66. Error 5 error C2244: 'List<Point>::load' : unable to match function definition to an existing declaration 66 ``` Compiler g++: ``` template <typename Point> template <> void List <Point> ::load <2> ( const char *file ) { } error: explicit specialization in non-namespace scope `class List<>' error: enclosing class templates are not explicitly specialized error: default arguments are only permitted for function parameters error: `load' is not a function template error: invalid function declaration ```
2011/02/09
[ "https://Stackoverflow.com/questions/4944156", "https://Stackoverflow.com", "https://Stackoverflow.com/users/598598/" ]
It turns out that there's a provision in the C++ spec that explicitly disallows specializing a template class or function nested inside of a template class unless you also explicitly specialize the outer template as well. Visual Studio doesn't enforce this rule, hence the confusion with the previous example, but g++ certainly does. If you want to specialize the template, your options will either be to also specialize the outer template or to somehow fake up the behavior of specialization by having the method dispatch to one of two different implementations based on the template parameter. Neither of these are very satisfying, I know, but unfortunately the language is designed weirdly in some template corners. :-( One way that you can emulate the behavior of the explicit specialization is to use a technique called *tag dispatching*. The idea is that we'll make a very simple struct that looks like this: ``` template <unsigned short N> struct Box {}; ``` This type is completely empty. It's not meant to be used directly, but rather is just a way of embedding an integer into the type system. In particular, `Box<3>` is not the same type as `Box<4>`, etc. Next, in your list class, define two functions that look like this, preferably marked private: ``` template <unsigned short N> void doLoad(const char* file, Box<N>); void doLoad(const char* file, Box<2>); ``` These two functions are overloads of one another, distinguishable only by their final parameter, which is either a `Box<N>` in the template case or a `Box<2>` in the non-template case. Note that the parameters don't have names. This is an arbitrary decision, but since we're not planning on actually reading the parameters, we don't need them. The intuition behind these functions is that this first function will be the "catch-all" implementation that will work for any `N` except 2. The second version will contain the implementation of loading for the case where `N == 2`. 
Finally, implement `load` as follows: ``` template <typename Point> template <unsigned short N> void List<Point>::load(const char* file) { doLoad(file, Box<N>()); } ``` How does this work? This function takes in a parameter, and then calls `doLoad`, forwarding that parameter as the first argument and passing a temporary `Box<N>` as the second argument. If `N` is not two, then this is a call to the template version of `doLoad`, which is the catch-all handler. If, on the other hand, `N` is two, then this will call the non-template version of `doLoad`, because non-template functions have priority over template functions during overload resolution. In short, the implementation of `load` just becomes a trampoline that forwards you to the correct one of the two implementations. You can then put the logic in the appropriate `doLoad` function to get the behavior you want. Hope this helps!
**Edit**: OK, so I rewrote your class a bit, with inlined function definitions, and this definitely works: ``` template <typename Point> class List { public: template <const unsigned short N> void load( const char *file){ } template<> void load<2>(const char* file){ } }; ``` (Note that this relies on the compiler accepting an explicit specialization at class scope; Visual Studio allows it, but as the other answer notes, g++ rejects it.)
You cannot specialize a member template without also specializing the class template. I also wonder what the meaning of N could be, since it is not used in the function parameters.
652,556
This is my first time setting up an Ubuntu Server (14.04 LTS) and I am having trouble configuring the firewall (UFW). I only need `ssh` and `http`, so I am doing this: ``` sudo ufw disable sudo ufw reset sudo ufw default deny incoming sudo ufw default allow outgoing sudo ufw allow 22/tcp sudo ufw allow 80/tcp sudo ufw enable sudo reboot ``` **But I can still connect to databases on other ports of this machine**. Any idea what I am doing wrong? **EDIT**: these databases are on Docker containers. Could this be related? Is it overriding my ufw config? **EDIT2**: output of `sudo ufw status verbose` ``` Status: active Logging: on (low) Default: deny (incoming), allow (outgoing), deny (routed) New profiles: skip To Action From -- ------ ---- 22/tcp ALLOW IN Anywhere 80/tcp ALLOW IN Anywhere 22/tcp (v6) ALLOW IN Anywhere (v6) 80/tcp (v6) ALLOW IN Anywhere (v6) ```
2015/07/25
[ "https://askubuntu.com/questions/652556", "https://askubuntu.com", "https://askubuntu.com/users/433097/" ]
Use of `/etc/docker/daemon.json` with content ``` { "iptables": false } ``` might sound like a solution, **but it only works until the next reboot**. After that you may notice that none of your containers has access to the Internet, so you can't, for example, ping any website. That may be undesired behavior. The same applies to binding a container to a specific IP: you may not want to do that. **The ultimate option is to create a container and have it behind UFW no matter what happens** and how you create this container, so there's a solution: after you create the `/etc/docker/daemon.json` file, invoke: ``` sed -i -e 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/g' /etc/default/ufw ufw reload ``` This sets UFW's default forward policy to ACCEPT. Then use: ``` iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE ``` If you are using docker-compose, the IP range in the command above should be replaced with the IP range of the network docker-compose creates when you run `docker-compose up`. I described the problem and solution more comprehensively [in this article](https://www.mkubaczyk.com/2017/09/05/force-docker-not-bypass-ufw-rules-ubuntu-16-04/). Hope it helps!
**An addition to the accepted answer** If you map a port like `127.0.0.1:8080:8080` to keep it closed from the Internet, but still want to be able to access it from your working environment (e.g. for monitoring/administration/debugging purposes, with an IP whitelist or an ssh tunnel), there is a "well-known" solution for that: <https://github.com/chaifeng/ufw-docker>
In my case I've ended up modifying iptables to allow access to Docker only from specific IPs. As per [ESala's answer](https://askubuntu.com/posts/652572/revisions): > > If you use `-p` flag on containers Docker makes changes directly to iptables, ignoring the ufw. > > > **Example of records added to iptables by Docker** Routing to 'DOCKER' chain: ``` -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER ``` Forwarding packets from 'DOCKER' chain to container: ``` -A DOCKER -i docker0 -j RETURN -A DOCKER ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379 ``` **You can modify iptables to allow access to DOCKER chain only from specified source IP (e.g. `1.1.1.1`):** ``` -A PREROUTING -s 1.1.1.1 -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -s 1.1.1.1 ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER ``` You may want to use `iptables-save > /tmp/iptables.conf` and `iptables-restore < /tmp/iptables.conf` to dump, edit, and restore iptables rules.
A fast workaround when running Docker with port mapping: you can always do ``` docker run ... -p 127.0.0.1:<ext port>:<internal port> ... ``` to prevent your Docker container from being accessed from outside.
The problem was using the `-p` flag on containers. It turns out that Docker makes changes directly on your `iptables`, which are not shown with `ufw status`. Possible solutions are: 1. Stop using the `-p` flag. Use docker linking or [docker networks](https://docs.docker.com/engine/userguide/networking/) instead. 2. Bind containers locally so they are not exposed outside your machine: `docker run -p 127.0.0.1:8080:8080 ...` 3. If you insist on using the `-p` flag, tell docker not to touch your `iptables` by disabling them in `/etc/docker/daemon.json` and restarting: `{ "iptables" : false }` I recommend option 1 or 2. Beware that option 3 [has side-effects](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/), like containers becoming unable to connect to the internet.
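For completeness, option 3's config file is a one-line JSON document placed at `/etc/docker/daemon.json`, followed by a daemon restart (e.g. `sudo systemctl restart docker` on systemd hosts; adjust for your init system):

```json
{
  "iptables": false
}
```

As noted above, prefer options 1 or 2; this setting has side-effects on container networking.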
Use `--network=host` when you start the container: Docker then uses the host's network stack directly instead of the default bridge network, so no port-mapping iptables rules are added and your firewall rules apply. I see no legal way to block the bridged network. Alternatively, you can use a custom user-defined network with isolation.
UPDATE 10.2021 ============== When only one port number is specified, Docker chooses an ephemeral host port; see [docker-compose ports](https://docs.docker.com/compose/compose-file/compose-file-v3/#ports) > > Specify just the container port (an ephemeral host port is chosen for the host port). > > > I thought that my original answer worked because I could not connect to the expected port 1234. So please don't do this - follow the advice in the other answers. I will not delete this answer, because it might still be useful for understanding why **not** to do this. original answer =============== I used `docker-compose` to start several containers and also had the problem that one port was exposed to the world, ignoring ufw rules. The fix to make the port available only to my docker containers was this change in my `docker-compose.yml` file: ``` ports: - "1234:1234" ``` to this: ``` ports: - "1234" ``` Now the other docker containers can still use the port, but I cannot access it from outside.
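If the goal is a port that the host can reach but the Internet cannot, the loopback binding from the accepted answer translates to docker-compose like this (service name, image, and port number are illustrative):

```yaml
services:
  db:
    image: postgres        # illustrative image
    ports:
      - "127.0.0.1:1234:1234"   # published on localhost only; no world-facing iptables rule
```

Other containers on the same compose network still reach the service directly by name (e.g. `db:1234`), independent of the published port.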
652,556
This is my first time setting up an Ubuntu Server (14.04 LTS) and I am having trouble configuring the firewall (UFW). I only need `ssh` and `http`, so I am doing this: ``` sudo ufw disable sudo ufw reset sudo ufw default deny incoming sudo ufw default allow outgoing sudo ufw allow 22/tcp sudo ufw allow 80/tcp sudo ufw enable sudo reboot ``` **But I can still connect to databases on other ports of this machine**. Any idea about what am I doing wrong? **EDIT**: these databases are on Docker containers. Could this be related? is it overriding my ufw config? **EDIT2**: output of `sudo ufw status verbose` ``` Status: active Logging: on (low) Default: deny (incoming), allow (outgoing), deny (routed) New profiles: skip To Action From -- ------ ---- 22/tcp ALLOW IN Anywhere 80/tcp ALLOW IN Anywhere 22/tcp (v6) ALLOW IN Anywhere (v6) 80/tcp (v6) ALLOW IN Anywhere (v6) ```
2015/07/25
[ "https://askubuntu.com/questions/652556", "https://askubuntu.com", "https://askubuntu.com/users/433097/" ]
In my case I've ended up modifying iptables to allow access to Docker only from specific IPs. As per [ESala's answer](https://askubuntu.com/posts/652572/revisions): > > If you use `-p` flag on containers Docker makes changes directly to iptables, ignoring the ufw. > > > **Example of records added to iptables by Docker** Routing to 'DOCKER' chain: ``` -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER ``` Forwarding packets from 'DOCKER' chain to container: ``` -A DOCKER -i docker0 -j RETURN -A DOCKER ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379 ``` **You can modify iptables to allow access to DOCKER chain only from specified source IP (e.g. `1.1.1.1`):** ``` -A PREROUTING -s 1.1.1.1 -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -s 1.1.1.1 ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER ``` You may want to use `iptables-save > /tmp/iptables.conf` and `iptables-restore < /tmp/iptables.conf` to dump, edit, and restore iptables rules.
Use of `/etc/docker/daemon.json` with content ``` { "iptables": false } ``` might sound like a solution **but it only works until the next reboot**. After that you may notice that none of your containers has access to the Internet, so you can't, for example, ping any website. That may be undesired behavior. The same applies to binding a container to a specific IP. You may not want to do that. **The ultimate option is to create a container and have it behind UFW no matter what happens** and however you create this container, so there's a solution: After you create the `/etc/docker/daemon.json` file, invoke: ``` sed -i -e 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/g' /etc/default/ufw ufw reload ``` so you set UFW's default forward policy to accept, and use: ``` iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE ``` If you're about to use docker-compose, the IP in the command above should be replaced with the IP of the network docker-compose creates when you run it with `docker-compose up`. I described the problem and solution more comprehensively [in this article](https://www.mkubaczyk.com/2017/09/05/force-docker-not-bypass-ufw-rules-ubuntu-16-04/) Hope it helps!
652,556
This is my first time setting up an Ubuntu Server (14.04 LTS) and I am having trouble configuring the firewall (UFW). I only need `ssh` and `http`, so I am doing this: ``` sudo ufw disable sudo ufw reset sudo ufw default deny incoming sudo ufw default allow outgoing sudo ufw allow 22/tcp sudo ufw allow 80/tcp sudo ufw enable sudo reboot ``` **But I can still connect to databases on other ports of this machine**. Any idea what I am doing wrong? **EDIT**: these databases are on Docker containers. Could this be related? Is it overriding my ufw config? **EDIT2**: output of `sudo ufw status verbose` ``` Status: active Logging: on (low) Default: deny (incoming), allow (outgoing), deny (routed) New profiles: skip To Action From -- ------ ---- 22/tcp ALLOW IN Anywhere 80/tcp ALLOW IN Anywhere 22/tcp (v6) ALLOW IN Anywhere (v6) 80/tcp (v6) ALLOW IN Anywhere (v6) ```
2015/07/25
[ "https://askubuntu.com/questions/652556", "https://askubuntu.com", "https://askubuntu.com/users/433097/" ]
A fast workaround when running Docker is to bind the port mapping to the loopback interface. You can always do ``` docker run ... -p 127.0.0.1:<ext port>:<internal port> ... ``` to prevent your Docker container from being accessed from outside.
1. Log in to your Docker container's console: `sudo docker exec -i -t docker_container_name /bin/bash` 2. Then, inside the container's console: ``` sudo apt-get update sudo apt-get install ufw sudo ufw allow 22 ``` 3. Add your ufw rules and enable ufw: `sudo ufw enable` * Your Docker container needs to be started with `--cap-add=NET_ADMIN`. To enable the "NET\_ADMIN" Docker option: 1. Stop the container: `docker stop yourcontainer` 2. Get the container id: `docker inspect yourcontainer` 3. Modify `hostconfig.json` (the default Docker path is `/var/lib/docker`; you can change yours): ``` vim /var/lib/docker/containers/containerid/hostconfig.json ``` 4. Search for `"CapAdd"` and change `null` to `["NET_ADMIN"]`: `...,"VolumesFrom":null,"CapAdd":["NET_ADMIN"],"CapDrop":null,...` 5. Restart Docker on the host machine: `service docker restart` 6. Start your container: `docker start yourcontainer`
652,556
This is my first time setting up an Ubuntu Server (14.04 LTS) and I am having trouble configuring the firewall (UFW). I only need `ssh` and `http`, so I am doing this: ``` sudo ufw disable sudo ufw reset sudo ufw default deny incoming sudo ufw default allow outgoing sudo ufw allow 22/tcp sudo ufw allow 80/tcp sudo ufw enable sudo reboot ``` **But I can still connect to databases on other ports of this machine**. Any idea what I am doing wrong? **EDIT**: these databases are on Docker containers. Could this be related? Is it overriding my ufw config? **EDIT2**: output of `sudo ufw status verbose` ``` Status: active Logging: on (low) Default: deny (incoming), allow (outgoing), deny (routed) New profiles: skip To Action From -- ------ ---- 22/tcp ALLOW IN Anywhere 80/tcp ALLOW IN Anywhere 22/tcp (v6) ALLOW IN Anywhere (v6) 80/tcp (v6) ALLOW IN Anywhere (v6) ```
2015/07/25
[ "https://askubuntu.com/questions/652556", "https://askubuntu.com", "https://askubuntu.com/users/433097/" ]
In my case I've ended up modifying iptables to allow access to Docker only from specific IPs. As per [ESala's answer](https://askubuntu.com/posts/652572/revisions): > > If you use `-p` flag on containers Docker makes changes directly to iptables, ignoring the ufw. > > > **Example of records added to iptables by Docker** Routing to 'DOCKER' chain: ``` -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER ``` Forwarding packets from 'DOCKER' chain to container: ``` -A DOCKER -i docker0 -j RETURN -A DOCKER ! -i docker0 -p tcp -m tcp --dport 6379 -j DNAT --to-destination 172.17.0.3:6379 ``` **You can modify iptables to allow access to DOCKER chain only from specified source IP (e.g. `1.1.1.1`):** ``` -A PREROUTING -s 1.1.1.1 -m addrtype --dst-type LOCAL -j DOCKER -A OUTPUT -s 1.1.1.1 ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER ``` You may want to use `iptables-save > /tmp/iptables.conf` and `iptables-restore < /tmp/iptables.conf` to dump, edit, and restore iptables rules.
UPDATE 10.2021 ============== When we use only one port number, docker will use an ephemeral host port; see [docker-compose ports](https://docs.docker.com/compose/compose-file/compose-file-v3/#ports): > > Specify just the container port (an ephemeral host port is chosen for the host port). > > > I thought that my original answer worked, because I could not connect to the expected port 1234. So please don't do this - follow the advice in the other answers. I will not delete this answer, because it might still be useful to someone to understand why **not** to do this. original answer =============== I used `docker-compose` to start several containers and also had the problem that one port was exposed to the world, ignoring ufw rules. The fix to make the port available only to my docker containers was this change in my `docker-compose.yml` file: ``` ports: - "1234:1234" ``` to this: ``` ports: - "1234" ``` Now the other docker containers can still use the port, but I cannot access it from outside.
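If you do want the port reachable from the host itself but not from the outside world, the loopback-binding idea from the accepted answer can also be written in compose syntax. The service name and port below are hypothetical:

```yaml
services:
  # Hypothetical service: the published port is bound to the host's
  # loopback interface only, so the iptables rules Docker inserts
  # never expose it on external interfaces and ufw is not bypassed.
  db:
    image: redis:alpine
    ports:
      - "127.0.0.1:6379:6379"
```

Other containers on the same compose network can still reach the service by its service name (`db:6379`) without publishing any port at all.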
652,556
This is my first time setting up an Ubuntu Server (14.04 LTS) and I am having trouble configuring the firewall (UFW). I only need `ssh` and `http`, so I am doing this: ``` sudo ufw disable sudo ufw reset sudo ufw default deny incoming sudo ufw default allow outgoing sudo ufw allow 22/tcp sudo ufw allow 80/tcp sudo ufw enable sudo reboot ``` **But I can still connect to databases on other ports of this machine**. Any idea what I am doing wrong? **EDIT**: these databases are on Docker containers. Could this be related? Is it overriding my ufw config? **EDIT2**: output of `sudo ufw status verbose` ``` Status: active Logging: on (low) Default: deny (incoming), allow (outgoing), deny (routed) New profiles: skip To Action From -- ------ ---- 22/tcp ALLOW IN Anywhere 80/tcp ALLOW IN Anywhere 22/tcp (v6) ALLOW IN Anywhere (v6) 80/tcp (v6) ALLOW IN Anywhere (v6) ```
2015/07/25
[ "https://askubuntu.com/questions/652556", "https://askubuntu.com", "https://askubuntu.com/users/433097/" ]
The problem was using the `-p` flag on containers. It turns out that Docker makes changes directly on your `iptables`, which are not shown with `ufw status`. Possible solutions are: 1. Stop using the `-p` flag. Use docker linking or [docker networks](https://docs.docker.com/engine/userguide/networking/) instead. 2. Bind containers locally so they are not exposed outside your machine: `docker run -p 127.0.0.1:8080:8080 ...` 3. If you insist on using the `-p` flag, tell docker not to touch your `iptables` by disabling them in `/etc/docker/daemon.json` and restarting: `{ "iptables" : false }` I recommend option 1 or 2. Beware that option 3 [has side-effects](https://docs.docker.com/engine/userguide/networking/default_network/container-communication/), like containers becoming unable to connect to the internet.
A fast workaround when running Docker is to bind the port mapping to the loopback interface. You can always do ``` docker run ... -p 127.0.0.1:<ext port>:<internal port> ... ``` to prevent your Docker container from being accessed from outside.
51,200,369
This is an extension of the question posed [here](https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently) (quoted below) > > I have a matrix (2d numpy ndarray, to be precise): > > > > ``` > A = np.array([[4, 0, 0], > [1, 2, 3], > [0, 0, 5]]) > > ``` > > And I want to roll each row of A independently, according to roll > values in another array: > > > > ``` > r = np.array([2, 0, -1]) > > ``` > > That is, I want to do this: > > > > ``` > print np.array([np.roll(row, x) for row,x in zip(A, r)]) > > [[0 0 4] > [1 2 3] > [0 5 0]] > > ``` > > Is there a way to do this efficiently? Perhaps using fancy indexing > tricks? > > > The accepted solution was: ``` rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]] # Use always a negative shift, so that column_indices are valid. # (could also use module operation) r[r < 0] += A.shape[1] column_indices = column_indices - r[:,np.newaxis] result = A[rows, column_indices] ``` I would basically like to do the same thing, except when an index gets rolled "past" the end of the row, I would like the other side of the row to be padded with a NaN, rather than the value move to the "front" of the row in a periodic fashion. Maybe using `np.pad` somehow? But I can't figure out how to get that to pad different rows by different amounts.
2018/07/05
[ "https://Stackoverflow.com/questions/51200369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3826115/" ]
Inspired by [Roll rows of a matrix independently's solution](https://stackoverflow.com/a/51613442/), here's a vectorized one based on [`np.lib.stride_tricks.as_strided`](http://www.scipy-lectures.org/advanced/advanced_numpy/#indexing-scheme-strides) - ``` from skimage.util.shape import view_as_windows as viewW def strided_indexing_roll(a, r): # Concatenate with sliced to cover all rolls p = np.full((a.shape[0],a.shape[1]-1),np.nan) a_ext = np.concatenate((p,a,p),axis=1) # Get sliding windows; use advanced-indexing to select appropriate ones n = a.shape[1] return viewW(a_ext,(1,n))[np.arange(len(r)), -r + (n-1),0] ``` Sample run - ``` In [76]: a Out[76]: array([[4, 0, 0], [1, 2, 3], [0, 0, 5]]) In [77]: r Out[77]: array([ 2, 0, -1]) In [78]: strided_indexing_roll(a, r) Out[78]: array([[nan, nan, 4.], [ 1., 2., 3.], [ 0., 5., nan]]) ```
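If pulling in scikit-image just for `view_as_windows` is undesirable, the same pad-then-window trick can be sketched with NumPy's own `sliding_window_view` (available in NumPy 1.20+). The function name here is made up:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def roll_rows_nan(a, r):
    """Roll each row of `a` by the amounts in `r`, padding with NaN."""
    a = np.asarray(a, dtype=float)
    r = np.asarray(r)
    n = a.shape[1]
    # NaN-pad every row on both sides so any shift in (-n, n) stays in bounds.
    pad = np.full((a.shape[0], n - 1), np.nan)
    a_ext = np.concatenate((pad, a, pad), axis=1)
    # All length-n windows per row; window (n - 1) is the unshifted row,
    # so window (n - 1) - r[i] is row i shifted by r[i].
    windows = sliding_window_view(a_ext, n, axis=1)
    return windows[np.arange(a.shape[0]), (n - 1) - r]

a = np.array([[4, 0, 0], [1, 2, 3], [0, 0, 5]])
r = np.array([2, 0, -1])
print(roll_rows_nan(a, r))
# [[nan nan  4.]
#  [ 1.  2.  3.]
#  [ 0.  5. nan]]
```

Semantics match `strided_indexing_roll` above; the result is a read-only view, so copy it if you need to write into it afterwards.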
I was able to hack this together with linear indexing... it gets the right result but performs rather slowly on large arrays. ``` A = np.array([[4, 0, 0], [1, 2, 3], [0, 0, 5]]).astype(float) r = np.array([2, 0, -1]) rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]] # Use always a negative shift, so that column_indices are valid. # (could also use modulo operation) r_old = r.copy() r[r < 0] += A.shape[1] column_indices = column_indices - r[:,np.newaxis] result = A[rows, column_indices] # replace with NaNs row_length = result.shape[-1] pad_inds = [] for ind,i in enumerate(r_old): if i > 0: inds2pad = [np.ravel_multi_index((ind,) + (j,),result.shape) for j in range(i)] pad_inds.extend(inds2pad) if i < 0: inds2pad = [np.ravel_multi_index((ind,) + (j,),result.shape) for j in range(row_length+i,row_length)] pad_inds.extend(inds2pad) result.ravel()[pad_inds] = np.nan ``` Gives the expected result: ``` print result [[ nan nan 4.] [ 1. 2. 3.] [ 0. 5. nan]] ```
51,200,369
This is an extension of the question posed [here](https://stackoverflow.com/questions/20360675/roll-rows-of-a-matrix-independently) (quoted below) > > I have a matrix (2d numpy ndarray, to be precise): > > > > ``` > A = np.array([[4, 0, 0], > [1, 2, 3], > [0, 0, 5]]) > > ``` > > And I want to roll each row of A independently, according to roll > values in another array: > > > > ``` > r = np.array([2, 0, -1]) > > ``` > > That is, I want to do this: > > > > ``` > print np.array([np.roll(row, x) for row,x in zip(A, r)]) > > [[0 0 4] > [1 2 3] > [0 5 0]] > > ``` > > Is there a way to do this efficiently? Perhaps using fancy indexing > tricks? > > > The accepted solution was: ``` rows, column_indices = np.ogrid[:A.shape[0], :A.shape[1]] # Use always a negative shift, so that column_indices are valid. # (could also use module operation) r[r < 0] += A.shape[1] column_indices = column_indices - r[:,np.newaxis] result = A[rows, column_indices] ``` I would basically like to do the same thing, except when an index gets rolled "past" the end of the row, I would like the other side of the row to be padded with a NaN, rather than the value move to the "front" of the row in a periodic fashion. Maybe using `np.pad` somehow? But I can't figure out how to get that to pad different rows by different amounts.
2018/07/05
[ "https://Stackoverflow.com/questions/51200369", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3826115/" ]
Inspired by [Roll rows of a matrix independently's solution](https://stackoverflow.com/a/51613442/), here's a vectorized one based on [`np.lib.stride_tricks.as_strided`](http://www.scipy-lectures.org/advanced/advanced_numpy/#indexing-scheme-strides) - ``` from skimage.util.shape import view_as_windows as viewW def strided_indexing_roll(a, r): # Concatenate with sliced to cover all rolls p = np.full((a.shape[0],a.shape[1]-1),np.nan) a_ext = np.concatenate((p,a,p),axis=1) # Get sliding windows; use advanced-indexing to select appropriate ones n = a.shape[1] return viewW(a_ext,(1,n))[np.arange(len(r)), -r + (n-1),0] ``` Sample run - ``` In [76]: a Out[76]: array([[4, 0, 0], [1, 2, 3], [0, 0, 5]]) In [77]: r Out[77]: array([ 2, 0, -1]) In [78]: strided_indexing_roll(a, r) Out[78]: array([[nan, nan, 4.], [ 1., 2., 3.], [ 0., 5., nan]]) ```
Based on @Seberg and @yann-dubois answers in the non-nan case, I've written a method that: * Is faster than the current answer * Works on ndarrays of any shape (specify the row-axis using the `axis` argument) * Allows for setting `fill` to either np.nan, any other "fill value" or False to allow regular rolling across the array edge. ### Benchmarking ```py cols, rows = 1024, 2048 arr = np.stack(rows*(np.arange(cols,dtype=float),)) shifts = np.random.randint(-cols, cols, rows) np.testing.assert_array_almost_equal(row_roll(arr, shifts), strided_indexing_roll(arr, shifts)) # True %timeit row_roll(arr, shifts) # 25.9 ms ± 161 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) %timeit strided_indexing_roll(arr, shifts) # 29.7 ms ± 446 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) ``` ```py def row_roll(arr, shifts, axis=1, fill=np.nan): """Apply an independent roll for each dimensions of a single axis. Parameters ---------- arr : np.ndarray Array of any shape. shifts : np.ndarray, dtype int. Shape: `(arr.shape[:axis],)`. Amount to roll each row by. Positive shifts row right. axis : int Axis along which elements are shifted. fill: bool or float If True, value to be filled at missing values. Otherwise just rolls across edges. """ if np.issubdtype(arr.dtype, int) and isinstance(fill, float): arr = arr.astype(float) shifts2 = shifts.copy() arr = np.swapaxes(arr,axis,-1) all_idcs = np.ogrid[[slice(0,n) for n in arr.shape]] # Convert to a positive shift shifts2[shifts2 < 0] += arr.shape[-1] all_idcs[-1] = all_idcs[-1] - shifts2[:, np.newaxis] result = arr[tuple(all_idcs)] if fill is not False: # Create mask of row positions above negative shifts # or below positive shifts. Then set them to np.nan. 
*_, nrows, ncols = arr.shape mask_neg = shifts < 0 mask_pos = shifts >= 0 shifts_pos = shifts.copy() shifts_pos[mask_neg] = 0 shifts_neg = shifts.copy() shifts_neg[mask_pos] = ncols+1 # need to be bigger than the biggest positive shift shifts_neg[mask_neg] = shifts[mask_neg] % ncols indices = np.stack(nrows*(np.arange(ncols),)) nanmask = (indices < shifts_pos[:, None]) | (indices >= shifts_neg[:, None]) result[nanmask] = fill arr = np.swapaxes(result,-1,axis) return arr ```
16,670,806
I have this in my `index.html` file but it does not show the paragraph that I want to add with `D3` ``` <!DOCTYPE html> <html> <head> <title> D3 page template </title> <script type="text/javascript" src = "d3/d3.v3.js"></script> </head> <body> <script type="text/javascript"> d3.select("body").append("p").text("new paragraph!"); </script> </body> </html> ``` The path to where I am referencing `D3.js` should be correct because if I do an inspect element in the browser I can click on the `D3` link and it takes me to its source code. ![enter image description here](https://i.stack.imgur.com/ishut.png)
2013/05/21
[ "https://Stackoverflow.com/questions/16670806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are missing the character set in your HTML page. Add something like this: ``` <meta charset="utf-8"> ``` The un-minified source of D3 includes the actual symbol for pi, which confuses the browser if the character set is not defined.
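With that line added, the template from the question becomes:

```html
<!DOCTYPE html>
<html>
<head>
    <!-- Declare the charset before any script is loaded -->
    <meta charset="utf-8">
    <title>D3 page template</title>
    <script type="text/javascript" src="d3/d3.v3.js"></script>
</head>
<body>
    <script type="text/javascript">
        d3.select("body").append("p").text("new paragraph!");
    </script>
</body>
</html>
```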
I am going to assume that you are testing this out without a web server. If so, then your URL will read file://.... not http://.. With this, the Javascript request will go to file:///.../D3/d3/d3.v3.js which won't have the proper response header set, such as charset and MIME. You can always get it from a CDN to avoid this problem: ``` <script src="http://d3js.org/d3.v3.min.js" charset="utf-8"></script> ```
16,670,806
I have this in my `index.html` file but it does not show the paragraph that I want to add with `D3` ``` <!DOCTYPE html> <html> <head> <title> D3 page template </title> <script type="text/javascript" src = "d3/d3.v3.js"></script> </head> <body> <script type="text/javascript"> d3.select("body").append("p").text("new paragraph!"); </script> </body> </html> ``` The path to where I am referencing `D3.js` should be correct because if I do an inspect element in the browser I can click on the `D3` link and it takes me to its source code. ![enter image description here](https://i.stack.imgur.com/ishut.png)
2013/05/21
[ "https://Stackoverflow.com/questions/16670806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are missing the character set in your HTML page. Add something like this: ``` <meta charset="utf-8"> ``` The un-minified source of D3 includes the actual symbol for pi, which confuses the browser if the character set is not defined.
I was having a similar issue. My problem was that my script file was loading before the d3js library had loaded. Just adding `defer` fixed the issue. `<script src="app.js" defer></script>`
16,670,806
I have this in my `index.html` file but it does not show the paragraph that I want to add with `D3` ``` <!DOCTYPE html> <html> <head> <title> D3 page template </title> <script type="text/javascript" src = "d3/d3.v3.js"></script> </head> <body> <script type="text/javascript"> d3.select("body").append("p").text("new paragraph!"); </script> </body> </html> ``` The path to where I am referencing `D3.js` should be correct because if I do an inspect element in the browser I can click on the `D3` link and it takes me to its source code. ![enter image description here](https://i.stack.imgur.com/ishut.png)
2013/05/21
[ "https://Stackoverflow.com/questions/16670806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
I am going to assume that you are testing this out without a web server. If so, then your URL will read file://.... not http://.. With this, the Javascript request will go to file:///.../D3/d3/d3.v3.js which won't have the proper response header set, such as charset and MIME. You can always get it from a CDN to avoid this problem: ``` <script src="http://d3js.org/d3.v3.min.js" charset="utf-8"></script> ```
I was having a similar issue. My problem was that my script file was loading before the d3js library had loaded. Just adding `defer` fixed the issue. `<script src="app.js" defer></script>`
178,294
A few words in WP Job Manager weren't translated yet. I edited most of them in the po file now, but for the sentence "posted 5 hours ago" I can't find where to translate the "hours". It's declared as `%s` in the po file, and I can't find the word hour in any of the plugin's files... so where do I change the "hours" term of this plugin? ``` #: templates/content-job_listing.php (Line 18) msgid "%s ago" msgstr "%s geleden" // Content-job_listing.php <li class="date"> <date> <?php printf( __( '%s ago', 'wp-job-manager' ), human_time_diff( get_post_time( 'U' ), current_time( 'timestamp' ) ) ); ?> </date> </li> ```
2015/02/16
[ "https://wordpress.stackexchange.com/questions/178294", "https://wordpress.stackexchange.com", "https://wordpress.stackexchange.com/users/67623/" ]
I think you need to understand how the code you are using works. The code that outputs the string is: ``` <?php printf( __( '%s ago', 'wp-job-manager' ), human_time_diff( get_post_time( 'U' ), current_time( 'timestamp' ) ) ); ?> ``` In the above code, `__( '%s ago', 'wp-job-manager' )` is translated by wp-job-manager language files and `%s` is replaced by the output of the `human_time_diff()` function, which is in the format "1 day", "2 months", etc. The output of `human_time_diff()` is already translated by WordPress language files and, additionally, the output can be filtered. So, you have two options: 1) modify the output of the `human_time_diff()` function using the `human_time_diff` filter or 2) override the translations from core using the `gettext` filter. 1.- **Using `human_time_diff` filter**. Example: ``` add_filter( 'human_time_diff', function($since, $diff, $from, $to) { //Here you can build your own human time diff strings //For example if ( empty( $to ) ) { $to = time(); } $diff = (int) abs( $to - $from ); if ( $diff < HOUR_IN_SECONDS ) { $since = "WOW, a lot of time ago!!!!"; } return $since; }, 10, 4 ); ``` 2.- **Override the translations** using the `gettext` filter. For example: ``` add_filter( 'gettext', function($translated, $original, $domain) { if ( $original == "%s hour" ) { //fill $translated string with whatever you want $translated = "Some other string you want instead of original translation"; } return $translated; }, 10, 3 ); ```
Since your problem revolves around the fact that the WP theme developers used a function that doesn't give us any hooks to easily alter the output, you're going to have to copy & paste the following code in your functions.php file ``` function human_time_diff( $from, $to = '' ) { if ( empty( $to ) ) { $to = time(); } $diff = (int) abs( $to - $from ); if ( $diff < HOUR_IN_SECONDS ) { $mins = round( $diff / MINUTE_IN_SECONDS ); if ( $mins <= 1 ) $mins = 1; /* translators: min=minute */ $since = sprintf( _n( '%s min', '%s mins', $mins ), $mins ); } elseif ( $diff < DAY_IN_SECONDS && $diff >= HOUR_IN_SECONDS ) { $hours = round( $diff / HOUR_IN_SECONDS ); if ( $hours <= 1 ) $hours = 1; $since = sprintf( _n( '%s hour', '%s hours', $hours ), $hours ); } elseif ( $diff < WEEK_IN_SECONDS && $diff >= DAY_IN_SECONDS ) { $days = round( $diff / DAY_IN_SECONDS ); if ( $days <= 1 ) $days = 1; $since = sprintf( _n( '%s day', '%s days', $days ), $days ); } elseif ( $diff < 30 * DAY_IN_SECONDS && $diff >= WEEK_IN_SECONDS ) { $weeks = round( $diff / WEEK_IN_SECONDS ); if ( $weeks <= 1 ) $weeks = 1; $since = sprintf( _n( '%s week', '%s weeks', $weeks ), $weeks ); } elseif ( $diff < YEAR_IN_SECONDS && $diff >= 30 * DAY_IN_SECONDS ) { $months = round( $diff / ( 30 * DAY_IN_SECONDS ) ); if ( $months <= 1 ) $months = 1; $since = sprintf( _n( '%s month', '%s months', $months ), $months ); } elseif ( $diff >= YEAR_IN_SECONDS ) { $years = round( $diff / YEAR_IN_SECONDS ); if ( $years <= 1 ) $years = 1; $since = sprintf( _n( '%s year', '%s years', $years ), $years ); } /** * Filter the human readable difference between two timestamps. * * @since 4.0.0 * * @param string $since The difference in human readable text. * @param int $diff The difference in seconds. * @param int $from Unix timestamp from which the difference begins. * @param int $to Unix timestamp to end the time difference.
*/ return apply_filters( 'human_time_diff', $since, $diff, $from, $to ); } ``` > > '%s min', '%s mins' > > > '%s hour', '%s hours' > > > '%s day', '%s days' > > > '%s week', '%s weeks' > > > '%s month', '%s months' > > > These are the formats you need to change with your own wording; be sure to only change the words (minute, hour, day, week, month) and not the `%s` Hope this helps.
73,872,837
I am using axios to fetch and retrieve data from an API. I then set the API data to some state. When I save the changes, it shows the name from index [0] of the array, as I want. However, when I refresh the page, it throws an error "Cannot read properties of undefined (reading 'name')". It seems like I am losing the API data when I refresh; what am I doing wrong here? The API endpoint is made up, as I don't want to share the real endpoint here. ``` const [apiData, setApiData] = useState([]); useEffect(() => { axios .get("https://some-api") .then((res) => { setApiData(res.data); }) .catch((error) => { alert(error.message); }); }, []); return <h1>{apiData[0].name}</h1> ```
2022/09/27
[ "https://Stackoverflow.com/questions/73872837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19883046/" ]
You do not need to compute the friendship duration in advance, unless you need it for later. You can compute it on the fly as follows: ```js const friends = [ { name: "Michael", start: 1981, end: 2004 }, { name: "Joe", start: 1992, end: 2008 }, { name: "Sara", start: 1999, end: 2007 }, { name: "Marcel", start: 1989, end: 2010 }, ]; const sortedfriends = friends.sort( (a,b) => (a.end - a.start) - (b.end - b.start) ); console.log( sortedfriends ); ``` **Need Duration?** Just in case you need `friendshipDuration` for later, you can use `Array#map` to compute it, followed by sorting: ```js const friends = [ { name: "Michael", start: 1981, end: 2004 }, { name: "Joe", start: 1992, end: 2008 }, { name: "Sara", start: 1999, end: 2007 }, { name: "Marcel", start: 1989, end: 2010 }, ]; const sortedfriends = friends.map( f => ({...f,friendshipDuration:f.end - f.start}) ).sort( (a,b) => a.friendshipDuration - b.friendshipDuration ); console.log( sortedfriends ); ```
You need to calculate the duration for both c1 and c2 and then compare those two calculated values each time the comparison function is called. ```js const friends = [ { name: "Michael", start: 1981, end: 2004 }, { name: "Joe", start: 1992, end: 2008 }, { name: "Sara", start: 1999, end: 2007 }, { name: "Marcel", start: 1989, end: 2010 }, ]; const sortedfriends2 = friends.sort(function(c1, c2) { const c1FriendshipDuration = c1.end - c1.start; const c2FriendshipDuration = c2.end - c2.start; if (c1FriendshipDuration > c2FriendshipDuration) { return 1; } else if (c1FriendshipDuration < c2FriendshipDuration) { return -1; } else { return 0; } }); console.log(sortedfriends2); ```
55,820,297
I'm passing a users list as a prop to the UserItem component to iterate over the user list and display the users in a table. The list is displayed correctly and I don't have any divs in my render return, but I still get the error: index.js:1446 Warning: validateDOMNesting(...): `<div>` cannot appear as a child of `<tbody>`. I tried many solutions found online but none of them worked. UsersManagement code: ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code: ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` I expect to fix the warning message.
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
Most likely the component `Spinner` renders a `<div>` as the outermost node. Check the implementation of it. You implicitly render it inside `<tbody>` through the lines ``` <tbody> {usersList} </tbody> ``` where `usersList` defaults to `<Spinner />` when there are no users or `loading` is `true`. This is why you get the error. A fix would be to wrap the `Spinner` into a `td` that spans the whole row: ``` if (users === null || loading) { usersList = <tr><td colSpan="4"><Spinner /></td></tr>; } else { // ... } ```
I had the same problem. I had an external spinner component that I needed to inject. This is how I solved it: ``` <table className="table profit-withdrawal"> <thead className="thead-default"> <tr> <th>Nick Name</th> <th>Transfer to</th> <th>Details</th> <th>Actions</th> </tr> </thead> <tbody> <tr> <td> <DotsLoaderComponent variant="dark" dimension="large" /> </td> </tr> {this.renderTableData()} </tbody> </table> ``` So I wrapped my spinner component inside a: ``` <tr> <td>YOUR COMPONENT HERE</td> </tr> ```
55,820,297
I'm passing a users list as a prop to the UserItem component to iterate over the user list and display the users in a table. The list is displayed correctly and I don't have any divs in my render return, but I still get the error: index.js:1446 Warning: validateDOMNesting(...): `<div>` cannot appear as a child of `<tbody>`. I tried many solutions found online but none of them worked. UsersManagement code: ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code: ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
Most likely the component `Spinner` renders a `<div>` as the outermost node. Check the implementation of it. You implicitly render it inside `<tbody>` through the lines ``` <tbody> {usersList} </tbody> ``` where `usersList` defaults to `<Spinner />` when there are no users or `loading` is `true`. This is why you get the error. A fix would be to wrap the `Spinner` in a `tr`/`td` that spans the whole row: ``` if (users === null || loading) { usersList = <tr><td colSpan="4"><Spinner /></td></tr>; } else { // ... } ```
I saw an error similar to this: validateDOMNesting(...): cannot appear as a child of `<div>`. Replacing the `<body>` tag with a `<div>` fixed it for me.
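The table-nesting rule that triggers this warning can be sketched in plain JavaScript. The lookup table below is a simplified illustration covering only the table elements (it is not React's actual validateDOMNesting implementation, and the function name is made up for this sketch):

```javascript
// Simplified map of which children an HTML table element accepts.
// This is an illustrative subset of the HTML content model, not the full spec.
const allowedChildren = {
  table: ["caption", "colgroup", "thead", "tbody", "tfoot", "tr"],
  thead: ["tr"],
  tbody: ["tr"],
  tfoot: ["tr"],
  tr: ["td", "th"],
};

// Returns true when `child` may appear directly inside `parent`.
// Parents not listed above are treated as unrestricted in this sketch.
function isValidNesting(parent, child) {
  const allowed = allowedChildren[parent];
  return allowed ? allowed.includes(child) : true;
}

console.log(isValidNesting("tbody", "tr"));  // true
console.log(isValidNesting("tbody", "div")); // false -> this is the warning case
```

This is why a `Spinner` whose outermost node is a `<div>` cannot be rendered directly inside `<tbody>`: it has to be wrapped in a `<tr><td>` first.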
55,820,297
im passing users list as a props to UserItem Component to make iterate on user list and displaying them on table. the list is displayed correctly and i dont have any divs in my render return but i still get the error : index.js:1446 Warning: validateDOMNesting(...): cannot appear as a child of . tried many solutions found online but none of them worked UsersManagement code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
Most likely the component `Spinner` renders a `<div>` as the outermost node. Check the implementation of it. You implicitly render it inside `<tbody>` through the lines ``` <tbody> {usersList} </tbody> ``` where `usersList` defaults to `<Spinner />` when there are no users or `loading` is `true`. This is why you get the error. A fix would be to wrap the `Spinner` in a `tr`/`td` that spans the whole row: ``` if (users === null || loading) { usersList = <tr><td colSpan="4"><Spinner /></td></tr>; } else { // ... } ```
It may not be your case, but it may be someone else's. It's kind of embarrassing but I had this problem because I was importing the wrong component: ``` import Table from 'react-bootstrap/Col'; <-- Is a Col not a Table ``` Check if the imports are correct
55,820,297
im passing users list as a props to UserItem Component to make iterate on user list and displaying them on table. the list is displayed correctly and i dont have any divs in my render return but i still get the error : index.js:1446 Warning: validateDOMNesting(...): cannot appear as a child of . tried many solutions found online but none of them worked UsersManagement code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
Most likely the component `Spinner` renders a `<div>` as the outermost node. Check the implementation of it. You implicitly render it inside `<tbody>` through the lines ``` <tbody> {usersList} </tbody> ``` where `usersList` defaults to `<Spinner />` when there are no users or `loading` is `true`. This is why you get the error. A fix would be to wrap the `Spinner` in a `tr`/`td` that spans the whole row: ``` if (users === null || loading) { usersList = <tr><td colSpan="4"><Spinner /></td></tr>; } else { // ... } ```
In case someone gets this error with a render like: ``` ... {usersData ? ( Object.values(usersData)?.map((user: any) => ( <div key={user?.id}> <tr> <td>{user?.id}1</td> <td>2</td> <td>3</td> </tr> </div> )) ) : ( <Box> {' '} <tr> <td>1</td> <td>2</td> <td>3</td> </tr> </Box> )} ... ``` there are two mistakes here: `Box` renders a `div`, and `tbody` accepts only `tr`/`td` children, so remove the `div`/`Box` from the render, like this: ``` <table className="table-admin"> <thead> <tr> <th className="th-a">ID</th> <th className="th-a">EMAIL</th> <th className="th-a">NAME</th> <th className="th-a"></th> </tr> </thead> <tbody> {usersData ? ( Object.values(usersData)?.map((user: any) => ( <tr key={user?.id}> <td>{user?.id}</td> <td>{user?.email}</td> <td>{user?.name}</td> </tr> )) ) : ( <tr> <td>N/A</td> </tr> )} </tbody> </table> ```
55,820,297
im passing users list as a props to UserItem Component to make iterate on user list and displaying them on table. the list is displayed correctly and i dont have any divs in my render return but i still get the error : index.js:1446 Warning: validateDOMNesting(...): cannot appear as a child of . tried many solutions found online but none of them worked UsersManagement code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
I had the same problem. I had an external spinner component that I needed to inject. This is how I solved it: ``` <table className="table profit-withdrawal"> <thead className="thead-default"> <tr> <th>Nick Name</th> <th>Transfer to</th> <th>Details</th> <th>Actions</th> </tr> </thead> <tbody> <tr> <td> <DotsLoaderComponent variant="dark" dimension="large" /> </td> </tr> {this.renderTableData()} </tbody> </table> ``` So I wrapped my spinner component inside a: ``` <tr> <td>YOUR COMPONENT HERE</td> </tr> ```
It may not be your case, but it may be someone else's. It's kind of embarrassing but I had this problem because I was importing the wrong component: ``` import Table from 'react-bootstrap/Col'; <-- Is a Col not a Table ``` Check if the imports are correct
55,820,297
im passing users list as a props to UserItem Component to make iterate on user list and displaying them on table. the list is displayed correctly and i dont have any divs in my render return but i still get the error : index.js:1446 Warning: validateDOMNesting(...): cannot appear as a child of . tried many solutions found online but none of them worked UsersManagement code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
I had the same problem. I had an external spinner component that I needed to inject. This is how I solved it: ``` <table className="table profit-withdrawal"> <thead className="thead-default"> <tr> <th>Nick Name</th> <th>Transfer to</th> <th>Details</th> <th>Actions</th> </tr> </thead> <tbody> <tr> <td> <DotsLoaderComponent variant="dark" dimension="large" /> </td> </tr> {this.renderTableData()} </tbody> </table> ``` So I wrapped my spinner component inside a: ``` <tr> <td>YOUR COMPONENT HERE</td> </tr> ```
In case someone gets this error with a render like: ``` ... {usersData ? ( Object.values(usersData)?.map((user: any) => ( <div key={user?.id}> <tr> <td>{user?.id}1</td> <td>2</td> <td>3</td> </tr> </div> )) ) : ( <Box> {' '} <tr> <td>1</td> <td>2</td> <td>3</td> </tr> </Box> )} ... ``` there are two mistakes here: `Box` renders a `div`, and `tbody` accepts only `tr`/`td` children, so remove the `div`/`Box` from the render, like this: ``` <table className="table-admin"> <thead> <tr> <th className="th-a">ID</th> <th className="th-a">EMAIL</th> <th className="th-a">NAME</th> <th className="th-a"></th> </tr> </thead> <tbody> {usersData ? ( Object.values(usersData)?.map((user: any) => ( <tr key={user?.id}> <td>{user?.id}</td> <td>{user?.email}</td> <td>{user?.name}</td> </tr> )) ) : ( <tr> <td>N/A</td> </tr> )} </tbody> </table> ```
55,820,297
im passing users list as a props to UserItem Component to make iterate on user list and displaying them on table. the list is displayed correctly and i dont have any divs in my render return but i still get the error : index.js:1446 Warning: validateDOMNesting(...): cannot appear as a child of . tried many solutions found online but none of them worked UsersManagement code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
I saw an error similar to this: validateDOMNesting(...): cannot appear as a child of `<div>`. Replacing the `<body>` tag with a `<div>` fixed it for me.
It may not be your case, but it may be someone else's. It's kind of embarrassing but I had this problem because I was importing the wrong component: ``` import Table from 'react-bootstrap/Col'; <-- Is a Col not a Table ``` Check if the imports are correct
55,820,297
im passing users list as a props to UserItem Component to make iterate on user list and displaying them on table. the list is displayed correctly and i dont have any divs in my render return but i still get the error : index.js:1446 Warning: validateDOMNesting(...): cannot appear as a child of . tried many solutions found online but none of them worked UsersManagement code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; import { connect } from 'react-redux'; import Spinner from './common/Spinner'; import { getUsers } from '../actions/userActions'; import UserItem from './UserItem'; class UsersManagement extends Component { componentDidMount() { if (!this.props.auth.isAuthenticated) { this.props.history.push('/login'); } this.props.getUsers(); } render() { const { users, loading } = this.props.user; let usersList; if (users === null || loading) { usersList = <Spinner /> } else { if (users.length > 0) { usersList = users.map(user => ( <UserItem key={user._id} user={user} /> )) } else { usersList = <h2>No users</h2> } } return ( <div className="row"> <div className="col-12"> <h1 className="text-center mb-2">Users Management</h1> <button type="button" className="btn btn-success mb-4">New User</button> <table className="table"> <thead> <tr> <th scope="col">Options</th> <th scope="col">Username</th> <th scope="col">Email</th> <th scope="col">Phone Number</th> </tr> </thead> <tbody> {usersList} </tbody> </table> </div> </div> ) } } UsersManagement.propTypes = { getUsers: PropTypes.func.isRequired, auth: PropTypes.object.isRequired, user: PropTypes.object.isRequired } const mapStateToProps = state => ({ auth: state.auth, user: state.user }) export default connect(mapStateToProps, { getUsers })(UsersManagement); ``` UserItem code : ``` import React, { Component } from 'react'; import PropTypes from 'prop-types'; class UserItem extends Component { render() { const { user } = this.props; console.log(user); return ( <tr> <th scope="row"> 
<button type="button" className="btn btn-primary fa-xs mr-1"><i className="fas fa-pencil-alt"></i></button> <button type="button" className="btn btn-danger fa-xs"><i className="far fa-trash-alt"></i></button> </th> <td>{user.username}</td> <td>{user.email}</td> <td>{user.phone}</td> </tr> ) } } UserItem.propTypes = { user: PropTypes.object.isRequired } export default UserItem; ``` i expect to to fix the warning message
2019/04/23
[ "https://Stackoverflow.com/questions/55820297", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389034/" ]
I saw an error similar to this: validateDOMNesting(...): cannot appear as a child of `<div>`. Replacing the `<body>` tag with a `<div>` fixed it for me.
In case someone gets this error with a render like: ``` ... {usersData ? ( Object.values(usersData)?.map((user: any) => ( <div key={user?.id}> <tr> <td>{user?.id}1</td> <td>2</td> <td>3</td> </tr> </div> )) ) : ( <Box> {' '} <tr> <td>1</td> <td>2</td> <td>3</td> </tr> </Box> )} ... ``` there are two mistakes here: `Box` renders a `div`, and `tbody` accepts only `tr`/`td` children, so remove the `div`/`Box` from the render, like this: ``` <table className="table-admin"> <thead> <tr> <th className="th-a">ID</th> <th className="th-a">EMAIL</th> <th className="th-a">NAME</th> <th className="th-a"></th> </tr> </thead> <tbody> {usersData ? ( Object.values(usersData)?.map((user: any) => ( <tr key={user?.id}> <td>{user?.id}</td> <td>{user?.email}</td> <td>{user?.name}</td> </tr> )) ) : ( <tr> <td>N/A</td> </tr> )} </tbody> </table> ```
62,417,963
I have a question about how to make an iteration. I want to place a total row after each item in the array if the next element in the array matches a specific condition. The specific condition has logic like this, and the data looks like this: [![enter image description here](https://i.stack.imgur.com/kjxAn.png)](https://i.stack.imgur.com/kjxAn.png) If I request a qty, for example = 60, the result I hope for is like this. You can see > > `data[2]` = 01/03/2020 just took 10 out of 40 > > > [![enter image description here](https://i.stack.imgur.com/6Mfg1.png)](https://i.stack.imgur.com/6Mfg1.png) ``` $iter = new \ArrayIterator($values); $sum = 0; foreach($values as $key => $value) { $nextValue = $iter->current(); $iter->next(); $nextKey = $iter->key(); if(condition) { $sum += $value; } } dd($iter); ``` --- [![enter image description here](https://i.stack.imgur.com/SZ2eM.png)](https://i.stack.imgur.com/SZ2eM.png) How can I make this logic work in PHP/Laravel?
2020/06/16
[ "https://Stackoverflow.com/questions/62417963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10498511/" ]
The following logic might help you on your way: ``` <?php $stock = [ '01/01/2020' => 20, '01/02/2020' => 30, '01/03/2020' => 40 ]; showStatus($stock, 'in stock - before transaction'); $demand = 60; foreach ($stock as $key => $value) { if ($value <= $demand) { $stock[$key] = 0; $supplied[$key] = $value; $demand -= $value; } else { $stock[$key] -= $demand; $supplied[$key] = $value - ($value - $demand); $demand = 0; } } showStatus($supplied, 'supplied'); showStatus($stock, 'in stock - after transaction'); function showStatus($arr = [], $msg = '') { echo $msg; echo '<pre>'; print_r($arr); echo '</pre>'; } ?> ``` **Output:** ``` in stock - before transaction Array ( [01/01/2020] => 20 [01/02/2020] => 30 [01/03/2020] => 40 ) supplied Array ( [01/01/2020] => 20 [01/02/2020] => 30 [01/03/2020] => 10 ) in stock - after transaction Array ( [01/01/2020] => 0 [01/02/2020] => 0 [01/03/2020] => 30 ) ``` Working [demo](https://3v4l.org/v8rjF)
I'm not sure I've understood you correctly, but this might help: ``` $values = [ '01/01/2020' => 20, '01/02/2020' => 30, '01/03/2020' => 40 ]; $demand = 60; $total = array_sum($values); $decrease = $total - $demand; // (20+30+40) - 60 = 30 $last_key = array_keys($values, end($values))[0]; // Is 01/03/2020 in this case $values[$last_key] -= $decrease; // Decrease the value by the 30 calculated above ``` **Would output:** ``` Array ( [01/01/2020] => 20 [01/02/2020] => 30 [01/03/2020] => 10 ) ```
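The first answer's date-by-date greedy allocation (take from each day's stock, in order, until the requested quantity is covered) can also be sketched in plain JavaScript. This is a port of the PHP logic above with the same sample data; the function and variable names are my own, not from the question:

```javascript
// Greedy allocation: walk the stock in date order and take from each
// entry until the requested quantity is satisfied.
function allocate(stock, demand) {
  const supplied = {};
  const remaining = { ...stock };
  for (const [date, qty] of Object.entries(stock)) {
    if (demand <= 0) {
      supplied[date] = 0; // nothing left to take from later dates
      continue;
    }
    const take = Math.min(qty, demand); // take at most what this date holds
    supplied[date] = take;
    remaining[date] = qty - take;
    demand -= take;
  }
  return { supplied, remaining };
}

const { supplied, remaining } = allocate(
  { "01/01/2020": 20, "01/02/2020": 30, "01/03/2020": 40 },
  60
);
console.log(supplied);  // 20, 30, and 10 taken per date
console.log(remaining); // 0, 0, and 30 left per date
```

Like the PHP version, this leaves the last touched date partially consumed (10 out of 40 in the example) and every later date untouched.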
7,234
Can anyone help me identify what LEGO set this is from? My son has it half built and we cannot find the instructions and the bricks have been mixed with his other LEGO. [![enter image description here](https://i.stack.imgur.com/MY9dC.jpg)](https://i.stack.imgur.com/MY9dC.jpg)
2016/01/16
[ "https://bricks.stackexchange.com/questions/7234", "https://bricks.stackexchange.com", "https://bricks.stackexchange.com/users/6723/" ]
Based on the different shades of gray and the odd placement of the ball joint, I believe this is not a LEGO set. It's more likely something your son made out of his imagination and other LEGO pieces.
I personally don't think this is LEGO. I've seen some of the elements in the pictures in Kre-O battleship sets. Perhaps you can look into that?
28,018,253
I've been trying to make this work for 2 days now and have read many examples and stack overflow questions. I'm new to html and css, this is my 3rd day. Any tips or insights into what I'm doing wrong would be much appreciated, as well as general comments or criticisms. I'm trying to make the blue pane on the left side extend the entire length of the page, not just the height of the browser's viewport. [Here's a jsfiddle](http://jsfiddle.net/m0t9ftrx/) Thank you ``` #leftPane { position: absolute; width: 200px; margin:0px; padding:0px; height: 100%; background-color: #0e365b; } #rightPane { position: absolute; top:0px; left:200px; margin:0px; padding:0px; background-color: #EEEEEE; width: 900px; height: 100%; } ```
2015/01/19
[ "https://Stackoverflow.com/questions/28018253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902625/" ]
You need a `container`, otherwise `leftpane` wouldn't know how much the `rightpane` expands. So with `container` in place, the `container` will expand with `rightpane` and since `leftpane` is child object of `container` it gets the `height` of it when set to `100%` with some appropriate positioning. ```css /* Lightest Blue: #4096e5 (nav boxes) Darker Blue: #195e9f (nav boxes roll over) Darker Blue: #043d71 (children roll over) Darkest Blue: #0e365b (left pane) Off-white: #EEEEEE (right pane background) */ * { font-family: "Helvetica", sans-serif; } body { margin: 0px; padding: 0px; background-color: #EEEEEE; } #container { position: relative; height: auto; } #leftPane { color: white; text-align: justify; } #leftPane > p { margin: 10px; } #leftPane { position: absolute; width: 200px; margin: 0px; padding: 0px; height: 100%; background-color: #0e365b; } #leftPane h1, h2, h3 { text-align: center; margin: 10px; padding: 0px; text-shadow: 5px 5px 10px #000; } #rightPane { position: relative; top: 0px; left: 200px; margin: 0px; padding: 0px; background-color: #EEEEEE; width: 900px; height: 100%; } #header { position: absolute; top: 0px; left: 0px; height: 40px; } #nav { list-style: none; overflow: hidden; z-index: 5; } #nav li, ul { float: left; margin: 0px; padding: 0px; } #nav a:link, a:visited, .top { display: block; color: #FFFFFF; background-color: #4096e5; padding: 5px; text-decoration: none; text-shadow: 1px 1px 1px #000; } #nav a.top:link, a.top:visited, .top { width: 180px; font-weight: bold; text-align: center; } #nav a.sub:link, a.sub:visited { text-align: left; } #nav .divider { border-right: 1px black dashed; } #nav a:hover, span:hover { background-color: #195e9f; text-decoration: none; } #nav ul li a:hover { background-color: #043d71; } #nav ul { list-style: none; position: absolute; left: -9999px; } #nav ul li { background: #195e9f; float: none; border-top: 1px black dashed; } #nav ul a { white-space: nowrap; } #nav li:hover ul { left: 0px; z-index: 5; 
position: absolute; width: 100%; } #nav li:hover a, li:hover span { /* These create persistent hover states, meaning the top-most link stays 'hovered' even when your cursor has moved down the list. */ background: #195e9f; } #nav li:hover ul a { /* The persistent hover state does however create a global style for links even before they're hovered. Here we undo these effects. */ text-decoration: none; } #mainContent { position: relative; left: 0px; top: 40px; margin: 10px; } ``` ```html <body> <div id="container"> <div id="leftPane"> <h2>Title<br>Section</h2> <p>some more text, this should wrap.</p> </div> <div id="rightPane"> <div id="header"> <ul id="nav"> <li> <span class="top divider">Projects</span> <ul> <li><a class="sub lastA" href="#">One</a> </li> <li><a class="sub lastA" href="#">Two</a> </li> <li><a class="sub lastA" href="#section3">Three</a> </li> <li><a class="sub lastA" href="#">Four</a> </li> </ul> </li> <li> <span class="top divider">Examples</span> <ul> <li><a class="sub lastA" href="#">Data Structures</a> </li> </ul> </li> <li><a class="top" href="#">Blog</a> </li> </ul> </div> <div id="mainContent"> There are a few interesting things here that should be noted. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <a name="section3">Begin Section 3</a> <br>. <br>. <br>. <br>. <br>. <br>. <br>. 
<br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. </div> </div> </div> </body> ```
Refer here: [here](https://stackoverflow.com/questions/5671012/extend-div-height-to-entire-webpage) ``` <div><!-- make a <div> to hold everything in --> <div style="width:125px;height:100%;">blah blah blah</div> <div style="height:100%;">blah blah blah</div> </div> ```
28,018,253
I've been trying to make this work for 2 days now and have read many examples and stack overflow questions. I'm new to html and css, this is my 3rd day. Any tips or insights into what I'm doing wrong would be much appreciated, as well as general comments or criticisms. I'm trying to make the blue pane on the left side extend the entire length of the page, not just the height of the browser's viewport. [Here's a jsfiddle](http://jsfiddle.net/m0t9ftrx/) Thank you ``` #leftPane { position: absolute; width: 200px; margin:0px; padding:0px; height: 100%; background-color: #0e365b; } #rightPane { position: absolute; top:0px; left:200px; margin:0px; padding:0px; background-color: #EEEEEE; width: 900px; height: 100%; } ```
2015/01/19
[ "https://Stackoverflow.com/questions/28018253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902625/" ]
Well the problem is, all your elements (leftpane, rightpane, header and mainContent) are positioned *absolute*. **Absolutely positioned elements are removed from the normal flow** and positioned relative to the first parent element that has a position other than *static* (in this case, "html"). [hint: open firebug and check *layout* of your page. "body" is not getting any height.] ![enter image description here](https://i.stack.imgur.com/1EAKu.png) That's why providing height:100% gives the default *height of the container* and not the *full height of the document*. **Solution:** As @Prachit answered above, a legitimate fix would be to *enclose your absolutely positioned elements in a relatively positioned container*, and since **relatively positioned elements are positioned relative to their normal position**, * it'll be preserved in the normal flow. So, giving height:100% would now provide the height of the document and not of the container itself. * The (absolutely positioned) leftpane would now know how much height rightpane is getting [since both of them would be taking the height of their parent container]. You may get more insight on positioning from [this](http://www.barelyfitz.com/screencast/html-training/css/positioning/) tutorial. cheers!
Refer [here](https://stackoverflow.com/questions/5671012/extend-div-height-to-entire-webpage) for a related question. ``` <div><!-- make a <div> to hold everything in.. --> <div style="width:125px;height:100%;">blah blah blah</div> <div style="height:100%;">blah blah blah</div> </div> ```
28,018,253
I've been trying to make this work for 2 days now and have read many examples and stack overflow questions. I'm new to html and css, this is my 3rd day. Any tips or insights into what I'm doing wrong would be much appreciated, as well as general comments or criticisms. I'm trying to make the blue pane on the left side extend the entire length of the page, not just the height of the browser's viewport. [Here's a jsfiddle](http://jsfiddle.net/m0t9ftrx/) Thank you ``` #leftPane { position: absolute; width: 200px; margin:0px; padding:0px; height: 100%; background-color: #0e365b; } #rightPane { position: absolute; top:0px; left:200px; margin:0px; padding:0px; background-color: #EEEEEE; width: 900px; height: 100%; } ```
2015/01/19
[ "https://Stackoverflow.com/questions/28018253", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1902625/" ]
You need a `container`, otherwise `leftpane` wouldn't know how much the `rightpane` expands. So with `container` in place, the `container` will expand with `rightpane` and since `leftpane` is child object of `container` it gets the `height` of it when set to `100%` with some appropriate positioning. ```css /* Lightest Blue: #4096e5 (nav boxes) Darker Blue: #195e9f (nav boxes roll over) Darker Blue: #043d71 (children roll over) Darkest Blue: #0e365b (left pane) Off-white: #EEEEEE (right pane background) */ * { font-family: "Helvetica", sans-serif; } body { margin: 0px; padding: 0px; background-color: #EEEEEE; } #container { position: relative; height: auto; } #leftPane { color: white; text-align: justify; } #leftPane > p { margin: 10px; } #leftPane { position: absolute; width: 200px; margin: 0px; padding: 0px; height: 100%; background-color: #0e365b; } #leftPane h1, h2, h3 { text-align: center; margin: 10px; padding: 0px; text-shadow: 5px 5px 10px #000; } #rightPane { position: relative; top: 0px; left: 200px; margin: 0px; padding: 0px; background-color: #EEEEEE; width: 900px; height: 100%; } #header { position: absolute; top: 0px; left: 0px; height: 40px; } #nav { list-style: none; overflow: hidden; z-index: 5; } #nav li, ul { float: left; margin: 0px; padding: 0px; } #nav a:link, a:visited, .top { display: block; color: #FFFFFF; background-color: #4096e5; padding: 5px; text-decoration: none; text-shadow: 1px 1px 1px #000; } #nav a.top:link, a.top:visited, .top { width: 180px; font-weight: bold; text-align: center; } #nav a.sub:link, a.sub:visited { text-align: left; } #nav .divider { border-right: 1px black dashed; } #nav a:hover, span:hover { background-color: #195e9f; text-decoration: none; } #nav ul li a:hover { background-color: #043d71; } #nav ul { list-style: none; position: absolute; left: -9999px; } #nav ul li { background: #195e9f; float: none; border-top: 1px black dashed; } #nav ul a { white-space: nowrap; } #nav li:hover ul { left: 0px; z-index: 5; 
position: absolute; width: 100%; } #nav li:hover a, li:hover span { /* These create persistent hover states, meaning the top-most link stays 'hovered' even when your cursor has moved down the list. */ background: #195e9f; } #nav li:hover ul a { /* The persistent hover state does however create a global style for links even before they're hovered. Here we undo these effects. */ text-decoration: none; } #mainContent { position: relative; left: 0px; top: 40px; margin: 10px; } ``` ```html <body> <div id="container"> <div id="leftPane"> <h2>Title<br>Section</h2> <p>some more text, this should wrap.</p> </div> <div id="rightPane"> <div id="header"> <ul id="nav"> <li> <span class="top divider">Projects</span> <ul> <li><a class="sub lastA" href="#">One</a> </li> <li><a class="sub lastA" href="#">Two</a> </li> <li><a class="sub lastA" href="#section3">Three</a> </li> <li><a class="sub lastA" href="#">Four</a> </li> </ul> </li> <li> <span class="top divider">Examples</span> <ul> <li><a class="sub lastA" href="#">Data Structures</a> </li> </ul> </li> <li><a class="top" href="#">Blog</a> </li> </ul> </div> <div id="mainContent"> There are a few interesting things here that should be noted. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <a name="section3">Begin Section 3</a> <br>. <br>. <br>. <br>. <br>. <br>. <br>. 
<br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. <br>. </div> </div> </div> </body> ```
Well the problem is, all your elements (leftpane, rightpane, header and mainContent) are positioned *absolute*. **Absolutely positioned elements are removed from the normal flow** and positioned relative to the first parent element that has a position other than *static* (in this case, "html"). [hint: open firebug and check *layout* of your page. "body" is not getting any height.] ![enter image description here](https://i.stack.imgur.com/1EAKu.png) That's why providing height:100% gives the default *height of the container* and not the *full height of the document*. **Solution:** As @Prachit answered above, a legitimate fix would be to *enclose your absolutely positioned elements in a relatively positioned container*, and since **relatively positioned elements are positioned relative to their normal position**, * it'll be preserved in the normal flow. So, giving height:100% would now provide the height of the document and not of the container itself. * The (absolutely positioned) leftpane would now know how much height rightpane is getting [since both of them would be taking the height of their parent container]. You may get more insight on positioning from [this](http://www.barelyfitz.com/screencast/html-training/css/positioning/) tutorial. cheers!
46,933,951
We are using Google Cloud SQL for our project but are facing some administrative issues around it. In our cloud DB we have two users: root@% (any host) and root@182.68.122.202. Now we need "SUPER" access on these two users to perform some admin tasks, like modifying the variable 'max\_allowed\_packet' to a higher limit and other related tuning to optimize our setup. For example, I want to execute the command: SET GLOBAL max\_allowed\_packet=32\*1024\*1024; But I couldn't find a way from the Google Cloud Console or from MySQL itself to get it done, as I am getting an error: "SQL Error (1227): Access denied; you need (at least one of) the SUPER privilege(s) for this operation." I have even tried a hack of making direct changes in the "mysql.user" table (setting the SUPER privilege to YES), but all futile. Can you please let me know how I can perform these tasks on my DB? What is the way to grant this SUPER access to the desired users?
2017/10/25
[ "https://Stackoverflow.com/questions/46933951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2935602/" ]
According to Google’s [Cloud SQL documentation](https://cloud.google.com/sql/docs/mysql/users#root-user), the root user has all privileges except SUPER and FILE. It’s a characteristic of Google Cloud SQL. But rest assured! There are other ways of easily changing the global variable that has been slowing your progress. There’s a neat explanation in [Configuring database flags](https://cloud.google.com/sql/docs/mysql/flags#determining_what_database_flags_have_been_set_for_an_instance) that will guide you. I hope this helps others!!!
You can edit your running instance. From the left menu in the SQL section: 1. view your instance's details by clicking on the concerned line 2. click the "Edit" button; in the configuration block, under "add database flags", add a new item by choosing from the defined list 3. don't forget to save your new flag by hitting the "save" button
46,933,951
We are using Google Cloud SQL for our project but are facing some administrative issues around it. In our cloud DB we have two users: root@% (any host) and root@182.68.122.202. Now we need "SUPER" access on these two users to perform some admin tasks, like modifying the variable 'max\_allowed\_packet' to a higher limit and other related tuning to optimize our setup. For example, I want to execute the command: SET GLOBAL max\_allowed\_packet=32\*1024\*1024; But I couldn't find a way from the Google Cloud Console or from MySQL itself to get it done, as I am getting an error: "SQL Error (1227): Access denied; you need (at least one of) the SUPER privilege(s) for this operation." I have even tried a hack of making direct changes in the "mysql.user" table (setting the SUPER privilege to YES), but all futile. Can you please let me know how I can perform these tasks on my DB? What is the way to grant this SUPER access to the desired users?
2017/10/25
[ "https://Stackoverflow.com/questions/46933951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2935602/" ]
Google Cloud SQL does not support SUPER privileges, which means that GRANT ALL PRIVILEGES statements will not work. As an alternative, you can use GRANT ALL ON `%`.\*.
You can edit your running instance. From the left menu in the SQL section: 1. view your instance's details by clicking on the concerned line 2. click the "Edit" button; in the configuration block, under "add database flags", add a new item by choosing from the defined list 3. don't forget to save your new flag by hitting the "save" button
31,752,034
When Oracle compiles a stored procedure, it stores the AST for the procedure in DIANA format. * how can I access this AST? * are there built-in tools for processing this AST?
2015/07/31
[ "https://Stackoverflow.com/questions/31752034", "https://Stackoverflow.com", "https://Stackoverflow.com/users/116/" ]
There is an undocumented package DUMPDIANA that is meant to dump the Diana in a human-readable format. The file $ORACLE\_HOME\rdbms\admin\dumpdian.sql says "Documentation is available in /vobs/plsql/notes/dumpdiana.txt". I cannot find that file, and without it we can only guess at the meaning of some parameters. Basic usage of DUMPDIANA is as follows: ``` SQL> show user USER is "SYS" SQL> @?\rdbms\admin\dumpdian Library created. Package created. Package body created. create or replace procedure hello_world 2 as 3 begin 4 dbms_output.put_line('hello world'); 5* end; Procedure created. SQL> set serveroutput on SQL> execute sys.DUMPDIANA.dump('HELLO_WORLD'); user: SYS PL/SQL procedure successfully completed. ``` At this point a pair of files should have been created in the folder `$ORACLE_BASE/diag/rdbms/orcl12c/orcl12c/trace`. The two files seem to follow the naming convention: ``` orcl12c_ora_{PROCESS}.trc orcl12c_ora_{PROCESS}.trm ``` Where the trc file is a human-readable version of the corresponding trm file, and {PROCESS} is the operating system process ID. To find this, use the following query from the same session: ``` select p.spid from v$session s,v$process p where s.paddr = p.addr and s.sid = sys_context('USERENV','SID'); ``` For example, if the process ID was 8861 then from a bash shell you can view the results using: ``` vim $ORACLE_BASE/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_ora_8861.trc ``` The result is interesting... if not particularly intuitive! For example here is a snippet of the file produced. Note the HELLO\_WORLD string literal. ``` PD1(2):D_COMP_U [ L_SRCPOS : row 1 col 1 A_CONTEX : PD2(2): D_CONTEX [ L_SRCPOS : row 1 col 1 AS_LIST : < > ] A_UNIT_B : PD3(2): D_S_BODY [ L_SRCPOS : row 1 col 1 A_D_ : PD4(2): DI_PROC [ L_SRCPOS : row 1 col 11 L_SYMREP : HELLO_WORLD, S_SPEC : PD5^(2), S_BODY : PD8^(2), ``` A couple of notes.
I've run this as SYS which, as we know, is not good practice, but there is no reason I know of why you shouldn't grant privileges on DUMPDIANA to a normal user. All the procedures you dump go into the same file - if you delete that file, it stops working, and you'll need to start a new session. If it stops working, starting a new session sometimes seems to fix the problem.
Here is an excellent tutorial on DIANA and IDL in the PDF [How to unwrap PL/SQL](https://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Finnigan.pdf) by Pete Finnigan, principal consultant at Siemens at the time of writing, specializing in researching and securing Oracle databases. Among other very interesting things you will learn: * DIANA is written down as IDL (Interface Definition Language). * The 4 tables the IDL is stored in (IDL\_CHAR$, IDL\_SB4$, IDL\_UB1$ and IDL\_UB2$). * Wrapped PL/SQL is simply DIANA written down as IDL. * Dumpdiana is not installed by default; you need to ensure the DIANA, PIDL, and DIUTIL PL/SQL packages are installed as well, and you need to run it as SYS. * How to dump the DIANA tree and understand it. * How to reconstruct the PL/SQL source from DIANA. * How to write a PL/SQL un-wrapper. * Limitations of a PL/SQL API based un-wrapper. * Limitations of the PL/SQL API itself. * How to enumerate DIANA nodes and attributes. * A proof-of-concept un-wrapper. You can find [his website here](http://www.petefinnigan.com/). There is so much content there. You will find [awesome papers about Oracle security](http://www.petefinnigan.com/orasec.htm) and also a lot of [useful security tools](http://www.petefinnigan.com/tools.htm) developed not only by him but by other authors as well. Best of all, you can get in touch with him if after reading you still have questions.
1,175,799
There is a mean called the harmonic mean. <http://dlmf.nist.gov/1.2#E19> I mostly see usage of the arithmetic mean and the geometric mean. On the other hand, I have never seen the harmonic mean used yet. In what kind of cases is the harmonic mean used?
2015/03/04
[ "https://math.stackexchange.com/questions/1175799", "https://math.stackexchange.com", "https://math.stackexchange.com/users/110621/" ]
You go on a round trip. You go out at $50$ miles per hour and come back at $60$ miles per hour. The average speed of your whole trip will be the harmonic mean of $50$ and $60$ (mph). You can extend this to several legs of a trip all the same length. Your overall average speed will be the harmonic mean of the speeds on the various legs of the trip.
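To make the arithmetic concrete, here is a small numeric check of the round-trip claim (just a sketch; the 50/60 mph figures are the example values above, and `harmonic_mean` is a hand-rolled helper, not a library function):

```python
def harmonic_mean(values):
    # Harmonic mean: n divided by the sum of reciprocals.
    return len(values) / sum(1.0 / v for v in values)

# Round trip: a distance d out at 50 mph, and the same d back at 60 mph.
d = 300.0                       # any distance works; it cancels out
total_time = d / 50 + d / 60    # hours spent on the two legs
avg_speed = 2 * d / total_time  # total distance / total time

print(avg_speed)                # 600/11, about 54.55 mph
print(harmonic_mean([50, 60]))  # the same value
```

Note that the naive arithmetic mean (55 mph) overestimates the true average, because more time is spent on the slower leg.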
The harmonic mean usually appears when averaging speed (or rates in general). See this question as a reference: <https://stackoverflow.com/questions/34794664/how-should-i-calculate-the-average-speed-by-road-segment-for-multiple-segments/34795821#34795821>
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
You can set up a system-wide key binding file that will apply to all Cocoa apps. To do what you want it should look like this: In your home folder, Library/KeyBindings/DefaultKeyBinding.dict ``` { "^D" = ( "moveToBeginningOfLine:", "deleteToEndOfLine:", ); } ``` I believe if you only want it to apply to Xcode you can name the file `PBKeyBinding.dict` instead but I didn't try that myself. You can read more about this system [here](https://developer.apple.com/library/mac/documentation/cocoa/conceptual/eventoverview/TextDefaultsBindings/TextDefaultsBindings.html) and [here](https://web.archive.org/web/20090826052401/http://www.erasetotheleft.com/post/mac-os-x-key-bindings).
I was looking for a solution to this, and I tried Ashley Clark's, but it turns out there's an easier option using an included User Script called Delete Line. * Open the weird menu to the left of 'help' that looks like a scroll. * Choose Edit User Scripts... * Click the Key Bindings tab * Expand the Text section * Double-click the ⌘ column next to 'Delete Line' and type your hotkey. It may warn you that you stole it from some other command but that's fine. Done! You can do the same for Move Line Up and Move Line Down if you're an Eclipse junkie like me.
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
``` <key>Custom Keyword Set</key> <dict> <key>Delete Current Line In One Hit</key> <string>moveToEndOfLine:, deleteToBeginningOfLine:, deleteToEndOfParagraph:</string> </dict> ``` I suggest creating your customized dictionary in the file **IDETextKeyBindingSet.plist**. So: * close Xcode; * open Terminal; * sudo nano /Applications/Xcode.app/Contents/Frameworks/IDEKit.framework/Resources/IDETextKeyBindingSet.plist * add a new custom section, for instance the code above; * save, exit and open Xcode; * [Xcode > Preferences > Key Bindings] * search “Delete..” and create the new shortcut.
For Xcode 9.0 (beta), inserting a customized key dictionary into IDETextKeyBindingSet.plist works fine for me. You need to restart Xcode if it is already open, and after the next launch you will find the new customized shortcuts under the Key Bindings menu. ``` <key>Customized</key> <dict> <key>Delete Rest Of Line</key> <string>deleteToEndOfLine:</string> <key>Delete Line</key> <string>moveToBeginningOfLine:, deleteToEndOfLine:</string> <key>Duplicate Current Line</key> <string>selectLine:, copy:, moveToEndOfLine:, insertNewline:, moveToBeginningOfLine:, paste:</string> </dict> ```
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
As I don't always work on the same Xcode install I prefer not to install scripts. Xcode uses a subset of the Emacs commands. I use this approach to quickly delete a line. ^k (control-k) deletes from the cursor to the end of the line. Doing it twice also deletes the carriage return and pulls up the next line. ^a takes you to the start of the line. So to delete a complete line from the beginning you can use ^a^k^k.
This works for me (Xcode 4.4.1): the same steps as described here: [Xcode duplicate line](https://stackoverflow.com/questions/10266170/xcode-4-duplicate-line) (Halley's answer), but instead of: selectLine:, copy:, moveToEndOfLine:, insertNewline:, paste:, deleteBackward: use: selectLine:, moveToBeginningOfLine:, deleteToEndOfLine:
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
I was looking for a solution to this, and I tried Ashley Clark's, but it turns out there's an easier option using an included User Script called Delete Line. * Open the weird menu to the left of 'help' that looks like a scroll. * Choose Edit User Scripts... * Click the Key Bindings tab * Expand the Text section * Double-click the ⌘ column next to 'Delete Line' and type your hotkey. It may warn you that you stole it from some other command but that's fine. Done! You can do the same for Move Line Up and Move Line Down if you're an Eclipse junkie like me.
``` <key>Custom Keyword Set</key> <dict> <key>Delete Current Line In One Hit</key> <string>moveToEndOfLine:, deleteToBeginningOfLine:, deleteToEndOfParagraph:</string> </dict> ``` I suggest creating your customized dictionary in the file **IDETextKeyBindingSet.plist**. So: * close Xcode; * open Terminal; * sudo nano /Applications/Xcode.app/Contents/Frameworks/IDEKit.framework/Resources/IDETextKeyBindingSet.plist * add a new custom section, for instance the code above; * save, exit and open Xcode; * [Xcode > Preferences > Key Bindings] * search “Delete..” and create the new shortcut.
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
You can set up a system-wide key binding file that will apply to all Cocoa apps. To do what you want it should look like this: In your home folder, Library/KeyBindings/DefaultKeyBinding.dict ``` { "^D" = ( "moveToBeginningOfLine:", "deleteToEndOfLine:", ); } ``` I believe if you only want it to apply to Xcode you can name the file `PBKeyBinding.dict` instead but I didn't try that myself. You can read more about this system [here](https://developer.apple.com/library/mac/documentation/cocoa/conceptual/eventoverview/TextDefaultsBindings/TextDefaultsBindings.html) and [here](https://web.archive.org/web/20090826052401/http://www.erasetotheleft.com/post/mac-os-x-key-bindings).
This works for me (Xcode 4.4.1): the same steps as described here: [Xcode duplicate line](https://stackoverflow.com/questions/10266170/xcode-4-duplicate-line) (Halley's answer), but instead of: selectLine:, copy:, moveToEndOfLine:, insertNewline:, paste:, deleteBackward: use: selectLine:, moveToBeginningOfLine:, deleteToEndOfLine:
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
You can set up a system-wide key binding file that will apply to all Cocoa apps. To do what you want it should look like this: In your home folder, Library/KeyBindings/DefaultKeyBinding.dict ``` { "^D" = ( "moveToBeginningOfLine:", "deleteToEndOfLine:", ); } ``` I believe if you only want it to apply to Xcode you can name the file `PBKeyBinding.dict` instead but I didn't try that myself. You can read more about this system [here](https://developer.apple.com/library/mac/documentation/cocoa/conceptual/eventoverview/TextDefaultsBindings/TextDefaultsBindings.html) and [here](https://web.archive.org/web/20090826052401/http://www.erasetotheleft.com/post/mac-os-x-key-bindings).
As I don't always work on the same Xcode install I prefer not to install scripts. Xcode uses a subset of the Emacs commands. I use this approach to quickly delete a line. ^k (control-k) deletes from the cursor to the end of the line. Doing it twice also deletes the carriage return and pulls up the next line. ^a takes you to the start of the line. So to delete a complete line from the beginning you can use ^a^k^k.
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
You can set up a system-wide key binding file that will apply to all Cocoa apps. To do what you want it should look like this: In your home folder, Library/KeyBindings/DefaultKeyBinding.dict ``` { "^D" = ( "moveToBeginningOfLine:", "deleteToEndOfLine:", ); } ``` I believe if you only want it to apply to Xcode you can name the file `PBKeyBinding.dict` instead but I didn't try that myself. You can read more about this system [here](https://developer.apple.com/library/mac/documentation/cocoa/conceptual/eventoverview/TextDefaultsBindings/TextDefaultsBindings.html) and [here](https://web.archive.org/web/20090826052401/http://www.erasetotheleft.com/post/mac-os-x-key-bindings).
If you're having trouble in modern Xcode (as I was), the solution in Xcode 7.2 is to do what Opena [mentioned here with screenshots](https://stackoverflow.com/a/13049587/5760384) or [in text form via Velthune's answer](https://stackoverflow.com/a/26670057/5760384). Since I wanted a more direct command I simplified it to: ``` selectLine:, delete:, moveToBeginningOfLine: ``` Of course in Xcode's Preferences >> Key Bindings, you can just find the command, double-click under the Key column, and give it your own binding of Ctrl+Shift+D. [Here's a screenshot of what I ended up with](https://i.stack.imgur.com/KtRQU.png)
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
Thanks for the help, Ashley. After some experimentation I mapped my favorite TextMate commands (duplicate line, delete line). I created the file *~/Library/KeyBindings/PBKeyBinding.dict* and added the following: ``` { "^$K" = ( "selectLine:", "cut:" ); "^$D" = ( "selectLine:", "copy:", "moveToEndOfLine:", "insertNewline:", "paste:" ); } ``` The added "deleteBackward:" backs up one line after removing the line's content. You could probably just use "selectLine:" as well.
I was looking for a solution to this, and I tried Ashley Clark's, but it turns out there's an easier option using an included User Script called Delete Line. * Open the weird menu to the left of 'help' that looks like a scroll. * Choose Edit User Scripts... * Click the Key Bindings tab * Expand the Text section * Double-click the ⌘ column next to 'Delete Line' and type your hotkey. It may warn you that you stole it from some other command but that's fine. Done! You can do the same for Move Line Up and Move Line Down if you're an Eclipse junkie like me.
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
``` <key>Custom Keyword Set</key> <dict> <key>Delete Current Line In One Hit</key> <string>moveToEndOfLine:, deleteToBeginningOfLine:, deleteToEndOfParagraph:</string> </dict> ``` I suggest creating your customized dictionary in the file **IDETextKeyBindingSet.plist**. So: * close Xcode; * open Terminal; * sudo nano /Applications/Xcode.app/Contents/Frameworks/IDEKit.framework/Resources/IDETextKeyBindingSet.plist * add a new custom section, for instance the code above; * save, exit and open Xcode; * [Xcode > Preferences > Key Bindings] * search “Delete..” and create the new shortcut.
This works for me (Xcode 4.4.1): the same steps as described here: [Xcode duplicate line](https://stackoverflow.com/questions/10266170/xcode-4-duplicate-line) (Halley's answer), but instead of: selectLine:, copy:, moveToEndOfLine:, insertNewline:, paste:, deleteBackward: use: selectLine:, moveToBeginningOfLine:, deleteToEndOfLine:
476,953
I'm looking for a way to map some hot-keys to "delete the line that my cursor is on" in Xcode. I found "delete to end of line" and "delete to beginning of line" in the text key bindings, but I am missing how to completely delete the line no matter what I have selected. TextMate has this functionality mapped to Ctrl+Shift+D and I'd like the same thing if possible. Any ideas?
2009/01/25
[ "https://Stackoverflow.com/questions/476953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53653/" ]
As I don't always work on the same Xcode install I prefer not to install scripts. Xcode uses a subset of the Emacs commands. I use this approach to quickly delete a line. ^k (control-k) deletes from the cursor to the end of the line. Doing it twice also deletes the carriage return and pulls up the next line. ^a takes you to the start of the line. So to delete a complete line from the beginning you can use ^a^k^k.
``` <key>Custom Keyword Set</key> <dict> <key>Delete Current Line In One Hit</key> <string>moveToEndOfLine:, deleteToBeginningOfLine:, deleteToEndOfParagraph:</string> </dict> ``` I suggest creating your customized dictionary in the file **IDETextKeyBindingSet.plist**. So: * close Xcode; * open Terminal; * sudo nano /Applications/Xcode.app/Contents/Frameworks/IDEKit.framework/Resources/IDETextKeyBindingSet.plist * add a new custom section, for instance the code at the top; * save, exit and reopen Xcode; * [Xcode > Preferences > Key Bindings] * search “Delete..” and create the new shortcut.
41,096,588
My route for pages in routes.rb ``` get ":slug", to: 'site#pages' ``` my actions in site\_controller.rb ``` def pages render @page.page_template end def about end def contact end def content end def local_news end def global_news @newscasts = Newscast.published.paginate(page: params[:page], per_page: 5) end ``` and here is my error :) [![enter image description here](https://i.stack.imgur.com/X5fKu.png)](https://i.stack.imgur.com/X5fKu.png) The `pages` action does not see the `@newscasts` variable from my `global_news` action
2016/12/12
[ "https://Stackoverflow.com/questions/41096588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5613673/" ]
You need to define @newscasts inside the pages method ``` @newscasts = Newscast.published.paginate(page: params[:page], per_page: 5) ``` Or you can write this in your controller above your methods. ``` before_action :global_news, only: [:pages] ``` before\_action will run your global\_news method before every action listed inside only:. In your case you can write (:pages), but you can mention as many actions as you want. If you remove only, then global\_news will run before every action.
This raises an error because you are just rendering `global_news`. With `render` you are not executing the controller action, so `@newscasts` is never set. You can either use a before filter as in the other answer or call the method manually, because I think you are doing something dynamically here, right? For example ``` def pages global_news render @page.page_template end ```
177,716
I want to ensure the sender of Document B is the same person who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) the "subject," "issuer," and "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it?
2018/01/16
[ "https://security.stackexchange.com/questions/177716", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168602/" ]
Public and private keys are linked in such a way that if two certificates have the same public key, they were created using the same private key. So if you assume that the private key is indeed kept private, the part you can trust in the certificates to identify the creator is the **public key**, and by extension the digest of the public key.
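Comparing the public-key digests from the two certificates is enough to perform this check. A minimal Python sketch (the key bytes below are hypothetical placeholders standing in for the real DER-encoded public keys extracted from each certificate):

```python
import hashlib

def key_fingerprint(public_key_bytes: bytes) -> str:
    # SHA-1 digest of the raw public key, like the
    # "SHA1 digest of public key" field a certificate viewer shows
    return hashlib.sha1(public_key_bytes).hexdigest()

# Hypothetical stand-ins for the DER-encoded public keys from each document:
key_doc_a = b"placeholder public key bytes, sender X"
key_doc_b = b"placeholder public key bytes, sender X"   # same sender
key_doc_c = b"placeholder public key bytes, sender Y"   # different sender

same_sender = key_fingerprint(key_doc_a) == key_fingerprint(key_doc_b)
print(same_sender)  # True: identical keys give identical fingerprints
```

Matching fingerprints only prove the same key pair was used; whether a single person controls that key pair remains an assumption.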
If two self-signed certificates have different public keys you cannot determine if these certificates were created by the same person or not. If two self-signed certificates have the same public key you at least know that the same private key was used to create the certificates. If you assume that this secret private key is only known to one person you can deduce that the same person created both certificates. If you instead must assume that multiple persons might have access to the same private key then you at least know that one of the certificates was issued by one person in this group and the other by the same or a different person from the same group.
177,716
I want to ensure the sender of Document B is the same person who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) the "subject," "issuer," and "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it?
2018/01/16
[ "https://security.stackexchange.com/questions/177716", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168602/" ]
If two self-signed certificates have different public keys you cannot determine if these certificates were created by the same person or not. If two self-signed certificates have the same public key you at least know that the same private key was used to create the certificates. If you assume that this secret private key is only known to one person you can deduce that the same person created both certificates. If you instead must assume that multiple persons might have access to the same private key then you at least know that one of the certificates was issued by one person in this group and the other by the same or a different person from the same group.
I think that the math behind the functions should keep you relatively safe, given that the public key in the certificate matches with the creator's public key. --------------------------------------------------------------------------------------------------------------------------------------------------------------- From your question though, it seems that you are new to the concept of SHA digests and digital signatures. If you would like to learn the principles, there are a lot of websites and YouTube videos that would help. The basics are as follows. There are two functions, one for creation and one to check. The creation function takes as input the SHA digest of the message and the creator's private key and gives as output a digital signature. The check function takes as input the message (its SHA digest), the creator's public key, and the digital signature, and gives out a logical output 'true' or 'false' depending on whether the digital signature is valid or not. Also note that the SHA digest of a fixed message doesn't vary. This is a great way to ensure data integrity and prove that a certificate is legit. --- But a simple answer is, if the public key in the certificate matches with the creator's public key you are relatively safe. For example, two public keys having the same SHA1 digest would be incredibly rare and tough to find. That makes it highly unlikely that SHA1 is the weak link, since you can cross-check the SHA1 digest of a given key with one calculated from the public key (they should match). You can also self-check the various digests mentioned in your question to ensure the legitimacy of the certificate. If the digital signature is invalid, your OS or application should pick up on that via built-in functions; if it doesn't, apply the check function yourself. So in conclusion, cross-check the digests and the public key.
For your initial question, if you trust that Document A was sent by a legit sender, then the public keys of both the certificates should match. If the attacker tries to pretend to be the original sender, it'll show in the digest and signatures.
177,716
I want to ensure the sender of Document B is the same person who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) the "subject," "issuer," and "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it?
2018/01/16
[ "https://security.stackexchange.com/questions/177716", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168602/" ]
If two self-signed certificates have different public keys you cannot determine if these certificates were created by the same person or not. If two self-signed certificates have the same public key you at least know that the same private key was used to create the certificates. If you assume that this secret private key is only known to one person you can deduce that the same person created both certificates. If you instead must assume that multiple persons might have access to the same private key then you at least know that one of the certificates was issued by one person in this group and the other by the same or a different person from the same group.
In addition to the points made by other users, if the documents themselves are signed with MD5 or SHA-1, then you cannot trust that they were signed by the same person, even if the signatures are valid and have the same public key (which would normally be sufficient). The reason for this is that both MD5 and SHA-1 have been found to have weaknesses that can be exploited to make an attacker-controlled document appear to have been signed by the original author. The SHA-1 attack is still very expensive, so could only be attempted by a very well-funded attacker, but MD5 is exploitable with relatively inexpensive hardware (effective real-world attacks have been mounted by university researchers). Both attacks rely on a signing oracle, so may not be applicable to your situation, but MD5 and SHA-1 are problematic, especially if you have well-funded adversaries.
177,716
I want to ensure the sender of Document B is the same person who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) the "subject," "issuer," and "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it?
2018/01/16
[ "https://security.stackexchange.com/questions/177716", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168602/" ]
Public and private keys are linked in such a way that if two certificates have the same public key, they were created using the same private key. So if you assume that the private key is indeed kept private, the part you can trust in the certificates to identify the creator is the **public key**, and by extension the digest of the public key.
I think that the math behind the functions should keep you relatively safe, given that the public key in the certificate matches with the creator's public key. --------------------------------------------------------------------------------------------------------------------------------------------------------------- From your question though, it seems that you are new to the concept of SHA digests and digital signatures. If you would like to learn the principles, there are a lot of websites and YouTube videos that would help. The basics are as follows. There are two functions, one for creation and one to check. The creation function takes as input the SHA digest of the message and the creator's private key and gives as output a digital signature. The check function takes as input the message (its SHA digest), the creator's public key, and the digital signature, and gives out a logical output 'true' or 'false' depending on whether the digital signature is valid or not. Also note that the SHA digest of a fixed message doesn't vary. This is a great way to ensure data integrity and prove that a certificate is legit. --- But a simple answer is, if the public key in the certificate matches with the creator's public key you are relatively safe. For example, two public keys having the same SHA1 digest would be incredibly rare and tough to find. That makes it highly unlikely that SHA1 is the weak link, since you can cross-check the SHA1 digest of a given key with one calculated from the public key (they should match). You can also self-check the various digests mentioned in your question to ensure the legitimacy of the certificate. If the digital signature is invalid, your OS or application should pick up on that via built-in functions; if it doesn't, apply the check function yourself. So in conclusion, cross-check the digests and the public key.
For your initial question, if you trust that Document A was sent by a legit sender, then the public keys of both the certificates should match. If the attacker tries to pretend to be the original sender, it'll show in the digest and signatures.
177,716
I want to ensure the sender of Document B is the same person who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) the "subject," "issuer," and "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it?
2018/01/16
[ "https://security.stackexchange.com/questions/177716", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168602/" ]
Public and private keys are linked in such a way that if two certificates have the same public key, they were created using the same private key. So if you assume that the private key is indeed kept private, the part you can trust in the certificates to identify the creator is the **public key**, and by extension the digest of the public key.
In addition to the points made by other users, if the documents themselves are signed with MD5 or SHA-1, then you cannot trust that they were signed by the same person, even if the signatures are valid and have the same public key (which would normally be sufficient). The reason for this is that both MD5 and SHA-1 have been found to have weaknesses that can be exploited to make an attacker-controlled document appear to have been signed by the original author. The SHA-1 attack is still very expensive, so could only be attempted by a very well-funded attacker, but MD5 is exploitable with relatively inexpensive hardware (effective real-world attacks have been mounted by university researchers). Both attacks rely on a signing oracle, so may not be applicable to your situation, but MD5 and SHA-1 are problematic, especially if you have well-funded adversaries.
177,716
I want to ensure the sender of Document B is the same person who previously sent me Document A. Both documents are signed with self-signed certificates. I'm not interested in knowing the real-world identity of the sender. When I open the self-signed certificate with a certificate viewer, it shows the certificate's subject, issuer, serial number, subject key identifier, public key (very long gibberish), SHA1 digest of public key, X.509 data, SHA1 digest (of what?), and MD5 digest (of what?). I know the issuer of the self-signed certificate can put arbitrary things into (i.e., fake) the "subject," "issuer," and "serial number" fields, so they are meaningless. But I don't know anything about the other fields. If the certificates contained in those two documents have, for example, exactly the same "SHA1 digest of public key" string, does that mean they are indeed signed by the same person? Can an attacker fake it?
2018/01/16
[ "https://security.stackexchange.com/questions/177716", "https://security.stackexchange.com", "https://security.stackexchange.com/users/168602/" ]
In addition to the points made by other users, if the documents themselves are signed with MD5 or SHA-1, then you cannot trust that they were signed by the same person, even if the signatures are valid and have the same public key (which would normally be sufficient). The reason for this is that both MD5 and SHA-1 have been found to have weaknesses that can be exploited to make an attacker-controlled document appear to have been signed by the original author. The SHA-1 attack is still very expensive, so could only be attempted by a very well-funded attacker, but MD5 is exploitable with relatively inexpensive hardware (effective real-world attacks have been mounted by university researchers). Both attacks rely on a signing oracle, so may not be applicable to your situation, but MD5 and SHA-1 are problematic, especially if you have well-funded adversaries.
I think that the math behind the functions should keep you relatively safe, given that the public key in the certificate matches with the creator's public key. --------------------------------------------------------------------------------------------------------------------------------------------------------------- From your question though, it seems that you are new to the concept of SHA digests and digital signatures. If you would like to learn the principles, there are a lot of websites and YouTube videos that would help. The basics are as follows. There are two functions, one for creation and one to check. The creation function takes as input the SHA digest of the message and the creator's private key and gives as output a digital signature. The check function takes as input the message (its SHA digest), the creator's public key, and the digital signature, and gives out a logical output 'true' or 'false' depending on whether the digital signature is valid or not. Also note that the SHA digest of a fixed message doesn't vary. This is a great way to ensure data integrity and prove that a certificate is legit. --- But a simple answer is, if the public key in the certificate matches with the creator's public key you are relatively safe. For example, two public keys having the same SHA1 digest would be incredibly rare and tough to find. That makes it highly unlikely that SHA1 is the weak link, since you can cross-check the SHA1 digest of a given key with one calculated from the public key (they should match). You can also self-check the various digests mentioned in your question to ensure the legitimacy of the certificate. If the digital signature is invalid, your OS or application should pick up on that via built-in functions; if it doesn't, apply the check function yourself. So in conclusion, cross-check the digests and the public key.
For your initial question, if you trust that Document A was sent by a legit sender, then the public keys of both the certificates should match. If the attacker tries to pretend to be the original sender, it'll show in the digest and signatures.
8,616,765
I cannot get the symfony2 configuration to correctly overwrite values from other config-files. Here is the problem: I have a new environment "staging" where I want to use most of the stuff from config\_prod.yml but have another logging level (I want it to be as it is in development, simply logging everything to a file). Here is the config I use: config\_prod.yml: ``` imports: - { resource: config.yml } monolog: handlers: main: type: fingers_crossed action_level: error handler: nested nested: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug ``` config\_staging.yml: ``` imports: - { resource: config_prod.yml } monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug nested: ~ ``` From my point of view, the nested logger is now null and the main logs to the given file. **What really happens is that it logs every message twice!** The same happens when I use this for the config\_staging.yml: ``` imports: - { resource: config_prod.yml } monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug handler: ~ nested: ~ ``` I found a workaround, setting the action\_level of the main handler to debug and leaving everything else as is, but I don't like this solution. There must be a way to overwrite config stuff so I only have the main monolog handler.
2011/12/23
[ "https://Stackoverflow.com/questions/8616765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/372562/" ]
Pretty much one year later I now have an understanding of what's happening and how to prevent it: The `nested` handler is filled with the configuration from the `config.yml`, and when parsing the `config_staging.yml` the yaml component does not overwrite the whole hashmap and set the value to null but tries to merge both, resulting in the same array as before. There is a type called `null` which can be used to overwrite any logger. It does nothing and is therefore suitable for this use case: ``` monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug handler: ~ nested: type: null ``` Another solution would be to not configure any logging in the config.yml but only in the specific environment configs like `config_prod.yml` and so on.
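The merge behavior can be mimicked outside Symfony. A rough Python sketch of how layered config files combine (this models the semantics described above, not Symfony's actual implementation):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge override into base the way layered config files combine:
    nested mappings are merged key-by-key, and a None ("~") override
    does not discard the mapping inherited from the base file."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        elif value is None and isinstance(merged.get(key), dict):
            pass  # "nested: ~" leaves the inherited handler untouched
        else:
            merged[key] = value
    return merged

prod = {"handlers": {"main": {"type": "fingers_crossed", "handler": "nested"},
                     "nested": {"type": "stream"}}}
staging = {"handlers": {"main": {"type": "stream"},
                        "nested": None}}  # "~" in YAML

merged = deep_merge(prod, staging)
print(merged["handlers"]["nested"])  # still a stream handler -> messages logged twice
```

This is why nulling the handler has no effect and an explicit `type: null` override is needed instead.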
Check that you don't have any repeated keys in the \_staging config file -- the second one would override the first, with the net result that the first is ignored.
8,616,765
I cannot get the symfony2 configuration to correctly overwrite values from other config-files. Here is the problem: I have a new environment "staging" where I want to use most of the stuff from config\_prod.yml but have another logging level (I want it to be as it is in development, simply logging everything to a file). Here is the config I use: config\_prod.yml: ``` imports: - { resource: config.yml } monolog: handlers: main: type: fingers_crossed action_level: error handler: nested nested: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug ``` config\_staging.yml: ``` imports: - { resource: config_prod.yml } monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug nested: ~ ``` From my point of view, the nested logger is now null and the main logs to the given file. **What really happens is that it logs every message twice!** The same happens when I use this for the config\_staging.yml: ``` imports: - { resource: config_prod.yml } monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug handler: ~ nested: ~ ``` I found a workaround, setting the action\_level of the main handler to debug and leaving everything else as is, but I don't like this solution. There must be a way to overwrite config stuff so I only have the main monolog handler.
2011/12/23
[ "https://Stackoverflow.com/questions/8616765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/372562/" ]
Check that you don't have any repeated keys in the \_staging config file -- the second one would override the first, with the net result that the first is ignored.
If you want to alter a collection by removing an element you will have to create an intermediate YAML file (importing the base) setting the collection to "null" and re-adding all required collection elements in a file which in turn imports the intermediate YAML file. You can not simply overwrite a collection. New elements will get added but you can't remove existing ones except by the workaround described.
8,616,765
I cannot get the symfony2 configuration to correctly overwrite values from other config-files. Here is the problem: I have a new environment "staging" where I want to use most of the stuff from config\_prod.yml but have another logging level (I want it to be as it is in development, simply logging everything to a file). Here is the config I use: config\_prod.yml: ``` imports: - { resource: config.yml } monolog: handlers: main: type: fingers_crossed action_level: error handler: nested nested: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug ``` config\_staging.yml: ``` imports: - { resource: config_prod.yml } monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug nested: ~ ``` From my point of view, the nested logger is now null and the main logs to the given file. **What really happens is that it logs every message twice!** The same happens when I use this for the config\_staging.yml: ``` imports: - { resource: config_prod.yml } monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug handler: ~ nested: ~ ``` I found a workaround, setting the action\_level of the main handler to debug and leaving everything else as is, but I don't like this solution. There must be a way to overwrite config stuff so I only have the main monolog handler.
2011/12/23
[ "https://Stackoverflow.com/questions/8616765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/372562/" ]
Pretty much one year later I now have an understanding of what's happening and how to prevent it: The `nested` handler is filled with the configuration from the `config.yml`, and when parsing the `config_staging.yml` the yaml component does not overwrite the whole hashmap and set the value to null but tries to merge both, resulting in the same array as before. There is a type called `null` which can be used to overwrite any logger. It does nothing and is therefore suitable for this use case: ``` monolog: handlers: main: type: stream path: %kernel.logs_dir%/%kernel.environment%.log level: debug handler: ~ nested: type: null ``` Another solution would be to not configure any logging in the config.yml but only in the specific environment configs like `config_prod.yml` and so on.
If you want to alter a collection by removing an element you will have to create an intermediate YAML file (importing the base) setting the collection to "null" and re-adding all required collection elements in a file which in turn imports the intermediate YAML file. You can not simply overwrite a collection. New elements will get added but you can't remove existing ones except by the workaround described.
24,266,951
How can I find words with doubled letters (**e.g. progress, tool and so on**) in text using regex?
2014/06/17
[ "https://Stackoverflow.com/questions/24266951", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3673609/" ]
``` my $str = "katttaarww"; my @arr = $str =~ /(.)\1+/g; print join "~", @arr; ``` output ``` t~a~w ```
Use a backreference to a single wildcard capture group; see below: ``` a = "hello" a =~ /(.)\1/ ```
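The same backreference idea carries over to other regex engines. A Python sketch that extracts whole words containing a doubled letter (the sample text is just illustrative input):

```python
import re

def words_with_doubled_letters(text):
    # keep each word that contains some letter immediately repeated,
    # using the backreference \1 to the captured letter
    return [w for w in re.findall(r"[A-Za-z]+", text)
            if re.search(r"([A-Za-z])\1", w)]

print(words_with_doubled_letters("progress with this tool"))  # ['progress', 'tool']
```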
8,788,817
I need to use as.Date on the index of a zoo object. Some of the dates are in BST and so when converting I lose a day on (only) these entries. I don't care about one hour's difference or even the time part of the date at all, I just want to make sure that the dates displayed stay the same. I'm guessing this is not very hard but I can't manage it. Can somebody help please? ``` class(xtsRet) #[1] "xts" "zoo" index(xtsRet) #[1] "2007-07-31 BST" "2007-08-31 BST" "2007-09-30 BST" "2007-10-31 GMT" class(index(xtsRet)) #[1] "POSIXt" "POSIXct" index(xtsRet) <- as.Date(index(xtsRet)) index(xtsRet) #[1] "2007-07-30" "2007-08-30" "2007-09-29" "2007-10-31" ``` Minimally reproducible example (not requiring `zoo` package): ``` my_date <- as.POSIXct("2007-04-01") # Users in non-UK timezone will need to # do as.POSIXct("2007-04-01", "Europe/London") my_date #[1] "2007-04-01 BST" as.Date(my_date) #[1] "2007-03-31" ```
2012/01/09
[ "https://Stackoverflow.com/questions/8788817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/978760/" ]
Suppose we have this sample data: ``` library(zoo) x <- as.POSIXct("2000-01-01", tz = "GMT") ``` Then see if any of these are what you want: ``` # use current time zone as.Date(as.character(x, tz = "")) # use GMT as.Date(as.character(x, tz = "GMT")) # set entire session to GMT Sys.setenv(TZ = "GMT") as.Date(x) ``` Also try `"BST"` in place of `"GMT"` and note the article on dates and times in [R News 4/1](http://cran.r-project.org/doc/Rnews/Rnews_2004-1.pdf) .
I would suggest using as.POSIXlt to convert to a date object, wrapped in as.Date: ``` d <- as.POSIXct(c("2007-07-31","2007-08-31","2007-09-30","2007-10-31")) d [1] "2007-07-31 BST" "2007-08-31 BST" "2007-09-30 BST" "2007-10-31 GMT" as.Date(as.POSIXlt(d)) [1] "2007-07-31" "2007-08-31" "2007-09-30" "2007-10-31" ``` Achieves the same thing as the +3600 above, but slightly less of a hack
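The day shift comes from converting a local (BST) midnight to UTC before taking the date. The same effect can be reproduced outside R; a Python sketch using `zoneinfo` (assuming the system time zone database is available):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Midnight on 2007-07-31 in London is British Summer Time (UTC+1) ...
d = datetime(2007, 7, 31, 0, 0, tzinfo=ZoneInfo("Europe/London"))

# ... so converting to UTC first lands on the previous day:
print(d.astimezone(timezone.utc).date())  # 2007-07-30

# Taking the date in the original time zone keeps the displayed day:
print(d.date())  # 2007-07-31
```

Both R fixes above amount to the second variant: take the calendar date in the timestamp's own time zone instead of converting through UTC midnight.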
8,788,817
I need to use as.Date on the index of a zoo object. Some of the dates are in BST and so when converting I lose a day on (only) these entries. I don't care about one hour's difference or even the time part of the date at all, I just want to make sure that the dates displayed stay the same. I'm guessing this is not very hard but I can't manage it. Can somebody help please? ``` class(xtsRet) #[1] "xts" "zoo" index(xtsRet) #[1] "2007-07-31 BST" "2007-08-31 BST" "2007-09-30 BST" "2007-10-31 GMT" class(index(xtsRet)) #[1] "POSIXt" "POSIXct" index(xtsRet) <- as.Date(index(xtsRet)) index(xtsRet) #[1] "2007-07-30" "2007-08-30" "2007-09-29" "2007-10-31" ``` Minimally reproducible example (not requiring `zoo` package): ``` my_date <- as.POSIXct("2007-04-01") # Users in non-UK timezone will need to # do as.POSIXct("2007-04-01", "Europe/London") my_date #[1] "2007-04-01 BST" as.Date(my_date) #[1] "2007-03-31" ```
2012/01/09
[ "https://Stackoverflow.com/questions/8788817", "https://Stackoverflow.com", "https://Stackoverflow.com/users/978760/" ]
You can offset the `POSIX` objects so its not based around midnight. 1 hour (3600 secs) should be sufficient: ``` d <- as.POSIXct(c("2007-07-31","2007-08-31","2007-09-30","2007-10-31")) d [1] "2007-07-31 BST" "2007-08-31 BST" "2007-09-30 BST" "2007-10-31 GMT" as.Date(d) [1] "2007-07-30" "2007-08-30" "2007-09-29" "2007-10-31" as.Date(d+3600) [1] "2007-07-31" "2007-08-31" "2007-09-30" "2007-10-31" ```
I would suggest using as.POSIXlt to convert to a date object, wrapped in as.Date: ``` d <- as.POSIXct(c("2007-07-31","2007-08-31","2007-09-30","2007-10-31")) d [1] "2007-07-31 BST" "2007-08-31 BST" "2007-09-30 BST" "2007-10-31 GMT" as.Date(as.POSIXlt(d)) [1] "2007-07-31" "2007-08-31" "2007-09-30" "2007-10-31" ``` Achieves the same thing as the +3600 above, but slightly less of a hack
67,811,438
I have 3 shared integer variables which are being written/read in multithreaded code. Something like this happens in the shared code. How do I make the thread 2 operation free of data races without relying on a lock? Using a lock would impact my runtime, and this is legacy code, so I can't really move to std::atomic. Initially: ``` int var1 = 0, var2 = 0, var3 = 0; ``` Later, thread 1 does: ``` __sync_fetch_and_add(&var1, 1); ``` thread 2: ``` var3 = var1 > var2 ? var1 : var2; ``` thread 3: ``` __sync_fetch_and_add(&var2, 1); ``` thread 4: ``` __sync_fetch_and_sub(&var1, 1); ```
2021/06/02
[ "https://Stackoverflow.com/questions/67811438", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4354472/" ]
You don't *start* reading from `doneQ` until you've finished sending *all* the lines to `lineParseQ`, which is more lines than there is buffer space. So once the `doneQ` buffer is full, that send blocks, which starts filling the `lineParseQ` buffer, and once that's full, it deadlocks. Move either the loop sending to `lineParseQ`, the loop reading from `doneQ`, or both, to separate goroutine(s), e.g.: ``` go func() { for _, line := range lines { countSend++ lineParseQ <- line } close(lineParseQ) }() ``` This will still deadlock at the end, because you've got a `range` over a channel and the `close` after it in the same goroutine; since `range` continues until the channel is closed, and the close comes after the `range` finishes, you still have a deadlock. You need to put the closes in appropriate places; that being, either in the sending routine, or blocked on a `WaitGroup` monitoring the sending routines if there are multiple senders for a given channel. ``` // Start line parsing workers and send to jobProcessQ wg := new(sync.WaitGroup) for i := 1; i <= 2; i++ { wg.Add(1) go lineToStructWorker(i, lineParseQ, jobProcessQ, wg) } // Process myStruct from jobProcessQ for i := 1; i <= 5; i++ { go WorkerProcessStruct(i, jobProcessQ, doneQ) } countSend := 0 go func() { for _, line := range lines { countSend++ lineParseQ <- line } close(lineParseQ) }() go func() { wg.Wait() close(jobProcessQ) }() for a := range doneQ { fmt.Printf("Received %v.\n", a) } // ... func lineToStructWorker(workerID int, lineQ <-chan string, strQ chan<- myStruct, wg *sync.WaitGroup) { for j := range lineQ { strQ <- lineToStruct(j) // just parses the csv to a struct... } wg.Done() } func WorkerProcessStruct(workerID int, strQ <-chan myStruct, done chan<- myStruct) { for a := range strQ { time.Sleep(time.Millisecond * 500) // fake long operation... done <- a } close(done) } ``` Full working example here: <https://play.golang.org/p/XsnewSZeb2X>
Coordinate the pipeline with `sync.WaitGroup`, breaking each piece into stages. When you know one piece of the pipeline is complete (and no one is writing to a particular channel), close the channel to instruct all "workers" to exit, e.g. ``` var wg sync.WaitGroup for i := 1; i <= 5; i++ { i := i wg.Add(1) go func() { Worker(i) wg.Done() }() } // wg.Wait() signals the above have completed ``` Buffered channels are handy to handle burst workloads, but sometimes they are used to avoid deadlocks in poor designs. If you want to avoid running certain parts of your pipeline in a goroutine, you can buffer some channels (typically matching the number of workers) to avoid a blockage in your main goroutine. If you have dependent pieces that read & write and want to avoid deadlock - ensure they are in separate goroutines. Having each part of the pipeline in its own goroutine will even remove the need for buffered channels: ``` // putting all channel work into separate goroutines // removes the need for buffered channels lineParseQ := make(chan string, 0) jobProcessQ := make(chan myStruct, 0) doneQ := make(chan myStruct, 0) ``` It's a tradeoff of course - a goroutine costs about 2K in resources - versus a buffered channel, which is much less. As with most designs, it depends on how it is used. Also don't get caught by the notorious Go [for-loop gotcha](https://yourbasic.org/golang/gotcha-data-race-closure/), so use a closure assignment to avoid this: ``` for i := 1; i <= 5; i++ { i := i // new i (not the i above) go func() { myfunc(i) // otherwise all goroutines will most likely get '5' }() } ``` Finally, ensure you wait for all results to be processed before exiting. It's a common mistake to return from a channel-based function and believe all results have been processed. In a service this will eventually be true, but in a standalone executable the processing loop may still be working on results.
``` go func() { wgW.Wait() // waiting on worker goroutines to finish close(doneQ) // safe to close results channel now }() // ensure we don't return until all results have been processed for a := range doneQ { fmt.Printf("Received %v.\n", a) } ``` By processing the results in the main goroutine, we ensure we don't return prematurely without having processed everything. Pulling it all together: <https://play.golang.org/p/MjLpQ5xglP3>
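The shape described in these answers — produce in a separate thread of execution, consume in the main one, and signal completion explicitly — translates to other languages too. As a rough sketch (my own illustration with a hypothetical helper name, not from the answers), Python's bounded `queue.Queue` plays the role of a buffered channel, with a sentinel object standing in for closing the channel:

```python
import queue
import threading

def run_pipeline(lines):
    q = queue.Queue(maxsize=2)      # small buffer, like a buffered channel
    SENTINEL = object()             # stands in for closing a Go channel

    def producer():
        for line in lines:
            q.put(line)             # blocks when the buffer is full...
        q.put(SENTINEL)

    # ...so the producer must run in its own thread; feeding more lines
    # than the buffer holds from the consuming thread would deadlock
    threading.Thread(target=producer, daemon=True).start()

    results = []
    while (item := q.get()) is not SENTINEL:
        results.append(item.upper())  # stand-in for the real per-line work
    return results

print(run_pipeline(["a", "b", "c", "d", "e"]))
```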
59,170,905
I have a csv that contains 100 rows by three columns of random numbers: ``` 100, 20, 30 746, 82, 928 387, 12, 287.3 12, 47, 2938 125, 198, 263 ... 12, 2736, 14 ``` In bash, I need to add another column that will be either a 0 or a 1. However (and here is the hard part), I need to have 20% of the rows with 0s, and 80% with 1s. Result: ``` 100, 20, 30, 0 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 1 125, 198, 263, 0 ... 12, 2736, 14, 1 ``` What I have tried: ``` sed '1~3s/$/0/' mycsv.csv ``` I thought I could replace the 1~3 with a random number, but that doesn't work. Maybe a loop would? Maybe sed or awk?
2019/12/04
[ "https://Stackoverflow.com/questions/59170905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8305680/" ]
Using awk and `rand()` to get randomly 0s and 1s with 20 % probability of getting a 0: ``` $ awk 'BEGIN{OFS=", ";srand()}{print $0,(rand()>0.2)}' file ``` Output: ``` 100, 20, 30, 1 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 0 125, 198, 263, 1 ..., 0 12, 2736, 14, 1 ``` Explained: ``` $ awk ' BEGIN { OFS=", " # set output field separator srand() # time based seed for rand() } { print $0,(rand()>0.2) # output 0/1 ~ 20/80 }' file ``` As `srand()` per se is time (seconds) based, depending on the need, you might want to introduce external seed for it, for example, from Bash: ``` $ awk -v seed=$RANDOM 'BEGIN{srand(seed)}...' ``` **Update**: A version that first counts the lines in the file, calculates how many are 20 % 0s and randomly picks a 0 or a 1 and keeps count: ``` $ awk -v seed=$RANDOM ' BEGIN { srand(seed) # feed the seed to random } NR==1 { # processing the first record while((getline line < FILENAME)>0) # count the lines in the file nr++ # nr stores the count for(i=1;i<=nr;i++) # produce a[(i>0.2*nr)]++ # 20 % 0s, 80 % 1s } { p=a[0]/(a[0]+a[1]) # probability to pick 0 or 1 print $0 ". " (a[v=(rand()>p)]?v:v=(!v)) # print record and 0 or 1 a[v]-- # remove 0 or 1 }' file ```
Adding to the previous reply, here is a Python 3 way to do this: ``` #!/usr/local/bin/python3 import csv import math import random totalOflines = len(open('columns.csv').readlines()) newColumn = ( [0] * math.ceil(totalOflines * 0.20) ) + ( [1] * math.ceil(totalOflines * 0.80) ) random.shuffle(newColumn) csvr = csv.reader(open('columns.csv'), delimiter = ",") i=0 for row in csvr: print("{},{},{},{}".format(row[0],row[1],row[2],newColumn[i])) i+=1 ``` Regards!
59,170,905
I have a csv that contains 100 rows by three columns of random numbers: ``` 100, 20, 30 746, 82, 928 387, 12, 287.3 12, 47, 2938 125, 198, 263 ... 12, 2736, 14 ``` In bash, I need to add another column that will be either a 0 or a 1. However (and here is the hard part), I need to have 20% of the rows with 0s, and 80% with 1s. Result: ``` 100, 20, 30, 0 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 1 125, 198, 263, 0 ... 12, 2736, 14, 1 ``` What I have tried: ``` sed '1~3s/$/0/' mycsv.csv ``` I thought I could replace the 1~3 with a random number, but that doesn't work. Maybe a loop would? Maybe sed or awk?
2019/12/04
[ "https://Stackoverflow.com/questions/59170905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8305680/" ]
Using awk and `rand()` to get randomly 0s and 1s with 20 % probability of getting a 0: ``` $ awk 'BEGIN{OFS=", ";srand()}{print $0,(rand()>0.2)}' file ``` Output: ``` 100, 20, 30, 1 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 0 125, 198, 263, 1 ..., 0 12, 2736, 14, 1 ``` Explained: ``` $ awk ' BEGIN { OFS=", " # set output field separator srand() # time based seed for rand() } { print $0,(rand()>0.2) # output 0/1 ~ 20/80 }' file ``` As `srand()` per se is time (seconds) based, depending on the need, you might want to introduce external seed for it, for example, from Bash: ``` $ awk -v seed=$RANDOM 'BEGIN{srand(seed)}...' ``` **Update**: A version that first counts the lines in the file, calculates how many are 20 % 0s and randomly picks a 0 or a 1 and keeps count: ``` $ awk -v seed=$RANDOM ' BEGIN { srand(seed) # feed the seed to random } NR==1 { # processing the first record while((getline line < FILENAME)>0) # count the lines in the file nr++ # nr stores the count for(i=1;i<=nr;i++) # produce a[(i>0.2*nr)]++ # 20 % 0s, 80 % 1s } { p=a[0]/(a[0]+a[1]) # probability to pick 0 or 1 print $0 ". " (a[v=(rand()>p)]?v:v=(!v)) # print record and 0 or 1 a[v]-- # remove 0 or 1 }' file ```
Another way to do it is the following: 1. Create a sequence of 0s and 1s with the correct ratio: ``` $ awk 'END{for(i=1;i<=FNR;++i) print (i <= 0.8*FNR) }' file ``` 2. Shuffle the output to randomize it: ``` $ awk 'END{for(i=1;i<=FNR;++i) print (i <= 0.8*FNR) }' file | shuf ``` 3. Paste it next to the file with a <comma>-character as delimiter: ``` $ paste -d, file <(awk 'END{for(i=1;i<=FNR;++i) print (i <= 0.8*FNR) }' file | shuf) ``` The reason I do not want to use any form of random number generator is that this could lead to 100% ones or 100% zeros, or anything of that nature. The above produces the closest possible 80% of ones and 20% of zeros. Another method would be a double parse with awk in the following way: ``` $ awk '(NR==FNR) { next } (FNR==1) { for(i=1;i<NR;i++) a[i] = (i<0.8*(NR-1)) } { for(i in a) { print $0","a[i]; delete a[i]; break } }' file file ``` The above makes use of the fact that `for(i in a)` cycles through the array in an undetermined way. You can see this by quickly doing ``` $ awk 'BEGIN{ORS=","; for(i=1;i<=20;++i) a[i]; for(i in a) print i; printf "\n"}' 17,4,18,5,19,6,7,8,9,10,20,11,12,13,14,1,15,2,16,3, ``` But this is implementation dependent. Finally, you could actually use `shuf` in awk to get to the desired result: ``` $ awk '(NR==FNR) { next } (FNR==1) { cmd = "shuf -i 1-"(NR-1) } { cmd | getline i; print $0","(i <= 0.8*(NR-FNR)) }' file file ```
59,170,905
I have a csv that contains 100 rows by three columns of random numbers: ``` 100, 20, 30 746, 82, 928 387, 12, 287.3 12, 47, 2938 125, 198, 263 ... 12, 2736, 14 ``` In bash, I need to add another column that will be either a 0 or a 1. However (and here is the hard part), I need to have 20% of the rows with 0s, and 80% with 1s. Result: ``` 100, 20, 30, 0 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 1 125, 198, 263, 0 ... 12, 2736, 14, 1 ``` What I have tried: ``` sed '1~3s/$/0/' mycsv.csv ``` I thought I could replace the 1~3 with a random number, but that doesn't work. Maybe a loop would? Maybe sed or awk?
2019/12/04
[ "https://Stackoverflow.com/questions/59170905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8305680/" ]
Using awk and `rand()` to get randomly 0s and 1s with 20 % probability of getting a 0: ``` $ awk 'BEGIN{OFS=", ";srand()}{print $0,(rand()>0.2)}' file ``` Output: ``` 100, 20, 30, 1 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 0 125, 198, 263, 1 ..., 0 12, 2736, 14, 1 ``` Explained: ``` $ awk ' BEGIN { OFS=", " # set output field separator srand() # time based seed for rand() } { print $0,(rand()>0.2) # output 0/1 ~ 20/80 }' file ``` As `srand()` per se is time (seconds) based, depending on the need, you might want to introduce external seed for it, for example, from Bash: ``` $ awk -v seed=$RANDOM 'BEGIN{srand(seed)}...' ``` **Update**: A version that first counts the lines in the file, calculates how many are 20 % 0s and randomly picks a 0 or a 1 and keeps count: ``` $ awk -v seed=$RANDOM ' BEGIN { srand(seed) # feed the seed to random } NR==1 { # processing the first record while((getline line < FILENAME)>0) # count the lines in the file nr++ # nr stores the count for(i=1;i<=nr;i++) # produce a[(i>0.2*nr)]++ # 20 % 0s, 80 % 1s } { p=a[0]/(a[0]+a[1]) # probability to pick 0 or 1 print $0 ". " (a[v=(rand()>p)]?v:v=(!v)) # print record and 0 or 1 a[v]-- # remove 0 or 1 }' file ```
This seems to be more a problem of algorithm than of programming. You state in your question: *I need to have 20% of the rows with 0s, and 80% with 1s.* So the first question is what to do if the number of rows is not a multiple of 5. If you have 112 rows in total, 20% would be 22.4 rows, which does not make sense. Assuming that you can redefine your task to deal with that case, the simplest solution would be to assign a 0 to the first 20% of the rows and a 1 to the remaining ones. But say that you want to have some randomness in the distribution of the 0s and 1s. One quick-and-dirty solution would be to create an array consisting of exactly the zeroes and ones you are going to emit in total, and in each iteration take a random element from this array (and remove it from the array).
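A minimal sketch of that quick-and-dirty idea (my own illustration with hypothetical names, assuming a row count that makes the 20 % split exact):

```python
import math
import random

def make_labels(n, zero_frac=0.2):
    # Pool holding exactly the zeroes and ones we will hand out in total
    zeros = math.ceil(n * zero_frac)
    pool = [0] * zeros + [1] * (n - zeros)
    labels = []
    for _ in range(n):
        # Each iteration: take a random element and remove it from the pool
        labels.append(pool.pop(random.randrange(len(pool))))
    return labels

col = make_labels(100)
print(col.count(0), col.count(1))  # 20 80
```

Each of these labels would then be appended to the corresponding csv row.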
59,170,905
I have a csv that contains 100 rows by three columns of random numbers: ``` 100, 20, 30 746, 82, 928 387, 12, 287.3 12, 47, 2938 125, 198, 263 ... 12, 2736, 14 ``` In bash, I need to add another column that will be either a 0 or a 1. However (and here is the hard part), I need to have 20% of the rows with 0s, and 80% with 1s. Result: ``` 100, 20, 30, 0 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 1 125, 198, 263, 0 ... 12, 2736, 14, 1 ``` What I have tried: ``` sed '1~3s/$/0/' mycsv.csv ``` I thought I could replace the 1~3 with a random number, but that doesn't work. Maybe a loop would? Maybe sed or awk?
2019/12/04
[ "https://Stackoverflow.com/questions/59170905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8305680/" ]
Another way to do it is the following: 1. Create a sequence of 0s and 1s with the correct ratio: ``` $ awk 'END{for(i=1;i<=FNR;++i) print (i <= 0.8*FNR) }' file ``` 2. Shuffle the output to randomize it: ``` $ awk 'END{for(i=1;i<=FNR;++i) print (i <= 0.8*FNR) }' file | shuf ``` 3. Paste it next to the file with a <comma>-character as delimiter: ``` $ paste -d, file <(awk 'END{for(i=1;i<=FNR;++i) print (i <= 0.8*FNR) }' file | shuf) ``` The reason I do not want to use any form of random number generator is that this could lead to 100% ones or 100% zeros, or anything of that nature. The above produces the closest possible 80% of ones and 20% of zeros. Another method would be a double parse with awk in the following way: ``` $ awk '(NR==FNR) { next } (FNR==1) { for(i=1;i<NR;i++) a[i] = (i<0.8*(NR-1)) } { for(i in a) { print $0","a[i]; delete a[i]; break } }' file file ``` The above makes use of the fact that `for(i in a)` cycles through the array in an undetermined way. You can see this by quickly doing ``` $ awk 'BEGIN{ORS=","; for(i=1;i<=20;++i) a[i]; for(i in a) print i; printf "\n"}' 17,4,18,5,19,6,7,8,9,10,20,11,12,13,14,1,15,2,16,3, ``` But this is implementation dependent. Finally, you could actually use `shuf` in awk to get to the desired result: ``` $ awk '(NR==FNR) { next } (FNR==1) { cmd = "shuf -i 1-"(NR-1) } { cmd | getline i; print $0","(i <= 0.8*(NR-FNR)) }' file file ```
Adding to the previous reply, here is a Python 3 way to do this: ``` #!/usr/local/bin/python3 import csv import math import random totalOflines = len(open('columns.csv').readlines()) newColumn = ( [0] * math.ceil(totalOflines * 0.20) ) + ( [1] * math.ceil(totalOflines * 0.80) ) random.shuffle(newColumn) csvr = csv.reader(open('columns.csv'), delimiter = ",") i=0 for row in csvr: print("{},{},{},{}".format(row[0],row[1],row[2],newColumn[i])) i+=1 ``` Regards!
59,170,905
I have a csv that contains 100 rows by three columns of random numbers: ``` 100, 20, 30 746, 82, 928 387, 12, 287.3 12, 47, 2938 125, 198, 263 ... 12, 2736, 14 ``` In bash, I need to add another column that will be either a 0 or a 1. However (and here is the hard part), I need to have 20% of the rows with 0s, and 80% with 1s. Result: ``` 100, 20, 30, 0 746, 82, 928, 1 387, 12, 287.3, 1 12, 47, 2938, 1 125, 198, 263, 0 ... 12, 2736, 14, 1 ``` What I have tried: ``` sed '1~3s/$/0/' mycsv.csv ``` I thought I could replace the 1~3 with a random number, but that doesn't work. Maybe a loop would? Maybe sed or awk?
2019/12/04
[ "https://Stackoverflow.com/questions/59170905", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8305680/" ]
This seems to be more a problem of algorithm than of programming. You state in your question: *I need to have 20% of the rows with 0s, and 80% with 1s.* So the first question is what to do if the number of rows is not a multiple of 5. If you have 112 rows in total, 20% would be 22.4 rows, which does not make sense. Assuming that you can redefine your task to deal with that case, the simplest solution would be to assign a 0 to the first 20% of the rows and a 1 to the remaining ones. But say that you want to have some randomness in the distribution of the 0s and 1s. One quick-and-dirty solution would be to create an array consisting of exactly the zeroes and ones you are going to emit in total, and in each iteration take a random element from this array (and remove it from the array).
Adding to the previous reply, here is a Python 3 way to do this: ``` #!/usr/local/bin/python3 import csv import math import random totalOflines = len(open('columns.csv').readlines()) newColumn = ( [0] * math.ceil(totalOflines * 0.20) ) + ( [1] * math.ceil(totalOflines * 0.80) ) random.shuffle(newColumn) csvr = csv.reader(open('columns.csv'), delimiter = ",") i=0 for row in csvr: print("{},{},{},{}".format(row[0],row[1],row[2],newColumn[i])) i+=1 ``` Regards!
6,755,758
I am curious: why does it seem that none of the larger open source PHP projects use the MVC pattern, when all the posts on SO promote its use?
2011/07/20
[ "https://Stackoverflow.com/questions/6755758", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143030/" ]
phpBB and PHPMyAdmin (and PHPlist, SquirrelMail and others) are all very old code-bases originating on PHP3 and PHP4. They have not been rewritten to use techniques like MVC or even OO in most cases. PHP coding conventions prior to PHP5 were mainly procedural and it was very common to find application logic inter-mingled with presentation and database logic. In fact, the PHP language encourages inter-mingling presentation and logic since PHP is itself a templating language. As the OO support improved, those coding methods are becoming increasingly discouraged. Newer or rapidly developed code-bases like Drupal, WordPress and the Facebook API do use modern patterns, however.
Most of those were already written in non-MVC PHP, and it worked. Although I am a supporter of MVC (Symfony), I can see why they wouldn't change the codebase to make it MVC.
48,905,062
For several months I have had a compilation error when I want to change the value of an enumeration declared at the beginning of the program (global), in a function that replaces it with an integer. Before, I did not have this problem, but after switching my code from a mini Arduino board to an ESP8266 the problem appeared. Does it not have the same compiler? The error below is still blocking and prevents me from advancing on my project, and I cannot find the solution: `ERROR : request for member 'state' in 'CYCLE_ARROSAGE', which is of non-class type '<anonymous enum>'` Here is a simplified example of the problem: ``` enum { S, // SECURITE N, // NUIT J1_1, J1_2, J1_3, // Luminosité 1 J2_1, J2_2, J2_3, // Luminosité 2 J3_1, J3_2, J3_3, // Luminosité 3 } CYCLE_ARROSAGE; // SECURITE void setup () { CYCLE_ARROSAGE = N; // OK } void loop () { CheckChangementCycleArrosage(J2_2); } void CheckChangementCycleArrosage(int NouveauCycle ){ if(CYCLE_ARROSAGE != NouveauCycle){ Serial.print("CYCLE CHECKE : "); Serial.println(NouveauCycle); // -> 6 Serial.print("CYCLE CHECKE CAST: "); Serial.println(String(NouveauCycle)); // -> 6 Serial.print("CYCLE ARROSAGE: "); Serial.println(CYCLE_ARROSAGE); // -> 1 CYCLE_ARROSAGE = NouveauCycle; // -> ERROR } } ``` What could be the solution? I do not understand.
2018/02/21
[ "https://Stackoverflow.com/questions/48905062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8375960/" ]
You can try the following HTML: ``` This is my text. <i class="far fa-question-circle" data-toggle="tooltip" data-title="This is my tooltip."></i> ``` Since the browser's default behavior is difficult to override and may cause unexpected results, we can choose an alternative way to solve the issue. The Bootstrap 4 tooltip will also show if the attribute is prefixed with `data-`, so you can replace the `title` attribute with `data-title`. Here is a working fiddle: <https://jsfiddle.net/samuelj90/qfcs9azv/18/>
I had issues with the `data-title` from the answers above, instead I had to use the `data-original-title`. You can set this property using the `attr` function from jQuery or directly into the DOM. **HTML:** ``` This is my text. <i class="far fa-question-circle" data-toggle="tooltip"></i> ``` **JavaScript:** ``` $(function () { //Initialize the Bootstrap tooltip $('[data-toggle="tooltip"]').tooltip(); //Force the Tooltip title change at run time $('.fa-question-circle').attr('data-original-title', "This is my tooltip."); }) ``` [Fiddle](https://jsfiddle.net/jrod336/908u17sq/7/)
48,905,062
For several months I have had a compilation error when I want to change the value of an enumeration declared at the beginning of the program (global), in a function that replaces it with an integer. Before, I did not have this problem, but after switching my code from a mini Arduino board to an ESP8266 the problem appeared. Does it not have the same compiler? The error below is still blocking and prevents me from advancing on my project, and I cannot find the solution: `ERROR : request for member 'state' in 'CYCLE_ARROSAGE', which is of non-class type '<anonymous enum>'` Here is a simplified example of the problem: ``` enum { S, // SECURITE N, // NUIT J1_1, J1_2, J1_3, // Luminosité 1 J2_1, J2_2, J2_3, // Luminosité 2 J3_1, J3_2, J3_3, // Luminosité 3 } CYCLE_ARROSAGE; // SECURITE void setup () { CYCLE_ARROSAGE = N; // OK } void loop () { CheckChangementCycleArrosage(J2_2); } void CheckChangementCycleArrosage(int NouveauCycle ){ if(CYCLE_ARROSAGE != NouveauCycle){ Serial.print("CYCLE CHECKE : "); Serial.println(NouveauCycle); // -> 6 Serial.print("CYCLE CHECKE CAST: "); Serial.println(String(NouveauCycle)); // -> 6 Serial.print("CYCLE ARROSAGE: "); Serial.println(CYCLE_ARROSAGE); // -> 1 CYCLE_ARROSAGE = NouveauCycle; // -> ERROR } } ``` What could be the solution? I do not understand.
2018/02/21
[ "https://Stackoverflow.com/questions/48905062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8375960/" ]
You can try the following HTML: ``` This is my text. <i class="far fa-question-circle" data-toggle="tooltip" data-title="This is my tooltip."></i> ``` Since the browser's default behavior is difficult to override and may cause unexpected results, we can choose an alternative way to solve the issue. The Bootstrap 4 tooltip will also show if the attribute is prefixed with `data-`, so you can replace the `title` attribute with `data-title`. Here is a working fiddle: <https://jsfiddle.net/samuelj90/qfcs9azv/18/>
You can achieve this with the `title` attribute, so you don't need to use the `data-title` or `data-original-title` attribute directly; if we are targeting an SEO-friendly page, we need well-written title text. This is not a `Bootstrap4` tooltip issue: the main reason is that when the `svg` tag is created by the `fontawesome` script for the icon, it wraps the `title="hello"` attribute into a `<title>hello</title>` tag inside the svg tag. So we can remove that `title` tag via the `show.bs.tooltip` event. Doc: <https://getbootstrap.com/docs/4.4/components/tooltips/#events> ```js $(function () { $('[data-toggle="tooltip"]').tooltip(); }); $(function () { $('[data-toggle="tooltip"]').on('show.bs.tooltip', function (e) { //Remove title tag from inside created svg tag $(this).find('title').remove(); }); }); ``` ```html <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous"> <script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script> <script src="https://use.fontawesome.com/releases/v5.0.6/js/all.js"></script> <div class="container py-4"> <div class="row"> <div class="col-sm-4"> This is my text. <i class="far fa-question-circle" data-toggle="tooltip" title="This is my tooltip."></i> </div> </div> </div> ```
48,905,062
For several months I have had a compilation error when I want to change the value of an enumeration declared at the beginning of the program (global), in a function that replaces it with an integer. Before, I did not have this problem, but after switching my code from a mini Arduino board to an ESP8266 the problem appeared. Does it not have the same compiler? The error below is still blocking and prevents me from advancing on my project, and I cannot find the solution: `ERROR : request for member 'state' in 'CYCLE_ARROSAGE', which is of non-class type '<anonymous enum>'` Here is a simplified example of the problem: ``` enum { S, // SECURITE N, // NUIT J1_1, J1_2, J1_3, // Luminosité 1 J2_1, J2_2, J2_3, // Luminosité 2 J3_1, J3_2, J3_3, // Luminosité 3 } CYCLE_ARROSAGE; // SECURITE void setup () { CYCLE_ARROSAGE = N; // OK } void loop () { CheckChangementCycleArrosage(J2_2); } void CheckChangementCycleArrosage(int NouveauCycle ){ if(CYCLE_ARROSAGE != NouveauCycle){ Serial.print("CYCLE CHECKE : "); Serial.println(NouveauCycle); // -> 6 Serial.print("CYCLE CHECKE CAST: "); Serial.println(String(NouveauCycle)); // -> 6 Serial.print("CYCLE ARROSAGE: "); Serial.println(CYCLE_ARROSAGE); // -> 1 CYCLE_ARROSAGE = NouveauCycle; // -> ERROR } } ``` What could be the solution? I do not understand.
2018/02/21
[ "https://Stackoverflow.com/questions/48905062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8375960/" ]
I had issues with the `data-title` from the answers above, instead I had to use the `data-original-title`. You can set this property using the `attr` function from jQuery or directly into the DOM. **HTML:** ``` This is my text. <i class="far fa-question-circle" data-toggle="tooltip"></i> ``` **JavaScript:** ``` $(function () { //Initialize the Bootstrap tooltip $('[data-toggle="tooltip"]').tooltip(); //Force the Tooltip title change at run time $('.fa-question-circle').attr('data-original-title', "This is my tooltip."); }) ``` [Fiddle](https://jsfiddle.net/jrod336/908u17sq/7/)
You can achieve this with the `title` attribute, so you don't need to use the `data-title` or `data-original-title` attribute directly; if we are targeting an SEO-friendly page, we need well-written title text. This is not a `Bootstrap4` tooltip issue: the main reason is that when the `svg` tag is created by the `fontawesome` script for the icon, it wraps the `title="hello"` attribute into a `<title>hello</title>` tag inside the svg tag. So we can remove that `title` tag via the `show.bs.tooltip` event. Doc: <https://getbootstrap.com/docs/4.4/components/tooltips/#events> ```js $(function () { $('[data-toggle="tooltip"]').tooltip(); }); $(function () { $('[data-toggle="tooltip"]').on('show.bs.tooltip', function (e) { //Remove title tag from inside created svg tag $(this).find('title').remove(); }); }); ``` ```html <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous"> <script src="https://code.jquery.com/jquery-3.4.1.slim.min.js" integrity="sha384-J6qa4849blE2+poT4WnyKhv5vZF5SrPo0iEjwBvKU7imGFAV0wwj1yYfoRSJoZ+n" crossorigin="anonymous"></script> <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script> <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js" integrity="sha384-wfSDF2E50Y2D1uUdj0O3uMBJnjuUD4Ih7YwaYd1iqfktj0Uod8GCExl3Og8ifwB6" crossorigin="anonymous"></script> <script src="https://use.fontawesome.com/releases/v5.0.6/js/all.js"></script> <div class="container py-4"> <div class="row"> <div class="col-sm-4"> This is my text. <i class="far fa-question-circle" data-toggle="tooltip" title="This is my tooltip."></i> </div> </div> </div> ```
35,932,568
I am coding a MVC5 internet application where some of my Views have hidden values for the ViewModel. Here is an example: ``` @Html.HiddenFor(model => model.id) ``` Is this a safe way to store variables that could potentially be sensitive? By sensitive, I mean variables that should not be seen or changed by any user or any javaScript code.
2016/03/11
[ "https://Stackoverflow.com/questions/35932568", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3736648/" ]
Yes, the end user can see and potentially change the value. All that `HiddenFor` does is render a hidden input tag like so: ``` <input type="hidden" id="id" name="id" value="abc123"/> ``` So, this isn't a safe way to store sensitive data. A *slightly* better way would be a session variable, but that can still be altered by a more savvy user. The answer largely depends on just *how* sensitive the data is. Your best bet is probably saving anything sensitive server side to a database or other data store, and only rendering an ID to the client side so you can validate and retrieve then data when needed.
The only "safe" way is to not send your variables to the client. That aside, you could look at encrypting your values that you send to the clients and decrypting them when they are posted back. A hidden field like below will not be readily apparent to the user and changing the value in the hidden field will invalidate the request. ``` <input type="hidden" id="id" name="id" value="GH1k2ji5as2352"/> ```
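A lighter-weight relative of that encrypt-on-send, decrypt-on-postback idea — my own sketch with hypothetical names, not something the framework provides out of the box — is to sign the hidden value instead: the user can still read it, but any modification is detected when it comes back:

```python
import hashlib
import hmac

SECRET = b"kept-on-the-server-only"  # hypothetical server-side key

def protect(value):
    # Append an HMAC of the value; emit this as the hidden field's value
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return value + "." + mac

def verify(token):
    value, _, mac = token.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison; a tampered value yields None
    return value if hmac.compare_digest(mac, expected) else None

token = protect("abc123")
print(verify(token))                               # abc123
print(verify("evil99." + token.split(".", 1)[1]))  # None
```

Note this only guarantees integrity, not secrecy: the value is still visible to the client, so it is unsuitable for data that must not be *seen* at all.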
3,554,787
I have two MTS video files, each one 2 minutes long. I need to be able to join the files together and convert the format to MPEG4. I have a suitable command line for converting MTS to MP4 but don't know how to join the files together in the first place. Some articles on the web suggest using the CAT command, like: ``` cat video1.mts video2.mts > whole_video.mts ``` However this doesn't work and according to FFMPEG, "whole\_video.mts" is only 2 minutes long, not 4 minutes. Does anyone know how to join the files together? Is FFMPEG the best program to use to do this? Thanks in advance.
2010/08/24
[ "https://Stackoverflow.com/questions/3554787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/312031/" ]
Using cat works. It's just that video players will be somewhat fooled about the video length while reading the resulting whole\_video.mts. There will typically be a sudden timestamp jump where the files were previously cut, but this is okay. You can encode it and then you'll get a correctly timestamped file. Encoding with ffmpeg and then joining with MP4Box is a bad idea. You'll get ugly images with missing blocks at the crossing position if the second file doesn't start with a keyframe (which happens when it has been cut by a camcorder because of the 2GB file limitation). Join first and then encode, not the opposite.
It's OK, I've sorted it. Using the latest SVN versions of FFMPEG, x264 and MP4Box (GPAC), here's what I did... Use FFMPEG to convert the MTS files to MP4 as normal: ``` ffmpeg -i video1.mts -vcodec libx264 -deinterlace -crf 25 -vpre hq -f mp4 -s hd480 -ab 128k -threads 0 -y 1.mp4 ffmpeg -i video2.mts -vcodec libx264 -deinterlace -crf 25 -vpre hq -f mp4 -s hd480 -ab 128k -threads 0 -y 2.mp4 ``` Use MP4Box to join the MP4 files together: ``` MP4Box -cat 1.mp4 -cat 2.mp4 output.mp4 ``` This joins the files together into "output.mp4", however when I use "ffmpeg -i output.mp4" it says the duration is longer than it should be. To fix this, I had to use FFMPEG again: ``` ffmpeg -i output.mp4 -vcodec copy -y final.mp4 ``` And voila! Querying the "final.mp4" file using FFMPEG shows the correct duration and the video plays fine. Hope this helps anyone else experiencing the same problem.
3,554,787
I have two MTS video files, each one 2 minutes long. I need to be able to join the files together and convert the format to MPEG4. I have a suitable command line for converting MTS to MP4 but don't know how to join the files together in the first place. Some articles on the web suggest using the CAT command, like: ``` cat video1.mts video2.mts > whole_video.mts ``` However this doesn't work and according to FFMPEG, "whole\_video.mts" is only 2 minutes long, not 4 minutes. Does anyone know how to join the files together? Is FFMPEG the best program to use to do this? Thanks in advance.
2010/08/24
[ "https://Stackoverflow.com/questions/3554787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/312031/" ]
The following worked perfectly for me (i.e. resulting in seamless joins): ``` ffmpeg -i "concat:00019.MTS|00020.MTS|00021.MTS|00022.MTS" output.mp4 ```
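The `concat:` input in the answer above is just the file names joined with `|`; as a small sketch (file names are only examples), building the ffmpeg argument list programmatically avoids shell-quoting problems with the `|` character:

```python
def concat_command(inputs, output):
    """Build an ffmpeg argv that joins MPEG-TS files via the concat: protocol."""
    return ["ffmpeg", "-i", "concat:" + "|".join(inputs), output]


cmd = concat_command(["00019.MTS", "00020.MTS", "00021.MTS"], "output.mp4")
assert cmd == ["ffmpeg", "-i", "concat:00019.MTS|00020.MTS|00021.MTS", "output.mp4"]
```

Passing this list to something like `subprocess.run(cmd)` would then invoke ffmpeg without the shell interpreting the pipe characters.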
It's OK, I've sorted it. Using the latest SVN versions of FFMPEG, x264 and MP4Box (GPAC), here's what I did... Use FFMPEG to convert the MTS files to MP4 as normal: ``` ffmpeg -i video1.mts -vcodec libx264 -deinterlace -crf 25 -vpre hq -f mp4 -s hd480 -ab 128k -threads 0 -y 1.mp4 ffmpeg -i video2.mts -vcodec libx264 -deinterlace -crf 25 -vpre hq -f mp4 -s hd480 -ab 128k -threads 0 -y 2.mp4 ``` Use MP4Box to join the MP4 files together: ``` MP4Box -cat 1.mp4 -cat 2.mp4 output.mp4 ``` This joins the files together into "output.mp4", however when I use "ffmpeg -i output.mp4" it says the duration is longer than it should be. To fix this, I had to use FFMPEG again: ``` ffmpeg -i output.mp4 -vcodec copy -y final.mp4 ``` And voila! Querying the "final.mp4" file using FFMPEG shows the correct duration and the video plays fine. Hope this helps anyone else experiencing the same problem.
3,554,787
I have two MTS video files, each one 2 minutes long. I need to be able to join the files together and convert the format to MPEG4. I have a suitable command line for converting MTS to MP4 but don't know how to join the files together in the first place. Some articles on the web suggest using the CAT command, like: ``` cat video1.mts video2.mts > whole_video.mts ``` However this doesn't work and according to FFMPEG, "whole\_video.mts" is only 2 minutes long, not 4 minutes. Does anyone know how to join the files together? Is FFMPEG the best program to use to do this? Thanks in advance.
2010/08/24
[ "https://Stackoverflow.com/questions/3554787", "https://Stackoverflow.com", "https://Stackoverflow.com/users/312031/" ]
The following worked perfectly for me (i.e. resulting in seamless joins): ``` ffmpeg -i "concat:00019.MTS|00020.MTS|00021.MTS|00022.MTS" output.mp4 ```
Using cat works. It's just that video players will be somewhat fooled about the video length while reading the resulting whole\_video.mts. There will typically be a sudden timestamp jump where the files were previously cut. But this is okay. You can encode it and then you'll get a correctly timestamped file. Encoding with ffmpeg and then joining with MP4Box is a bad idea. You'll get ugly images with missing blocks at the crossing position if the second file doesn't start with a keyframe (which happens when it has been cut by a camcorder because of the 2GB file limitation). Do join and then encode, not the opposite.
8,001,339
LinqKit has an extension method `ForEach` for `IEnumerable` which clashes with `System.Collections.Generic.IEnumerable`. ``` Error 4 The call is ambiguous between the following methods or properties: 'LinqKit.Extensions.ForEach<Domain>(System.Collections.Generic.IEnumerable<Domain>, System.Action<Domain>)' and 'System.Linq.EnumerableExtensionMethods.ForEach<Domain>(System.Collections.Generic.IEnumerable<Domain>, System.Action<Domain>)' ``` How can I get rid of this error?
2011/11/03
[ "https://Stackoverflow.com/questions/8001339", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264140/" ]
`Enumerable`, in the framework, does not declare an extension for `ForEach()`. Both of these are from external references. You should consider only using one of them - either the reference that's adding `EnumerableExtensionMethods` or the `LinqKit`. (This, btw, is one reason that using the same namespace as the framework causes problems - in this case, the author of `EnumerableExtensionMethods` placed it in `System.Linq`, which is going to cause an issue any time you're using Linq and you have a namespace clash.) If you truly need to use this method, then you'll have to call it directly instead of using the extension method, ie: ``` LinqKit.Extensions.ForEach(collection, action); ``` Or: ``` System.Linq.EnumerableExtensionMethods.ForEach(collection, action); ``` That being said, I would personally just use a foreach loop to process the elements.
You simply need to fully-qualify the method that you're calling, as in the error message. So, instead of using ``` ForEach<Domain>( ... ); ``` use ``` LinqKit.Extensions.ForEach<Domain>( ... ); ```
13,274,953
I'm trying to clean up our work-site Team Foundation Server 2010 defaultcollection. Unfortunately we originally set it up with a whole bunch of projects at the root level of the defaultcollection. Now we want to clean it up by moving a bunch of those projects into a root-level archive directory, while preserving the history of the projects. This is proving extremely difficult. I've read a bunch of stuff online and run some trials, but I'm still having issues. Part of the problem is that projects at the root level seem to be "immune" to a bunch of "normal" actions you can perform on projects in general, such as the Move command (which is greyed-out). If I try to use the command line to perform the move like this: ``` tf.exe move $/TestProj $/Archive/TestProj/ ``` I get: ``` TF10169: Unsupported pending change attempted on team project folder $/Test. Use the Project Creation Wizard in Team Explorer to create a project or the Team Project deletion tool to delete one. ``` So I figured I'd move the contents like this: ``` tf.exe move $/TestProj/* $/Archive/TestProj/ ``` That worked, and history was preserved, but then when I deleted the original project like this: ``` TFSDeleteProject.exe /collection:MYSERVER\DefaultCollection TestProj /force ``` History was lost!
2012/11/07
[ "https://Stackoverflow.com/questions/13274953", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1348592/" ]
Those aren't "root level projects". Those are "Team Projects". There's a lot more to a team project than just source control, so, no, you can't do the same things with a "team project folder" as you could with a lower-level folder. TFS does not use the term "project" the same way that SourceSafe did. In SS, "project" meant pretty much the same thing as "folder".
You can try the /keephistory option... as I understand it, that is supposed to allow you to do what you are trying to do.
48,281,220
I've created a data sync process in Azure so that Azure has created few tables in my SQL Server database in the `Datasync` schema. I want to hide those tables that are located in the `Datasync` schema. Can you guys please suggest how to avoid showing those tables in Azure, or how to hide tables from my SQL Server? [![enter image description here](https://i.stack.imgur.com/xaBDz.png)](https://i.stack.imgur.com/xaBDz.png) [![enter image description here](https://i.stack.imgur.com/yYNMK.png)](https://i.stack.imgur.com/yYNMK.png)
2018/01/16
[ "https://Stackoverflow.com/questions/48281220", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4689622/" ]
There is no `HIDE` feature in SQL Server. Instead, you can deny permission on that table for the users or user groups you do not want to see the tables or objects in your schema. Use the `DENY` keyword to deny permission to certain users, and `REVOKE` to remove an existing permission.
This question is somewhat old, but today I was looking for something similar so that I can organize tables in my database, either by hiding tables whose design is complete, or placing them in some folder so that they do not interfere with the tables that I am currently working on *(and confuse me :P )*. I found this article which discusses [Hiding tables in SSMS Object Explorer](https://sqlstudies.com/2017/04/03/hiding-tables-in-ssms-object-explorer-using-extended-properties/) Using extended properties, the OP was able to hide tables from the view. You can read in more detail in the article itself, but for this answer's purpose, I am reproducing the OP's code here **Hide Table "Person.Address"** ``` EXEC sp_addextendedproperty @name = N'microsoft_database_tools_support', @value = 'Hide', @level0type = N'Schema', @level0name = 'Person', @level1type = N'Table', @level1name = 'Address'; GO ``` **Show Table "Person.Address"** ``` EXEC sp_dropextendedproperty @name = N'microsoft_database_tools_support', @level0type = N'Schema', @level0name = 'Person', @level1type = N'Table', @level1name = 'Address'; GO ``` --- I have just used this on one of my tables, `dbo.Users.Logins`; for that I used `@level0name='dbo'` and `@level1name='Users.Logins'` After executing the command, I refreshed the tables list; it took some 45 seconds to refresh (not sure whether that is usual) but after the refresh **the specified table's name was NOT in the list of tables** After removing the extended property, the table's name **was back again**. --- Even when hidden, the table was working properly (`SELECT, INSERT, UPDATE, DELETE` and `JOINS`) --- I am using SQL Server Management Studio v18.4 (© 2019) HTH.
12,019,483
Let me explain my problem. Please excuse me for the long question. Here it goes. I have a View (**BusyProviderView**) ``` <Grid> <xctk:BusyIndicator x:Name="aaa" IsBusy="{Binding IsRunning}" > <xctk:BusyIndicator.BusyContentTemplate> <DataTemplate> <Grid cal:Bind.Model="{Binding}"> <TextBlock Name="Message"/> </Grid> </DataTemplate> </xctk:BusyIndicator.BusyContentTemplate> </xctk:BusyIndicator> </Grid> ``` Which has **View model:** ``` public class BusyProviderViewModel : PropertyChangedBase, IBusyProvider { //two properties with INPC, Message and IsRunning } ``` Again I have a **Shell view** ``` <Window x:Class="MvvmTest.ShellView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="ShellView" Height="300" Width="300"> <Grid> <Button Height="25" x:Name="Run">Run</Button> <ContentControl x:Name="BusyProvider"/> </Grid> ``` Which has a **view model** ``` public class ShellViewModel : PropertyChangedBase, IShellViewModel { private IBusyProvider busyProvider; public ShellViewModel(IBusyProvider busy) { this.BusyProvider = busy; } public IEnumerable<IResult> Run() { yield return new DummyOperation(this.BusyProvider); } public IBusyProvider BusyProvider { get { return this.busyProvider; } set { if (Equals(value, this.busyProvider)) { return; } this.busyProvider = value; this.NotifyOfPropertyChange(() => this.BusyProvider); } } } ``` **DummyOperation** Looks ``` public class DummyOperation : IResult { public IBusyProvider Provider { get; set; } public DummyOperation(IBusyProvider provider) { Provider = provider; } public void Execute(ActionExecutionContext context) { BackgroundWorker worker = new BackgroundWorker(); worker.DoWork += (a, b) => { Provider.IsRunning = true; Provider.Message = "Working"; Thread.Sleep(TimeSpan.FromSeconds(5)); Provider.Message = "Stopping"; Thread.Sleep(TimeSpan.FromSeconds(5)); Provider.IsRunning = false; }; worker.RunWorkerCompleted += (a, b) => { Completed(this, 
new ResultCompletionEventArgs()); }; worker.RunWorkerAsync(); } public event EventHandler<ResultCompletionEventArgs> Completed; } ``` Finally I have **BootStrapper** ``` public class AppBootstrapper : Bootstrapper<IShellViewModel> { private Container container; protected override void Configure() { this.container = new Container(); this.container.Register<IWindowManager,WindowManager>(); this.container.Register<IShellViewModel,ShellViewModel>(); this.container.Register<IBusyProvider, BusyProviderViewModel>(); } protected override object GetInstance(Type serviceType, string key) { return this.container.GetInstance(serviceType); } protected override IEnumerable<object> GetAllInstances(Type serviceType) { return this.container.GetAllInstances(serviceType); } protected override void BuildUp(object instance) { this.container.Verify(); } } ``` Looks Like I have set everything, But When I try to run it throws an exception. ![enter image description here](https://i.stack.imgur.com/bHqNS.png) I am sure the problem is causing by ``` <DataTemplate> <Grid cal:Bind.Model="{Binding}"> <TextBlock Name="Message"/> </Grid> </DataTemplate> ``` > > cal:Bind.Model="{Binding} > > > Once I remove above statement the program runs without a crash but no binding. If you look at the Image, ``` protected override object GetInstance(Type serviceType, string key) { return this.container.GetInstance(serviceType); } ``` serviceType is passed as a **NULL**, and key is **"Please Wait...."** , Where that comes from ??
2012/08/18
[ "https://Stackoverflow.com/questions/12019483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/106986/" ]
It seems by default the [Extended Toolkit](https://wpftoolkit.codeplex.com/)'s `BusyIndicator` uses the string `"Please Wait...."` for the `BusyContent`. So inside the `DataTemplate` the `DataContext` will be the above mentioned string and this causes the confusion and exception in Caliburn. To fix it you need to set the `BusyContent` on the `BusyIndicator` to the current `DataContext` and it will work: ``` <xctk:BusyIndicator x:Name="aaa" IsBusy="{Binding IsRunning}" BusyContent="{Binding}" > <xctk:BusyIndicator.BusyContentTemplate> <DataTemplate> <Grid cal:Bind.Model="{Binding}"> <TextBlock Name="Message"/> </Grid> </DataTemplate> </xctk:BusyIndicator.BusyContentTemplate> </xctk:BusyIndicator> ```
I think Oleg is right though: you cannot use conventions in a DataTemplate with Caliburn (in Caliburn.Micro you can). From the [Documentation - Other Things to Know](http://caliburnmicro.codeplex.com/wikipage?title=All%20About%20Conventions) > > Other Things To Know > On all platforms, conventions cannot be applied to the contents of a DataTemplate. This is a current limitation of the Xaml templating system. I have asked Microsoft to fix this, but I doubt they will respond. As a result, in order to have Binding and Action conventions applied to your DataTemplate, you must add a Bind.Model="{Binding}" attached property to the root element inside the DataTemplate. This provides the necessary hook for Caliburn.Micro to apply its conventions each time a UI is instantiated from a DataTemplate. > > >
63,378,567
I am trying to run an `ember.js` app on my laptop but after installing the `ember-cli` and trying to run `ember --version` command I get an `error`. To install `ember-cli` I used the following command - `npm install -g ember-cli` It is probably important to mention that when I run `ember --version` command outside of the app directory it works, but when I run it inside app directory it crashes and gives `error`. `Node.js` version - `8.11.3` This is the `error` that I get: ``` ember --version /Users/user/go/src/github.com/apps/app/node_modules/ember-cli/node_modules/fs-extra/lib/mkdirs/make-dir.js:85 } catch { ^ SyntaxError: Unexpected token { at createScript (vm.js:80:10) at Object.runInThisContext (vm.js:139:10) at Module._compile (module.js:616:28) at Object.Module._extensions..js (module.js:663:10) at Module.load (module.js:565:32) at tryModuleLoad (module.js:505:12) at Function.Module._load (module.js:497:3) at Module.require (module.js:596:17) at require (internal/module.js:11:18) at Object.<anonymous> (/Users/user/go/src/github.com/apps/app/node_modules/ember-cli/node_modules/fs-extra/lib/mkdirs/index.js:3:44) ``` As suggested in comments by Buck Doyle I changed `Node.js` version to `14.5.0` and ran `ember --version` command again, but I got a different error this time: ``` Node Sass does not yet support your current environment: OS X 64-bit with Unsupported runtime (83) For more information on which environments are supported please see: https://github.com/sass/node-sass/releases/tag/v3.13.1 Stack Trace and Error Report: /var/folders/bt/p_dtgwnd23gbv8nc7v_wpzmr0000gn/T/error.dump.ddc14c42b05e40a5181262bd0b9ad027.log ```
2020/08/12
[ "https://Stackoverflow.com/questions/63378567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6212903/" ]
> > It is probably important to mention that when I run ember --version command outside of the app directory it works, but when I run it inside app directory it crashes and gives error. > > > Just wanted to answer this small piece. When you run `ember` inside of a directory with a `package.json` that includes `ember-cli` it will run the version of ember installed there. This is really nice when you're moving between a few apps with different versions of `ember-cli`, but can be surprising in situations like this.
Replace `catch` with `catch (error)`. I just had the same problem and solved it by replacing `catch` with `catch (error)`, so you need to add `(error)` next to each `catch`. I needed to change this in about 10 files, so change one occurrence and run it again each time to see the new location of the error. Best of luck.
22,814,315
I count the vowels and the consonant in a string. Now I want to display the most used vowel and consonant in this string the code that I have for the counting ``` private void Button_Click(object sender, RoutedEventArgs e) { char[] charArray = new char[] { 'a', 'e', 'i', 'o', 'u' }; string line = testBox.Text.ToLower(); char letter; int vowels = 0; int sug = 0; for (int i = 0; i < line.Length; i++) { letter = line[i]; if (charArray.Contains(letter)) vowels++; if (!charArray.Contains(letter)) sug++; } MessageBox.Show("number of vowels is" + vowels.ToString()); MessageBox.Show("number of vowels is" + sug.ToString()); } ```
2014/04/02
[ "https://Stackoverflow.com/questions/22814315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3454520/" ]
Make the vowels and consonants lists instead of an int counter; then you can manipulate each list at a later stage. ``` private void Button_Click(object sender, RoutedEventArgs e) { char[] charArray = new char[] { 'a', 'e', 'i', 'o', 'u' }; string line = testBox.Text.ToLower(); char letter; List<char> vowels = new List<char>(); List<char> sug = new List<char>(); for (int i = 0; i < line.Length; i++) { letter = line[i]; if (charArray.Contains(letter)) vowels.Add(letter); else sug.Add(letter); } MessageBox.Show("number of vowels is " + vowels.Count); MessageBox.Show("number of consonants is " + sug.Count); MessageBox.Show("most used vowel: " + vowels.GroupBy(x => x).OrderByDescending(xs => xs.Count()).Select(xs => xs.Key).First()); MessageBox.Show("most used consonant: " + sug.GroupBy(x => x).OrderByDescending(xs => xs.Count()).Select(xs => xs.Key).First()); } ```
OK, here is one way to do it. It may be a little more advanced due to the heavy use of LINQ and lambdas. It does work, but I would recommend breaking some of the functionality out into functions. ``` char[] charArray = new char[] { 'a', 'e', 'i', 'o', 'u' }; string line = "bbcccaaaeeiiiioouu"; var vowelCounts = new Dictionary<char, int>(); foreach(var vowel in charArray) { vowelCounts.Add(vowel, line.Count(charInString => vowel == charInString)); } var consonantCounts = new Dictionary<char, int>(); foreach(var consonant in line.Where(charInString => !charArray.Contains(charInString)).Distinct()) { consonantCounts.Add(consonant, line.Count(charInString => consonant == charInString)); } KeyValuePair<char, int> mostUsedVowel = vowelCounts.OrderByDescending(entry => entry.Value).FirstOrDefault(); KeyValuePair<char, int> mostUsedConsonant = consonantCounts.OrderByDescending(entry => entry.Value).FirstOrDefault(); string output1 = String.Format("The Vowel '{0}' was used {1} times", mostUsedVowel.Key, mostUsedVowel.Value); string output2 = String.Format("The Consonant '{0}' was used {1} times", mostUsedConsonant.Key, mostUsedConsonant.Value); MessageBox.Show(output1); MessageBox.Show(output2); ```
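The group-and-order-by-count approach used in the C# answers above maps directly onto Python's `collections.Counter`; this is a language-neutral sketch of the same algorithm, not part of the original answers:

```python
from collections import Counter

VOWELS = set("aeiou")


def most_used(line):
    """Return (vowel, count) and (consonant, count) for the most frequent of each."""
    letters = [c for c in line.lower() if c.isalpha()]
    vowels = Counter(c for c in letters if c in VOWELS)
    consonants = Counter(c for c in letters if c not in VOWELS)
    # most_common(1) plays the role of OrderByDescending(...).First() in the C# code
    return vowels.most_common(1)[0], consonants.most_common(1)[0]


(v, vn), (c, cn) = most_used("bbcccaaaeeiiiioouu")
assert (v, vn) == ("i", 4)  # 'i' appears four times
assert (c, cn) == ("c", 3)  # 'c' appears three times
```

Filtering with `isalpha()` also avoids counting spaces and punctuation as consonants, a flaw shared by the C# versions above.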
22,814,315
I count the vowels and the consonant in a string. Now I want to display the most used vowel and consonant in this string the code that I have for the counting ``` private void Button_Click(object sender, RoutedEventArgs e) { char[] charArray = new char[] { 'a', 'e', 'i', 'o', 'u' }; string line = testBox.Text.ToLower(); char letter; int vowels = 0; int sug = 0; for (int i = 0; i < line.Length; i++) { letter = line[i]; if (charArray.Contains(letter)) vowels++; if (!charArray.Contains(letter)) sug++; } MessageBox.Show("number of vowels is" + vowels.ToString()); MessageBox.Show("number of vowels is" + sug.ToString()); } ```
2014/04/02
[ "https://Stackoverflow.com/questions/22814315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3454520/" ]
Make the vowels and consonants lists instead of an int counter; then you can manipulate each list at a later stage. ``` private void Button_Click(object sender, RoutedEventArgs e) { char[] charArray = new char[] { 'a', 'e', 'i', 'o', 'u' }; string line = testBox.Text.ToLower(); char letter; List<char> vowels = new List<char>(); List<char> sug = new List<char>(); for (int i = 0; i < line.Length; i++) { letter = line[i]; if (charArray.Contains(letter)) vowels.Add(letter); else sug.Add(letter); } MessageBox.Show("number of vowels is " + vowels.Count); MessageBox.Show("number of consonants is " + sug.Count); MessageBox.Show("most used vowel: " + vowels.GroupBy(x => x).OrderByDescending(xs => xs.Count()).Select(xs => xs.Key).First()); MessageBox.Show("most used consonant: " + sug.GroupBy(x => x).OrderByDescending(xs => xs.Count()).Select(xs => xs.Key).First()); } ```
As a string is an enumerable of characters, you can use LINQ's GroupBy function to group by character and then do all kinds of evaluation with the groups: <http://dotnetfiddle.net/dmLkVb> ``` var grouped = line.GroupBy(c => c); var vowels = grouped.Where(g => charArray.Contains(g.Key)); var mostUsed = vowels.OrderBy(v => v.Count()).Last(); Console.WriteLine("Distinct characters: {0}:", grouped.Count()); Console.WriteLine("Vowels: {0}:", vowels.Count()); Console.WriteLine("Most used vowel: {0} - {1}:", mostUsed.Key, mostUsed.Count()); ```
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to pdf. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the other are not created. I would really appreciated if someone can shed some light on this. Thanks. **UPDATE:** I have ran the command using -debug "All" one of the many lines out put says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
You can specify which page to convert by putting a number in [] after the filename: ``` convert D:\test\sample.pdf[7] D:\test\pages\page-7.jpg ``` It should have, however, converted all pages to individual images with your command.
By the way, if you need to convert the first and second pages, provide comma-separated values in the brackets ``` convert D:\test\sample.pdf[0,1] D:\test\pages\page.jpg ``` Resulting JPEG files will be named: * for page 1: `page-0.jpg` * for page 2: `page-1.jpg` You can also do ``` convert D:\test\sample.pdf[10,15,20-22,50] D:\test\pages\page.jpg ``` Resulting JPEG files will be named: * for page 11: `page-10.jpg` * for page 16: `page-15.jpg` * for page 21: `page-20.jpg` * for page 22: `page-21.jpg` * for page 23: `page-22.jpg` * for page 51: `page-50.jpg` Maybe it will help someone.
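The bracket syntax in the answer above is a comma-separated list of zero-based page indices and ranges; a small hypothetical helper (not part of ImageMagick) makes the resulting file naming easy to check:

```python
def expand_pages(spec):
    """Expand an ImageMagick-style page spec like '10,15,20-22,50' into page indices."""
    pages = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            pages.extend(range(int(lo), int(hi) + 1))
        else:
            pages.append(int(part))
    return pages


assert expand_pages("10,15,20-22,50") == [10, 15, 20, 21, 22, 50]
# The output for page index n is named page-<n>.jpg, matching the lists above
assert [f"page-{n}.jpg" for n in expand_pages("0,1")] == ["page-0.jpg", "page-1.jpg"]
```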
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to pdf. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the other are not created. I would really appreciated if someone can shed some light on this. Thanks. **UPDATE:** I have ran the command using -debug "All" one of the many lines out put says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
You can specify which page to convert by putting a number in [] after the filename: ``` convert D:\test\sample.pdf[7] D:\test\pages\page-7.jpg ``` It should have, however, converted all pages to individual images with your command.
According to the site admin at the ImageMagick forum: > > ImageMagick uses the pngalpha device when it finds an Adobe > Illustrator PDF. Many of these are a single page. Ideally, Ghostscript > would support a device that allows multiple PDF pages with > transparency but it doesn't... > > > Easy fix. **Edit delegates.xml and change pngalpha to pnmraw.** > > > This worked for me. I don't know if it introduces any other problems however. See [this post from their forums](http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=18001).
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to pdf. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the other are not created. I would really appreciated if someone can shed some light on this. Thanks. **UPDATE:** I have ran the command using -debug "All" one of the many lines out put says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
You can specify which page to convert by putting a number in [] after the filename: ``` convert D:\test\sample.pdf[7] D:\test\pages\page-7.jpg ``` It should have, however, converted all pages to individual images with your command.
I found this solution which convert all pages in the pdf to a single jpg image: ``` montage input.pdf -mode Concatenate -tile 1x output.jpg ``` montage is included in ImageMagick. Tested on ImageMagick 6.7.7-10 on Ubuntu 13.04.
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to pdf. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the other are not created. I would really appreciated if someone can shed some light on this. Thanks. **UPDATE:** I have ran the command using -debug "All" one of the many lines out put says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
You can specify which page to convert by putting a number in [] after the filename: ``` convert D:\test\sample.pdf[7] D:\test\pages\page-7.jpg ``` It should have, however, converted all pages to individual images with your command.
I ran into a similar problem with Ghostscript. This can be solved by using a `%03d` iterator in the output file name. Here is an example: ``` gs -r300 -dNOPAUSE -dBATCH -sDEVICE=pngalpha -sOutputFile=output-%03d.png input.pdf ``` Here is the reference with detailed information: <https://ghostscript.com/doc/current/Devices.htm>
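The `%03d` placeholder in the answer above is ordinary printf-style, zero-padded numbering; the same formatting in Python shows which file names Ghostscript would produce for each page (a sketch of the naming only, not run against Ghostscript):

```python
# Ghostscript substitutes the one-based page counter into the %03d placeholder.
template = "output-%03d.png"
names = [template % page for page in range(1, 4)]
assert names == ["output-001.png", "output-002.png", "output-003.png"]
```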
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to pdf. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the other are not created. I would really appreciated if someone can shed some light on this. Thanks. **UPDATE:** I have ran the command using -debug "All" one of the many lines out put says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
By the way, if you need to convert the first and second pages, provide comma-separated values in the brackets ``` convert D:\test\sample.pdf[0,1] D:\test\pages\page.jpg ``` Resulting JPEG files will be named: * for page 1: `page-0.jpg` * for page 2: `page-1.jpg` You can also do ``` convert D:\test\sample.pdf[10,15,20-22,50] D:\test\pages\page.jpg ``` Resulting JPEG files will be named: * for page 11: `page-10.jpg` * for page 16: `page-15.jpg` * for page 21: `page-20.jpg` * for page 22: `page-21.jpg` * for page 23: `page-22.jpg` * for page 51: `page-50.jpg` Maybe it will help someone.
According to the site admin at the ImageMagick forum: > > ImageMagick uses the pngalpha device when it finds an Adobe > Illustrator PDF. Many of these are a single page. Ideally, Ghostscript > would support a device that allows multiple PDF pages with > transparency but it doesn't... > > > Easy fix. **Edit delegates.xml and change pngalpha to pnmraw.** > > > This worked for me. I don't know if it introduces any other problems however. See [this post from their forums](http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=18001).
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit. When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to jpg. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the others are not created. I would really appreciate it if someone could shed some light on this. Thanks. **UPDATE:** I have run the command using -debug "All"; one of the many output lines says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
By the way, if you need to convert only the first and second pages, provide the zero-based page indexes as comma-separated values in brackets ``` convert D:\test\sample.pdf[0,1] D:\test\pages\page.jpg ``` The resulting JPEG files will be named: * for page 1: `page-0.jpg` * for page 2: `page-1.jpg` You can also do ``` convert D:\test\sample.pdf[10,15,20-22,50] D:\test\pages\page.jpg ``` The resulting JPEG files will be named: * for page 11: `page-10.jpg` * for page 16: `page-15.jpg` * for page 21: `page-20.jpg` * for page 22: `page-21.jpg` * for page 23: `page-22.jpg` * for page 51: `page-50.jpg` Maybe it will help someone.
I found this solution, which converts all pages of the pdf into a single jpg image: ``` montage input.pdf -mode Concatenate -tile 1x output.jpg ``` montage is included in ImageMagick. Tested with ImageMagick 6.7.7-10 on Ubuntu 13.04.
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit. When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to jpg. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the others are not created. I would really appreciate it if someone could shed some light on this. Thanks. **UPDATE:** I have run the command using -debug "All"; one of the many output lines says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
By the way, if you need to convert only the first and second pages, provide the zero-based page indexes as comma-separated values in brackets ``` convert D:\test\sample.pdf[0,1] D:\test\pages\page.jpg ``` The resulting JPEG files will be named: * for page 1: `page-0.jpg` * for page 2: `page-1.jpg` You can also do ``` convert D:\test\sample.pdf[10,15,20-22,50] D:\test\pages\page.jpg ``` The resulting JPEG files will be named: * for page 11: `page-10.jpg` * for page 16: `page-15.jpg` * for page 21: `page-20.jpg` * for page 22: `page-21.jpg` * for page 23: `page-22.jpg` * for page 51: `page-50.jpg` Maybe it will help someone.
I ran into a similar problem with GhostScript. It can be solved by using the `%03d` iterator in the output file name. Here is an example: ``` gs -r300 -dNOPAUSE -dBATCH -sDEVICE=pngalpha -sOutputFile=output-%03d.png input.pdf ``` Here is the reference with detailed information: <https://ghostscript.com/doc/current/Devices.htm>
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit. When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to jpg. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the others are not created. I would really appreciate it if someone could shed some light on this. Thanks. **UPDATE:** I have run the command using -debug "All"; one of the many output lines says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
According to the site admin at the ImageMagick forum: > > ImageMagick uses the pngalpha device when it finds an Adobe > Illustrator PDF. Many of these are a single page. Ideally, Ghostscript > would support a device that allows multiple PDF pages with > transparency but it doesn't... > > > Easy fix. **Edit delegates.xml and change pngalpha to pnmraw.** > > > This worked for me. I don't know if it introduces any other problems however. See [this post from their forums](http://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=18001).
I ran into a similar problem with GhostScript. It can be solved by using the `%03d` iterator in the output file name. Here is an example: ``` gs -r300 -dNOPAUSE -dBATCH -sDEVICE=pngalpha -sOutputFile=output-%03d.png input.pdf ``` Here is the reference with detailed information: <https://ghostscript.com/doc/current/Devices.htm>
4,809,314
I am having some trouble with ImageMagick. I have installed GhostScript v9.00 and ImageMagick-6.6.7-1-Q16 on Windows 7 - 32Bit. When I run the following command in cmd **convert D:\test\sample.pdf D:\test\pages\page.jpg** only the first page of the pdf is converted to jpg. I have also tried the following command **convert D:\test\sample.pdf D:\test\pages\page-%d.jpg** This creates the first jpg as page-0.jpg but the others are not created. I would really appreciate it if someone could shed some light on this. Thanks. **UPDATE:** I have run the command using -debug "All"; one of the many output lines says: ``` 2011-01-26T22:41:49+01:00 0:00.727 0.109u 6.6.7 Configure Magick[5800]: nt-base.c/NTGhostscriptGetString/1008/Configure registry: "HKEY_CURRENT_USER\SOFTWARE\GPL Ghostscript\9.00\GS_DLL" (failed) ``` Could it maybe have something to do with GhostScript after all?
2011/01/26
[ "https://Stackoverflow.com/questions/4809314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/295654/" ]
I found this solution, which converts all pages of the pdf into a single jpg image: ``` montage input.pdf -mode Concatenate -tile 1x output.jpg ``` montage is included in ImageMagick. Tested with ImageMagick 6.7.7-10 on Ubuntu 13.04.
I ran into a similar problem with GhostScript. It can be solved by using the `%03d` iterator in the output file name. Here is an example: ``` gs -r300 -dNOPAUSE -dBATCH -sDEVICE=pngalpha -sOutputFile=output-%03d.png input.pdf ``` Here is the reference with detailed information: <https://ghostscript.com/doc/current/Devices.htm>
14,360,880
I have a form to collect the user's contact info, and this form is using 3 fields for date of birth. I'm using jQuery UI's Datepicker for selecting the date and ValidationEngine ([source here](https://github.com/posabsolute/jQuery-Validation-Engine) and original developer [here](http://www.position-absolute.com/articles/jquery-form-validator-because-form-validation-is-a-mess/)) for form validation. I want to be sure the date is correct before I submit the form.
2013/01/16
[ "https://Stackoverflow.com/questions/14360880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1983959/" ]
API 10 is Gingerbread, which doesn't support fragments; as you can see in the logcat, the error is inflating the fragment class. You would either need to use a library like `ActionBarSherlock`, use the Android support library, or provide an alternative layout for the Gingerbread version. **UPDATE** If you're using the support library, make sure to use `getSupportFragmentManager()`, not `getFragmentManager()`. Maybe this link will also help: <http://mobile.tutsplus.com/tutorials/android/android-compatibility-working-with-fragments/>
Make sure you have imported Fragment from the support library: ``` import android.support.v4.app.Fragment; ``` If you add the minSdkVersion to your manifest, you can run Lint to see if you are using methods that aren't available in some of your supported versions. In the manifest: ``` <uses-sdk android:minSdkVersion="8" /> ```
14,360,880
I have a form to collect the user's contact info, and this form is using 3 fields for date of birth. I'm using jQuery UI's Datepicker for selecting the date and ValidationEngine ([source here](https://github.com/posabsolute/jQuery-Validation-Engine) and original developer [here](http://www.position-absolute.com/articles/jquery-form-validator-because-form-validation-is-a-mess/)) for form validation. I want to be sure the date is correct before I submit the form.
2013/01/16
[ "https://Stackoverflow.com/questions/14360880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1983959/" ]
API 10 is Gingerbread, which doesn't support fragments; as you can see in the logcat, the error is inflating the fragment class. You would either need to use a library like `ActionBarSherlock`, use the Android support library, or provide an alternative layout for the Gingerbread version. **UPDATE** If you're using the support library, make sure to use `getSupportFragmentManager()`, not `getFragmentManager()`. Maybe this link will also help: <http://mobile.tutsplus.com/tutorials/android/android-compatibility-working-with-fragments/>
The problem is that you are using new API calls. API level 10 only supports `Fragments` through the [support library](http://developer.android.com/tools/extras/support-library.html), but the changes do not happen automatically just by importing the library. You have to make sure you use the library functionality and not the newer API. For example, you need to change your imports to use the support library. It should look something like -- ``` import android.support.v4.app.Fragment; import android.support.v4.app.FragmentManager; ``` Add whatever other imports you need. The other obvious difference, and probably the root of your problem, is that in order to host a `Fragment`, you need to extend your activity from `FragmentActivity` instead of `Activity`. See "[Using the v4 Library APIs](http://developer.android.com/tools/extras/support-library.html#Using)" for more details on the support API vs the regular APIs.
1,519,792
As far as I know, in win32 every program receives, say, 4GB of virtual memory. The memory manager is responsible for offloading chunks of memory from physical memory to disk. Does that imply that malloc or any other memory allocation API will throw an OUT\_OF\_MEMORY exception only when the virtual limit is hit? I mean, is it possible for malloc to fail even if the program is far from its virtual size limit, e.g. because none of the physical memory can be offloaded to disk? Assume the disk has unlimited capacity and no specific limitation is set.
2009/10/05
[ "https://Stackoverflow.com/questions/1519792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53861/" ]
Yes, it's possible. Remember that memory can be fragmented and that `malloc` won't be able to find a sufficiently large chunk to serve the size you requested. This can easily be way before you hit your 4 GiB limit.
The virtual memory limit on Win32 is 2Gb. On Win64, it's much bigger. `malloc` doesn't throw an exception - it returns NULL. NULL return or exception, the memory manager can fail well before the 2Gb limit is reached if: * The paging file isn't big enough. If the paging file is limited, either by policy or by lack of room to expand, then memory allocations that can't be met by page file availability will fail. * Fragmentation. The underlying memory manager allocates memory in 4Kb chunks. It's quite possible, through patterns of allocations and deallocations, to end up with only a small amount of allocated memory but a fragmented virtual address space, meaning that there is no contiguous area large enough to meet a particular request.
1,519,792
As far as I know, in win32 every program receives, say, 4GB of virtual memory. The memory manager is responsible for offloading chunks of memory from physical memory to disk. Does that imply that malloc or any other memory allocation API will throw an OUT\_OF\_MEMORY exception only when the virtual limit is hit? I mean, is it possible for malloc to fail even if the program is far from its virtual size limit, e.g. because none of the physical memory can be offloaded to disk? Assume the disk has unlimited capacity and no specific limitation is set.
2009/10/05
[ "https://Stackoverflow.com/questions/1519792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53861/" ]
Yes, it's possible. Remember that memory can be fragmented and that `malloc` won't be able to find a sufficiently large chunk to serve the size you requested. This can easily be way before you hit your 4 GiB limit.
How about allocating a few smaller areas if a huge one isn't available?
1,519,792
As far as I know, in win32 every program receives, say, 4GB of virtual memory. The memory manager is responsible for offloading chunks of memory from physical memory to disk. Does that imply that malloc or any other memory allocation API will throw an OUT\_OF\_MEMORY exception only when the virtual limit is hit? I mean, is it possible for malloc to fail even if the program is far from its virtual size limit, e.g. because none of the physical memory can be offloaded to disk? Assume the disk has unlimited capacity and no specific limitation is set.
2009/10/05
[ "https://Stackoverflow.com/questions/1519792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53861/" ]
Yes, it's possible. Remember that memory can be fragmented and that `malloc` won't be able to find a sufficiently large chunk to serve the size you requested. This can easily be way before you hit your 4 GiB limit.
For full chapter and verse on Windows virtual memory check out this post on Mark Russinovich's Blog (lots of other great stuff here too): [Pushing the Limits of Windows: Virtual Memory](http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx) If memory fragmentation is your problem and writing custom allocators isn't your thing you could consider enabling the low fragmentation heap: [Low Fragmentation Heap (Windows)](http://msdn.microsoft.com/en-us/library/aa366750%28VS.85%29.aspx) This is on by default these days mind you.
1,519,792
As far as I know, in win32 every program receives, say, 4GB of virtual memory. The memory manager is responsible for offloading chunks of memory from physical memory to disk. Does that imply that malloc or any other memory allocation API will throw an OUT\_OF\_MEMORY exception only when the virtual limit is hit? I mean, is it possible for malloc to fail even if the program is far from its virtual size limit, e.g. because none of the physical memory can be offloaded to disk? Assume the disk has unlimited capacity and no specific limitation is set.
2009/10/05
[ "https://Stackoverflow.com/questions/1519792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53861/" ]
The virtual memory limit on Win32 is 2Gb. On Win64, it's much bigger. `malloc` doesn't throw an exception - it returns NULL. NULL return or exception, the memory manager can fail well before the 2Gb limit is reached if: * The paging file isn't big enough. If the paging file is limited, either by policy or by lack of room to expand, then memory allocations that can't be met by page file availability will fail. * Fragmentation. The underlying memory manager allocates memory in 4Kb chunks. It's quite possible, through patterns of allocations and deallocations, to end up with only a small amount of allocated memory but a fragmented virtual address space, meaning that there is no contiguous area large enough to meet a particular request.
How about allocating a few smaller areas if a huge one isn't available?
1,519,792
As far as I know, in win32 every program receives, say, 4GB of virtual memory. The memory manager is responsible for offloading chunks of memory from physical memory to disk. Does that imply that malloc or any other memory allocation API will throw an OUT\_OF\_MEMORY exception only when the virtual limit is hit? I mean, is it possible for malloc to fail even if the program is far from its virtual size limit, e.g. because none of the physical memory can be offloaded to disk? Assume the disk has unlimited capacity and no specific limitation is set.
2009/10/05
[ "https://Stackoverflow.com/questions/1519792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53861/" ]
The virtual memory limit on Win32 is 2Gb. On Win64, it's much bigger. `malloc` doesn't throw an exception - it returns NULL. NULL return or exception, the memory manager can fail well before the 2Gb limit is reached if: * The paging file isn't big enough. If the paging file is limited, either by policy or by lack of room to expand, then memory allocations that can't be met by page file availability will fail. * Fragmentation. The underlying memory manager allocates memory in 4Kb chunks. It's quite possible, through patterns of allocations and deallocations, to end up with only a small amount of allocated memory but a fragmented virtual address space, meaning that there is no contiguous area large enough to meet a particular request.
For full chapter and verse on Windows virtual memory check out this post on Mark Russinovich's Blog (lots of other great stuff here too): [Pushing the Limits of Windows: Virtual Memory](http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx) If memory fragmentation is your problem and writing custom allocators isn't your thing you could consider enabling the low fragmentation heap: [Low Fragmentation Heap (Windows)](http://msdn.microsoft.com/en-us/library/aa366750%28VS.85%29.aspx) This is on by default these days mind you.
1,519,792
As far as I know, in win32 every program receives, say, 4GB of virtual memory. The memory manager is responsible for offloading chunks of memory from physical memory to disk. Does that imply that malloc or any other memory allocation API will throw an OUT\_OF\_MEMORY exception only when the virtual limit is hit? I mean, is it possible for malloc to fail even if the program is far from its virtual size limit, e.g. because none of the physical memory can be offloaded to disk? Assume the disk has unlimited capacity and no specific limitation is set.
2009/10/05
[ "https://Stackoverflow.com/questions/1519792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/53861/" ]
For full chapter and verse on Windows virtual memory check out this post on Mark Russinovich's Blog (lots of other great stuff here too): [Pushing the Limits of Windows: Virtual Memory](http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx) If memory fragmentation is your problem and writing custom allocators isn't your thing you could consider enabling the low fragmentation heap: [Low Fragmentation Heap (Windows)](http://msdn.microsoft.com/en-us/library/aa366750%28VS.85%29.aspx) This is on by default these days mind you.
How about allocating a few smaller areas if a huge one isn't available?
10,920,255
I have a custom TextBox on a standard Windows Form. In OnLeave() of the TextBox, I am trying to find out the value of a particular custom string property added to the form in its constructor. Form constructor: ``` public partial class FormName : Form { public string psTableName { get; set; } ``` TextBox OnLeave method: ``` protected override void OnLeave(EventArgs e) { try { if (!Convert.ToDouble(this.Text).Equals(this.rnOrigValue)) { ``` Inside the if statement above, I am trying to access ``` this.FindForm().psTableName ``` I have tried looping through the controls with ``` foreach (Control loObject in this.FindForm().Controls) { // Code here } ``` But that only retrieves the TextBox, Labels, etc. How can I find the value of psTableName?
2012/06/06
[ "https://Stackoverflow.com/questions/10920255", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1440549/" ]
First, a string is not a control and therefore won't be returned by `FindForm().Controls`. Since it's a public member, can't you just do: `(this.FindForm() as FormName).psTableName` I would check for null first, but you get the idea.
Does this achieve what you are looking for? ``` protected override void OnLeave(EventArgs e) { string x = this.psTableName; } ```