Where are the Authorized Redirect URIs in the Google API console now? I am using the Google Drive API. I have set up my project, set the credentials on the OAuth 2.0 Client IDs, and filled in every item on the OAuth Consent Screen. I cannot find the authorized redirect URIs on either the OAuth Consent Screen or the OAuth 2.0 Client IDs. Where are they now? My project requires me to set up the redirect URL. I filled in all the items and still can't find the option. Are they not located where the documentation explains? https://developers.google.com/workspace/guides/create-credentials#oauth-client-id What error are you getting exactly? The Authorized Redirect URIs are set under each individual Client ID - see https://developers.google.com/identity/protocols/oauth2/web-server#creatingcred
common-pile/stackexchange_filtered
cap production deploy ruby on rails I tried to deploy a Ruby on Rails project from a GitHub repository. However, I get the following error when I run cap production deploy: /home/deploy/.rbenv/versions/2.1.2/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55:in 'require': cannot load such file -- capistrano/cli (LoadError) from /home/deploy/.rbenv/ versions/2.1.2/lib/ruby/2.1.0/rubygems/core_ext/kernel_require.rb:55:in 'require' from /usr/bin/cap:3:in '<main>' I have deploy.rb starting with: # config valid only for Capistrano 3.1 lock '3.1.0' And I installed Capistrano v3.1 with the following command line: gem install capistrano -v 3.1.0 Does anybody have an idea why I still get this error? Attached Gemfile: source 'https://rubygems.org' gem 'rails', '4.1.0' # Frontend gem 'simple_form' gem 'nested_form', github: 'ryanb/nested_form' gem 'turbolinks' gem 'bootstrap-sass', '~> 3.2.0' gem 'kaminari' # Javascript gem 'gon' gem 'angularjs-rails' gem 'selectize-rails' gem 'js-routes' # Backend gem 'pg' gem 'mongoid', github: 'mongoid/mongoid' gem 'mongoid_geospatial' gem "active_model_serializers" gem 'devise' gem 'state_machine' gem "rolify" gem "pundit" gem 'enumerize' gem 'simple-rss' gem 'tweetstream' gem 'swagger-docs', path: "vendor/gems/swagger-docs-0.1.5" gem 'wkhtmltopdf-binary' gem 'wicked_pdf' gem 'paper_trail', '~> 3.0.3' # Temporary gem 'faker' gem 'factory_girl_rails' # Asset gems gem 'haml-rails' gem 'uglifier', '>= 1.3.0' gem 'coffee-rails', '~> 4.0.0' gem 'jquery-rails' gem 'sass-rails', '~> 4.0.3' gem 'compass-rails' group :development do gem 'spring' # Spring speeds up development by keeping your application running in the background. 
Read more: https://github.com/rails/spring gem 'switch_user' gem 'better_errors' gem 'binding_of_caller' gem 'sextant' gem 'guard-livereload', require: false gem 'capistrano', '~> 3.1.0' gem 'capistrano-bundler', '~> 1.1.2' gem 'capistrano-rails', '~> 1.1.1' gem 'capistrano-rbenv', github: "capistrano/rbenv" end group :test do gem 'rspec-rails' gem 'spring-commands-rspec' gem 'guard-rspec' gem 'fuubar' gem 'capybara' gem 'capybara-webkit' gem 'capybara-email' gem 'capybara-screenshot' gem 'database_cleaner' end group :test, :darwin do gem 'rb-fsevent'# if `uname` =~ /Darwin/ end group :development, :test do gem 'pry-rails' gem 'pry-remote' end # See https://github.com/sstephenson/execjs#readme for more supported runtimes # gem 'therubyracer', platforms: :ruby # Use unicorn as the app server # gem 'unicorn' # Use Capistrano for deployment # gem 'capistrano-rails', group: :development # Use debugger # gem 'debugger', group: [:development, :test] Thanks! Have you installed Capistrano by adding it to the Gemfile? Then run the bundle install command, and then run bundle exec cap install to generate the config files. Then do the configuration as appropriate for your project. Could you also show your Gemfile, so we can try to figure out what is causing this issue. You could try following the instructions from https://github.com/capistrano/capistrano and start from the beginning and see if the same issue is still there. @ChawlaS, thanks for the quick reply. I have added the Gemfile.
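The backtrace itself points at the culprit: /usr/bin/cap is the system-wide Capistrano 2 binstub, whose loader still does require 'capistrano/cli', a file that was removed in Capistrano 3. A minimal sketch of that failing require (a hypothetical repro, assuming no Capistrano 2 is installed):

```ruby
# Capistrano 2's `cap` binstub effectively does `require 'capistrano/cli'`.
# Against Capistrano 3 (or no Capistrano at all) that file does not exist,
# which is exactly the LoadError in the question.
def try_capistrano2_cli
  require 'capistrano/cli' # present only in Capistrano 2.x
  'loaded'
rescue LoadError => e
  e.message
end

puts try_capistrano2_cli
```

The practical fix is to run Capistrano through Bundler, so the Gemfile's 3.1.0 copy is used instead of the system binary: bundle exec cap production deploy (and bundle exec cap install to generate the config files).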
Using smart pointers for class members I'm having trouble understanding the usage of smart pointers as class members in C++11. I have read a lot about smart pointers and I think I do understand how unique_ptr and shared_ptr/weak_ptr work in general. What I don't understand is the real usage. It seems like everybody recommends using unique_ptr as the way to go almost all the time. But how would I implement something like this: class Device { }; class Settings { Device *device; public: Settings(Device *device) { this->device = device; } Device *getDevice() { return device; } }; int main() { Device *device = new Device(); Settings settings(device); // ... Device *myDevice = settings.getDevice(); // do something with myDevice... } Let's say I would like to replace the pointers with smart pointers. A unique_ptr would not work because of getDevice(), right? So that's the time when I use shared_ptr and weak_ptr? No way of using unique_ptr? Seems to me like for most cases shared_ptr makes more sense unless I'm using a pointer in a really small scope? class Device { }; class Settings { std::shared_ptr<Device> device; public: Settings(std::shared_ptr<Device> device) { this->device = device; } std::weak_ptr<Device> getDevice() { return device; } }; int main() { std::shared_ptr<Device> device(new Device()); Settings settings(device); // ... std::weak_ptr<Device> myDevice = settings.getDevice(); // do something with myDevice... } Is that the way to go? Thanks very much! It helps to be really clear as to lifetime, ownership and possible nulls. For example, having passed device to the constructor of settings, do you want to still be able to refer to it in the calling scope, or only via settings? If the latter, unique_ptr is useful. Also, do you have a scenario where the return value of getDevice() is null. If not, just return a reference. Yes, a shared_ptr is correct in 8/10 cases. The other 2/10 are split between unique_ptr and weak_ptr. 
Also, weak_ptr is generally used to break circular references; I'm not sure that your usage would be considered correct. First of all, what ownership do you want for the device data member? You first have to decide that. Ok, I understand that as the caller I could use a unique_ptr instead and give the ownership up when calling the constructor, if I know I won't need it anymore for now. But as the designer of the Settings class I don't know if the caller wants to keep a reference as well. Maybe the device will be used in many places. Ok, maybe that's exactly your point. In that case, I would not be the sole owner and that's when I would use shared_ptr, I guess. And: so smart pointers do replace pointers, but not references, right? this->device = device; Also use initialization lists. As I posted on the accepted answer, there are many implied limitations you accept by making a unique_ptr a member variable, probably chief among them is the implicit deletion of the standard copy constructor. This deletion possibly makes the use of your class highly unintuitive for any programmer downstream. See: https://stackoverflow.com/questions/16030081/copy-constructor-for-a-class-with-unique-ptr A unique_ptr would not work because of getDevice(), right? No, not necessarily. What is important here is to determine the appropriate ownership policy for your Device object, i.e. who is going to be the owner of the object pointed to by your (smart) pointer. Is it going to be the instance of the Settings object alone? Will the Device object have to be destroyed automatically when the Settings object gets destroyed, or should it outlive that object? In the first case, std::unique_ptr is what you need, since it makes Settings the only (unique) owner of the pointed object, and the only object which is responsible for its destruction. Under this assumption, getDevice() should return a simple observing pointer (observing pointers are pointers which do not keep the pointed object alive). 
The simplest kind of observing pointer is a raw pointer: #include <memory> class Device { }; class Settings { std::unique_ptr<Device> device; public: Settings(std::unique_ptr<Device> d) { device = std::move(d); } Device* getDevice() { return device.get(); } }; int main() { std::unique_ptr<Device> device(new Device()); Settings settings(std::move(device)); // ... Device *myDevice = settings.getDevice(); // do something with myDevice... } [NOTE 1: You may be wondering why I am using raw pointers here, when everybody keeps telling that raw pointers are bad, unsafe, and dangerous. Actually, that is a precious warning, but it is important to put it in the correct context: raw pointers are bad when used for performing manual memory management, i.e. allocating and deallocating objects through new and delete. When used purely as a means to achieve reference semantics and pass around non-owning, observing pointers, there is nothing intrinsically dangerous in raw pointers, except maybe for the fact that one should take care not to dereference a dangling pointer. - END NOTE 1] [NOTE 2: As it emerged in the comments, in this particular case where the ownership is unique and the owned object is always guaranteed to be present (i.e. the internal data member device is never going to be nullptr), function getDevice() could (and maybe should) return a reference rather than a pointer. While this is true, I decided to return a raw pointer here because I meant this to be a short answer that one could generalize to the case where device could be nullptr, and to show that raw pointers are OK as long as one does not use them for manual memory management. - END NOTE 2] The situation is radically different, of course, if your Settings object should not have the exclusive ownership of the device. This could be the case, for instance, if the destruction of the Settings object should not imply the destruction of the pointed Device object as well. 
This is something that only you as a designer of your program can tell; from the example you provide, it is hard for me to tell whether this is the case or not. To help you figure it out, you may ask yourself whether there are any other objects apart from Settings that are entitled to keep the Device object alive as long as they hold a pointer to it, instead of being just passive observers. If that is indeed the case, then you need a shared ownership policy, which is what std::shared_ptr offers: #include <memory> class Device { }; class Settings { std::shared_ptr<Device> device; public: Settings(std::shared_ptr<Device> const& d) { device = d; } std::shared_ptr<Device> getDevice() { return device; } }; int main() { std::shared_ptr<Device> device = std::make_shared<Device>(); Settings settings(device); // ... std::shared_ptr<Device> myDevice = settings.getDevice(); // do something with myDevice... } Notice that weak_ptr is an observing pointer, not an owning pointer - in other words, it does not keep the pointed object alive if all other owning pointers to the pointed object go out of scope. The advantage of weak_ptr over a regular raw pointer is that you can safely tell whether weak_ptr is dangling or not (i.e. whether it is pointing to a valid object, or if the object originally pointed to has been destroyed). This can be done by calling the expired() member function on the weak_ptr object. Thx a lot, thinking about the time of destruction was exactly what helped me to understand the difference in usage. Also the note considering the raw pointer was very helpful. If I understand it correctly a weak_ptr would also be possible here (as both are observing pointers), but not necessary? @LKK: Yes, correct. A weak_ptr is always an alternative to raw observing pointers. It is safer in a sense, because you could check if it is dangling before dereferencing it, but it also comes with some overhead. 
If you can easily guarantee that you are not going to dereference a dangling pointer, then you should be fine with observing raw pointers. @LKK, probably the easiest way to make sure that you won't dereference a dangling pointer is not to store it, or pass it to someone who will. Of course, you should also make sure that the owner isn't getting destroyed during the current scope, but this is easier to see since it's local. It's trickier in a multithreaded environment though. In the first case it would probably even be better to let getDevice() return a reference, wouldn't it? So the caller would not have to check for nullptr. @AndyProwl Regarding the first case, just make getDevice() return by reference. AFAIU, the Settings guarantees there must be a device attached, it's not optional. So, no use of pointers needed. @vobject: In this case it could return a reference, yes, or a reference wrapper. However, this can't be generalized to the case where a returned pointer could be null, so I thought I'd just put a pointer in there (my main goal in this answer was to explain the meaning of observing pointers vs owning pointers in a generalized context, and to point out that raw pointers are not that bad when observing pointers are needed). @vobject: Also, returning a reference would make it possible for the user to erroneously write something like: Device myDevice = settings.getDevice(); when they actually meant Device& myDevice = settings.getDevice(); (forgetting the &), which won't happen when returning a pointer. I also feel like checking against nullptr is not needed here, because the function is always guaranteed to return a valid pointer. @AndyProwl and employing usage of auto myDevice = settings.getDevice() has potential of avoiding the mistake of forgetting the &. @chico: Not sure what you mean. 
auto myDevice = settings.getDevice() will create a new instance of type Device called myDevice and copy-construct it from the one referenced by the reference that getDevice() returns. If you want myDevice to be a reference, you need to do auto& myDevice = settings.getDevice(). So unless I am missing something, we're back in the same situation we had without using auto. @AndyProwl true, need to revise auto (and all the universal references stuff). I'd expected auto to resolve to a reference type for the returned reference, my mistake. @AndyProwl It seems likely in this example that Device should be non-copyable anyway (private copy constructor), which would avoid the issue with users accidentally writing Device myDevice = settings.getDevice(). Returning an observing raw pointer implies multiple responsibilities. The pointer semantics imply nullptr is a possible, and valid, result. It would only be through convention or explicit documentation that nullptr needn't be checked. I think a (const) reference should be the choice by default, with an observing pointer as an alternative if you wish to also express that the type is optional. Keep an eye on std::optional for possibly C++14, or check out boost::optional in order to fully express the right semantics. Also, a web search for exempt_ptr should return an ISO C++ proposal for a "dumb smart pointer" whose sole purpose is to express "observing pointer" semantics. @BretKuhns: As I mentioned in a previous comment, it is true that in this case we could return a reference, but my goal in the answer was just to provide a simple solution that 1) could be generalized (applies also when the allowed pointer could be null), and 2) showed that raw pointers are bad only when used for manual memory management. But I agree that in this particular case we could/should return a reference. 
@AndyProwl I saw the earlier comments, but wanted to emphasize that returning the observing pointer in this case is misleading and doesn't convey the correct information to the caller. Earlier comments were more literal, whereas I'm speaking to the API and maintainability of the code. I have a question: why not return a std::unique_ptr<T>& instead of a T*? @Purrformance: Because you don't want to give away the ownership of the object - handing a modifiable unique_ptr to a client opens the possibility that the client will move from it, thus acquiring ownership and leaving you with a null (unique) pointer. @Andy Thank you, that makes sense. What about a std::unique_ptr<T> const&? @Purrformance: While that would prevent a client from moving (unless the client is a mad scientist keen on const_casts), I personally wouldn't do it. It exposes an implementation detail, i.e. the fact that ownership is unique and realized through a unique_ptr. I see things this way: if you want/need to pass/return ownership, pass/return a smart pointer (unique_ptr or shared_ptr, depending on the kind of ownership). If you don't want/need to pass/return ownership, use a (properly const-qualified) pointer or reference, mostly depending on whether the argument can be null or not. @AndyProwl If Settings has no interest in Device's lifetime (it just uses it), is it correct to have a raw pointer as a class field, or is a smart pointer better anyway? @MarcoStramezzi Yes, in that case a raw pointer would be correct. @AndyProwl: You recommend using a raw pointer/reference as getDevice() return type. Let's say Settings owns the Device. Also let's say I make a mistake by using getDevice() the wrong way, storing the returned raw pointer/reference longer than the Settings object lives. I would get hard-to-debug undefined behavior. Could I use smart pointers to prevent that or at least make it easier to debug? I imagine using shared_ptr as member and as return value. 
I would need to throw an exception on destruction if the shared_ptr member does not hold the last reference. Is this feasible? This answer is highly incomplete. For example, it does not mention all the implicit limitations you accept by choosing a unique_ptr over a shared_ptr, probably chief among them is the implicit deletion of the standard copy constructor, see [1]. Please thoroughly research the topic before posting an incomplete answer. [1] https://stackoverflow.com/questions/16030081/copy-constructor-for-a-class-with-unique-ptr class Device { }; class Settings { std::shared_ptr<Device> device; public: Settings(const std::shared_ptr<Device>& device) : device(device) { } const std::shared_ptr<Device>& getDevice() { return device; } }; int main() { std::shared_ptr<Device> device(new Device()); Settings settings(device); // ... std::shared_ptr<Device> myDevice(settings.getDevice()); // do something with myDevice... return 0; } weak_ptr is used only for breaking reference loops. The dependency graph must be a directed acyclic graph. In shared pointers there are 2 reference counts: 1 for shared_ptrs, and 1 for all pointers (shared_ptr and weak_ptr). When all shared_ptrs are removed, the pointed-to object is deleted. When the pointer is needed from a weak_ptr, lock should be used to get it, if it still exists. So if I understand your answer correctly, smart pointers do replace raw pointers, but not necessarily references? Are there actually two reference counts in a shared_ptr? Can you please explain why? As far as I understand, weak_ptr doesn't have to be counted because it simply creates a new shared_ptr when operating on the object (if the underlying object still exists). @BjörnPollex: I created a short example for you: link. I haven't implemented everything, just the copy constructors and lock. The boost version is also thread-safe on reference counting (delete is called only once). 
@Naszta: Your example shows that it is possible to implement this using two reference counts, but your answer suggests that this is required, which I don't believe it is. Could you please clarify this in your answer? @BjörnPollex: Look, this is how the boost implementation works. In boost, atomic counters are used for thread safety. That's all. If you don't believe me, you could easily check it in the boost implementation (or in VS2012's own). @BjörnPollex, in order for weak_ptr::lock() to tell if the object has expired it must inspect the "control block" that contains the first reference count and pointer to the object, so the control block must not be destroyed while there are any weak_ptr objects still in use, so the number of weak_ptr objects must be tracked, which is what the second reference count does. The object gets destroyed when the first ref count drops to zero, the control block gets destroyed when the second ref count drops to zero.
spring security: how to configure a set of test data for authentication I would like to set some test username-password data. I followed different examples and used the following code in spring-security.xml: <security:http auto-config='true'> <security:intercept-url pattern="/logged" access='ROLE_USER' /> <security:form-login login-page='/login.jsp'/> </security:http> <security:authentication-manager> <security:authentication-provider> <security:user-service> <security:user name="user" password="password" authorities="ROLE_USER" /> </security:user-service> </security:authentication-provider> </security:authentication-manager> When I access the logged.jsp page it throws the following exception: java.lang.IllegalArgumentException: Failed to evaluate expression 'ROLE_USER' root cause: org.springframework.expression.spel.SpelEvaluationException: EL1008E:(pos 0): Property or field 'ROLE_USER' cannot be found on object of type 'org.springframework.security.web.access.expression.WebSecurityExpressionRoot' - maybe not public? Any idea? Isn't it just because you have single quotes in 'ROLE_USER'? Try this: <security:intercept-url pattern="/logged/**" access="ROLE_USER" />
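For what it's worth, that SpelEvaluationException usually means expression-based access control is enabled somewhere (e.g. use-expressions="true" on <security:http>), in which case the access attribute must be a SpEL expression such as hasRole(...) rather than a bare role name. A hedged sketch of that variant (the use-expressions attribute here is an assumption about the rest of the configuration):

```xml
<security:http auto-config="true" use-expressions="true">
    <!-- with expressions enabled, wrap the role in hasRole(...) -->
    <security:intercept-url pattern="/logged/**" access="hasRole('ROLE_USER')" />
    <security:form-login login-page="/login.jsp" />
</security:http>
```

If use-expressions is off, the plain access="ROLE_USER" form from the comment above is the correct spelling.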
Use ContentPlaceHolder's default content instead of page's content When a page that uses a master page doesn't have an asp:Content control for one of the master page's ContentPlaceHolders, the default content is shown instead. I want to use that default content on a page that does have an asp:Content control for that ContentPlaceHolder. In the page that uses the master page I want to decide in code whether to use the default content or the page-specific content. How can I show the default content from the master page instead of the content from the asp:Content control for the ContentPlaceHolderID? For instance, say I have a ContentPlaceHolder for a menu. The default content shows a basic menu. The page builds the menu from a query, but if there's no result for the query I want to show the default menu. By default though, the empty asp:Content control will be shown. How do I get the master page's default content instead? The best approach I've come up with is not to use default content in the ContentPlaceHolder. Instead, I added the default content in a PlaceHolder adjacent to the ContentPlaceHolder: <asp:ContentPlaceHolder ID="Menu" runat="server" /><!-- no default content --> <asp:PlaceHolder runat="server" ID="DefaultContentForMenu" Visible="false" EnableViewState="false">Default menu content here</asp:PlaceHolder> Then I added a ForceDefaultContentForMenu property to the master page so that pages that use the master page can specify that the default content should be used even if the page provides its own content. 
The master page's Render method shows the default content if the ForceDefaultContentForMenu property is true or the content placeholder is empty: protected override void Render(HtmlTextWriter writer) { if (ForceDefaultContentForMenu || !Menu.HasControls()) { Menu.Controls.Clear(); DefaultContentForMenu.Visible = true; } base.Render(writer); } Now pages that use the master page will get the default content by default if they don't add their own content for the Menu content placeholder, but can specify that the default content should be used instead of their own content. The only drawback I've found with this approach is that when Visual Studio adds the content area to a page the default content isn't copied. For the work I'm doing this is a benefit rather than a drawback, since if I'm adding the content area to a page it's because I don't want the default content. One way to do it is to clear the controls collection of the placeholder contents in the Page_Load and add your updated menu. protected void Page_Load(object sender, EventArgs e) { if(needsBetterMenu) { ContentPlaceHolder holder = (ContentPlaceHolder)Master.FindControl("ContentPlaceHolder1"); holder.Controls.Clear(); holder.Controls.Add(betterMenu); } } Should probably do this in Init rather than Page_Load, in order to get event firing on betterMenu. I'll remember this hint. And yes, it's better to add the control at the Init stage and register for lifecycle events if needed. This is assuming that the page that uses the master page doesn't have an asp:Content control for ContentPlaceHolder1. I'd like to find a way to access the default content even if the asp:Content has been added. If you're fine with changing the content of the Master Page in the code behind of the child page, you can do the following: Add a runat="server" to the html control in the master page that you want to edit: Site.Master ... <ul id="menu" runat="server"> <li><a href="#">Link 1</a></li> <li><a href="#">Link 2</a></li> </ul> ... 
Then in the code behind of a child page that needs to change the menu content, put the following code: protected void Page_Load(object sender, EventArgs e) { HtmlGenericControl c = Master.FindControl("menu") as HtmlGenericControl; if (c != null) { c.Controls.Clear(); c.Controls.Add(new HtmlGenericControl("li") { InnerHtml = "<a href=\"#\">Link 3</a>" }); } } or whatever html you want to put in the menu control. A page without the code will output the following html: <ul id="ctl00_menu"> <li><a href="#">Link 1</a></li> <li><a href="#">Link 2</a></li> </ul> and a page with the code will display the following: <ul id="ctl00_menu"> <li><a href="#">Link 3</a></li> </ul> Obviously you wouldn't want to use this code as is. This was only a prototype. I would refactor it to allow adding content to any control and throw it into a base page class that all my pages would inherit.
Camera chasing player I want the camera to follow the player, lag a little bit, and then catch up with the player when the player halts. When stopped, the player is in the center. At the moment I'm at the point where I get the camera to move with the player, but I don't know how to get the lag effect! On the camera object: void Update () { var newX = player.transform.position.x; var newZ = player.transform.position.z; var y = transform.position.y; transform.position = new Vector3(newX, y, newZ); } Check out Lerp. It'll allow you to interpolate to the target position over time. You could also try attaching it with a spring. In Unity, LateUpdate is meant to be called after all the other updates are done. You'll want to utilize this method because you want to ensure the player position has been updated before you update the position of your camera. (Actually the Unity documentation mentions something very similar.) Now that we know where to put it, there are two ways I can think of to make the camera follow the player: Use any kind of interpolation: A simple example might be linear interpolation, expressed in parameterized vector notation (vectors are in bold): NewPos = CamPos + t * normalize(PlayerPos - CamPos); Lerp is also a built-in function that's available to you. Using this in LateUpdate will give a similar lag effect. Elastic-based camera: With the spring formula F = k * x: F stands for force, k is a constant based on the spring stiffness (you can choose it as you see fit), and x is the distance between the camera and the player. Note that F can be factored into F = m * a, where m is mass (constant) and a is acceleration, so you can modify the camera position using the calculated values. In general this will give you a spring-like behavior, but you need to multiply with some damping factor so it won't "bounce" forever. One easy way is to just add a percentage of the desired position to the camera position each update. 
void Update () { var newX = player.transform.position.x; var newZ = player.transform.position.z; var y = transform.position.y; var chaseSpeed = 0.05f; // must be a float literal: Vector3 * double does not compile transform.position = transform.position + (new Vector3(newX, y, newZ) - transform.position) * chaseSpeed; }
Registry Monitoring using RegNotifyChangeKeyValue The code below monitors changes made to the registry (add, delete, modify); the wait blocks until some change has occurred. Now I am looking for output telling me in which key the change was performed: the name of the changed key should be the output. void __cdecl _tmain(int argc, TCHAR *argv[]) { DWORD dwFilter = REG_NOTIFY_CHANGE_NAME | REG_NOTIFY_CHANGE_ATTRIBUTES | REG_NOTIFY_CHANGE_LAST_SET | REG_NOTIFY_CHANGE_SECURITY| REG_NOTIFY_THREAD_AGNOSTIC; HANDLE hEvent; HKEY hMainKey; HKEY hKey; LONG lErrorCode; RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("SOFTWARE\\444\\1"), 0, KEY_NOTIFY | KEY_CREATE_SUB_KEY | KEY_ENUMERATE_SUB_KEYS | KEY_QUERY_VALUE | KEY_WOW64_64KEY, &hKey); // Create an event. hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); if (hEvent == NULL) { _tprintf(TEXT("Error in CreateEvent (%d).\n"), GetLastError()); return; } // Watch the registry key for a change of value. lErrorCode = RegNotifyChangeKeyValue(hKey, TRUE, dwFilter, hEvent, TRUE); if (lErrorCode != ERROR_SUCCESS) { _tprintf(TEXT("Error in RegNotifyChangeKeyValue (%d).\n"), lErrorCode); return; } // Wait for an event to occur. _tprintf(TEXT("Waiting for a change in the specified key...\n")); if (WaitForSingleObject(hEvent, INFINITE) == WAIT_FAILED) { _tprintf(TEXT("Error in WaitForSingleObject (%d).\n"), GetLastError()); return; } else { //Get chile events for the event key ... In this case select. Display the key name and values. _tprintf(TEXT("\nChange has occurred.\n")); std::cout << hEvent << std::endl; _tprintf(TEXT("the modified key is",hEvent));//this was commited Sleep(2000); return; } // Close the key. lErrorCode = RegCloseKey(hKey); if (lErrorCode != ERROR_SUCCESS) { _tprintf(TEXT("Error in RegCloseKey (%d).\n"), GetLastError()); return; } // Close the handle. if (!CloseHandle(hEvent)) { _tprintf(TEXT("Error in CloseHandle.\n")); return; } system("pause"); } You haven't asked a question. 
I want to print the key in which the change has occurred. Is it possible to print this? You will have to search for it. (After all, there may be multiple changes.)
Facebook reactions as a metric for user engagement Previously, when Facebook only had a 'like' button and comments for people to respond to a post, user engagement could be evaluated in terms of the number of likes as well as the number of comments. Since making a comment takes more effort than clicking the 'like' button, it could be argued that posts with a high ratio of comments to likes indicate a high level of user engagement (i.e. not someone who just clicks 'like' and moves on). Now that Facebook has expanded the user response to a set of 'reactions', I am wondering how this has changed the way people analyze user engagement. In one way the initial effort has increased, from simply deciding whether or not to click a button to having to choose the appropriate reaction to a post. Does this mean that getting a reaction is 'valued' higher than getting a like? Does this also mean that, because reactions incorporate a little more information than a 'like', comments perhaps lose a little of their value in the process? I am wondering how Facebook reactions are being used as metrics for user engagement, and whether they can be weighed or compared against the original 'likes', or if it is too difficult to draw meaningful comparisons. Excellent question! A study by GlobalWebIndex shows that the number of users who interact or react with a post has increased since Facebook introduced Reactions. To quote the article: The replacement of the one-size-fits-all button appears to have been a hit among Facebook’s users. GlobalWebIndex’s data shows a clear upswing in the number of Facebookers “liking” things on the platform since the new button was launched. It’s now a sizable 8 in 10 who are clicking “like” or "reacting" to posts each month – a 16-point jump on the previous quarter. 
Regarding your question on whether users spend more time deciding which reaction to use, I have yet to find data on that, but one impact might be on how people interact with positive and negative news. A study from Facebook found that when people posted positive news they got a lot of likes, but when they posted negative news, the number of comments was significantly higher. To quote the article: We categorized the top 200 feelings as positive or negative, and whether they related to the poster's self-worth (e.g., feeling accomplished, feeling proud, feeling defeated, feeling stupid all relate to self-worth, while feeling lucky, feeling rested, feeling tired, and feeling furious don't). Ambiguous or neutral feelings (like feeling weird, feeling crazy, feeling hungry) were omitted from analysis (about 11% of feelings). See the paper for the full list of feelings. Roughly 1/3 of feelings shared on Facebook are negative, indicating that people share more than just good news on the site. Then we counted how many likes and comments these posts received. Not surprisingly, posts with positive feelings (like feeling excited) get about 58% more likes, and positive self-worth feelings (like feeling strong) get about 71% more likes. Our analyses control for other factors likely to affect the use of feelings and feedback rates, such as posters' age, gender, friend count, and years using Facebook. But when people share difficult moments, their friends skip the like button and instead write comments. Posts with negative feelings (like feeling upset) get 36% more comments, and negative self-worth feelings (like feeling lonely) get 72% more comments. Figure 1 shows these effects. Figure 1. Posts with positive feeling annotations, especially those related to self-worth (like feeling loved) receive far more likes than posts without feeling annotations.
On the other hand, posts with negative feeling annotations, especially those related to self-worth (like feeling hopeless) receive far more comments and far fewer likes. Error bars are doubled for visibility. Hence, since users earlier did not have an option to communicate displeasure and had to use comments, it will be interesting to see how the new reactions affect comment volume when negative news is posted. Also, this article, which checked people's reactions to posts on popular Facebook pages, shows that Like still leads the pack. To summarize the article: Like is used 92.9% of the time, Love is used 4.6% of the time, HaHa is used 0.3% of the time, Wow is used 1.8% of the time, Sad is used 0.2% of the time, Angry is used 0.2% of the time.
common-pile/stackexchange_filtered
How to open eml file: "Invalid path or URL" I have a snippet of VBA code where I want to press a button in Excel that will open an Outlook template. I get the error Invalid path or URL Sub emailBugReport() Dim myoutapp As Object Dim myitem As Object Set myoutapp = CreateObject("Outlook.Application") Set myitem = myoutapp.Session.OpenSharedItem("C:\Users\kaiba\OneDrive\Desktop\test.eml") myitem.Display End Sub The file path is correct. I thought it had to do with OneDrive, but when I removed it, it gave me another error saying that VBA couldn't find the file. In the Immediate pane, what does ? Dir("C:\Users\kaiba\OneDrive\Desktop\test.eml") give you? Are you sure that this file type '.eml' is something OpenSharedItem can handle? The Microsoft documentation only talks about .vcf, .ics, .msg. Also see: https://stackoverflow.com/a/35271082/11695049
common-pile/stackexchange_filtered
javascript resolveLocalFileSystemURL triggering both success and fail callbacks I am doing a simple file check in my Cordova app using the following back-to-back commands: function initWhipData(dir) { console.log("DIR = " +dir) ; } //Test: window.resolveLocalFileSystemURL(fPath + "whipdata.json", function() { console.log("TEST File Exists")}, function() { console.log("TEST File doesnt exist") } ) ; //Actual: window.resolveLocalFileSystemURL(fPath + "whipdata.json", initWhipData(10), initWhipData(20) ) ; In the console, I get the following and can't understand why: TEST File Exists // expected Dir = 10 // expected Dir = 20 // not expected In the second file check, both success and fail are being called. What am I doing wrong... and not understanding? I then tried the following and got my expected results: window.resolveLocalFileSystemURL(fPath + "whipdata.json", function() {initWhipData(10)}, function() { initWhipData(20)} ) ; It prints out "Dir = 10". Obviously I am not understanding something that I thought I understood. Why does wrapping my functions in a function work, while just directly referencing the function as success/fail callbacks does not work? This is how we call a function and get the result: var a = initWhipData(10); And this is how we get a reference to a function: var a = initWhipData; I mean, you're trying to call the initWhipData function and then pass its result as a parameter instead of passing its reference. That's why initWhipData(10) and initWhipData(20) will be called first, and then the returned values will be passed as parameters. window.resolveLocalFileSystemURL(fPath + "whipdata.json", initWhipData(10), initWhipData(20) ); Below is an example of how to pass a callback function.
window.resolveLocalFileSystemURL(fPath + "whipdata.json", successCallback, //If success then call this function errorCallback //If error happen then call this function ); function successCallback(){ initWhipData(10) } function errorCallback(){ initWhipData(20) } This was the part I needed to understand and/or was missing: you're trying to call initWhipData function then pass the result of it as a parameter instead of passing it's reference - thanks!!
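The reference-versus-invocation distinction above is not specific to JavaScript; here is a minimal Python sketch of the same pitfall (the function names are illustrative, not from any library):

```python
def run_with_callback(value, callback):
    # The receiving function decides when (and whether) to invoke the callback.
    return callback(value)

def init_whip_data(directory):
    return "DIR = " + str(directory)

# Correct: pass the function object itself; it is called later, inside
# run_with_callback.
print(run_with_callback(10, init_whip_data))       # DIR = 10

# Wrong: init_whip_data(20) runs *immediately*, and its return value (a
# string, not a function) is what gets passed as the 'callback'.
try:
    run_with_callback(10, init_whip_data(20))
except TypeError as err:
    print("not callable:", err)
```

Wrapping the call in an anonymous function, like the `function() { initWhipData(10) }` fix above, works because the wrapper is itself an uncalled function object.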
common-pile/stackexchange_filtered
How can I run a command on several buffers / files? I'm working on some code that is spread across several files. I keep finding myself doing whitespace-cleanup and indent-region (on the whole file), and having to do it on each of the six files. Any way to say: 'do tidying up on all .c and .h files currently open', or maybe use Emacs in batch mode from the bash shell? You can mapc a function across a sequence, so e.g. write a cleanup function that does a find-file on a filename and then applies whitespace-cleanup and whatever else you want. Then create a list of the files that you want to apply it to: (setq my-filelist '("file1" "file2" ...)) and then apply the cleanup function to each file with (mapc #'my-cleanup-function my-filelist). One option would be to use ibuffer. You can mark the buffers you want to modify, then use ibuffer-do-eval (bound to E) to evaluate a command on all of them. If you always run the same sequence of steps, you can define a command such as: (defun my-clean-buffer () (interactive) (whitespace-cleanup) (indent-region (point-min) (point-max))) Then use E (my-clean-buffer) as described above. Note that this leaves the buffers modified and not saved, so either include a save in your command or use ibuffer-do-save (bound to S) after the updates to save the marked buffers. ibuffer is so cool! To apply an elisp function to multiple files without any packages you can use eshell. Go to the directory containing the files and create a bash-like expression to match the filenames, like *c. Then create a for loop to open each file, apply the function, and save the file. ~ $ for file in *c { (progn (find-file file) (whitespace-cleanup) (save-buffer)) } You mention files, not buffers. And you say nothing about whether you care whether you visit the files in buffers or whether you are already visiting the files that you want to act on. There are many ways to do such things.
If you just want to act on a set of files, without caring whether you keep them visited in buffers, then yes, you can use Emacs in batch mode or just pass the file names to a shell script. And you say nothing about how the set of files is chosen, e.g., whether the file names exist already as a list or you pick them interactively. The question is really underspecified (too broad). That said, here are a couple of possibilities that use Dired, where you can choose the files by marking them in various ways (e.g., by extension, regexp, date, name). Assuming that you have marked the files you want to act on: You can use ! to apply a shell script or system command to each of them. If you use Dired+ then you can use @ to apply a Lisp function to each of them. (E.g., apply the function posted by @glucas.) C-h f diredp-do-apply-function: @ runs the command diredp-do-apply-function, which is an interactive Lisp function in dired+.el. It is bound to @, menu-bar operate diredp-do-apply-function. (diredp-do-apply-function FUNCTION &optional ARG) Apply FUNCTION to the marked files. With a plain prefix ARG (C-u), visit each file and invoke FUNCTION with no arguments. Otherwise, apply FUNCTION to each file name. Any other prefix arg behaves according to the ARG argument of dired-get-marked-files. In particular, C-u C-u operates on all files in the Dired buffer. You can also use M-+ @ (diredp-do-apply-function-recursive) to act on all marked files in a Dired buffer plus all marked files in any marked subdirs of that buffer that have their own Dired buffers, etc., recursively. (No need to insert the subdirs.)
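If you would rather do part of the tidy-up outside Emacs entirely, the whitespace half is easy to script; here is a Python sketch (the path handling is illustrative) that strips trailing whitespace from every .c and .h file in a directory, leaving re-indentation to Emacs:

```python
import pathlib

def strip_trailing_whitespace(directory):
    """Rewrite each .c/.h file under `directory` with trailing blanks removed."""
    changed = []
    for path in sorted(pathlib.Path(directory).glob("*.[ch]")):
        original = path.read_text()
        cleaned = "\n".join(line.rstrip() for line in original.splitlines())
        if original.endswith("\n"):
            cleaned += "\n"
        if cleaned != original:
            path.write_text(cleaned)
            changed.append(path.name)
    return changed
```

It returns the names of the files it touched, so repeated runs on a clean tree are no-ops.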
common-pile/stackexchange_filtered
What are the Advantages and Disadvantages of jQuery and Glow JavaScript Libraries? Can anyone give a comparison of jQuery and the BBC's Glow JavaScript libraries? BBC's Glow JavaScript Library was just released. No one outside the BBC has much experience with it yet. There's an Ajaxian discussion here. There's a bit of a dustup between jQuery creator John Resig and the BBC here. Glow looks pleasant enough. It'll be interesting to watch changes land. It's open source, hosted at GitHub. jQuery has the advantage of far greater developer support. As with the Creative Archive licenses (mirroring Creative Commons but restricted to the UK), this is arguably a case of 'not-invented-here syndrome' by the BBC. As was pointed out in the thread linked to by Nosredna, Glow's selling point (compatibility with older browsers) could have been integrated with jQuery via the latter's plugin framework. One of the most notable differences is that Glow supports Safari 1.3 and 2.x, whereas jQuery only supports Safari 3+. jQuery is a mature, widely used, and extremely well-tested library. I do not know of any compelling features which Glow offers that can offset those facts. So I would go with jQuery.
common-pile/stackexchange_filtered
Why does gvim often freeze when editing files on a USB memory stick? I'm using gvim 7.2 on Windows XP to edit files on a slow USB memory stick. When other programs are accessing the stick, the editor freezes for several seconds at a time. I have already tried using the "set directory" command to move the Vim swap files to the hard disk. Why is gvim accessing the disk while I'm editing, and what can I do to prevent these freezes? Update: Using process monitoring tools, I found that the freeze occurs when gvim checks whether the file has changed when the window gains focus. Is there any way to turn that off? It could be gvim's autoread feature, which checks files being edited for changes made outside gvim. You can control this feature. See the relevant portion of the gvim FAQ. Great suggestion! Sorry, that's not it -- autoread is turned off by default and I have confirmed that it is off. I would think this is due to the .filename.swp file vim reads and writes from in the same directory as the file you are editing. To get around this, you can do: :set dir=/tmp :vi This assumes that /tmp is fast (and not on the flash drive). You could also do: :set dir=/dev/shm :vi If your entire system is running out of flash, however, you will not get any recovery option after a system crash. You could also put this in your .vimrc on a system booting/running from flash: set dir=/dev/shm This is still seen on vim 7.4, Windows 7 Professional. The above things did not help :(
common-pile/stackexchange_filtered
How is this Makefile being encountered? Here is my simple Makefile: #create an exe file run: link gcc link.o -o run #sketch link.o link.o: main.o sum.o ld -r main.o sum.o -o link.o #sketch main.o main.o: main.c gcc -c main.c -o main.o #sketch sum.o sum.o: sum.c gcc -c sum.c -o sum.o #make clean recipe clean: rm *.o rm run This makefile may be somewhat immature or weak, but my real concern is all about the process by which the targets are being hit. Before asking the actual question, let's first look at its output. gcc -c main.c -o main.o gcc -c sum.c -o sum.o ld -r main.o sum.o -o link.o cc link.o -o link gcc link.o -o run My question is: does the make command seek link.o because link is a dependency of run, or because link.o is mentioned in the command section of run? And again, does it look for main.o and sum.o because they are the dependencies of link.o, or because main.o and sum.o appear in the command section, so that the command in target main.o is encountered first, then sum.o and link.o respectively? Is it because of the files mentioned in the dependencies, or the files mentioned in the commands? It is unclear what "this" refers to in your question. Is it the fact that link is being built in the second to last step? This means the last question in my entire post. What was the make command that produced this output? It was make alone. This is probably a duplicate of your earlier question, What actually is causing `The System Can not find the file specified` problem in make?, but let’s use this one to explain in detail what’s going on.
Make takes your Makefile declarations literally: run: link tells it that run needs link, and the associated recipe tells it that to create run, it should execute gcc link.o -o run link.o: main.o sum.o tells it that link.o needs main.o and sum.o, and the associated recipe tells it that to create link.o, it should execute ld -r main.o sum.o -o link.o main.o: main.c tells it that main.o needs main.c, and the associated recipe tells it that to create main.o, it should execute gcc -c main.c -o main.o sum.o: sum.c does the same for sum.c and sum.o When you run make Make tries to satisfy the first target in the Makefile, run. It sees that there is no link file, and there is no rule in your Makefile which specifies how to build link. However Make “knows” how to build an extensionless file from a .o file, so it uses that built-in rule to build link; that resolves to cc link.o -o link which is where the cc command comes from (strictly speaking, it’s the default value of the $(CC) Make variable). Once link is available, Make considers that the prerequisites for run are satisfied, and it runs the corresponding recipe: gcc link.o -o run Note that link isn’t actually used here. 
It only confuses things; you should write the first rule as run: link.o In detail, Make resolves run as follows: run needs link link has no explicit rule, but can be built using the built-in rule from link.o link.o needs main.o and sum.o main.o needs main.c, which exists sum.o needs sum.c, which exists The prerequisites are now resolved, and Make can run the recipes: gcc -c main.c -o main.o to build main.o gcc -c sum.c -o sum.o to build sum.o ld -r main.o sum.o -o link.o to build link.o cc link.o -o link to build link gcc link.o -o run to build run If you rewrite the run rule as run: link.o, the resolution changes to run needs link.o link.o needs main.o and sum.o main.o needs main.c, which exists sum.o needs sum.c, which exists and the build to gcc -c main.c -o main.o to build main.o gcc -c sum.c -o sum.o to build sum.o ld -r main.o sum.o -o link.o to build link.o gcc link.o -o run to build run Thanks a lot.... Both posts answered...
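The resolution order the answer walks through is essentially a depth-first walk of the prerequisite graph; here is a toy Python sketch of that walk (ignoring timestamps, built-in rules, and everything else real Make does):

```python
def build_order(target, deps, order=None):
    # deps maps each buildable target to its list of prerequisites.
    if order is None:
        order = []
    for prerequisite in deps.get(target, []):
        build_order(prerequisite, deps, order)
    if target in deps and target not in order:
        order.append(target)  # source files (main.c, sum.c) are never "built"
    return order

deps = {
    "run": ["link.o"],
    "link.o": ["main.o", "sum.o"],
    "main.o": ["main.c"],
    "sum.o": ["sum.c"],
}
print(build_order("run", deps))
# ['main.o', 'sum.o', 'link.o', 'run']
```

With run: link.o, prerequisites are resolved depth-first before any recipe runs, which reproduces the build order listed above.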
common-pile/stackexchange_filtered
List to Map of List using java streams I have a class Company public class Company { private String companyid; private List<Employee> employees; } Which is related to Employee in a One2Many relationship public class Employee { private String employeeId; private Company company; } I am supplied with a list of employees and I want to generate a map like Map<companyId, List<Employee>> using Java streams, as it has to be performant. employees.stream().collect( Collectors.groupingBy(Employee::getCompany, HashMap::new, Collectors.toCollection(ArrayList::new))); But the problem is I can't call something like Employee::getCompany().getCompanyId() How can I do this? Any suggestions are welcome. Use a lambda expression instead of a method reference: Map<String, List<Employee>> output = employees.stream() .collect(Collectors.groupingBy(e -> e.getCompany().getCompanyId(), HashMap::new, Collectors.toCollection(ArrayList::new))); Or simply: Map<String, List<Employee>> output = employees.stream() .collect(Collectors.groupingBy(e -> e.getCompany().getCompanyId())); Hi, this line "e -> e.getCompany().getCompanyId()" is causing a compilation problem in Java 8. I can only use Java 8. @ds2799 this should work in Java 8, assuming your Employee class has a getCompany() method that returns a Company and your Company class has a getCompanyId() method that returns a String. Thanks, it is working :) Hi again, just one thing though: it is throwing an NPE with just one item in the list and the beans are completely filled. @ds2799 this means either the input List contains a null value, or it contains an Employee having a null Company, or a Company having a null CompanyId. You can filter out the null values. You were right, thanks again. You can do this: Map<String, List<Employee>> output = employees.stream() .collect(Collectors.groupingBy(e -> e.getCompany().getCompanyId(), HashMap::new, Collectors.toCollection(ArrayList::new))); I am getting a NullPointerException.
With just one item in the list and nothing is null.
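For comparison, the same group-into-a-map-of-lists operation — grouping elements by a key extracted from each one — looks like this in Python (the employee/company data here is made up for illustration):

```python
from collections import defaultdict

# Each employee is (employee_id, company_id) — stand-ins for the getters above.
employees = [("e1", "acme"), ("e2", "acme"), ("e3", "globex")]

by_company = defaultdict(list)
for employee_id, company_id in employees:
    by_company[company_id].append(employee_id)

print(dict(by_company))
# {'acme': ['e1', 'e2'], 'globex': ['e3']}
```

One difference worth noting: Collectors.groupingBy throws a NullPointerException when the classifier returns null (which is what bit the commenter above), whereas a Python dict happily accepts None as a key.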
common-pile/stackexchange_filtered
C: Is this a correct case for type-casting? When I compile the following code (with -Wall), I get no warnings because I cast the *(str)++ to void type. However, if I do not cast that operation to void type, the following warning is emitted: warning: value computed is not used [-Wunused-value] Is this the correct way to get rid of compiler warnings? #include <stdio.h> static void print(char *str) { while(*str) { printf("%c\n", *str); (void)*(str)++; } } int main(void) { print("Hello"); return 0; } you don't need the * when you increment the pointer. You're incrementing the pointer, dereferencing its old value, and doing nothing with the result. That's what the warning says. (void) is the correct way to mark something as 'used' even though it does nothing. But you don't need it here (in fact, it is very rarely needed, usually in macros) - because you don't need a value you don't use, so there is no need to dereference the pointer in the first place. The correct way to get rid of those compiler warnings is to simply do str++; str++ still has a return value, this shouldn't fix it. I don't get warnings when doing this. @RPM: you won't, this is valid C. @quantdev I'm wrong about how gcc works so it seems this does stop the warning, but there's no reason it should -- *(str++) and str++ both return a value -- the former returns a char, the latter returns a char*. @RPM's question is much more cleanly solved by using a for loop than by having a random hanging str++. str++ in a for loop will do exactly the same. Any modern compiler will generate strictly the same machine code for both loops: they are the same! No need to read the return value of an operator that has a side effect. Edit: but I appreciate discussing this, it's positive! @quantdev that's true, I just think that while(foo){ ...; foo++; ...} is poor style -- that's precisely the reason that we have for loops. Reading a lot of the sysvinit programs from Debian, I've seen a pattern of for loops more than while loops.
@PatrickCollins there is a good reason why we have while too. And while(*foo++), by the way. @keltar while(*foo++) doesn't work in this case, since it would increment the pointer before the first printf call, or else I would agree that that is cleaner. This is correct, but you're unnecessarily dereferencing -- you don't need *(str++), just str++. That aside, you probably just want for(; *str; str++) { printf("%c\n", *str); } instead. See also this question. Yes, C has no strings, only char*s. A string literal will be a stack-allocated char*. f("some string") is the same as char tmp[] = "some string"; f(tmp). A string literal is not usually found on the stack. char tmp[] = "some string"; may be on the stack if tmp is a local variable, but this is the location of tmp, not of the string literal "some string" which, if it is explicitly represented in memory at all, is probably in a read-only segment somewhere near the code. From my research, char *str = "hello" is allocated as static read-only memory.
common-pile/stackexchange_filtered
How can I access a non-static member in a static event handler? I already know you can't normally do this, but here's my situation: I have a non-static List<T> that gets added to during normal use, and then dumped to a database at an interval. I want to be able to use AppDomain.CurrentDomain.ProcessExit in order to dump any values in my List<T> that haven't been dumped yet. The List is cleared each time it is dumped. Is there any way I can access this List without the given context even though it is static -> not-static? No. But if you can get access to the instance variables then you can access the data. Ultimately everything is within some context, and with the right designs, methods, and APIs you can do what you are trying. Make the handler non-static. @SLaks or make the list static. Just add your handler as a lambda, in a scope where it has access to your list. var list = new List<string>() { "Item 1", "Item 2" }; AppDomain.CurrentDomain.ProcessExit += (sender, theArgs) => { File.WriteAllLines(@"C:\temp\mylist.txt", list); }; One nice way would be to encapsulate this behaviour inside a subclass of List<T>. public class MyReallyPersistentList<T> : List<T> { public MyReallyPersistentList() { AppDomain.CurrentDomain.ProcessExit += (sender, args) => { var items = this.Select(i => i?.ToString()); File.AppendAllLines(@"C:\temp\mylist.txt", items); }; } } So I can set the handler for ProcessExit even though Application.Run() has already been executed? You never mentioned Application.Run before. Is this a WinForms application? I don't see why it would matter though - you can add a ProcessExit handler any time before your process exits (obviously not after!!) Well, whatever you are using, you are aiming to add the ProcessExit handler just after you create the list. As I say, encapsulating this logic inside a subclass of List is the easiest and safest (and most self-documenting) way to do it.
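The same pattern — registering the exit handler as a closure over the instance data instead of making anything static — exists in other runtimes too. Here is a Python sketch using atexit (the flush target is illustrative; real code would write to a database or file):

```python
import atexit

class PersistentList(list):
    """A list that flushes any remaining items when the process exits."""

    def __init__(self, *args):
        super().__init__(*args)
        # The lambda closes over `self`, so no static/global state is needed.
        atexit.register(lambda: self.flush())

    def flush(self):
        flushed = list(self)   # in real code: dump these items to the database
        self.clear()
        return flushed

items = PersistentList(["Item 1", "Item 2"])
```

Like the ProcessExit handler above, the atexit handler can be registered at any point before the process exits.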
common-pile/stackexchange_filtered
angular.element with and without document.querySelector Inside my directive I want to access an id from my DOM. I integrated jQuery in my project, but the problem is there are two ways I can access my desired DOM element: var myEl = angular.element('#divID'); And var myEl = angular.element(document.querySelector('#divID')); My question is: what is the difference between them? It is new to me that you can directly use angular.element('#divID');. Are you sure that you really retrieve the element correctly? @quirimmo yes that will retrieve the correct element. Yes, but I don't know how and why. See my updated answer below: angular.element is equivalent to jQuery because angular is built on top of jQuery. Did you also include jQuery before the inclusion of AngularJS? Yes, I did include jQuery before AngularJS. Try commenting out that line and (hopefully) the element should not be retrieved in the first way. The reason why it's working also in the first way is that you included jQuery before AngularJS, so actually when you call angular.element, Angular returns to you an instance of $, and you are able to use angular.element with selectors too. Otherwise AngularJS has a built-in version of jQuery that is jqLite, and you need to pass the HTML element inside angular.element in order to get back a jqLite object, that is a standard jqLite object with some more properties added by AngularJS. This snippet includes jQuery after AngularJS. In this case that selector doesn't work.
console.log(angular.element('#test')); <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div id="test"> ciao </div> This snippet includes jQuery before instead, and the selector actually works: console.log(angular.element('#test')); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <div id="test"> hello </div> P.S. I don't know if it is an issue with the actual code snippet engine here on Stack Overflow, because I usually use angular.element passing the HTML element, but I noticed a real decrease in performance using the selector directly with angular.element. The following snippet uses angular.element passing the HTML element even though jQuery is included before AngularJS, and it seems to run much faster than the previous one. You can compare them. But again, maybe it just depends on the current snippet engine in SO console.log(angular.element(document.querySelector('#test'))); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script> <div id="test"> hello </div> Yep, done :) You are welcome. Btw my suggestion is that if you have to use angular.element, provide HTML elements as parameters; otherwise use $/jQuery directly if you want to use jQuery.
document.querySelector is a native browser API. Over time, browsers have implemented more and more APIs. One of jQuery's goals is compatibility with many browsers, and it allowed one to select elements with a query selector before many browsers had implemented the native document.querySelector. As you can see at http://caniuse.com/#feat=queryselector - document.querySelector is now supported natively by all major browsers. However, in your case, there is a third and arguably more efficient option since you avoid the browser having to parse the '#divID' string: var myEl = angular.element(document.getElementById('divID')); // note no # I think that his question is why also directly var myEl = angular.element( '#divID'); is working Thanks @quirimmo I have updated the answer to reflect that. @NateFerrero what angularjs uses by default is jQlite, which is a lite version of jQuery. If you include jQuery before angularjs, then angular will use the included jQuery. But if you are only with angularjs, you actually use jQlite. It's pretty different, you don't have all the methods, and also the methods you have are different and you have limitations with them. For example the find() method in jQuery accepts any kind of selectors, in jQlite you can only provide elements tags @quirimmo yes you are correct. Updated again. Thanks!
common-pile/stackexchange_filtered
Generator and functions I'm not sure what's happening here. Today is my first foray into the world of generators. I put this into the pythontutor visualizer but I'm not understanding why this is happening. The visualizer spits out "generator return instance". I've read the other SO threads that are similar to mine, though I'm unfortunately not understanding why this is. In addition to this specific issue I'd greatly appreciate any thoughts on great ways to learn to use generators correctly and efficiently. Thank you! def even(nums): for number in nums: if number % 2 == 0: yield number def find_evens(number_list): return even(number_list) >>> find_evens([1,2,3,4,5,6]) <generator object even at 0x104f7af10> Take a look at http://stackoverflow.com/documentation/python/196/comprehensions/739/generator-expressions#t=201612122216342369916 The Python REPL is just giving you back the default representation of the generator object. A generator won't yield its values on its own, you'll have to force it to. Many options exist to force it, for example wrapping it in a list: list(find_evens([1,2,3,4,5,6])) [2, 4, 6] Or, similarly, by iterating through it: for i in find_evens([1, 2, 3, 4, 5, 6]): print(i) Both these examples (looping, calling list) will call __next__ on the generator object, forcing it to return the next value according to the statements you've written. As for a bit more information on these, you could always take a look at the Python wiki page on Generators. A generator object has the method __next__(). This method must be called (explicitly or implicitly) to obtain its next value. (It is called implicitly and repeatedly until exhaustion by using the generator object in some contexts - as in a for loop or the list() function.) Once a generator object is created: the 1st use of the __next__() method returns the value of the 1st yield statement, the 2nd use of the __next__() method returns the value of the 2nd yield statement, .......
and so on - until there is no additional yield statement, in which case the generator object is exhausted (and useless) as it from this moment gives - instead of the next value - only the exception StopIteration. Compare >>> even([1, 2, 3, 4, 5, 6]).__next__() 2 >>> even([1, 2, 3, 4, 5, 6]).__next__() 2 >>> even([1, 2, 3, 4, 5, 6]).__next__() 2 with >>> gen = even([1, 2, 3, 4, 5, 6]) >>> >>> gen.__next__() 2 >>> gen.__next__() 4 >>> gen.__next__() 6 >>> gen.__next__() Traceback (most recent call last): File "<pyshell#13>", line 1, in <module> gen.__next__() StopIteration >>> >>> gen.__next__() Traceback (most recent call last): File "<pyshell#13>", line 1, in <module> gen.__next__() StopIteration In the first case a generator is created again and again, while in the second one it is created once and then used repeatedly until its exhaustion (and 1 more time - only for the illustration). You can imagine that you only get a pointer to where a generator is located. By printing it, you print the location where it is. But to actually get the values, you have to say give me the values, and you do that either by doing list or using it in a for loop.
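The even() function in the question can also be written as a generator expression; it produces the same kind of one-shot object, including the opaque repr and the exhaustion behaviour described above:

```python
def find_evens(number_list):
    # A generator expression: equivalent to the yield-based even() above.
    return (n for n in number_list if n % 2 == 0)

gen = find_evens([1, 2, 3, 4, 5, 6])
print(gen)          # <generator object ...> -- the same opaque repr
print(list(gen))    # [2, 4, 6]
print(list(gen))    # [] -- already exhausted; list() swallows StopIteration
```

Calling list() on a fresh generator consumes it entirely, which is why the second list() call gets nothing back.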
common-pile/stackexchange_filtered
Error with package versions In a large document of mine I encountered a strange error, which I reduced to this MWE. The environment NormalText is excluded, so I expect that nothing in it appears in the output. The peculiar thing is that when it contains an \iffalse \fi AND within that a \begin{verse} \end{verse}, the compilation stops with ! Extra \fi. l.24 \end {NormalText} After continuing, the output also shows a part of the excluded NormalText part. If I comment out either the \iffalse \fi or the \begin{verse} \end{verse}, all seems to be normal. Is there a bug in the versions package or in the implementation of the verse environment? Or is it not allowed (then: why not?) to use such constructions? My MWE: \documentclass{article} \usepackage{versions} \excludeversion{NormalText} \includeversion{Remarks} \begin{document} \begin{Remarks} Text in Remarks \end{Remarks} \begin{NormalText} Normal Text 1 \iffalse Normal Text 2 \begin{verse} Text in verse \end{verse} \fi Normal Text 3 \end{NormalText} \end{document} versions does not seem to have been updated since 2005! A lot has changed since then. Do not use \iffalse and \fi; even when skipping over text, TeX sees them, and that can get very tricky. @Jack maybe have a look at https://www.ctan.org/pkg/multiaudience ? @daleif looking at the code I don't think this ever worked. @Ulrike I am working on several (non-technical) documents with 250+ pages and about 20 chapters. I use versions environments like NormalText, Summary, Remarks, Excerpt etc. to select specific pieces of the text. This selection (and more) is done using GNU make. NormalText constitutes the larger parts. Herein I sometimes have to comment out parts that are not relevant anymore, at least at the moment, but I might change my mind about that, so I don't want to delete them. Here I used the \iffalse \fi construct. But after it bit me, I will use % for commenting out. @samcarter I will check multiaudience.
If it allows \iffalse \fi constructs (though I'm afraid it will not), I will use it.
Git issues with shared folders in Vagrant I have never seen this issue before while using Vagrant, so I'm hoping this makes sense to someone. I have a folder that contains a git repository, that is being synced with a Vagrant machine running CentOS 6.5, and seeing some inconsistencies with Git. On my host machine (Mac OSX) if I run git status I get the following: ~/Folder/repo$ git status On branch master Your branch is up-to-date with 'origin/master'. nothing to commit, working directory clean But if I run the same command within my vagrant box, I get the following: vagrant@localhost ~/repo$ git status master # On branch master # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: .codeclimate.yml # modified: .gitattributes # modified: .gitignore # modified: CONTRIBUTING.md # modified: app/commands/.gitkeep # modified: app/commands/CreateModule.php # modified: app/commands/FbAdRevenue.php .... And the list goes on, basically git locally seems to think that every single file has been modified and not committed which is not true. Any idea why this would be the case and how to fix it? Make sure your OSX machine is setup to see the remote host, for updates. git remote -v Do a local diff git diff HEAD^^ <file> and see what it thinks changed. Sometimes timezone issue could cause all your files to show as being different. Good to go on item one @NeerPatel. Ran the git diff command and the only thing it shows is that the mode has changed from 100644 to 100755 Based on the information after you ran the diff, it looks like file permissions changed on the server. Use the following command to ignore permissions git config core.filemode false How do I remove files saying "old mode 100755 new mode 100644" from unstaged changes in Git? I actually wound up changing some configuration in my Vagrant box, and the issue was resolved. 
But since your solution would work as well, I will try to mark this as the accepted answer. @HunterSkrasek - Thanks! Could you then post your own answer seeing as you fixed it? @HunterSkrasek would you mind sharing what Vagrant configuration you adjusted to make it work without changing Git's configuration? We're on the edge of our seats wondering! :] I'll try to, but this was about a year ago and I honestly forgot what I did.
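As a sketch, the suggested fix can be applied and verified like this; inside the guest you would run the git config line in your synced folder, but a throwaway repository demonstrates the setting just as well:

```shell
# Demonstrate the fix in a throwaway repository (inside the Vagrant guest you
# would instead cd into the synced working copy, e.g. ~/repo).
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config core.filemode false      # ignore mode-bit-only (100644 <-> 100755) changes
git config --get core.filemode     # prints: false
```

After setting core.filemode to false, git status no longer reports files whose only change is the executable bit, which is exactly what shared folders tend to mangle.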
Attempt to insert record on page where this table is not allowed I get the message above (Attempt to insert record on page where this table is not allowed) when trying to create a new record from an extension. The "page" i try to create the record in is a sysfolder and not a page t3lib_extMgm::allowTableOnStandardPages('user_myext_categories'); is set there are already records of categories on this page there is no 'type' => definition in TCA, but the table itself is registered in TCA like the other tables from this extension (and they work) Any hints on this? Perhaps the creation of tables of this type is deactivated via Page-TS-Config? Via mod.web_list.allowedNewTables / deniedNewTables it is possible to disable the creation of new tables. You need to check each Page-TS-Config in the rootline or have a look at the info module. Perhaps try to create a new "root" page and add your table there. If that works, it is a Page-TS-Config configuration:) wow, never thought to meet so much irc.freenode.net#typo3 people here ;) thanks, will try this. Nope, no allowedNewTables or deniedNewTables, scanned the whole tree you are root? Tried to create table somewhere else? What happens if you edit some of the existing tables? (perhaps we can chat in #typo3, but i am afk for some minutes) the rootiest, yes. Editing existing records is working. I don't even see the record I want to create in the list when clicking "create new record". There are already records, so I just can press the "new record" icon in the lists table header. (unfortunately no, currently behind a blocking proxy) Check TCA of your table. In section ctrl which value has rootLevel perhaps remove it or set it to 0. in the ext_table you need this: \TYPO3\CMS\Core\Utility\ExtensionManagementUtility::allowTableOnStandardPages('XXXXXX');
Visual c++ Form - How to change the visible properties of a label through a button click? So this is the first time I've attempted to change the visible properties of objects in forms and I've run into some problems. I was hoping someone here could offer some insight. Essentially what i'm trying to do is create a Quiz game where the user is given a question and then provided with four possible answers, upon clicking one of these four buttons I want a label to appear indicating whether that was the correct or incorrect answer. I have a label where I've set its visibility to false, and then through my button's event I want to change the visibility to true. I attempted this through this line of code in the button's event: private: System::Void Answer1_Click(System::Object^ sender, System::EventArgs^ e) { label2.Visible = true } Evidently this doesn't work. Any ideas why? I feel like a total donkey, I was writing in Visual Basic! The proper code would be label2->Visible = true; instead of label2.Visible = true. I hope this can help someone.
CONSUME ABAP stack OData from JAVA Stack I have an ABAP application up and running in the ABAP stack of my Solman. If I build a UI5 application and deploy it in the JAVA stack, will I be able to consume the ABAP OData from the UI5 application? If yes, how can we do this, since both are in different stacks? Generally yes, your SAPUI5 application will be able to consume the OData service. The "generally" means that you may encounter issues with the same-origin policy that browsers apply as a safety regulation, but there are usually ways to solve that. In theory, SAPUI5 applications can connect to any OData service, no matter what system or stack that service resides in - for the application, it is only a URL. You can even connect to OData services hosted by systems other than SAP, as OData is an open standard. The OData service must be visible in the application's network zone, of course. Thanks Florian, but is it possible to open a safe cross-origin connection in the ABAP stack? I know that using SAP Gateway enables you to "republish" OData services that reside in one ABAP system on another ABAP system, thus redeclaring the origin. This is commonly used to serve all SAPUI5 applications from a single frontend hub, while the actual data calls are delegated to the real backend data-providing systems. When mixing Java and ABAP stacks, I don't know whether there is a similar solution.
I found code that I would like to use in my Dev Org, but I don't know how to. Can anyone offer guidance on implementing this code? Here's a link to the code that I would like to implement in my Developer Edition org. The code is used to offer a combination of B2B and B2C in the same org without using person accounts. Any guidance would be appreciated, thanks! EDIT: To clarify, I've so far just copied and pasted the code into the proper place (class, trigger, page, etc.) and created the fields manually, but I feel like I'm missing a better way to be doing this. UPDATE: Copying and pasting each code item in seems to have worked, in addition to a few manual changes indicated in the readme, however, it still refers to 'Downloading the Code', which makes me feel like there is a better way to do this than just copying and pasting all of the code over. Any idea what this is referring to? As mentioned by DavinC, the Force.com IDE is the most common way of handling code deployments, although there is also a command line tool called ant that salesforce provides a toolkit for using in deployments. The IDE however is far friendlier to non-developers, although it's still a rather large and complex peice of software. If I wanted to deploy that source to my org I'd download the whole thing, from the root level and then load the project into the IDE. From there the IDE has a deploy menu that can be used to push all of the data at once to your org. For small projects it can be faster to copy/paste as you've done but once you start working with large projects that include dozens of classes or workflows it quickly becomes more efficient to invest in learning the IDE. Thanks, this might be a stupid question, but from the link you provided to 'root level', how do you actually download the project? That's a whole other can of worms. 
It's actually hosted using a tool called Subversion (http://en.wikipedia.org/wiki/Apache_Subversion), which you can think of as a fancy folder except that it keeps track of who made changes, when, and why. The best Windows tool for working with Subversion (aka SVN for short) repositories is probably TortoiseSVN (http://tortoisesvn.net/downloads.html) I would use the Force.com IDE to manage this code. Force.com IDE
How do I continually monitor for new TCP clients? I have a TCP server that continually monitors for new incoming clients asynchronously and adds them to a client list: public class TcpServer { public List<TcpClient> ClientsList = new List<TcpClient>(); protected TcpListener Server = new TcpListener(IPAddress.Any, 3000); private bool _isMonitoring = false; public TcpServer() { Server.Start(); StartMonitoring(); } public void StartMonitoring() { _isMonitoring = true; Server.BeginAcceptTcpClient(HandleNewClient, null); } public void StopMonitoring() { _isMonitoring = false; } protected void HandleNewClient(IAsyncResult result) { if (_isMonitoring) { var client = Server.EndAcceptTcpClient(result); ClientsList.Add(client); StartMonitoring(); // repeats the monitoring } } } However, I'm having two issues with this code. The first is the StartMonitoring() call in HandleNewClient(). Without it, the server will accept only one incoming connection and ignore any additional connections. What I'd like to do is have it continually monitor for new clients, but something rubs me wrong about the way I'm doing it now. Is there a better way of doing this? The second is the _isMonitoring flag. I don't know how else to stop the async callback from activating and stop it from looping. Any advice on how this can be improved? I'd like to stick to using asynchronous callbacks and avoid having to manually create new threads running methods that have while (true) loops in them. Basically, your StartMonitoring function needs to loop - you'll only accept a single client at a time, and then you'd typically pass the request off to a worker thread, and then resume accepting new connections. The way it's written, as you've stated, it will only accept a single client.
You'll want to expand on this to suit your startup/shutdown/terminate needs, but basically, what you're looking for is StartMonitoring to be more like this: public void StartMonitoring() { _isMonitoring = true; while (_isMonitoring) Server.BeginAcceptTcpClient(HandleNewClient, null); } Note that if _isMonitoring is going to be set by another thread, you'd better mark it as volatile, or you'll likely never terminate the loops. There's a problem with this code. The while() loop will tie up the thread so that I can never set IsMonitoring = false. Does this mean I have to spawn it in a new thread? Yes, you would likely need another thread, or you'd need to poll for incoming connections, instead of blocking on an accept.
common-pile/stackexchange_filtered
Time series label in R I have a dataframe in R where: Date MeanVal 2002-01 37.70722 2002-02 43.50683 2002-03 45.31268 2002-04 14.96000 2002-05 29.95932 2002-09 52.95333 2002-10 12.15917 2002-12 53.55144 2003-03 41.15083 2003-04 21.26365 2003-05 33.14714 2003-07 66.55667 . . 2011-12 40.00518 And when I plot a time series using ggplot with: ggplot(mean_data, aes(Date, MeanVal, group =1)) + geom_line()+xlab("") + ylab("Mean Value") I am getting: but as you can see, the x axis scale is not very neat at all. Is there any way I could just scale it by year (2002,2003,2004..2011)? It's likely your Date variable is a character class here given its format. I'd suggest converting it to a Date class and then plotting. @zack I tried df$Date <- as.Date(df$Date, format="%Y-%m"), it becomes once i do so. this should work: df$Date <- lubridate::ymd(paste0(df$Date, "-01")), you'll need to have installed the lubridate package at some point. Maybe a dupe of this: https://stackoverflow.com/q/11547414/5325862 A reproducible data set would be excellent in this case. Let's use lubridate's parse_date_time() to convert your Date to a date class: library(tidyverse) library(lubridate) mean_data %>% mutate(Date = parse_date_time(as.character(Date), "Y-m")) %>% ggplot(aes(Date, MeanVal)) + geom_line() Similarly, we can convert to an xts and use autoplot(): library(timetk) mean_data %>% mutate(Date = parse_date_time(as.character(Date), "Y-m")) %>% tk_xts(silent = T) %>% autoplot() This achieves the plot above as well. library(dplyr) mean_data %>% mutate(Date = as.integer(gsub('-.*', '', Date)) %>% #use the mutate function in dplyr to remove the month and cast the #remaining year value as an integer ggplot(aes(Date, MeanVal, group = 1)) + geom_line() + xlab("") + ylab("Mean Value") While this code may answer the question, providing additional context regarding why and/or how this code answers the question improves its long-term value.
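The core of the fix, anchoring a day onto "YYYY-MM" strings so they parse as real dates, can be illustrated outside R as well. This Python sketch mirrors the lubridate::ymd(paste0(Date, "-01")) idea from the comments:

```python
from datetime import datetime

def parse_year_month(s):
    # "YYYY-MM" carries no day component, which is why a plain date parse
    # fails; anchoring to the 1st of the month makes it a complete date.
    return datetime.strptime(s + "-01", "%Y-%m-%d")

print(parse_year_month("2002-01"))  # 2002-01-01 00:00:00
```

Once the column holds real dates instead of character strings, the plotting library can place yearly tick marks on its own.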
Characteristics for nonlinear waves governed by $u_t + c(u)u_x = 0$ I am somewhat confused about the characteristics of nonlinear wave propagation. I read the PDF from A. Salih at IIST about the inviscid Burgers' equation; the original PDF can be found at (https://www.iist.ac.in/sites/default/files/people/IN08026/Burgers_equation_inviscid.pdf) I do not fully understand some statements about the characteristic lines in this PDF. Consider the 1D nonlinear advection equation $$ u_t + c(u)u_x = 0$$ where the wave speed is not constant but a nonlinear term $c(u)$. As the PDF above states, we define the characteristic curve as $$ \frac{dx}{dt} = c(u).$$ Then, letting $x = x(t)$, we have $$\frac{d}{dt}u(x(t),t) = \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x}\frac{dx}{dt} = u_t +c(u)u_x = 0$$ Therefore $u$ is constant along the characteristic curve, and the characteristic curve is a straight line since $$ \frac{d^2x}{dt^2} = \frac{d}{dt}\left(\frac{dx}{dt}\right) = \frac{dc(u)}{dt} = c'(u)\frac{du}{dt} = 0$$ There are three things I do not understand: (1) why we can just assume that $x$ depends on $t$; (2) why we say that $u$ is constant along the characteristic curve: certainly $\frac{d}{dt}u(x(t),t) = 0$ shows that the solution $u$ does not change in time, but I do not understand what logic shows that $u$ is constant along $\frac{dx}{dt} = c(u)$; (3) why the characteristic curve is a straight line because $\frac{d^2x}{dt^2} = c'(u)\frac{du}{dt} = 0$: how do I know that the derivative of $c(u)$ is equal to zero? Could someone help me? There is some confusion with the derivation's steps and notation, which OP presents in the wrong logical order. Below, the proper way is shown (see also this related post). First things first, we write the initial-value problem for the quasi-linear conservation law $$ u_t + c(u) u_x =0 , \qquad u(x,0) = \phi(x). $$ The independent variables are position $x$ and time $t$, and the unknown is $u(x,t)$.
The method of characteristics consists in seeking a parametrisation $s \mapsto \big(x(s),t(s),u(s)\big)$ of these quantities, in such a way that the PDE transforms into ordinary differential equations which we might be able to solve. Note in passing that the dependence of $u$ w.r.t. $s$ can be expressed as $u = u(x(s), t(s))$, and similar notation can be used for the partial derivatives. Using the chain rule, the evolution of $u$ is governed by \begin{aligned} \frac{d}{ds}u(s) &= x'(s) u_x(s) + t'(s) u_t(s) \\ &= \left[x'(s) - c(u(s))t'(s)\right] u_x(s) \end{aligned} where we have used the PDE. As shown in Wikipedia, we may write the system $$ \frac{dt}{ds} = 1, \quad \frac{dx}{ds} = c(u), \quad \frac{du}{ds} = 0 $$ with initial condition $t(0) = 0$, $x(0) = x_0$ and $u(0) = \phi(x_0)$, which resolution can be tackled by hand. Here, we find $$ t=s, \quad x = x_0 + c(\phi(x_0))\, s,\quad u = \phi(x_0). $$ Now let us go back to OP's questions. Given that $t=s$, this can be rewritten as $$ x = x_0 + c(\phi(x_0))\, t,\quad u = \phi(x_0), $$ as a consequence of the choice $x' = c(u)$ with $u'=0$. Now, we can express the unknown as $u = u(x(t), t)$, and similar notation can be used for the partial derivatives. We note that $u = \phi(x_0)$ is constant along the characteristic curve $t \mapsto x(t)$ starting at $(x_0, 0)$, and that those curves are straight lines in the $x$-$t$ plane. In fact, computation of the derivative of $u$ along those lines gives $$ \frac{d}{dt}u(t) = x'(t) u_x(t) + u_t(t) = 0 $$ according to the definition of the characteristic curves and the PDE itself. Since the slope $x' = c(u)$ of these lines is a function of $u$ with $u$ constant, we can conclude that $x'$ is constant too.
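As a numerical sanity check of the answer above (assuming the inviscid Burgers case $c(u) = u$ and a Gaussian initial profile, both chosen here purely for illustration), the characteristics really are straight lines carrying constant $u$:

```python
import math

phi = lambda x: math.exp(-x * x)  # assumed initial condition u(x, 0) = phi(x)

def characteristic(x0, t):
    """Position at time t of the characteristic issued from (x0, 0),
    using x(t) = x0 + c(phi(x0)) * t with c(u) = u."""
    return x0 + phi(x0) * t

# u keeps the value phi(x0) along each line, so the slope never changes.
for x0 in (-1.0, 0.0, 1.0):
    xs = [characteristic(x0, t) for t in (0.0, 0.5, 1.0)]
    print(f"x0 = {x0:+.1f}: x(t) = {xs}, u = {phi(x0):.4f}")
```

Equal time steps produce equal increments in $x$, which is the "straight line" statement; the carried value $\phi(x_0)$ never changes along the line, which is the "constant $u$" statement.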
How to manipulate a town to improve it? So there's this town of 80.000-90.000 people which grew quickly with the industrialization boom, but nowadays there are no big companies here, nor good opportunities for young people. So the youth is leaving this town either to study anywhere else or to find better jobs, and older people are the only ones who stay. The setting is like Earth today, except some random people (only 2 in this town) have certain superpowers. There's a kind of overpowered "hero" (call him Mike) with superspeed and pyrokinesis who fights crime or anything he considers wrong. The other one plays the role of the super-rich "villain" (call him James). He has the power to manipulate people. So the thing is: James wants to actually improve this town, he just doesn't care how. Anything that works would be fine. Considering that he can convince local politicians to make any law he wants and that he has the money to invest in anything, how should he use this in order to make people want to work / live here? Requirements: I'm looking for a way to create jobs for the people living here (and making it attractive for outsiders to move to this town from anywhere else). To achieve this, he will use mainly his wealth, but he can use his manipulation as a secondary tool. He's not giving his money as a present, he will also get benefits from this on the long run. So, he has to start some kind of business aiming to ruthlessly take control of the city at the same time that he is creating opportunities for the people. Edit: I wanted to make the line between Hero/Villain kinda blurry. Mike doesn't care for the wellbeing of the people, he just uses his powers to stop those he thinks are wrong. James is a ruthless businessman. He does what he thinks would be beneficial on the long run, even to the point of ruining the lives of certain people on short term. Think for example wrecking an abandoned building where homeless people sleep at night. 
He buys the place and builds a factory/hotel... Something that will create jobs, etc. Additional info: James is in his 30s, Mike in his 20s. Both stay here for emotional reasons (both were raised here, but other than that, no common past). James inherited a fortune from his father's early death, and as he didn't earn it himself, he feels as if he has to use the money for a good cause. The reason for the superpowers is still unknown; some people have just had them since they were young. Ok, you need to explain WHY he wants to do that. Getting people and business back is not enough, because he could just convince someone to build a superhighway from the town to any hub and the city would bloom, because people would live there and spend money in the city, hence attracting business. Building any road always harms someone, either the people who live in its path or those who must sell their land cheap because they cannot use it. Welcome scrp. Interesting scenario you set up here and I think we can make it on-topic. SE is different from a standard discussion forum. The site is about specific questions with specific supported answers. As it stands your question is really broad. Thousands of equally valid answers exist. I would recommend adding a bulleted list of 1) Constraints: Tell us key things that the situation allows and does not allow. AND 2) Requirements: what does the end state need to look like. In the meantime please check out the [tour] and [help] to get a better idea how the site functions. This is a good resource in particular: How to ask There are entire University departments (Economics, Urban Planning, Geography, etc) full of PhD theses, and many rooms full of books in their libraries dedicated to this kind of question without superpowers... so it seems a bit broad for an answer here. I already found the answer I was looking for. Should I edit the question anyway? Yes, it's always a good idea to keep the question high quality.
Someone else might come up with a similar idea but have a different set-up, so they'll be able to decide on using this question and the answers here, or just ask their own. But they need more details from you. Also, in general OPs on Worldbuilding are encouraged to wait at least 24h from asking their question before they accept an answer, as an accepted answer might discourage some others from publishing their own different and valuable answers. What could work for James is killing two birds with one stone: Make the city prosperous (at least look like one from the inhabitants' perspective) Make money out of it (concealing it as much as possible) and preferably gaining more and more control over the city So one possible course of action is to establish a company that will be very diverse, getting into control of almost everything that is happening in the city. On one hand it'll provide services external to the city (to earn money), offering well-paid jobs, but it'll also keep offering more and more goods and services to the city. So, build a chain of cheap shops everywhere in the city. They are cheap, so most people will turn to buy there, effectively ruining all the local merchants (but hey, you can always work at James's-buy-it-all, you'll just never get really rich). Control education, with the education path suited perfectly to the needs of the company, offering it for free to the citizens (and push politicians to subsidize it). There is just one catch - if you use this, you need to work somewhere for James for n years, otherwise you'll have to pay back all the investments in your education (fair, isn't it?). The deal is good anyway, as at some point the only place you can work will be James's, but hush... Build extra good healthcare, with hospitals and other medical facilities run by James's company. Again, this is free for citizens and subsidized by the city. Run the municipal transport, restaurants and so on.
Promote electrical transport (but only cars produced in James's factory are subsidized and can park in those green parking lots that now occupy half of the city). In general, grab control over everything in the city, plus build a business facing the external world as well. There are a few tricks happening here. One is using James's superpowers to persuade the city council to put more and more of what is happening in the city under the control of James's company. Of course this will be combined with simple social engineering that doesn't really require any superpowers. But persuade the council, for instance, to subsidize those services that are to be offered for free, so in practice almost all taxes go back to James. Second is an overwhelming control over the city. While everyone feels well treated and prosperous in the city (they get a decent job, good healthcare etc), they are locked into the middle class and literally everything is controlled by James. At some point anyone trying to run something on their own will either be bought out, be destroyed, or have to leave the city (which might no longer be as easy as it used to be). Third, as you suggest, while James cares for the big picture, he discards the part of the population that doesn't want to adapt. You don't want to move from this place where I plan to build an airport? I build it anyway, whether you are here or not. It'll be seen by everyone as a great thing (superpowers if needed!) and the mere fact that a few families with children became part of the airport walls... Well, no-one really has to know about it (except Mike perhaps, but who'll listen to him). Fourth, the company tax is close to zero. He still has to pay the people that work in his company, but that money will mostly be going back to him anyway, as people will have no other opportunity to spend it. The bank holding the savings is his as well and makes money out of money, so technically almost all the wealth of the city is in James's hands.
Perhaps ironically, it's almost certainly easier for your antagonist to solve this problem with his inherited fortune rather than his superpowers, and if he must use his powers, it's probably best to act on people with money rather than politicians. In general, any good-sized town or small city that's getting less than its fair share of commercial attention (and most of the ones that are getting their fair share) will have an economic development board of some description that has plenty of ideas for exactly what will help bring about this kind of improvement. What they chronically lack are resources. Getting businesses to invest in places is primarily an infrastructure problem. (I actually live in a university town of about 100,000 that's struggling with exactly this issue.) Businesses love towns without a lot of established industries because they tend to have low costs of living and things like buying land, renting space, etc. are cheap. But they hate those kinds of towns because they rarely have the kind of infrastructure businesses need (especially, in the modern world, robust and reliable financial and telecom services) and they don't attract the kind of people businesses want (mostly young, well-educated, ambitious ones). The problem is, infrastructure is itself a business, and from a certain perspective so is skilled labor: they both go where the customers are. Businesses don't go to small towns because they lack resources; resources don't go there because they lack business. This is the Catch 22 that keeps your community from growing economically. But your antagonist can trivially solve this problem by the application of cold, hard cash. A lot of cash. He buys a sufficient-sized company (if he doesn't have one already) and declares that they're going to set up shop there. Any infrastructure the town lacks, they'll fund it. Any educated specialists they can't find locally, they'll pay relocation benefits for. 
By forcing the issue with enough money, they will create a market for the kinds of development that towns need to attract more businesses, more commerce, more young, educated professionals. Just to add to Cadence's answer which is spot on, infrastructure is the key so below is a small list of things that would assist in a small economic incentive to encourage businesses to move to the town: Using his Money Build a couple of small office buildings that could house potential new investors Install fibre internet to every home and business at low cost Pay for a massively improved backbone to your towns internet to ensure that the highest speeds are available out to the WWW Improve the road/rail/air networks where possible to ensure that shipping goods to and from your town is simple and hassle free Using his Powers Convince the politicians to approve the construction of some business centres or large office buildings to house the potential new clients Make the politicians allow greater tax breaks to businesses (if possible depends on the Countries Tax Laws) Get put on the board that's whole purpose is to encourage businesses to move to the town and when you meet with these people, make them decide its a good idea Obviously it depends on how much money and how effective his powers are but varying levels of these in the ways above would definitely help attract businesses. Its also worth noting that large construction projects no matter where they happen, often have a massive effect on local businesses, Building a new skyscraper? that takes a lot of delivery drivers that will need to fill up their trucks, means a bigger gas/petrol stations might be required. a lot of tradesman building the thing means they need places to eat and sleep so hotels and restaurants get a boom, they need somewhere to entertain themselves, so cinemas, bars, sports grounds and clubs of other varieties... get a boom in customers. 
they need somewhere to put their money, so banks will look to open a branch to help their customers. All this extra money moving around means the locals have more money to move, so they spend as well. This is one of the main reasons why, ever since the Great Depression, every recession big or small has usually been combated with a lot of new building work paid for by the government, whether it be new schools or airports etc. Building work is often a key to economic growth. Here are some ideas that are ethically grey (which is what I think you are looking for), with examples. Use your powers to "persuade" farm owners to sell or give away their property to developers looking to make a quick buck with a project, even one that is highly speculative, such as an industry that might move in if you build them a factory (example: many developing rural towns). See the family destitute after they've spent the pittance they were paid for the land, since they have no other skills, to highlight why this was a really unkind act. "Convince" everyone involved in that shootout between two biker gangs and federal agents at the mall that it's in the town's best interest if this does not get shared with reporters, or anyone else (example: Waco, TX Twin Peaks shootout). After getting access to the leaders, and a short "conversation" with the anti-hero, federal agents stand down and forget about it when they lay siege to the compound of a highly armed cult whose leader is accused of all manner of terrible things, including assassinating federal circuit court judges. Maybe have a "conversation" with the cult leader about laying off circuit court judges also.
(example: Waco, TX Branch Davidian standoff) "Nudge" the people during a town hall to ignore the results of home bought water test kits because the anti-hero believes the city council when they say they are playing chicken with the regional water supplier to negotiate a cheaper water rate by threatening to build their own water treatment plant, which is not doing a good job of treating the water (example: Flint, MI) Maybe followed up with an epiphany that the anti-hero should "talk" to the water supplier himself.
Conditional calculation based on hierarchy level using MDX and SSAS 2012 I am trying to do some conditional logic based on a hierarchy level. In my SSAS cube I have the following hierarchy defined: Team Subteam Employee I want to create a calculated member "efficiency" which does DIVIDE([Measures].[A], [Measures].[B]) But only for the Employee level. For all other levels I need to exclude employees where [Measures].[c] = 1 I'm not sure how to achieve this and I hope someone can help me. Thanks in advance! EDIT My current code works like this. The problem is that the members are not filtered on the subteam and team levels case when [Organigram].[Hierarchy].Currentmember.level IS [Organigram].[Hierarchy].[Employee] then DIVIDE([Measures].[a] , [Measures].[b]) else case when [Measures].[c] = 0 then DIVIDE([Measures].[a] , [Measures].[b]) else NULL end END, You should be able to make a start via IIF and the function 'Level': https://learn.microsoft.com/en-us/sql/mdx/level-mdx ? Do you want to add this measure to your cube script or is it just for an MDX script via a WITH clause? Thank you for your comment. I will supply an edit in my question with my current code. I want to add a measure to my cube (calculations tab in visual studio) Following on from the comment by @whytheq Create a new measure using WITH MEMBER and use the IIF test within that. Get something like this working properly first, before you nest a second IIF in there to check [Measures].[c] too... WITH MEMBER [Measures].[Efficiency] AS 'IIF( [Organigram].[Hierarchy].Currentmember.level IS [Organigram].[Hierarchy].[Employee], [Measures].[a] / [Measures].[b], NULL )' SELECT {[Organigram].[NameOfLevel].members} ON ROWS, {[Measures].[Efficiency]} ON COLUMNS FROM [CubeName] Once a simple query is working, gradually add little bits to get more complicated. You can also check what level you're on with the .LevelName and .LevelDepth properties. That might make your MDX shorter and more readable, or maybe not.
common-pile/stackexchange_filtered
Linker error when trying to use MPMoviePlayer On a fresh install of Xcode 3.1.2, I'm trying to use the iPhone MoviePlayer as shown in the sample code at http://developer.apple.com/iphone/library/codinghowtos/AudioAndVideo/index.html#INITIATE_VIDEO_PLAYBACK_IN_MY_CODE However, Xcode reports the following linker errors when I try to build-n-go: Building target “EOY” of project “EOY” with configuration “Debug” — (2 errors) cd /Users/ed/dev/EOY setenv MACOSX_DEPLOYMENT_TARGET 10.5 setenv PATH "/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin/gcc-4.0 -arch i386 -isysroot /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk -L/Users/ed/dev/EOY/build/Debug-iphonesimulator -F/Users/ed/dev/EOY/build/Debug-iphonesimulator -filelist /Users/ed/dev/EOY/build/EOY.build/Debug-iphonesimulator/EOY.build/Objects-normal/i386/EOY.LinkFileList -mmacosx-version-min=10.5 -framework Foundation -framework UIKit -framework CoreGraphics -o /Users/ed/dev/EOY/build/Debug-iphonesimulator/EOY.app/EOY Undefined symbols: ".objc_class_name_MPMoviePlayerController", referenced from: literal-pointer@__OBJC@__cls_refs@MPMoviePlayerController in MediaSupport.o "_MPMoviePlayerPlaybackDidFinishNotification", referenced from: _MPMoviePlayerPlaybackDidFinishNotification$non_lazy_ptr in MediaSupport.o ld: symbol(s) not found collect2: ld returned 1 exit status ".objc_class_name_MPMoviePlayerController", referenced from: literal-pointer@__OBJC@__cls_refs@MPMoviePlayerController in MediaSupport.o "_MPMoviePlayerPlaybackDidFinishNotification", referenced from: _MPMoviePlayerPlaybackDidFinishNotification$non_lazy_ptr in MediaSupport.o ld: symbol(s) not found collect2: ld returned 1 exit status Build failed (2 errors) that is right. 
Another way to do it is by adding the MediaPlayer framework to the Project Target by selecting TARGETS->Build Phases->Link Binary with Libraries (here add MediaPlayer). Yes, if your code calls into a framework, you have to add that framework to your target and link against it. Make sure the framework is "Relative to Current SDK" (select the framework > Get Info > General tab) so that when you build for the device, it links against the device's version, not the simulator's. Found the problem. I haven't read all the docs, but there are a lot of them... Anyway, I fixed this by dragging the directory /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator2.2.1.sdk/System/Library/Frameworks/MediaPlayer.framework/ into the Frameworks folder in Xcode and clicking OK on the import dialog.
common-pile/stackexchange_filtered
How can we use a string variable in a declare statement? A "normal" declare statement for a DLL function looks something like this: Declare Sub subName Lib "path_to_lib" _ Alias "aliasName" _ (...) In my application, it would be nice to have the user select their library location, after which I write that location to a cell. I'd like to pass this value to the "path_to_lib" argument, but I'm having difficulty extracting the cell value. I tried assigning the cell value to a global variable, say pathVariable and writing: Declare Sub subName Lib " & pathVariable & " _ Alias "aliasName" _ (...) But that returns the error: File Not Found: & pathVariable & I also tried double quotes, which returned the error: File Not Found " & pathVariable & " I then tried triple quotes, which VBA helpfully reduced to double quotes giving me the same error. Is there some special syntactical sauce here; or even an alternative method? Or should I abandon this (helpful) feature? You can't. Everything in the (declarations) section of a module can only ever be declarative statements, which aren't executable: a variable means nothing in a declarative context, it can't have a value. A Const could conceivably work though, but if you try it you get a compile error: VBA will only accept a string literal for it. Very true. The quotes are meant to delimit the string only, not to indicate it's a variable. This allows for paths containing parentheses, for example. Compiler constants also can't fill this gap. Interesting, thanks for the prompt reply. If only the user path didn't only work for spawned processes, it'd be nice to declare with just "nameOfDLL.dll" and use the input to add the folder location to the search path. 
@AndresSalas barring VBA macro security settings, nothing forbids using the VBIDE API to generate code in a standard module at run-time, that includes the specific Declare statements you need to have; you could have procedure/function stubs at compile/design-time, and at run-time replace them with the appropriate Declare statements, and on teardown re-rewrite the module to remove the declares and put the stubs back in, in one go. If you go that route, make sure you save often - and be prepared to crash often before achieving a stable solution ...if something like it can work, anyway! Mathieu, I had this idea and dismissed it as "no way that would work" haha. What's neat is that before I read your comment, someone from '06 posted a similar solution: https://blogs.msdn.microsoft.com/pranavwagh/2006/08/30/how-to-load-win32-dlls-dynamically-in-vba/ . I'll try this out. Thanks for the save often advice! I highly recommend against that. It requires a recompile and triggers a state loss, which is not something that should happen in finished applications. It can work, but proceed with a lot of caution and a lot of backups @ErikvonAsmuth TBH anything that requires programmatic access to the VBIDE API shouldn't be production code in the first place. It's just the first "this might work" solution that crossed my mind. Interestingly that MSDN blog article is failing to address the fact that there's likely other code that needs to invoke that imported procedure/function, and that code will have to be compiled - and removing Option Explicit to make that happen is out of the question, at least as far as I'm concerned! Not sure what the ideal solution would be. In addition to those reasons, this method also requires a checkbox to allow for programmatic access to VBA, which is about as inconvenient as adding the library location to the Path variable anyway. I'm nixing this method. I think I'll look into loading the library in a child process, where I can set the user path variable. 
Calling ChDir(dllFolderPath) did the trick! Thanks for the guidance fellows. I accepted Mathieu's answer since it gives a straightforward yes/no answer to my question. However, for any users wondering how to get around this problem of dynamic dll locations, I have the following solution: When a dll is called, the system first searches the current working directory and then searches the user and environment path variables. I found difficulty in modifying the path user variable for use in dll calls, so I exploited the first part by adding: ChDir (dllFolder) Before the dll call. The declare statements can remain as they did before, with just the library name in quotes: Declare Sub subName Lib "DLLName.dll" _ Alias "aliasName" _ (...) Feel free to keep track of the previous directory and change it back after the call if other parts of your program expect to be in a certain directory. This should be the accepted answer, really. Nicely explained, +1! Although you can't do declares in this way, you can at least load the library and function. hDLL = LoadLibrary(myDLL) hProc = GetProcAddress(hDLL, "myProcName") So now we have a pointer to the function. To call the function is a bit of a mess but you can use DispCallFunc to do this. As said, this is a bit of a mess however LaVolpe on the vbforums made a neat class for this kind of thing Private cMyDLL as cUniversalDLLCalls set cMyDLL = new cUniversalDLLCalls '... later ... cMyDLL.CallFunction_DLL(myDllPath, "myMethodName", hasStrinParams, returnType, CC_STDCALL, param1, param2, param3, ...) Caveat: This class is only supported on Windows OS
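The "keep track of the previous directory and change it back" suggestion in the accepted workaround can be sketched as a small wrapper. This is only a sketch under the question's assumptions — dllFolder and subName are the hypothetical names already used above:

```vba
' Hedged sketch: change to the DLL folder only for the duration of the call,
' then restore whatever directory the rest of the program expects.
Sub CallDllSub(ByVal dllFolder As String)
    Dim previousDir As String
    previousDir = CurDir$       ' remember where we were

    ChDir dllFolder             ' the DLL search includes the working directory
    ' If the DLL can live on a different drive, a ChDrive call may be needed too.
    subName                     ' the Declare names only "DLLName.dll"

    ChDir previousDir           ' put the working directory back
End Sub
```

Wrapping the call this way keeps the directory change local, so other parts of the program that rely on the current directory are unaffected.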
common-pile/stackexchange_filtered
Alert shown in onActivityResult I am trying to show a custom alert from onActivityResult. I need to inflate it, so I am getting the context with getApplicationContext(), and everything is fine until I execute alertDialog.show() - then it fails with: Unable to add window -- token null is not for an application Do you know why I cannot do it? (it happens on 1.6 and 2.0 - I didn't test others) I've found the solution! This thread was very helpful. I was doing: Context mContext = getApplicationContext(); ... builder = new AlertDialog.Builder(mContext); Instead of: builder = new AlertDialog.Builder(this);
common-pile/stackexchange_filtered
Why doesn't close vote count toward Deputy/Marshal I'm just curious why close votes don't seem to count as helpful for deputy/marshal badges. It seems like they should, since when you get to 3k your ability to flag questions is replaced by vote to close. Does anyone know why this is? Good point. Before I reached 3k on MSO, I used to get tons of flags a day. Now I get very few. Remember, it's not replaced, more of shunted -- you still can flag. I think you can only get them with the evil low quality tag after 3k @Manishearth it seems like the only one that actually flags is low quality; all others seem to be converted to close votes without incrementing your helpful flags Really? Didn't know that o_0 No. The only ones that are converted to close votes after 3k are the "it does not belong here" options. You can tell that the pop-up visually changes to a different one after you click it. The "very low quality", "other", "spam", and "it is not welcome in our community" options all raise a flag. Voting to close is not the same as flagging. Once you've reached 3k rep, the system trusts you enough to know when to cast a vote to close. Flagging should be reserved for serious incidents. If you're aiming for the badges - you can still achieve them by going through the review route & using Not an Answer / Custom mod flags judiciously. I suppose the point isn't to try for the flags; they are really a reward for good behavior. But I suppose it seems weird that low quality flagging is encouraged by this logic. @LukeMcGregor I wouldn't say it's encouraged - flags can be declined, which results in you getting a lesser number of flags @LukeMcGregor - I only use Very Low Quality to flag things I think should be deleted and aren't worth editing to fix any obvious problems, because there's still a fundamental problem even after the best possible edit that could be made.
@LukeMcGregor - The original purpose of this badge was to make people aware of the fact that they could cast flags for items that can't easily be handled through normal means. For example, we're flooded with non-answers every day (people asking questions in answers, chiming in with "me too", or spamming irrelevant links). You can quickly get these badges from running several of these queries over a few days: http://meta.stackexchange.com/questions/83075/what-are-the-best-ways-to-find-answers-that-should-be-flagged-or-edited . I say this as someone who's cast 7900 flags, with only 12 declined. However, gaining the badges via this route is vastly more difficult than when sub 3k... Simply, a vote to close is not a flag. You're not flagging any more, so you can't increment your helpful flag count by this method. If you are still trying to flag that something should be closed once you have passed 3K reputation then you're doing it wrong. This is why flags to close are converted to votes to close. You can still flag for other reasons - Spam, Very Low Quality etc. Also, if you run out of close votes for the day you can revert to flagging. I agree trying to flag when you can vote to close is wrong. I suppose my point was that these badges aren't really achievable for people with higher reputation, which seems weird. Perhaps there should be a counterpart for closing questions @LukeMcGregor - yes they are. My helpful flags are still inching upwards on SO and SU despite having full privileges. Use the review page to find low quality stuff to edit and flag. Spam always turns up and needs flagging. There seems to be a lot of discussion on low quality flagging; it seems like we should be using other flags/votes instead. However, the low quality flag is mildly encouraged by making it the only path to helpful flags @LukeMcGregor - the primary route for low quality stuff should be editing and down-votes. It's the answers that should be comments etc. that need flagging.
@LukeMcGregor - there's plenty of other things that are worth flagging. I suspect the most helpful flags are the ones where you have spotted a more complex situation and have to describe it in the "other" box instead of using a canned flag. @awoodland yeah, I kinda agree with that "if you run out of close votes for the day you can revert to flagging." - Actually you can't. @haydoni - you can always use the "other" option. @ChrisF Good point, I'll try that and see if it works! @ChrisF you can, but it's seemingly ineffective (perhaps there is a large backlog?).
common-pile/stackexchange_filtered
AngularJS - Bootstrap table not refreshing immediately after new data is added books.html <div ng-controller="BookController"> <table datatable="ng" class="row-border hover" ng-table="tableParams"> <thead> <tr> <th>BookID</th> <th>BookName</th> <th>Author</th> <th>ISBNCode</th> <th>NoOfBooks</th> <th>PublishDate</th> <th>NoOfBooksIssued</th> <th>Edit</th> <th>Delete</th> </tr> </thead> <tbody> <tr ng-repeat="book in books"> <td>{{book.BookId}}</td> <td>{{book.BookName}}</td> <td>{{book.Author}}</td> <td>{{book.ISBNCode}}</td> <td>{{book.NoOfBooks}}</td> <td>{{book.PublishDate}}</td> <td>{{book.NoOfBooksIssued}}</td> <td><p data-placement="top" data-toggle="tooltip" title="Edit"><button class="btn btn-primary btn-xs" data-title="Edit" data-toggle="modal" data-target="#edit"><span class="glyphicon glyphicon-pencil"></span></button></p></td> <td><p data-placement="top" data-toggle="tooltip" title="Delete"><button class="btn btn-danger btn-xs" data-title="Delete" data-toggle="modal" data-target="#delete"><span class="glyphicon glyphicon-trash"></span></button></p></td> </tr> </tbody> </table> </div> BookController.js "use strict"; (function () { angular.module("Bookapp") .controller("BookController", ["$scope", "BookService", function ($scope, bookService) { bookService.getRequest() .then(function (response) { $scope.books = JSON.parse(response); }); }]); })(); AddBookController.js "use strict"; (function () { angular.module('Bookapp') .controller('AddBookController', ["$scope", "BookService", function ($scope, bookService) { $scope.save = function (item) { bookService.postRequest(item) .then(function () { location.path("books"); }); } }]); })(); Both JS files are two different custom files which are included in the master page. I have also written BookService.js.
Which is as follows: "use strict"; (function () { angular.module("Bookapp") .factory("BookService", ["$http", "$q", function ($http, $q) { var baseURL = "http://localhost:27136/api/book"; var getRequest = function (query) { var deferred = $q.defer(); $http({ url: baseURL, method: "GET" }) .success(function (result) { deferred.resolve(result); }) .error(function (result, status) { deferred.reject(status); }); return deferred.promise; }; var getByIdRequest = function (id) { var deferred = $q.defer(); $http({ url: baseURL + "/" + id, method: "GET" }) .success(function (result) { deferred.resolve(result); }) .error(function (result, status) { deferred.reject(status); }); return deferred.promise; }; var postRequest = function (data) { var deferred = $q.defer(); $http({ url: baseURL, method: "POST", data: JSON.stringify(data) }) .success(function (result) { deferred.resolve(result); }) .error(function (result, status) { deferred.reject(status); }); return deferred.promise; }; var updateRequest = function (data, id) { var deferred = $q.defer(); $http({ url: baseURL + "/" + id, method: "PUT", data: JSON.stringify(data) }) .success(function (result) { deferred.resolve(result); }) .error(function (result, status) { deferred.reject(status); }); return deferred.promise; }; var deleteRequest = function (id) { var deferred = $q.defer(); $http({ url: baseURL + "/" + id, method: "DELETE" }) .success(function (result) { deferred.resolve(result); }) .error(function (result, status) { deferred.reject(status); }); return deferred.promise; }; return { getRequest: getRequest, getByIdRequest: getByIdRequest, postRequest: postRequest, updateRequest: updateRequest, deleteRequest: deleteRequest }; }]); })() My problem is when I click on the add button below my table the details of the book that i have entered must update in the table immediately which is not happening in my case. 
I have two different controllers: one is BookController, which gets all the book details from the DB using a service method and displays them in the table. The other one is AddBookController, which adds the new book details to the table. In AddBookController itself I have written code to get the data after posting it to the DB, but I am not able to refresh the table with the new data. Please help me. Thank you so much in advance! First of all, you have a code smell in your service because you don't need to use the $q service for retrieving a promise from $http. $http always returns a promise itself! So you can simplify all your functions like this: var getRequest = function (query) { return $http({ url: baseURL, method: "GET" }); }; For your question: have you tried debugging the bookService.getRequest() request? Try putting a console.log in your book controller and see if it's called after the add. Maybe you need to trigger the get request after the add. bookService.getRequest() is working fine @eliagentili. And I have placed a console.log() after the add and I am able to see the book details I just added in the console after I click on the add button. Please check below for the modified AddBookController.js file. Thanks in advance. Even after I changed the code as follows, the table is not getting refreshed immediately "use strict"; (function () { angular.module('Bookapp') .controller('AddBookController', ["$scope", "BookService", function ($scope, bookService) { $scope.save = function (item) { console.log(item); bookService.postRequest(item) .then(function () { bookService.getRequest() .then(function (response) { $scope.books = JSON.parse(response); }); }); } }]); })();
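One likely reason the re-fetch in the follow-up still doesn't refresh the table is that it assigns $scope.books on AddBookController's scope, while the table renders BookController's scope. A standard fix is to keep the array in the shared service so both controllers reference the same object. The following framework-free sketch (plain JavaScript, with a hypothetical createBookStore name) shows the idea — mutate the array in place rather than reassigning it, so every holder of the reference (and therefore ng-repeat) sees the change:

```javascript
// Hedged sketch of a shared store: both controllers would hold a reference
// to the same state.books array instead of keeping private copies.
function createBookStore() {
  const state = { books: [] };
  return {
    state,
    // Replace the contents IN PLACE so existing references stay valid.
    setBooks(list) {
      state.books.length = 0;
      state.books.push(...list);
    },
  };
}

// BookController would bind $scope.books = store.state.books once;
// AddBookController would call store.setBooks(...) after a successful POST.
const store = createBookStore();
const viewBinding = store.state.books;   // what ng-repeat would iterate over
store.setBooks([{ BookId: 1, BookName: 'First' }]);
console.log(viewBinding.length);         // 1 — the "view" sees the new book
```

In AngularJS this store would be the factory itself, since factories are singletons shared across controllers.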
common-pile/stackexchange_filtered
Web component tester 504 Gateway Timeout I installed web-component-tester for my Polymer application. Running the wct command from my project throws this error. Has anyone encountered this issue? TestM:my-app 212394486$ wct Installing and starting Selenium server for local browsers Selenium server running on port 58961 Web server running on port 2000 and serving from /Users/212394486/Desktop/projects/responsemax/resmax-app firefox 41 Tests failed: <html><head><title>504 Gateway Timeout</title></head> I got this 504 issue fixed. It was due to a proxy. In your environment variables / .bash_profile, you have to add export no_proxy="*.local,localhost,<IP_ADDRESS>"
common-pile/stackexchange_filtered
get preceding value based on priority I have XML like below. <root> <title></title> <section> <title> <content-style font-style="bold">Short title</content-style> </title> <figure> <caption> <para></para> </caption> <graphic/> </figure> <para> <phrase>U2/1</phrase> </para> </section> </root> Here I have two questions. I want to check whether there is a title or a para preceding the phrase; if there is a title I want to apply templates on the title node, else if there is a para I want to apply templates on the preceding para. Priority is to be given to title; if there is no title, then para should be considered. I've tried it using <xsl:apply-templates select="preceding::title/node()[1]"/>, but I was landing on the empty title in root and printing blank. Please let me know where I am going wrong and how to fix this. Thanks Can you post the XSLT you tried? And also the expected output. Why don't you write exactly what you said, using xsl:choose?
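Taking the xsl:choose suggestion literally, a hedged sketch might look like the following. This is an assumption-laden illustration, not a tested stylesheet: the [node()] predicate skips the empty <title/> under <root> (which is why the original attempt printed blank), and because the preceding axis runs in reverse document order, [1] selects the nearest preceding match:

```xml
<xsl:template match="phrase">
  <xsl:choose>
    <!-- nearest preceding title that actually has content -->
    <xsl:when test="preceding::title[node()]">
      <xsl:apply-templates select="preceding::title[node()][1]"/>
    </xsl:when>
    <!-- otherwise fall back to the nearest preceding para -->
    <xsl:otherwise>
      <xsl:apply-templates select="preceding::para[1]"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```

Note that the preceding axis excludes ancestors, so the para that contains the phrase itself is never selected by preceding::para[1].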
common-pile/stackexchange_filtered
Is there a hybrid app framework which is HTML-CSS-JavaScript → all vanilla, and if so what is it? I need a hybrid app framework which is HTML-CSS-JavaScript → all vanilla, that would allow me to develop hybrid applications usable on both desktop computers (laptops or otherwise) and pocket computers (smartphones or otherwise). These apps should be able to Run on a LAMP server environment Work both natively and in browsers while sharing the exact same database A smartphone native version will look 100% the exact same as the smartphone native browser version by design principle Is there a hybrid app framework which is HTML-CSS-JavaScript → all vanilla, and if so what is it? React uses native JavaScript. Use it with React Native for mobile apps. If you need to package desktop apps as well, then packaging the React app with Electron might be an option. I also have a book recommendation for you covering this stack: "JavaScript Everywhere". The "everywhere" stands for frontend, backend, mobile. https://www.jseverywhere.io . (The paper version isn't released yet, but you can read an early version on the oreilly.com learning platform with a free trial account). This is a real recommendation; I'm in no way associated with the author/publisher. Hello, thanks; please just clarify what you mean by packaging an app with the Electron framework. Electron lets you package web apps (HTML/CSS/JavaScript) into desktop apps for Windows, macOS, Linux (.exe, .app, etc). This allows you to do some things that aren't possible in the browser (i.e. accessing the local file system or running shell commands with JavaScript). It is based on Chrome. Think of it in a simplified way like this: it creates an executable for you (i.e. yourapp.exe) that contains a hidden Chrome installation that runs your web application (in the simplest case: index.html). Famous example: Visual Studio Code
common-pile/stackexchange_filtered
getClass() method returning different class than expected I am removing duplicates of path from a list using: paths.removeAll(Collections.singleton(path)); I am running the above code from TestClass.java while running JUnit test cases. The equals and hashCode methods consider the string value inside the path object. The equals method fails in the code below. if (getClass() != obj.getClass()) return false; Even though all objects inside the paths list are of the same type Path, the above code fails to match the class name. I saw it was giving the class name as the JUnit class name TestClass$5$1$1 for the first value and TestClass$5$1$2 for the second value, hence it fails. Am I doing anything wrong here? Thanks in advance. I am creating the list of paths using the code below. Paths paths = new Paths(){ { setPaths(new ArrayList<Path>(){ { add(new Path(){ { setValue("c:\\\\test"); } }); add(new Path(){ { setValue("c:\\\\test1"); } }); add(new Path(){ { setValue("c:\\\\test1"); } }); } }); } }; If I create the list "paths" with normal Java code, the equals method works properly and it removes the duplicate path. What is Path? How are you instantiating it? Please post a [mcve]. Path is a class with a field value inside it. Equals and hashCode are written on this field. Can you show the line where you assign a value to the path variable? And the line where you create the object that it references? I know it's a class. We need to see the class and how you're instantiating it. When you've added the relevant code, you can vote to reopen @Bohemian Please reopen.
With new Path() { } you create an anonymous subclass of Path (you define similar anonymous subclasses of Paths and ArrayList) @shmosel 'tis done This code creates an anonymous class of Path : add(new Path(){ { setValue("c:\\\\test"); } }); And according to the actual Path.equals() snippet you posted, to check the type compatibility it doesn't rely on instanceof but getClass() : if (getClass() != obj.getClass()) return false; } So these two objects are not equal : Path p1 = new Path(){ { setValue("c:\\\\test"); } } Path p2 = new Path("c:\\\\test"); as these are from two distinct classes : the Path class and an anonymous Path class. As a workaround you could change the Path equals() to use instanceof, such as : if (!(obj instanceof Path)) return false; } But in fact, you don't need to create anonymous classes. You should rather take advantage of constructors to initialize your objects rather than using initializers. By introducing Paths(List<Path> paths) and Path(String path) constructors, you could then write something like : Paths paths = new Paths( new ArrayList<>(Arrays.asList( new Path("c:\\\\test"), new Path("c:\\\\test1"), new Path("c:\\\\test1")))); You're creating a distinct anonymous class for each object you're instantiating. Just create them normally, without a custom initializer block: Path path1 = new Path(); Path path2 = new Path(); Path path3 = new Path(); path1.setValue("c:\\\\test"); path2.setValue("c:\\\\test1"); path3.setValue("c:\\\\test1"); Paths paths = new Paths(); paths.setPaths(new ArrayList<>(Arrays.asList(path1, path2, path3))); You can reduce the verbosity of creating and initializing each Path object by adding a constructor: class Path { private String value; public Path(String value) { this.value = value; } //... } Same for Paths: class Paths { private List<Path> paths; public Paths(List<Path> paths) { this.paths = paths; } //...
} Now you can call it like this: Paths paths = new Paths(Arrays.asList( new Path("c:\\\\test"), new Path("c:\\\\test1"), new Path("c:\\\\test1"))); Before running the step to compare classes, try using this if statement or printing out the object's class type in code. if((obj instanceof Path) == false) return false; I see you are trying to remove a singleton from a list. The "singleton" method returns a Set object, which may be obj's type and why comparing a Path object to a Set object will return false. However, it may not be the line that you are referencing that is failing. Please provide the declarations of all referenced objects and the full equals method of your Path object.
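To see the getClass() mismatch in isolation, here is a small self-contained Java sketch. The class and field names mirror the question, but this is an illustration, not the original code:

```java
// Hedged sketch reproducing the mismatch: each `new Path(...) { }`
// creates a distinct anonymous subclass, so getClass() comparisons fail.
class Path {
    private final String value;

    Path(String value) {
        this.value = value;
    }

    String getValue() {
        return value;
    }
}

public class AnonymousClassDemo {
    public static void main(String[] args) {
        Path plain = new Path("c:\\test");
        Path anon = new Path("c:\\test") { };   // anonymous subclass of Path

        // Same data, different runtime classes:
        System.out.println(plain.getClass().getName());          // Path
        System.out.println(anon.getClass().getName());           // AnonymousClassDemo$1
        System.out.println(plain.getClass() == anon.getClass()); // false

        // instanceof, by contrast, accepts both:
        System.out.println(anon instanceof Path);                // true
    }
}
```

This is exactly why an instanceof-based equals() tolerates the initializer-block style while a getClass()-based one does not.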
common-pile/stackexchange_filtered
1: The momentum operator has that negative i in front - why not positive i? Both seem mathematically valid when you square them to get the kinetic energy operator.
2: Consider the plane wave $e^{ikx}$. When you apply $-i\hbar \frac{\partial}{\partial x}$ to it, you get $\hbar k$ times the original function. That eigenvalue $\hbar k$ matches the de Broglie momentum exactly.
1: But the positive version would give $-\hbar k$. That's still a valid momentum, just pointing the opposite direction.
2: True, but think about the phase velocity. A wave $e^{i(kx-\omega t)}$ travels in the positive x direction when k is positive. We want the momentum eigenvalue to have the same sign as the wave vector k, not opposite.
1: So it's about maintaining consistent directionality between the wave propagation and momentum measurement.
2: Exactly. Plus, when you derive this from the exponential building block, the natural differentiation gives you that negative i. The operator emerges from requiring that momentum eigenstates have the right physical interpretation.
1: This explains why quantum momentum looks so different from classical $mv$. Instead of mass times velocity, we get this differential operator acting on probability amplitudes.
2: The conservation laws still hold though. The operator formalism just captures how momentum manifests in the wave description. When you measure it, you get definite classical-looking values.
1: And the canonical versus kinetic momentum distinction
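The eigenvalue claim in the exchange — that $-i\hbar\,\partial_x$ acting on $e^{ikx}$ returns $\hbar k$ times the wave — can be checked numerically. A small sketch, working in units where $\hbar = 1$ for convenience:

```python
import cmath

HBAR = 1.0          # work in units where hbar = 1
K = 2.0             # wave vector of the plane wave


def psi(x):
    """Plane wave e^{ikx}."""
    return cmath.exp(1j * K * x)


def momentum_op(f, x, h=1e-5):
    """Apply -i*hbar*d/dx to f at x via a central difference."""
    return -1j * HBAR * (f(x + h) - f(x - h)) / (2 * h)


x0 = 0.3
eigenvalue = momentum_op(psi, x0) / psi(x0)
print(eigenvalue)   # close to (2+0j), i.e. hbar*k with the same sign as k
```

Flipping the operator's sign to $+i\hbar\,\partial_x$ flips the eigenvalue to $-\hbar k$, which is the directionality mismatch speaker 2 objects to.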
sci-datasets/scilogues
So, if we needed to get pure water from a salt solution, simple distillation seems like the most direct method.
It is, provided the components have sufficiently different boiling points. In the case of salt and water, the difference is massive.
Right. Sodium chloride boils at over 1400°C, while water boils at 100°C. That's the key to the entire process.
Exactly. When you heat the saltwater, only the water will reach its boiling point and turn into vapor. The salt just stays behind in the original flask, as a solid residue eventually.
And that water vapor then travels into the condenser, which is cooled, usually by circulating water.
The cooling causes a phase change, forcing the vapor to condense back into liquid form.
Pure liquid water, which then drips into a receiving flask.
So, the core principle is a cycle of vaporization and condensation, driven by a significant boiling point differential.
It's a physical separation technique. No chemical reactions are involved; we're just exploiting a difference in physical properties.
It's elegant. You start with a homogenous mixture and end with two completely separate, pure substances.
The energy input to vaporize the water is the main driver, and the cooling system is what completes the separation.
sci-datasets/scilogues
List size isn't working as expected for partner user I have observed very weird behavior in list results. I run this query in the Query Editor: select id, Title, Description, FileType,FeaturedContentDate,FeaturedContentBoost,ContentDocumentId,TagCsv from ContentVersion where FeaturedContentDate!=null and FeaturedContentBoost!=null Order By CreatedDate Desc I got 2 records. I used the same query in Apex: for the SFDC user - 2 records, for partner users - 87 records. I received a very surprising result for partner users: system.debug('before content list-->'+contentList); //zero for(ContentVersion cont:[ select id, Title, Description, FileType,FeaturedContentDate,FeaturedContentBoost,ContentDocumentId,TagCsv from ContentVersion where FeaturedContentDate!=null and FeaturedContentBoost!=null Order By CreatedDate Desc] ){ contentList.add(cont); } system.debug('--contentList size()--->'+contentList.size()); //87 I received 87 records. How is this possible? I logged in as a partner user and tried this logic in one sandbox (it works as expected); after moving the code to another sandbox, I received the above result while testing. I have used with sharing also, but I can see the same result. For testing purposes: public class testContent { public static void test(){ List<ContentVersion>contentList1=[select id, Title, Description, FileType,FeaturedContentDate,FeaturedContentBoost,ContentDocumentId,TagCsv from ContentVersion where FeaturedContentDate!=null and FeaturedContentBoost!=null Order By CreatedDate Desc]; system.debug('----contentList1----'+contentList1.size()); } } For the SFDC user the list count is 2; for the partner user the result is 87. with sharing and without sharing are "enough" to handle most sharing issues with regular sObjects, but sharing works differently on Content. Running a class without sharing does not guarantee that the running user can see all Content. It is normal and to be expected that different running users would be able to see different sets of Content.
Additionally, Anonymous Apex runs in user context, so comparing Anonymous Apex to Apex running in system mode is not an apples-to-apples comparison. I would suggest reading through Who Can See My File?, because the differences between Content sharing and regular sObject sharing are sometimes quite confusing - particularly when Files are shared on record pages. It's particularly important to note that Users with “Modify All Data” permission can view, preview, download, share, attach, make private, restrict access, edit, upload new versions, and delete files they don't own. However, if the file is in a private library, then only the file owner has access to it. Users with “View All Data” permission can view and preview files they don't own. However, if the file is in a private library, then only the file owner has access to it. Emphasis mine. This means that the normal System Administrator permissions to access everything in the system do not apply to Content, and there are situations where you will not be able to see Files owned by other users - even as a System Administrator. This may be the issue you are observing; it's certainly a common issue when trying to do broad-based queries against an org's Content. If your test internal user is not a System Administrator, the number of situations where this user will be unable to see Content (regardless of sharing declaration) increases. Overall, it will be hard to provide you with a more specific answer to your question without being in your org, because so many factors influence sharing and visibility. The first factors I would review to try to understand why each user sees what they see are: Who owns the files in question? What is the sharing setting, as described in Who Can See My File?, for these items? The sharing setting and icon appear on a file's detail page and on the Shared With list on a file detail page. Are the files in a library? Are they shared on record pages? What is the total sharing landscape? 
If the files are shared on record pages, who can see those records? What permissions and profiles are involved for both classes of users? Spring '19 will provide a preview of a new Query All Files permission: With the new Query All Files permission, View All Data users can query ContentDocument and ContentVersion and retrieve all files in the org, including files in non-member libraries and files in unlisted groups. On its own, the View All Data permission only allows you to query files you own or have access to.
This happens because the query editor, anonymous Apex, and standard controllers all enforce sharing rules, whereas an Apex class that does not explicitly declare a sharing keyword runs in system mode. Starting with API version 44.0 there is a new inherited sharing keyword, which is preferable to use: an Apex class with inherited sharing runs as with sharing when used as a Lightning component controller, a Visualforce controller, an Apex REST service, or any other entry point to an Apex transaction. Add with sharing to the Apex class where you perform the SOQL query and you will receive 2 records instead of 87.
Try Except in Python: syntax issue

    class ShortInputException(Exception):
        '''A user-defined exception class.'''
        def __init__(self, length, atleast):
            Exception.__init__(self)
            self.length = length
            self.atleast = atleast

    try:
        s = raw_input('Enter something --> ')
        if len(s) < 3:
            raise ShortInputException(len(s), 3)
    except ShortInputException, x:
        print 'ShortInputException: The input was of length %d, \
    was expecting at least %d' % (x.length, x.atleast)

I don't understand the syntax of this line: except ShortInputException, x: — what is x here for, and why is it acting as an object? And what does this line do: Exception.__init__(self)? Thanks
Note that, for reasons you can find outlined in the docs, it's a bit better to do super(ShortInputException, self).__init__() than Exception.__init__(self); but in your code as written, it will work out the same.
except ShortInputException, x: catches an exception of class ShortInputException and binds the instance of the exception object to x. The more common syntax for this is except ShortInputException as x, which is to be preferred, as described in PEP 3110. Unless you need to support Python 2.5, you should use the as version.
Exception.__init__(self) calls the constructor for the super class, the class that this user-defined class derives from.
Tell me if I am wrong. The flow is like this:

    s = raw_input('Enter something --> ')
    if len(s) < 3:
        raise ShortInputException(len(s), 3)
    except ShortInputException, x:
        print 'ShortInputException: The input was of length %d, was expecting at least %d' % (x.length, x.atleast)

Where in between does the class ShortInputException(Exception): come in?
This answer is just correct. Being correct is enough to be worth an upvote ;)
You need __ around the init, i.e., __init__. Also, it isn't really the "more common syntax": the as syntax was added in Python 2.6 (IIRC); before that, the except FooType, foo syntax was used.
The old syntax is still supported in Python 2, but if you're on Python 2.6, you should prefer the newer syntax. But if you need to support Python 2.5 (poor you) then you'd have to use the old syntax, to be compatible. @thanatos Thanks. The init was a copy/paste error from original formatting of the question and I just found PEP3110 which describes the change from old comma syntax to the new syntax. waht does this line do ? : Exception.__init__(self) ShortInputException(Exception) declares your class ShortInputException as sub class of Exception. Exception.__init__(self) calls the constructor of parent class. except ShortInputException, x: From the doc: When an exception occurs, it may have an associated value, also known as the exception’s argument. The presence and type of the argument depend on the exception type. The except clause may specify a variable after the exception name (or tuple). The variable is bound to an exception instance with the arguments stored in instance.args. x in your example is the exception object raised.
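For anyone on Python 3, the comma syntax is gone entirely. Here is a sketch of the same example using the modern as syntax and an argument-free super() call; the check helper is my own stand-in for the raw_input() call (renamed input() in Python 3), not part of the original code:

```python
class ShortInputException(Exception):
    '''A user-defined exception class.'''
    def __init__(self, length, atleast):
        # In Python 3, super() needs no arguments, and the base
        # Exception constructor is called with no extra ones here.
        super().__init__()
        self.length = length
        self.atleast = atleast

def check(s):
    # Stand-in for the interactive raw_input()/input() prompt.
    if len(s) < 3:
        raise ShortInputException(len(s), 3)
    return s

try:
    check('hi')
except ShortInputException as x:  # 'as' binds the exception instance to x
    print('ShortInputException: The input was of length %d, '
          'was expecting at least %d' % (x.length, x.atleast))
```

In Python 3 the old `except ShortInputException, x:` form is a SyntaxError, so `as` is the only option there.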
How to flatten a heterogeneous list of lists into a single list in Python? I have a list of objects where the objects can be lists or scalars. I want a flattened list containing only scalars. E.g.:

    L = [35,53,[525,6743],64,63,[743,754,757]]
    outputList = [35,53,525,6743,64,63,743,754,757]

P.S. The answers in this question do not work for heterogeneous lists: Flattening a shallow list in Python
this works if pop() returns only one scalar at a time http://stackoverflow.com/a/10546929/1321404 — you can modify this code: if len(returned_pop_element) > 1, then call the recursive function again for returned_pop_element (a list). Also see: http://stackoverflow.com/questions/5828123/nested-list-and-count/5828872
see also https://stackoverflow.com/a/40857703/4531270
Here is a relatively simple recursive version which will flatten any depth of list:

    l = [35,53,[525,6743],64,63,[743,754,757]]

    def flatten(xs):
        result = []
        if isinstance(xs, (list, tuple)):
            for x in xs:
                result.extend(flatten(x))
        else:
            result.append(xs)
        return result

    print flatten(l)

I think I can make it isinstance(xs, collections.Iterable) and not isinstance(xs, str) so that it includes sets and other possible iterables as well.
and not isinstance(xs, basestring) for Python < 3, but yes, good idea
It could be done neatly in one line using numpy:

    import numpy as np
    np.hstack(l)

You end up with an ndarray:

    array([  35,   53,  525, 6743,   64,   63,  743,  754,  757])

    >>> data = [35,53,[525,6743],64,63,[743,754,757]]
    >>> def flatten(L):
            for item in L:
                if isinstance(item, list):
                    for subitem in item:
                        yield subitem
                else:
                    yield item
    >>> list(flatten(data))
    [35, 53, 525, 6743, 64, 63, 743, 754, 757]

Here is a one-liner version for code-golf purposes (it doesn't look good :D ):

    >>> [y for x in data for y in (x if isinstance(x,list) else [x])]
    [35, 53, 525, 6743, 64, 63, 743, 754, 757]

The first version breaks strings into characters; I don't think that is desirable.
@JanneKarila It doesn't say there will be strings.
If you use hasattr(item, '__iter__') you can avoid the string problem without limiting the range of iterables.
@JoelCornett The question only mentions the use of list, so I will just use isinstance as the other answers have also done.

    l = [35,53,[525,6743],64,63,[743,754,757]]
    outputList = []
    for i in l:
        if isinstance(i, list):
            outputList.extend(i)
        else:
            outputList.append(i)

@jamylak, thanks for editing, but I like two spaces for indentation :-(
@jamylak, thanks for the ref. I always thought that the recommendation is to use spaces. Did not know it was 4 spaces.
Yeah, but of course they are filled in by the editor when you press tab anyway. It is usually set to 4 or 3 by default. Also, for some reason I cannot write @Vikas at the beginning of my comments, I don't know why it is not letting me...

    outputList = []
    for e in l:
        if type(e) == list:
            outputList += e
        else:
            outputList.append(e)

    >>> outputList
    [35, 53, 525, 6743, 64, 63, 743, 754, 757]

Here's a one-liner, based on the question you've mentioned:

    list(itertools.chain(*((sl if isinstance(sl, list) else [sl]) for sl in l)))

UPDATE: And a fully iterator-based version:

    from itertools import imap, chain
    list(chain.from_iterable(imap(lambda x: x if isinstance(x, list) else [x], l)))

It's a one-liner; it's not meant to be pretty.
Well in that case I think I have a smaller one-liner, I will post it.
Additionally, sum((i if isinstance(i, list) else [i] for i in L), [])
@JoelCornett +1 That is what I thought of as well, but I didn't like it since it has to construct a new list each iteration.

    def nchain(iterable):
        for elem in iterable:
            if type(elem) is list:
                for elem2 in elem:
                    yield elem2
            else:
                yield elem

Recursive function that will allow for infinite tree depth:

    def flatten(l):
        if isinstance(l, (list, tuple)):
            if len(l):
                return flatten(l[0]) + flatten(l[1:])
            return []
        else:
            return [l]

    >>> flatten([35,53,[525,[1,2],6743],64,63,[743,754,757]])
    [35, 53, 525, 1, 2, 6743, 64, 63, 743, 754, 757]

I tried to avoid isinstance so as to allow for generic types, but the old version would infinite-loop on strings. Now it flattens strings correctly (not by characters now, but as if it's pretending a string is a scalar).
I would not expect strings to be flattened (broken into single characters).
Technically strings are iterables, which is why I included it. It does seem kind of odd when I look at it more closely.

    >>> L = [35,53,[525,6743],64,63,[743,754,757]]
    >>> K = []
    >>> [K.extend([i]) if type(i) == int else K.extend(i) for i in L]
    [None, None, None, None, None, None]
    >>> K
    [35, 53, 525, 6743, 64, 63, 743, 754, 757]

This solution is only for your specific situation (scalars within lists) and assumes the scalars are integers. It is a terrible solution, but it is incredibly short:

    outputlist = map(int, str(L).replace("[", "").replace("]", "").split(","))

The answer is quite simple. Take advantage of recursion.

    def flatten(nst_lst, final_list):
        for val in nst_lst:
            if isinstance(val, list):
                flatten(val, final_list)
            else:
                final_list.append(val)
        return final_list

    # Sample usage
    fl_list = []
    lst_to_flatten = [["this",["a",["thing"],"a"],"is"],["a","easy"]]
    print(flatten(lst_to_flatten, fl_list))
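Combining the recursive answer with the string-safety point raised in the comments, here is one sketch; it uses collections.abc.Iterable as one commenter suggested, and the function name is mine:

```python
from collections.abc import Iterable

def flatten(xs):
    """Recursively flatten arbitrarily nested iterables into a flat list.

    Strings and bytes are iterable too, so they are explicitly treated as
    scalars here -- the pitfall discussed in the comments above.
    """
    result = []
    for x in xs:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            result.extend(flatten(x))
        else:
            result.append(x)
    return result

L = [35, 53, [525, 6743], 64, 63, [743, 754, 757]]
print(flatten(L))                   # [35, 53, 525, 6743, 64, 63, 743, 754, 757]
print(flatten(["ab", [1, (2, 3)]]))  # ['ab', 1, 2, 3]
```

Unlike the list-only versions above, this also flattens tuples, sets, and generators, while leaving strings whole.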
Fragment WebView should load last URL In my application I have a sliding menu with four options: fruits, vegetables, animals, and birds. These are fragments, and each fragment contains a WebView. When I load a page and perform a search, I can open another fragment and perform another search there, and that works fine. What I want is for each fragment to save the last URL visited in it, so that it is restored when I return to that fragment. I am able to save the last URL with a single WebView, but not across multiple fragments. Here is my code, in which I can save the last URL with a single class:

    public class SearchFragment extends Fragment {
        WebView wv;
        private ProgressBar bar;
        private String url = "http://www.google.com";

        public SearchFragment(){}

        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
            View rootView = inflater.inflate(R.layout.fragment_search, container, false);
            wv = (WebView) rootView.findViewById(R.id.search_web);
            bar = (ProgressBar) rootView.findViewById(R.id.search_bar);
            bar.setVisibility(View.VISIBLE);
            loadUrl();
            return rootView;
        }

        private void loadUrl() {
            WebSettings webSettings = wv.getSettings();
            webSettings.setJavaScriptEnabled(true);
            webSettings.setDomStorageEnabled(true);
            webSettings.setCacheMode(WebSettings.LOAD_CACHE_ELSE_NETWORK);
            /* below call saves the password in the WebView */
            webSettings.setSavePassword(true);
            wv.setWebChromeClient(new WebChromeClient() {
                @Override
                public void onProgressChanged(WebView view, int progress) {
                    if (progress == 100) {
                        bar.setVisibility(View.GONE);
                        wv.setVisibility(View.VISIBLE);
                    }
                }
            });
            wv.setWebViewClient(new WebViewClient() {
                @Override
                public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) {
                    //Toast.makeText(CommunityFragment.this, description, Toast.LENGTH_SHORT).show();
                    Toast.makeText(getActivity(), description, Toast.LENGTH_LONG).show();
                }
            });
            wv.loadUrl(url);
        }

        // flipscreen not loading again
        @Override
        public void onConfigurationChanged(Configuration newConfig) {
            super.onConfigurationChanged(newConfig);
        }

        // Handling BackPress events
        public void onBackPressed() {
            if (wv.canGoBack()) {
                wv.goBack();
            } else {
                getActivity().finish();
            }
        }
    }
Make 2 workspaces on 1 monitor in GNOME Shell? Is there any way to make 2 workspaces on one screen with GNOME Shell 3.36 on Ubuntu 20.04? I need this because I want to use browser (or any other app) in full screen mode but only on half of my screen, but I've found no solutions. So maybe there is a way to place 2 workspaces on one screen? It is not possible to have two workspaces displayed on a single screen, not in Gnome Shell and not in any other desktop environment I know. An easy way to put your browser in half of the screen is to use the shortcut key Super+Left/right. This will tile the browser window on the left or right of your screen. With similar hotkeys, other applications can then be made to occupy the other half. A similar approach works for maximizing the window on the entire screen: use Super+Up or drag the window to the top edge. With the mouse, you quickly achieve the same by dragging the window to the left or right edge of the screen. Once the cursor is close to the edge, a colored area will appear, indicating that you can now release the mouse button to have the window tiled. Yes, I know about this, but it puts window in half of the screen in its usual mode, and I'm talking about any possibility to make the same with the window in fullscreen mode, like here Sorry, but I started my answer with answering your question: telling that it is not possible. It is a disappointing answer, but, unless I am proven wrong (which I would hope), may be the valid answer to your question. You can do this with the GNOME-Extension ShellTile The simplest way to use the extension is to first, enable it by clicking the slider to set it one then holding down the Ctrl key, slide another window over the first (during which there will be a screen highlighting) after which release and the two windows will self position. The trick is to move windows to corners to achieve 4 in a single screen. Here, I have created 4 different windows on one screen. 
You can install GNOME Shell Extensions on 20.04 by following the steps here I've installed it, but still can't understand how to make two workspaces on one screen or how to resize a fullscreen window. @Kalich I've amended my answer to show how to use the extension. If that works for you please be kind enough to accept and upvote my answer. But on your screenshot the browser is in its usual mode, and I'm talking about any possibility to split the window in fullscreen mode, like here @Kalich then I'm confused by your question. One window is a browser the other is an app. Your screenshot seem to imply the same - only one bottom bar but 4 windows which is possible with Shell Tile. What am I missing here? On my screenshot all windows are opened in fullscreen mode and then split to four windows, while on yours windows are not in fullscreen mode
Threading time in quiz game How can I use a threading timer to run a countdown while waiting for user input, so that if time runs out the next question pops up automatically? Is it possible to stop the timer immediately, i.e. as soon as the user enters input it stops counting?

    with open("question.txt", "r") as question_file:
        questions = question_file.read().splitlines()

    with open("choice.txt", "r") as choice_file:
        choices = choice_file.read().splitlines()

    for i in range(len(questions)):
        print("Question:", questions[i])
        print(choices[i])
        guess = input("Enter answer: ").upper()
        while guess not in ['A', 'B', 'C', 'D']:
            guess = input("Invalid input. Enter answer (A, B, C, D): ").upper()
        if guess == answers[i]:  # Compare with the correct answer for the current question
            score += 1

Not the cleanest approach, since you have to use a global boolean variable input_entered to keep track of whether the user entered input. The timer thread can send SIGINT to the process, which raises a KeyboardInterrupt exception in the main thread; that can then be caught to move on to the next question.

    from threading import Thread
    import time
    import signal
    import os

    input_entered = False

    def timer(duration):
        global input_entered
        time_waited = 0
        while (not input_entered) and (time_waited < duration):
            time.sleep(1)
            time_waited += 1
        if not input_entered:
            os.kill(os.getpid(), signal.SIGINT)

    def main():
        global input_entered
        with open("question.txt", "r") as question_file:
            questions = question_file.read().splitlines()
        with open("choice.txt", "r") as choice_file:
            choices = choice_file.read().splitlines()
        # Sample answers
        answers = ['B', 'C', 'D']
        score = 0
        duration = 5  # wait 5 seconds
        for i in range(len(questions)):
            print("Question:", questions[i])
            print(choices[i])
            input_entered = False
            thread = Thread(target=timer, args=[duration])
            thread.start()
            try:
                guess = input("Enter answer: ").upper()
                while guess not in ['A', 'B', 'C', 'D']:
                    guess = input("Invalid input. Enter answer (A, B, C, D): ").upper()
                if guess == answers[i]:  # Compare with the correct answer for the current question
                    score += 1
                input_entered = True
            except KeyboardInterrupt:
                print('Time\'s up!')
            finally:
                thread.join()

    if __name__ == '__main__':
        main()
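An alternative sketch of the same idea that avoids both the global flag and the SIGINT trick: let the input thread set a threading.Event, and have the main thread wait on it with a timeout. One caveat (my assumption, not from the answer above): a blocked input() call itself cannot be portably interrupted, so a worker that never answers is simply abandoned as a daemon thread. get_answer is a hypothetical callable standing in for input():

```python
import threading

def ask_with_timeout(get_answer, duration):
    """Run get_answer() in a worker thread and wait at most `duration` seconds.

    Returns the answer, or None if time ran out first. Event.wait() returns
    the moment the worker signals, so there is no one-second polling loop.
    """
    answered = threading.Event()
    result = {}

    def worker():
        result['answer'] = get_answer()  # e.g. input('Enter answer: ') in the real game
        answered.set()

    threading.Thread(target=worker, daemon=True).start()
    if answered.wait(timeout=duration):
        return result['answer']
    return None  # time's up; the caller moves on to the next question

# Simulated players: one answers immediately, one never answers.
print(ask_with_timeout(lambda: 'B', duration=2))                         # B
print(ask_with_timeout(lambda: threading.Event().wait(), duration=0.2))  # None
```

In the quiz loop, a None return would mean "time's up", so you would print the message and continue to the next question.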
Why is my Axios fetch giving CORS errors? I have spent more than 3 hours trying to research and find a solution to this. I have looked at numerous other answers on Stack Overflow, and nothing was able to help. (List of my research at the bottom.) I am trying to access a public API. When I use curl it is fully accessible. When I try to access it in a React app, I get an error. Here is my code:

    const API = 'https://btcilpool.com/api/status';
    const config = {
      headers: {
        'Access-Control-Allow-Origin': '*',
        'Content-Type': 'application/json',
      },
    };
    axios.get(API, config);

Here is the error:

    GET https://btcilpool.com/api/status net::ERR_HTTP2_PROTOCOL_ERROR
    Uncaught (in promise) Error: Network Error
        at createError (createError.js:16)
        at XMLHttpRequest.handleError (xhr.js:84)
    Uncaught (in promise) TypeError: Failed to fetch

Here is what the API looks like:

    //<PHONE_NUMBER>3327
    // https://btcilpool.com/api/status
    { "x17": { "name": "x17", "port": 3737, "coins": 1, "fees": 3, "hashrate": 0, "workers": 282, "estimate_current": "0.00000000", "estimate_last24h": "0.00000000", "actual_last24h": "0.00000", "mbtc_mh_factor": 1, "hashrate_last24h":<PHONE_NUMBER>.0569 } }

My research:
https://blog.container-solutions.com/a-guide-to-solving-those-mystifying-cors-issues
What's the net::ERR_HTTP2_PROTOCOL_ERROR about?
CORS error - my headers
Does this answer your question? Why does my JavaScript code receive a "No 'Access-Control-Allow-Origin' header is present on the requested resource" error, while Postman does not?
Basically, it's saying that the CORS error needs to be fixed on the API side of things, not on the React side of things?
@NinoFiliu CORS requirements are set by the host; there is nothing you can do about it except asking if they will allow CORS headers.
A workaround is using a proxy. So you'll make a request on your own server and pass the result back to your client.
Here is an example with a free proxy, though I do not recommend doing this in production: // This will result in your error axios .get('https://btcilpool.com/api/status') .then((response) => console.log('Response', response)) .catch((error) => console.log('Error', error)) // This will give you your expected result axios .get(`https://api.allorigins.win/get?url=${encodeURIComponent('https://btcilpool.com/api/status')}`) .then((response) => console.log('Response', response)) .catch((error) => console.log('Error', error)) https://jsfiddle.net/51qhnfw0/ Thank you very much, this works... I will talk to the API guys and try and get them to add the CORS: * to their side.
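If you would rather not depend on a third-party proxy like allorigins, the workaround can be sketched as a tiny proxy of your own. This is a minimal, illustrative example, not production code: fetch_upstream() is a placeholder where a real proxy would make the server-to-server request to https://btcilpool.com/api/status (server-to-server requests are not subject to browser CORS checks), and the handler adds the Access-Control-Allow-Origin header the browser was missing:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_upstream():
    # Placeholder payload; a real proxy would call the upstream API here
    # (e.g. with urllib.request) and return its JSON.
    return {"status": "ok"}

class CorsProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(fetch_upstream()).encode()
        self.send_response(200)
        # The header the upstream response lacked:
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep the demo quiet; BaseHTTPRequestHandler logs every request by default.
        pass
```

Serve it with HTTPServer(('127.0.0.1', 8080), CorsProxyHandler).serve_forever() and point axios at http://localhost:8080/ instead of the upstream URL.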
Implementing -hash and -isEqual for a geo coordinate class I have an object which stores a latitude/longitude/altitude, and need reliable and fast -hash and -isEqual implementations. I am using double to store all of the primitives. The accepted answer for Best practices for overriding isEqual: and hash looks good, but it only talks about integer values. My question is how to deal with doubles, since they aren't precise values. I want to compare the primitives within 8 decimal places, which is already quite a bit more accurate than the GPS chip itself. Here is what I've come up with so far; have I done it right, or does it need improvement? My -isEqual: implementation is fairly simple:

    - (BOOL)isEqualToAGPoint:(AGPoint *)otherPoint {
        if (fabs(otherPoint->latitude - latitude) > 0.00000001) return NO;
        if (fabs(otherPoint->longitude - longitude) > 0.00000001) return NO;
        if (fabs(otherPoint->altitude - altitude) > 0.00000001) return NO;
        return YES;
    }

But I'm not so sure about my -hash implementation:

    - (NSUInteger)hash {
        NSUInteger prime = 31;
        NSUInteger result = 1;
        result = prime * result + lround(latitude * 100000000);
        result = prime * result + lround(longitude * 100000000);
        result = prime * result + lround(altitude * 100000000);
        return result;
    }

A quick test demonstrates it seems to work as I need it to:

    // all three have the same longitude and altitude, while a and b have slightly different
    // (but should be considered identical) latitudes, while c's latitude is just different
    // enough to be considered not equal to a and b
    AGPoint *a = [[AGPoint alloc] initWithLatitude:-16.922608127 longitude:145.77124538 altitude:2.74930134];
    AGPoint *b = [[AGPoint alloc] initWithLatitude:-16.922608128 longitude:145.77124538 altitude:2.74930134];
    AGPoint *c = [[AGPoint alloc] initWithLatitude:-16.922608147 longitude:145.77124538 altitude:2.74930134];

    NSLog(@"a == b: %i", (int)[a isEqual:b]);
    NSLog(@"a == c: %i", (int)[a isEqual:c]);
    NSLog(@"hash for a: %lu b: %lu c: %lu", (unsigned long)[a hash], (unsigned long)[b hash], (unsigned long)[c hash]);

Output:

    a == b: 1
    a == c: 0
    hash for a:<PHONE_NUMBER> b:<PHONE_NUMBER> c:<PHONE_NUMBER>

Does this look correct?
Don't forget to cache your hash.
Where will -hash be called repeatedly on the same object? Adding another NSUInteger instance variable to this object would increase my memory consumption by ~30MB. I'm going to have a lot of these objects.
"which is already quite a bit more accurate than the GPS chip itself" Why did you choose to do this? Possible forward compatibility? If the data are significant to 1e-5, you gain nothing, and any datum with values in the 1e-6, 1e-7, 1e-8 ranges is "garbage" anyway. What happens near the equator or the Greenwich meridian?
Apple is using doubles everywhere, so I decided to do the same. I primarily want isEqual/hash to check if a point is a copy of another point, not if a user is in the same location as they were previously (which is possible but highly unlikely). An 8th decimal place is accurate to somewhere between 1.1mm and 1/100,000th of a millimetre, depending on which value/where in the world. These values will primarily be used for drawing, which also uses floating point values everywhere. I don't want to convert a double to an int and then convert the int back to a double when drawing it.
You're in trouble with values like (0.5 ± 0.015625)*1e-8. The absolute difference of the coordinates is less than the tolerance, but the rounding leads to different integers. EDIT: This means two objects can be considered equal, but have different hash codes. Inconsistent equality and hash code can pose serious problems if you ever use a hash map.
A solution is to compare each object's hash inside isEqual:

    - (BOOL)isEqualToAGPoint:(AGPoint *)otherPoint {
        if ([otherPoint hash] != [self hash]) return NO;
        if (fabs(otherPoint->latitude - latitude) > 0.00000001) return NO;
        if (fabs(otherPoint->longitude - longitude) > 0.00000001) return NO;
        if (fabs(otherPoint->altitude - altitude) > 0.00000001) return NO;
        return YES;
    }

When would this cause problems? The 8th decimal place is around 1 millimetre or less (depending on distance from equator). The iPhone's GPS receiver is accurate to a few meters in ideal conditions, so I don't see any issues with two objects being considered equal by NSDictionary or NSSet if they're 1mm apart?
The problem is that two objects can be considered equal, but have different hash codes. Inconsistent equality and hash code can pose serious problems if you ever use a hash map.
I see. How can I solve this? Perhaps isEqual should just be [self hash] == [otherPoint hash]?
Two objects are equal if they're close and have equal hash code? That would work. On the other hand, perhaps it would be better to store the coordinates as integers, multiples of 1/1000-th second or something. That depends on what you need to do with them.
Apple is using doubles everywhere in the location API (the variables are actually CLLocationDegrees/etc, which is defined by Apple as a double), and they will eventually be used to either draw to the screen as CGFloat values (double on 64 bit, float on 32 bit) or exported to an XML file (where it will be a string with 8 decimal places). Keeping all that in mind, sticking with CLLocationDegrees/double seems like the correct thing to do. Is there any reason to do the equality math and compare the hash code? Seems like I could get away with only comparing the hash code?
If you can prove that that won't happen, hash would be enough. But the hash is an integer, probably 64 bits, long, lat and alt can each take something in the region of 10^9 values (after rounding), makes 10^27 or so locations, while a 64-bit integer can take only about 1.8*10^19 values. So your hash can't be injective. Thanks for your help. I'll update my -isEqualToAGPoint: method to compare the hash before doing the other comparisons. Do you mind if I edit your answer to include the results of these comments? But you're multiplying with 10^8 before the round, so I supposed you have an accuracy of something like 10^(-7) or 10^(-8) degrees, similarly for altitude. Re edit: go ahead, if I don't like it, I can re-edit :)
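The boundary case from these comments is easy to demonstrate numerically. Here is a small Python sketch (not the Objective-C code above, but the same tolerance test and the same lround-style rounding) showing two coordinates that pass the 1e-8 tolerance check yet hash differently; since tolerance-based equality is not transitive, no hash function can agree with it everywhere, which is why the fix discussed above compares hashes inside isEqual:

```python
def close_enough(a, b, tol=1e-8):
    # Mirrors the fabs(...) > 0.00000001 checks in -isEqualToAGPoint:
    return abs(a - b) <= tol

def coord_hash(x):
    # Mirrors lround(latitude * 100000000) in the -hash method
    return round(x * 1e8)

# Two latitudes straddling a rounding boundary: their difference is below
# the tolerance, yet they round to different integers.
a = 0.49e-8
b = a + 0.9e-8

print(close_enough(a, b))            # True  -> "equal" by tolerance
print(coord_hash(a), coord_hash(b))  # 0 1   -> different hashes
```

So "equal within tolerance" and "equal hash" can disagree near rounding boundaries, regardless of how small the tolerance is made.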
Should person with schizophrenia have children? I wanted to ask you whether it is ethical for a person with schizophrenia to have biological children. Do any Buddhist traditions teach anything in that regard? When one parent has this illness, the chance that kid would have it too is about 17%, as compared to 1% in general population. The doctor says that it shouldn't prevent one from having children, as similar situation is for many diabetics, where there is also hereditary factor, and they decide to become parents. And some other illnesses have even greater factor than 17% and those people decide to have children, who are in many cases healthy. Some may suggest that it's better in this situation not to have biological children, but to adopt. However, mental illnesses are one of the main reasons why adoption is impossible. According to some statistics (I don't know how accurate) one person with schizophrenia in ten commits suicide; three or four in ten have at least one suicidal attempt. Nonetheless, the illness can be treated so that there is full remission, i.e. no symptoms whatsoever as long as one takes medicines. And we may expect even better understanding and treatments in the future. Is it ethical for a schizophrenic to have children? I am not a Buddhist, but I am a medical professional. Buddhism teaches "right view/understanding" (see things as they truly are without delusions or distortions). I would advise that you learn more about your schizophrenia. There are types that are more likely to be progressive than others, and while medications may control the symptoms, they cannot prevent progression. Know your illness without distortion before making the decision to bring into the world a perfectly normal child that you might not be able to care for or one who has the disorder. Either way, there will be joy and suffering. 
"Right thinking" involves a dedication to overcoming self-centered craving through the development of loving kindness, empathy and compassion. Is it self-centered craving to want a child? I don't know; I think it was for me. Once they are grown, though, life is given more meaning through the above. No children necessary. The decision to have children is always difficult for people who are self-aware. Which is worse to you, to suffer physically or to suffer mentally? To my thinking, I would choose physical suffering over mental suffering. Giving birth to a child who may develop a physical illness is different than one who might suffer mental illness. That should be considered. If the risk is unacceptable, your wife can become pregnant via sperm donation, which would reduce the risk of schizophrenia, but carries other risks. It is unlikely, however, that a reputable sperm bank would accept sperm from someone with a serious mental illness. Do any Buddhist traditions teach anything in that regard? I don't know, but I doubt it. A lot of Buddhist law is monastic, with something of a line drawn between the sangha and lay society. There are ethical rules or guidelines for laypeople but (IMO) they tend to be brief or broad (e.g. the five precepts), rather than detailed. I think there's no Buddhist law that regulates marriage, for example. There may be cultural rules (but I'd suspect they're cultural or national rather than "Buddhist" and timeless). If you want to pursue this question, the one thing I can suggest is this: Look at this answer, which includes Buddhist advice on how to choose a marriage partner -- it summarises one chapter of this book, which is an anthology of advice for laypeople (taken from Pali suttas) I think the next chapter of that book contains advice for parents ... but I haven't read it, and I no longer have the book. 
I recommend you get the book and read it, and see whether that chapter (of advice for parents) sounds like something you're able and willing to take on! Say if you want me to summarise it, like I did the other chapter (if so I'll try to buy or borrow the book again). The doctor says that it shouldn't prevent one from having children ... in many cases healthy If one responds well to doctors' treatment then I think they say that one can have a normal life, without disability. And, yes, schizophrenia is not the only malady that can affect children and other people. Given that children are vulnerable to one thing or another, perhaps this question isn't different from (or is just a special case of) a more general question, which is whether a Buddhist "should" have children at all -- so you might find that answers to some of these questions may help to answer yours: Should a Buddhist have Children? Benefits of Producing and Caring for Children Pursuit of raising children ok? How to combine Buddhism with being a parent? Is there any mention of child adoption in any Buddhist writings and what is the view on child adoption with regard to Buddhism? Does buddhism allow for families? Your questions seem to concentrate on whether the child will be healthy (or inherit an illness). I think my first (and perhaps only) concern was whether the parent[s] will be healthy, but it's more your decision to make than mine. mental illnesses are one of the main reasons why adoption is impossible As an example of different possibility, my wife (and my mum too, for that matter) became teachers ... preschool teachers, actually, after a year or two or more of training. If you take a job or a career like that, you could have 20 children a year (shared with colleagues). :-) And be good at it, and maybe sleep at night as well. Of course you must be healthy (symptom-free) to do that, but it has advantages: no adoption red tape, less stress than children of your own, social ... 
and more short-term, e.g. if you're healthy this year then do it, without having to worry about whether things will still be good 10 years from now. According to some statistics I don't completely agree with your posting these statistics here. Schizophrenia is a bit complicated, for example hard to diagnose properly, and varies a lot from person to person. So if you're asking a personal question (about someone in real life) then maybe you could get better (more specific, personalised) estimates than the non-specific statistics you posted. For example you were trying to estimate a risk of suicide. I'm pretty sure that depends on whether there's comorbid depression and comorbid substance abuse and so on (see e.g. Psychiatric Comorbidities and Schizophrenia). Things may also vary depending on how good and how available a person's doctors are, their family, friends, and society, and so on, as well as on their mental health. As anongoodnurse wrote, I'd recommend you learn more. The "best case scenario" could be better than you expect (and the worst case scenario less good). Nonetheless, the illness can be treated so that there is full remission, i.e. no symptoms whatsoever as long as one takes medicines. I agree there's good reason to be optimistic if that's what your doctor is saying, but maybe don't be too complacent either. In particular you said, "no symptoms whatsoever as long as one takes medicines": I agree there may be no symptom of mental illness, and that one may have a good reason to take medicine even if there are side-effects. Nevertheless beware there may be important physical symptoms (side-effects) of the medicine, so do be careful (to consult the doctors because there is a variety of medicines) rather than heedless. 
You might want a plan as well (possibly a family/medical/legal plan), for if ever there are symptoms of mental illness (sometimes a prescription ought to be varied, or sometimes one is "non-compliant" and stops taking medicine, and mental illness may affect the ability to make informed decisions). Sorry, this is based more on personal experience than on references, so it may be too personal and not very useful to you. Maybe you'll at least find useful the references to other topics (about choosing a marriage partner and about Buddhists having children). People who develop schizophrenia in their life should not have society discriminate against them, and should not have to feel that, if they give birth to a daughter or a son with a similar condition, society would view them as making a bad personal choice. Schizophrenia, like other illnesses, should be treated similarly. Schizophrenia becomes a burden to "society" when the person is not treated accordingly; if they are, they can become a valued member of that society and contribute to it as much as any other person less disabled than they are. Society should recognise this part of the community to be as capable and as productive as any other member within it. I don't think they should be eugenically separated from it.
Acting up for Daddy My partner's 4-year-old daughter lives with us. I look after her more than my partner. She is a generally well-behaved, polite little girl and we have lots of fun. As soon as Daddy comes home she turns into this demanding whiny little brat. I cannot stand it! I think it is because Daddy is much softer than I am, so she knows she can be more demanding and play him up. Is this learned behavior? Why is she so well behaved for me? Same thing here with my 3-year-old. Usually good for me, but will whine/cry much more for mommy, since she knows mommy is more likely to give her what she wants. Children have a very sophisticated sense of what works to get what they want. Your partner's daughter isn't a whiny brat around you, because you don't respond well to that behavior, but apparently your partner gives in easily to that kind of behavior. My own daughter has two personalities she puts on for me and my wife. For me, she is sweet and sugary, because she knows I can be charmed, but refuse to give in to whining. For my wife, she is whiny, because my wife can't be charmed, but gets worn down by whining. The upshot is the behavior won't change unless your partner commits to not giving in to it. Yup, agreeing with everyone else, your kid whines for Daddy, because she has empirically learned that it gets her what she wants from him, not from you. Your way is better, you are doing more child care, he needs to get on board, and not give in. He needs to be saying "I know you are disappointed, but I'm not giving in" (acknowledging the child's feelings, and helping her name her own feelings, is really helpful). I have worked in childcare for many years, and it is my experience that kids will behave much better for caretakers other than their parents (this is true with young elementary school kids as well). It is possible that since "daddy" is a biological parent and (I assume) you came into the picture sometime after she was born, she views you in this way. 
I also could be completely wrong, seeing as I don't know your situation. I recognize how you might take offense to this and I assure you I mean none.
How to automatically create sqlite test db when executing parallel tests? When executing tests separately I have added these methods to make sure the SQLite db is created if it's not there: abstract class TestCase extends BaseTestCase { use CreatesApplication, DatabaseMigrations; public function setUp(): void { $this->initializeTestDB(); parent::setUp(); Notification::fake(); if ($this->tenancy) { $this->initializeTenancy(); } } public function initializeTestDB(): void { $dbname = env('DB_DATABASE'); $filePath = "./" . $dbname; if (!file_exists($filePath)) { touch($filePath); } } } But this does not help when using php artisan test --parallel. I have tried to move this method into AppServiceProvider as in the documentation. That didn't help either: class AppServiceProvider extends ServiceProvider { /** * Bootstrap any application services. * * @return void */ public function boot() { ParallelTesting::setUpProcess(function ($token) { $this->handleTestDB(); }); } } Any ideas to handle this properly for both parallel and separate tests? What does "it does not work" mean in context of your question? How does it fail? Silently or with an error? Does it fail at all or are you just not confident with the result? What did you expect instead? I thought it was pretty self-explanatory. The case is that when the SQLite file is missing, the tests fail due to the missing db. The expected solution is being able to start tests without creating the file first; it's supposed to be created during test execution.
Center a canvas element without affecting coordinates for game I have a game that I'm coding on the HTML5 canvas, and I centered the canvas, but now it's messing with the coordinates in event.clientX and event.clientY for my event listener I have for clicks. I tried changing document.addEventListener to canvas.addEventListener, but it did nothing. Any suggestions? My code is here: var canvas = document.getElementById('infRunnerCanvas'); var ctx = canvas.getContext('2d'); var playingGame = false; function displayMainMenu(){ ctx.font = '35px Comic Sans MS'; ctx.textAlign = 'center'; ctx.fillRect(0,0,canvas.width, canvas.height); ctx.fillStyle = 'red'; ctx.fillText('1 button run!',canvas.width/2, 200); ctx.font = '15px Comic Sans MS'; ctx.fillText("Any key or click the screen to jump, don't hit the side of a platform.", canvas.width/2, 250); ctx.fillRect(275, 300, 150, 60); ctx.fillStyle = 'black'; ctx.font = '25px Comic Sans MS'; ctx.fillText('PLAY', 350, 340); } displayMainMenu(); function handleClicks(){ if(playingGame === false && event.clientX > 275 && event.clientX < 425 && event.clientY > 300 && event.clientY < 360){ alert() } } canvas.addEventListener('click', handleClicks) <!DOCTYPE html> <html> <div style = 'text-align: center;'> <canvas width = '700' height = '500' id = 'infRunnerCanvas' > sorry, looks like your browser doesn't support the canvas element. </canvas> </div> <script src = 'script.js'> </script> </html> Can you explain more? Everything seems to look aligned well. I added a fillRect at (100, 100) and it showed up where it was supposed to. What exactly do you mean by messing with the coordinates? The event listener's coordinates act funny; the coordinates for the play button put into the handleClicks function don't work, it thinks the coordinates refer to a spot that's not even on the canvas. Also, are you using the full page? 
From the documentation: "The clientX read-only property of the MouseEvent interface provides the horizontal coordinate within the application's viewport at which the event occurred (as opposed to the coordinate within the page)." So is there something else I can use? Let me know if the answer below doesn't make sense. Once you console log the canvas.getBoundingClientRect() you'll see the x and y position. That is what you'll subtract from the mouse. You will want to account for the canvas position by using getBoundingClientRect() on the canvas and subtracting those values from the mouse position. var canvas = document.getElementById('infRunnerCanvas'); var ctx = canvas.getContext('2d'); canvas.width = 700; canvas.height = 500; var playingGame = false; let mouse = { x: null, y: null, }; function handleClicks(){ if(playingGame === false && mouse.x > 275 && mouse.x < 425 && mouse.y > 300 && mouse.y < 360){ alert() } } window.addEventListener('click', function(e) { mouse.x = e.x - canvas.getBoundingClientRect().x; mouse.y = e.y - canvas.getBoundingClientRect().y; handleClicks(); }) window.addEventListener('resize', function(e) { displayMainMenu(); }) function displayMainMenu(){ ctx.fillStyle = 'black' ctx.font = '35px Comic Sans MS'; ctx.textAlign = 'center' ctx.fillRect(0,0,canvas.width, canvas.height); ctx.fillStyle = 'red'; ctx.fillText('1 button run!',canvas.width/2, 200); ctx.font = '15px Comic Sans MS'; ctx.fillText("Any key or click the screen to jump, don't hit the side of a platform.", canvas.width/2, 250) ctx.fillRect(275, 300, 150, 60) ctx.fillStyle = 'black'; ctx.font = '25px Comic Sans MS'; ctx.fillText('PLAY', 350, 340); } displayMainMenu(); canvas { position: absolute; top: 0; left: 50%; width: 700px; height: 500px; transform: translate(-50%, 0) } <!DOCTYPE html> <html> <div style = 'text-align: center;'> <canvas id = 'infRunnerCanvas' > sorry, looks like your browser doesn't support the canvas element. 
</canvas> </div> <script src = 'script.js'> </script> </html> run this and you'll see console.log(canvas.getBoundingClientRect())
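The conversion in the answer above can be factored into a small, DOM-free helper, which makes the logic easy to reason about in isolation. This is a sketch of my own; the helper names (toCanvasCoords, isOnPlayButton) are not from the thread, and the button rectangle (275, 300, 150 by 60 pixels) is taken from the question's code.

```javascript
// Convert a mouse event's client coordinates into canvas-local
// coordinates by subtracting the canvas's bounding-rect origin.
function toCanvasCoords(event, rect) {
  return {
    x: event.clientX - rect.x,
    y: event.clientY - rect.y,
  };
}

// Hit test for the PLAY button the question draws at (275, 300),
// 150 pixels wide and 60 pixels tall.
function isOnPlayButton(p) {
  return p.x > 275 && p.x < 425 && p.y > 300 && p.y < 360;
}
```

In the click handler you would call toCanvasCoords(e, canvas.getBoundingClientRect()) and feed the result to the hit test. Because the subtraction happens in one place, re-centering the canvas with CSS never breaks the button coordinates.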
how to unpickle binary data stored in postgresql by psycopg2 module in python? I am using cPickle and psycopg2 to store some vectors into a database. This is my code to store binary data binary_vec = cPickle.dumps(vec, -1) db.cur.execute(''' INSERT INTO feature_vector (vector, id) VALUES (%s, %s); ''', (psycopg2.Binary(binary_vec), thread_id)) db.conn.commit() However when I use fetchall() to load my data back, the type is buffer. I can't find how to restore this buffer object back to a list (vec). This is how I fetch the data db.cur.execute("SELECT * FROM feature_vector;") m = db.cur.fetchall() The result looks like this [(3169187, <read-only buffer for 0x1002b0f10, size 3462, offset 0 at 0x1004a7430>), (3169275, <read-only buffer for 0x1002b0f50, size 3462, offset 0 at 0x1004a7570>), (3169406, <read-only buffer for 0x1002b0f70, size 3462, offset 0 at 0x10140b0b0>), (3169541, <read-only buffer for 0x10141c030, size 3462, offset 0 at 0x10140b2b0>), (3169622, <read-only buffer for 0x10141c050, size 3462, offset 0 at 0x10140b3f0>),... When I try to use cPickle.loads(m[0][1]), it will return the error message Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: must be string, not buffer Did you check manually (e.g., psql -c 'select * from ...') that the data is indeed stored in the database? Yes, data is stored in the database. It looks like this \x80025d7101284b6a4b6a4b6a4b6a4b6a4b6a4b6a4b6a4b6a4b6a4b6a652e it would help if you pasted the code for fetching. Did you try str(the_buffer) or bytes(the_buffer)? And here I am, a little bit far in the future, wondering what works and what not... until I try it out myself. 
You can create a customized typecaster to automatically convert pickled values to Python: import cPickle obj = {'a': 10} data = cPickle.dumps(obj, -1) import psycopg2 def cast_pickle(data, cur): if data is None: return None return cPickle.loads(str(psycopg2.BINARY(data, cur))) psycopg2.extensions.register_type( psycopg2.extensions.new_type( psycopg2.BINARY.values, 'BINARY-PICKLE', cast_pickle)) cnn = psycopg2.connect('') cur = cnn.cursor() cur.execute("select %s::bytea", [psycopg2.Binary(data)]) cur.fetchone() # ({'a': 10},) in data should we use cursor return object?
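The root of the TypeError is that Python 2's cPickle.loads wants a str, while psycopg2 hands back a buffer, so converting first (cPickle.loads(str(m[0][1]))) is enough. Here is a rough, self-contained sketch of the same round-trip with no database involved, written for Python 3, where psycopg2 returns memoryview for bytea columns and pickle accepts any bytes-like object; the variable names are illustrative:

```python
import pickle

vec = [106] * 10  # stand-in for the feature vector

# What gets stored: the pickled bytes, wrapped in a buffer-like object
# (memoryview here) the way a database driver would return them.
stored = memoryview(pickle.dumps(vec, protocol=pickle.HIGHEST_PROTOCOL))

# Converting the buffer back to bytes before unpickling is the fix;
# in Python 3, pickle.loads also accepts the memoryview directly.
restored = pickle.loads(bytes(stored))
```

The same shape of fix applies in Python 2: turn the buffer into the string type the pickle module expects before calling loads.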
This recursive function puzzles me, what is going on? I was playing around with recursion and did this simple function. I was assuming that it would print out 9-0 to stdout, but it prints 0-9. I can't see how that happens at all. #include <stdio.h> int rec(int n); int main() { rec(10); return 0; } int rec(int n){ if(n > 0) printf("%d\n", rec(n -1)); return n; } If the explanations below don't 'click', you might do well to step through the execution in a debugger to see what's going on. By the way, while a good programmer is expected to be able to read this function (during an interview maybe?), they should not ever write code like this. Good code should not make you think. The rec function on the printf line is evaluated before the printf itself. Thus the deepest instance of the rec function is printed first. I guess that I was confused by the fact that the printf is also part of the rec function. Thanks for the explanation, I just started with this. No problem, glad to help. Just remember that evaluation of functions always goes inside-out: the parameters are evaluated before the function. Think of it like this. rec(10) rec(9) rec(8) rec(7) rec(6) rec(5) rec(4) rec(3) rec(2) rec(1) rec(0) Start Unwinding printf("%d\n", 0); printf("%d\n", 1); printf("%d\n", 2); printf("%d\n", 3); printf("%d\n", 4); printf("%d\n", 5); printf("%d\n", 6); printf("%d\n", 7); printf("%d\n", 8); printf("%d\n", 9); Thank you, that is a good explanation. I'll take a look in the debugger as well, as was suggested. Let's rewrite your code like this: int rec(int n){ if(n > 0) { int retval = rec(n -1); printf("%d\n", retval); } return n; } Does it make it clear why 0 is printed before 9? I usually nest functions like that if I intend to print them. It's the fact that the printf is also part of the rec function that got me confused, I think. 
Thanks. Because you're creating nested scopes 9 > 8 > 7 > 6 > 5 > 4 > 3 > 2 > 1 > 0, and these scopes are treated the same as a(b(c(d(e(f(g())))))), resolved from the deepest one to the first one. Remember that when you do printf("%d", n(m)); you're not actually printing anything yet: you're saying "print the result of n(m)", and when it tries to resolve n(m) you're calling another print and saying once again "resolve n(m-1)". Now, when you reach n(0) it will return 0 to be printed by the last call of printf, therefore it prints 0 then 1 .. 9. Thanks, that is very useful! I haven't really given recursion that much thought before, and just decided to start to make some experiments with it. That makes sense. int main() { rec(10); return 0; } int rec(int n){ if(n > 0) printf("%d\n", rec(n -1)); return n; } In general, consider some piece of code. We say there is a direct relation between iterative and recursive solutions such that any solution can be written iteratively and vice versa. However, in some cases it is seen to be easier to express an algorithm recursively (e.g. Tower of Hanoi). In the case of the code above the equivalent would be the for loop construct. This can be implemented as a function as follows: void _for(int i, int n) { if( i >= n ) return; // TERMINAL // some expression (e.g. printf("%d\n",i);) _for(i+1,n); // RECURSION } Note, this can be extended using function pointers. Might want to check out the markdown editor FAQ. http://stackoverflow.com/editing-help
Using Javascript or Jquery to dynamically create incrementing id for table as each row is created I would like for a user to click an image in this table which is created dynamically based on the JSON data sent from the web service, and have the image change. When clicking the image again it will change back to the first image (inter-changing between only two images). I have a table being created via jQuery's $.ajax() function which looks like this: <table border="0" width=80% id="table"> <thead> <tr> <td><h3>Check to Renew</h3></td> <td width=40%><h3>Vendor Part</h3></td> <td width=100%><h3>Part Description</h3></td> <td><h3>Unit Price</h3></td> <td><h3>Quantity</h3></td> </tr> </thead> <tbody> <tr class="template"> <td><img src="images/checkmark.png" alt="check" id="row1" onclick=""></td> <td>$vendorPart</td> <td>$partDescription</td> <td>$price</td> <td><input class="quantityClass" name="quantity" value="$quantity"/></td> </tr> </tbody> </table> Here is the simple Javascript function which changes the images: renewimg=Array("images/checkmark.png","images/gray_x.png"); function rowImageRefresh(y) { document.getElementById(y).src=renewimg[++m]; if (m==1) {m=-1;} } This Javascript function works beautifully, however only if I pass it the image's id (in this case specific to the row). Hard-coding for testing purposes proved this. My issue is I would like to be able to create a row id on the fly as the table row is created and then have functionality where that id can be passed. I guess if I were to illustrate this more it would look something like this: JavaScript: var row = 1; HTML: //table data <td><img src="images/checkmark.png" alt="check" id="row+[i];i++;" onclick="rowImageRefresh(this.row.id)"></td> //more table data Where the id is created dynamically on the fly as each row is created and the onclick function passes the row of the image clicked. You don't actually need an id at all. 
For example: <img src="images/checkmark.png" alt="check" onclick="toggle(this)"> Then the script: function toggle(element) { var sources = ["images/checkmark.png", "images/gray_x.png"], current = $(element).data('state') || 0; current = 1 - current; $(element).data('state', current); element.src = sources[current]; } It toggles between the two states, remembering the current state using .data(). You could set individual row ids with id="x". Or, you could just use jQuery to find the index from where the click event occurred and get the index of that part. $('img.check').click(function(){ var id = $(this).index(); rowImageRefresh(id); }); So what you're saying is I can just set the html to look like this: , and then when the image is clicked it will hit that jquery function? I am wondering what I send to rowImageRefresh, since it changes the img src based on the element ID. This did not work for me; in fact, it seems it did not even register as a click event. Consider using a custom data attribute instead of trying to generate unique ids. <img src="..." data-partid="$vendorPart" class="part-img" /> $(".part-img").on("click", function() { var id = $(this).data("partid"); // or: var id = $(this).attr("data-partid"); rowImageRefresh(id); }); Edit: Seems like you are trying to toggle a checkbox image. A better way might be to change the background image using css and sprites. Then on click you swap the class depending on state. Hey Jasen, this answer makes a lot of sense to me. I think I may be doing something wrong however since the on click may not get called. Do I just place the jquery code inside the script tag within the head of this html page? 
Here's one way you might assign an id to each image based on its row index: $.each(rows, function(index, row) { var data = $.extend({}, row, { id: "row" + index }); var $row = $(template(data)); $row.removeClass('template'); var $img = $('img', $row); $img.on('click', function() { var $this = $(this); if ($this.data('checked') === true) { var checked = $this.attr('src'); var unchecked = $this.data('src'); $this.attr('src', unchecked); $this.data('src', checked); $this.data('checked', false); } else { var unchecked = $this.attr('src'); var checked = $this.data('src'); $this.attr('src', checked); $this.data('src', unchecked); $this.data('checked', true); } }); $tbody.append($row); }); The HTML for the image would look like this: <img src="checked.png" alt="check" data-checked="true" data-src="unchecked.png" id="${ id }" /> Here's a working example: http://jsfiddle.net/potatosalad/MJYpA/1/ I used lodash.js for the template, but whatever you're doing to generate the row HTML should work the same.
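The pattern shared by all three answers, keeping a per-element state and flipping it on each click, can be separated from the DOM entirely. Here is a sketch of my own (the names are not from the thread), with a plain Map standing in for jQuery's .data():

```javascript
// Two image sources; index 0 is the checkmark, index 1 the gray X.
const sources = ["images/checkmark.png", "images/gray_x.png"];

// Per-element click state, keyed by the element object itself
// (jQuery's .data('state') plays this role in the first answer).
const states = new Map();

// Flip the element's state and return the image src it should now show.
function toggle(element) {
  const next = 1 - (states.get(element) || 0);
  states.set(element, next);
  return sources[next];
}
```

A click handler then just does img.src = toggle(img); since the element itself is the key, no per-row id needs to be generated at all.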
Foreign key and transaction I'm trying to use a transaction when creating a record in the group table and a record in the user-group relation table. It works OK when I don't use a transaction, so the naming of the attributes is correct. Here is the code: $db = Yii::app()->db; $transaction = $db->beginTransaction(); try { $model->attributes=$_POST['MyGroup']; $model->save(); $model->refresh(); $userMyGroup = new UserMyGroup(); $userMyGroup->IDMyGroup = $model->IDMyGroup; $userMyGroup->IDUser = Yii::app()->user->id; $userMyGroup->save(); $transaction->commit(); } catch (CDbException $ex) { Yii::log("Couldn't create group:".$ex->errorInfo[1], CLogger::LEVEL_ERROR); $transaction->rollback(); } The error is: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_UserMyGroup_MyGroup". The conflict occurred in database "MyDatabase", table "dbo.MyGroup", column 'IDMyGroup'.. The SQL statement executed was: INSERT INTO [dbo].[UserMyGroup] ([IDMyGroup], [IDUser]) VALUES (:yp0, :yp1). Bound with :yp0=4022, :yp1=1 The problem is probably that the saved model might not yet be in the database while saving the second model ($userMyGroup) with the foreign key. How to do the transaction correctly? EDIT: I've found out that the problem is caused by the audit module: it is trying to log the query, but can't, as it is in a transaction and not really saved yet in the database. I'm trying to figure out how to use this transaction along with the module... I've found out that the problem is caused by the audit module which I'm using: it is trying to log the query, but can't, as it is in a transaction and not really saved yet in the database. Unfortunately, I didn't figure out how to use this transaction along with the module, so the result is to disable the audit module on the classes used in transactions. The refresh method repopulates the active record with the latest data. While the transaction is not committed, the latest data is the existing data in the table. 
Move $model->refresh(); after $transaction->commit(); Thank you for your response, but it still doesn't work.
What is the analog to Fourier Transform for the sum of normals? In electrical engineering (my field) we use the Fourier Transform to represent an arbitrary signal as a sum of sinusoidal signals. I've stumbled upon a statistics problem where I want to decompose a distribution function as a sum of normal curves. Like the drawing below: I think there is an analog to the Fourier transform for this but, instead of frequency, the coefficients to be determined would be the mean and the standard deviation of each normal component. In college we see a lot of links between exponential and sinusoidal functions, which makes me think there is a missing link here that wasn't presented to me. I don't want a lecture, just someone to point me in the right direction. The decomposition of a Fourier transform "works" because of orthogonality of components, leading to a rather explicit way to express (perhaps approximately) a function as a sum of sines and cosines. Your normal distributions are not orthogonal in any easily discernible way, but we can think of fitting a sum of normal distributions ("Gaussians") to an input curve as a combination of linear and nonlinear optimizations. @hardmath does it mean that each term added with the Fourier will best approximate the original function independent of the next terms, but each gaussian component added with this normal decomposition depends on all subsequent terms to be a better approximation? Yes, depending on how you measure the "goodness" of the approximation. Fourier series work well with a least-squares measure of "error", and the errors (residuals) after a finite number of terms are orthogonal to the preceding terms. We don't have this nice property for Gaussian sums. A practical approach would be to identify peaks in the signal and locate Gaussians at those peaks. The heights and widths of the corresponding "normals" become the unknowns that you would optimize to get the best fit. 
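hardmath's point about the missing orthogonality can be checked numerically: the "inner product" of two distinct unit-variance normal densities is far from zero, which is why there is no Fourier-style projection formula for the coefficients. A small sketch of my own using only the standard library (the closed form for the overlap of N(mu1, 1) and N(mu2, 1) is exp(-(mu1-mu2)^2/4) / sqrt(4*pi)):

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def inner_product(mu1, mu2, lo=-10.0, hi=10.0, n=20000):
    """Approximate the integral of N(mu1,1)*N(mu2,1) by the trapezoid rule."""
    h = (hi - lo) / n
    total = 0.5 * (normal_pdf(lo, mu1) * normal_pdf(lo, mu2)
                   + normal_pdf(hi, mu1) * normal_pdf(hi, mu2))
    for i in range(1, n):
        x = lo + i * h
        total += normal_pdf(x, mu1) * normal_pdf(x, mu2)
    return total * h

# Two distinct "basis" curves clearly overlap: the integral is nonzero,
# unlike <sin(kx), sin(mx)> for k != m in a Fourier series.
overlap = inner_product(0.0, 1.0)
```

Since every pair of Gaussians overlaps like this, the decomposition has to be found by (generally nonlinear) least-squares fitting rather than by projecting onto each component, exactly as the comments suggest.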
You're probably looking for the Weierstrass transform, defined by $$ W[f](x) = \frac{1}{\sqrt{4\pi}}\int_{-\infty}^{\infty} f(y)\, e^{-(x-y)^2/4} \, dy. $$
Is there a way to get serializer or content-type from inside Catalyst::Controller::REST class? Generally it seems that the way Catalyst::Controller::REST works is that you put a reference into "entity" and then Catalyst::Action::Serialize picks a content-type and a serializer after you're done. In my case, I may be dealing with very large data and I can't hold the entire thing in memory at once (it's coming from a different server and I'm reformatting and returning it). If I knew what content-type Serialize was going to choose, I could transform the incoming data and write it to a file as it comes in and then serve it back out from disk. Is there any way for me to find out what content-type I'm being asked for beyond copying the code in Catalyst::Action::SerializeBase? The workaround will be to say "I don't care what you asked for, here's your JSON" but it'd be nice to actually provide what's requested. :) There is no stored serializer object. It creates that on the fly in https://metacpan.org/source/JJNAPIORK/Catalyst-Action-REST-1.20/lib/Catalyst/Action/Serialize.pm and that will be called automatically to handle the request. You might be able to stop the flow and forward to that thing later.
PERL: Check if file exists, email a flag to the user that the report didn't exist, zip the files and transfer them Can any one help me creating a zip file out of two files one a excel and one a pdf. Im doing this without the Archive::zip because I can not install it. I am trying to go through the directory and pick up two files an excel and pdf and then zip them and send an error message or alert that says it has been zipped. sub monthly_report_in { ### configure local variables $StatusP="false"; $StatusX="false"; local $id,$pw,$tpwd_id,$geek_pw,$reportm_date,$file_count,$geg_id,$geg_pw,$month_abbrv; local $org1="bfn"; local $org2="geg"; local $db_server_name=" "; local $home_dir=" "; local $archive_dir=" "; local $smb_server=""; local $smb_folder=""; local $smb_folder=""; local $reportm_temp=""; local $input_name2="RegistrationStatsexcel.xls"; local $zip_input_file="RegistrationByCity*.*"; ############################################################ # clean up and create file names # # gets the date, month, and year. # #Then creates zipped file named montly.month.year.zip # ############################################################ get_reportm_date($reportm_temp,$reportm_date); get_month_abbrv($month_abbrv); get_year($year); local $file_folder_name="$db_server_name"."."."$reportm_date"."."monthly"; local $sftp_dir=""; local $zipped_file="monthly_statsreport.$month_abbrv"."$year".".zip"; ############################################################## # configure email message content # # sends user the message that the file has been transferred # # or an error message that says there were no files # ############################################################## local $send_mail="email addy"; local $good_subject="$zipped_file file transferred to server: $DATE"; local $good_message="$good_subject"; local $error_messsage1="Error! 
No Monthly Reports Found: $DATE"; local $error_message1="No monthly reports were found.'\n\n' Contact The Help Desk.'\n\n'script name: $SCRIPT"; local $smb_subject=" $zipped_file transfered to server "; local $smb_message="$zipped_input_file transferred to server $smb_folder\n"; local $zero_subject = "Monthly Stats files are 0 bytes: $DATE"; local $zero_message = "Monthly report Stats files are 0 bytes in size.\n\n Please Contact the Help Desk ."; ######################################################### #creates new directory and changes to new directory # #get the ID and password for the organization # ######################################################### mkdir($current_dir); chdir($current_dir); get_id_and_pw($org1,$id,$pw); ########################################################## #smb transfer and archive pdf & excel file # ########################################################## checkif_fileexists($current_dir,$pdf_ext,$StatusP,$error_message,$good_message); checkif_fileexists($current_dir,$xls_ext,$StatusX,$error_message,$good_message); print "---$StatusP---\n"; if (($StatusP =~ "false") && ($StatusX =~ "false")) { good_mail($error_message,$error_subject1,$send_mail); } elseif (($StatusP =~ "zero") && ($StatusX =~ "true")) { good_mail($good_message,$good_subject,$send_mail); do_zip_files($current_dir,$zip_input_file); get_id_and_pw($org2,$geg_id,$geg_pw,$smb_server,$smb_folder,$input_file); smb_put($current_dir,$geg_id,$geg_pw,$smb_server,$smb_folder,$zip_folder,$zip_input_file); do_move($current_dir,$zip_input_file,$archive_dir,$zip_input_file); good_mail($smb_message,$smb_subject,$send_mail); } else { ### if pdf file exists, this will transfer the file and notify users if ($StatusP =~ "true") { do_zip_files($org2,$geg_id,$geg_pw,$smb_server,$smb_folder,$input_folder); local $error_subject_pdf="$input_file2 report not found: $DATE"; local $error_subject_message1a="monthly report $input_file2 was not found today. 
'n\n\'Compressed $input_file has been transferrd to $smb_server $smb_folder.'n\n\' Please Contact The Help check $SCRIPT"; good_mail($error_message1a,$error_subject1a,$send_mail); } elseif ($StatusP =! "true");{ do_zip_files($current_dir,$pdf_ext,$input_file2); get_id_and_pw($org2,$geg_id,$geg_id,$geg_pw); smb_put($current_dir,$geg_id,$geg_pw,$smb_server,$smb_folder,$input_file2); do_move($current_dir,$zip_input_file,$archive_dir,$input_file2); local $error_subject2="$input_file1 report not found: $DATE"; local $error_message2="Monthly report $input_file1 was not found today. 'n\n\' Please Contact The Help Desk $SCRIPT"; good_mail($error_message1,$error_subject1,$send_mail); } } chdir($home_dir); ### } Your subroutine won't compile. Is that part of a bigger program, or have you written it in isolation? If you just put what you have written through Perl it will give you a list of things that need fixing. Software shouldn't be written in a big chunk like that - you should write maybe six lines of code at a time before testing that it will at least compile. You should use local very rarely, and you should indent your code so that it reflects the structureof the algorithm. I don't think anyone can help you better than that before you make some improvements this is written in isolation. could you help me. I am in desperate need of understanding what I'm doing wrong. Do you need help to install Archive::Zip? Or is it a procedural limitation? There are other modules that will help you unpack a zip archive If you're modifying existing code then you really should say so. People will understand the problem better, as well as being more forgiving of the quality of code This isn't really an answer to your question, but it is way too big for a comment and should help you towards a solution. All I can do is offer a better-formatted version of your subroutine. I hope you can see how much easier it is to read? As I said in my comment, local is almost never the right thing to use. 
You should also write very small sections of code and test thoroughly as you write more. If you create a whole subroutine like that then it is pretty much bound to be wrong. One more thing, please don't just submit what I've written. I have no idea whether any of it is correct, and it is still a very lazy piece of programming. You should start by making sure that your subroutine is even being called, with just sub monthly_report_in { print "entered 'monthly_report_in'\n"; } and add functionality incrementally from there. Here's the reformat. Please treat it with suspicion sub monthly_report_in { ### Configure local variables $StatusP = 'false'; $StatusX = 'false'; my ($id, $pw, $tpwd_id, $geek_pw, $reportm_date, $file_count, $geg_id, $geg_pw, $month_abbrv); my ($org1, $org2) = qw/ bfn geg /; my $db_server_name = ' '; my $home_dir = ' '; my $archive_dir = ' '; my $smb_server = ''; my $smb_folder = ''; my $smb_folder = ''; my $reportm_temp = ''; my $input_name2 = 'RegistrationStatsexcel.xls'; my $zip_input_file = 'RegistrationByCity*.*'; ############################################################ # clean up and create file names # # gets the date, month, and year. # # Then creates zipped file named montly.month.year.zip # ############################################################ get_reportm_date($reportm_temp, $reportm_date); get_month_abbrv($month_abbrv); get_year($year); my $file_folder_name = "${db_server_name}.${reportm_date}.monthly"; my $sftp_dir = ''; my $zipped_file = "monthly_statsreport.${month_abbrv}${year}.zip"; ############################################################## # configure email message content # # sends user the message that the file has been transferred # # or an error message that says there were no files # ############################################################## my $send_mail = 'email addy'; my $good_subject = "$zipped_file file transferred to server: $DATE"; my $good_message = $good_subject; my $error_messsage1 = "Error! 
No Monthly Reports Found: $DATE"; my $error_message1 = "No monthly reports were found.\n\nContact The Help Desk.\n\nScript name: $SCRIPT"; my $smb_subject = "${zipped_file} transfered to server "; my $smb_message = "${zipped_input_file} transferred to server ${smb_folder}\n"; my $zero_subject = "Monthly Stats files are 0 bytes: $DATE"; my $zero_message = "Monthly report Stats files are 0 bytes in size.\n\nPlease Contact the Help Desk."; ######################################################### # creates new directory and changes to new directory # # get the ID and password for the organization # ######################################################### mkdir($current_dir); chdir($current_dir); get_id_and_pw($org1, $id, $pw); ########################################################## # smb transfer and archive pdf & excel file # ########################################################## checkif_fileexists($current_dir, $pdf_ext, $StatusP, $error_message, $good_message); checkif_fileexists($current_dir, $xls_ext, $StatusX, $error_message, $good_message); print "---${StatusP}---\n"; if ( $StatusP eq 'false' and $StatusX eq 'false') { good_mail($error_message, $error_subject1, $send_mail); } elsif ($StatusP eq 'zero' and $StatusX eq 'true') { good_mail($good_message, $good_subject, $send_mail); do_zip_files($current_dir, $zip_input_file); get_id_and_pw($org2, $geg_id, $geg_pw, $smb_server, $smb_folder, $input_file); smb_put($current_dir, $geg_id, $geg_pw, $smb_server, $smb_folder, $zip_folder, $zip_input_file); do_move($current_dir, $zip_input_file, $archive_dir, $zip_input_file); good_mail($smb_message, $smb_subject, $send_mail); } else { ### If the PDF file exists, this will transfer the file and notify users if ($StatusP eq 'true') { do_zip_files($org2, $geg_id, $geg_pw, $smb_server, $smb_folder, $input_folder); my $error_subject_pdf = "$input_file2 report not found: $DATE"; my $error_subject_message1a = "monthly report ${input_file2} was not found 
today.\n\nCompressed ${input_file} has been transferred to ${smb_server} ${smb_folder}.\n\nPlease Contact The Help check $SCRIPT"; good_mail($error_message1a, $error_subject1a, $send_mail); } elsif ($StatusP ne 'true') { do_zip_files($current_dir, $pdf_ext, $input_file2); get_id_and_pw($org2, $geg_id, $geg_id, $geg_pw); smb_put( $current_dir, $geg_id, $geg_pw, $smb_server, $smb_folder, $input_file2 ); do_move($current_dir, $zip_input_file, $archive_dir, $input_file2); my $error_subject2 = "${input_file1} report not found: $DATE"; my $error_message2 = "Monthly report ${input_file1} was not found today.\n\nPlease Contact The Help Desk $SCRIPT"; good_mail($error_message1, $error_subject1, $send_mail); } } chdir $home_dir; } My apologies, I'm editing this for someone and I'm not used to it. I'm cleaning up someone else's code. I thought I could help but got confused since I couldn't use Archive::Zip If your colleague writes code like that then they need all the help they can get Give them plenty of chocolate and strong coffee and get them off the local habit I could tell this person doesn't need any of that at all. more like some yoga and a hug. Hugs and yoga are spiritual chocolate and coffee. Give them both. And a massage I didn't see the previous posts, but there is a limitation: we can not install Archive::Zip or any other module doh! and yes it is previous code. I'm just subbing and assisting -doing a terrible job. I haven't seen perl since 98 @user3689280: There is nothing in that subroutine that works with zip files. If you need to extract from a zip then you need a module to do it. What are your managers thinking? The only alternative is to shell out to gzip or similar, and that makes your code a thousand times more fragile straight away there are lower level modules that make it zip. Borodin! What would be a good place to do a refresher on perl? - thanks Bria @user3689280: You shouldn't post questions in the comments section of another question. 
Stack Overflow is primarily a resource where people can look up problems that other people have had and see how it was resolved. It is pretty much impossible to find questions posted in side comments like that, so start a new post. You also need to explain what the code is doing that is wrong and show any output if possible. Also explain how comp_files is called (what are the four parameters?) and post the code for get_file_size_2.
common-pile/stackexchange_filtered
"Special case" of Brianchon's theorem for any conic section. Let $c$ be an arbitrary conic section. Choose $6$ distinct points on $c$. Draw $6$ lines $t_1,t_2,\dots,t_6$ through these points that are tangents to $c$. Denote points $T_1=t_1\cap t_2,\ T_2=t_2\cap t_3,\ \cdots,\ T_6=t_6\cap t_1$. Then the lines $\overleftrightarrow{T_1T_4}$, $\overleftrightarrow{T_2T_5}$, $\overleftrightarrow{T_3T_6}$ meet at one point. I would like to get a reasonable explanation on how this can work, or a rigorous proof. I couldn't find a starting point for this. What I am familiar with is the proof of the classic Brianchon's theorem for a hexagon circumscribed around a circle, using radical axes; it's easy to prove, but I am struggling with proving this for a general conic section. Thanks for any tips and hints. There are two topics important to constructing a proof: polarity and duality. (On a conic polar and dual are somewhat 'interchangeable') A few things are interesting about the polar/dual of Brianchon's theorem. If you have demonstrated that the theorem holds for an arbitrary hexagon circumscribed around a circle, including a non-convex one (i.e. with points numbered in a different order), then you can build the general proof on that. Any non-degenerate real conic section is equivalent to any other under a projective transformation. A projective transformation preserves incidence, which implies that tangents will remain tangents. Thus there always exists a projective transformation to turn the general case of a conic into the special case of the circle without loss of generality. For ellipses, you could show the same using affine transformations only. But for parabolas or hyperbolas, you need projective transformations to turn them into circles. As Patrick Abraham suggested, another way to demonstrate this is using the point-line duality of projective geometry. Take the theorem statement and exchange the terms “point” with “line”. Exchange “point on line” with “line through point”. 
Exchange “point where two lines intersect” with “line connecting two points”. Exchange “point on conic” with “tangent to conic”. Exchange “collinear” with “concurrent”. Similar for other related formulations. In the end the theorem about six tangents to a conic leading to three concurrent lines becomes a theorem about six points on a conic leading to three collinear points: Pascal's theorem. Both suggested approaches make use of projective geometry. That's a very natural setup to use when working with arbitrary conic sections, so if you are not familiar with it, I suggest you familiarize yourself since doing so will likely be easier and more generally useful than finding a non-projective proof for this specific scenario. At least in my personal opinion, which might be biased.
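To make the polarity concrete: in homogeneous coordinates a non-degenerate conic can be written as $x^{\top} Q x = 0$ for a symmetric invertible matrix $Q$, and the pole-polar correspondence is a short sketch away (notation here is mine, not from the answers above):

```latex
% Polarity induced by the conic x^T Q x = 0 (Q symmetric, invertible).
% The pole p is sent to its polar line:
\[
  p \;\longmapsto\; \ell_p \;=\; \{\, x : p^{\top} Q x = 0 \,\}.
\]
% If p lies on the conic, \ell_p is exactly the tangent at p.
% Polarity swaps incidence:
\[
  q \in \ell_p \;\iff\; p \in \ell_q ,
\]
% which is why concurrent lines dualise to collinear points,
% turning Brianchon's theorem into Pascal's theorem and back.
```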
How do I install mingw-std-threads on Ubuntu? I am downloading the Bitcoin source code for Windows, and trying to compile it using these steps. I am receiving these errors when I make my Bitcoin source code on Windows Subsystem for Linux error: ‘mutex’ in namespace ‘std’ does not name a type mutable std::mutex mutex; It would appear that the sudo apt install g++-mingw-w64-x86-64 package does not include some important threading stuff I need to run my Bitcoin source code. With a bit of digging, it seems that I need to additionally install mingw-std-threads How do I do this? uhm... why're you trying to compile Bitcoin within WSL? That's bound to have problems. uhm... That's one of the standard ways suggested on the Bitcoin Github. https://github.com/bitcoin/bitcoin/blob/master/doc/build-windows.md that link 404s... Works for me... It looks like you missed a part of the instructions. On Ubuntu 16.04, you have to run: sudo apt install software-properties-common sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu zesty universe" sudo apt update sudo apt upgrade sudo update-alternatives --config x86_64-w64-mingw32-g++ # Set the default mingw32 g++ compiler option to posix. On Ubuntu 17.10+: sudo update-alternatives --config x86_64-w64-mingw32-g++ # Set the default mingw32 g++ compiler option to posix.
Installing zend-server php 5.3 on debian 7.x using chef cookbook When trying to install zend server (6.3) with php5.3 on debian 7.x (wheezy) using the zendserver cookbook for Chef I get the following error: (needs to be php5.3 because of old code in PHP application) ==> default: [2014-12-13T16:36:23+00:00] INFO: Starting install for package zend-server-php-5.3 ==> default: ==> default: ================================================================================ ==> default: Error executing action `install` on resource 'apt_package[zend-server-php-5.3]' ==> default: ================================================================================ ==> default: ==> default: Mixlib::ShellOut::ShellCommandFailed ==> default: ------------------------------------ ==> default: Expected process to exit with [0], but received '100' ==> default: ---- Begin output of apt-get -q -y install zend-server-php-5.3=6.3.0+b41 ---- ==> default: STDOUT: Reading package lists... ==> default: Building dependency tree... ==> default: Reading state information... ==> default: Some packages could not be installed. This may mean that you have ==> default: requested an impossible situation or if you are using the unstable ==> default: distribution that some required packages have not yet been created ==> default: or been moved out of Incoming. ==> default: The following information may help to resolve the situation: ==> default: The following packages have unmet dependencies: ==> default: zend-server-php-5.3 : Depends: zend-server-php-5.3-common (= 6.3.0+b41) but it is not going to be installed ==> default: Depends: libapache2-mod-php-5.3-zend-server(>= 5.3.21) but it is not going to be installed ==> default: STDERR: E: Unable to correct problems, you have held broken packages. 
==> default: ---- End output of apt-get -q -y install zend-server-php-5.3=6.3.0+b41 ---- ==> default: Ran apt-get -q -y install zend-server-php-5.3=6.3.0+b41 returned 100 ==> default: Resource Declaration: ==> default: --------------------- ==> default: # In /tmp/vagrant-chef-3/chef-solo-1/cookbooks/zendserver/recipes/default.rb ==> default: ==> default: 65: package package_name do ==> default: 66: :install ==> default: 67: notifies :restart, 'service[zend-server]', :immediate ==> default: 68: end ==> default: 69: ==> default: ==> default: Compiled Resource: ==> default: ------------------ ==> default: # Declared in /tmp/vagrant-chef-3/chef-solo-1/cookbooks/zendserver/recipes/default.rb:65:in `from_file' ==> default: ==> default: apt_package("zend-server-php-5.3") do ==> default: action :install ==> default: retries 0 ==> default: retry_delay 2 ==> default: default_guard_interpreter :default ==> default: package_name "zend-server-php-5.3" ==> default: version "6.3.0+b41" ==> default: timeout 900 ==> default: cookbook_name :zendserver ==> default: recipe_name "default" ==> default: end ==> default: [2014-12-13T16:36:23+00:00] INFO: Running queued delayed notifications before re-raising exception ==> default: [2014-12-13T16:36:23+00:00] ERROR: Running exception handlers ==> default: [2014-12-13T16:36:23+00:00] ERROR: Exception handlers complete ==> default: [2014-12-13T16:36:23+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out ==> default: [2014-12-13T16:36:23+00:00] ERROR: apt_package[zend-server-php-5.3] (zendserver::default line 65) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '100' ==> default: ---- Begin output of apt-get -q -y install zend-server-php-5.3=6.3.0+b41 ---- ==> default: STDOUT: Reading package lists... ==> default: Building dependency tree... ==> default: Reading state information... ==> default: Some packages could not be installed. 
This may mean that you have ==> default: requested an impossible situation or if you are using the unstable ==> default: distribution that some required packages have not yet been created ==> default: or been moved out of Incoming. ==> default: The following information may help to resolve the situation: ==> default: ==> default: The following packages have unmet dependencies: ==> default: zend-server-php-5.3 : Depends: zend-server-php-5.3-common (= 6.3.0+b41) but it is not going to be installed ==> default: Depends: libapache2-mod-php-5.3-zend-server(>= 5.3.21) but it is not going to be installed ==> default: STDERR: E: Unable to correct problems, you have held broken packages. ==> default: ---- End output of apt-get -q -y install zend-server-php-5.3=6.3.0+b41 ---- ==> default: Ran apt-get -q -y install zend-server-php-5.3=6.3.0+b41 returned 100 ==> default: [2014-12-13T16:36:23+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1) Chef never successfully completed! Any errors should be visible in the output above. Please fix your recipes so that they properly complete. It looks like this command is being executed: $ apt-get -q -y install zend-server-php-5.3=6.3.0+b41 And it gives this output: Reading package lists... Building dependency tree... Reading state information... Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: zend-server-php-5.3 : Depends: zend-server-php-5.3-common (= 6.3.0+b41) but it is not going to be installed Depends: libapache2-mod-php-5.3-zend-server (>= 5.3.21) but it is not going to be installed E: Unable to correct problems, you have held broken packages. I'm stuck here and don't know what to do. 
Can anybody help me please? Not too certain this is still applicable, but we had quite an issue installing Zend Server 5.3 on wheezy. We got it running after much fudging but ended up in a situation where the SSL libs were invalid and had to run Apache with non-SSL sites! If you're still after a solution my advice would be to add these to your /etc/apt/sources.list and install the squeeze PHP 5.3 stack (it works quite well). deb http://ftp.us.debian.org/debian/ squeeze main contrib non-free deb-src http://ftp.us.debian.org/debian/ squeeze main contrib non-free HTH
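If you go the squeeze-repository route, it can also help to pin squeeze at a low priority so wheezy stays the default source and only the explicitly requested PHP 5.3 packages come from squeeze. A hypothetical pin (the file name and priority value are assumptions, adjust to taste):

```
# /etc/apt/preferences.d/squeeze-pin  (hypothetical)
# Keep wheezy as the default release; packages from squeeze are only
# installed when requested explicitly (e.g. the php 5.3 stack).
Package: *
Pin: release n=squeeze
Pin-Priority: 200
```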
Equidistribution of distances of integer points to a circle I have noticed in the following graph that the distance between points $k \in\mathbb{Z}^2\cap C_7^1$ ($C_7^1$ := the circle with radius 7 and shell with thickness 1) and the nearest point on the inner circle is quite random. I did some further investigations in R and figured out that the distances for large radii are seemingly uniformly distributed. This leads to the following question. Let $C_r^1$ be the centered circle with radius r and shell with thickness 1. We denote by $A_r$ the set $C_r^1 \cap \mathbb{Z}^2$. We replace each element in the set $A_r$ with the corresponding distance to the nearest point on the centered circle with radius r. Now I want to prove that for every subinterval $[a,b]$ in $[0,1]$ we have $$ \lim_{r \rightarrow \infty} \frac{\text{card}(A_r\cap [a,b])}{\text{card}(A_r)}=b-a. $$ Does anyone have an idea how to do this proof? Assuming that the points are uniformly distributed within the intersection of circles of radius $7$ and $8$, there should be more points towards the outer circle, since there is more area where points can be. Thus, the distances will not have a uniform distribution, since there will be a skew towards larger distances. Thank you for your feedback. I did some further investigations and I could not find a "skew" effect. @Bergson: Since the thickness of the shell is fixed at $1$ while the radius goes to $\infty$, the difference in densities decreases with $\frac1r$ and doesn't survive the limit being considered here. Are you aware of the Gauss circle problem? I don't know whether the existing bounds for that decide your question either way, but it would certainly make sense to look at the methods used to derive them.
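One way to make the statement concrete (my notation, writing a lattice point as $k=(m,n)$): the distance in question is simply $\sqrt{m^2+n^2}-r$, so the conjecture is an equidistribution statement for these values in $[0,1]$:

```latex
% Distance of a lattice point k = (m, n) in the shell
% r \le |k| \le r + 1 to the inner circle of radius r:
\[
  d(k) \;=\; \sqrt{m^{2}+n^{2}} \;-\; r \;\in\; [0,1],
\]
% so the claimed limit reads: for every 0 \le a \le b \le 1,
\[
  \lim_{r \to \infty}
  \frac{\#\{\, k \in A_r \;:\; a \le \sqrt{m^{2}+n^{2}} - r \le b \,\}}
       {\#\, A_r}
  \;=\; b - a .
\]
% In other words, the values \sqrt{m^2+n^2} - r should become
% uniformly distributed in [0,1]; counting the points with
% d(k) \le t amounts to counting lattice points in thin annuli,
% which is where Gauss-circle-problem techniques enter.
```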
Print margin at position Xpx (visual only) I'm using tinyMCE v4. Is it possible to create a print margin (visual only) at a custom position (e.g. 700px)? In the image below, the print margin is marked in orange. TinyMCE has no built-in way to do this. You could use CSS to apply something to the <body> of the document inside the editor to get that effect.
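For example (a sketch, not a built-in TinyMCE feature): a stylesheet loaded into the editable document, e.g. via the editor's `content_css` option, can paint a thin vertical rule at 700px using a background gradient, so no extra DOM elements are needed:

```css
/* Hypothetical content stylesheet for the editor iframe:
   draws a 2px orange "print margin" line at 700px. */
body {
  background-image: linear-gradient(
    to right,
    transparent 700px,
    orange 700px,
    orange 702px,
    transparent 702px
  );
  background-repeat: no-repeat;
}
```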
Converting OCaml to F#: OCaml equivalent of F# spec, specifically initialization In the process of converting the OCaml Format module to F# I find that I need to understand the initialization process in detail. For F# this is explained in section 12.5 Program Execution of the F# spec. While the OCaml documentation page lists several good documents, I am unable to find any document that gives the same level of detail as found in the F# spec. Are there any documents that give the corresponding level of detail for OCaml initialization? Ultimately an OCaml program is a series of module implementations. Evaluation of modules is described (extremely briefly) in Section 6.11.2 of the OCaml manual. In essence, each top level form is evaluated in turn. I suspect you need more detail than this, but I don't know where to look. Maybe you have a more specific question? @JeffreyScofield There is a more specific question somewhere down the line, but at present I can't even formulate it. When I post those kinds of half-understood questions here they only put a burden on those answering. I find it better for me to do as much research as possible before asking. I think your best bet is http://caml.inria.fr/pub/docs/manual-ocaml-4.00/language.html - evaluation order is mostly top-down left-to-right, but there are notable exceptions in the core language, where it is unspecified (for example, tuple or record components). The module language has fewer surprises. When I originally did research before asking this question I could not find what I needed in a document. The only way I know of to get the detail needed about initialization is to read the source code. In short, is there a specification manual for OCaml like the one for F#? No. I spent an hour today looking and still could not find one. I googled, checked some OCaml mailing lists and looked over all of the documents from the OCaml site. Others in an OCaml mailing list also noted the lack of an OCaml specification manual. 
As always with these "no" answers, if someone does answer here with a reference to the OCaml spec manual like the one for F# then I will gladly give them the accept vote. I can't offer any information about OCaml module initialization, but I did port the Format module to F# as part of my FSharp.Compatibility project. If you want to have a look, it's available here: https://github.com/jack-pappas/FSharp.Compatibility/tree/master/FSharp.Compatibility.OCaml.Format
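For what it's worth, the one-sentence rule from Section 6.11.2 — top-level forms of each module implementation are evaluated in turn, modules in link order — can be illustrated with a two-file sketch (file names are made up):

```ocaml
(* a.ml — linked first, so its top-level bindings are evaluated first *)
let () = print_endline "initialising A"
let value = 42

(* main.ml — linked after a.ml, so it sees A fully initialised *)
let () = Printf.printf "A.value = %d\n" A.value

(* Building with `ocamlopt a.ml main.ml -o demo` and running prints
   "initialising A" before "A.value = 42", because link order fixes
   the order in which the modules' top-level effects run. *)
```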
Is the interesting/ignored tags feature having problems? I thought adding an interesting/ignored tag would update the question list accordingly. Now it gives an error trying to do the update ('k' is undefined). It will remove the highlighting for questions with an interesting tag and show questions that were previously hidden due to my ignored tags. I'm using Firefox 3.6.13, and I've already tried clearing my cache. My bad, I introduced a bug when changing the way interesting/ignored highlighting works on /tagged pages. A fix will go out with the next build.
How to pass in URI variable values to a URL Forgive the rookie question-- how do you pass in the value for a URI variable in a URL? (this is for a spring boot app making a REST call) for example if I have the url: "http://example.com/hotels/{hotel}/bookings/{booking}" And assuming 'hotel' and 'booking' have String values, how do I pass these in? how do you create a request? Do you use rest template? If yes then you can pass them as uri variables to the method and the template will replace them with values. @Reddy yes I use a rest template. right now I'm using restTemplate.exchange() Someone please give a detailed answer to this Question, My scenario is the same- I am using restTemplate.exchange() how do i pass a variable value from a method parameter? import org.springframework.web.util.UriTemplate; import java.net.URI; UriTemplate uritemplate = new UriTemplate("http://example.com/hotels/{hotel}/bookings/{booking}"); URI uri = uritemplate.expand(hotel, booking); Good answers to this question are surprisingly difficult to find. Spring doc has some useful examples. Easy, let's say a {hotel} is called "Casket" and {booking} has a value of "Blue" So your route: "http://example.com/hotels/{hotel}/bookings/{booking}", should look like: "http://example.com/hotels/Casket/bookings/Blue" All you need to do is call that URL and read the response the server gives you But is there a way to do it without just placing the values in the URL - for example let's say there's a method getHotel() and getBooking() for a particular customer object, and I want to do something like hotel = customer.getHotel() I'm not sure I understand what you mean. You could create the request string dynamically, but those values need to be in the URL, that's the whole point of routes like those and that's how GET requests work.
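Under the hood, the Spring variants above (`UriTemplate.expand`, and the trailing varargs on `restTemplate.exchange`/`getForObject`) all perform token substitution on the `{name}` placeholders. A self-contained sketch of the idea — this is not Spring's actual implementation, which additionally URL-encodes the values:

```java
import java.util.Map;

public class UriTemplateDemo {
    // Minimal stand-in for Spring's UriTemplate.expand: replace each
    // {name} token in the template with its value from the map.
    static String expand(String template, Map<String, String> vars) {
        String result = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String url = expand(
            "http://example.com/hotels/{hotel}/bookings/{booking}",
            Map.of("hotel", "Casket", "booking", "Blue"));
        System.out.println(url);
        // http://example.com/hotels/Casket/bookings/Blue
    }
}
```

With the real `RestTemplate`, the same substitution happens when you pass the values as the trailing arguments, e.g. `restTemplate.exchange(url, HttpMethod.GET, null, String.class, hotel, booking)`.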
How can I rearrange paragraphs of information into a form which can be easily taken to csv? I'm new to PS and I am currently trying to make a PS script which can rearrange paragraphs with specific information into a form which can be easily taken to csv file. The initial information looks like that: IP Address: <IP_ADDRESS> Host Name: Test Domain: contoso.com IP Address: <IP_ADDRESS> Host Name: Test2 Domain: contoso.com And i want to rearrange this info to look like that: IP Address; Host Name; Domain; <IP_ADDRESS>; Test; contoso.com <IP_ADDRESS>; Test2; contoso.com Would it be possible to do it and could you give me some examples? Thanks in advance! Hi. Can you post the code you used already? It goes a long way to showing these folks way smarter than me how you got what you did. Here is a solution : Sample input file : IP Address: <IP_ADDRESS> Host Name: Test Domain: contoso.com IP Address: <IP_ADDRESS> Host Name: Test2 Domain: contoso.com Script Clear-Host #loads the input file and removes the blank lines $raw = Get-Content "G:\input\rawinfo.txt" | Where-Object { $_ -ne "" } #declare results array $results = @() #foreach line in the input file $raw | % { #split the line on ": " $data = $_ -Split ": " #switch on first cell of data switch ($data[0]) { #store ip address "IP Address" { $ipaddress= $data[1] } #store host name "Host Name" { $hostname = $data[1] } #store domain "Domain" { $domain = $data[1] #since this is the last field for each item #build object with all fields $item = [PSCustomObject]@{ "IP Address" = $ipaddress; "Host Name" = $hostname; "Domain" = $domain; } #add object to results array $results += $item } } } #output results array $results Example output : IP Address Host Name Domain ---------- --------- ------ <IP_ADDRESS> Test contoso.com <IP_ADDRESS> Test2 contoso.com You can then pipe it to Export-Csv : $results | Export-Csv "host_info.csv" -Delimiter ";" -NoTypeInformation I am going to assume that the information is on a text file First you 
need to import the file using Import-Csv, process the data and export it as NewCSV.csv. Here is a code example: Import-Csv C:\test.txt -Delimiter ":" -Header "Name", "Value" |export-csv c:\test.csv -NoTypeInformation This should create a file that looks like this: "Name","Value" "IP Address","<IP_ADDRESS>" "Host Name","Test" "Domain","contoso.com" "IP Address","<IP_ADDRESS>" "Host Name","Test2" "Domain","contoso.com" You can change the delimiter using -Delimiter ";" Import-Csv C:\test.txt -Delimiter ":" -Header "Name", "Value" |export-csv c:\test.csv -NoTypeInformation -Delimiter ";" Thanks for answering. Maybe I was misunderstood. Actually I'm trying to sort out all IP addresses under one column, in a second column all host names and in a third column all domain names. Sorry, I misunderstood your question. I'll update as soon as I find a solution.
Prevent software from logging out by interacting I have some software that will log me out if I don't interact with it for a given amount of time (as a security option). However I don't want that to happen, so I'm trying to find a way to automatically interact with it every 10 minutes, for example. How can I do that? Possibly in the background. AutoHotKey again. Here's a detailed article on how to get a key press on a set time. This can be modified to other buttons, or mouse movement if need be.
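Since the linked article may move, here is the general shape of such a script (a hypothetical sketch in AutoHotkey v1 syntax — adjust the interval or swap the mouse nudge for a harmless key press, depending on what the software counts as "interaction"):

```autohotkey
; Runs in the background and nudges the mouse one pixel every 10 minutes,
; which most idle-timeout checks count as user activity.
#Persistent
SetTimer, Nudge, 600000   ; 600000 ms = 10 minutes
return

Nudge:
MouseMove, 1, 0, 0, R     ; move 1 px right (relative)...
MouseMove, -1, 0, 0, R    ; ...and back, so the pointer ends up where it was
return
```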
Finding out what makes your file size big? Is there a way to see exactly which objects/textures/etc in my scene take up the most memory? My scenes often get rather big, bigger than I feel they should be, and rendering them obviously then tends to take quite some time. So is there a way to see what uses the most data/memory? Thanks. (I'm not at liberty to share these files.) Hi, your files are not needed, imho. Have you seen this https://blender.stackexchange.com/questions/72098/is-there-an-easy-way-to-see-what-is-consuming-the-most-memory ? Also https://blender.stackexchange.com/questions/67630/blender-2-78-file-size-grows-and-does-never-shrink A real answer to this question would be really nice. So far the best I can find is to "delete randomly and check file size". https://blender.stackexchange.com/a/45665/33487 I second the comment from @SephReed about a "real answer." "Delete objects and check again" isn't very helpful.
Netgear router becomes unresponsive frequently I have a Netgear WNR3500v2 router that becomes unresponsive frequently (once a day or so); all connections through the router drop and all pings to the router time out. New computers time out when trying to connect wirelessly, and wired devices don't get assigned an IP address. So far, my only remedy has been to restart the router. How should I debug the problem? Is there a known problem with the router? More details: The network consists of two Macs (one running Mountain Lion), iOS devices, two printers, and Boxee I couldn't find a direct correlation between any event or time and the router's unresponsiveness, but it has become more frequent recently How long have you had this router? This could be a sign that it's dying; remember, it's a device that's basically on 24 hours a day. You can try switching it for another one to see if it keeps happening. The router is just over a year old. It's unlikely that the router is dying quite yet. When you get the problem again, connect one of the laptops directly to your modem. If you get normal connectivity then it's definitely the router.
Gatsby Lifecycle API question relating to Create Node and Create Page I am trying to learn Gatsby by writing a plugin using the lifecycle API. I am using the Gatsby default starter and I put in my local plugin's gatsby-node.js some debug statements. Here's the output of gatsby build: - creating node: [Site] children: [] - creating node: [Directory] children: [] - creating page: [/404/] json name [404-22d] - creating page: [/] json name [index] - creating page: [/page-2/] json name [page-2-fbc] - creating node: [SitePage] path: [/404/] children: [] - creating node: [SitePage] path: [/] children: [] - creating node: [SitePage] path: [/page-2/] children: [] - creating page: [/404.html] json name [404-html-516] - creating node: [SitePage] path: [/404.html] children: [] Here's my gatsby-node.js in my local plugin: function onCreateNode({node, loadNodeContent}) { let message = `- creating node: [${node.internal.type}]`; if (node.path) { message += ` path: [${node.path}]`; } if (node.children) { message += " children: " + JSON.stringify(node.children); } if (node.owner) { message += ` owner: [${node.internal.owner}]`; } console.log(message); } function onCreatePage({page, actions}) { let message = `- creating page: [${page.path}] json name [${page.jsonName}]`; console.log(message); } exports.onCreateNode = onCreateNode; exports.onCreatePage = onCreatePage; I don't understand the following: Why are there two create page calls for 404, one for /404/ and the other for /404.html? It appears in the debug statements that create page is called before create node (see above where creating page precedes creating node). Why is that? Shouldn't it be the other way around? (maybe the callbacks are done in a different order?) Ultimately, I don't fully understand the relationship between a node and a page. Is there a relationship? Thanks in advance. 1. Why are there two create page calls for 404, one for /404/ and the other for /404.html? 
By default all gatsby generated pages, except for index, are put in a folder with an index.html file in it, i.e. my-domain.com/page1 will have a page1 folder with index.html in it. Why? I'm guessing that if someone turns off javascript they'd still get the nice url (my-domain.com/page1/ instead of my-domain.com/page1.html). The reason why 404 is copied from 404/ to 404.html is because many static site hosts expect site 404 pages to be named this. — source 2. It appears in the debug statements that create page is called before create node. Why is that? This has tripped me up a few times too & I'm still not too sure why. The createPages api is definitely called after sourceNodes, so here's my guess: onCreateNode & onCreatePage are triggered asynchronously; however, gatsby will call these hooks of each individual plugin serially. Perhaps by the time it's your turn (your gatsby-node.js hooks will be called last), gatsby has already moved on to the next steps (again, just speculation). Also, keep in mind that when createPage is run, gatsby doesn't actually write out the page with data, it just keeps track of the page's metadata stuff. Here are some helpful resources: Gatsby: behind the scenes Gatsby: how apis are run 3. Is there a relationship (between a node and a page)? Not really. The user is the one who creates the relationship between node and page by writing graphql queries that request data for a page. By default, gatsby includes this plugin for all sites, which turns files in src/pages/ into pages. Users still have to query the data for those pages themselves. So except for the node created for each page by the internal plugin data bridge, there's no relationship between node & page. Thank you for your response. Even though it did not fully answer my questions, it provided some reference materials for me to go over.
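The folder-per-page convention described in point 1 is easy to sketch in plain JavaScript (illustrative only — this is not Gatsby's actual code):

```javascript
// Map a page path to the file a static build would emit:
// "/" and "*.html" paths keep their names; everything else becomes
// a folder with an index.html inside, so URLs work without ".html".
function outputFileFor(pagePath) {
  if (pagePath === "/") return "index.html";
  if (pagePath.endsWith(".html")) return pagePath.slice(1); // "/404.html" -> "404.html"
  return pagePath.replace(/^\/|\/$/g, "") + "/index.html";  // "/page-2/" -> "page-2/index.html"
}

console.log(outputFileFor("/"));         // index.html
console.log(outputFileFor("/page-2/")); // page-2/index.html
console.log(outputFileFor("/404.html")); // 404.html
```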
jQuery Slimscroll to HTML table with fixed header I am researching my way to use the slimscroll plugin with an HTML table, which is getting filled with business data from a web service. However as the title depicts, slimscroll scrolls the entire div, including the header of the table. I am trying to achieve a fixed header with tbody scrolling. <div class="slimscrolldiv"> <table> <thead></thead> <tbody></tbody> <tr> <td></td> </tr> . . . . . </table> </div> I am not posting any code but the above code snippet should suffice to understand the problem. If you need any other information, please post a comment. now the whole table got the slimscroll property?? One thing that worked for me (at least partially) was to create 2 tables, one for thead and one for tbody, and to wrap the slimscroll around the tbody table. The reason this works 'partially' is because you then have to play around with css to get the 2 table columns to line up.
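The two-table workaround from the answer looks roughly like this (markup sketch; the class names are made up, and the column-width juggling mentioned above is still up to your CSS):

```html
<!-- Header table stays outside the scroll area, so it never moves -->
<table class="grid-head">
  <thead><tr><th>Name</th><th>Value</th></tr></thead>
</table>

<!-- Only the body table is wrapped by slimscroll -->
<div class="slimscrolldiv">
  <table class="grid-body">
    <tbody>
      <!-- data rows filled in from the web service -->
    </tbody>
  </table>
</div>
```

The wrapper is then initialised as before, e.g. `$('.slimscrolldiv').slimScroll({ height: '250px' });`.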
Steps to create or steps to creating

I am writing some technical documentation and I got confused when I saw the following paragraph title:

Steps to creating a new thing* in production

I think the correct title should be:

Steps to create a new thing* in production

But I'm not an English native speaker, the person who wrote it is, so I'm not sure.

* Replace thing with the real thing we do on production :)

Your question seems to be a duplicate of another that has already been answered. Nevertheless, you just need to know that sometimes To is used as a preposition, and sometimes used as a verb. When it's used as a preposition, it may be followed by -ING. There are some verbs that are followed by To as a preposition: I'm addicted to playing - I'm allergic to sleeping - I'm used to staying up late.

I did a search before asking and the word "steps" in this case is not a verb, but a noun (the plural of step, not the third person present of to step). So I think the case is different. I need a native speaker or someone with a good command of English to tell me which is the right one in these cases.

Did you read the link I sent you? It says if the verb is performing a noun's function, then "To" will be performing the function of a preposition, so you can use it that way. Check out this post: http://english.stackexchange.com/questions/329/when-should-a-verb-be-followed-by-a-gerund-instead-of-an-infinitive And see the list of verbs that can and can not be followed by a gerund.

Let me understand, do you think steps is a verb or do you think creating is used as a noun?

creating a new thing is used as a noun, which makes to perform a preposition's function. It's hard to explain to you, you'd better take a look at the links I sent you. Check this out: http://english.stackexchange.com/questions/13386/is-to-ing-to-becoming-correct

Looks like it, but, in my opinion, it's incorrect.
The focus of the sentence is not on the creating action (these are not the steps to set up the creating environment), but on creating the thing we need on our prod server (these are the steps you need to follow to have the thing we need). In the example you linked (We're on track to becoming a developed nation.), the focus is on the process of becoming a developed nation: they are on track to become one, but they are not yet a developed nation. This is how that sentence sounds to me.

* We should always use the first form of the verb after the preposition 'to'.
* The correct sentence will be: "the steps to create a new thing."
* If you are using creating, you can use 'for' in front of that, just like: "the steps for creating a new thing."
Overlaying 2 or more shapes in a bitmap file created in C?

I was working on a program that will, depending on the input, draw shapes of different colors onto a bitmap file. It works fine if I just have to draw one shape, but if I, for example, take two or more shapes, it just draws over the old picture and the old one gets lost, but I need them to overlay to create more complex pictures. Is there a way, when I am writing to a bitmap file, to skip over parts I don't want to write over? I also tried making an array in which I would save all the pixel data, but that doesn't work if I take a bitmap of a size larger than 800x800, depending on the size of the type of the elements of the array. I am open to any suggestion and comment. Thank you in advance.

800x800x3 is about 2 megabytes, you probably hit the stack limit. Allocate data on the heap instead.

Why are you not using an image library, or the paint functions specific to the operating system?

Thank you very much, I hadn't thought about that, very stupid of me, now it's all working. I made a pixel structure array and allocated memory for it and it works perfectly. Now I just edit the pixels in this array that I want to edit and then write to the actual bitmap file.

You need to draw the second shape using a transparent background; how you would do that is entirely up to you, as you don't provide any information about what technology you are using.

Sorry for not mentioning all the details, I am doing this in C and using only standard C libraries. I figured that out but I don't know exactly what to do to get a transparent background. Thank you.

You need another bitmap to act as a mask. In that bitmap set the value to all 1s where the second shape is to be drawn and 0 elsewhere. Finally combine the result of the first draw with the second by: res1[i] = (res1[i] & ~mask[i]) | (res2[i] & mask[i]).
Here 'res1' is the result of the first draw, 'res2' the second, 'mask' the masking bitmap I described and 'i' is the pixel index, this selects the new value from res2 where the second draw is to occur and the existing value from res1 everywhere else.
Good book on asymptotics and especially equivalence

I'm trying to deepen my knowledge of asymptotic analysis and I find very few resources; most of them just state the definition and theorems. I'm especially looking to understand this definition more clearly $$ f(x)=g(x)(1+o(1))$$ sometimes written like this: $$u_n=(1+\epsilon(n))v_n$$

A classic is "Asymptotic Methods in Analysis" by N. G. de Bruijn. Hard copy about \$15, Kindle version \$10, both from Dover (on Amazon).

A short exposition with examples is Hildebrand's "A Short Course on Asymptotics". There are more extensive works around, but I like this one as it is reasonably rigorous while short and readable. Be careful, the notations are sometimes defined slightly (or blatantly) differently (Hildebrand's seem to be the current consensus; in computer science it is often also understood that the functions are positive throughout and the interest is in $n \to \infty$).

Great, thanks, I really wonder why it's so hard to find stuff like this in English! Much easier in my other language.

Edit: I gave it a look, the PDF is nice but doesn't tackle the case where one function isn't strictly positive after a certain point.
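For what it's worth, one standard way to read the definition in the question (this is the usual reading, not specific to any one book): $f(x)=g(x)(1+o(1))$ as $x\to\infty$ simply says $f(x)/g(x)\to 1$, often abbreviated $f \sim g$. A tiny worked example:

```latex
n^2 + 3n \;=\; n^2\left(1 + \frac{3}{n}\right) \;=\; n^2\,\bigl(1 + o(1)\bigr)
\qquad (n \to \infty),
```

since $3/n \to 0$. In the $u_n = (1+\epsilon(n))\,v_n$ notation, $\epsilon(n) = u_n/v_n - 1 \to 0$. Note that this reading presumes $v_n \neq 0$ eventually, which is exactly the caveat raised in the edit about functions that aren't strictly positive after a certain point.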
Where does Visual Studio stores the default browser to use in debug? I'm using Firefox as my default browser but when working in Visual Studio, I'd like to fire up IE when I go in debug. We all know that in MVC application, there's no way to choose the default browser unless you add a web form file, right click it, select browse with and then force a browser to be the default one. Great. My simple question is: where does VS stores the browser I just tell him to use (registry? project file? some xml config file?) I'm asking because VS loose this preference several times a month. I'm fed up with making the brower trick again and again. Thanks in advance, Fabian I found these settings eventually. They are stored in an XML file called browsers.xml in thge following directory: **C:\Documents and Settings\%USERNAME%\Local Settings\Application Data\Microsoft\Visual Studio\9.0** The XML should look like this: <?xml version="1.0"?> <BrowserInfo> <Browser> <Name>Firefox</Name> <Path>"C:\Program Files\Mozilla Firefox\firefox.exe"</Path> <Resolution>0</Resolution> <IsDefault>True</IsDefault> <DDE> <Service>FIREFOX</Service> <TopicOpenURL>WWW_OpenURL</TopicOpenURL> <ItemOpenURL>%s,,0xffffffff,3,,,</ItemOpenURL> <TopicActivate>WWW_Activate</TopicActivate> <ItemActivate>0xffffffff</ItemActivate> </DDE> </Browser> <Browser> <Name>Internet Explorer</Name> <Path>"C:\Program Files\Internet Explorer\IEXPLORE.EXE"</Path> <Resolution>0</Resolution> <IsDefault>False</IsDefault> <DDE> <Service>IExplore</Service> <TopicOpenURL>WWW_OpenURL</TopicOpenURL> <ItemOpenURL>"%s",,0xffffffff,3,,,,</ItemOpenURL> <TopicActivate>WWW_Activate</TopicActivate> <ItemActivate>0xffffffff,0</ItemActivate> </DDE> </Browser> <InternalBrowser> <Resolution>0</Resolution> <IsDefault>False</IsDefault> </InternalBrowser> </BrowserInfo> The <IsDefault> tag determines whether or not the browser is used for debugging. Thanks, I found the file. Next time VS tries to fool me, I'll have a look at it. 
Note that the file is located at C:\Users\<User>\AppData\Local\Microsoft\VisualStudio\10.0\ on Windows 7 with Visual Studio 2010.

Yes of course - I should have stated that I was using VS2008 and XP.

Well, replacing the file with an IE-configured one does not work, as when I hit F5 the file is replaced by Visual Studio. Weird. :'-(

I found the file, but this doesn't seem to work with Visual Studio 2010 and Windows 7.

Also, the path for this file can be simplified: %LOCALAPPDATA%\Microsoft\VisualStudio\<ver>\

Alternately you can use this extension: http://visualstudiogallery.msdn.microsoft.com/bb424812-f742-41ef-974a-cdac607df921/

Suggested from question: Visual Studio opens the default browser instead of Internet Explorer

And, yes. This works with ASP.NET MVC applications as well.
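If you wanted to flip the default programmatically rather than by hand, a minimal sketch with Python's standard xml.etree is below. The XML here is a trimmed, hypothetical stand-in for browsers.xml, and, per the comments above, Visual Studio may rewrite the file while it is running, so any edit should be made with VS closed. This is an illustration, not an official mechanism:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for browsers.xml (hypothetical content for illustration).
xml_text = """<BrowserInfo>
  <Browser><Name>Firefox</Name><IsDefault>True</IsDefault></Browser>
  <Browser><Name>Internet Explorer</Name><IsDefault>False</IsDefault></Browser>
</BrowserInfo>"""

root = ET.fromstring(xml_text)
for browser in root.findall("Browser"):
    # Make IE the debugging default and clear the flag on everything else.
    is_default = browser.find("IsDefault")
    is_default.text = "True" if browser.findtext("Name") == "Internet Explorer" else "False"

result = [(b.findtext("Name"), b.findtext("IsDefault")) for b in root.findall("Browser")]
print(result)  # [('Firefox', 'False'), ('Internet Explorer', 'True')]
```

For the real file you would parse with ET.parse(path), modify the tree the same way, and write it back with tree.write(path).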
mysql 8 using a wildcard in query when selecting a date

I'm currently using this query:

General error: 1525 Incorrect DATE value: '2020-02%'
The SQL being executed was:

SELECT '01', 2103, cssd._campaign_id, cssd.first, cssd.last, cssd.street,
       cssd.city, cssd.state, cssd.zip, cssd.customer_id, cssd.vin,
       cssd.email, cssd.phone, cssd.phone2, cssd.phonecell
FROM combined_sales_service_data cssd
WHERE cssd._campaign_id = 25
  AND cssd.last_date >= '2020-02%'

last_date is a DATE field. This used to work in 5.7; however, in 8.0 I'm getting the date error above. Now I can update the date to 2020-02-01, but I would like to be able to use the wildcard in some situations. Is there a better way to formulate this statement without setting ALLOW_INVALID_DATES? Thanks

You really can't compare a date with a wildcard in a less-than/greater-than comparison. Pick a specific date in February (such as Feb 1st).

% is a wildcard only if you use it with the operator LIKE. Your query worked coincidentally because the character '%' is considered less than the character '-' which follows the month in a DATE value. Don't use it. Use proper date comparisons.

There are many options, but you can do something like this:

CREATE TABLE table1 (last_date date);

INSERT INTO table1 VALUES
('2022-02-01'), ('2022-03-01'), ('2022-01-01'), ('2021-01-01'), ('2023-01-01');

SELECT * FROM table1 WHERE DATE_FORMAT(last_date, '%Y-%m') >= '2022-02';

| last_date  |
| :--------- |
| 2022-02-01 |
| 2022-03-01 |
| 2023-01-01 |

db<>fiddle here
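One caveat on the DATE_FORMAT approach above: wrapping the column in a function generally prevents the database from using an index on last_date, whereas a plain half-open range predicate stays sargable. The range idea can be sketched with Python's stdlib sqlite3 (not MySQL, but ISO date strings compare the same way lexically); the table and column names here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (last_date TEXT)")  # ISO 'YYYY-MM-DD' strings
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [("2020-01-15",), ("2020-02-03",), ("2020-02-28",), ("2020-03-01",)],
)

# "Everything in February 2020": half-open range instead of a wildcard.
rows = conn.execute(
    "SELECT last_date FROM t "
    "WHERE last_date >= '2020-02-01' AND last_date < '2020-03-01' "
    "ORDER BY last_date"
).fetchall()
print([r[0] for r in rows])  # ['2020-02-03', '2020-02-28']
```

The same `>= '2020-02-01' AND < '2020-03-01'` pair expresses the "2020-02%" intent in MySQL as well, without ALLOW_INVALID_DATES.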
Ubuntu Touch RC version 42 bricked my Nexus 4

I have been running a Nexus 4 with Ubuntu Touch (mako) rc (http://system-image.ubuntu.com/ubuntu-touch/rc/ubuntu/mako/) for about the last month. It's really great overall. This morning I got a notification that an update to Ubuntu Touch was available. "Sweet!" I thought, and went ahead and clicked update. The phone said it needed to restart to install the update. After it restarted, I saw the Ubuntu progress bar increasing. I left the phone and came back half an hour later to a backlit black screen. The phone didn't respond so I restarted it holding down the power button.

Now the phone boots through the Google boot screen then into the Ubuntu splash screen with the dots. It cycles through the dots a few times, then the screen goes black (still backlit). Backlighting responds to me clicking the screen lock and unlock (power) button. Other than that it does nothing.

I've tried attaching it to my laptop (Ubuntu GNOME 16.10). Nautilus sees the device, but when I click on it to mount it I get the error:

Unable to access "Nexus 4" Unable to open MTP device '[usb:003,006]'

The phone does still boot into recovery without a problem. Unfortunately I don't have a backup to restore to (doh!). What can I do to try and get this phone working again?

Edit: I can interact with the phone over adb. Ran:

$ adb devices
$ adb shell

which has given me the phablet@ubuntu-phablet: prompt. I can see stuff in my home directory, which is nice.

Edit 2: While my phone has just been sitting on my desk, it did start ringing at one point. So it is still receiving calls. But the screen is blank so I couldn't answer the call.

Your title is about OTA 42. Isn't your question about OTA 14?

I had trouble with the version 42 update (http://system-image.ubuntu.com/ubuntu-touch/rc/ubuntu/mako/). Is that not considered OTA 42? If I've used the wrong nomenclature I will be happy to modify the title of the question.

I think your image could be called RC version 42.
Later on it will become an RC-Proposed version and then a Stable version, which will then be called OTA 14.

It seems to be just a UI failure in this update. So, when adb works, you can reflash the device, e.g. via the ubuntu-device-flash tool, keeping your personal data (use -h for documentation).

Edit: Fortunately found this in the Landing team: https://lists.launchpad.net/ubuntu-phone/msg22825.html It's about the Meizu Pro5 having the same problem. Seems to be a harder problem. Good luck :)

Thanks Bjarne. Your answer helped me. Reflashing to a previous version did the trick. After connecting to the device (booted into Ubuntu, not recovery), I issued

ubuntu-device-flash --revision=41 touch --channel=ubuntu-touch/rc/ubuntu

from my laptop, which flashed the phone to the previous version of UT. The device booted up perfectly after that with all my data intact.
Geodesic Distance Transform and segmentation operations in Matlab

I am interested in using the geodesic distance transform in Matlab (2015a) to obtain segmented regions of a picture, from which I can perform operations on a particular region. I have incorporated the code outlined here (http://www.mathworks.com/help/images/ref/imseggeodesic.html) and can reproduce their images. However, I'm not sure how to perform operations on any particular segmented area.

L = imseggeodesic(RGB,BW1,BW2);
figure, imshow(label2rgb(L));

The above snippet would display the picture segmented into light and dark blue regions, of which the dark blue region represents the yellow flower from the original picture. How may I proceed to, for example, perform colour histogram equalisation for the yellow flower alone? Displaying L alone (without label2rgb) results in a plain white image, leading me to assume it's blank (as in, has no value for me to work with). So can I store label2rgb(L) into another variable, threshold the light blue region (which is the background, not the flower) and perform operations to influence the yellow flower alone? Or would it be better to use:

[L,P] = imseggeodesic(RGB,BW1,BW2)

for threshold purposes? Any sort of advice, especially coding, would be of great assistance.

L is of type double; it is a matrix with values 1 and 2. The pixels assigned a 1 are classified as flower (because of the first sample area: BW1), those assigned 2 are classified as background (because of the second sample area: BW2).
If you want to transform the L matrix into a binary image use the following code:

[r,c] = size(L); % row and column length of matrix L
BW = [];
for i = 1:r
    for j = 1:c
        if L(i,j) == 2
            BW(i,j) = 1; % assign class 2 a 1 (true)
        else
            BW(i,j) = 0; % assign class 1 a 0 (false)
        end
    end
end

You can see the result as follows:

figure; imshow(BW); % background is assigned a 1 (1 = true = white) and the flower 0 (0 = false = black)

The inverse of the image is done as follows:

BWinverse = ~BW;
figure; imshow(BWinverse); % background is assigned a 0 and the flower 1

If you want to segment the background further use:

maskedRgbImage = bsxfun(@times, RGB, cast(BW, class(RGB)));
figure; imshow(maskedRgbImage); % puts a mask on the flower (assigns the background pixels a 0)

If you want to segment the flower further use:

maskedRgbImage = bsxfun(@times, RGB, cast(BWinverse, class(RGB)));
figure; imshow(maskedRgbImage); % puts a mask on the background
two Lists to Json Format in python

I have two lists

a=["USA","France","Italy"]
b=["10","5","6"]

I want the end result to be in JSON like this:

[{"country":"USA","wins":"10"},
 {"country":"France","wins":"5"},
 {"country":"Italy","wins":"6"},
]

I used zip(a,b) to join the two but couldn't name it.

Using list comprehension:

>>> [{'country': country, 'wins': wins} for country, wins in zip(a, b)]
[{'country': 'USA', 'wins': '10'}, {'country': 'France', 'wins': '5'}, {'country': 'Italy', 'wins': '6'}]

Use json.dumps to get JSON:

>>> json.dumps(
...     [{'country': country, 'wins': wins} for country, wins in zip(a, b)]
... )
'[{"country": "USA", "wins": "10"}, {"country": "France", "wins": "5"}, {"country": "Italy", "wins": "6"}]'

Can someone explain the python syntactic magic of [{'country': country, 'wins': wins} for country, wins in zip(a, b)]? I just tried the line in the interpreter and it works! But I can't find good documentation on this. Can anyone point me in the right direction? I am very interested in it.

@KyleCalica-St, follow the link in the answer. It will lead you to the Python tutorial explaining list comprehensions.

You first have to set it up as a list, and then add the items to it:

import json

jsonList = []
a=["USA","France","Italy"]
b=["10","5","6"]

for i in range(0,len(a)):
    jsonList.append({"country" : a[i], "wins" : b[i]})

print(json.dumps(jsonList, indent = 1))

You can combine map with zip:

jsonized = map(lambda item: {'country':item[0], 'wins':item[1]}, zip(a,b))

Tuple parameter unpacking does not work in Python 3.x.

In addition to the answer of 'falsetru': if you need an actual json object (and not only a string with the structure of a json) you can use json.loads() and pass it the string that json.dumps() outputs.
Also, for just combining two lists into JSON format:

def make_json_from_two_list():
    keys = ["USA","France","Italy"]
    value = ["10","5","6"]
    jsons = {}
    x = 0
    for item in keys:
        jsons[item] = value[x]  # note: jsons[item[0]] would key on the first letter only
        x += 1
    return jsons

print(make_json_from_two_list())

result >>>> {'USA': '10', 'France': '5', 'Italy': '6'}
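The loop in the last answer can be collapsed with dict(zip(...)); a short sketch using the same sample data:

```python
import json

countries = ["USA", "France", "Italy"]
wins = ["10", "5", "6"]

# Single mapping, equivalent to what the loop above builds:
merged = dict(zip(countries, wins))
print(json.dumps(merged))  # {"USA": "10", "France": "5", "Italy": "6"}

# Or the list-of-objects shape asked for in the question:
records = [{"country": c, "wins": w} for c, w in zip(countries, wins)]
print(json.dumps(records))
```

On Python 3.7+ the dict preserves insertion order, so the serialized keys come out in list order.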
How to extract and ignore span in markup? - python

How to extract and ignore span in HTML markup? My input looks like this:

<ul class="definitions">
<li><span>noun</span> the joining together of businesses which deal with different stages in the production or <a href="sale.html">sale</a> of the same <u slug="product">product</u>, as when a restaurant <a href="chain.html">chain</a> takes over a <a href="wine.html">wine</a> importer</li></ul>

Desired outputs:

label = 'noun' # String embedded between <span>...</span>
meaning = 'the joining together of businesses which deal with different stages in the production or sale of the same product, as when a restaurant chain takes over a wine importer' # the text without the string embedded within <span>...</span>
related_to = ['sale', 'chain', 'wine'] # String embedded between <a>...</a>
utag = ['product'] # String embedded between <u>...</u>

I've tried this:

>>> from bs4 import BeautifulSoup
>>> text = '''<ul class="definitions">
... <li><span>noun</span> the joining together of businesses which deal with different stages in the production or <a href="sale.html">sale</a> of the same <u slug="product">product</u>, as when a restaurant <a href="chain.html">chain</a> takes over a <a href="wine.html">wine</a> importer</li></ul>'''
>>> bsoup = BeautifulSoup(text)
>>> bsoup.text
u'\nnoun the joining together of businesses which deal with different stages in the production or sale of the same product, as when a restaurant chain takes over a wine importer'

# Getting the `label`
>>> label = bsoup.find('span')
>>> label
<span>noun</span>
>>> label = bsoup.find('span').text
>>> label
u'noun'

# Getting the text.
>>> bsoup.text.strip()
u'noun the joining together of businesses which deal with different stages in the production or sale of the same product, as when a restaurant chain takes over a wine importer'
>>> bsoup.text.strip
>>> definition = bsoup.text.strip()
>>> definition = definition.partition(' ')[2] if definition.split()[0] == label else definition
>>> definition
u'the joining together of businesses which deal with different stages in the production or sale of the same product, as when a restaurant chain takes over a wine importer'

# Getting the related_to and utag
>>> related_to = [r.text for r in bsoup.find_all('a')]
>>> related_to
[u'sale', u'chain', u'wine']
>>> related_to = [r.text for r in bsoup.find_all('u')]
>>> related_to = [r.text for r in bsoup.find_all('a')]
>>> utag = [r.text for r in bsoup.find_all('u')]
>>> related_to
[u'sale', u'chain', u'wine']
>>> utag
[u'product']

Using BeautifulSoup is okay, but it's a little verbose to get what's needed. Is there any other way to achieve the same outputs? Is there a regex way with some groups to catch the desired outputs?

It still has a pretty well-formed structure and you've stated the set of rules clearly.
I would still approach it with BeautifulSoup, applying the "Extract Method" refactoring method:

from pprint import pprint
from bs4 import BeautifulSoup

data = """
<ul class="definitions">
<li><span>noun</span> the joining together of businesses which deal with different stages in the production or <a href="sale.html">sale</a> of the same <u slug="product">product</u>, as when a restaurant <a href="chain.html">chain</a> takes over a <a href="wine.html">wine</a> importer</li></ul>
"""

def get_info(elm):
    label = elm.find("span")
    return {
        "label": label.text,
        "meaning": "".join(getattr(sibling, "text", sibling)
                           for sibling in label.next_siblings).strip(),
        "related_to": [a.text for a in elm.find_all("a")],
        "utag": [u.text for u in elm.find_all("u")]
    }

soup = BeautifulSoup(data, "html.parser")
pprint(get_info(soup.li))

Prints:

{'label': u'noun',
 'meaning': u'the joining together of businesses which deal with different stages in the production or sale of the same product, as when a restaurant chain takes over a wine importer',
 'related_to': [u'sale', u'chain', u'wine'],
 'utag': [u'product']}

PyQuery is another option to BeautifulSoup. It follows a jQuery-like syntax for extracting info out of HTML.

Also, for regex... something like below can be used:

import re

text = """<ul class="definitions"><li><span>noun</span> the joining together of businesses which deal with different stages in the production or <a href="sale.html">sale</a> of the same <u slug="product">product</u>, as when a restaurant <a href="chain.html">chain</a> takes over a <a href="wine.html">wine</a> importer</li></ul>"""

match_pattern = re.compile(r"""
    (?P<label>(?<=<span>)\w+?(?=</span>))  # create the label \ item for groupdict()
""", re.VERBOSE)

match = match_pattern.search(text)
match.groupdict()

outputs:

{'label': 'noun'}

Using the above as a template, you can build on that with respect to the other HTML tags too. It uses (?P<name>...) to name the matched pattern (i.e. label) and then a (?=...)
lookahead assertion and a positive lookbehind assertion to perform the match. Also, look into findall or finditer if you have a doc that has more than one instance of your mentioned text pattern.
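To push the regex route a bit further, here is a rough sketch that pulls out all four desired outputs with plain re. This is illustrative only: regexes on HTML are fragile, and a parser like BeautifulSoup is generally the safer choice for anything less regular than this input.

```python
import re

text = ('<ul class="definitions"><li><span>noun</span> the joining together of businesses '
        'which deal with different stages in the production or <a href="sale.html">sale</a> '
        'of the same <u slug="product">product</u>, as when a restaurant '
        '<a href="chain.html">chain</a> takes over a <a href="wine.html">wine</a> importer'
        '</li></ul>')

label = re.search(r'<span>(.*?)</span>', text).group(1)
related_to = re.findall(r'<a [^>]*>(.*?)</a>', text)
utag = re.findall(r'<u [^>]*>(.*?)</u>', text)

# Meaning: drop the <span> part first, then strip the remaining inline tags.
body = re.sub(r'<span>.*?</span>\s*', '', text)
meaning = re.sub(r'<[^>]+>', '', body).strip()

print(label, related_to, utag)
print(meaning)
```

Running this yields label 'noun', related_to ['sale', 'chain', 'wine'], utag ['product'], and the meaning string with the span text removed.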
L78S05C only supplying 4.88 volts?

I'm working on a linear 5 V power supply for the first time. I'm using the L78S05C as I was hoping for a little elbow room on the amperage over the standard 1.5 A version. The circuit tested perfectly on the breadboard, so I had at it. Completely done, it would not power up my device (needing 5 V). I tore it down to find that the transformer (Hammond 166N6) 6.3+0+6.3, while reading over 12 V using a multimeter, is really NOT giving that as RMS voltage. I don't remember exactly, but I was barely getting 8.5 V RMS using my oscilloscope. I'm guessing the lower voltage, the voltage drop from the full-wave rectifier, and then again the 2 V drop from the L78S05 was my issue. Moving on, to be sure, I sent 15 V to the L78S05 using my adjustable DC voltage regulator and the output of the L78 is only 4.8 volts. I added two 100 uF capacitors, one on Vi and one on Vo to ground, besides the recommended 0.33 uF and 0.1 uF (see Figure 17 from the datasheet below). I figured instead of aimlessly desoldering the additional caps, trial and error, I should ask here why the voltage is so low.

What is your current draw? The 166N6 has a 6.3 volt centre-tapped secondary - that is, the full secondary voltage is 6.3 volts. The 8.5 V you mention is higher than I would expect (but possible) for the no-load voltage. The transformer is rated for 25.2 VA, which corresponds nicely to 6.3 V at 4 Amp. If you have no load at all the output voltage may be out of regulation. Even if you have load, it still would not output exactly 5.0000 volts; 4.88 V is within specs.

I'm not sure about the current draw, but I do have an LED in circuit with a 330 Ohm resistor.

The 6.3 + 6.3 windings together produce 12.6 V, which is peak to peak. The RMS is 8 V. I assumed that the output was RMS rated... Now I know. Michal, understood. Care to explain how you determined that (or point me in the right direction)?

If you look at the data sheet, the nominal output voltage is given as 4.8 to 5.2 volts.
Therefore an output of 4.88 volts is within the specification.

I did notice that, and thought since it was advertised as 5 V that through circuit design you could adjust a little here or there. It seems strange that I would have to hunt through the lot to find one putting out 5 V. Am I missing something? Is there a better component that puts out 5 V every day, all day long?

@Hoops There is no component that puts out exactly 5.000 V every day, all day long. In the real world everything has a tolerance. You can find regulators that are more accurate than the 7805, but you must live with some non-zero error.

Elliot, I see now there are variable voltage regulators based off this (and other regulators). I'm guessing that I should design around one of those and, once I find what delivers the goods, pin the dial or swap it out with a fixed cap or resistor for that value...

A small short, maybe some tin splatter, on your PCB. Hold it up to some strong light and look the board over (it can be a tiny droplet or a tiny copper sliver from board making). Does the 7805 warm up quickly?

How would a short cause this?

2winners, wasn't ignoring you. I have been having time constraints, but there is plenty of splatter everywhere except the board. And not enough heat to know if it even has any real current through it. I do have a big heat sink, with the thinnest coat of thermal compound I could smoothly get on... Thank you for responding.

@pipe sometimes when making new boards some thin slivers of copper are not eaten away by the acid correctly and act like a resistor, and the 78XX voltage drops when exceeding the rated current but doesn't blow up if it's just over the limit but not a full short. I mention the board short as you said it worked well on the breadboard.
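As an aside, the RMS-vs-peak bookkeeping that caused confusion above can be checked in a few lines. The per-half winding voltage and the 0.7 V diode drop are assumptions for illustration; note the thread itself disagrees on whether each half of the 166N6 secondary is 6.3 V or 3.15 V RMS:

```python
import math

v_rms_half = 6.3                      # assumed RMS of one secondary half (see thread)
v_peak = v_rms_half * math.sqrt(2)    # peak of a sine = RMS * sqrt(2)
diode_drop = 0.7                      # assumed silicon diode drop; a center-tapped
                                      # full-wave rectifier costs one drop per half-cycle
v_dc_peak = v_peak - diode_drop       # peak of the rectified waveform feeding the cap
dropout = 2.0                         # 78S05-class regulators need roughly 2 V of headroom
headroom = v_dc_peak - dropout - 5.0  # margin above 5 V out, ignoring ripple sag
print(round(v_peak, 2), round(v_dc_peak, 2), round(headroom, 2))
```

Under load, ripple eats into that already-thin margin, which would be consistent with the supply working from a 15 V bench input but failing off the transformer.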
I cannot format my PC

I have a Toshiba Satellite (#1) L505 with 6 GB RAM and a 6.00 GB hard disk. Initially I had motherboard problems with another Satellite (#2). Since I have HDD problems with the first one (#1), I decided to use the hard disk of #2 in #1. I formatted the HDD and erased the partitions it had into 1 partition (or no partition). The problem is that when I try to format with the OS CD, in the screen where I have to decide in which partition I want to install the OS, the only option I have says "unallocated partition", and I receive the message "Windows cannot install the OS in this partition, run files do not exist or maybe corrupted". When I erased the disk with Parted Magic, did I erase any files needed for running the installation disk? Is it possible to fix or reinstate the disk to install the OS? I checked the disk's physical health with Parted Magic, and it is OK. One more thing: when I erased the disc to 0, I used the safety option offered by Parted Magic.

You've used both SO and OS. Is SO a typo?

Jesus, I have edited your question to make it more readable, but it's still not clear. In what machine (#1, #2) did you do what? Make the story strictly chronological and don't add information at the end that points back to the beginning (leaving us to re-assemble your chain of actions).

Computer 1 has hard drive issues. Computer 2 had motherboard issues. You took the hard drive from computer 2 and tried to use it in computer 1. You used third party software to erase the hard drive and create, as you say...

1 partition (or no partition)

...but you aren't actually sure whether you created a partition or not.

Insert your Windows 7 installation CD. Follow the prompts to allow the Windows 7 installer to wipe the hard disk and create any partitions necessary. That means when you hit the screen in the installation process that wants to know what partition you want to install to, choose the Custom Options link at the bottom right.
Then, select all the existing partitions, and Delete them. Make sure the only thing left in the list is "Disk 0 Unallocated Space". Select that, and hit Next. This will allow the Windows installer to create the partition it needs, and to format it properly. If there are no issues with the RAM in Computer 1, and there are no issues with the installation disc, Windows should then install without issue.

Zero-filling the drive, secure erasing it, or deleting all partitions will not affect the installation media. Either there's a problem with your DVD, or the drive is having problems reading the disc. Try again with a known good disc, or else, if the DVD drive is bad, use something like Rufus to transfer the contents of the DVD to a USB stick and boot from that instead.

Refer to this link. The partition you created using Parted Magic may be having issues.

You may be trying to install Windows from a Windows upgrade disk, which only lets you install to somewhere that already has Windows on it. You may need to find a different (or even old version) Windows disc to install from first, and then install Windows 7. I think I read somewhere that the Windows 7 upgrade EULA does not prohibit a Windows 7 evaluation version being the previous version of Windows that you are upgrading from. But I don't buy upgrade versions of Windows, so I haven't checked that EULA.
Why was the question about the All Halo Books List (100702) closed again, after it was re-opened?

Ok, so the question, Is there a list of all Halo-related reading?, was closed and then re-opened, after a bit of discussion in the comments section. The question was then re-closed very soon after. See the revision history. And the off-topic reason being: "Requests for lists of works or recommendations are off-topic as they do not fit our questions and answers format. Feel free to ask about people's favorites in chat."

Why was this question closed? What makes it "off-topic"? It's definitely not asking for a recommendation. It's asking for all the relevant source material for that certain canon. It's:

Answerable
A definite list
On-topic (Halo is very sci-fi)
Not asking for our 'favourites', etc

Related (on-topic) questions:

Where does the background story in A Song of Ice and Fire come from?
How many Marvel Earths (Universes) are there?
What should I do if I want to cover 100% of Star Wars EU but don't want to play the games?
Is there an "official"-ish complete chronological order for Star Wars C-canon material (books+comics+games)?

I am most definitely not asking people to justify themselves for having gone against what I've said. I just want to get a consensus on whether or not this is on/off-topic and why.

https://www.halowaypoint.com/en-us/forums/db05ce78845f4120b062c50816008e5d/topics/halo-canon-order-halo-3-and-before/23bec818-d9c5-4932-981b-61efe8532f61/posts - Insanely long answer anyone?

Ooh, yes please. If the close reason was that the answer is too long, then the "too-broad" option should have been used, no?

I didn't close it. I was merely commenting that it's not readily answerable. That was your point #1.

@Richard 1/2 I know you didn't. And again, this isn't to call out those who did. People's votes go where they will. Just want to understand. So, if it's not readily answerable, it's off-topic? As in, do we concentrate on low-hanging fruit?
I agree that it does seem like it's a huge piece of canon, with lots of media, but surely the list is finite, and the info has to be somewhere, no? @Richard 2/2 Additionally, just so I get it, are we closing this because it's essentially asking "give me all the things from this canon"? Or is it just that it happens to be a huuuuge piece of work and too much for our site's format? The close reason explicitly states that questions which require an extensively long answer are not a good fit for the site. For example, "what is the chronology of all Star Trek comics?"
how to handle multiple paypal buttons on the same form I have three items, and next to each I have the paypal "addtocart" button. Each button works if it's the only one on the page ... I believe that's because each button is wrapped in its own form tags. How can I get multiple buttons on the page? I have tried a single form tag, renaming the cmd buttons and image, but then they just go to the paypal home page. How can I do this? It sounds like you're using a regular Buy Now button. You need to make sure you're using an actual Add to Cart button. The buttons will still be within their own form tags, but they're a little different to handle the different tasks. Then you'll also include (optionally) a View Cart button. Here's another guide that should be useful.
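To illustrate the answer's point: each classic "Add to Cart" button lives in its own form, and what distinguishes it from a Buy Now button (cmd=_xclick) is the cmd=_cart / add=1 pair. A minimal sketch with placeholder values (the business email, item names, and amounts below are illustrative, not from the question):

```html
<!-- Two independent Add to Cart buttons, each in its own form. -->
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
  <input type="hidden" name="cmd" value="_cart">
  <input type="hidden" name="add" value="1">
  <input type="hidden" name="business" value="seller@example.com">
  <input type="hidden" name="item_name" value="Item One">
  <input type="hidden" name="amount" value="10.00">
  <input type="submit" value="Add to Cart">
</form>

<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
  <input type="hidden" name="cmd" value="_cart">
  <input type="hidden" name="add" value="1">
  <input type="hidden" name="business" value="seller@example.com">
  <input type="hidden" name="item_name" value="Item Two">
  <input type="hidden" name="amount" value="15.00">
  <input type="submit" value="Add to Cart">
</form>
```

Because each form posts independently to PayPal, any number of such buttons can coexist on one page.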
hana::tuple to auto && ... args Is there a way to use something like : constexpr auto foo = hana::make_tuple(hana::type_c<Foo1>,hana::type_c<Foo2>); with something like: template < typename ... Ts > struct Final { constexpr Final(Ts && ... args) {} }; hana::unpack(foo, [] (auto && ... args) { return Final(args...); }); Because with that code, unpack can't deduce lambda/function type. Basically I want to create a type which takes a list of arguments but I have a tuple which contains the arguments. That code isn't valid C++14 or C++17. Are you using Concepts? @KerrekSB I don't need concepts here, could you tell me why it's invalid please? auto is not a valid function parameter type in C++. It's only allowed in lambda expressions. @KerrekSB Oh I remember clang couldn't do it, but with gcc 7 I can. Not something official? The constructor was my error though. auto as a function parameter type is part of the Concepts TS (-fconcepts). this is allowed in -std=gnu++11 on Ubuntu, and auto as parameter should be possible in C++17 with the following compiler setting on Windows: /Zc:auto- However, it still is not really working - @Kerrek SB, perhaps you could elaborate on the usage of -fconcepts @serup in fact auto will work if your compiler started to implement the Concepts TS. You don't need to specify any std with gcc, even -fconcepts isn't needed. The problem is in your lambda: [](auto && ... args){ return Final(args...); } // ~~~~~~~ Final isn't a type, it's a class template. As such, you need to explicitly provide the types. Something like: [](auto&&... args){ return Final<decltype(args)...>( std::forward<decltype(args)>(args)...); } In C++17, with template deduction for class template parameters, the Ts&& does not function as a forwarding reference (see related answer), so the implicit deduction guide would not match your usage anyway as you are only providing lvalues and the guide requires rvalues. But this would work: [](auto...
args){ return Final(std::move(args)...); } Perhaps it would be better if the constructor took the Ts by value and did the std::move there instead of in the lambda. I think it should be Final<decltype(args)&& ...> in the second code snippet for perfect forwarding (--mind the &&). @davidhigh They mean the same thing - decltype(args) is always a reference here. "Here" means in the context of the OP, I guess, where a local variable foo is unpack'ed. But this doesn't hold in general, if I get it correctly, because args could be an rvalue ref but decltype(args) is deduced as lvalue ref. @davidhigh Here meant in the place you were suggesting to change. If I understand your question correctly, what you're actually looking for is template <typename ...Ts> struct Final { ... }; constexpr auto foo = hana::make_tuple(hana::type_c<Foo1>,hana::type_c<Foo2>); auto final_type = hana::unpack(foo, [](auto ...args) { return hana::type_c<Final<typename decltype(args)::type...>>; }); // now, final_type is a hana::type<Final<Foo1, Foo2>> You can also achieve the same thing using hana::template_: constexpr auto foo = hana::make_tuple(hana::type_c<Foo1>,hana::type_c<Foo2>); auto final_type = hana::unpack(foo, hana::template_<Final>); The problem I see with Barry's answer is that you'll end up creating a Final<decltype(hana::type_c<Foo1>), decltype(hana::type_c<Foo2>)>, which is probably not what you want. I think that you were influenced by my previous questions, because my goal is not to create a final type with a tuple of types. If you want more context, I was working with Jason Rice's solution here : https://stackoverflow.com/questions/43089587/change-runtime-research-for-a-compile-time-one and I wanted to join several multi_map. Or maybe I just misunderstood your answer sorry ^^" No worries. All I'm saying is that with Barry's solution, if you do unpack(foo, the-lambda-he-provided), you end up with Final<type<Foo1>, type<Foo2>>, which seems a little bit strange.
In any case, I'm glad Barry's solution solves your problem. Ah, that's right. Barry's answer doesn't address the fact that the OP foo is a tuple of hana::type.
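For readers without Boost.Hana: the same "unpack a tuple into a class template" pattern can be sketched with a plain std::tuple and C++17's std::apply. This is an illustrative analogue, not the poster's Hana code; Final here takes its arguments by value, which sidesteps the forwarding-reference issue discussed above:

```cpp
#include <cassert>
#include <string>
#include <tuple>
#include <utility>

// Illustrative analogue of the question's Final: a class template
// that takes a pack of arguments by value and stores them.
template <typename... Ts>
struct Final {
    std::tuple<Ts...> values;
    explicit Final(Ts... args) : values(std::move(args)...) {}
};

// Unpack a tuple into Final. The template arguments must be spelled
// out explicitly because Final is a class template, not a type.
inline Final<int, std::string> makeFinal() {
    auto foo = std::make_tuple(42, std::string("hello"));
    return std::apply([](auto... args) {
        return Final<decltype(args)...>(std::move(args)...);
    }, foo);
}
```

Since the lambda's parameters are taken by value, decltype(args) yields the plain element types, so no std::decay is needed here.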
jQuery width minus pixels compatible with IE I'm trying to use jQuery to calculate the width of my div minus a px gutter. I know this has been asked over and over again, but the answers I find are not IE compatible. Therefore, CSS calc won't do. The code I'm using at the moment is: $('.left').css('width', '100%').css('width', '-=40px'); But this also doesn't work on IE. Ideas welcome! Many thanks! $('.left').width(function() { return ($(this).parent().width() * 0.35) - 40; }); a width of 100% means the element is as wide as its parent, so just subtract 40px from that Sorry, that was just for reference. It's actually 35% @Alga - still the same, just multiply it with 0.35 var $left = $('.left'); $left.css('width', '100%').width($left.width() - 40); $('.left').css('width', '100%').width($('.left').width() - 40); EDIT: For more than one element: $('.left').css('width', '100%').width(function(){ return $(this).width() - 40; }); note: that assumes all .left elements have the same width
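The arithmetic the answers rely on can be factored into a tiny helper; this is an illustrative function, not code from the thread, and the jQuery call in the comment is hypothetical usage:

```javascript
// Compute a child's pixel width as a fraction of its parent's width
// minus a fixed pixel gutter. Plain arithmetic, so it works in any
// browser, old IE included, with no dependence on CSS calc().
function gutterWidth(parentWidth, fraction, gutterPx) {
  return parentWidth * fraction - gutterPx;
}

// Hypothetical jQuery usage (selector and numbers are illustrative):
// $('.left').width(gutterWidth($('.left').parent().width(), 0.35, 40));
```

Keeping the computation in one function also makes it easy to reuse inside the per-element callback form from the EDIT above.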
SSH timeout error when building AWS AMI with Vagrant I am trying to set up an AWS AMI vagrant provision: http://www.packer.io/docs/builders/amazon-ebs.html I am using the standard .json config: { "type": "amazon-instance", "access_key": "YOUR KEY HERE", "secret_key": "YOUR SECRET KEY HERE", "region": "us-east-1", "source_ami": "ami-d9d6a6b0", "instance_type": "m1.small", "ssh_username": "ubuntu", "account_id": "0123-4567-0890", "s3_bucket": "packer-images", "x509_cert_path": "x509.cert", "x509_key_path": "x509.key", "x509_upload_path": "/tmp", "ami_name": "packer-quick-start {{timestamp}}" } It connects fine, and I see it create the instance in my AWS account. However, I keep getting Timeout waiting for SSH as an error. What could be causing this problem and how can I resolve it? I use packer.io as well... this error just seems to happen sometimes... I think mainly because the launching of an amazon instance can be a little unpredictable time-wise. I just keep trying until it finds an ssh connection... not a big deal. As I mentioned in my comment above, this is just because sometimes it takes more than a minute for an instance to launch and be SSH ready. If you want you could set the timeout to be longer - the default timeout with packer is 1 minute. So you could set it to 5 minutes by adding the following to your json config: "ssh_timeout": "5m"
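Putting the fix in context: the key sits at the top level of the builder block, next to the other settings. The fragment below just echoes a trimmed version of the question's config with the timeout added (credentials and other fields omitted for brevity):

```json
{
  "type": "amazon-instance",
  "region": "us-east-1",
  "source_ami": "ami-d9d6a6b0",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "ssh_timeout": "5m"
}
```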
Export and compare properties of expression in vba The overview in "Watches" in the vba editor of office 2016 seemed not clear/easily comparable, so I would like to export the expressions under "Watches". Because I have a manual "watch" on the only item in elements1a and I would like to be able to export all the properties (I think they are called "expressions", but I have not found the correct naming yet). So that I can compare the properties (every dropdown possibility/expression such as: "checked", "innerText" etc.) of the item with itself after a manipulation of the item. It seemed slightly confusing that the "sub-expressions" of expressions are also called expressions. I foresee a problem since you can click a "sub-expression" of a parent expression, and in that sub-expression click the "parent expression" of the sub-expression again, effectively creating an infinite expression loop. Hence I doubt whether it is possible at all. Instead of hacking your way through the implementation of the page, you should try to simulate the behavior of a real user. Something like: If Not Item.Checked Then Item.Click. Thank you, I believe I am trying to simulate the behavior of a real user. So after 2 lucky guesses, I switched to trying to find out what the real user does by inspecting the changes in the expressions of the item that the user clicks. Yet a more structured approach will aid in my learning. Thank you! some are objects, some are properties of objects .... this explains some .... http://www.excel-spreadsheet.com/vba/objecthierarchy.htm .... if you want to compare two properties at runtime then use debug.print
HTML Formatting tossing #error I am attempting to format an expression in SSRS with HTML and am getting #Error tossed upon report preview. My expression looks as follows: ="<b>Region: </b>" + Fields!RegionID.Value I have also ensured that HTML - Interpret HTML tags as styles has been selected under Placeholder Properties. Has anyone experienced this behavior before? Thanks! Are you just looking to get part of a text string displayed in bold or does it have to be HTML-based? I'm looking to get part of the text displayed in bold. Do you know another method? Yes, please see below. In SSRS 2008 and above, you can use placeholders to achieve this. See Formatting Text and Placeholders for more details. To give a quick example, in a textbox, enter some text then right-click on the empty space to the right of the text: You can set the placeholder properties to display your required field in the Value property: You now have two distinct text parts in the textbox that can be formatted independently, e.g. one bold and one normal:
Is there any way to create an instance and assign values to data members in a namespace? To make the code look clean, I declare a class MaterialTexture in namespace Material to store the light attributes for OpenGL. And I want to create instances (such as metal and plastic) of this class in the same namespace. I did some googling and found a previous discussion showing that data members can't be assigned in the namespace, and that a constructor is suggested to initialize the data members. However, there are 12 data members in my case, and it would be tedious to pass 12 arguments to construct an instance. Instead, I want to use three functions (such as setDiffuse() ) to set the values of each instance. namespace Material { class MaterialTexture { private: float diffuse[4]; float specular[4]; float ambient[4]; public: // Three functions below set the values for the data members above. void setDiffuse(float R, float G, float B, float A); void setSpecular(float R, float G, float B, float A); void setAmbient(float R, float G, float B, float A); /* the following function tells OpenGL how to draw the Texture of this kind of Material and is not important here. */ void setMaterial(); }; void MaterialTexture::setDiffuse(float R, float G, float B, float A) { diffuse[0]=R; diffuse[1]=G; diffuse[2]=B; diffuse[3]=A; } // Create instances plastic and metal MaterialTexture plastic; plastic.setDiffuse(0.6, 0.0, 0.0, 1.0); ... MaterialTexture metal; metal.setDiffuse(...); ... }; // end of namespace Thus, to create a red-plastic-like sphere, I only need to type the following code in the display callback: Material::MaterialTexture::plastic.setMaterial(); glutSolidSphere(...); Compiling the above code with g++ gives the errors: error: 'plastic' does not name a type and error: 'metal' does not name a type on the lines calling the three setter functions (such as setDiffuse() ) of each instance. Thus it seems that not only direct assignment in a namespace, but also functions containing assignments, are not allowed...
Am I right? Is there any other way to fix this? Or are there other ways to facilitate OpenGL programming? Statements such as metal.setDiffuse(...); can only go inside of functions. Material::plastic.setMaterial(); shouldn't this be Material::MaterialTexture::plastic.setMaterial();? 'and it would be tedious to pass 16 arguments to construct an instance. Instead' What about using a builder or prototype pattern? Dear πάντα ῥεῖ, you're correct, it's Material::MaterialTexture::plastic.setMaterial(); sorry for the mistake. Since I altered the source code to simplify it, some mistakes crept in. Beyond the immediate problem with your code (all executable code has to be within a function, which is not the case in your example), how to structure this best is quite subjective. Based on that, read the paragraphs below with an implied "IMHO". Here is my proposal for a simple solution that should get you going. I'll stick with your naming to keep things consistent, even though I don't like the name MaterialTexture for this class. In the context of OpenGL, the name suggests that the class encapsulates a texture, which it does not. First of all, classes should have constructors that initialize all members. This makes sure that even if the class is not used as expected, no class member is ever in an uninitialized state, which could otherwise result in undefined behavior. I agree that having a constructor with 12 float arguments would be awkward in this case. I would have a default constructor that initializes all members to a reasonable default, e.g. matching the defaults of the old fixed pipeline: MaterialTexture::MaterialTexture() { diffuse[0] = 0.8f; diffuse[1] = 0.8f; diffuse[2] = 0.8f; diffuse[3] = 1.0f; ambient[0] = 0.2f; ambient[1] = 0.2f; ambient[2] = 0.2f; ambient[3] = 1.0f; specular[0] = 0.0f; specular[1] = 0.0f; specular[2] = 0.0f; specular[3] = 1.0f; } Your setDiffuse(), setSpecular() and setAmbient() methods look fine.
Now the question is where you create the specific materials (plastic, metal). Using static methods in this class is one option, but I don't think it's a clean design. This class represents a generic material. Putting knowledge of specific materials in the same class mixes up responsibilities that should be separate. I would have a separate class that provides specific materials. There are a lot of options for how that could be designed and structured. The simplest is to have methods to create the specific materials. For a slightly nicer approach that allows you to add materials without changing any interfaces, you could reference them by name: class MaterialLibrary { public: MaterialLibrary(); const MaterialTexture& getMaterial(const std::string& materialName) const; private: std::map<std::string, MaterialTexture> m_materials; }; MaterialLibrary::MaterialLibrary() { MaterialTexture plastic; plastic.setDiffuse(...); ... m_materials.insert(std::make_pair("plastic", plastic)); MaterialTexture metal; metal.setDiffuse(...); ... m_materials.insert(std::make_pair("metal", metal)); } const MaterialTexture& MaterialLibrary::getMaterial(const std::string& materialName) const { return m_materials.at(materialName); } You could easily change this to read the list of materials from a configuration file instead of having it in the code. Now you just need to create and use one instance of MaterialLibrary. Again, there are multiple ways of doing that. You could make it a Singleton. Or create one instance during startup, and pass that instance to everybody who needs it. Once you have a MaterialLibrary instance lib, you can now get the materials: const MaterialTexture& mat = lib.getMaterial("plastic"); You could get fancier and make the MaterialLibrary more of a factory class, e.g. a Template Factory.
This would separate the responsibilities even further, with the MaterialLibrary only maintaining a list of materials, and providing access to them, but without knowing how to build the list of specific materials that are available. It's your decision how far you want to go with your abstraction. You simply cannot call a function "in the middle of nowhere". This has nothing to do with namespaces. Consider this complete example: void f() { } f(); int main() { } It won't compile. A solution would be to add three different static member functions returning the three special instances, which may actually be created in another (private) static member function. Here is an example: class MaterialTexture { private: float diffuse[4]; float specular[4]; float ambient[4]; static MaterialTexture makePlastic() { MaterialTexture plastic; plastic.setDiffuse(0.6, 0.0, 0.0, 1.0); return plastic; } public: // Three functions below set the values for data members above. void setDiffuse(float R, float G, float B, float A); void setSpecular(float R, float G, float B, float A); void setAmbient(float R, float G, float B, float A); static MaterialTexture &plastic() { static MaterialTexture plastic = makePlastic(); return plastic; } }; There's room for improvement, though, especially if you can use C++11: Replace your raw arrays with std::array. Consider using double instead of float. Add a private constructor which allows you to get rid of the private static member functions and instead allows you to create the object directly. Here is a complete improved example: class MaterialTexture { private: std::array<double, 4> diffuse; std::array<double, 4> specular; std::array<double, 4> ambient; MaterialTexture(std::array<double, 4> const &diffuse, std::array<double, 4> const &specular, std::array<double, 4> const &ambient) : diffuse(diffuse), specular(specular), ambient(ambient) {} public: // Three functions below set the values for data members above. 
void setDiffuse(double R, double G, double B, double A); void setSpecular(double R, double G, double B, double A); void setAmbient(double R, double G, double B, double A); static MaterialTexture &plastic() { static MaterialTexture plastic( { 0.6, 0.0, 0.0, 1.0 }, { 0.0, 0.0, 0.0, 0.0 }, { 0.0, 0.0, 0.0, 0.0 } ); return plastic; } }; I don't think proposing to use double in this case is good advice. OpenGL generally operates with float, so using double in supporting classes will just result in a lot of type conversions later. I actually think using doubles over floats in general should only be done if the additional precision is really needed. It's been quite some time since I actually used OpenGL, but generally, in C++ the default rule is to favour double over float. If OpenGL normally uses float, then that's OK, of course. Sort of a side note, but where does the "favor double over float" come from? It uses twice the memory, and is slower. So that seems like a very questionable guideline. I have used C++ for a long time, and never heard that recommendation. It can also be faster than float, and it's more precise in any case. See http://stackoverflow.com/questions/3426165/is-using-double-faster-than-float (including a Stroustrup citation).
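Condensing the MaterialLibrary idea from the first answer into a single compilable sketch (the defaults, the registered materials, and the use of public data members instead of setters are illustrative choices, not the answerer's exact code; unknown names simply raise std::map::at's exception):

```cpp
#include <array>
#include <cassert>
#include <map>
#include <string>

// A plain material record with fixed-pipeline-style defaults.
struct MaterialTexture {
    std::array<float, 4> diffuse{0.8f, 0.8f, 0.8f, 1.0f};
    std::array<float, 4> ambient{0.2f, 0.2f, 0.2f, 1.0f};
    std::array<float, 4> specular{0.0f, 0.0f, 0.0f, 1.0f};
};

// Owns the named instances; specific materials live here, not in
// the generic MaterialTexture class.
class MaterialLibrary {
public:
    MaterialLibrary() {
        MaterialTexture plastic;
        plastic.diffuse = {0.6f, 0.0f, 0.0f, 1.0f};
        m_materials.emplace("plastic", plastic);
        m_materials.emplace("metal", MaterialTexture{});
    }

    // Throws std::out_of_range for unknown material names.
    const MaterialTexture& getMaterial(const std::string& name) const {
        return m_materials.at(name);
    }

private:
    std::map<std::string, MaterialTexture> m_materials;
};
```

A caller would construct one MaterialLibrary at startup and fetch entries by name, e.g. lib.getMaterial("plastic"), before issuing the draw calls.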
How to assign value to key hashmap Perl? I have the code below. I want to assign values for each version, but somehow I do not know what I'm doing wrong. %buildsMap = (); #read schedule opendir (DIR, $buildsDir) or die $!; while ( my $file = readdir DIR ) { if (($file =~ /(V\d\d\d.*)-DVD/) || ($file =~ /(V\d\d\d.*)/)) { foreach $version(@versionS){ if ($1 eq $version ){ @temp = @{$buildsMap{$version}}; push @temp,$file; @{$buildsMap{$version}} = @temp; } } } } If I want to use the keys from this hashmap it is ok. Please advise me. Could you give an example of input (i.e. the filenames in $buildsDir) and the expected output? Also, what is in @versionS? There are just some folders read by if (($file =~ /(V\d\d\d.*)-DVD/).... and if the folders meet the condition they should be assigned as values to the keys $version. @versionS is an array of versions from which I'm using just $version (a couple of versions necessary for my script). I'm trying to have an output like this: foreach $version (keys %buildsMap){ print $values } Have you thought about using strict and warnings? Also it's still not clear what @versionS contains; could you edit your code with some example data (like my @versions = ('V132', 'V432'))? First order of business, turn on strict and warnings. This will discover any typo'd variables and other mistakes. The major issue here is that you have to declare all your variables. Fixing that all up, and printing out the resulting %buildsMap, we have a minimum viable example.
use strict; use warnings; use v5.10; my $buildsDir = shift; my @versionS = qw(V111 V222 V333); my %buildsMap = (); #read schedule opendir (DIR, $buildsDir) or die $!; while ( my $file = readdir DIR ) { if (($file =~ /(V\d\d\d.*)-DVD/) || ($file =~ /(V\d\d\d.*)/)) { foreach my $version (@versionS) { if ($1 eq $version ){ my @temp = @{$buildsMap{$version}}; push @temp,$file; @{$buildsMap{$version}} = @temp; } } } } for my $version (keys %buildsMap) { my $files = $buildsMap{$version}; say "$version @$files"; } Which gives the error Can't use an undefined value as an ARRAY reference at test.plx line 15. That's this line. my @temp = @{$buildsMap{$version}}; The problem here is how you're working with array references. That specific line fails because if $buildsMap{$version} has no entry you're trying to dereference nothing, and Perl won't allow that, not for a normal dereference. We could fix that, but there are better ways to work with hashes of lists. Here's what you have. my @temp = @{$buildsMap{$version}}; push @temp,$file; @{$buildsMap{$version}} = @temp; That copies out all the filenames into @temp, works with @temp and the more comfortable syntax, and copies them back in. It's inefficient in both memory and amount of code. Instead, we can do it in place. First by initializing the value to an empty array reference, if necessary, and then by pushing the file onto that array reference directly. $buildsMap{$version} ||= []; push @{$buildsMap{$version}}, $file; ||= is the or-equals operator. It will only do the assignment if the left-hand side is false. It's often used to set default values. You could also write $buildsMap{$version} = [] if !$buildsMap{$version} but that rapidly gets redundant. But we don't even need to do that! push is a special case. For convenience, you can dereference an empty value and pass it to push! So we don't need the initializer, just push. push @{$buildsMap{$version}}, $file; While the code works, it could be made more efficient.
Instead of scanning @versionS for every filename, potentially wasteful if there are a lot of files or a lot of versions, you can use a hash. Now there's no inner loop needed. my %versionS = ( 'V111' => 1, 'V222' => 2, 'V333' => 3 ); ... if (($file =~ /(V\d\d\d.*)-DVD/) || ($file =~ /(V\d\d\d.*)/)) { my $version = $1; if ($versionS{$version}){ push @{$buildsMap{$version}}, $file; } } If you use v5.10, you could also consider the //= operator, since it always checks for definedness, which is most likely what someone wants. In this specific case it's okay to use, because we know from the program structure that the hash shouldn't contain any false values (as references are true (I hope)). But that is one more unnecessary hurdle to jump for the person reading your code (which is most likely the programmer himself in three months). On the other hand //= is quite new (just eight years old), so it could be that the future reader isn't accustomed to the newer ways yet. @PatrickJ.S. Good question. I specifically did not use //= here. There are many cases where //= is more correct than ||=, usually to retain 0 and "" as valid values. This is not one of them. I specifically chose ||= because if a hash entry is incorrectly set to 0 or "" it should be replaced with an empty list. This makes the code more robust, good enough for a short-lived variable like %buildsMap. If you want more checks than that you should instead write a class and control access to the data through methods. @PatrickJ.S. On a similar note, one could eliminate %versionS and the ||= check entirely by preallocating version lists in %buildMap. Initialize it with my %buildMap = map { $_ => [] } qw(V111 V222 V333) and then push @{$buildMap{$1}}, $file if $buildMap{$1}. I avoided that both because the answer was getting long, and it overloads the meaning of %buildMap in ways potentially confusing to the original poster and future maintainers. If %versionS was big it might be justified to save some memory.
Again, once you start doing things like that it's better to write a class. “if a hash entry is incorrectly set to 0 or "" it should be replaced with an empty list” I'm not so sure. If a hash value is anything other than an array reference then there is a bug in the code, and making it work silently regardless disables the diagnostics that use strict 'refs' provides. I think I'd prefer my code to die with Can't use string "0" as an array ref while strict refs in use. After all, that bug may be overwriting a legitimate array reference with 0 and so losing data
Using ASP .NET Membership and Profile with MVC, how can I create a user and set it to HttpContext.Current.User? I implemented a custom Profile object in code as described by Joel here: How to assign Profile values? I can't get it to work when I'm creating a new user, however. When I do this: Membership.CreateUser(userName, password); Roles.AddUserToRole(userName, "MyRole"); the user is created and added to a role in the database, but HttpContext.Current.User is still empty, and Membership.GetUser() returns null, so this (from Joel's code) doesn't work: static public AccountProfile CurrentUser { get { return (AccountProfile) (ProfileBase.Create(Membership.GetUser().UserName)); } } AccountProfile.CurrentUser.FullName = "Snoopy"; I've tried calling Membership.GetUser(userName) and setting Profile properties that way, but the set properties remain empty, and calling AccountProfile.CurrentUser(userName).Save() doesn't put anything in the database. I've also tried indicating that the user is valid & logged in, by calling Membership.ValidateUser, FormsAuthentication.SetAuthCookie, etc., but the current user is still null or anonymous (depending on the state of my browser cookies). SOLVED (EDITED FURTHER, SEE BELOW): Based on Franci Penov's explanation and some more experimentation, I figured out the issue. Joel's code and the variations I tried will only work with an existing Profile. If no Profile exists, ProfileBase.Create(userName) will return a new empty object every time it's called; you can set properties, but they won't "stick" because a new instance is returned every time you access it. Setting HttpContext.Current.User to a new GenericPrincipal will give you a User object, but not a Profile object, and ProfileBase.Create(userName) and HttpContext.Current.Profile will still point to new, empty objects. If you want to create a Profile for a newly-created User in the same request, you need to call HttpContext.Current.Profile.Initialize(userName, true). 
You can then populate the initialized profile and save it, and it will be accessible on future requests by name, so Joel's code will work. I am only using HttpContext.Current.Profile internally, when I need to create/access the Profile immediately upon creation. On any other requests, I use ProfileBase.Create(userName), and I've exposed only that version as public. Note that Franci is correct: If you are willing to create the User (and Roles) and set it as Authenticated on the first round-trip, and ask the user to then log in, you will be able to access the Profile much more simply via Joel's code on the subsequent request. What threw me is that Roles is immediately accessible upon user creation without any initialization, but Profile is not. My new AccountProfile code: public static AccountProfile CurrentUser { get { if (Membership.GetUser() != null) return ProfileBase.Create(Membership.GetUser().UserName) as AccountProfile; else return null; } } internal static AccountProfile NewUser { get { return System.Web.HttpContext.Current.Profile as AccountProfile; } } New user creation: MembershipUser user = Membership.CreateUser(userName, password); Roles.AddUserToRole(userName, "MyBasicUserRole"); AccountProfile.NewUser.Initialize(userName, true); AccountProfile.NewUser.FullName = "Snoopy"; AccountProfile.NewUser.Save(); Subsequent access: if (Membership.ValidateUser(userName, password)) { string name = AccountProfile.CurrentUser.FullName; } Further thanks to Franci for explaining the Authentication life cycle - I'm calling FormsAuthentication.SetAuthCookie in my validation function, but I'm returning a bool to indicate success, because User.Identity.IsAuthenticated will not be true until the subsequent request. REVISED: I'm an idiot. The above explanation works in the narrow case, but doesn't resolve the core problem: Calling CurrentUser returns a new instance of the object each time, whether it's an existing Profile or not. 
Because it's defined as a property, I wasn't thinking about this, and wrote: AccountProfile.CurrentUser.FullName = "Snoopy"; AccountProfile.CurrentUser.OtherProperty = "ABC"; AccountProfile.CurrentUser.Save(); which (of course) doesn't work. It should be: AccountProfile currentProfile = AccountProfile.CurrentUser; currentProfile.FullName = "Snoopy"; currentProfile.OtherProperty = "ABC"; currentProfile.Save(); It's my own fault for completely overlooking this basic point, but I do think declaring CurrentUser as a property implies that it's an object that can be manipulated. Instead, it should be declared as GetCurrentUser(). Creating a user just adds it to the list of users. However, this does not authenticate or authorize the new user for the current request. You also need to authenticate the user in the current request context or for subsequent requests. Membership.ValidateUser will only validate the credentials, but it's not authenticating the user for the current or subsequent requests. FormsAuthentication.SetAuthCookie will set the authentication ticket in the response stream, so the next request will be authenticated, but it does not affect the state of the current request. The easiest way to authenticate the user would be to call FormsAuthentication.RedirectFromLoginPage (assuming you are using forms authentication in your app). However, this one would actually cause a new HTTP request, which will authenticate the user. Alternatively, if you need to continue your logic for processing the current request, but want the user to be authenticated, you can create a GenericPrincipal, assign it the identity of the new user and set the HttpContext.User to that principal. Impersonation usually implies Windows authentication, which requires a WindowsPrincipal. Impersonation can be done from security token for the impersonated user or from credentials. Given that the current context is not authenticated, chances are there's no way to get a security token for the user. 
Thus, the only choice is impersonating with credentials. It might be possible to construct a WindowsPrincipal with the proper identity by calling LogonUser (provided the code knows the credentials for the Windows user). However, I have not tried this, so I can't vouch that it'll necessarily work. That's an incredibly helpful explanation, thank you. Creating a GenericPrincipal (from my FormsAuthenticationTicket) did allow me to set HttpContext.Current.User. However, I still can't set Profile values. If I use ProfileBase.Create(Membership.GetUser().UserName).SetPropertyValue, nothing happens - it doesn't throw an exception anymore, but the property remains empty. If I use HttpContext.Current.Profile.SetPropertyValue, it says that I cannot set properties on an anonymous profile, implying that these are different Profile objects that I somehow need to merge. You are going to run into problems with this approach if you enable anonymousIdentification. Rather than Membership.GetUser().UserName, I would suggest using HttpContext.Profile.UserName. Like this... private UserProfile _profile; private UserProfile Profile { get { return _profile ?? (_profile = (UserProfile)ProfileBase.Create(HttpContext.Profile.UserName)); } } Hat tip: SqlProfileProvider - can you use Profile.GetProfile() in a project? First of all, thanks @Jeremy for sharing your findings. You helped me get going in the right direction. Secondly, sorry for bumping this old post. Hopefully this will help someone connect the dots. The way I finally got this working was to use the following static method inside my profile class: internal static void InitializeNewMerchant(string username, Merchant merchant) { var profile = System.Web.HttpContext.Current.Profile as MerchantProfile; profile.Initialize(username, true); profile.MerchantId = merchant.MerchantId; profile.Save(); }
common-pile/stackexchange_filtered
meteor client-side tests hang on AWS but not locally

I'm trying to get my client-side tests working for CI. Right now I'm just running them from a terminal - though eventually they will be running in Jenkins. On my dev machine (Ubuntu 14.04) the tests run just fine. On my AWS EC2 instance (Ubuntu 16.04.1) the client tests DO NOT run; the server tests run, then I get the log:

=> App running at: http://localhost:3000/

then nothing. The command I'm using is:

MOCHA_REPORTER=tap SERVER_TEST_REPORTER=tap CLIENT_TEST_REPORTER=tap TEST_BROWSER_DRIVER=nightmare xvfb-run --server-args="-screen 0 1024x768x24" meteor test --once --driver-package dispatch:mocha

My npm and node versions are the same on both machines (3.10.9 and v4.6.2 respectively). I'm using the segmentio/nightmare browser to run tests (as supported by dispatch:mocha) because I was having issues with selenium/chrome when testing locally. I use this specific version as recommended here: https://github.com/segmentio/nightmare/issues/224

I'm using xvfb-run to run the headless browser. I tried the slightly different configuration recommended in the above link, but it also didn't work. The commands I'm running locally vs on AWS are identical. The only difference I see between the two is the Ubuntu version - is this likely to be the problem, or have I overlooked something? I'm pretty stuck on where to go from here - any thoughts would be appreciated.

I spent hours looking, then 10 mins after I asked, I found the answer: it seems like one of these installs fixed the issue - I guess AWS doesn't install X11 by default on their servers, which makes sense.
The majority of the command (except xorg and openbox) came from here: https://github.com/segmentio/nightmare/issues/224

sudo apt-get install -y xvfb x11-xkb-utils xfonts-100dpi xfonts-75dpi xfonts-scalable xfonts-cyrillic x11-apps clang libdbus-1-dev libgtk2.0-dev libnotify-dev libgnome-keyring-dev libgconf2-dev libasound2-dev libcap-dev libcups2-dev libxtst-dev libxss1 libnss3-dev gcc-multilib g++-multilib xorg openbox
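For anyone hitting the same hang, a quick way to tell whether a box is missing the X pieces before kicking off a long test run is to probe for the binaries on PATH. This is only a sketch; the package suggestions are assumptions based on the apt-get line above:

```shell
#!/bin/sh
# Report whether each binary the xvfb-run/nightmare setup relies on is
# installed, and suggest the (assumed) apt package when it is missing.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok      $1"
  else
    echo "missing $1 (try: sudo apt-get install -y $2)"
  fi
}

check xvfb-run xvfb
check Xvfb     xvfb
check X        xorg
```

Running this before `meteor test` at least turns a silent hang into an obvious "missing Xvfb" message.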
Issue with Expo CLI environment setup in Windows (Solved). Registering for an account in Expo failed with Firebase & styling issue (new question)

I am having an issue on React Native. This is my first time using it, so I am following a guide from https://www.youtube.com/watch?v=f6TXEnHT_Mk. According to the video, after npm start, I am supposed to arrive at an opened browser that looks like:

But it did not happen on my laptop. When I tried to copy the link in the terminal to a browser, I got this instead:

Can anyone explain to me what I did wrong? I have double checked everything and it is the same as the video up to this point. In the video, for this to appear, the Windows Defender firewall was blocking the feature, so the user allowed access for that. But my firewall seems to not be blocking it.

Having an error logging in to Expo via terminal: Solved

I am currently following a guide from https://www.youtube.com/watch?v=ql4J6SpLXZA. I am doing the login UI and around the 8 min mark, I decided to take a short break. After coming back, I continued till the 9 min mark and I am met with this:

Tried to remove the code I newly added but the problem still did not go away. Referring to Unable to resolve "../../App" from "node_modules/expo/AppEntry.js", I tried to add "sourceExts": ["js", "json", "ts", "tsx", "jsx", "vue"] in the app.json file, but I do not have the packageOpts line in the file. How do I solve this? Solved

I am still following the guide above: https://www.youtube.com/watch?v=ql4J6SpLXZA. Now I am having an issue with Firebase.
As stated in the video, it is for Firebase 8 and I am using Firebase 9, so I used the code provided by Firebase:

// Import the functions you need from the SDKs you need
import { initializeApp } from "firebase/app";
// TODO: Add SDKs for Firebase products that you want to use
// https://firebase.google.com/docs/web/setup#available-libraries

// Your web app's Firebase configuration
const firebaseConfig = {
    apiKey: "AIzaSyDI9Ggk4h-bHBSDjZAZgMaB6Ur_lvJIPKw",
    authDomain: "fypapp-4f10a.firebaseapp.com",
    projectId: "fypapp-4f10a",
    storageBucket: "fypapp-4f10a.appspot.com",
    messagingSenderId: "306492584822",
    appId: "1:306492584822:web:24b7ccb717a107df6b4057"
};

// Initialize Firebase
const app = initializeApp(firebaseConfig);

With this I was having an error of Unable to resolve "idb" from<EMAIL_ADDRESS>and I found an answer online to bypass that by creating metro.config.js with this code (this is for future users having trouble with that error):

const { getDefaultConfig } = require("@expo/metro-config");

const defaultConfig = getDefaultConfig(__dirname);

defaultConfig.resolver.assetExts.push("cjs");

module.exports = defaultConfig;

But now I am having another error when creating a new user. I have the following error:

TypeError: undefined is not an object (evaluating '_firebase.auth.createUserWithEmailAndPassword')

I searched around Stack and no one has posted an error of evaluating '_firebase.auth.createUserWithEmailAndPassword'. Is there something I can do to solve this?

Another problem I have is regarding the styling for the page. I have this currently (when I never attempt to type anything new in the input boxes):

When I try to type something new in the input boxes:

I have the following code:

How do I correct it so that my keyboard can be seen and the input boxes can be seen at the same time?
<KeyboardAvoidingView //To prevent keyboard from blocking the writing area
    style={styles.container}
    behavior="padding"
>
    <View style={styles.inputContainer}>
        <TextInput
            placeholder="Email"
            value={email}
            onChangeText={text => setEmail(text)}
            styles={styles.input}
        />
        <TextInput
            placeholder="Password"
            value={password}
            onChangeText={text => setPassword(text)}
            styles={styles.input}
            secureTextEntry //Hide password
        />
    </View>

    <View style={styles.buttonContainer}>
        <TouchableOpacity
            onPress={() => { }}
            style={styles.button}
        >
            <Text style={styles.buttonText}>Login</Text>
        </TouchableOpacity>
        <TouchableOpacity
            onPress={handleSignUp}
            style={[styles.button, styles.buttonOutline]}
        >
            <Text style={styles.buttonOutlineText}>Register</Text>
        </TouchableOpacity>
    </View>
</KeyboardAvoidingView>

Hi Calvin, this is a site for technical questions with definitive answers, not opinion-based answers or unfocused questions. Use Google and read some articles instead.

Sorry about that. I have changed the question.

Update July 25, 2022: <EMAIL_ADDRESS> has been released with the web UI removed. The last release to include the web UI is <EMAIL_ADDRESS>. You can check this article on Medium for more details here. Or you can check this old question here. I'd suggest switching to a newer tutorial or just skipping this detail and continuing :D

For the Expo login, try running these two commands and try again:

npm cache clean --force
npm install -g npm@latest --force

Thanks for the info about that! I have one more issue. I cannot log in via the terminal. I already registered an account but the terminal doesn't allow me to log in. Is it possible for you to advise me on what I should do? PS: I posted the snip in the same post as I am unable to create a new question due to the downvote.

Try running these two commands and try again:

npm cache clean --force
npm install -g npm@latest --force

Thanks! It works perfectly. I have one question about the app though. Sometimes it can sync, sometimes it cannot.
How come? If I run it in my Git Bash terminal I can, but when I run it in my VS Code terminal, I cannot. And sometimes when I run it in Git Bash, it doesn't sync either. When I open the app on my phone, my terminal doesn't have the line where it is generating the bundle. How do I deal with this?

Can you mark it as a correct answer please? And about the terminal, can you add images or the error message you receive?

I just did, but is it possible to help me with an issue of mine again? I have edited the post. I am not allowed to post a question, that's why. For the terminal issue, there's no error or anything. On the phone it just stays on the same screen even though I edit app.js.

Can you check this article: https://docs.expo.dev/workflow/run-on-device/

npx expo start --tunnel

Oh okay, but how about the new error in my post? I still cannot figure out why that is happening.

Just add packageOpts to the file like the image. https://i.sstatic.net/54Kc9.png

Oh okay, but I don't know why after one day it is okay already. No error. But now I have another issue, this is about linking up Firebase and also the styling. Is it possible to help me on this? Thank you very much!

Can you ask it in another question?
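On the createUserWithEmailAndPassword TypeError: that error is typical when code written for the Firebase v8 namespaced API (firebase.auth().createUserWithEmailAndPassword(...)) runs against a v9 install, where the function is a top-level export of "firebase/auth" and takes the auth object as its first argument. The sketch below shows the v9 call shape; the stub implementations are hypothetical stand-ins so the snippet runs without a real Firebase project:

```javascript
// Real v9 usage would look like this (untested here against a live project):
//   import { getAuth, createUserWithEmailAndPassword } from "firebase/auth";
//   export const auth = getAuth(app);                      // in firebase.js
//   createUserWithEmailAndPassword(auth, email, password); // signup handler
//
// The stubs below mimic only the call shape of those v9 exports.
const getAuth = (app) => ({ app });
const createUserWithEmailAndPassword = (auth, email, password) => {
  if (!auth || !email || !password) {
    return Promise.reject(new Error("auth/invalid-arguments"));
  }
  return Promise.resolve({ user: { email } });
};

const auth = getAuth({ name: "demo-app" });
createUserWithEmailAndPassword(auth, "user@example.com", "secret")
  .then((cred) => console.log(cred.user.email)); // prints "user@example.com"
```

The key difference from v8 is that `auth` is an argument to a free function, not the object the function hangs off - which is why `_firebase.auth.createUserWithEmailAndPassword` comes back undefined.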
capifony deployment task order: site unavailable during grunt

I am using capifony to deploy my Symfony 2 application. I run two late tasks. During the time grunt runs, my website is unavailable. I've tried to play around with the task order without success. How should I configure the following tasks, or what else should I do, to make sure my website does not become unavailable during the grunt process (ideally, the symlink to current would be changed after the grunt command has finished)?

after 'deploy:restart', 'deploy:dump_dev_assets'
after 'deploy:dump_dev_assets', 'deploy:grant_permissions'

namespace :deploy do
  desc "Dump dev assets"
  task :dump_dev_assets do
    run("cd #{deploy_to}/current && npm install")
    run("cd #{deploy_to}/current && bower install --allow-root")
    run("cd #{deploy_to}/current && grunt")
  end
end

namespace :deploy do
  desc "Grant permissions"
  task :grant_permissions do
    run "sudo chmod -R 777 #{latest_release}/app/cache"
    run "sudo chmod -R 777 #{latest_release}/app/logs"
    run "sudo chmod -R 777 #{latest_release}/web/uploads"
    run "sudo chmod -R 777 #{latest_release}/web/media"
  end
end

It's a weird decision to run grunt in a production environment. Normally you don't even have it installed there. The "correct" way is to build everything in some dedicated build environment, then simply deploy the artifacts to production.

hmmm, I did not know that... I'll do it then. thanks ;)
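If the assets really must be built on the server, another option is to hook the build before the symlink switch and run it in the new release directory, so `current` keeps serving the previous release while grunt runs. This is an untested config sketch; the `deploy:create_symlink` task name is assumed from Capistrano 2 (which capifony is built on - older Capistrano 2 versions call it `deploy:symlink`):

```ruby
# deploy.rb sketch - build in latest_release *before* the symlink moves,
# so visitors keep hitting the old release during npm/bower/grunt.
before 'deploy:create_symlink', 'deploy:dump_dev_assets'

namespace :deploy do
  desc "Dump dev assets in the new release before it goes live"
  task :dump_dev_assets do
    run "cd #{latest_release} && npm install"
    run "cd #{latest_release} && bower install --allow-root"
    run "cd #{latest_release} && grunt"
  end
end
```

The original tasks ran in `#{deploy_to}/current` after `deploy:restart`, which is exactly why the live site was being modified in place.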
Heat equation with a positive coefficient

If we have the heat equation $u_t-ku_{xx}=0$, where $(x,t)\in\mathbb{R}\times(0,\infty)$, $k>0$, and $u(x,0)=f(x)$, then the general solution is
\begin{align}
u(x,t)=\int_{-\infty}^{\infty}\Phi(x-y,t)f(y)dy\,,
\end{align}
where
\begin{align}
\Phi(x,t)=\frac{1}{\sqrt{4\pi kt}}e^{-x^2/4kt}\,.
\end{align}
If we have the same conditions, what would be the general solution to the heat equation for $k<0$?

I tried the change of variables $x=ip$, where $p$ is pure imaginary, such that the heat equation became $u_t+ku_{pp}=0$. I am not sure how to solve this, though. I do know that I cannot assume that the spatial and temporal components of the equation are separable. I was thinking that the solution might be
\begin{align}
u(p,t)=\int_{-i\infty}^{i\infty}\Phi(ip-iy,t)f(iy)dy\,.
\end{align}
This satisfies $u_t+ku_{pp}=0$, but I am not sure if I am missing something crucial.

I think the solution diverges and it's not physically possible (where is all the energy coming from? Infinite heat?). Ultimately, I'm trying to evaluate $e^{-t\partial_x^2}g(x)$. By setting this equal to $u(x)$ and differentiating both sides with respect to $t$, I get the backwards heat equation. I'm treating $t$ as a variable, though it is really a constant in my exponential, so the backwards heat equation is only a means to an end. What I want in the end is something like $u(x,0.03)$. Even though $u$ diverges as $t\to\infty$, does there exist a (divergent) solution?

This is the same as reversing time. Because of the smoothing that takes place in forward time, there may not be a solution in the backwards time direction for more than a short period of time, or perhaps not at all.

The backwards heat equation is not a very well-posed problem. Therefore, we cannot guarantee any degree of existence, uniqueness, or stability (we are mostly concerned with the stability condition here).
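To see concretely why the $k<0$ problem is ill-posed, it helps to look at a single Fourier mode. This is the standard argument, sketched here under the assumption that $f$ has a Fourier transform $\hat{f}$:

```latex
% Fourier-mode analysis of u_t - k u_{xx} = 0.
% Writing u(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}
%   \hat{u}(\xi,t)\, e^{i\xi x}\, d\xi,
% each mode satisfies an ODE in t:
\hat{u}_t(\xi,t) = -k\xi^2\,\hat{u}(\xi,t)
\quad\Longrightarrow\quad
\hat{u}(\xi,t) = e^{-k\xi^2 t}\,\hat{f}(\xi).
```

For $k>0$ the factor $e^{-k\xi^2 t}$ damps high frequencies, which is the smoothing encoded in the kernel $\Phi$ above. For $k<0$ it becomes $e^{|k|\xi^2 t}$, which blows up as $|\xi|\to\infty$ for any fixed $t>0$: a solution exists only if $\hat{f}$ decays at least like a Gaussian, and arbitrarily small high-frequency perturbations of $f$ produce arbitrarily large changes in $u$. That is the precise sense in which existence can fail and stability always fails.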
How can I show and hide Like/Unlike buttons using Javascript in my Web app?

I'm doing the online course CS50W by Harvard and building a web app similar to Twitter. When a user sees a post I need to show the user a Like or Unlike button depending on whether the user liked it or not. There's also a counter showing how many users liked the post so far. I am able to update the counter if the user liked or unliked the post, but I'm having a problem with the code showing or hiding the buttons. Here's my code:

models.py

class Post(models.Model):
    """ Model representing a post. """
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    content = models.TextField()
    timestamp = models.DateTimeField(auto_now_add=True)
    no_of_likes = models.IntegerField(default=0)

    def __str__(self):
        return f"Post {self.id} by {self.user.username} on {self.timestamp}"

class Like(models.Model):
    """ Model representing a like. """
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    post = models.ForeignKey(Post, on_delete=models.CASCADE)
    timestamp = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return f"{self.user} likes {self.post}"

urls.py

path("", views.index, name="index"),
path("like/<int:post_id>", views.like, name="like"),
path("unlike/<int:post_id>", views.unlike, name="unlike"),

views.py

def index(request):
    """ Home page. """
    posts = Post.objects.all().order_by('-timestamp')
    paginator = Paginator(posts, 5)
    page_number = request.GET.get('page')
    page_obj = paginator.get_page(page_number)
    likes = Like.objects.all()

    # Make a list of liked posts.
    liked_posts = []
    try:
        for like in likes:
            if like.user.id == request.user.id:
                liked_posts.append(like.post.id)
    except:
        liked_posts = []

    return render(request, "network/index.html", {
        "posts": posts,
        "page_obj": page_obj,
        "likes": likes,
        "liked_posts": liked_posts,
    })

@login_required
def like(request, post_id):
    post = Post.objects.get(pk=post_id)
    user = User.objects.get(pk=request.user.id)
    like = Like.objects.create(user=user, post=post)
    like.save()
    # Update no of likes.
    post.no_of_likes = Like.objects.filter(post=post).count()
    post.save()
    return JsonResponse({"message": "successfully liked", "no_of_likes": post.no_of_likes})

@login_required
def unlike(request, post_id):
    post = Post.objects.get(pk=post_id)
    user = User.objects.get(pk=request.user.id)
    like = Like.objects.filter(user=user, post=post)
    like.delete()
    # Update no of likes.
    post.no_of_likes = Like.objects.filter(post=post).count()
    post.save()
    return JsonResponse({"message": "successfully unliked", "no_of_likes": post.no_of_likes})

index.html with Javascript

{% if post.id not in liked_posts %}
    <button type="button" class="btn btn-primary" id="like{{ post.id }}" onclick="like('{{ post.id }}')">Like</button>
{% else %}
    <button type="button" class="btn btn-primary" id="unlike{{ post.id }}" onclick="unlike('{{ post.id }}')">Unlike</button>
{% endif %}

function like(id) {
    fetch(`/like/${id}`, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "X-CSRFToken": "{{ csrf_token }}",
        },
    })
    .then(response => response.json())
    .then(data => {
        document.querySelector(".no-of-likes").innerHTML = data.no_of_likes + " likes";
    })
    .then(() => {
        document.getElementById("unlike" + id).style.display = "block";
        document.getElementById("like" + id).style.display = "none";
    });
}

function unlike(id) {
    fetch(`/unlike/${id}`, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "X-CSRFToken": "{{ csrf_token }}",
        },
    })
    .then(response => response.json())
    .then(data => {
        document.querySelector(".no-of-likes").innerHTML = data.no_of_likes + " likes";
    })
    .then(() => {
        document.getElementById("unlike" + id).style.display = "none";
        document.getElementById("like" + id).style.display = "block";
    });
}

I can update the counter without refreshing the page, but it's not the same with the buttons. This is what's happening:

--> When I click the Like button, the counter updates. I have to refresh the page to change the button to Unlike. As per the specification this needs to be done asynchronously (assuming with the help of Javascript!) without reloading the page.

--> And it's the same with the Unlike button too. When I click it the counter updates, but I have to reload the page to change the button to Like.

I want to change the buttons without reloading the page. Tried placing code blocks at different places, tried using 'if else' conditions, but I'm still stuck here. Any help is appreciated! Sorry for the messy Javascript code, newbie here.

Why not just make one button, instead of two with one you hide? Here you only generate one button, because the template is rendered on the server side.

That is because you are only rendering one button. Add the two, one you hide, the other you show:

<button type="button" class="btn btn-primary" id="like{{ post.id }}" onclick="like('{{ post.id }}')" style="display: {% if post.id not in liked_posts %}block{% else %}none{% endif %};">Like</button>
<button type="button" class="btn btn-primary" id="unlike{{ post.id }}" onclick="unlike('{{ post.id }}')" style="display: {% if post.id in liked_posts %}block{% else %}none{% endif %};">Unlike</button>

But I don't see why you need to use two buttons in the first place: just use one button that you use both for liking and unliking depending on the context.

Note: Please don't store aggregates in the model: determine aggregates when needed: storing aggregates in the model makes updating and keeping data in sync harder. You can use .annotate(…) [Django-doc] to generate counts, sums, etc.
per object when needed.

Your code is working, thank you. Though I tried to change the code as you suggested! I created one button and wrote a function toggleLike(postId) to change .innerText to Like or Unlike using 'if else' conditions. That part is working fine. But when I refresh the page the buttons show Like only. I tried to change the buttons with document.addEventListener('DOMContentLoaded', function() { // Logic here }) but I'm not having any success! Can you please give me suggestions on how to fix it? Cheers!

@srikk407: is it possible to ask a new question with the new state of the view(s) and template?

I haven't changed views.py that much, only the HTML button and JavaScript code. Here's the link to my new code: https://stackoverflow.com/questions/78456711/how-to-render-right-buttons-like-or-unlike-in-a-post-based-on-users-choice-us
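The single-button idea above can be sketched like this. The names here are hypothetical (not from the course solution), and the key point for the "shows Like after refresh" symptom is that the initial state must come from the server-rendered template, e.g. a `data-liked="{% if post.id in liked_posts %}true{% else %}false{% endif %}"` attribute, with the JavaScript only toggling from there:

```javascript
// Pure logic for a single toggle button: given the current liked state,
// decide which endpoint to call and what the button should say next.
function nextLikeState(liked) {
  return liked
    ? { endpoint: "unlike", nextLabel: "Like" }
    : { endpoint: "like", nextLabel: "Unlike" };
}

// Hypothetical DOM wiring (browser-only, not runnable here):
// function toggleLike(btn, postId) {
//   const liked = btn.dataset.liked === "true";
//   const s = nextLikeState(liked);
//   fetch(`/${s.endpoint}/${postId}`, { method: "POST", headers: { /* csrf */ } })
//     .then((r) => r.json())
//     .then(() => {
//       btn.textContent = s.nextLabel;
//       btn.dataset.liked = String(!liked);
//     });
// }

console.log(nextLikeState(false).endpoint); // "like"
console.log(nextLikeState(true).nextLabel); // "Like"
```

Because the label and endpoint are both derived from one `data-liked` flag, there is nothing for a DOMContentLoaded handler to fix up: the template already rendered the right initial label.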
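The "derive aggregates when needed" advice is easy to demonstrate outside Django. This toy, stdlib-only model (hypothetical names, no ORM) keeps the Like rows as the single source of truth and computes counts on demand, which is what `annotate(Count(...))` does at the SQL level, so a stored `no_of_likes` column can never drift out of sync:

```python
from collections import Counter

# Toy in-memory stand-in for the Like table: one row per (user, post).
likes = [
    {"user": 1, "post": 10},
    {"user": 2, "post": 10},
    {"user": 1, "post": 11},
]

def like_counts(rows):
    """Rough analogue of Post.objects.annotate(num_likes=Count('like'))."""
    return Counter(row["post"] for row in rows)

def toggle_like(rows, user, post):
    """Like if not yet liked, unlike otherwise; counts stay consistent for free."""
    row = {"user": user, "post": post}
    if row in rows:
        rows.remove(row)
        return "unliked"
    rows.append(row)
    return "liked"

print(like_counts(likes))      # Counter({10: 2, 11: 1})
toggle_like(likes, 2, 11)
print(like_counts(likes)[11])  # 2
toggle_like(likes, 2, 11)
print(like_counts(likes)[11])  # 1
```

There is no counter field to update in `like`/`unlike`: inserting or deleting a Like row is the whole write, and every read recomputes the count from those rows.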