How can I support mods in my game made in Nim? First, I'd like to apologize for my English. I'm reviving an old project of mine, originally written in Lua: a Terraria-style sandbox game. The game was designed from the beginning to support mods; the original game itself was a mod built on an API that manipulated everything within the game. To create a mod, it was enough to follow this API, and the main code would import it, thereby creating blocks, items, entities, and so on. Doing this in Lua was simple, since it is a scripting language: to provide this support, it was enough to import all the code from a folder. In Nim, however, the problem is that the base of the game, together with its API, is compiled ahead of time. That way, there is no way for programmers to create mods for it without having the source code and recompiling the game with their modifications. Searching a bit, I found a plausible option in dynamic libraries, but I had difficulty implementing them in a way that fit the project. I also discovered a library called "nimpk", which was exactly what I was looking for. It embeds the "pocketlang" language in Nim, so I can easily create functions in Nim and run them inside pocketlang, and vice versa. So I could create an API in Nim, create objects in pocketlang using the API, and run it all inside Nim. It was perfect, but since the library hasn't been maintained for a while, it has serious compatibility issues with other libraries, including the one I use to make my game. You can ignore the fact that I'm making a game; I would just like an example of how to do this kind of implementation. It could be, for example, a calculator that imports all its functions from a "functions" folder. Thanks in advance for the reply.
PS: Using methods like Minecraft Bedrock's ".json" addon files is out of the question for this project, since that approach greatly limits what can be created. By the same comparison, something like the most recent versions, which support JavaScript in addons, would be more ideal.

Is this what you need: https://nim-lang.org/docs/hcr.html / https://nim-lang.org/docs/hotcodereloading.html ?

As you've discovered, there are basically two main ways of adding mod/extensibility support. These are more or less the same options you have no matter which language you implement your project in, although some languages make this easier or harder. The first option is to directly extend your program at the same level as the original. For Nim this means extending the binary by loading DLL/so/dynlib files. These files will run at native performance and have access to pretty much everything in your program (and they could potentially also crash your program). This is what Lua does too, but since Lua is a scripting language and works at a higher level of abstraction, directly extending it simply means putting more text into the interpreter. The second option is to implement or integrate some form of scripting language. In Nim you could integrate a Lua interpreter, a Python interpreter, or any other interpreter which can be integrated into a C program. One of the easiest languages to integrate, however, might be NimScript. NimScript is a subset of Nim which runs in macros, config files, and other compile-time executions within the Nim compilation cycle. This is done by simply importing the compiler libraries, or even more easily by using nimscripter. The downside of a scripting language is that you'll have to convert from your internal types to whatever the scripting language accepts and back again. This can be hard on performance (compared to native speeds, of course).
If you go for the first method, it is of course also possible to create a module which loads a scripting language and thereby implements the second method. Because of this, among other reasons, the first option is probably the better of the two. To do this you should be able to create a file with all the things you want to make easily available to your dynamic library, a sort of header file, with everything marked with the {.importc.} pragma and no body. In your main code you then need the {.exportc.} pragma on the same things. If you then import the "header" file into your dynamic library code and compile it as a dynamic library, your main code should be able to load this library and call your procedures directly. This is at least how it works on Linux. You could probably even create your own little pragma which adds exportc and writes an importc version of the procedure to a "header" file, so that the definitions are guaranteed to match.
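To make the requested "calculator that imports all its functions from a 'functions' folder" concrete, here is a minimal sketch of the dynamic-library pattern. The file names, the `opName`/`apply` interface and the folder layout are all assumptions for illustration, not a fixed API. Each plugin is compiled separately as a shared library, and the host scans the folder at startup:

```nim
# plugin_add.nim -- one "mod"; compile with: nim c --app:lib -o:functions/add.so plugin_add.nim
proc opName(): cstring {.exportc, dynlib, cdecl.} = "add"
proc apply(a, b: float): float {.exportc, dynlib, cdecl.} = a + b
```

```nim
# host.nim -- the "game"; loads every library it finds in functions/
import std/[dynlib, os]

type
  NameProc = proc (): cstring {.cdecl.}
  ApplyProc = proc (a, b: float): float {.cdecl.}

for path in walkFiles("functions/*.so"):   # use *.dll on Windows, *.dylib on macOS
  let lib = loadLib(path)
  if lib.isNil:
    continue
  let opName = cast[NameProc](lib.symAddr("opName"))
  let apply = cast[ApplyProc](lib.symAddr("apply"))
  if not (opName.isNil or apply.isNil):
    echo $opName(), "(2, 3) = ", apply(2.0, 3.0)
```

The same idea scales to a game API: the host exposes registration procs with {.exportc.}, and each mod's entry proc calls back into them.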
common-pile/stackexchange_filtered
How can I add an if statement in my script? I map a network drive, but after opening the application the network drive must be unmapped again. I build my application in VS2015 C#. private void button4_Click(object sender, EventArgs e) { IWshNetwork_Class network = new IWshNetwork_Class(); network.MapNetworkDrive("k:", @"\\10.*.*.*\d$\test", Type.Missing, "local\\blabla", "*******"); System.Diagnostics.Process.Start("file:///K:\\gemy.exe"); //This is the closing part network.RemoveNetworkDrive("k:"); }

It seems that the title and question have no connection :P. So what are you trying to accomplish? Is it possible to close the mapped network drive when I close my application?

You could probably check whether the drive exists or not. See (Check Drive Exists(string path)). using System.IO; private IWshNetwork_Class network = new IWshNetwork_Class(); private void button4_Click(object sender, EventArgs e) { string drive = Path.GetPathRoot("k:"); if(!Directory.Exists(drive)) { network.MapNetworkDrive(drive, @"\\10.*.*.*\d$\test", Type.Missing, "local\\blabla", "*******"); System.Diagnostics.Process.Start("file:///K:\\gemy.exe"); } else { //This is the closing part network.RemoveNetworkDrive("k:"); } } In case Directory.Exists always responds false you might want to take a look at this: Check if directory exists on Network Drive.

I get three errors, all CS0103, in CAASapp\Form1.cs: The name 'Path' does not exist in the current context; The name 'Directory' does not exist in the current context; The name 'network' does not exist in the current context.

I updated my answer.
The Directory and Path errors occur because you don't reference System.IO (added it to the answer), and the network error is caused by not initializing it in the right place (also fixed in the answer).

I have to press the button to close the mapped network drive; is it possible to close it when I close the entire application?

You can use events for that. You can find that easily on the internet. I hope my answer was of use to you.
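Since the follow-up question was how to unmap the drive when the whole application closes, here is a sketch using the form's FormClosing event. It assumes the same `network` field as in the answer above; the handler name is illustrative:

```csharp
// In the form's constructor, after InitializeComponent():
this.FormClosing += Form1_FormClosing;

// Runs when the user closes the application window.
private void Form1_FormClosing(object sender, FormClosingEventArgs e)
{
    string drive = Path.GetPathRoot("k:");
    if (Directory.Exists(drive))
    {
        network.RemoveNetworkDrive("k:");
    }
}
```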
common-pile/stackexchange_filtered
DX vs FX: the final comparison. There are a lot of questions about using specific lenses from one format on the other, but I was hoping for a comprehensive overview of all combinations that is easily referenced. For the sake of the math, let's choose a '100mm' lens. I know that DX cameras, for example, have a 1.5x crop factor, so a 100mm lens actually looks like 150mm on a DX camera. I have also heard that using a DX lens on an FX camera reduces the megapixels since it is only using part of the sensor. Etc. So, again for easy math and the sake of having a good referenceable comparison: If you have a '100mm' DX lens, what happens when you: Use it on a DX camera. Use it on an FX camera. If you have a '100mm' FX lens, what happens when you: Use it on a DX camera. Use it on an FX camera.

possible duplicate of What is the difference between DX format and FX format lenses, and which to choose for what purpose? Could we get a less dramatic and more accurate title for this question?

Things get much easier if you forget about millimetres and focal lengths and talk about angular field of view. If you have a '100mm' DX lens, what happens when you: Use it on a DX camera: You get a horizontal field of view of 13.5 degrees (corresponding to 150mm on a full frame sensor). Use it on an FX camera: You get strong vignetting, or your camera crops the image, reducing the resolution to 44% of the pixels and leaving you with a horizontal field of view of 13.5 degrees (corresponding to 150mm on a full frame sensor). If you have a '100mm' FX lens, what happens when you: Use it on a DX camera: You get a horizontal field of view of 13.5 degrees (corresponding to 150mm on a full frame sensor). Use it on an FX camera:
You get a horizontal field of view of 20.4 degrees (corresponding to 100mm on a full frame sensor). Here's the same information in a handy table:

+---------+------------------+------------------+
|         | FX sensor        | DX sensor        |
+---------+------------------+------------------+
| FX lens | 100mm equiv fov  | 150mm equiv fov  |
| DX lens | 150mm equiv fov* | 150mm equiv fov  |
+---------+------------------+------------------+

*when used in crop mode; otherwise the lens will vignette a certain amount and the usable field of view will depend on the particular lens, zoom setting, aperture and focus distance.

That sounds very precise and accurate, though for most of us still thinking in terms of 'mm', could you translate? @TaylorHuston: Outside of offering that on DX, a 100mm lens "behaves like" a 150mm lens, which you already know... what else are you looking for? Matt's answer offered all the information you asked for that you didn't already have... The DX lens vignetting and crop depend strongly on the lens; some people find certain DX lenses are acceptable on FX.

Be careful here. A lens does not know what camera it is mounted to. It's popular folklore that a lens "acts longer" on a crop sensor camera (Nikon DX, Canon EF-S). This is technically incorrect. The proper term is "crop sensor", and what the image looks like is exactly the same as if you took it on a full frame camera and cropped it. Exactly. Take as an example a DX/EF-S lens. Mount it on a crop sensor camera on a tripod. Take a photo, and then swap to a full frame camera at the same place, looking at the same scene. Take a photo. Now mount the same lens on a view camera (most are at least 4x5) and take the photo. In all cases the lens is the same; it did not magically grow by 50%. In fact, the image will be exactly the same size on each of the three cameras. There is no difference in the relative compression that a long lens shows, etc.
It is more accurate to think that shots taken on a crop sensor camera are pre-cropped. You can compare the behaviour of DX and FX lenses on various DX and FX bodies graphically with the Nikkor lens simulator. You can experiment with the field of view, % of crop and more with lenses of various focal lengths on DX and FX bodies.
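The angle-of-view numbers quoted above follow from the usual rectilinear formula, FoV = 2·atan(sensor width / (2·focal length)). A quick check, using 36 mm for the FX sensor width and roughly 23.6 mm for Nikon DX (the sensor widths here are approximate, not quoted from the thread):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view of a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(horizontal_fov_deg(36.0, 100), 1))   # FX sensor + 100mm lens -> 20.4
print(round(horizontal_fov_deg(23.6, 100), 1))   # DX sensor + 100mm lens -> 13.5
```

Both values match the table, which is exactly the "pre-cropped" point: the lens is unchanged, only the recorded angle of view shrinks with the sensor.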
common-pile/stackexchange_filtered
JNLP at runtime using JSP. I have created a Java web application, and I have one Windows application; now I want to load the Windows application using JNLP (Java Web Start), and it should load from a JSP page at runtime. How can I do this?

It's easy: just write your JSP the way you'd write your JNLP, where the dynamic parts are scripted by JSP. I think you'll need to understand/learn the JNLP syntax anyway.

1) Go through each of the links on the JWS tag Wiki. 2) When you have a specific question based on that research, come to SO to ask it.
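A minimal sketch of what "write your JSP the way you'd write your JNLP" can look like: the page declares the JNLP content type and fills in the codebase dynamically. The file names, jar name and main class below are placeholders, not something from the question:

```jsp
<%@ page contentType="application/x-java-jnlp-file" %><?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+"
      codebase="<%= request.getRequestURL().toString().replaceFirst("/[^/]+$", "") %>"
      href="app.jnlp">
  <information>
    <title>My Window Application</title>
    <vendor>Example</vendor>
  </information>
  <resources>
    <j2se version="1.7+"/>
    <jar href="app.jar" main="true"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>
```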
common-pile/stackexchange_filtered
How to create a custom table and provide a Create, Read, Update, Delete interface. How will I create a new structure in the ABAP Dictionary that has the following fields: MYNAME CHAR(25), FATHERNAME CHAR(25), DEGREE CHAR(25), AGE INT4, BIRTHDAY DATS — and write an ABAP program that uses the created structure, fills in all of its fields and writes it out to the screen?

This requires no coding at all. Just create the table (use transaction SE11), then run the table maintenance generator (you will find it in the menu in SE11). Then run SM30, fill in your table and hit display or maintain. You can then make a parameter transaction that pre-fills the initial SM30 screen; this requires no coding as well.
common-pile/stackexchange_filtered
Handling multiple forms of different form tags in a function view in Django HTML: <form method="POST" enctype="multipart/form-data" id="p_formUpload"> {% csrf_token %} <fieldset class="form-group"> <p> {{ p_form|crispy }} </p> </fieldset> </form> <form method="POST" enctype="multipart/form-data" id="c_formUpload"> {% csrf_token %} <fieldset class="form-group"> <p> {{ c_form|crispy }} </p> </fieldset> </form> views.py: def profile(request): p_photos = ProfilePicture.objects.all() c_photos = CoverPicture.objects.all() if request.method == 'POST': p_form = ProfileUpdateForm(request.POST, request.FILES, instance=request.user.profilepicture) c_form = CoverUpdateForm(request.POST, request.FILES, instance=request.user.coverpicture) if p_form.is_valid(): p_form.save() messages.success(request, f'Your account has been successfully updated!') return redirect('profile') if c_form.is_valid(): c_form.save() messages.success(request, f'Your account has been successfully updated!') return redirect('profile') else: p_form = ProfileUpdateForm(instance=request.user.profilepicture) c_form = CoverUpdateForm(instance=request.user.coverpicture) context={ 'p_form' : p_form, 'c_form' : c_form, 'p_photos': p_photos, 'c_photos': c_photos, } return render(request, 'user/profile.html',context) Javascript <script src="{% static 'user/js/jquery-3.1.1.min.js' %}"></script> <script src="{% static 'user/js/bootstrap.min.js' %}"></script> <script src="{% static 'user/js/cropper.min.js' %}"></script> <script> $(function () { $("#id_profile_image").change(function () { if (this.files && this.files[0]) { var reader = new FileReader(); reader.onload = function (e) { $("#image").attr("src", e.target.result); $("#modalCrop").modal("show"); } reader.readAsDataURL(this.files[0]); } }); var $image = $("#image"); var cropBoxData; var canvasData; $("#modalCrop").on("shown.bs.modal", function () { $image.cropper({ viewMode: 1, aspectRatio: 1/1, minCropBoxWidth: 200, minCropBoxHeight: 200, ready: function () { 
$image.cropper("setCanvasData", canvasData); $image.cropper("setCropBoxData", cropBoxData); } }); }).on("hidden.bs.modal", function () { cropBoxData = $image.cropper("getCropBoxData"); canvasData = $image.cropper("getCanvasData"); $image.cropper("destroy"); }); $(".js-zoom-in").click(function () { $image.cropper("zoom", 0.1); }); $(".js-zoom-out").click(function () { $image.cropper("zoom", -0.1); }); $(".js-crop-and-upload").click(function () { var cropData = $image.cropper("getData"); $("#id_x").val(cropData["x"]); $("#id_y").val(cropData["y"]); $("#id_height").val(cropData["height"]); $("#id_width").val(cropData["width"]); $("#p_formUpload").submit(); }); }); </script> <script> $(function () { $("#id_cover_image").change(function () { if (this.files && this.files[0]) { var reader = new FileReader(); reader.onload = function (e) { $("#c-image").attr("src", e.target.result); $("#c-modalCrop").modal("show"); } reader.readAsDataURL(this.files[0]); } }); var $image = $("#c-image"); var cropBoxData; var canvasData; $("#c-modalCrop").on("shown.bs.modal", function () { $image.cropper({ viewMode: 1, aspectRatio: 11/4, minCropBoxWidth: 1100, minCropBoxHeight: 400, ready: function () { $image.cropper("setCanvasData", canvasData); $image.cropper("setCropBoxData", cropBoxData); } }); }).on("hidden.bs.modal", function () { cropBoxData = $image.cropper("getCropBoxData"); canvasData = $image.cropper("getCanvasData"); $image.cropper("destroy"); }); $(".c-js-zoom-in").click(function () { $image.cropper("zoom", 0.1); }); $(".c-js-zoom-out").click(function () { $image.cropper("zoom", -0.1); }); $(".c-js-crop-and-upload").click(function () { var cropData = $image.cropper("getData"); $("#id_x").val(cropData["x"]); $("#id_y").val(cropData["y"]); $("#id_height").val(cropData["height"]); $("#id_width").val(cropData["width"]); $("#c_formUpload").submit(); }); }); </script> I am trying to have both the form in same page but in different form tag. 
But my first form (p_form) is submitting data, while the second form (c_form) is not working. The problem looks like it is in views.py. I tried to find many solutions but couldn't find one. I have multiple submit buttons through JavaScript and have two forms on the same page in different tags. Modules used in the project: django, crispy_forms, pillow. My complete code can be found at https://github.com/otakliquekirito/otaklique

Share the forms.py code, it seems the error might be there :( I tried my c_form on another page... it worked... so I think my error is in views.py, because on bringing c_form to the front it is working. Ok, let's see, wait. Have you added both submit buttons in a proper way? Yes, it worked perfectly when kept on different pages.

What exactly do you mean by "second form (c_form) is not working"? @JohnGordon my p_form is submitting data, but the other form c_form is not submitting the data; c_form and p_form worked perfectly when kept on different pages. Or just try creating a submit button via plain HTML and submit simply :) If this also doesn't work then I need to go through all the code. @NanthakumarJJ I tried the submit button as well but it didn't work... I want to keep both forms on the same page. Could you please share your repo or code? I will try it. @NanthakumarJJ from where should I share it?

As shown, those forms do not have submit buttons. How are you submitting the forms? If you're using Enter to submit the form, then that may be the problem -- as I recall, pressing Enter submits the first form on the page.
@JohnGordon I am submitting it by JavaScript... I'll update the code and add the JavaScript... just a minute. @BickeyJaiswal share it as a GitHub repo. @NanthakumarJJ (https://github.com/otakliquekirito/otaklique) here's my code... sorry for the delay.

The problem is in views.py: you are redirecting as soon as the first form is submitted, so the second one is not submitted. Simple. @NanthakumarJJ but the view only redirects if my p_form is_valid(). Yes, if either form's data is valid it will be redirected. @NanthakumarJJ that's what I want to do... but it's not working... the 1st form submits data but the 2nd form doesn't.

Try this: if request.method == 'POST': p_form = ProfileUpdateForm(request.POST, request.FILES, instance=request.user.profilepicture) c_form = CoverUpdateForm(request.POST, request.FILES, instance=request.user.coverpicture) if p_form.is_valid() and c_form.is_valid(): p_form.save() c_form.save() messages.success(request, f'Your account has been successfully updated!') return redirect('profile') elif p_form.is_valid(): pass elif c_form.is_valid(): pass

This only works when both forms are valid. If one form is not valid then it does not save. Ah yes, just add another condition in the elif block. I want to save p_form if p_form is valid and c_form if c_form is valid. The above code will only work if both forms are submitted at the same time, but my page submits the two forms at different times. Will it save my p_form if I submit only p_form?
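One robust way out of the "both forms hit the same view" problem is to give each form a distinguishing field (a named button, or a hidden input, since these forms are submitted from JavaScript) and branch on it in the view, so only the form that was actually submitted gets validated and saved. Here is the dispatch idea stripped of Django so it stands alone; the `p_submit`/`c_submit` names are hypothetical:

```python
def which_form(post_data):
    """Return which of the two forms was submitted, judging by the
    marker field present in the POST body."""
    if "p_submit" in post_data:
        return "profile"   # validate and save only p_form
    if "c_submit" in post_data:
        return "cover"     # validate and save only c_form
    return None

# In the template, each form would then carry its own marker, e.g.
# <input type="hidden" name="p_submit" value="1"> inside #p_formUpload
print(which_form({"p_submit": "1", "csrfmiddlewaretoken": "..."}))  # profile
```

In the real view this means: build and validate only the form whose marker is present, leave the other one unbound, and redirect after saving.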
common-pile/stackexchange_filtered
How to open the standard Google Maps application from my application and, after choosing a location, redirect back to my application? The simple answer is (as far as I know): no, you cannot do it. You can start Google Maps with an intent, but you cannot get data back from it. If you need something like this, you need to use the Google Maps API and show maps directly in your app.

I do not have a billing account on Google Cloud Platform. Is there any possible way to do what I described, i.e. "open the standard Google Maps application from my application and after choosing a location redirect back to my application"? As I said, it cannot be done. Use the Google Maps API instead. Okay, and thank you for the response. @s_o_m_m_y_e_e. If that helped, kindly mark it as the correct answer!
common-pile/stackexchange_filtered
MSChart label inside chart area. Can somebody please tell me how I can show the Total Collection on MSChart? I got the answer in an email, which is incomplete, but I can't see it here. Why? Help please.

The mail you received is about an answer that has been deleted by its owner (I can see it because I have more than 10K points). You can use the chart.Annotations property to get a similar result. For example, with the following code (located after filling the chart): var ann = new RectangleAnnotation(); ann.Text = "Total Collection" + Environment.NewLine + "250 Billion"; ann.IsMultiline = true; ann.AxisX = this.chart1.ChartAreas[0].AxisX; ann.AxisY = this.chart1.ChartAreas[0].AxisY; ann.AnchorX = 9; // as you can see from the image below, ann.AnchorY = 41; // these values are inside the range // add the annotation to the chart annotations list this.chart1.Annotations.Add(ann); I got the following result: N.B.: there are a lot of annotation types (CalloutAnnotation, EllipseAnnotation...) and they have a lot of properties to change styles and behaviors. You can even set a property to allow annotation moving (i.e. AllowMoving=true). Have a look at the annotation properties through IntelliSense or MSDN.

@codery2k: did you add the annotation to the chart.Annotations collection? (my updated code now shows how...) Also, are you sure you have selected valid AnchorX and AnchorY values? (they must be in the visible range) Yes, I did that too, and I have selected valid X and Y values, but still no result. :(

You can set the property IsDockedInsideChartArea to true. You will also need to specify which ChartArea the legend is docked to, and set the position property to Auto. legend.IsDockedInsideChartArea = true; legend.DockedToChartArea = "ChartArea1"; There is more information about this and other legend properties here.
common-pile/stackexchange_filtered
JPA query: select only some fields of my object and then create my object. I have a JPA repository custom query: @Query("Select id, nom, code, codeComptable, typeClient from Client") List<Object> findAllWithoutForeignKey(); This query returns a java.lang.Object: [1, TEST X, GUHHR, 1566FR487, TypeClient{id=1, nom='GARAGE', actif='true', dateDerniereModif='2015-01-03'}] I don't know how to access the values of my object. I have tried everything I could think of but didn't manage to do it. Does anyone know how? Thanks.

Your JPQL query returns an Object array, just as the JPA spec says it should. So you take each row and cast it to Object[], and then you access the elements of the array to get the column values. Basic Java.
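A self-contained sketch of the cast-and-index pattern described above. The sample row is fabricated to mirror the query's select list; with Spring Data you could also declare the method as `List<Object[]>` and skip the outer cast:

```java
import java.util.ArrayList;
import java.util.List;

public class RowUnpack {
    // Pull the first column (id) out of each Object[] row.
    static List<Long> ids(List<Object> rows) {
        List<Long> out = new ArrayList<>();
        for (Object row : rows) {
            Object[] cols = (Object[]) row;  // each row is really an Object[]
            out.add((Long) cols[0]);         // positions match the select list
        }
        return out;
    }

    public static void main(String[] args) {
        List<Object> rows = new ArrayList<>();
        // simulated row of: id, nom, code, codeComptable, typeClient
        rows.add(new Object[] {1L, "TEST X", "GUHHR", "1566FR487", null});
        System.out.println(ids(rows));
    }
}
```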
common-pile/stackexchange_filtered
YQL: Return single field from yahoo.finance.quotes. I've just started working with YQL and I'm stuck with the following problem: I cannot get YQL to return a single value from the yahoo.finance.quotes table. I would like to select just the "Open" tag in "quotes". This is the query I am using: http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.historicaldata%20where%20symbol%20=%20%22AAPL%22%20and%20startDate%20=%20%222015-01-1%22%20and%20endDate%20=%20%222015-01-2%22&format=xml&diagnostics=true&env=store://datatables.org/alltableswithkeys Execute query. I tried to find a solution in the official documentation, but it doesn't work the way they explain it. Documentation. Any help is highly appreciated! Thank you in advance!

Simply select the "Open" tag instead of selecting all tags (the asterisk (*) means you want all tags). So this query: select Open from yahoo.finance.historicaldata where symbol = "AAPL" and startDate = "2015-01-1" and endDate = "2015-01-2" ...which translates to the following URL: http://query.yahooapis.com/v1/public/yql?q=select%20Open%20from%20yahoo.finance.historicaldata%20where%20symbol%20=%20%22AAPL%22%20and%20startDate%20=%20%222015-01-1%22%20and%20endDate%20=%20%222015-01-2%22&format=xml&diagnostics=true&env=store://datatables.org/alltableswithkeys ...should do exactly what you wanted.
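If it helps to see how the plain-text query maps onto the long URL, here is the encoding step with Python's standard library. The query string is the one from the answer; '=' is deliberately left unescaped to match the URLs above:

```python
from urllib.parse import quote

yql = 'select Open from yahoo.finance.historicaldata where symbol = "AAPL"'
encoded = quote(yql, safe='=')  # spaces -> %20, quotes -> %22, '=' kept as-is
print(encoded)
```

The encoded string is exactly what goes into the `q=` parameter of the request URL.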
common-pile/stackexchange_filtered
Spark context cannot resolve in MLUtils.loadLibSVMFile with IntelliJ. I am trying to run the multilayer perceptron classifier example here: https://spark.apache.org/docs/1.5.2/ml-ann.html. It seems to work well in spark-shell, but not in an IDE like IntelliJ or Eclipse. The problem comes from val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_multiclass_classification_data.txt").toDF() The IDE reports that it cannot resolve the symbol sc (SparkContext), even though the library paths are configured correctly. If anyone can help me, thanks!

Did you import the proper libraries?

Actually, there is no such value as sc by default. It is created on spark-shell startup; in any ordinary Scala/Java/Python program you have to create it manually. I recently wrote a rather low-quality answer elsewhere; you can use the part about sbt and libraries in it. Then you can use something like the following code as a template to start: import org.apache.spark.sql.SQLContext import org.apache.spark.{SparkContext, SparkConf} object Spark extends App { val config = new SparkConf().setAppName("odo").setMaster("local[2]").set("spark.driver.host", "localhost") val sc = new SparkContext(config) val sqlc = new SQLContext(sc) import sqlc.implicits._ // your code follows here } Then you can just Ctrl+Shift+F10 it.

Thank you! Your reply really helps me; I am looking at your links. It is nice to learn from you.
common-pile/stackexchange_filtered
mvc6: specify custom location for views of ViewComponents. By default MVC 6 searches for views of ViewComponents either in /Views/ControllerUsingVc/Components or in the /Views/Shared folder. Is it possible to add a custom location where it should look for them? E.g. /Views/mySharedComponents.

You can do that, but you have to follow these steps. Create a new view engine based on RazorViewEngine. Also, by default it needs a Components directory, so just use Views as the root folder. namespace WebApplication10 { public class MyViewEngine : RazorViewEngine { public MyViewEngine(IRazorPageFactory pageFactory, IViewLocationExpanderProvider viewLocationExpanderProvider, IViewLocationCache viewLocationCache) : base(pageFactory,viewLocationExpanderProvider,viewLocationCache) { } public override IEnumerable<string> ViewLocationFormats { get { List<string> existing = base.ViewLocationFormats.ToList(); existing.Add("/Views/{0}.cshtml"); return existing; } } } } Add the view engine to MVC in Startup.cs: services.AddMvc().Configure<MvcOptions>(options => { options.ViewEngines.Add(Type.GetType("WebApplication10.MyViewEngine")); }); Now I have placed my component at the following location. For example, my component name is MyFirst, so I place its view at Views/Components/MyFirst/Default.cshtml.

The ViewEngines.Add method requires an IViewEngine instance and not a System.Type?

Basically you can do this by creating a custom view engine or a custom IViewLocationExpander. But for view components in beta-6 there will always be a "Components" prefix added; look at the ViewViewComponentResult source code. And it is sad. The good news is you could create your own view component result by copying the code above and replacing only the format string used for view searching.

I know it's an old question, but it can be done much more simply...
services.Configure<RazorViewEngineOptions>(o => { o.ViewLocationFormats.Add("/views/{0}.cshtml"); }); This will result in /views/Components/ComponentName/ViewName.cshtml, since the /Components/ComponentName/ViewName is hard coded inside the ViewViewComponentResult class. You can also do the following for .NET Core services .AddMvc(options => { ... }) .AddRazorOptions(o => { o.AreaViewLocationFormats.Add("Areas/{2}/Views/SUBFOLDER/{1}/{0}.cshtml"); o.ViewLocationFormats.Add("Views/SUBFOLDER/{1}/{0}.cshtml"); }) Obviously choose AreaViewLocationFormats or ViewLocationFormats based on whether you're using Areas or not. Since view components have /Components/ComponentName/ViewName hard coded as the string formatted into {0}, this won't help.
common-pile/stackexchange_filtered
Why is my UITableView cut off in iOS 7? In iOS 6, my login table view, which consisted of two rows (Username and Password), was shown correctly. In iOS 7, the bottom row is cut off, and I don't know why or how to correct the issue. Nothing changed except for upgrading to Xcode 5 and running on the iOS 7 simulator. UPDATE: adding more images.

How are you creating the table view? How are you positioning it? You may need to show some code. I have added my cellForRowAtIndexPath method, but adding the table view and positioning is done in the storyboard. @AdamJohns then I think you should probably check your layout in the storyboard; I bet your Auto Layout constraints aren't quite what you think they are. @RonLugge how exactly do I do that? I'm not too familiar with Auto Layout. Click on the object, go to the pane on the right, and bring up the size inspector. Despite its name, if you scroll down you should see the layout you've defined. What may be going on is that you don't have a specific height and width defined, but rather a distance from other objects -- and that distance is producing 'funky' results when those objects shift. @RonLugge I added more images showing my scene and the size inspector tab. I'm not sure how to tell if I am specifying a distance from other objects? @AdamJohns you're specifying a height, a top space to superview, a leading space to superview, and a trailing space to superview. You're also specifying a center-x to the button, but that will really only center the button; you've already locked the table's position in place. What I find interesting is that it looks like your table view is underlapping the status bar, or is that an illusion in the picture above?

This is obviously some kind of issue with the grouped table view style. All I had to do was go into the storyboard scene, select the table view, then in the attributes inspector change the style from grouped to plain. It works as intended now without being cut off.
This isn't really a solution, more of a work-around. I believe the answer suggested by unspokenblabber is the actual answer. Fair enough, it ended up being the issue in my app. My point was that changing the style of the table isn't really a solution.

Try playing with the navigationBar.translucent property in your view controller. In iOS 6 it is NO by default, but YES in iOS 7. I had a similar issue and this fixed it for me. This fixed the problem for me too, but does anyone have an idea why this property set to YES would cause the bottom of a table view to be cut off? The frame of my table view is initialized with the view's frame and added to the view's subviews, and there are no other views. One reason could be a constrained height; the top y starting where the nav bar does could push the bottom down in this case.

Just check your UITableView frame in iOS 7; maybe you are running it on a 3.5-inch view and it will shrink. I'm running on 4 inch, and even if I make the frame bigger it doesn't help, unfortunately.

Looking at the provided image, I think you may be underlapping the nav bar. Or to put it another way, your nav bar is on top of the table. Though I'm not sure why that would cut off the bottom of the login information. No, the underlapping is not the issue, as I have already taken care of it successfully by using this.

I've found that simply changing from GROUPED to PLAIN table view style fixes the "underlap" issue with the section #0 header, but modifies the color of the section header views. I set the table view background color in my app. With the PLAIN style, the section header background color is messed up: it is close to the table view background color, but slightly modified. This does NOT happen if I simply switch back to GROUPED. It sounds like an iOS 7 bug or an Xcode bug. The translucent = NO fix worked in some cases.
In others, I ended up adjusting the tableView in viewDidLoad:

- (void)viewDidLoad {
    [super viewDidLoad];
    CGRect f = self.tableView.frame;
    f.origin.y += self.navigationController.navigationBar.frame.size.height;
    f.size.height -= self.navigationController.navigationBar.frame.size.height;
    self.tableView.frame = f;
}
common-pile/stackexchange_filtered
Plot bar graph using matplotlib with different dataframe shapes

I have three different data frames, shown below. The first two have shape (4,) and the last has shape (2,). How can I convert the shape of the data frame? When I try to plot all three in a bar graph, the last DF fails with "ValueError: shape mismatch: objects cannot be broadcast to a single shape". How can I plot DF3 in the same bar graph, showing "Empty" and "InValid" as 0?

DF1:
Validity
Empty                2672
InValid               581
Multiple Entries      282
Valid                5526
Name: Lifecycle, dtype: int64

DF2:
Validity
Empty                1920
InValid               471
Multiple Entries     2325
Valid               33446
Name: Lifecycle, dtype: int64

DF3:
Validity
Multiple Entries    10334
Valid               11984
Name: Lifecycle, dtype: int64

Below is my code.

glot = sample_lot_number.groupby("Validity")
vlot = sample1_lot_number.groupby("Validity")
dplot = Data_Package_Lot_Number.dplot.groupby("Validity")
ind = np.arange(4)
width = 0.15
ax = plt.subplot()
p1 = ax.bar(ind+width, glot.Lifecycle.count(), width)
p2 = ax.bar(ind, vlot.Lifecycle.count(), width)
p3 = ax.bar(ind-width, dplot.Lifecycle.count(), width)
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(("Empty","InValid","Multiple Entries","Valid"))

Use pandas.DataFrame.reindex to reindex your dataframe and set the missing entries. It'll fill with NaN, but you can change that to 0 if you wish.

@busybear gave the right answer in the comment. Your code is not runnable. If I were to make a guess, you could try the following code (note that reindex takes an index, hence df1.index):

glot = sample_lot_number.groupby("Validity")
vlot = sample1_lot_number.groupby("Validity")
dplot = Data_Package_Lot_Number.dplot.groupby("Validity")
df1 = glot.Lifecycle.count()
df2 = vlot.Lifecycle.count().reindex(df1.index)
df3 = dplot.Lifecycle.count().reindex(df1.index).fillna(0)
ind = np.arange(4)
width = 0.15
ax = plt.subplot()
p1 = ax.bar(ind+width, df1, width)
p2 = ax.bar(ind, df2, width)
p3 = ax.bar(ind-width, df3, width)
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(("Empty","InValid","Multiple Entries","Valid"))
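The reindex-and-fill idea can be sketched on its own, without the plotting code. A minimal sketch using small hand-built Series standing in for DF1 and DF3 above (not the asker's actual groupby objects):

```python
import pandas as pd

# Hypothetical counts standing in for DF1 and DF3 from the question
df1 = pd.Series({"Empty": 2672, "InValid": 581,
                 "Multiple Entries": 282, "Valid": 5526}, name="Lifecycle")
df3 = pd.Series({"Multiple Entries": 10334, "Valid": 11984}, name="Lifecycle")

# Align df3 to df1's index: missing categories become NaN, then 0,
# so all three series share shape (4,) and can be plotted side by side
df3_aligned = df3.reindex(df1.index).fillna(0)
print(df3_aligned)
```

Once every series shares the same 4-entry index, the `ind + width` bar offsets in the question broadcast without the shape-mismatch error.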
How to get a WhatsApp share link working in a mobile app?

How do you use an HTML link to open WhatsApp from a mobile app? This link works on a website.

Example link: <a href="whatsapp://send?text=https%3A%2F%2Fstackoverflow.com%2Fquestions%2F51533526%2Fhow-to-use-html-link-into-whatsapp-mobile-app%2F51533716%3Fnoredirect%3D1">Whatsapp Share </a>

This link is failing in the WhatsApp mobile app. This is the error message: ERR_UNKNOWN_URL_SCHEME (the same error occurs when clicking a phone number in the app). I want to solve this problem on the front-end side.

Do you want to share a text or an image, or just want to open WhatsApp?

I want to share text. Clicking the link in the app should open WhatsApp.

The code below directly shares the text to the WhatsApp app using an Intent:

Intent textIntent = new Intent(Intent.ACTION_SEND);
textIntent.setType("text/plain");
textIntent.setPackage("com.whatsapp");
textIntent.putExtra(Intent.EXTRA_TEXT, "Your text here");
startActivity(textIntent);

Is the solution to this problem on the back-end (Android) side?

@korkut No, you have to put this code in the Android app.
PayPal button won't appear in Magento despite config being correct

I followed the Magento configuration instructions to the T for how to set up PayPal in Magento, but despite all of that, PayPal still doesn't appear as a payment option to users. Is there something that needs to be configured in PayPal Manager to make it appear? Or is there something that could be messing up the checkout option in my theme somewhere? Any help on this would be greatly appreciated.

Are you using a custom template? If so, try renaming your theme folder under app/skin so that it falls back to the default Magento theme. This will show whether or not it's a theme-related issue. Also make sure you clear all your cache by going to System > Cache Management.
How to get the string after the last '/' character using a regular expression

I would like to fetch the last part of the string after the / character, which in our example is 'PAYLINK STALE CHECK ENTRI'. Also, please note that 'PAYLINK STALE CHECK ENTRI' is not a static string.

Example:
:61:1511171116CR00,10NMSC566666666/15139333333333333/CTC/MSC/PAYLINK STALE CHECK ENTRI

The output should be PAYLINK STALE CHECK ENTRI

You don't need a regex to do this; just use lastIndexOf to find the last / and substr to get the substring after it.

var str = '103150800130001/CTC/MSC/PAYLINK STALE CHECK ENTRI';
console.log(str.substr(str.lastIndexOf('/')+1));

However, if you prefer a regex-based solution, this would work to strip off everything up to and including the last /:

var str = '103150800130001/CTC/MSC/PAYLINK STALE CHECK ENTRI';
console.log(str.replace(/.*\//, ''));
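The same two approaches carry over to other languages. A quick sketch in Python (not part of the original JavaScript answer), where the greedy `.*` in the regex plays the same role as in the `replace` call above:

```python
import re

s = '103150800130001/CTC/MSC/PAYLINK STALE CHECK ENTRI'

# Equivalent of lastIndexOf + substr: split once from the right
tail = s.rsplit('/', 1)[-1]

# Regex equivalent: the greedy .* consumes everything up to the last '/'
tail_re = re.sub(r'.*/', '', s)

print(tail)
```

Both give the final segment; the regex version works precisely because `.*` is greedy and therefore swallows all earlier slashes.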
Can some components of a metric be Finslerian while the others are Riemannian?

A Finsler metric reduces to a Riemann metric in case it loses its dependence on velocities. Now, my question is this: can we have a Finsler metric in which some components of the metric have velocity dependence and some do not?

These are the components of a general Finsler metric tensor: $g_{\mu \nu}(x^\lambda, \dot{x}^\lambda)$, where $\dot{x}^\lambda = \frac{dx^\lambda}{d\tau}$ are the velocities and $\tau$ is a scalar parameter ($\mu$, $\nu = 0, 1, 2, 3$). Suppose I know that some specific components of the metric, say $g_{ij}$ ($i,j=1,2,3$), do not depend on $\dot{x}^\lambda$. Can I conclude that the other components, $g_{00}$ and $g_{0i}$, must also not depend on $\dot{x}^\lambda$, so that the metric becomes totally Riemannian? Or can those components depend on $\dot{x}^\lambda$ regardless of the components $g_{ij}$?

Comments to the question (v2):

1. A direct product $M\times N$ of two Finsler manifolds $M$ and $N$ is again a Finsler manifold by adding the Finsler functions $F_M+F_N$.
2. Comment #1 is in particular true if $N$ is a Riemannian manifold.

Thanks Qmechanic. Could you be more detailed about how that answers my question? I'm thinking about distinct components of the metric. My question actually is: if I know that some components of the metric do not have velocity dependence, can I conclude that the metric is totally Riemannian, or can it still be Finslerian?

Comments to the question (v2): Could you please clarify your definition of velocity in the context of a Finsler manifold? Are you restricting to just Randers manifolds?

I have edited the question.
When should I start planning for the next version?

My company is setting dates for planning the version on the first day of the version. To me that seems waaaay too late. I think it should be at least a couple of weeks before that. When should we start the planning process? Is there a standard for this process and how it should be handled?

Planning Within the Project Schedule

There's nothing wrong with putting most of your planning processes inside your project schedule. In fact, planning is a necessary part of most work packages, so scheduling your planning activities within the overall project plan can make a lot of sense.

Planning Lead Time Can Vary

As for when you should start planning, I don't believe there's a canonical answer to this; a lot depends on your chosen project management framework. For example, frameworks that require a lot of up-front planning and design (e.g. WaterFailure™) will likely require a great deal more lead time for planning than agile frameworks (e.g. Scrum or Kanban) that make use of "just enough" or "just-in-time" planning at the start of each iteration or milestone.

There is no standard process for this, only recommendations: it is a good practice to start planning as early as possible, as long as it is clear to everybody that a plan is not a commitment.

If I were you, I would follow the 1/3 rule: plan the first 1/3 in great detail at the task level, plan the second 1/3 at the user story level, and finally plan the last 1/3 at the epic level. With this approach you can get an idea of when you might finish the project and what your next steps are. With this information you can go to your boss or supervisor, see whether this is fine with him or her, and talk about risks. If you can do this before the start, you can eliminate problems before even officially starting the project.
After several days or weeks, you do your planning again using the information from the past days/weeks, but this time user stories might become tasks, and epics might become user stories, etc.
Will JIRA clash with Plesk on install?

I have a dedicated hosting account that is managed using Plesk (I am not very comfortable with the Linux command line; still learning). Plesk is fine, as it's easy to use for managing different web spaces. I want to know: if I install JIRA on the same server, will I run into any issues with Plesk? I believe the ports used for Plesk and JIRA are not the same (8080 for JIRA).

Are there any good walkthroughs for doing so? Any recommendations on the install process? Eventually, I want JIRA to be accessed via the subdomain URL j.domain.com and not xx.xxx.xx.xx:xxxx. How could I set this up? Thanks a lot!

What OS are you using?

Yes, JIRA can be installed on a Plesk server without any issues. There is a port clash only with Tomcat, so if you have it installed, the JIRA installer suggests you choose another port.

# wget https://www.atlassian.com/software/jira/downloads/binary/atlassian-jira-software-7.2.4-x64.bin
# chmod +x atlassian-jira-software-7.2.4-x64.bin
# ./atlassian-jira-software-7.2.4-x64.bin

It's better to use a MySQL database if you don't have experience with PostgreSQL. The JIRA installer always silently falls back to the built-in H2 file database in case of an issue with the provided DB settings. You have to create a domain, database, and mail user in Plesk.

You have to enable proxy_http in Tools & Settings > Apache Web Server and create the file /httpdocs/.htaccess in the j.domain.com domain to redirect requests to JIRA:

RewriteEngine on
RewriteRule ^(.*) http://<IP_ADDRESS>:8080/$1 [P,L]

SMTP settings:

Troubleshooting: Logs are placed here: /opt/atlassian/jira/logs/catalina.out

Thanks. I noticed that when using your method, JIRA has a difficult time locating all the images and such (my sidebar is empty and my icons too). If I switch the URL from the domain to the specific port, this goes away. I'm curious if there is a way to fix this.
@user1712804 try replacing the host <IP_ADDRESS> in the .htaccess file with j.domain.com, but pay attention that j.domain.com should be resolvable from the server, or you'll get a 502 Bad Gateway.

Instead of using .htaccess you can use the "Additional Apache directives" in Plesk for your domain (under "Apache & nginx Settings") and configure the proxying. See here for details: link. Though you still have to enable "proxy_http" in Plesk, of course.
How should we support multilingual answers?

We have long supported questions about claims made in other languages and answers referencing sources written in other languages, but the body of the question and answer needs to be in English to be widely understood by our users.

A recent question was about the authenticity of some French text. One of the answerers wrote their answer in both an English and a French version. I appreciate that. Many of the people who have this question are likely to be French speakers, and it makes a lot of sense to address them in their native tongue. However, both the English and French versions needed an edit (especially the French version), and my French-language skills are limited to making my French-speaking friends cringe. I reluctantly deleted the French version during the edit. It felt parochial to do so.

Is there a better way of handling this? Are there any languages where we have sufficient active, trusted users to be able to allow multilingual answers?

Well, about the language: Hindi (an Indian language). Indians are taught multiple languages starting from play school (kids of age 3). Despite having Hindi as our mother tongue, we have more English speakers than the US, Canada, and Mexico combined. As for trusted and active users, you can always find some exceptions.
T-SQL code is extremely slow when saved as an inline table-valued function

I can't seem to figure out why SQL Server is taking a completely different execution plan when wrapping my code in an ITVF. When running the code inside the ITVF on its own, the query runs in 5 seconds. If I save it as an ITVF, it will run for 20 minutes and not yield a result. I'd prefer to have this in an ITVF for code reuse. Any ideas why saving code as an ITVF would cause severe performance issues?

CREATE FUNCTION myfunction
(
    @start_date date,
    @stop_date date
)
RETURNS TABLE
AS
RETURN
(
    with ad as (
        select [START_DATE]
              ,[STOP_DATE]
              ,ID
              ,NAME
              ,'domain1\' + lower(DOMAIN1_NAME) collate database_default as ad_name
        from EMP_INFO
        where DOMAIN1_NAME != ''
        union
        select [START_DATE]
              ,[STOP_DATE]
              ,ID
              ,NAME
              ,'domain2\' + lower(DOMAIN2_NAME) collate database_default as ad_name
        from EMP_INFO
        where DOMAIN2_NAME != ''
    )
    select ad.ID
          ,ad.NAME
          ,COUNT(*) as MONITORS
    from scores
    join users on (scores.evaluator_id = users.[user_id])
    join ad on (lower(users.auth_login) = ad.ad_name
                and scores.[start_date] between ad.[START_DATE] and ad.[STOP_DATE])
    where scores.[start_date] between @start_date and @stop_date
    group by ad.ID
            ,ad.NAME
)

EDIT: OK... I think I figured out the problem, but I don't understand it. Possibly I should post an entirely new question; let me know what you think. The issue is that when I call the function with literals, it is REALLY slow; when I call it with variables, it is fast.

-- Executes in about 3 seconds
declare @start_date date = '2012-03-01';
declare @stop_date date = '2012-03-31';
select * from myfunction(@start_date, @stop_date);

-- Takes forever! Never completes execution...
select * from myfunction('2012-03-01', '2012-03-31')

Any ideas?

For starters, lower(users.auth_login) = ad.ad_name is non-SARGable.

Nice! Wasn't familiar with SARGability, good info. I have removed the lower() function and I'm still having the same issue.
In fact, I've separated the logic into a view, removed the aggregate portion, and I can do the aggregate work on the view. But once again, if I wrap that view in a function, I get the same problem...

Can you post the inline & non-inline execution plans?

Will the estimated plans do? I'm thinking the ITVF will take hours. When I did the estimated plans last time, I got a "missing index" on the ITVF and good results on the raw code. I will post them in a moment.

Strange, I just recreated the function to test it and it executes in 4 seconds! Maybe one of the DBAs altered something; it's very difficult to identify the issue now!

I've identified that using literals instead of variables slows the query way down and have updated the question to reflect that. Any ideas?

When you use literals, SQL Server can look at the column statistics to estimate how many rows will be returned and choose an appropriate plan based on that assumption. When you use variables, the values are not known at compile time, so it falls back on guesses. If the plan is better when it guesses than when it refers to the actual statistics, this indicates the statistics likely need updating. If you have auto-update of statistics turned on, then you may well be hitting the issue described in "Statistics, row estimations and the ascending date column".

Wow, never would have guessed. Good info! Will upvote as soon as I get the reputation!
Ubuntu DVD for Africa

I live in Tanzania, Africa, and have been promoting Ubuntu for use by NGOs, schools, and friends. So far I've got a lot of positive feedback, but I do have one huge problem: internet here is both slow and very costly (up to $1,200 per month). So what I am looking for is a current release of Ubuntu which includes all the updates and popular applications already on it, like GIMP, VLC, Ubuntu restricted files, Scribus, Inkscape, VirtualBox, Chrome, Audacity, Compiz, Blender, Ubuntu Tweak, GnuCash, Skype, etc. Basically all the 4-5 star applications. Does a hybrid DVD exist? I found links to one called Ubuntu Install Box 11.04, but I'm not sure if they are truly legitimate. Any help would be greatly appreciated. Brendon

Making your own CD/DVD gets you into trouble: it -will- still download the files from the web using your internet line. Downloading the complete DVD will be quicker than creating your own version, plus you can burn the DVD and share it with more Tanzanians ;-)

The 32-bit DVD download (3.9 GB) and 64-bit DVD download (3.9 GB) are the DVD downloads. This listing shows the content of the DVD and should have all the packages you need.

If you want to make your own CD/DVD with all the packages specifically mentioned in your question, you need to make it yourself. Luckily for you, I posted an answer on that: How to customize the Ubuntu Live CD? Being in Africa, you might want to try the official Africa mirror.

Good guide, but I think it might be above my understanding. Do you know of any premade custom installation DVDs that fit the bill?

No, and I do not think you'll find any either. Unless you make it yourself... I've heard of some releases called "Ubuntu remix", but I can only find a Swedish version of it, which is a bit pointless here.

You can buy an Ubuntu installer and repository disc. I recommend you buy the Ubuntu 10.04 installer disc at the Canonical online shop and the Ubuntu 10.04 repository disc at on-disk.
Hope this helps.

Having the repository discs would be good, but really I'd like to download or make a custom DVD with the above-mentioned applications and Ubuntu 11.04.

You can use Remastersys. It is a backup tool that enables you to create your own live CD of the current system you are using. It is very simple to use; you can find the repository on the site. Once you've made your custom CD or DVD, you can copy it and distribute it as you like.

You can also use custom distributions based on Ubuntu, provided by enthusiasts from around the world, of which, in my humble opinion, the one that best fits all your needs is the Israel Remix Team distro. A pre-made custom distribution will include all these applications and a few more. I know people will downvote this, but maybe a pre-made custom distro is the best answer for this question. Nevertheless, there are lots of custom distros that can be seen on DistroWatch. Take a look at those, of which I strongly suggest the Israel Remix Team, based on Ubuntu. Good luck!
Is there a more or less automatic way to create this kind of 3D metallic text effect, or does it have to be digitally painted by hand? I'm looking for a way to turn an ordinary flat font into a 3D metallic text, kind of like the Game of Thrones logo here, or the Skyrim text here? Or this logo here. And maybe even any hand drawn shape, not just text? It doesn't need to have these additional effects like scratches and imperfections, just the 3D metallic stuff. It would be especially helpful if it was possible to do in some kind of free software (GIMP, Krita, Inkscape, etc).
Android login with CheckBox

This is my first application in Android Studio, and I want to do this:

Description:
username: username (TextBox)
password: password (TextBox)
Keep me logged in --> CheckBox
LOGIN --> BUTTON

The first time, the user enters their username and password and checks "Keep me logged in"; the application must then remember the username and password so the user doesn't have to type them again the next time. Could anyone give me some idea how to implement this in Android? I found many examples but nothing worked.

I did something very similar to this recently.

First, make sure to initialize your fields in question:

AutoCompleteTextView mEmailView = (AutoCompleteTextView) findViewById(R.id.email);
EditText mPasswordView = (EditText) findViewById(R.id.password);
Button mEmailSignInButton = (Button) findViewById(R.id.email_sign_in_button);

Set your onClick event:

mEmailSignInButton.setOnClickListener(new OnClickListener() {
    @Override
    public void onClick(View view) {
        attemptLogin();
    }
});

In order for the user's credentials to be remembered, you have to save them somewhere.
I stored them in shared preferences, but you can use a more secure method depending on your application, or even encrypt them before storing: http://developer.android.com/guide/topics/data/data-storage.html#pref

// Initialize the shared preferences, set in private mode
SharedPreferences sharedPref = getSharedPreferences("userDetails", MODE_PRIVATE);
SharedPreferences.Editor editor = sharedPref.edit();
// Put the strings in the editor
editor.putString(getString(R.string.userAccount), email);
editor.putString(getString(R.string.password), password);
// Persist the values (without this the edits are never written)
editor.apply();

Now in onCreate or your other favorite Android start method (depending on whether you're using a fragment or an activity), retrieve them:

// Initialize the shared preferences, set in private mode
SharedPreferences sharedPref = getSharedPreferences("userDetails", MODE_PRIVATE);
// Retrieve the values
sharedPref.getString(getString(R.string.userAccount), "")
sharedPref.getString(getString(R.string.password), "")

You could use SharedPreferences to remember the user preference. Here it is a boolean.

SharedPreferences prefs = getDefaultSharedPreferences(this);
boolean keepLoggedIn = prefs.getBoolean(KEY_KEEP_LOGGED_IN, false);

// After the user makes the selection on the checkbox:
SharedPreferences.Editor editor = getDefaultSharedPreferences(this).edit();
editor.putBoolean(KEY_KEEP_LOGGED_IN, true);
editor.commit();
glOrtho not working

I have a problem using glOrtho in a program that uses the glmDraw() function of the GLM library to draw Google SketchUp 3D images. I wanted to see the image only for certain values of z in projection mode, and glOrtho() didn't seem to work, so I wrote the following code to test it:

glOrtho(0.0f, 2.0f, 0.0f, 2.0f, 0.0f, 0.0f);

Since the near and far planes are the same, I thought I should see no image, but I see the whole image. What am I missing?

If you call glOrtho with znear = zfar, it generates a GL_INVALID_VALUE error and probably just discards the call. http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml Try giving it a range greater than zero.
`zeroinfl` from the `pscl` and `countreg` packages give very different results. Why?

I am experimenting with the mdvis dataset from the COUNT package of R for a teaching purpose. I fitted a zero-inflated negative-binomial model using the zeroinfl function from the pscl and countreg packages. However, the results of zeroinfl from the pscl package and from the countreg package differ a lot. The models and the outputs are provided below.

zeroinfl from pscl:

mdvisit_zeroinf <- pscl::zeroinfl(numvisit ~ reform + badh + agegrp + educ + loginc | reform + badh + agegrp + educ + loginc,
                                  dist = "negbin", data = mdvis,
                                  control = zeroinfl.control(method = "BFGS", EM = F))
summary(mdvisit_zeroinf)

Call:
pscl::zeroinfl(formula = numvisit ~ reform + badh + agegrp + educ + loginc | reform + badh + agegrp + educ + loginc, data = mdvis, dist = "negbin")

Pearson residuals:
    Min      1Q  Median      3Q     Max
-0.9667 -0.8037 -0.3597  0.3154  9.4878

Count model coefficients (negbin with log link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.30316    0.56565  -0.536   0.5920
reform      -0.09916    0.05543  -1.789   0.0736 .
badh         1.11555    0.07517  14.841   <2e-16 ***
agegrp2      0.11376    0.06990   1.628   0.1036
agegrp3      0.21126    0.08620   2.451   0.0143 *
educ2        0.07605    0.07209   1.055   0.2914
educ3       -0.03979    0.07295  -0.545   0.5854
loginc       0.13209    0.07499   1.761   0.0782 .
Log(theta)   0.04747    0.05815   0.816   0.4144

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -30.7110   772.2930  -0.040    0.968
reform       14.3274   763.7283   0.019    0.985
badh         -1.9503     3.5392  -0.551    0.582
agegrp2      10.9236   113.6292   0.096    0.923
agegrp3       9.7723   113.6558   0.086    0.931
educ2        -0.6632     1.9878  -0.334    0.739
educ3        -0.8743     1.3501  -0.648    0.517
loginc        0.5437     1.9718   0.276    0.783
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.'
0.1 ' ' 1

Theta = 1.0486
Number of iterations in BFGS optimization: 106
Log-likelihood: -4557 on 17 Df

zeroinfl from countreg:

mdvisit_zeroinf2 <- countreg::zeroinfl(numvisit ~ reform + badh + agegrp + educ + loginc | reform + badh + agegrp + educ + loginc,
                                       dist = "negbin", data = mdvis,
                                       control = zeroinfl.control(method = "BFGS", EM = F))
summary(mdvisit_zeroinf2)

Call:
countreg::zeroinfl(formula = numvisit ~ reform + badh + agegrp + educ + loginc | reform + badh + agegrp + educ + loginc, data = mdvis, dist = "negbin", control = zeroinfl.control(method = "BFGS", EM = F))

Pearson residuals:
    Min      1Q  Median      3Q     Max
-0.9523 -0.8057 -0.3615  0.3106  9.6300

Count model coefficients (negbin with log link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.44473    0.53752  -0.827  0.40803
reform      -0.12962    0.05106  -2.538  0.01113 *
badh         1.13558    0.07413  15.319  < 2e-16 ***
agegrp2      0.05851    0.06359   0.920  0.35756
agegrp3      0.18678    0.07176   2.603  0.00925 **
educ2        0.06398    0.06621   0.966  0.33385
educ3       -0.04825    0.07087  -0.681  0.49599
loginc       0.15338    0.07056   2.174  0.02973 *
Log(theta)   0.01232    0.04778   0.258  0.79652

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -119.137     95.259  -1.251   0.2111
reform         4.821      2.646   1.822   0.0684 .
badh         -21.461   4614.989  -0.005   0.9963
agegrp2        9.261     28.341   0.327   0.7438
agegrp3        3.502     27.984   0.125   0.9004
educ2        -19.790    203.802  -0.097   0.9226
educ3         -7.894      4.758  -1.659   0.0971 .
loginc        13.010     10.463   1.243   0.2137
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Theta = 1.0124
Number of iterations in BFGS optimization: 197
Log-likelihood: -4554 on 17 Df

What could be the reason for such a difference? Results from Stata closely resemble the results of zeroinfl from the pscl package (especially the results for the count part of the model). I fitted the models in exactly the same way, specifying the same options using zeroinfl.control().
I also tried to search online to see if others have reported similar issues previously and if answers are already available. I searched for answers on Stack Overflow and CV as well, but couldn't find any.

The one thing that pops out is that there is clearly complete separation in the zero-inflation component (any coefficient with absolute value greater than 10 is a red flag here). I don't know whether or why that should mess up the estimation of the count component of the model, but it seems reasonable that it could.

@BenBolker I explored the data and I suspect that there may be complete separation due to the variable loginc, which mostly contains unique values for each case. Cross-tabulation shows that for each unique value of loginc, the count is either zero or non-zero. But I wonder how the same function, just from two packages, ends up with different results for both the count and binary parts of the model.

I email the devs when these types of issues arise. I have always received a response from devs I've contacted that answered my question. Devs are generally quite helpful on that front.

@LeroyTyrone Okay. Will try that too.

Thanks for pointing me to this; I have added an answer now.

Additional remark: The functions in both packages were originally written by me, and countreg is maintained by myself, still containing my original implementation. Some years ago the pscl maintainer integrated a patch into zeroinfl() that can sometimes lead to slightly better convergence, but sometimes not (like here). Hence the results can differ a bit. But I haven't seen cases where the differences are really practically relevant (including your example). Here, they just look different but are actually not that different.

Source of the problem: The problem is that there is no zero inflation but actually fewer zeros than expected from a negative binomial model, especially in some subgroups with respect to reform and agegrp.
Symptoms: Due to the lack of zero inflation, the binary zero-inflation part tries as hard as it can to produce predicted zero-inflation probabilities that are very close to zero for certain subgroups. Notice especially the intercept in that part of the model, which is extremely small with a large standard error. Overall this looks similar to quasi-complete separation in binary regression models. The likelihood becomes very flat, and it depends on the settings of the optimizer where exactly it stops (and these settings differ between pscl and countreg). As the two components of the zero-inflation model (count vs. binary part) cannot be estimated separately, problems in the estimation of one component can lead to problems/differences in the estimation of the other component as well.

Alternative: The hurdle model has several advantages here: (1) It can deal not only with zero inflation but also with a lack of zeros. (2) The two components of the model can be estimated separately, and hence problems cannot spill over.

Illustration: Let's compare the basic negative binomial model with its zero-inflation and hurdle counterparts:

data("mdvis", package = "COUNT")
mdvis <- transform(mdvis,
  reform = factor(reform),
  badh = factor(badh),
  agegrp = factor(agegrp),
  educ = factor(educ)
)

library("countreg")
f <- numvisit ~ reform + badh + agegrp + educ + loginc
m <- glm.nb(f, data = mdvis)
m2 <- zeroinfl(f, dist = "negbin", data = mdvis)
m3 <- hurdle(f, dist = "negbin", data = mdvis)

In terms of the log-likelihood, the zero-inflation model only improves very slightly on the basic model despite needing almost twice as many parameters. The hurdle model improves a bit more, but also not much:

c(logLik(m), logLik(m2), logLik(m3))
## [1] -4560.910 -4554.146 -4550.520

In terms of both AIC and BIC, the zero-inflation model is the worst of these three models. AIC slightly prefers the hurdle model, while BIC prefers the basic model somewhat more clearly.
AIC(m, m2, m3)
##    df      AIC
## m   9 9139.819
## m2 17 9142.293
## m3 17 9135.040

BIC(m, m2, m3)
##    df      BIC
## m   9 9191.195
## m2 17 9239.336
## m3 17 9232.083

Looking at the rootogram as a diagnostic display for the basic model, we see that overall there are actually fewer observed zeros than expected from a negative binomial model. But by and large, the basic model already fits reasonably well.

rootogram(m)

(Note: The version of the rootogram with confidence limits is actually produced by another package under development. The version in countreg does not have these confidence limits.)

Thank you so much. That excellently answers my question, clears my doubt, and gives me new insight into the data and methods.
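The AIC/BIC comparison in the answer above follows directly from the reported log-likelihoods and parameter counts via AIC = 2k - 2*logLik and BIC = k*ln(n) - 2*logLik. A quick sketch in Python (not part of the original R answer; n = 2227 is the sample size inferred from the reported AIC/BIC gap) reproducing those tables:

```python
import math

# Log-likelihoods and parameter counts (df) as reported in the answer
models = {"m": (-4560.910, 9), "m2": (-4554.146, 17), "m3": (-4550.520, 17)}
n = 2227  # assumed number of observations in mdvis, inferred from BIC - AIC

for name, (ll, k) in models.items():
    aic = 2 * k - 2 * ll             # penalize each parameter by 2
    bic = k * math.log(n) - 2 * ll   # penalize each parameter by ln(n)
    print(f"{name}: AIC={aic:.3f} BIC={bic:.3f}")
```

Because ln(2227) is about 7.7, BIC penalizes the 8 extra parameters of the zero-inflation and hurdle models far more heavily than AIC does, which is exactly why BIC favors the basic model while AIC narrowly favors the hurdle model.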
What does this C pointer code produce? [ulong pointer -> uint]

Hey peeps, I am currently working on porting an old C library to C#, but I'm having a bit of trouble understanding a certain piece of code containing pointers. I'm not the best when it comes to C, so I might be lacking some understanding. Here's a simplified version of what I'm looking at:

unsigned int block[2];

// --- inside a function
unsigned int *block;  // only got access to this pointer, not the array
unsigned long left, right;

// some math

*block++ = right;  // these lines are the important bit I don't quite get
*block = left;

Now... what I think I've got so far is:

The first line...
- dereferences the pointer
- sets its value to right
- steps the pointer forward by 1

And the second line...
- dereferences the pointer
- sets its value to left

What I have trouble wrapping my head around is how the end result (block[]) looks. (Sadly I can't just debug it and take a peek, because I don't really know how I'd do that with a lib binary...) It would be fairly simple if left and right were uints as well, but they are both ulongs, so there's probably some sort of overwriting going on, right? I'm a little confused / lost on this one... Maybe some of you with better C knowledge can help me out ^^

Is block ever initialized inside the function?

Hey @sj95126, I should probably have mentioned that, hehe: block is initialised before it's handed to the function. @Barmar already gave me the answer I was looking for, but still, thanks for answering! ^^

I meant where you refer to unsigned int *block as "inside a function": in that context, is block a parameter to the function, or a local variable? Just making sure the pointer has a valid value, or dereferencing it would be undefined behavior that might "happen" to work. But if Barmar has solved your problem, great!

This is basically doing:

block[0] = (unsigned int) right;
block[1] = (unsigned int) left;

And casting unsigned long to unsigned int simply discards the excess high-order bits.
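The "discards the excess high-order bits" behavior from the answer above is just reduction modulo 2^32. A quick sketch in Python rather than C, with made-up 64-bit values standing in for the ulongs:

```python
# Hypothetical 64-bit values standing in for the ulongs left/right
right = 0x1122334455667788
left = 0x99AABBCCDDEEFF00

# Casting unsigned long -> unsigned int keeps only the low 32 bits,
# i.e. it reduces the value modulo 2**32
block = [right & 0xFFFFFFFF, left & 0xFFFFFFFF]

print([hex(b) for b in block])  # only the low halves survive
```

So after `*block++ = right; *block = left;`, each array slot holds the low 32 bits of the corresponding 64-bit value; the high halves are simply lost.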
Any way to change specific data in a column, e.g. Mr to Dr, which contains a name I am trying to change the titles of 'doctors' in a database and was just wondering whether there is a SQL query which I could run to change them. The column I'm trying to change: What I am asking is whether there is any way I can update the column to add a 'Dr' in front of the names, replacing the 'Miss', 'Mr', etc. I'm thinking about using a SQL statement containing the wildcard function to update it, but I'm not sure it would change the specifics. Thanks, Karl try this

    Update myTable set name = replace(replace(name,'Miss ','Dr '),'Mr ','Dr ')

Use the REPLACE option

    UPDATE table_name
    SET column_name = REPLACE(column_name, 'Mr.', 'Dr.')
    WHERE column_name LIKE 'Mr.%'

I might suggest doing:

    update t set col = stuff(col, 1, charindex(' ', col + ' '), 'Dr');

This only replaces the first word in the string. You might want to be extra careful and add a where clause.

    update t set col = stuff(col, 1, charindex(' ', col + ' '), 'Dr')
    where col like 'Miss %' or col like 'Mr %' or col like 'Mrs %' or col like 'Ms %';

The problem with replace() is that it replaces all occurrences in the string. Although the honorifics are unlikely to be in a name, you could have names like "Missy Elliott". That's very useful - thank you Gordon. It makes a lot of sense!
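The difference between a blanket replace and the prefix-only STUFF/CHARINDEX approach can be illustrated outside SQL; this Python sketch (with made-up names) mirrors the "replace only the leading word" idea:

```python
# Why replacing only the leading honorific is safer than a blanket
# substring replace: a substring match can hit text inside a name.

def retitle(name):
    """Replace a leading honorific with 'Dr' (prefix-only, like STUFF/CHARINDEX)."""
    titles = ("Miss ", "Mrs ", "Ms ", "Mr ")
    for t in titles:
        if name.startswith(t):
            return "Dr " + name[len(t):]
    return name

print(retitle("Mr John Smith"))   # Dr John Smith
print(retitle("Missy Elliott"))   # Missy Elliott  (left alone)

# A blanket replace keys on substrings, not on a leading word:
print("Missy Elliott".replace("Miss", "Dr"))  # 'Dry Elliott' -- the pitfall
```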
Is there a way to slice a Google doc into multiple PDFs? I would like to replicate in Google Apps Script some VBA code that I wrote for Word docs. Basically, it "slices" a document into multiple PDF files by searching for tags that I insert into the document. The purpose is to allow choirs using forScore (an app that manages musical scores) to insert previously annotated musical scores at the slice points. That way, during a performance they can page through a musical program in its entirety from start to finish. This is what I have so far. I know it's sketchy, and it doesn't actually work as is:

    // Loop through body elements searching for the slice tags
    // Slice tag example: #LCH-Introit (Seven Seasonal-God is Ascended)#
    // When a slice tag is found a document is created with a subsection of the document
    // NOT SURE USING RANGES IS WHAT'S NEEDED
    rangeBuilder = doc.newRange();
    for (i = 0; i < body.getNumChildren(); i++) {
      child = body.getChild(i);
      rangeBuilder.addElement(child);
      // Logger.log(fName + ": element(" + i + ") type = " + child.getType());
      if (child.getType() === DocumentApp.ElementType.PARAGRAPH) {
        foundElement = child.findText('#LCH-');
        if (foundElement) {
          txt = foundElement.getElement().getParent().editAsText().getText();
          // Generate PDF file name
          count++;
          padNum = padDigits(count, 2); // Function I wrote to pad zeroes
          pdfName = serviceName + '-' + padNum + ' ' +
              txt.substr(txt.indexOf('-') + 1, (txt.length - txt.indexOf('-') - 2));
          Logger.log(fName + ": pdfName = " + pdfName);
          // Create new doc and populate with current Elements
          // Here's where I'm stuck.
          // There doesn't seem to be a way to reference page numbers in Google Apps Script.
          // That would make things much easier.
          // docTemp = DocumentApp.create(pdfName);
        }
      }
    }

Can you explain more about your script? From your script, I couldn't understand in what way it doesn't actually work. Sorry, that wasn't very clear. This is just a code snippet.
For example, I left out the 'doc' and 'body' variable definitions and how they derived their values. (I wasn't sure how much detail was required.) As it stands, the name of the eventual file (pdfName) has been generated, but that's it. Would it be better if I re-did the code? If I understood correctly, does each new PDF have only the paragraph where the word was found? Thank you for replying. Unfortunately, from your script, I still cannot understand the output and input you want. I deeply apologize for my poor English skill. OK, I've really done a poor job at explaining what I'd like to do. Too many details are left out to make it clear. A problem for me is that I'm not sure what Google Apps Script code would work in this situation. @Jescanellas: Each PDF slice would have all the pages in the document since the last slice. So let's say I find the first slice tag on page 5. I would create a PDF of pages 1-5. If the next slice was found at page 9, then I would create a PDF of pages 6-9, and so on. Perhaps it would be best to provide pseudo code and ask how it could be implemented in Google Apps Script? Thank you for replying. I noticed that an answer has already been posted. I think that it will resolve your issue. I think I have the solution for this. The next script will search for a certain word in the text, and every time it finds it, it will create a new Doc in your Drive account with the text between each occurrence. Keep in mind that this word is also added to the text, so if you want to remove it, I will update the answer with the code for that. As I wasn't sure if you wanted to download the PDFs or keep them in Drive, it will create links to download those files in PDF format. You can find them in View > Logs after the execution finishes.
    function main() {
      var doc = DocumentApp.getActiveDocument();
      var body = doc.getBody();
      var listItems = body.getListItems();
      var docsID = [];
      var content = [];
      for (var i = 0; i < listItems.length; i++) { // Iterates over all the items of the file
        content[i] = listItems[i].getText(); // Saves the text into the array we will use to fill the new doc
        if (listItems[i].findText("TextToFind")) {
          createDoc(content, i, docsID);
          content = []; // We empty the array to start over
        }
      }
      linksToPDF(docsID);
    }

    function createDoc(content, index, docsID) {
      var newdoc = DocumentApp.create("Doc number " + index); // Creates a new doc for every segment of text
      var id = newdoc.getId();
      docsID.push(id);
      var newbody = DocumentApp.openById(id).getBody();
      for (i in content) {
        newbody.appendListItem(content[i]); // Fills the doc with text
      }
    }

    function linksToPDF(docsID) {
      // Logger.log("PDF Links: ");
      for (i in docsID) {
        var file = DriveApp.getFileById(docsID[i]);
        Logger.log("https://docs.google.com/document/d/" + docsID[i] + "/export?format=pdf&");
      }
    }

In case you have too many files and want to download the PDFs automatically, you should deploy the script as a WebApp. If you need help with that, I will update my code too. Thanks for your reply. For the document I use to test, there don't appear to be any ListItems (listItems.length == 0). I've verified that there is content in the document (body.getNumChildren() = 167). Of the children, all are of type PARAGRAPH except two UNSUPPORTED. I was able to create PDF files, but they are basically text strings, not the document with all of its components up to the search string. What I'm looking for is somehow being able to determine what page the search string is on and then create a PDF from that. So if strings were found on pages 4, 9, and 20, I want to create PDF files of pages 1-4, 5-9, and 10-20.
I'm now wondering if it's better to export a PDF file of the entire document -- which I know how to do -- and then find a JavaScript solution for searching the PDF file instead. I see, the problem is that the Docs API doesn't work with page numbers; in the documentation this appears as an UnsupportedElement class, which means it can't be affected or returned by any script. The good news is that Paragraph has the same methods as ListItem, so in my previous post, if you replace the word ListItem with Paragraph, it still works.
How to spawn nested loops in python I've tried searching for an answer to this question and read a lot about decorators and global variables, but have not found anything that exactly makes sense for the problem at hand: I want to make every permutation of length N using alphabet A, fxn(A,N). I will pass the function 2 arguments: A and N. It will make a dummy result of length N. Then, with N nested for loops, it will update each index of the result with every element of A, starting from the innermost loop. So fxn('01',4) will produce 1111, 1110, 1101, 1100, 1011, 1010, 1001, 1000, 0111, 0110, 0101, 0100, 0011, 0010, 0001, 0000. It is straightforward to do this if you know how many nested loops you will need (N; although for more than 4 it starts to get really messy and cumbersome). However, if you want to make all arbitrary-length sequences using A, then you need some way to automate this looping behavior. In particular, I also want this function to act as a generator, to prevent having to store all these values in memory, such as with a list. To start, it needs to initialize the first loop and keep initializing nested loops with a single value change (the index to update) N-1 times. It will then yield the value of the innermost loop. The straightforward way to do fxn('01',4) would be:

    for i in alphabet:
        tempresult[0] = i
        for i in alphabet:
            tempresult[1] = i
            for i in alphabet:
                tempresult[2] = i
                for i in alphabet:
                    tempresult[3] = i
                    yield tempresult

Basically, how can I extend this to an arbitrary-length list or string and still get each nested loop to update the appropriate index? I know there is probably a permutation function as part of numpy that will do this, but I haven't been able to come across one. Any advice would be appreciated. You want itertools.permutations(), in the standard library - or, more likely, itertools.product('01', repeat=4). Actually, I don't think numpy has an easy way to do this.
You can use a recursive function that combines tile and repeat at each recursive step, but that doesn't seem very simple or efficient. You don't actually want permutations here, but the cartesian product of alphabet*alphabet*alphabet*alphabet. Which you can write as: itertools.product(alphabet, repeat=4) Or, if you want to get strings back instead of tuples: map(''.join, itertools.product(alphabet, repeat=4)) (In 2.x, if you want this to return a lazy iterator instead of a list, as your original code does, use itertools.imap instead of map.) If you want to do this with numpy, the best way I could think of is to use a recursive function that tiles and repeats for each factor, but this answer has a better implementation, which you can copy from there, or apparently pull out of scikit-learn as sklearn.utils.extmath.cartesian, and then just do this: cartesian([alphabet]*4) Of course that gives you a 2D array of single-digit strings; you still need one more step to flatten it to a 1D array of N-digit strings, and numpy will slow you down there more than it speeds you up in the product calculation, so… unless you actually needed a numpy array anyway, I'd stick with itertools here. You can look to see how the itertools.permutations function works.
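The itertools.product approach suggested in the answer can be wrapped into the generator the question asks for; a minimal sketch (the name fxn is taken from the question):

```python
from itertools import product

def fxn(alphabet, n):
    """Lazily yield every length-n string over alphabet (cartesian product)."""
    for combo in product(alphabet, repeat=n):
        yield ''.join(combo)

seqs = list(fxn('01', 4))
print(len(seqs))            # 16 == 2**4
print(seqs[0], seqs[-1])    # 0000 1111
# product varies the rightmost position fastest, following alphabet order,
# so to get the question's order (1111 first) just reverse the alphabet:
print(next(fxn('10', 4)))   # 1111
```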
Google Snap to Roads example output I have been trying to use the example output of the Google Snap to Roads API as a test, but the only way it is recognized as a valid JSON file is if I remove all of the commas after placeId, ("placeId": "ChIJr_xl0GdNFmsRsUtUbW7qABM",). Is this a mistake on my part? I would like to be able to use the test data without cleaning it of the commas. I removed the commas from the placeId lines and the very bottom brackets, and my program ran; however, when I didn't, it would not work. Python can't parse JSON with an extra trailing comma. Can json.loads ignore trailing commas? Or use https://github.com/googlemaps/google-maps-services-python If I were to use the Python version, would the output still be JSON? I'm just trying to get the lat/long of a road coordinate.
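Since json.loads is strict about trailing commas, one workaround is to strip them before parsing. A rough sketch (note the regex is naive: it would also rewrite a ",}" that happened to appear inside a quoted string):

```python
import json
import re

def strip_trailing_commas(text):
    """Remove commas that sit directly before a closing } or ].
    Naive: would also touch a ',}' inside a quoted string value."""
    return re.sub(r',\s*([}\]])', r'\1', text)

# A fragment shaped like the example output, with the offending commas:
raw = '{"snappedPoints": [{"placeId": "ChIJr_xl0GdNFmsRsUtUbW7qABM",}],}'
data = json.loads(strip_trailing_commas(raw))
print(data["snappedPoints"][0]["placeId"])  # ChIJr_xl0GdNFmsRsUtUbW7qABM
```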
Fix timeout for the "You may only open the close dialog every 3 seconds" notification Several of the interactions with the site are rate-limited and will bring up a warning notice if you do them repeatedly. For example, you can only flag comments every 3 seconds. Typically, these notices stay up for as long as you are required to wait; so, for example, if you get a notification when attempting to flag a comment, just wait until the notice disappears, and you know you can try again without getting the notification. When you vote to close and select the wrong button in the dialog, a common flow is to cancel the dialog with Esc (or clicking Cancel if you need a mouse to communicate with your computer) and then reopen the dialog. However, this action too is throttled, and will produce a warning message. However, unlike many other notices of this kind, this one stays up for much longer than 3 seconds. By informal timing, I get 20 seconds for this particular notification. Could the notification please disappear after 3 seconds? (Or, if 3 seconds is too short for people to see the notification, maybe just make the period 5 seconds. Tangentially, see also "5 seconds is too long, but if it must be, then give me a visual cue".) Or, if you can, just don't rate limit this particular feature. In other words, the design where the notification disappears when the time is up is an obscure but helpful usability hint; I want it for this notification too, if indeed it has to remain rate-limited. Otherwise, instead of showing an error message at all, maybe just open the same close vote dialog as the one that was loaded 3 seconds ago. @DonaldDuck I would find it annoying @Konrad If you dismissed it and tried to open it again, that's exactly the behavior you want. That's what removing the throttling entirely would accomplish. Opening the dialog when you didn't request it would be annoying, but that is not what is being discussed here.
What I imagine @DonaldDuck is saying is, if you did some moderately expensive operation to make the pseudo-dialog appear the first time, cache the results if it's dismissed so that it can be brought up again with almost no cost. (Though perhaps I'm just demonstrating my ignorance of how this works under the hood.) @tripleee Correct. According to the comments to Why can I only open the close/flag dialog every 3 seconds?, the reason why this is rate limited at all is to prevent attacks where bots would open the dialog very often in order to overload the server. If the results are cached client side, there would be no need for requests at all the second time the dialog is opened, removing the need for rate limiting.
git merge --no-commit creates a merge commit when run in IntelliJ IDEA I'm trying to merge the master branch into my feature branch using IntelliJ IDEA's UI option: VCS -> Git -> Merge Changes.... I select the checkbox No commit and select the origin/master branch to merge. However, after doing this, when I run git log I see that a new commit was created with the merged changes. The same happens if I use the Terminal window in IntelliJ by running git merge origin/master --no-commit. When I do this in Git Bash it works correctly (the commit is not created). Is there any known issue related to this problem with IntelliJ? My IntelliJ IDEA version is: 2017.1.1 (build 171.4073.35) I tried but could not reproduce it. The working tree and the index were updated but the commit was not made. Git Bash is not a proper test here, because it is not a native shell for Windows, and commands issued by the IDE are executed in another environment, not Git Bash. Check the result of git merge origin/master --no-commit executed in the Windows command prompt. It is used by the IDE terminal by default. BTW, --no-commit works fine for me.
jest's __mocks__ folder won't work with TypeScript I am using jest 27.0. I am using this mock in many tests:

    jest.mock('@aws-sdk/client-sfn', () => {
      return {
        ...jest.requireActual('@aws-sdk/client-sfn'),
        SendTaskSuccessCommand: jest.fn(),
        SendTaskFailureCommand: jest.fn(),
      };
    });

Therefore I wanted to use a __mocks__ folder. I have the following project setup: all dependencies, including @aws-sdk/client-sfn, are installed in my root folder. In the subfolder there is also a package.json and the folder where I want to write the tests. So I created this file: <EMAIL_ADDRESS>:

    export const SendTaskSuccessCommand = jest.fn();
    export const SendTaskFailureCommand = jest.fn();

However, when I want to use this in my test like this:

    expect(SendTaskSuccessCommand).toHaveBeenCalledWith({
      taskToken: 'task_token',
      output: JSON.stringify(expectedOutput),
    });

and run the test, I always get:

    Error: expect(received).toHaveBeenCalledWith(...expected)
    Matcher error: received value must be a mock or spy function

I was able to mock the send function as below; you could try to add SendTaskSuccessCommand as well.

    const mockResponse = jest.fn();
    jest.mock('@aws-sdk/client-sfn', () => {
      const originalModule = jest.requireActual('@aws-sdk/client-sfn');
      class MockSFNClient {
        constructor() {}
        send() {
          return Reflect.apply(mockResponse, this, []);
        }
      }
      Object.setPrototypeOf(MockSFNClient.prototype, originalModule.SFNClient.prototype);
      return { ...originalModule, SFNClient: MockSFNClient };
    });
How to match column a and column b from same table in BigQuery on substring match I'm using #standardSQL BigQuery. I tried to use LIKE but couldn't get the matching to occur. I have the table below. I am trying to write a query in BigQuery where, if column A is contained in column B, then I want an output column C with the string from column A. In the case that there is no match between column A and column B, I would want column B in the output column C.

    id | Column A | Column B
    1  | John     | Alex, John
    3  | Cerci    | Cerci
    5  | Mike     | Mike
    2  | Simi     | Expatri
    6  | Hazel    | Expatri, Hazel
    4  | Bald     | Hair
    7  | Cam      | Ambrose, Cam

What I would ideally want:

    id | Column A | Column B       | Column C
    1  | John     | Alex, John     | John
    3  | Cerci    | Cerci          | Cerci
    5  | Mike     | Mike           | Mike
    2  | Simi     | Expatri        | Expatri
    6  | Hazel    | Expatri, Hazel | Hazel
    4  | Bald     | Hair           | Hair
    7  | Cam      | Ambrose, Cam   | Cam

Use a CASE expression:

    SELECT id, A, B,
        CASE WHEN B LIKE '%' || A || '%' THEN A ELSE B END AS C
    FROM yourTable
    ORDER BY id;

Below is for BigQuery Standard SQL

    #standardSQL
    SELECT *,
      IF(REGEXP_CONTAINS(Column_B, Column_A), Column_A, Column_B) AS Column_C
    FROM `project.dataset.table`

If applied to sample data from your question, the output is:

    Row | id | Column_A | Column_B       | Column_C
    1   | 1  | John     | Alex, John     | John
    2   | 3  | Cerci    | Cerci          | Cerci
    3   | 5  | Mike     | Mike           | Mike
    4   | 2  | Simi     | Expatri        | Expatri
    5   | 6  | Hazel    | Expatri, Hazel | Hazel
    6   | 4  | Bald     | Hair           | Hair
    7   | 7  | Cam      | Ambrose, Cam   | Cam
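The CASE/IF logic from both answers boils down to one conditional per row, which is easy to sanity-check outside SQL. A plain-Python restatement (note that `a in b` is a literal substring test, like LIKE with escaped wildcards; the REGEXP_CONTAINS variant would additionally treat regex metacharacters in A specially):

```python
# C = A if A occurs inside B, else B -- the same rule as the SQL answers.

def pick(a, b):
    return a if a in b else b

rows = [
    (1, "John", "Alex, John"),
    (2, "Simi", "Expatri"),
    (4, "Bald", "Hair"),
    (6, "Hazel", "Expatri, Hazel"),
]

for rid, a, b in rows:
    print(rid, pick(a, b))
```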
Why does jQuery Mobile add in divs, classes etc I've been working on a small site and now I want to add a switch using jQuery Mobile, since it needs to work well on mobile devices. But when I add the necessary files it messes up my whole markup by adding divs around input fields, adding classes to the body and divs, and adding styling to the divs and classes, which destroys my whole layout. What's the point of that? This makes me not want to use jQuery Mobile, since now I have to write exceptions everywhere :/ Can someone explain to me why this is happening? jQM is a framework. You should have started developing from the ground up with it. You can create a custom jQM build by including only the widgets you want, and initialize them manually. @FabrícioMatté so if you have an existing website and want to use a few things from the jQuery Mobile framework, you have to rebuild everything? Looks like a waste of time. @Omar I did this, I only included the switch part I need, but then the module doesn't work. Have you used http://jquerymobile.com/download-builder/ ? @Omar yes I did, I only added the "Flip Switch" module, since I need that. But when I use the custom css/js files the module doesn't work. When I use the full download it does work. You need to initialize it on .ready() this way: $('selector').flipswitch(); if you're using a jQM 1.4 custom build. @Omar I'm using the basic checkbox switch from jQM. On the builder I only select "Flip switch" and extract the files into my workspace.
FlipSwitch

    %form
      %label{:for => "flip-checkbox"} Flip toggle switch checkbox:
      %input{:type => "checkbox", :"data-role" => "flipswitch", :name => "flip-checkbox", :id => "flip-checkbox", :checked => ""}

jQuery

    $( document ).ready(function() {
      $('selector').flipswitch();
    });

When using a custom jQuery Mobile, links in head should be placed as follows:

    <!-- structure for custom themes -->
    <link rel="stylesheet" href="jquery.mobile.custom.structure.min.css"/>
    <link rel="stylesheet" href="jquery.mobile.custom.theme.min.css"/>
    <script src="jquery-1.9.1.min.js"></script>
    <!-- load jQM after jQuery -->
    <script src="jquery.mobile.custom.min.js"></script>

And then, you need to initialize widgets; each widget has its own function.

    $(function () {
      $("#flip-checkbox-ID").flipswitch(); /* initialize flip-switch */
    });
How can I conceal the content of a file that is constructed using a Java desktop application and used by an Android app? I'm building a tool that enables teachers to create exercises using a Java desktop application. The exercises consist of questions and their answers and are saved to a single file. The students import this file into an Android app. The students must naturally not be allowed to view the content of the exercise file, as this would give them the answers to the questions. So, how can I conceal the content of a file that is constructed using a Java desktop application and used by an Android app? My first thought was to encrypt the file using symmetric encryption with the password hardcoded into both the Java desktop application and the Android app; however, I presume a simple reverse-engineering of the Java desktop application would reveal the password quickly. My next thought was to use some sort of public-key encryption, hardcoding the private key into the Android app, but without being certain I assume it is possible to reverse-engineer the Android app as well. So what is my strategy? Is there any obvious solution I just can't wrap my mind around? Edit - additional info: I would like to avoid the usage of web servers. The most secure way is to give questions IDs and then let the Android application check the answer at runtime against a remote server, so that the application never knows the correct answer. But you have to develop a web service to do it. Yes, that is one solution, thank you for your answer. But for now I'm trying to avoid the use of web servers. Then encryption is your way. But how can the files be encrypted by the editor so that only the Android app can read the exercise files? Remember: I distribute both the Java desktop application (the editor) and the Android app freely (although not as open source).
ubuntu 14 glassfish4 shutdown without log I installed GlassFish 4 on an Ubuntu 14 server. After some configuration everything seemed to work fine, but sometimes it randomly shuts down. I installed JDK 1.7 release 60, but I don't understand why it dies during the night. I can't figure out why the JVM or GlassFish crashes. I have the same crash problem when changing the JVM configuration in the DAS. Can someone help me? Are there any compatibility issues with this configuration? Thanks in advance. How much physical memory does the server have, and how large is the max heap size of the JVM? I had a crash of the GF server because I had the max heap size set to 1024 MB, but the virtual server had only 512 MB of RAM. When fetching some large data structure into memory, the GF server shut down without any log. You can also try to monitor your GF server with VisualVM. Typically a shutdown of GF is an out-of-memory error (OOME). Thanks for the reply. I worked a lot on this issue. My VM uses 6 GB of RAM on a VMware server, and I had increased -Xmx to 3 GB and MaxPermSize to 1 GB, but it crashed again. I increased swap space on Ubuntu to 8 GB and reduced the settings to -Xmx2048m and -XX:MaxPermSize=1024m, and now it seems to have worked fine for 2 days. But jvm.log is a very poor log. How can I use VisualVM to monitor the GF server?
How can I find the center of a region in a linear programming problem? I have an optimization problem that in most respects can be very elegantly expressed via linear programming. The twist is that I don't actually want to minimize or maximize any of my variables; I want to find the center of the feasible region. The problem is coming up with a definition of "center" that I can optimize for using an LP solver. I started out by finding the Chebyshev center; that works for larger feasible regions, but many of the feasible regions I'm interested in are extremely long and narrow. I'd like something that's more logically in the "middle" than a Chebyshev sphere can give. An added bonus would be something that still works if one of my constraints is actually an equality (essentially specifying a hyperplane through my problem space), so that it can find the center of the hyperplane within the space. The Chebyshev approach can't do this at all. As an example problem, I'd like to find the "center" of the region specified by:

    x >= 0
    x <= 100
    y >= 0
    y <= 100
    x + y >= 90
    x + y <= 100

Graph is here Ideally, I'd like the solution to be x=47.5, y=47.5; however, everything I've thought of so far gives me something with either a very small x or a very small y. Would downvoters please explain why? I thought this was a well-expressed question... I think the down vote was because this is not a linear programming question. You are trying to find the "center" of a polytope. But that's precisely my question; is it possible to solve this sort of problem using linear programming techniques? And if it's not possible, pointers towards other computationally tractable approaches would be welcome... It really just depends on your definition of center. See below. I upvoted to even out your downvote :P I don't think this question is worthy of a downvote.
The only way I think you can use a quickly solvable linear programming formulation to find something close to a center would be to take your "truly free" inequalities in the form $Ax \leq b$, i.e. after all the implicit equations $Cx = d$ have been identified and removed from the inequalities and expressed separately, and then add one extra variable $\epsilon$ and solve the LP: $\max \epsilon$ subject to $Ax + \epsilon \leq b$ and $Cx = d$. Then whatever solution $(x,\epsilon)$ you get, you will have that $x$ will be toward the center of your original feasible region. Isn't that almost equivalent to the Chebychev center? The centroid of a simplex might work for you: $$ \frac{1}{n} \sum_{i=1}^n v_i. $$ Here, the $v_i$ are the vertices of the polytope. Addendum: If you're familiar with the theory behind linear programming, you'll find it easy to go from your constraint matrix $A$ to vertices on the polytope. "Selecting" a vertex is equivalent to selecting your basic variables. The number of vertices of the feasible region can be exponential in the number of inequalities defining the feasible region. In general, with $d$ variables and $n$ inequalities, the number of vertices could be $\Omega(n^{d/2})$ From the looks of the question, it doesn't seem like that is an issue. Well, that's just a simplified example. The actual version will have thousands of variables and constraints :) It seems that this cannot be done with linear programming. You have to look into finding the "analytic center of the feasible region" which involves minimizing a sum of logs. Therefore this is a convex optimization problem. One way to do it is to formulate your problem as $A\mathbf{x}=b$ and $\mathbf{x}>=0$ (to convert all your inequalities to equalities, use slack variables). Then minimize $-\sum_{i}\log{x_{i}}$ with a convex optimization algorithm. $\mathbf{x}$ should contain all the slack variables you had to create.
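As a worked check, the $\epsilon$-formulation from the first answer can be written out explicitly for the toy region in the question (ignoring row normalization, so $\epsilon$ measures constraint slack rather than Euclidean distance):

$$
\begin{aligned}
\max\ \epsilon \quad \text{s.t.}\quad
& -x + \epsilon \le 0, & x + \epsilon &\le 100,\\
& -y + \epsilon \le 0, & y + \epsilon &\le 100,\\
& -x - y + \epsilon \le -90, & x + y + \epsilon &\le 100.
\end{aligned}
$$

The last two constraints force $90 + \epsilon \le x + y \le 100 - \epsilon$, so the optimum is $\epsilon^\* = 5$, attained by every point on the segment $x + y = 95$ with $\min(x,y) \ge 5$. A solver is free to return a vertex of that degenerate optimal face, such as $(x,y) = (5, 90)$, rather than the midpoint $(47.5, 47.5)$, which is consistent with the lopsided answers described in the question; breaking the tie toward the middle would need a secondary objective.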
Asking boss and co-workers to involve me more often First, the context: I work in a small company (~12 people) where almost everyone has worked together for more than 10 years (some for almost 20 years) and I'm by far the youngest person. I have worked there for almost 2 years. Everyone is Dutch except for me, and I don't speak the language well. I have the job title of "Manager of R&D". I have a tendency to feel unliked and neglected. The problems: Since the beginning, and on several occasions, I was not involved in important decisions or meetings with customers, and I feel that I could have had the chance to give my technical opinion or, at least, learn by listening and interacting with people from different backgrounds. Not only do I feel bad because it seems like my co-workers do not care or do not trust me, but I am also missing many chances to learn and improve myself. Even recently, I was not informed about or invited to a meeting with a very important client where I believe that I (as the Manager of R&D) should have been present. Every year we choose a few conferences to go to. I am never invited to go to those conferences. While everyone in the office is nice to me, there is not a very strong connection or interaction. Very often there are discussions about topics unrelated to work, or even interesting things about work discussed among everyone, and all those discussions are in Dutch. I feel really isolated because everyone is laughing and having fun and I just have to keep working. While it seems to me that I am right, I also wonder if I'm giving too much weight to minor things. It may be that my boss or my colleagues thought that I had more important things to do, or it can be seen as an optimization of resources... but on the other hand, it seems to happen too often that my colleagues and boss do not value my opinion and do not respect me. I also feel very isolated in social terms. My question is: should I express and explain my concerns to my colleagues?
If yes, what is the best way to do it and avoiding looking like an extremely insecure and delusional person? If not, what actions should I try to implement to overcome this feeling and situations? I'm assuming, as you don't speak Dutch well and you work there, that the office language is English? I know from experience it can be hard working with people that don't speak your native language, and when you can't explain what you mean in their language the conversation gets awkward. As someone who lived in The Netherlands myself I'd pick the proverbial Dutch Way: tell them directly. I'm sure your boss won't take any offence and he will tell you the reason without any problem. The thing I loved most is that you don't need to speculate, you can always speak out and you'll get a direct honest answer. If everyone else has been working together for the past 10 years and you've only been there for 2 years, to some extent it's normal that they're reacting that way. They don't know you as well as they know each other. However, you have a very valid point that this shouldn't damage your career. I see a few possibilities here: 1) It's all just a cultural difference and they just don't know you as well as they know each other. In that case, try to make small talk when appropriate, ask them for feedback and then discuss it, etc. Work on your Dutch as well, so you can more easily jump into existing conversations. (I know it can be challenging, I've heard that Dutch people will immediately switch to perfect English if they see that you're not speaking Dutch perfectly. Keep trying!) 2) You just can't fit into that team's mentality. I've been in such a team. The way I found out was by trying to start some small talk and ending up literally disgusted by an otherwise nonchalant answer. In that case, it depends on how much of your own character you're willing to compromise in order to fit in, or how easy it would be for you to move to another job. 
In the meantime, try to pick out those opportunities when you actually can complain and talk to them. For instance, if you weren't invited to a meeting where your contribution would have been significant, you can find out who organised it and say "why was I not invited to that meeting? I think my opinion would have been valuable". About the conferences, find out who is responsible for inviting people, express your interest, inquire about future eligibility, etc. What if you never expressed interest, so they just thought you didn't want to go? As a Dutchie, I do tend to switch to English when internationals are involved in the conversation, especially when there is a need to exchange pertinent information. We've been taught very functional English - we can get things done, even if it is more tiring and less comfortable, especially when trying to express emotion or relate experiences. And so I have observed that a lot of people switch back to Dutch for social or non-essential work-related talk. This contrast must be stark and disconcerting. It would definitely make me feel like more of an outsider, if I were in OP's position. An answer to emphasize the 'work on your Dutch' in another answer. Years ago I worked in Amsterdam as a post-doc. All science was done in English. Coffee, tea, and after-work were all in Dutch, unless people were speaking directly to me. After 3 months, I took 2 weeks off to do an intensive Dutch course. I then continued with night classes. I forced myself to use Dutch in most daily interactions (at the store, scheduling sports events, meeting new people). Yes, everyone usually tried switching to English (well, except my nice elderly upstairs neighbors). I continued onward in Dutch. By the end of the first year I was comfortable in conversational Dutch. So: Take Dutch courses. Read a Dutch newspaper daily - it will expand your vocabulary.
I subscribed to the Handelsblad, you might prefer the Volkskrant (sorry, spelling is really rusty after 25 years away). Speak Dutch as much as possible to everyone, and persist in it. I couldn't agree more. However, I should add that the ability to speak Dutch is, in my opinion, not as important as a firm ability to understand the language. Speaking the language is the best way to learn, but understanding it is required to fully integrate socially. @SambalMinion - I suspect some of it is down to how different people learn languages. I could always read better than I could speak, and speak better than I could write. But carrying on a mixed Dutch-English conversation was also complicated. For me, improving my speaking helped me improve my listening, since the rhythm of the language became second nature. Your mileage may vary... Not so much; it's more about efficiency. Speaking is really essential to language acquisition. For example, blatantly imitating an accent helps you learn to parse speech in that accent. (Adank, Rueschemeyer and Bekkering (2013). 'The role of accent imitation in sensorimotor integration during processing of intelligible speech'.) However, I work with several expats who excel in the workplace, are well integrated and understand the Dutch language perfectly, while speaking very little Dutch. They just suffered a lot longer before getting to that level of comprehension. For me, I knew I'd made it when I started dreaming in Dutch... Sadly I don't anymore after 25 years away. Asking for attention and respect is hardly easy. These things are earned, not granted by your title. Also, regardless of your position, in-house time often counts a lot within a company's culture. I know places where people are proud of having very old access badges, and sometimes put pictures on those from when they were much younger (so the badge looks older). That being said, 2 years should be time enough for you to get the hang of the culture.
You should have a good idea as to why you are not called for the conferences. (Are those academic events? Business events where everyone will be speaking Dutch? Social events which serve as a prize for outstanding work?) I once was a foreign student myself and got an internship where people would always speak their language, which I had been learning for little more than a year by then. Yes, it's hard to communicate. Yes, you are not the center of attention. But I had a good manager back then who would value my deliveries. My opinion is that you should probably work to improve your Dutch mastery: take Dutch classes, read more Dutch. Until you manage to hear and speak with no additional effort and get a fairly weak accent, you'll always be compromising on the social aspect of the work. One thing you should consider as well is that being called to important meetings and events is a social privilege, not a right. But it's a tradeable privilege. If you are scheduling a meeting, consider asking more people to join; they'll appreciate it and possibly reciprocate. Maybe throw a party at your house and invite everyone in the team. The point of the last paragraph is that instead of asking what you want from others, try giving.
common-pile/stackexchange_filtered
AttributeError when running unittest with coverage in Python 2.7 on WSL
I'm encountering an error while attempting to run unit tests with unittest and coverage in Python 2.7 on a Windows 11 system using WSL (Windows Subsystem for Linux). The error message suggests an issue with finding the test case test_admin_ruta_total_posiciones within the tests module.
Error Message:

Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/__main__.py", line 12, in <module>
    main(module=None)
  File "/usr/lib/python2.7/unittest/main.py", line 94, in __init__
    self.parseArgs(argv)
  File "/usr/lib/python2.7/unittest/main.py", line 149, in parseArgs
    self.createTests()
  File "/usr/lib/python2.7/unittest/main.py", line 158, in createTests
    self.module)
  File "/usr/lib/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
    suites = [self.loadTestsFromName(name, module) for name in names]
  File "/usr/lib/python2.7/unittest/loader.py", line 100, in loadTestsFromName
    parent, obj = obj, getattr(obj, part)
AttributeError: 'module' object has no attribute 'test_admin_ruta_total_posiciones'

Context: I'm running Python 2.7 within a WSL environment on Windows 11. My project structure includes a tests directory where test_admin_ruta_total_posiciones.py resides. This file contains several unit tests using unittest.
Code Snippet:

import unittest

class TestAdminRutaTotalPosiciones(unittest.TestCase):
    # Test methods...

What could be causing this AttributeError during test discovery with unittest in a Python 2.7 environment on WSL, and how can I resolve it to successfully run my tests with coverage?
Steps Taken:
Verified that test_admin_ruta_total_posiciones.py contains valid test methods within a unittest.TestCase subclass.
Confirmed the directory structure is correct and tests is recognized as a module within the WSL environment.
Attempted to execute tests with coverage using the command coverage run -m unittest tests.test_admin_ruta_total_posiciones, resulting in the above error.
Expected Outcome: I expect unittest to discover and execute the tests defined in test_admin_ruta_total_posiciones.py, while coverage should generate a coverage report without encountering errors.
Additional Notes: The setUp method within TestAdminRutaTotalPosiciones initializes necessary dependencies (db_beta, etc.) and sets configuration flags (EJECUTAR). Other test methods (test_soporte_sin_db, test_admin_db, etc.) within the same test class encounter similar issues during test discovery.
If you are only just learning the basics, you should probably ignore Python 2, and spend your time on the currently recommended and supported version of the language, which is Python 3.
@tripleee I'm an intern and I've been told to improve the tests so that coverage checks whether anything fails when the program runs on Python 3. I know it's outdated, but I don't have any other option.
Thanks for clarifying. Going forward, it's probably a good idea to include an explanation like this in the question itself if you have similar circumstances where you need support for obsolescent technology or something else that is unusual.
Sorry, it's just my first time using this because I haven't seen a solution anywhere.
common-pile/stackexchange_filtered
Export using bcp
I am exporting data using bcp and using UNION ALL to combine the column data with the column headers. I have used an order-by variable, set to 0 for the header row and to 1 for the data rows, in order to have the header at the top in the Excel file, but it seems to place the header in between the data on some occasions when the number of rows is more than 17,000.
You are going to have to show us the code. Please add more details to this question. The question may be clear to you, but as it stands it's very vague to us, as we don't know the system you are working with. It could use some sample data and desired output as a minimum, along with what you've tried so far.
Use the following format for your query:

Select *
From (
    Select 0 as [Type], 'Header1' as Col1, 'Header2' as Col2, ...
    UNION ALL
    Select 1 as [Type], Col1, Col2, ...
    From YourTable
) z
Order by z.[Type]
common-pile/stackexchange_filtered
Enable scalability in existing OpenShift application. Is it possible? I created a non-scalable OpenShift application with a DIY cartridge, MySQL 5.5 and phpMyAdmin. Now that I have set up everything I need, I saw that I can use port-forwarding to access my database through MySQL Workbench, so I no longer need the phpMyAdmin cartridge. Would it be possible to remove the phpMyAdmin cartridge and enable scalability, so I can use my three free gears for this application?
common-pile/stackexchange_filtered
How can I change one page to another without using Intent?
I want to implement a quiz module in my application which consists of 14 questions, so how do I go from one question to another? If I use Intents, that is not smart, because I would need to create 14 activities. Are there any other ways to move to the next page without using an Intent?
You can use fragments and replace one fragment with another.
Okay. But in a fragment, how can I implement the design?
Google for tutorials on fragments; you will find many articles. You can inflate any layout in fragments.
If you have code, then please upload it.
I don't know what you want to design, so my code won't help you. Check: http://www.tutorialspoint.com/android/android_fragments.htm
Actually I have 14 to 15 questions and each question occupies a single page, so when I click "next question" it should move directly to the next page, and the design would be the same, for example a TextView and a RadioGroup.
Okay friends, thank you so much to all. I will try to implement fragments, and if I face difficulties I will contact you again. Thank you, dear Yadav.
You can use a Fragment inside your activity which loads your view with data. For this you need an ID to get the data; for example, Question 1 has ID 1, which will be fetched from the database and displayed in your view. This will be implemented in your fragment: first get the ID and query your database to fetch that record, then display it in your view; this fragment object will be initialized and set in your layout. Now, after a question is completed, pass the next question's ID to your fragment with a new initialization and replace the current fragment.
You can achieve this in two ways.
1) Using Fragments: Have 15 fragments, all mounted on a single container, and switch from one fragment to another dynamically.
2) Using a single activity: Have one TextView for the question; if you have objective answers, have radio buttons for them. Then you have to write logic to dynamically change the data one after another while keeping the same activity.
Best approach: Using Fragments would be the best and simplest way to achieve your objective, because the second option unnecessarily complicates things.
Online sources to learn fragments: If you have not used fragments before, click here to check the developer docs, and see this simple tutorial to learn fragments.
I assume your modules are in different layouts. Then you can use the following to change to another one: setContentView(R.layout.second);
common-pile/stackexchange_filtered
error: expected specifier-qualifier-list before ‘ObjectP’ in C - CIRCULAR DEPENDENCY
I've written a little header file, and I keep getting this error: expected specifier-qualifier-list before ‘ObjectP’. I've been looking for an answer and I understand that it's because of the way the compiler parses the text. ObjectP is defined in GenericHashTable.h, which is included as you can see. I've tried writing the #include AFTER defining the struct; that didn't help. Here's the problematic code with the error line marked:

#include "GenericHashTable.h"

typedef struct List* ListP;
typedef struct List
{
    unsigned int size;
    ObjectP head;   /* <----- ERROR HERE */
} List;

Any ideas? Thanks!
EDIT: I think I know where the problem is. "List.h" includes "GenericHashMap.h" and vice versa, so I have kind of a circular dependency. When I remove the #include statement from one of them, it compiles OK and the other one gets the error message. Must I somehow break this circle, or is there another solution? Thanks!
I'm not 100% sure of this solution, since I do not see your GenericHashTable.h, but it would help if you put the definition of ObjectP in another header file which is included by both your GenericHashTable.h and the header defining the list.
common-pile/stackexchange_filtered
I would like to redirect my user to a specific question, from the first line of code, how do I do that?

print("Hello Welcome To The Essay Generating Code")
print("")
print("First things first, you need to give your essay characters")
print("")
print("")
print("How many main characters would you like your story to have?")
number_of_characters = input("Number Of Characters:")
if number_of_characters <= "10":
    print("That's way too many characters!")
else:
    # rest of code

I would like to redirect the user with the else branch back to inputting the number of characters; how do I do that?
I would use functions for your purpose and use a while loop until you have the desired value.

def set_characters_number():
    print("Hello Welcome To The Essay Generating Code")
    print("")
    print("First things first, you need to give your essay characters")
    print("")
    print("")
    print("How many main characters would you like your story to have?")
    return (int(input("Number Of Characters:")))

number_of_characters = set_characters_number()
while (number_of_characters >= 10):
    print("That's way too many characters!")
    number_of_characters = set_characters_number()
# rest of code

or, if you don't want to display the welcome message again:

print("Hello Welcome To The Essay Generating Code")
print("")
print("First things first, you need to give your essay characters")
print("")
print("")
print("How many main characters would you like your story to have?")
number_of_characters = int(input("Number Of Characters:"))
while (number_of_characters >= 10):
    print("That's way too many characters!")
    number_of_characters = int(input("Number Of Characters:"))
# rest of code

By the way, I suppose "too many characters" means greater than or equal to 10 (>= 10), not less than or equal to 10 (<= 10) as you wrote it. Notice the int(input(...)) to cast the string from the input to an int, to make the numeric comparison possible.
Thank you so much!
common-pile/stackexchange_filtered
What can I do to fix questions that can't be improved without major changes? I've run into a couple of questions in the past few days involving concepts that I'd love to see answered, but the questions themselves are written extremely badly. They tend to be littered with grammatical mistakes, ambiguous as to what the question actually asks, and are more or less incomprehensible. I believe that questions like this will never receive good answers, because to provide one would require additional clarification from the author—something that, given the quality of the question, I think either will never come or, if it does, not provide much help. Trying to understand the meaning of and rework the question through heavy editing and content addition is a risky step, because to significantly improve the clarity of the problem an editor must specify what they think the original author meant. Doing so runs the risk of deviating from the author's initial intentions, which might be impossible to determine. I believe that this is why I've been rejected both of the times I have tried to make (what I thought to be) major improvements to a question. Although I consider my edits to provide higher-quality questions but still cover what the original authors probably meant, the assumptions I had to make which resulted in that "probably" were too dangerous to the original questions to accept. My point isn't anything to do with those specific edits (besides, I could have made more mistakes that I'm missing); rather, my inspiration for them. I would like to see these questions get answered, but I highly doubt that they will be in their current state, and if they are, their low level of quality will help few but the original author. How should I address questions with concepts I love but executions that can't be saved without major revising? 
All I've been able to come up with so far is to ask my own, higher-quality version of the question, although I feel as if I'd be somehow infringing on the original. If there is no certainty about the original intent then I'd just comment inviting the OP to improve the meaning.
common-pile/stackexchange_filtered
Html5 attribute maxlength="5" gets useless after changing event.keyCode I got an input field for captcha which prevents user from typing non Ascii code in it: <input id="Captcha"<EMAIL_ADDRESS>type="text" class="noFarsi" maxlength="5" placeholder="ُHuman Code" autocomplete="off" /> and the related class to prevent from typing Farsi(Persian) : $(".noFarsi").keypress(function (e) { var keyCode = e.which; if (keyCode >= 127) { e.preventDefault(); } }); It was working correctly till I decided to add some new feature. The following class converts non English inputs to English : $(".convertToEnglish").keydown(function (e) { var keyCode = e.which; switch (keyCode) { case 65: typeFunc("A", e, e.target.id); break; case 66: typeFunc("B", e, e.target.id); break; case 67: typeFunc("C", e, e.target.id); break; case 68: typeFunc("D", e, e.target.id); break; case 69: typeFunc("E", e, e.target.id); break; case 70: typeFunc("F", e, e.target.id); break; case 71: typeFunc("G", e, e.target.id); break; case 72: typeFunc("H", e, e.target.id); break; case 73: typeFunc("I", e, e.target.id); break; case 74: typeFunc("J", e, e.target.id); break; case 75: typeFunc("K", e, e.target.id); break; case 76: typeFunc("L", e, e.target.id); break; case 77: typeFunc("M", e, e.target.id); break; case 78: typeFunc("N", e, e.target.id); break; case 79: typeFunc("O", e, e.target.id); break; case 80: typeFunc("P", e, e.target.id); break; case 81: typeFunc("Q", e, e.target.id); break; case 82: typeFunc("R", e, e.target.id); break; case 83: typeFunc("S", e, e.target.id); break; case 84: typeFunc("T", e, e.target.id); break; case 85: typeFunc("U", e, e.target.id); break; case 86: typeFunc("V", e, e.target.id); break; case 87: typeFunc("W", e, e.target.id); break; case 88: typeFunc("X", e, e.target.id); break; case 89: typeFunc("Y", e, e.target.id); break; case 90: typeFunc("Z", e, e.target.id); break; } }); function typeFunc(str, e, thisID) { e.preventDefault(); $("#" + thisID).val($("#" + thisID).val() + 
str); }

With the recent changes, and adding convertToEnglish to my Captcha input fields, it doesn't matter whether the user's input language is English or Farsi :) and the field text fills correctly in English, but the maxlength attribute becomes useless and the user can type 10 or more characters in my captcha field. Any help is appreciated. I tried using keypress and Unicode codes, but still the same problem:

$(".convertToEnglish").keypress(function (e) {
    var keyCode = e.keyCode;
    switch (keyCode) {
        case 1588: typeFunc("A", e, e.target.id); break;
        case 1584: typeFunc("B", e, e.target.id); break;
        case 1586: typeFunc("C", e, e.target.id); break;
        // some similar codes here
    }
});

Another point: the input field still prevents typing more than 5 digit characters: I can type "12345ABCDEFG" but I'm not allowed to type "123456ABCDEFG".
If you are manually setting the contents or preventing the default, the task falls on you to limit the amount of characters - any attribute rules on a node don't actually get followed if you fill the node with node.value = 'whatever'. Wouldn't it be better to use the input's pattern attribute and limit it to /[A-Za-z0-9]/ or something?
Yes, I made all the limitations by RegEx too, but I wonder if I can change the input language somehow.
If possible, I'm not sure. I am wondering what problem you are trying to solve here - is Farsi an issue? Isn't a captcha something you will just match? And if it's Farsi it won't match? Has it got to do with potential confusion between the two languages and their characters for users?
For a quick solution to your max-length issue, just try event.target.value = event.target.value.slice(5) at the end of your event listener.
Users fill all the fields in my form in Farsi and press the Tab button on the keyboard, start typing in the Captcha field, and then look up and see: wow!! Nothing typed in the captcha, or Farsi typed, and they have to delete the field and type again. I'm looking for convenience ... Your JavaScript code is useful, thanks.
You could also just regex away at it in that fashion: event.target.value = event.target.value.replace( /[^A-Za-z0-9]/g, '' ).slice( 5 ), which would remove any foreign characters and limit your string to 5? (Regex: replace everything NOT English with nothing.) Also, correction: slice should be slice(0, 5), my bad. Otherwise it would start at character 5, leaving it empty.
If preventing anything that shouldn't be typed into the text field is the goal, maybe it can be accomplished in a much simpler way:

event.target.value = event.target.value.replace( /[^A-Za-z0-9]/g, '' ).slice(0, 5);

This limits the characters you can enter to A-Z, a-z and 0-9, and imposes a max length.

document.querySelector( '#input' ).addEventListener( 'input', event => {
    event.target.value = event.target.value.replace( /[^A-Za-z0-9]/g, '' ).slice(0, 5);
});

<input type="text" id="input" />
common-pile/stackexchange_filtered
Find the expected value of the observation at time T+1 in HMM
I have built an HMM model in R using the depmixS4 package. I have the state projections and posterior probabilities up to time T. I understand that to find the state projections for time T+1, I have to multiply the posterior from time T by the transition matrix. But how can I find the expected value of the observation at time T+1? I don't know how to get the emission matrix, since only the transition matrix is given in the output. I am new to HMMs and just starting to learn, so I would appreciate any help.
https://stats.stackexchange.com/questions/130295/can-hidden-markov-models-be-used-to-predict-next-observation might help
common-pile/stackexchange_filtered
GuzzleHttp \ Exception \ ConnectException cURL error 7: Failed to connect to localhost port 8088: Connection refused
I'm using laradock and I can access the page in the browser at http://localhost:8088/api/getakicks/get without any problems. But when I try to access it in a controller I'm getting this error: GuzzleHttp \ Exception \ ConnectException cURL error 7: Failed to connect to localhost port 8088: Connection refused
The code I use:

$client = new \GuzzleHttp\Client();
// Set various headers on a request
$client->request('GET', 'http://localhost:8088/api/getakicks/get');

To get your IP address on a Mac, click on the Wi-Fi icon -> Open Network Preferences -> then copy the IP address <IP_ADDRESS> (your own unique IP) and replace xxx in the code below: $client->request('GET', 'http://xxx.xxx.xx.x:8088/api/getakicks/get');
Yes, I did change it and it is working well: I changed the IP address from localhost:8088 to the Windows IP address 192.168.x.x:8088. Thank you for your comment.
Now I get cURL error 6: Could not resolve host: 118.238.x.x. I'm on Mac.
Replace x with a number, for example <IP_ADDRESS>
@RouhollahMazarei I did in the first place - just didn't want to flash my IP out :)
When using Docker, localhost in the container points to the container itself, not your host machine. In your case, changing http://localhost:8088 to http://host.docker.internal:8088 lets the container access services on your host's localhost, which fixes the connection issue.
common-pile/stackexchange_filtered
Inspect Managed Timer Information Using WinDbg
I have a 32-bit memory dump from a .NET 2.0.50727.6421 process that has stopped working correctly. Some of its work is driven from a System.Threading.Timer. The timer is supposed to trigger some processing. I am thinking the timer may be stopped, but I cannot figure out how to see this from the memory dump. How can I tell the status of a .NET 2.0 System.Threading.Timer from a memory dump?
I have dumped the timer object. Inside, there is a TimerBase object:

0:000> !do 0204f72c
Name: System.Threading.Timer
MethodTable: 74037344
EEClass: 73e04230
Size: 16(0x10) bytes
 (C:\Windows\assembly\GAC_32\mscorlib\<IP_ADDRESS>__b77a5c561934e089\mscorlib.dll)
Fields:
      MT    Field   Offset                 Type VT     Attr    Value Name
74050924  400018a        4        System.Object  0 instance 00000000 __identity
740373cc  4000687        8 ...reading.TimerBase  0 instance 0204f73c timerBase

I have dumped the TimerBase object, too. There is a handle inside:

0:000> !do 0204f73c
Name: System.Threading.TimerBase
MethodTable: 740373cc
EEClass: 73e699dc
Size: 24(0x18) bytes
 (C:\Windows\assembly\GAC_32\mscorlib\<IP_ADDRESS>__b77a5c561934e089\mscorlib.dll)
Fields:
      MT    Field   Offset                 Type VT     Attr    Value Name
740535d0  4000682        4        System.IntPtr  1 instance  52b0508 timerHandle
740535d0  4000683        8        System.IntPtr  1 instance  146f798 delegateInfo
74052f54  4000684        c         System.Int32  1 instance        0 timerDeleted
74052f54  4000685       10         System.Int32  1 instance        0 m_lock

I'm not sure how to inspect the timerHandle. I can look at the raw memory, but this doesn't tell me much:

0:000> dd 52b0508
052b0508  01465ae8 014b2fe0 008632fd 74a498c9
052b0518  0146f798 00001770 00000000 00000003
052b0528  00000001 ffffffff ffffffff 00000000
052b0538  00000000 00000000 00000000 00000000
052b0548  3980d87d 8a002250 00440050 00430049
052b0558  0072006f 004c0065 00430049 006e006f
052b0568  00690066 002e0067 006d0058 0053006c
052b0578  00720065 00610069 0069006c 0065007a

I am not sure what the handle is pointing to.
I tried dumping it as a _KTIMER object, but I don't know enough to see if this is the right kind of object, and if it is, what kind of thing it is:

0:000> dt -r _KTIMER 52b0508
ntdll!_KTIMER
   +0x000 Header : _DISPATCHER_HEADER
      +0x000 Type : 0xe8 ''
      +0x001 TimerControlFlags : 0x5a 'Z'
      +0x001 Absolute : 0y0
      +0x001 Wake : 0y1
      +0x001 EncodedTolerableDelay : 0y010110 (0x16)
      +0x001 Abandoned : 0x5a 'Z'
      +0x001 Signalling : 0x5a 'Z'
      +0x002 ThreadControlFlags : 0x46 'F'
      +0x002 CycleProfiling : 0y0
      +0x002 CounterProfiling : 0y1
      +0x002 GroupScheduling : 0y1
      +0x002 AffinitySet : 0y0
      +0x002 Reserved : 0y0100
      +0x002 Hand : 0x46 'F'
      +0x002 Size : 0x46 'F'
      +0x003 TimerMiscFlags : 0x1 ''
      +0x003 Index : 0y1
      +0x003 Processor : 0y00000 (0)
      +0x003 Inserted : 0y0
      +0x003 Expired : 0y0
      +0x003 DebugActive : 0x1 ''
      +0x003 DpcActive : 0x1 ''
      +0x000 Lock : 0n21388008
      +0x000 LockNV : 0n21388008
   +0x004 SignalState : 0n21704672
   +0x008 WaitListHead : _LIST_ENTRY [ 0x8632fd - 0x74a498c9 ]
      +0x000 Flink : 0x008632fd _LIST_ENTRY
      +0x004 Blink : 0x74a498c9 _LIST_ENTRY [ 0x6aec8b55 - 0xc75ff01 ]
   +0x010 DueTime : _ULARGE_INTEGER 0x00001770`0146f798
      +0x000 LowPart : 0x146f798
      +0x004 HighPart : 0x1770
      +0x000 u : <unnamed-tag>
         +0x000 LowPart : 0x146f798
         +0x004 HighPart : 0x1770
      +0x000 QuadPart : 0x00001770`0146f798
   +0x018 TimerListEntry : _LIST_ENTRY [ 0x0 - 0x3 ]
      +0x000 Flink : (null)
      +0x004 Blink : 0x00000003 _LIST_ENTRY
         +0x000 Flink : ????
         +0x004 Blink : ????
   +0x020 Dpc : 0x00000001 _KDPC
      +0x000 Type : ??
      +0x001 Importance : ??
      +0x002 Number : ??
      +0x004 DpcListEntry : _LIST_ENTRY
         +0x000 Flink : ????
         +0x004 Blink : ????
      +0x00c DeferredRoutine : ????
      +0x010 DeferredContext : ????
      +0x014 SystemArgument1 : ????
      +0x018 SystemArgument2 : ????
      +0x01c DpcData : ????
   +0x024 Period : 0xffffffff

Dumping the handle gives me an error:

0:000> !handle 52b0508 f
ERROR: !handle: extension exception 0x80004002.
"Unable to read handle information" I can see there are 9 managed threads: 0:000> !threads ThreadCount: 148 UnstartedThread: 0 BackgroundThread: 8 PendingThread: 0 DeadThread: 139 Hosted Runtime: no PreEmptive GC Alloc Lock ID OSID ThreadOBJ State GC Context Domain Count APT Exception 0 1 2b5c 01448d50 a020 Enabled 00000000:00000000 01444930 0 MTA 2 2 988 01456d40 b220 Enabled 00000000:00000000 01444930 0 MTA (Finalizer) 3 4 1ae0 014a1f88 180b220 Enabled 14344f30:143467a0 01444930 0 MTA (Threadpool Worker) 4 5 315c 014b6278 80a220 Enabled 00000000:00000000 01444930 0 MTA (Threadpool Completion Port) 7 6 2034 05276be0 180b220 Enabled 00000000:00000000 01444930 0 MTA (Threadpool Worker) 9 7 1b74 052c2618 200b220 Enabled 1414f654:14151330 01444930 1 MTA [snip] 16 31 19bc 08195e00 180b220 Enabled 1436cb64:1436e7a0 01444930 0 MTA (Threadpool Worker) 17 28 2208 08196da0 180b220 Enabled 00000000:00000000 01444930 0 MTA (Threadpool Worker) 18 21 3290 08195248 880b220 Enabled 00000000:00000000 01444930 0 MTA (Threadpool Completion Port) [snip] Thread 0 is the main thread of this Windows service. Thread 2 is the GC finalizer thread. 
Thread 4:

0:004> !dumpstack
OS Thread Id: 0x315c (4)
Current frame: ntdll!NtDelayExecution+0xc
ChildEBP RetAddr  Caller,Callee
04defb68 77a411f8 KERNELBASE!SleepEx+0x62, calling ntdll!NtDelayExecution
04defba0 77a5b1e4 KERNELBASE!SleepEx+0x39, calling ntdll!RtlActivateActivationContextUnsafeFast
04defbd0 74a48e23 mscorwks!ThreadpoolMgr::TimerThreadFire+0x6d, calling KERNELBASE!SleepEx
04defc4c 74a48cd1 mscorwks!ThreadpoolMgr::TimerThreadStart+0x57, calling mscorwks!ThreadpoolMgr::TimerThreadFire
04defc58 761186e3 kernel32!BaseThreadInitThunk+0xe
04defc64 77c0aa29 ntdll!__RtlUserThreadStart+0x72
04defca8 77c0a9fc ntdll!_RtlUserThreadStart+0x1b, calling ntdll!__RtlUserThreadStart

Thread 9 is a System.Net.Timer thread:

0:009> !dso
OS Thread Id: 0x1b74 (9)
ESP/REG  Object   Name
061bef84 1414f63c System.Object[] (System.Threading.WaitHandle[])
061bef98 1414f63c System.Object[] (System.Threading.WaitHandle[])
061befa8 1414f5fc System.Net.TimerThread+TimerNode
061befac 024c83a4 System.Net.TimerThread+TimerQueue
061befd8 1414f5a4 System.Net.TimerThread+TimerNode
061beff8 024c812c System.Net.ConnectionPool
061bf054 1414f63c System.Object[] (System.Threading.WaitHandle[])
061bf090 01ffde94 System.Object[] (System.Threading.WaitHandle[])
061bf09c 024c83a4 System.Net.TimerThread+TimerQueue
061bf0bc 01ffdde8 System.Collections.Generic.LinkedList`1[[System.WeakReference, mscorlib]]
061bf0c0 01ffddcc System.Collections.Generic.LinkedList`1[[System.WeakReference, mscorlib]]
061bf120 02003cc0 System.Threading.ThreadHelper
061bf378 02003cd4 System.Threading.ThreadStart

It seems not so easy. Someone wrote a (.NET 4) extension for it: http://blog.steveniemitz.com/spt-a-windbg-extension-for-debugging-net-applications/ Sadly, the original links on that page give a 404 not found error, and the code on GitHub doesn't seem to mention Timer or TimerBase. 52b0508 doesn't look like a handle value, but the DWORD at 52b0508+0x14 (0x1770) does look more like a handle value.
What does !handle 1770 1 show? If that's not a timer handle, then try !handle 0 f Timer. That will dump all the timer handles in your process and it might offer up some clues.
I get extension exception 0x80004002 "Unable to read handle information"
That means you might not have a full dump, which is required to debug managed code. How did you generate the dump? I'm confused by that error, too, because .dumpdebug does say MiniDumpWithHandleData.
I captured the memory dump via C:\Windows\SysWOW64\TaskMgr.exe: right-click the process, Create Memory Dump. (This is a 32-bit Windows service process on a 64-bit system.) Specifically, .dumpdebug reports MiniDumpWithFullMemory MiniDumpWithHandleData MiniDumpWithUnloadedModules MiniDumpWithFullMemoryInfo MiniDumpWithThreadInfo
Maybe procdump -ma <pid|procname> will include handle information. Strange, MiniDumpWithHandleData means it's in there. Maybe try a different version of WinDbg? Which version are you using?
Thank you. I will try again with procdump. I am apparently using an older version of WinDbg: 6.3.9600.17298 X86. I will try a more recent version.
Installed WinDbg 10.0.10586.567 X86. Same error on !handle, so maybe it is the memory dump. Capturing a new one with procdump soon.
Also make sure you have the MS symbol server in your symbol path. .symfix;.reload will do the trick in the latest debuggers (maybe not 6.3).
The new dump from procdump apparently has the correct info. !handle shows me 8 timers: 268, 270, 5b4, 5bc, a88, ad4, 1044, 1788. When I dump the DWORD timerHandle I don't see these handle values in the vicinity. The value 1770 is still there, but !handle 1770 returns Error retrieving type.
Then try !handle 1770 f Timer
!handle 1770 f Timer returns `Error retrieving type - unable to query object information - No object specific information available`
0x1770 is 6000 in base 10. Just for info, is the timer set to fire every six seconds?
Good catch. Yes, this timer fires every 6 seconds.
Interesting.
That tells us that at least 6 DWORDs are meaningful from the output of dd 52b0508. The first three DWORDS, 01465ae8 014b2fe0 008632fd each could be a pointer. I would dd each one and you may find the handle value in the output. The fourth DWORD, 74a498c9, looks like a function address in a system DLL. Try ln 74a498c9 and see what that displays. The fifth DWORD, 0146f798 , is actually the delegateInfo field from your !do 0204f73c output. I would definitely dd 0146f798 and inspect that output.
Is it possible to change the BackgroundColor of the snackbar on iOS? My app uses a snackbar from material-components-ios. I want to change the background color to blue, but I can't. So is it possible to change the background color of the snackbar? You have to hack it. :) In your Pods there is a file called MDCSnackbarMessageView.m, which contains a function like this. I have already changed the value to blue; it takes a hex color code. - (UIColor *)snackbarBackgroundColor { // return MDCRGBAColor(0x32, 0x32, 0x32, 1.0f); //previous grey color return MDCRGBAColor(0x10, 0x3F, 0xFF, 1.0f);// blue color } Now clean and build again. Here is the output. @ozzotto take a look at my answer Answer below from @Kumuluzz should be the accepted answer. Thank you. How do I change the button tint color? You don't have to hack it :-) Since you have added the swift tag to this question, I'll give an answer with Swift code. MDCSnackbarMessageView.appearance().snackbarMessageViewBackgroundColor = .green MDCSnackbarManager.show(MDCSnackbarMessage(text: "Hi there")) @user3659077: Great, let me know if it works! If you find the solution better than the previous answer, then feel free to change the accepted answer to mine :-) https://meta.stackexchange.com/questions/5234/how-does-accepting-an-answer-work I'm using MaterialComponents version 68.1. If you are too, try this: let message = MDCSnackbarMessage() message.text = "message" MDCSnackbarManager.messageTextColor = .white MDCSnackbarManager.snackbarMessageViewBackgroundColor = .blue MDCSnackbarManager.show(message) You can do that: MDCSnackbarManager.default.snackbarMessageViewBackgroundColor = UIColor.blue I also created a wrapper class to simplify usage: import MaterialComponents.MaterialSnackbar class Snackbar { static func show(message: String, actionMessage: String? = nil , actionHandler: MDCSnackbarMessageActionHandler? = nil, messageTextColor: UIColor? = nil, snackbarMessageViewBackgroundColor: UIColor? = nil, buttonTitleColor: UIColor? 
= nil){ MDCSnackbarManager.default.snackbarMessageViewBackgroundColor = snackbarMessageViewBackgroundColor MDCSnackbarManager.default.messageTextColor = messageTextColor MDCSnackbarManager.default.setButtonTitleColor(buttonTitleColor ?? UIColor.white, for: .normal) let snackbarMessage = MDCSnackbarMessage() snackbarMessage.text = message if(actionMessage != nil && actionHandler != nil){ let snackbarMessageAction = MDCSnackbarMessageAction() snackbarMessageAction.handler = actionHandler snackbarMessageAction.title = actionMessage snackbarMessage.action = snackbarMessageAction } MDCSnackbarManager.default.show(snackbarMessage) } } Usage : Snackbar.show(message: "Super message", snackbarMessageViewBackgroundColor: UIColor.blue)
Compare Unicode code point range in Python3 I would like to check if a character is in a certain Unicode range or not, but it seems I cannot get the expected answer. char = "？" # the unicode value is 0xff1f print(hex(ord(char))) if hex(ord(char)) in range(0xff01, 0xff60): print("in range") else: print("not in range") It should print: "in range", but the result shows: "not in range". What have I done wrong? hex() returns a string. To compare integers you should simply use ord: if ord(char) in range(0xff01, 0xff60): You could've also written: if 0xff01 <= ord(char) < 0xff60: In general for such problems, you can try inspecting the types of your variables. Typing 0xff01 without quotes represents a number. list(range(0xff01, 0xff60)) will give you a list of integers [65281, 65282, ..., 65375]. range(0xff01, 0xff60) == range(65281, 65376) evaluates to True. ord('？') gives you the integer 65311. hex() takes an integer and converts it to '0xff01' (a string). So, you simply need to use ord(), no need to hex() it. Just use ord: if ord(char) in range(0xff01, 0xff60): ... hex is not needed. As mentioned in the docs: Convert an integer number to a lowercase hexadecimal string prefixed with "0x". That already describes the problem: it becomes a string instead of what we want, an integer. Whereas the ord function does what we want, as mentioned in the docs: Given a string representing one Unicode character, return an integer representing the Unicode code point of that character. For example, ord('a') returns the integer 97 and ord('€') (Euro sign) returns 8364. This is the inverse of chr().
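The type mismatch described above can be reproduced directly; a minimal sketch:

```python
# hex() produces a string, so the membership test silently compares a str
# against integers and never matches; ord() gives the integer code point
# that range() actually contains.
char = chr(0xFF1F)  # FULLWIDTH QUESTION MARK, "？"

print(hex(ord(char)))                           # '0xff1f' -- a str
print(hex(ord(char)) in range(0xFF01, 0xFF60))  # False: a str never equals an int
print(ord(char) in range(0xFF01, 0xFF60))       # True
print(0xFF01 <= ord(char) < 0xFF60)             # True, without building a range
```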
How to display org-pomodoro timer in mode-line I saw a video on YouTube which displays a pomodoro clock in the mode line. I installed lolownia/org-pomodoro: pomodoro technique for org-mode, but its clock does not show in the mode line. Alternatively, I tried the answer How to show org-clock (and org-pomodoro) timer in mode-line? - Emacs Stack Exchange. Unfortunately, it reports an error when Emacs starts. I found that the rainbow is nyan-mode: Nyan Cat for Emacs!, which just displays the scroll position of the buffer. How could I show a basic org-clock in the mode line? The title of the question differs from the last sentence in the body of the question that states: "How could I show a basic orgclock in mode line?". This answer addresses that last component of the question body. org-mode uses the global-mode-string, which is incorporated into the default mode-line used by Emacs. To the extent a user has a custom mode-line-format that differs from the default, the global-mode-string may or may not be included as part of that user customization. To see the org-clock in the mode-line as part of a custom setup, a user may wish to ensure that the global-mode-string is incorporated. Alternatively, the user may wish to use org-timer-mode-line-string and set up a mechanism to add/remove it when turning the clock on/off.
Why does the same query explain & perform so differently on a slave MySQL server than a master? I have a master MySQL server and a slave server. The data is replicated between them. When I run this query on the master it's taking a number of hours; on the slave it takes seconds. The EXPLAIN plans back this up -- the slave examines far fewer rows than the master. However, since the structure and data in these two databases are exactly the same (or should be at least), and they're both running the same version of MySQL (5.5.31 Enterprise), I don't understand what's causing this. This is a similar symptom to this question (and others) but I don't think it's the same root cause because my two servers are in sync via MySQL replication, and the structure and data contents are (or should be) the same, and the OS & hardware resources are exactly the same on both servers -- they're VMWare and one is an image of the other. I've verified that the number of rows in each table is exactly the same on both servers, and that their configurations are the same (except for the slave having directives pointing to the master). Short of going through the data itself to see if there are any differences I'm not sure what else I can check, and would be grateful for any advice. 
The query is SELECT COUNT(DISTINCT(cds.company_id)) FROM jobsmanager.companies c , jobsmanager.company_jobsmanager_settings cjs , jobsmanager.company_details_snapshot cds , vacancies v WHERE c.company_id = cjs.company_id AND cds.company_id = c.company_id AND cds.company_id = v.jobsmanager_company_id AND cjs.is_post_a_job = 'Y' AND cjs.can_access_jobsmanager = 'Y' AND cjs.account_status != 'suspended' AND v.last_live BETWEEN cds.record_date - INTERVAL 365 DAY AND cds.record_date AND cds.record_date BETWEEN '2016-01-30' AND '2016-02-05'; The master explains it like this, 3 million rows on the driving table, no key usage, and takes over an hour to return a result: +----+-------------+-------+--------+-------------------------+----------------+---------+---------------------------------+---------+--------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-------------------------+----------------+---------+---------------------------------+---------+--------------------------+ | 1 | SIMPLE | v | ALL | job_owner,last_live_idx | NULL | NULL | NULL | 3465433 | | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | s1jobs.v.jobsmanager_company_id | 1 | Using where; Using index | | 1 | SIMPLE | cds | ref | PRIMARY,company_id_idx | company_id_idx | 4 | jobsmanager.c.company_id | 538 | Using where | | 1 | SIMPLE | cjs | eq_ref | PRIMARY,qidx,qidx2 | PRIMARY | 4 | jobsmanager.c.company_id | 1 | Using where | +----+-------------+-------+--------+-------------------------+----------------+---------+---------------------------------+---------+--------------------------+ The slave uses a different driving table, uses an index, predicts more like 310,000 rows examined, and returns the result within a couple of seconds: +----+-------------+-------+--------+-------------------------+-----------+---------+----------------------------+--------+--------------------------+ | id | select_type | 
table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-------------------------+-----------+---------+----------------------------+--------+--------------------------+ | 1 | SIMPLE | cds | range | PRIMARY,company_id_idx | PRIMARY | 3 | NULL | 310381 | Using where; Using index | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | jobsmanager.cds.company_id | 1 | Using index | | 1 | SIMPLE | cjs | eq_ref | PRIMARY,qidx,qidx2 | PRIMARY | 4 | jobsmanager.c.company_id | 1 | Using where | | 1 | SIMPLE | v | ref | job_owner,last_live_idx | job_owner | 2 | jobsmanager.cds.company_id | 32 | Using where | +----+-------------+-------+--------+-------------------------+-----------+---------+----------------------------+--------+--------------------------+ I've run ANALYZE TABLE, OPTIMIZE TABLE and REPAIR TABLE ... QUICK on both servers to try to make them consistent, with no luck. As a temporary solution I can run the queries on the slave, as they're in cron scripts and even if they take a long time on the slave they won't increase load on the master the way they do when they run on the master. However I'd be grateful for any other information on why these are different or what else I could check/revise which would explain such a drastic difference between the two. The only thing I can find is that the slave has more free memory, as it's in little use; would that alone account for this? If not what else? $ ssh s1-mysql-01 free # master total used free shared buffers cached Mem: 99018464 98204624 813840 0 160752 55060632 -/+ buffers/cache: 42983240 56035224 Swap: 4095992 4095992 0 $ ssh s1-mysql-02 free # slave total used free shared buffers cached Mem: 99018464 80866420 18152044 0 224772 72575168 -/+ buffers/cache: 8066480 90951984 Swap: 4095992 206056 3889936 $ Thanks very much. The only really big difference between the 2 explains is that on the master no index is used on the vacancies table. 
You could try placing an index hint (force index) into the select on the master to force the use of the job_owner index. You can also try to run analyze table on all tables involved in the above query on the master to make sure that the table and index stats are updated. Ok thanks Shadow will do. Have you forced it to refresh the indexes recently on one server? If the stats are out of date it could be mistakenly ignoring indexes. I would suggest trying to do an optimize table ( http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html ) on the master and see if that improves things. Hi, thanks I'd run optimize & analyze already with no great difference. I've now revised it to force index (job_owner) as Shadow suggested and that's brought the master's EXPLAIN down to 310k rows, and it's now returning within a couple of seconds. Thanks for the help. @Shadow - What is job_owner? It seems irrelevant. @JeremyJones - please provide SHOW CREATE TABLE. @RickJames job_owner is an index's name. Apparently it is not irrelevant. The only really big difference between the 2 explains is that on the master no index is used on the vacancies table. You could try placing an index hint (force index) into the select on the master to force the use of the job_owner index. You can also try to run analyze table on all tables involved in the above query on the master to make sure that the table and index stats are updated. Adding FORCE INDEX job_owner to the query successfully changed the EXPLAIN plan on the master to be 310k rows, similar to the slave. Do we know what is causing the difference? I have the same exact issue but I cannot determine why the master is not using the index. I also have the same problem, but in my case the slave was not using the index. Index hints (use index / force index) helped, but it's not a good solution in this case. So I tried running analyze table on the slave server, and it fixed the problem: ANALYZE NO_WRITE_TO_BINLOG TABLE tbl_name Now both servers use the correct indexes. 
NO_WRITE_TO_BINLOG is needed when it's running on a replica. Also, ANALYZE should be executed during a low-load period or a maintenance window, otherwise you can get many user queries stuck in the Waiting for table flush state.
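For reference, the index hint discussed in the accepted answer goes directly after the table name in the FROM clause; a sketch of the revised query (table and index names from the thread — only the FROM clause changes, the exact hint syntax requires parentheses around the index name):

```sql
-- Hypothetical sketch: forcing the job_owner index on the vacancies table,
-- as suggested in the accepted answer.
SELECT COUNT(DISTINCT(cds.company_id))
FROM jobsmanager.companies c
   , jobsmanager.company_jobsmanager_settings cjs
   , jobsmanager.company_details_snapshot cds
   , vacancies v FORCE INDEX (job_owner)
WHERE c.company_id = cjs.company_id
  AND cds.company_id = c.company_id
  AND cds.company_id = v.jobsmanager_company_id
  AND cjs.is_post_a_job = 'Y'
  AND cjs.can_access_jobsmanager = 'Y'
  AND cjs.account_status != 'suspended'
  AND v.last_live BETWEEN cds.record_date - INTERVAL 365 DAY AND cds.record_date
  AND cds.record_date BETWEEN '2016-01-30' AND '2016-02-05';
```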
Problems assigning value with setState New to using React and having some trouble trying to get changes to a Textfield to update state. Using a functional component where the initial state is set with useState. I'm sure I'm just missing something simple... but finally giving up and asking for help. Here's the full code for the functional component: import React, { useState } from 'react'; import Step1 from '../Element/FormStep1'; const NewForm = () => { let [object, setObject] = useState({ property: { entry: 'string' } }); function handleChange(event) { const {name, value} = event.target; setObject({ ...object, [name]: value }) console.log(object) }; return ( <form> <Step1 handleChange={handleChange} object={object} /> </form> ); } export default NewForm And for the Form Component: import React from 'react'; import TextField from '@material-ui/core/TextField'; import t from 'typy' export default function Step1(props) { return( <React.Fragment> <TextField name={t(props.object, 'property.entry').safeObject} type='text' onChange={props.handleChange} /> </React.Fragment> ); } When the handleChange function runs, instead of replacing the target property, it creates a new property with the original value as the name i.e. object { property: { entry: 'string' }, string: value //Text input } The intention is to replace the value ‘string’ with the text input. object { property: { entry: 'value' //Text input } } Where does ...object come from? What does setObject do? Please provide the full code; it can't be answered otherwise. @errorQD please add the useState line From your question, I can't understand what is wrong and how it should be; could you add the desired behavior and what is wrong? (Show how the object should be if it worked correctly) 
So a minor change to the name property on my Textfield was required: <TextField name='property.value' type='text' onChange={props.handleChange} /> And after importing set in my function component file, import { set } from 'lodash'; I could amend the handleChange function as follows: function handleChange(event) { const {name, value} = event.target; set(object, name, value) console.log(object) // to see that it worked in the console. } This is now working as intended (several hours of frustration later). Note... here's where I actually found the answer: https://levelup.gitconnected.com/handling-complex-form-state-using-react-hooks-76ee7bc937 If you want to have an object with a value property, you should add {}. You should change setObject({ ...object, [name]: value }) to setObject({ ...object, [name]: { value } // added { } }) Your object will be changed in the next render. function handleChange(event) { const {name, value} = event.target; const newObject = { ...object, [name]: value } setObject(newObject) console.log('newObject', newObject) console.log('currentObject', object) }; newObject != object
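The root cause — a computed property key treats the dotted name as one literal key — can be reproduced outside React; a sketch in plain JavaScript (names mirror the question, no React required):

```javascript
// With name = "property.entry", the spread-with-computed-key pattern adds a
// new top-level "property.entry" key instead of updating the nested field.
const object = { property: { entry: 'string' } };
const name = 'property.entry';
const value = 'typed text';

const wrong = { ...object, [name]: value };
console.log(wrong['property.entry']); // 'typed text'  (new top-level key)
console.log(wrong.property.entry);    // 'string'      (nested value untouched)

// Updating the nested field immutably means spreading each level by hand:
const right = { ...object, property: { ...object.property, entry: value } };
console.log(right.property.entry);    // 'typed text'
console.log(object.property.entry);   // 'string' -- the original is not mutated
```

This is also why mutating state with lodash's `set` appears to work in the console but can skip re-renders: React only sees a new object when the reference changes.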
Karbonn Titanium S1 suddenly switches off My Karbonn Titanium S1 smartphone switches off when I open the gallery, play any game, listen to music or do anything with the phone without connecting the charger. The screen starts to shrink and the phone switches off. After restarting the phone the battery is empty. How can I solve this? Well, that's weird. Maybe you should try performing a factory reset to see if that helps you out! And if it still doesn't, it would be better to take it to customer support, as it may be happening because of some hardware problem. Is the battery indicator on the phone showing full charge before you disconnect it? "and after restarting the phone battery is empty" This shows the battery of the phone is dead. You will need to replace the battery to stop a sudden drain like this. I had the same problem with the same phone. I changed my battery and everything is OK now
twitter bootstrap adjust label field properly in horizontal form We have the following form: <div class="form-horizontal"> <fieldset> <legend>Reports</legend> <div class="control-group"> <label class="control-label">Year</label> <div class="controls"> <select id="ddlYear" class="input-small"> <option value="2011">2011</option> <option selected="selected" value="2012">2012</option> </select> </div> </div> <div class="control-group"> <label class="control-label">Project</label> <div class="controls"> <label class="" id="lbProjectDesc">House X repairs</label> </div> </div> <div class="control-group"> <label class="control-label">Costs</label> <div class="controls"> <div class="input-append"> <input type="text" value="0,0" class="span2" id="tbCosts" disabled="disabled" > <span class="add-on">€</span> </div> </div> </div> <hr> <div class="control-group"> <div class="controls"> <input type="submit" value="Save" onclick="$(this).button('loading');" id="btnSave" class="btn btn-primary" data-loading-text="Saving..."> </div> </div> </fieldset> </div> http://jsfiddle.net/9JNcJ/2/ The problem is that the label lbProjectDesc (text = "House X repairs") is not displayed correctly. What class can we set or what change needs to be made to fix it? (Preferably the correct way, not a css hack) Taking into account Selva's and davidkonrad's responses, we solved it like this: .form-horizontal .control-group .controls label { margin-top: 5px; } This way it affects all the cases where labels are inside horizontal forms but doesn't vertically displace the description labels of other input types... It works with padding-top: 5px; too. #lbProjectDesc { margin-top: 5px; } (I don't consider this a "hack" :) forked: http://jsfiddle.net/davidkonrad/LK95Q/ +1 for being valid, but since it is not applicable globally (we would need 1 rule per label inside a horizontal form) we cannot mark it as the correct answer Avoid the unwanted padding and set whatever you want. Here the problem is the padding. 
#lbProjectDesc { margin-top: 5px; } Add this and see. Demo: http://jsfiddle.net/9JNcJ/5/ The problem with this solution is that it affects all the .control-labels in your fiddle... if you modify the padding-top you can see how it moves the rest of the labels (which affects the vertical alignment between "Year" and its select, and between "Costs" and its input). You can add a class to target it specifically.
What could be the problem in the following code snippet? The following piece of code was giving a segmentation fault whenever I was trying to pass ./a.out www.yahoo.com at the shell... main(int c,char *argv[]) { struct hostent *ptr; ptr = gethostbyname(argv[1]); printf("%s\n", ptr->h_name); } @ThiefMaster: He can name it whatever he likes. On a Linux system that builds fine (and runs fine) with gcc 4.5 (after adding #include for netdb.h, sys/socket.h and stdio.h) @DeadMG: True, but argc/argv are the de-facto standard. You should check the return value (ptr) if it is NULL (gethostbyname returns NULL on error). When the function returns NULL you can check h_errno to see what exactly happened. See also: http://www.manpagez.com/man/3/gethostbyname/ You should also check the number of command line arguments before you pass an argument to the gethostbyname function: if(c < 2) { /* print an error */ return 1; } Are you sure you're passing an argument to the application's command line? EDIT You must also check that gethostbyname() doesn't return NULL. whenever I was trying to pass ./a.out www.yahoo.com at the shell... Check if two parameters are passed as command line parameters. Check if gethostbyname returned a valid pointer, and report the problem as needed. int main(int argc,char *argv[]) { struct hostent *ptr; /* Check if there are enough arguments */ if (argc != 2) { printf ("\nusage: %s <host_name>\n", argv[0]); exit (1); } /* fill up hostent structure */ ptr = gethostbyname(argv[1]); /* Check if we have a valid one */ if (ptr != NULL) { printf ("\n%s\n", ptr->h_name); } else { /* Print the error */ printf ("\n%s", hstrerror (h_errno)); } printf ("\n"); return 0; } This works fine here on my system with gcc file.c -Wall -Wextra and ./a.out says usage: ./a.out <host_name> And ./a.out yahoo.com tells yahoo.com EDIT1: Manuals say ... The gethostbyname*() and gethostbyaddr*() functions are obsolete. Applications should use getaddrinfo(3) and getnameinfo(3) instead.
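Since the man page quoted at the end recommends getaddrinfo(3), here is a hedged sketch of what the same program could look like with the modern API (this is not the answerer's code; it keeps the argument and error checks from the accepted answer):

```c
/* Sketch: resolve a host name with getaddrinfo() instead of the
 * obsolete gethostbyname(), printing every address it returns. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

/* Returns 0 on success (printing each resolved address), -1 on failure. */
int resolve(const char *host)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    int err = getaddrinfo(host, NULL, &hints, &res);
    if (err != 0) {
        /* gai_strerror() replaces hstrerror()/h_errno */
        fprintf(stderr, "%s: %s\n", host, gai_strerror(err));
        return -1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char buf[INET6_ADDRSTRLEN];
        const void *addr = (p->ai_family == AF_INET)
            ? (const void *)&((struct sockaddr_in *)(void *)p->ai_addr)->sin_addr
            : (const void *)&((struct sockaddr_in6 *)(void *)p->ai_addr)->sin6_addr;
        printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
    }
    freeaddrinfo(res);
    return 0;
}

int main(int argc, char *argv[])
{
    if (argc != 2) {  /* same argument check as the accepted answer */
        fprintf(stderr, "usage: %s <host_name>\n", argv[0]);
        return 1;
    }
    return resolve(argv[1]) == 0 ? 0 : 1;
}
```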
How to update UITableviewcell label in other method in swift3? I am trying to update a UITableviewcell's label in another method. I also saw many posts on Stack Overflow; none worked for me. I am using Swift 3 and I tried this: let indexPath = IndexPath.init(row: 0, section: 0) let cell : CategoryRow = tableView.cellForRow(at: indexPath) as! CategoryRow and I am getting this error: fatal error: unexpectedly found nil while unwrapping an Optional value I am using this in another method where I update the tableview cell's label text after 24 hours. The code is here: var myQuoteArray = ["“If you have time to breathe you have time to meditate. You breathe when you walk. You breathe when you stand. You breathe when you lie down. – Ajahn Amaro”","We are what we repeatedly do. Excellence, therefore, is not an act but a habit. – Aristotle","The best way out is always through. – Robert Frost","“If you have time to breathe you have time to meditate. You breathe when you walk. You breathe when you stand. You breathe when you lie down. – Ajahn Amaro”","We are what we repeatedly do. Excellence, therefore, is not an act but a habit. – Aristotle","The best way out is always through. – Robert Frost" ] func checkLastRetrieval(){ // let indexPath = IndexPath.init(row: 0, section: 0) // let cell : CategoryRow = tableView.cellForRow(at: indexPath) as? CategoryRow let indexPath = IndexPath.init(row: 0, section: 0) guard let cell = tableView.cellForRow(at: indexPath) as? CategoryRow else { return } print("Getting Error line 320")// This is not printing on console let userDefaults = UserDefaults.standard if let lastRetrieval = userDefaults.dictionary(forKey: "lastRetrieval") { if let lastDate = lastRetrieval["date"] as? NSDate { if let index = lastRetrieval["index"] as? 
Int { if abs(lastDate.timeIntervalSinceNow) > 86400 { // seconds in 24 hours // Time to change the label var nextIndex = index + 1 // Check to see if next incremented index is out of bounds if self.myQuoteArray.count <= nextIndex { // Move index back to zero? Behavior up to you... nextIndex = 0 } cell?.quotationLabel.text = self.myQuoteArray[nextIndex] let lastRetrieval : [NSObject : AnyObject] = [ "date" as NSObject : NSDate(), "index" as NSObject : nextIndex as AnyObject ] userDefaults.set(lastRetrieval, forKey: "lastRetrieval") userDefaults.synchronize() } // Do nothing, not enough time has elapsed to change labels } } } else { // No dictionary found, show first quote cell?.quotationLabel.text = self.myQuoteArray.first! print("line 357") // Make new dictionary and save to NSUserDefaults let lastRetrieval : [NSObject : AnyObject] = [ "date" as NSObject : NSDate(), "index" as NSObject : 0 as AnyObject ] userDefaults.set(lastRetrieval, forKey: "lastRetrieval") userDefaults.synchronize() } } and I call this method in viewDidLoad(): override func viewDidLoad() { super.viewDidLoad() checkLastRetrieval() } What should I do? Thanks in advance. Make the CategoryRow label optional with ? instead of ! I think you are updating the cell while there are no cells in the tableview. Is this cell visible when you are trying to do that? Nauman Malik, I've done this, but it doesn't work. Kirander, I have a CategoryRow cell, which is working fine, and other elements are properly showing in cellForRowAtIndexPath, but it doesn't work outside. Lu, ...Yes, this shows perfectly with other elements. Please edit your question to show more context for this code; As it stands the answer is simple; you are force unwrapping an optional that is returning nil because there is no cell onscreen at [0,0] at the time you ask for it; If you show more context as to where and how these lines fit into your app we may be able to help you @Paulw11.. Please check the updated question. 
Your table view will not have been rendered in viewDidLoad, so there will be no cells on screen at that point. Cells won't be onscreen until viewDidAppear. As I said in a comment below, the correct thing to do is update your table's data model so that the correct data is shown when the cell is subsequently loaded. The function cellForRow(at:) returns nil if the requested cell is not currently onscreen. Since you are calling your function in viewDidLoad there will be no cells on screen. This results in the crash because you have force downcast the result of cellForRow(at:). You should never use a force unwrap or a force downcast unless you are absolutely certain the result cannot be nil and/or the only possible action if it is nil is for your app to crash. Generally you should not update cells directly. It is better to update the table's data model and then call reloadRows(at:,with:) and let your data source cellForRow(at:) function handle updating the cell. If you are going to update the cell directly then make sure you use a conditional downcast with if let or guard let to avoid a crash if the cell isn't visible. In this case you still need to update your data model so that the cell data is correct when the cell does come on screen. Thanks Paulw11, now I am able to get the cell properly, but my label text is still not updated. The fact that you were trying to run the code in viewDidLoad makes me suspicious; if the cell is already on the screen, why would you need to run anything in viewDidLoad? I suspect that you are creating a new instance of the view controller and updating that, rather than updating the instance that is already on screen. Your update seems to be time-based. Where is the timer running? What do you do when it fires? As per Apple's description, I can't run a timer for more than 3 minutes; that's why I've stored my custom data in a dictionary, then check it against the last date and update the label. Maybe I am doing it wrong? 
What I can't understand is why you are doing all of this at all; All you need to do is load the data into the table whenever the app loads and check whether the data needs to be refreshed when your app returns to the foreground. If it does, reload the row. You should use a guard let statement: let indexPath = IndexPath(row: 0, section: 0) guard let cell = tableView.cellForRow(at: indexPath) as? CategoryRow else { return } If your cell type is not CategoryRow, you will return from the method. Try to find out why your cell type is not a CategoryRow. Thanks for the reply, I have a CategoryRow cell, which is working fine, and other elements are properly showing in cellForRowAtIndexPath, but it doesn't work outside. More importantly cellForRow(at:) will return nil if the cell isn't currently on the screen; you should be prepared for this by using an if let or guard let as suggested. Generally, you should update your data source and then reload the affected row rather than updating the cell directly. @ChetanLodhi, where do you use this code (which method)? V.Khambir......Actually I use this code in another method where I update the label text after 24 hours, and this method is called in viewDidLoad. @ChetanLodhi, probably, at this moment you don't have any cells at the IndexPath(row: 0, section: 0). How do I check if I'm getting the cell label correctly outside? switch (indexPath.row) { case 0: let cell : CategoryRow = tableView.dequeueReusableCell(withIdentifier: "cell") as! CategoryRow cell.trainButton.layer.cornerRadius = 20}.... this is working fine for me.
How to determine the correct way to pass the struct parameters in arm64 assembly? Suppose I have this function that has many struct parameters (example from Raylib): void DrawTexturePro(Texture texture, Rectangle source, Rectangle dest, Vector2 origin, float rotation, Color tint) { // do something with these } full test.c: // Texture, tex data stored in GPU memory (VRAM) typedef struct Texture { unsigned int id; // OpenGL texture id int width; // Texture base width int height; // Texture base height int mipmaps; // Mipmap levels, 1 by default int format; // Data format (PixelFormat type) } Texture; // Rectangle, 4 components typedef struct Rectangle { float x; // Rectangle top-left corner position x float y; // Rectangle top-left corner position y float width; // Rectangle width float height; // Rectangle height } Rectangle; // Vector2, 2 components typedef struct Vector2 { float x; // Vector x component float y; // Vector y component } Vector2; // Color, 4 components, R8G8B8A8 (32bit) typedef struct Color { unsigned char r; // Color red value unsigned char g; // Color green value unsigned char b; // Color blue value unsigned char a; // Color alpha value } Color; void DrawTexturePro(Texture texture, Rectangle source, Rectangle dest, Vector2 origin, float rotation, Color tint) { // do something with these } int main(int argc, char** argv) { Texture tex = {0, 1, 2, 3, 4}; Rectangle rec = { 0.0f, 0.1f, 0.2f, 0.3f}; Vector2 vec = { 0.4f, 0.5f}; Color color = {'a', 'b', 'c', 'd'}; DrawTexturePro(tex, rec, rec, vec, 0.6f, color); return 0; } when I try to disassemble this code, it is very interesting to see that: _DrawTexturePro: ; @DrawTexturePro .cfi_startproc ; %bb.0: sub sp, sp, #64 .cfi_def_cfa_offset 64 ldr w10, [sp, #64] ldr w9, [sp, #68] ldr w8, [sp, #72] str s0, [sp, #48] str s1, [sp, #52] str s2, [sp, #56] str s3, [sp, #60] str s4, [sp, #32] str s5, [sp, #36] str s6, [sp, #40] str s7, [sp, #44] str w10, [sp, #24] str w9, [sp, #28] str x1, [sp, #8] ldr w9, [sp, 
#8] str w9, [sp, #20] str w8, [sp, #4] add sp, sp, #64 ret .cfi_endproc full disassembly: .section __TEXT,__text,regular,pure_instructions .build_version macos, 13, 0 sdk_version 13, 1 .globl _DrawTexturePro ; -- Begin function DrawTexturePro .p2align 2 _DrawTexturePro: ; @DrawTexturePro .cfi_startproc ; %bb.0: sub sp, sp, #64 .cfi_def_cfa_offset 64 ldr w10, [sp, #64] ldr w9, [sp, #68] ldr w8, [sp, #72] str s0, [sp, #48] str s1, [sp, #52] str s2, [sp, #56] str s3, [sp, #60] str s4, [sp, #32] str s5, [sp, #36] str s6, [sp, #40] str s7, [sp, #44] str w10, [sp, #24] str w9, [sp, #28] str x1, [sp, #8] ldr w9, [sp, #8] str w9, [sp, #20] str w8, [sp, #4] add sp, sp, #64 ret .cfi_endproc ; -- End function .globl _main ; -- Begin function main .p2align 2 _main: ; @main .cfi_startproc ; %bb.0: sub sp, sp, #144 stp x29, x30, [sp, #128] ; 16-byte Folded Spill add x29, sp, #128 .cfi_def_cfa w29, 16 .cfi_offset w30, -8 .cfi_offset w29, -16 mov w8, #0 str w8, [sp, #20] ; 4-byte Folded Spill stur wzr, [x29, #-4] stur w0, [x29, #-8] stur x1, [x29, #-16] adrp x8, l___const.main.tex@PAGE add x8, x8, l___const.main.tex@PAGEOFF ldr q0, [x8] stur q0, [x29, #-48] ldr w8, [x8, #16] stur w8, [x29, #-32] adrp x8, l___const.main.rec@PAGE add x8, x8, l___const.main.rec@PAGEOFF ldr q0, [x8] str q0, [sp, #64] adrp x8, l___const.main.vec@PAGE add x8, x8, l___const.main.vec@PAGEOFF ldr x8, [x8] str x8, [sp, #56] adrp x8, l___const.main.color@PAGE add x8, x8, l___const.main.color@PAGEOFF ldr w8, [x8] str w8, [sp, #52] ldur q0, [x29, #-48] add x0, sp, #32 str q0, [sp, #32] ldur w8, [x29, #-32] str w8, [sp, #48] ldr s0, [sp, #64] ldr s1, [sp, #68] ldr s2, [sp, #72] ldr s3, [sp, #76] ldr s4, [sp, #64] ldr s5, [sp, #68] ldr s6, [sp, #72] ldr s7, [sp, #76] ldr w10, [sp, #56] ldr w9, [sp, #60] ldr w8, [sp, #52] str w8, [sp, #24] ldr x1, [sp, #24] mov x8, sp str w10, [x8] str w9, [x8, #4] mov w9, #39322 movk w9, #16153, lsl #16 fmov s16, w9 str s16, [x8, #8] bl _DrawTexturePro ldr w0, [sp, #20] ; 4-byte 
Folded Reload ldp x29, x30, [sp, #128] ; 16-byte Folded Reload add sp, sp, #144 ret .cfi_endproc ; -- End function .section __TEXT,__const .p2align 2 ; @__const.main.tex l___const.main.tex: .long 0 ; 0x0 .long 1 ; 0x1 .long 2 ; 0x2 .long 3 ; 0x3 .long 4 ; 0x4 .section __TEXT,__literal16,16byte_literals .p2align 2 ; @__const.main.rec l___const.main.rec: .long 0x00000000 ; float 0 .long 0x3dcccccd ; float 0.100000001 .long 0x3e4ccccd ; float 0.200000003 .long 0x3e99999a ; float 0.300000012 .section __TEXT,__literal8,8byte_literals .p2align 2 ; @__const.main.vec l___const.main.vec: .long 0x3ecccccd ; float 0.400000006 .long 0x3f000000 ; float 0.5 .section __TEXT,__literal4,4byte_literals l___const.main.color: ; @__const.main.color .byte 97 ; 0x61 .byte 98 ; 0x62 .byte 99 ; 0x63 .byte 100 ; 0x64 Looks like the params are passed so that parts of the structs are in multiple registers. According to the AArch64 parameter passing rules, if the Composite Type (in this case the struct?) is larger than 16 bytes, then rule B.4 dictates that it will be copied to allocated memory and passed as an address. If the argument type is a Composite Type that is larger than 16 bytes, then the argument is copied to memory allocated by the caller and the argument is replaced by a pointer to the copy. However, Rectangle is a struct comprised of 4 float values, making its size at least 32 bytes. Why then are its members passed in s0-s3 and s4-s7 (lines 71 to 79 of my disassembly) instead of just a single address passed in a register (and btw if that is the case, which register set will this address be used in, the regular or the floating point registers?) Rectangle's size is only 16 bytes since each IEEE-754 float is 4 bytes. The compiler is correct. (thanks @Siguza) My question is twofold: If I look at the C function declaration, how do I tell which way the compiler would like me to pass parameters to it using AArch64 assembly?
(for example, passing an address of the struct in one register vs passing the struct's values in multiple registers) did the AArch64 procedure call standard address it somehow and I'm just not seeing it, or is this really not defined? EDIT: Question clarified following @httpdigest's pointer in the comment. EDIT2: Question error fixed following @Siguza's comment. @httpdigest: Thank you for the pointer, and I tried to read that section again. However, it doesn't look right: it claims that Composite Types larger than 16 bytes will be passed by reference, but Rectangle is definitely larger than 16 bytes (4 floats each 8 bytes = 32 bytes) but it is still passed into s0 - s7 for some reason. I'll update my question to add that part. Honestly it's contradictory to what the output of the compiler is. @stanle float is 4 bytes. Nevertheless though, AAPCS64 is not the ABI used by Darwin. Some of the differences are outlined here, but I'm not convinced that this is a complete list. @Siguza: AHA! That's where I'm wrong. I typed "single precision (IEEE 754) how many bytes" into google and got 8 bytes as the answer and just didn't think enough to read into it. Turns out float is 4 bytes and double is 8 bytes lol. Thank you! That clears it up. Thanks to @httpdigest and @Siguza I think the answer to my question is as follows: The parameter passing rules for AArch64 can be found here. Darwin's rules that diverge from the standard can be found here. To determine the correct way of passing a parameter when you are looking at a function: Know the size of your param. Check if the param is larger than 16 bytes. If it is, then allocate memory, copy the param to said memory and pass a pointer to it in the first available general-purpose register. If the param is 16 bytes or smaller and is not a composite type, depending on which machine type it is (reference here) pass it in either the general-purpose registers (x0-x7 for example) or the floating-point registers (q0-q7 for example).
If the param is a struct 16 bytes or smaller, then each of its elements will be loaded into the registers one by one (for example, lines 70-73 in my disassembly show how Rectangle is passed as four floats in s0 to s3). Thank you very much everyone! This is making more sense to me now.
common-pile/stackexchange_filtered
Extract some keywords like rent, deposit, liabilities etc. from unstructured document Writing an algorithm to extract some keywords like rent, deposit, liabilities etc. from a rent agreement document. I used a "naive bayes classifier" but it is not giving the desired output. My training data is like: train = [ ("refundable security deposit Rs 50000 numbers equal 5 months","deposit"), ("Lessee pay one month's advance rent Lessor","security"), ("eleven (11) months commencing 1st march 2019","duration"), ("commence 15th feb 2019 valid till 14th jan 2020","startdate")] The below code is not giving the desired keyword: classifier.classify(test_data_features) Please share if there are any libraries in NLP to accomplish this. Seems like you need to make your own specific NER (Named Entity Recognizer) for parsing your unstructured document, where you need to tag every word of your sentence with certain labels. Based on the surrounding words and context window, your trained NER will be able to give you the results which you are looking for. Check the Stanford CoreNLP implementation of NER. Thanks for the response. I was checking NER and it looks like it is often used to extract information about person, location, organisation etc. I am not sure if it can be used to identify words like "deposit", "startdate", etc. You have to make your own CRF
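To make the "surrounding words and context window" idea concrete, here is a minimal, hypothetical sketch in plain Python (no particular NER library; all names are illustrative) of the per-token features a CRF-style tagger would consume:

```python
def token_features(tokens, i):
    """Context-window features for token i, the kind of input a CRF tagger
    (e.g. one trained to emit labels like 'deposit' or 'startdate') uses."""
    word = tokens[i]
    return {
        "word": word.lower(),
        "is_digit": word.isdigit(),  # amounts like 50000
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "</s>",
    }

tokens = "refundable security deposit Rs 50000".split()
print(token_features(tokens, 2))
# {'word': 'deposit', 'is_digit': False, 'prev': 'security', 'next': 'rs'}
```

A tagger trained on such features can then label unseen tokens from their context, which is what distinguishes this from a per-sentence naive bayes classifier.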
WKWebView File upload isn't working from SwiftUI Package but it's working fine in a normal project I am developing a SwiftUI package, part of which is for a WebView. I am using WKWebView. Now, the issue is that when I use the same code in a typical project, the image/file upload works fine inside the webview. But when I build the package and use the package inside the project, the file upload isn't working. Inside a webview, enabling JavaScript is required for uploading files, as far as I know. I've enabled this as well. Camera and photo library access permissions are also added inside Info.plist. Could anybody tell me what is missing? Is there anything additional needed for the package? #if os(iOS) typealias WebViewRepresentable = UIViewRepresentable //#elseif os(macOS) //typealias WebViewRepresentable = NSViewRepresentable #endif #if os(iOS) //|| os(macOS) import SwiftUI import WebKit import MobileCoreServices /** This view wraps a `WKWebView` and can be used to load a URL that refers to both remote or local web pages. When you create this view, you can either provide it with a url, or an optional url and a view configuration block that can be used to configure the created `WKWebView`. You can also provide a custom `WKWebViewConfiguration` that can be used when initializing the `WKWebView` instance. */ public struct WebViewTS: WebViewRepresentable { // MARK: - Initializers /** Create a web view that loads the provided url after the provided configuration has been applied. If the `url` parameter is `nil`, you must manually load a url in the configuration block. If you don't, the web view will not present any content. - Parameters: - url: The url of the page to load into the web view, if any. - webConfiguration: The WKWebViewConfiguration to apply to the web view, if any. - webView: The custom configuration block to apply to the web view, if any. */ public init( appData: String, webConfiguration: WKWebViewConfiguration?
= nil, viewConfiguration: @escaping (WKWebView) -> Void = { _ in }, callback: @escaping (DownloadRes) -> ()){ self.appData = appData self.webConfiguration = webConfiguration self.viewConfiguration = viewConfiguration self.callback = callback } // MARK: - Properties private let appData: String private let webConfiguration: WKWebViewConfiguration? private let viewConfiguration: (WKWebView) -> Void let callback: (DownloadRes) -> () var downloadUrl = URL(fileURLWithPath: "") // MARK: - Functions #if os(iOS) public func makeUIView(context: Context) -> WKWebView { makeView(context: context) } public func updateUIView(_ uiView: WKWebView, context: Context) {} #endif #if os(macOS) public func makeNSView(context: Context) -> WKWebView { makeView() } public func updateNSView(_ view: WKWebView, context: Context) {} #endif private func setupFilePicker(for webView: WKWebView, context: Context) { webView.navigationDelegate = context.coordinator } private func setupDownloadHandling(for webView: WKWebView, context: Context) { webView.navigationDelegate = context.coordinator } public func makeCoordinator() -> Coordinator { Coordinator(self) } public class Coordinator: NSObject, WKNavigationDelegate, UINavigationControllerDelegate, UIImagePickerControllerDelegate, WKDownloadDelegate { @available(iOS 14.5, *) public func webView(_ webView: WKWebView, navigationAction: WKNavigationAction, didBecome download: WKDownload) { download.delegate = self } @available(iOS 14.5, *) public func download(_ download: WKDownload, decideDestinationUsing response: URLResponse, suggestedFilename: String, completionHandler: @escaping (URL?) 
-> Void) { print(suggestedFilename) let fileManager = FileManager.default let documentDirectory = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0] let fileUrl = documentDirectory.appendingPathComponent("\(suggestedFilename)", isDirectory: false) parent.downloadUrl = fileUrl print(parent.downloadUrl) completionHandler(fileUrl) } // MARK: - Optional @available(iOS 14.5, *) public func downloadDidFinish(_ download: WKDownload) { print("final") print(parent.downloadUrl) // Present the alert controller on the main thread DispatchQueue.main.async { let downRes = DownloadRes(isSuccess: true, path: self.parent.downloadUrl.absoluteString, url: self.parent.downloadUrl) self.parent.callback(downRes) } } @available(iOS 14.5, *) public func download(_ download: WKDownload, didFailWithError error: Error, resumeData: Data?) { DispatchQueue.main.async { let downRes = DownloadRes(isSuccess: false, path: "", url: URL(fileURLWithPath:"")) self.parent.callback(downRes) } } var parent: WebViewTS init(_ parent: WebViewTS) { self.parent = parent } // Implement WKNavigationDelegate methods for file upload // For example, you can use webView(_:decidePolicyFor:decisionHandler:) // to handle file upload requests. 
public func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, preferences: WKWebpagePreferences, decisionHandler: @escaping (WKNavigationActionPolicy, WKWebpagePreferences) -> Void) { if #available(iOS 14.5, *) { if navigationAction.shouldPerformDownload { decisionHandler(.download, preferences) } else { decisionHandler(.allow, preferences) } } else { // Fallback on earlier versions } } } } private extension WebViewTS { func makeWebView() -> WKWebView { let configuration = WKWebViewConfiguration() configuration.defaultWebpagePreferences.allowsContentJavaScript = true let script = """ var script = document.createElement('script'); script.src = 'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=default&#038;ver=1.3.8'; script.type = 'text/javascript'; document.getElementsByTagName('head')[0].appendChild(script); """ let userScript = WKUserScript(source: script, injectionTime: .atDocumentStart, forMainFrameOnly: true) let contentController = WKUserContentController() contentController.addUserScript(userScript) let webViewConfiguration = WKWebViewConfiguration() webViewConfiguration.preferences.setValue(true, forKey: "allowFileAccessFromFileURLs") let preferences = WKWebpagePreferences() preferences.allowsContentJavaScript = true webViewConfiguration.defaultWebpagePreferences = preferences webViewConfiguration.userContentController = contentController return WKWebView(frame: CGRect(x: 0.0, y: 0.0, width: 0.1, height: 0.1), configuration: webViewConfiguration) } func makeView(context: Context) -> WKWebView { let view = makeWebView() viewConfiguration(view) setupFilePicker(for: view, context: context) setupDownloadHandling(for: view, context: context) tryLoad(into: view) return view } func tryLoad( into view: WKWebView) { // let url = URL(string: AppConstants.BASE_URL) // view.loadHTMLString(appData, baseURL: url) let url = URL(string: "https://www.ilovepdf.com/jpg_to_pdf") let request = URLRequest(url: url!) 
view.load(request) } } #endif I've enabled JavaScript in several places and am trying with a coordinator to enable file upload. When I attach an image from the photo library or camera, it works fine with the local code. But when I use the same code from the SwiftUI package, it's not working.
De facto template application in Python Is there a de facto template application for Python? I am trying to auto-generate C code for use in unit tests from Python. My original approach using print statements is very clunky and error prone. It struck me that a template application such as those used in web app development might be a more elegant solution. My initial research seems to suggest Cheetah is a good option, however there seem to be many potential options. My requirements would be that it is reliable and simple - would Cheetah represent 'best practice' for this sort of application? While there are many standalone and powerful template engines in Python, there is one in the Python standard library. There is a Template class in the string module that implements PEP 292 "Simpler String Substitutions" and is very easy to learn and use. Personally I prefer to use this in my unit tests. I think this is the simplest and best solution for my C auto-generation needs (for now...) I don't think there's such a thing as a de facto templating system for Python, there's quite a wide variety of them. Personally, I haven't tried out Cheetah yet, but had some very successful experiences with Jinja2, and there's also a lot of buzz around Mako. Both of these are more focused on generating HTML code, but there's really no impediment to using them for anything else. I would go with the one that provides the most comfortable syntax for what you're used to. @Hiett it is just a first impression. They use three different tag styles for different tags: <% foo 'bar' %>, % foo bar, $(foo(bar)). @Hiett if you are asking about tag styles in Mako then you need to know that you cannot choose. Every tag must use its own style. If your question was about template engines I would answer that I'm using different engines for different tasks. Django (it has its own Jinja-like template engine) for usual web projects. Mako for advanced non-Django web projects.
string.Template (as in my answer) in unit tests, where it is better to depend on fewer external libraries. Actually the Mako documentation isn't very good, too few examples - e.g. how do you push variables into a context? @Hiett here http://www.makotemplates.org/docs/usage.html#using-templatelookup - **kwargs in the second example is the context. Let us continue this discussion in chat
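As a concrete illustration of the string.Template suggestion applied to the asker's C-generation use case, a small sketch (the placeholder names func, args and expected are made up for this example):

```python
from string import Template

# PEP 292 substitution: $-style placeholders, no logic in the template itself.
c_test = Template(
    "void test_${func}(void) {\n"
    "    assert(${func}(${args}) == ${expected});\n"
    "}\n"
)

print(c_test.substitute(func="add", args="2, 3", expected="5"))
```

Note that substitute() raises KeyError if a placeholder is missing, while safe_substitute() leaves unknown placeholders untouched, which is handy when generating code in several passes.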
Hibernate Criteria Child Restriction I have a criteria to review a list of parent objects Criteria criteria = session.createCriteria(Parent.class,"parent") but I need to add a restriction on the list of the child objects, since I don't need all the child objects. Please help. Not knowing the specifics of the restriction you want to add, it will look like this, substituting in your own restriction for "yourRestriction". In its current form, this will look in the someColumn column of the parent table for values that are equal to the String "yourRestriction". Criterion restriction = Restrictions.eq("parent.someColumn", "yourRestriction"); criteria.add(restriction); More info: Hibernate Criteria documentation for more examples Restrictions documentation for more types of Restrictions like gt (greater than) and ge (greater than or equal to) EDIT: to create a restriction on a child collection: // Give the child collection an alias criteria.createAlias("childCollectionNameInParentClass", "childCollectionAlias"); // Create the restriction on the child collection's column Criterion restriction = Restrictions.eq("childCollectionAlias.someColumn", "someRestriction"); What I need is a restriction on the child collection, I need to fetch only some child objects I edited my answer to include how to create a restriction on a child collection This code for a restriction on a child collection doesn't work: in the prepared objects, the child objects aren't filtered. All child records are present in the collection. There is another way to do this: criteria.setFetchMode("someColumn", FetchMode.EAGER);//It is deprecated but it works fine Criteria criteriaChildField = criteria .createCriteria("someColumn"); criteriaChildField.add(Restrictions.eq("restrictionAttribut", YOUR_RESTRICTION)); Hope this helps.
asp, server-side populate textbox Questions about this do exist on stack but the answers are either wrong or of little use. Coming from older VB, and being familiar with Java (where the code runs fine on the client), trying to populate a textbox server-side in asp is driving me up the wall. Microsoft's logic is simply inane (regardless of topic). Yes, it's probably due to a lack of conceptualization on my part, but if the textbox has an id and I can pull data, why can't I write to a different textbox with a named id? I can write to the body, so why not a text box with a named id? It makes no sense to me. Is there some issue with just text boxes? Any feedback, esp. with a tech explanation of what I fail to understand, would be greatly appreciated. Very simple code example: <!DOCTYPE html> <html><body> <form action="p3.asp" method="post"> AnyVal: <input type="text" name="AnyVal" size="20" /> Other: <input type="text" name="Other" size="20" /> <input type="submit" value="Submit" /> </form> <% dim sRet sRet=Request.Form("AnyVal") If sRet <> "" Then Response.Write("Your Val is: " & sret & "!<br>") Other.value=sRet End If %> </body></html> Yes, it's probably due to a lack of conceptualization on my part, I agree with that statement, you simply can't do that ... let me explain it simply, taking an extract of a recent answer I wrote in the past days: vbscript dropdown in function without using HTML (classic ASP) classic-asp, like almost every other preprocessor language for web applications, doesn't have the ability to interact directly with your browser. Instead the language provides you a set of methods to write and receive data from the user-agent (not necessarily a browser), and the browser relies on HTML, XHTML, CSS and derivatives to construct an interface for the user. Due to the fact that the preprocessor doesn't interact directly with HTML, that is the reason why you can't make a dropdown in pure vbscript bypassing HTML code.
What you are trying to do is to treat your form elements as objects using the ASP language; unfortunately you can't do that because the form elements are not visible to the server preprocessor (ASP). You need to write the value directly into the HTML, because such form elements don't exist to the ASP engine <% dim sRet sRet=Request.Form("AnyVal") If sRet <> "" Then Response.Write("Your Val is: " & sret & "!<br>") End If %> <!DOCTYPE html> <html><body> <form action="p3.asp" method="post"> AnyVal: <input type="text" value="<%=sRet%>" name="AnyVal" size="20" /> Other: <input type="text" value="<%=sRet%>" name="Other" size="20" /> <input type="submit" value="Submit" /> </form> </body></html> I hope I'm being clear in my explanation about how ASP really works Looking at your code I think the easiest way to achieve what you want is the following. <% dim sRet sRet=Request.Form("AnyVal") %> <!DOCTYPE html> <html><body> <form action="p3.asp" method="post"> AnyVal: <input type="text" name="AnyVal" size="20" /> Other: <input type="text" name="Other" size="20" value="<%= Sret %>" /> <input type="submit" value="Submit" /> <% If sRet <> "" Then Response.Write("Your Val is: " & sret & "!<br>") End If %> </form> </body></html> Your Classic ASP is server-side code and your form elements are client-side, so they can't interact with each other. NB You've tagged this as Classic ASP. If you mean ASP.net then you need to use a server-side control - <asp:Textbox> Note that VB and VBScript are different, just as Java and JavaScript are different Edit - @Rafael beat me to posting this, and he gives a more detailed explanation of more or less the same, so you should probably accept his answer
for example, var cret; cret = <%= sret %> which works if sret is a number but not if it's a string (don't know why, perhaps someone could enlighten me). I did figure out getting asp to generate a text box on the fly and populate it. But, I really wanted the javascript on the client side to do this part. From a practical point of view, I could have just given up and let asp do it, but I became a bit OCD about the whole thing. I've read a lot of postings, blogs, etc. about this issue. It seemed to me that the proposed solutions were either convoluted or just didn't work. The question seems to get asked a lot, so I was surprised to find no fairly straightforward answer. Everything I wanted to do could have been done in javascript, even using runat=server. But I specifically wanted to keep some of the code behind the page private. afaik, this needed asp. I suspect that this is the main reason others want to populate elements via javascript while still using asp on the server. The following code uses a form post, where asp can do some manipulation and generate an input element which can be subsequently accessed/manipulated by javascript. For this example the input element is hidden.
<!DOCTYPE html> <html><head> <title>asp java test</title> </head> <body> <form action="p5.asp" method="post"> <p>Paste into the yellow box and then click Go.</p> Set Val: <input type="text" id="setval" name="setval" size="30" style="background-color: #FFFFCC" /> <input type="submit" value="Submit" /><p> Ret Val: <input type="text" id="retval" name="retval" size="30" /> </form> <!-- Server Side --> <% dim sret sret = Request.Form("setval") If sret = "" Then sret = " Nada" // <!-- gen output server side--> response.write("<input type='hidden' id='hid1' name='hid1' value='" & sret & "'/><br>") %> <!-- Client Side --> <script language = "JavaScript"> var cret; { // Get the val cret = document.getElementById("hid1").value; // Populate the output box document.getElementById("retval").value = cret; } </script> </body></html> Seems to work quite well for its intended purpose. Gary ok, only a note to your solution: you are misunderstanding how ASP really works: the server side (JScript/VBScript) and the client side, that is HTML, CSS and JavaScript ... JavaScript in the client is not the same as JScript on the server. I suggest reading further about the topic. For your JavaScript problem you can do this: var cret; cret='<%= sret %>'; Hi Rafael, 'Reading further' is valid, and I will be doing that. I think the biggest issue I face is needing to 'un-learn' concepts before being able to fully grasp new ones. Case in point: I was about to say that I already tried using: cret='<%= sret %>'; -- and it didn't work for strings. Then I noticed the chr(39) wrapping the call. Wow! That simple. Thank you. As you probably know, in vb, we wrap strings in extra chr(38) quotes, esp. if sending to a SQL data source. Now I understand a little bit more. :-)
How to make a bend modifier on a parent affect a child which has physics applied? The question is in the title, please help! The bend modifier is set on the DNA helix string. I want the two strings to split while the attached flags with physics are still on them. First I tried with a simple parent set to an empty in which my flag is hooked. But it wasn't moving at all. After that I set the parent to vertex (triangle). The hooked vertices are moving but the physics is not working now. How can I make this work? Hello, could you please share your file? https://blend-exchange.com/
Django/Daphne, got NotImplementedError calling asyncio.create_subprocess_exec( under Windows I am using Django 3.2 with Daphne 3.0.2, Python 3.8.9 under Windows 10. Trying to call proc = await asyncio.create_subprocess_exec( python_exe, bridge_script, stdin=asyncio.subprocess.PIPE, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.STDOUT, ) gives me the following error when serving Django/ASGI with Daphne: Exception Type: NotImplementedError Exception Location: c:\python\python38\lib\asyncio\base_events.py, line 491, in _make_subprocess_transport Under Linux everything runs fine. Serving under Windows with Python's runserver is also fine; just Daphne and Windows gives me the error. Thanks for any help.
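For context (not an official fix): asyncio's subprocess support on Windows is only implemented by the proactor event loop, and a selector loop, which Twisted/Daphne setups commonly install, raises exactly this NotImplementedError from _make_subprocess_transport. A hedged sketch of forcing the policy before creating the subprocess follows; whether this can be applied inside Daphne's own loop is a separate question:

```python
import asyncio
import sys

if sys.platform == "win32":
    # Subprocess transports require the proactor loop on Windows.
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

async def run_bridge():
    # Same call shape as in the question, using a trivial child process.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('bridge ok')",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.STDOUT,
    )
    out, _ = await proc.communicate()
    return out.decode().strip()

print(asyncio.run(run_bridge()))
```

On a plain interpreter this prints "bridge ok" on both platforms; the policy line is a no-op everywhere except Windows.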
Why can't this enable_if function template be specialized in VS2017? The following compiled with VS2015, but fails in VS2017 with the below errors. Was the code doing something non-standard that has been fixed in VS2017, or should VS2017 compile it? #include "stdafx.h" #include <type_traits> template <typename E> constexpr auto ToUnderlying(E e) { return static_cast<std::underlying_type_t<E>>(e); } template<typename T> bool constexpr IsFlags(T) { return false; } template<typename E> std::enable_if_t<IsFlags(E{}), std::underlying_type_t<E>> operator | (E lhs, E rhs) { return ToUnderlying(lhs) | ToUnderlying(rhs); } enum class PlantFlags { green = 1, edible = 2, aromatic = 4, frostTolerant = 8, thirsty = 16, growsInSand = 32 }; bool constexpr IsFlags(PlantFlags) { return true; } int main() { auto ored = PlantFlags::green | PlantFlags::frostTolerant; return 0; } Errors are: c:\main.cpp(24): error C2893: Failed to specialize function template 'enable_if<false,_Ty>::type operator |(E,E)' with [ _Ty=underlying_type<_Ty>::type ] c:\main.cpp(24): note: With the following template arguments: c:\main.cpp(24): note: 'E=PlantFlags' c:\main.cpp(24): error C2676: binary '|': 'PlantFlags' does not define this operator or a conversion to a type acceptable to the predefined operator @Jarod42 ah, yes, thanks I'll get rid of that. This compiles on clang and gcc Visual C++ is probably wrongly assuming IsFlags(E{}) is always false. Edit: seems not to be the way I expected. Ideally Visual C++ 2017 should be able to successfully compile the Visual C++ 2015 code, since 2017 is supposed to be an in-place upgrade. This is interesting, raised with MS here: https://connect.microsoft.com/VisualStudio/feedback/details/3141097 This is definitely a bug in Visual Studio. It compiles on GCC and Clang. It seems to be related to constexpr functions evaluated as template parameters.
As a temporary workaround, you can make a template variable: template <typename T> bool constexpr is_flags_v = IsFlags(T{}); template<typename E> std::enable_if_t<is_flags_v<E>, std::underlying_type_t<E>> operator | (E lhs, E rhs) { return ToUnderlying(lhs) | ToUnderlying(rhs); } On Godbolt @ScottLangham It would be good to file a bug report with Visual Studio Did you guys report the bug? @ManjunathBabu I didn't report the bug; I haven't gotten around to it yet I successfully compiled the program on VS2015, took the executable to another clean machine that has VS2017. The executable successfully ran without any errors on this new machine. Does this mean VC++ Redistributables support both versions? @ManjunathBabu I would expect the executable to run just fine on another machine; this bug is a compile-time bug, not a runtime bug. I don't know much about VC++ Redistributables @ManjunathBabu Yeah, the platform toolset 140 used by VS2015 is binary compatible with the 141 used by VS2017. So, the same runtime can be used. @Downvoter Where could I improve this answer? I would be happy to know what could be done better in this answer / future answer This might be a bug in Visual Studio. A possible workaround could be using template specialization instead of overloading: template <typename T> struct is_flags { constexpr static bool value = false; }; template <> struct is_flags<PlantFlags> { constexpr static bool value = true; }; template<typename E> std::enable_if_t<is_flags<E>::value, std::underlying_type_t<E>> operator | (E lhs, E rhs) { return ToUnderlying(lhs) | ToUnderlying(rhs); } On which version of Visual Studio was this code run? Thanks for the suggestion. The intention of the original code is to allow selected enums to work with bitwise operators and not others. I'm not sure this workaround can achieve that. I used Microsoft Optimized C++ Compiler Version 19.11.25507.1, included in VS 2017.3. I adjusted the workaround. Maybe a bit bulky, but does the job. Possibly!
Sorry for moving the goal... but the code-base has the operator and non-specialized is_flags in a Library namespace. When defining an enum in a different namespace, it would be necessary to exit the namespace and open the Library namespace before specializing is_flags and then get back to the original namespace. I guess that's possible, but it is a bit of a pain for a library that's supposed to make working with enums as flags a breeze.
Give an error if I don't reference a label I would like to have LaTeX compilation (I use pdflatex) give an error if there exists one label in my document that I do not reference. Is this possible? This is to ensure that I mention every table and figure for which I set a label. I would prefer to use the same label and ref commands as always, if possible. You could use the package refcheck. There is already a question you might want to check out: here. It worked for me in LuaLaTeX. I'll give a brief example: \documentclass{article} \usepackage{amsmath} \usepackage{refcheck} \begin{document} \begin{figure} % ... \caption{A nice picture of my dog}\label{fig:dog} \end{figure} % As seen in figure~\ref{fig:dog} \end{document} With the second-to-last line commented out, a call of your LaTeX compiler (in my case LuaLaTeX) will produce output like this ... Package refcheck Warning: Unused label `fig:dog' on input line 8. ... The output contains a warning that tells you that a label was never referenced. If you remove the % in front of the second-to-last line, the label will be referenced and therefore the warning disappears. How can I turn the warning into an error though? Refcheck has a command refcheckxrdoc (doc) to manually invoke the check (I have never used it). If the check fails you could probably throw an error yourself (I have not done this either) but here is a question regarding this topic. I would write a bash script to extract such warnings from the log file. @FabianKöhler Thanks, I posted a follow-up question here: http://tex.stackexchange.com/questions/253773/turn-a-warning-unused-label-into-an-error @FabianKöhler also, please use the @. I was not notified of your message. I had to check manually.
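The bash-script idea mentioned above can be sketched like this. It is self-contained for demonstration: it fabricates a sample log line of the kind refcheck emits instead of running LaTeX, and the log file name is hypothetical:

```shell
# Create a sample log line of the kind refcheck emits.
cat > sample.log <<'EOF'
Package refcheck Warning: Unused label `fig:dog' on input line 8.
EOF

# Turn the warning into a hard failure for a build script.
if grep -q 'Warning: Unused label' sample.log; then
    echo "unused labels detected"
    # exit 1   # uncomment in a real build to fail the run
fi
```

In a real setup you would point grep at the .log file your LaTeX run produced and exit non-zero when the pattern matches.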
Copy state from one contract to another I am experimenting with the ZeppelinOS upgradeable contracts, but all I need for my contract is to keep one specific state array when I deploy to another address. Avoiding proxy issues, upgrade frameworks and such, is there a way to do this manually somehow? This is just a rough sketch to give you some ideas about partitioning concerns into a simple upgradable structure. First, a very simple Owned contract for transferable access control. Then a Keeper contract that is meant to hold the array during upgrades. This version of Replacable deploys its own Keeper to get started. By doing so, Keeper recognizes that Replacable is the owner, not the developer. To actually replace Replacable, you would first deploy a new version of it. In the replacement case, the constructor would not create a new Keeper. Instead, you would pass in the address of the existing Keeper and use keeper = Keeper(keeperAddress); Of course, Keeper will initially ignore the new contract because it doesn't "own" the data store. You tell the old Replacable to pull the trigger and transfer ownership away from itself.
pragma solidity 0.4.24;

contract Owned {
    address public owner;

    modifier onlyOwner {
        require(msg.sender == owner);
        _;
    }

    constructor() public {
        owner = msg.sender;
    }

    function changeOwner(address newOwner) public onlyOwner returns(bool success) {
        owner = newOwner;
        return true;
    }
}

contract Keeper is Owned {
    bytes32[] public array;

    modifier onlyOwner {
        require(msg.sender == owner);
        _;
    }

    constructor() public {
        owner = msg.sender;
    }

    function appendArray(bytes32 value) public onlyOwner returns(uint arrayLength) {
        uint length = array.push(value);
        return length;
    }
}

contract Replacable is Owned {
    Keeper keeper;

    constructor() public {
        keeper = new Keeper();
    }

    function getKeeperAddress() public view returns(address keeperAddress) {
        return address(keeper);
    }

    function appendInKeeper(bytes32 value) public onlyOwner returns(uint arrayLength) {
        return keeper.appendArray(value);
    }

    function inspectInKeeper(uint row) public view returns(bytes32 value) {
        return keeper.array(row);
    }

    function appointNewReplacable(address newContract) public onlyOwner returns(bool success) {
        return keeper.changeOwner(newContract);
    }
}

I left out quite a few details to keep the example on point. Hope it helps. Thanks, it helps. Although this is a bit complicated, which would make the ZeppelinOS option compelling for upgrades. I was thinking of something more like a function that spits out the whole array, which would then be passed manually as an argument to the new contract's constructor.
import RDS file stored online in the Open Science Framework

I am trying to open a .rds file that is stored on the Open Science Framework: https://osf.io/4z7mb I'd like to open this file in R and I've tried a few different options, but none of them has worked. Has anyone opened one of these files before?

Attempt #1: using functions in the osfr package

library(osfr)
test <- osf_retrieve_file("4z7mb") %>% osf_download()
Error: Authentication credentials were not provided. HTTP status code 401.

Attempt #2: using an option from ResearchGate

require("httr")
url <- "https://osf.io/4z7mb//?action=download"
filename <- "AtlanticSalmonMeanForkLength.rds"
GET(url, write_disk(filename, overwrite = TRUE))
test <- readRDS(filename)
Error in readRDS(filename) : unknown input format

Attempt #3: a similar Stack Overflow question & answer

osfURL <- ("https://osf.io/4z7mb//?action=download")
download.file(osfURL, "AtlanticSalmonMeanForkLength.rds", method = "curl")
test <- readRDS("AtlanticSalmonMeanForkLength.rds")
Error in readRDS("AtlanticSalmonMeanForkLength.rds") : unknown input format

When I click that link I get "Page not found". Are you sure that's the right page? When using osfr, have you authenticated to the service in R? See https://cran.r-project.org/web/packages/osfr/vignettes/auth.html You are using a different ID for attempt 2. It seems that file is not an rds file; it's a CSV-like file. That can be read with require("httr"); url <- "https://osf.io/gdr4q//?action=download"; filename <- "data.csv"; GET(url, write_disk(filename, overwrite = TRUE)); data <- read.csv(filename, sep=";") @MrFlick, thanks for the helpful comments! 1) I do not have an authenticated personal access token that I'm using, because I'm trying to make the script usable by anyone. Sounds like Option #1 probably isn't a good idea for that focus. 2) You're right, I forgot to change the link in Option #2 to match #1 and #3; I will correct that now.
I still have the problem that when I click https://osf.io/d4a3u I get "page not found". It doesn't look like that dataset exists, or maybe you need to be logged in (authenticated) to access it. Not sure, because I don't have an account there. But if you need to be logged in to see it, then anyone else that needs to access it will need to authenticate as well. @MrFlick I'm not sure why you can't see the file; the permissions on the project have been set to public. I'll have to reach out to OSF to figure out why it's not available. @MrFlick I got a quick response from OSF and the data should now be visible. There were backend issues with OSF that have now been corrected. Option #2 now works:

require("httr")
url <- "https://osf.io/4z7mb//?action=download"
filename <- "AtlanticSalmonMeanForkLength.rds"
GET(url, write_disk(filename, overwrite = TRUE))
test <- readRDS(filename)
Is it safe to call a controller action from JavaScript with parameters?

I was wondering if it is safe to call a controller action from a JavaScript file and pass (say, sensitive) parameters to it. Say the end-user makes a bet on something, and then the value of the bet is sent to a controller action. The bet itself is typed into a textbox on the page, and JavaScript sends that value as a parameter to the action on a button click event. Can the user manipulate his/her bet value AFTER clicking the button, before the action is called? Thanks

Can it be manipulated? Yes. He can just type a different value into the textbox in the first place. Where is the difference?

It can be manipulated, but there should be no problem as long as you're not going to send, e.g., a price to the controller which will then be processed. Think about this: someone orders a pizza, and the price of the pizza is sent from JavaScript to the controller. This is not a good approach, because the price can be set to 0 and you get no money. In that case you would pass the ID or the name of the pizza to the controller and get the price from a database. In your case it's not a problem, because there is no difference between the user typing 10 and then manipulating it to 20, or directly typing 20; in both cases you'll get the number 20. If you're processing all data in your controller you should be safe. If you pass data to the controller, check whether it is a problem if it's not the value the user typed in. Thank you for that explanation and example. Got it ;-)

Most modern websites today are heavily JavaScript driven, so yes, it is safe to call a controller action from a JavaScript file. But you'll have to make sure that you have certain security measures in place: HTTP over SSL (to encrypt sensitive data and prevent man-in-the-middle attacks); anti-forgery tokens (to prevent cross-site request forgery); and always validate the client inputs.
Make sure you have server-side validations in place (data annotations etc.) and that the model state is valid (ModelState.IsValid) before you perform any operations. I know ASP.NET MVC has some level of built-in protection against cross-site scripting attacks, but make sure that the client input is always HTML encoded if you're displaying it back on the webpage.

I was wondering if it is safe to call a controller action from a javascript file and pass (say sensitive) parameters to it.

"Safe" in terms of...? If the connection is SSL, you can certainly assume that the values will be safely transferred from client to server. But "safe" in terms of stability depends on your application. Can the parameters be manipulated before being sent to the controller? Certainly. But in what way are the parameters "sensitive"? There isn't really anything you could send from client to server without the client/user knowing about it. Edited my question. Thanks for the answer though.
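The pizza example above can be sketched in a framework-neutral way (plain Python here, and the menu data is made up): the client sends only an identifier, and the authoritative price always comes from the server's own data store, never from the request.

```python
# Server-side menu: the only source of truth for prices (invented data).
MENU = {"margherita": 8.50, "diavola": 10.00}

def place_order(post_data):
    """Build an order from client input; prices are never trusted."""
    pizza_id = post_data.get("pizza")
    if pizza_id not in MENU:
        raise ValueError("unknown pizza")
    # Any client-supplied "price" field is simply ignored.
    return {"pizza": pizza_id, "price": MENU[pizza_id]}

# Even if the request was tampered with to claim a price of 0,
# the server-side lookup wins:
order = place_order({"pizza": "diavola", "price": "0"})
print(order)   # {'pizza': 'diavola', 'price': 10.0}
```

The same shape applies in an ASP.NET MVC action: bind only the identifier from the request, look the sensitive value up server-side, and reject identifiers you don't recognize.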
Are there infinitely many real quadratic number fields with unique factorization?

A unique factorization domain is a commutative ring in which every non-zero non-unit element can be written as a product of prime elements (or irreducible elements), uniquely up to order and units. I'm struggling with understanding and proving this. It is a bonus question posed by one of my professors.

No one knows; that's an open problem (it goes back to Gauss, who conjectured that infinitely many real quadratic fields have class number one, i.e. unique factorization, but this remains unproven). Ahhhhhhhhhhhhh. Lol. This problem is so far unsolved; your professor probably wanted to see if he could create a new Good Will Hunting.
Must collections exposed by the view model be ObservableCollection<T> with the MVVM pattern?

Recently, I have been trying to implement the MVVM design pattern, but I have encountered two problems which I can't solve:

1. As I see it, I must use ObservableCollection in my Model classes in order to pass it through the ViewModel to the View. I hope that I am wrong, because the View must not affect the Model structure, and I shouldn't be limited to this specific collection type.

2. Is there any way to do two-way binding with a value-type item list? Example:

public ObservableCollection<bool> MyBooleans
{
    get { return m_booleans; }
}

<ListView ItemsSource="{Binding MyBooleans}" ...>
    <ItemTemplate>
        ...
        <CheckBox IsChecked="{Binding}" ... />
        ...
    </ItemTemplate>
</ListView>

1. No, see Dependency Properties, INotifyCollectionChanged, INotifyPropertyChanged. 2. Not without a wrapper. Please read the formatting help.

Your view model should expose collections which change (i.e. have items added/removed) as ObservableCollections (or some other class that implements INotifyCollectionChanged). This does not mean your model needs to expose collections that implement this interface. Your view model is effectively an adapter on your model that makes it more readily bindable. As an example, if your application displays tweets, your service layer might return a model which is a list of tweets. Your view model would then insert these into an observable collection, resulting in your view being updated. You could then retrieve new tweets via your service at some point in the future (using a timer); these again would be returned as a list. Your view model would then add these tweets to its ObservableCollection, resulting in the new items being visible in the view.

I have a Model that represents a file, and it has a Flags member (List). Those flags need to be changed by the GUI. In this case, do I have to copy the collections, one to another, for every change?
Does Maven have a way to get a dependency version as a property?

I'm using a BOM to import dependencies from another project into mine, and I need a way to reference a dependency's version that is already declared in said BOM. So far, I've attempted to list the dependency version as a property in the BOM, but this approach fails because properties don't get imported with BOMs. I've seen that the Dependency Plugin's dependency:properties goal does almost exactly what I need, but instead of giving me the full path of the artifact, I need the version as a property. Is there something out there that can give me the version of a resolved artifact as a property?

UPDATE - "Why not use a parent pom?"

I commonly find myself working in application server environments, where the provided dependencies are specified with BOM artifacts (as it appears that this has become a somewhat common/standard way to distribute groups of inter-related artifacts, e.g. WildFly). As such, I want to treat the BOM as the single source of truth. The idea of doing something like re-declaring a dependency version property that has already been defined in a BOM seems incorrect. If I were to define properties in a parent pom that mirrored an application server's environment, I would now have to worry about keeping parent pom properties and BOM properties in sync; why even have a BOM at all at that point? The information is already available on the dependency tree; it's just a matter of exposing it...

The usual approach is, afaik, to have a common parent (IRL example) that defines all versions. @zapl - see the edit; I'm working specifically with a BOM. Why would you like to use a property for a dependency? It's defined via the BOM in the dependencyManagement, so you don't need to define the version. Why do you need to reference the dependency? They wind up coming in handy for lots of reasons. For instance, I've recently had to work with JBoss modules.
They require you to write a module.xml file explicitly stating any .jar dependencies you're using. Unless you have properties present in your pom such that you can filter these .jar names, you're left maintaining your module.xml by hand. The fact of the matter is that in the Maven ecosystem, those versions wind up being useful in ways outside of just dependency declarations.

Couldn't find any existing Maven or plugin functionality for this, so I forked the old dependencypath-maven-plugin and altered it to use versions. Now I can drop in a plugin like this:

<build>
  ...
  <plugins>
    ...
    <plugin>
      <groupId>io.reformanda.semper</groupId>
      <artifactId>dependencyversion-maven-plugin</artifactId>
      <version>1.0.0</version>
      <executions>
        <execution>
          <id>set-all</id>
          <goals>
            <goal>set-version</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

And access properties like this: groupId:artifactId:type[:classifier].version, e.g. io.undertow:undertow-core:jar.version=1.3.15.Final

Check out the README for more info on how to use the plugin. It's available at Maven Central:

<dependency>
  <groupId>io.reformanda.semper</groupId>
  <artifactId>dependencyversion-maven-plugin</artifactId>
  <version>1.0.0</version>
</dependency>

... plugins all the way down ... The project has moved to GitLab: https://gitlab.com/josh-cain/dependencyversion-maven-plugin Can I somehow specify in the configuration of this Maven plugin which BOM import to use? My module does not import the BOM, but I want to get a dependency version of the BOM anyway. Great thing; besides everything else, I like that it automatically 'locks' snapshot versions, which is exactly what I need. Thank you.

Short answer - yes, you can. In detail, your root pom.xml:

<properties>
  <slf4j.version>1.7.21</slf4j.version>
</properties>
...
<dependencyManagement>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>${slf4j.version}</version>
  </dependency>
  ...
</dependencyManagement>

In the modules' pom.xml:

<dependencies>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
  </dependency>
  ...
</dependencies>

You can also use the ${slf4j.version} value to filter resources or in plugin configurations.

Update

In case you cannot use properties in the parent POM, you can either retrieve all dependencies and their versions with the dependency:list plugin; or use the dependency:list and antrun:run plugins together; or configure CI server scripts to do it for you (e.g. with this example); or write a custom plugin to handle your version logic.

First sentence of the question - "I'm using a BOM to import dependencies". If I were using a parent pom this wouldn't be a problem. My problem lies in that properties don't get imported from BOM dependencies, but versions of those dependencies do. Also, this answer has been stated already in several other S/O questions regarding how to share properties, e.g. http://stackoverflow.com/questions/1231561/how-to-share-common-properties-among-several-maven-projects Yeah, I'm halfway down the custom plugin road. It's the least glue-ish solution available (and everything in Maven seems to lead to plugins all too often). The retrieval is simple enough to do by hand, but the error-prone nature of it makes it highly undesirable. Something like a CI script is palatable, but still not ideal, as it directly ties the stability of the build environment to a particular non-standard CI operation. This Maven plugin is on GitHub (https://github.com/semper-reformanda/dependencyversion-maven-plugin) and it is a must for anyone dealing with dependency versions, for instance when using Webjars dependencies - you can inject Webjar version numbers directly into your web resources.
I had been looking for such functionality for a long time; I hope more people come across it and that it gets up on Maven Central (I actually think it should come with Maven out of the box). See my answer - I wrote the plugin since I couldn't find this functionality. But thanks for the kind words ;-) The project has moved to GitLab: https://gitlab.com/josh-cain/dependencyversion-maven-plugin
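To make the module.xml use case above concrete, here is a sketch of how a property generated by the plugin could be consumed through Maven resource filtering. The module name, jar, and version below are invented for illustration; the property key follows the groupId:artifactId:type.version scheme shown above, and this assumes resource filtering is enabled for the file.

```xml
<!-- src/main/resources/module.xml (hypothetical), filtered during the build -->
<module xmlns="urn:jboss:module:1.1" name="com.example.myapp">
    <resources>
        <!-- would resolve to e.g. undertow-core-1.3.15.Final.jar -->
        <resource-root path="undertow-core-${io.undertow:undertow-core:jar.version}.jar"/>
    </resources>
</module>
```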
PyQt: dynamically switch between QSpinBox and QTimeEdit widgets depending on QComboBox data

I have a combo box with two options, 'time' and 'interval'. When 'time' is selected I would like to show a QTimeEdit, and when 'interval' is selected I would like to show a QSpinBox. I can hide the interval widget and show the time widget, but I do not know how to re-position it so that it is displayed where the interval widget was. Here is what I have so far:

import sys
from PyQt5 import QtWidgets as qtw
from PyQt5 import QtGui as qtg
from PyQt5 import QtCore as qtc

class MainWindow(qtw.QMainWindow):
    def __init__(self):
        super().__init__()
        form = qtw.QWidget()
        self.setCentralWidget(form)
        layout = qtw.QFormLayout()
        form.setLayout(layout)
        self.when_list = qtw.QComboBox(currentIndexChanged=self.on_change)
        self.when_list.addItem('Every X Minutes', 'interval')
        self.when_list.addItem('At a specific time', 'time')
        self.interval_box = qtw.QSpinBox()
        self.time_edit = qtw.QTimeEdit()
        self.event_list = qtw.QComboBox()
        self.event_list.addItem('Event 1')
        self.event_list.addItem('Event 2')
        self.event_msg = qtw.QLineEdit()
        self.add_button = qtw.QPushButton('Add Event', clicked=self.add_event)
        layout.addRow(self.when_list, self.interval_box)
        layout.addRow(self.event_list)
        layout.addRow(self.event_msg)
        layout.addRow(self.add_button)
        self.show()

    def on_change(self):
        if self.when_list.currentData() == 'time':
            # Hide interval
            self.interval_box.hide()
            # Show time - how do I put this where interval_box was?
            self.time_edit.show()
        elif self.when_list.currentData() == 'interval':
            # Hide time - ERROR object has no attribute time_edit
            self.time_edit.hide()
            # show interval - ERROR object has no attribute interval_box
            self.interval_box.show()

    def add_event(self):
        pass

if __name__ == '__main__':
    app = qtw.QApplication(sys.argv)
    mw = MainWindow()
    sys.exit(app.exec())

How can I fix the errors and dynamically switch between the widgets?
Instead of hiding and showing widgets, you can use a QStackedWidget (which is similar to a tab widget, but without tabs) and use the combo box signal to select which one to show.

Note that you should not connect to a *changed signal in the constructor if you're going to set properties that could emit that signal while the slot uses objects that don't exist yet: in your case you connected the currentIndexChanged signal in the constructor, but that signal is always emitted when an item is added to a previously empty combo box, and since at that point the time_edit object has not been created, you'll get an AttributeError as soon as you add the first item. While making signal connections in the constructor can be useful, it must always be done with care.

class MainWindow(qtw.QMainWindow):
    def __init__(self):
        # ...
        self.when_list = qtw.QComboBox()
        self.when_list.addItem('Every X Minutes', 'interval')
        self.when_list.addItem('At a specific time', 'time')
        self.when_list.currentIndexChanged.connect(self.on_change)
        # ...
        self.time_stack = qtw.QStackedWidget()
        self.time_stack.addWidget(self.interval_box)
        self.time_stack.addWidget(self.time_edit)
        layout.addRow(self.when_list, self.time_stack)
        # ...

    def on_change(self):
        if self.when_list.currentData() == 'time':
            self.time_stack.setCurrentWidget(self.time_edit)
        elif self.when_list.currentData() == 'interval':
            self.time_stack.setCurrentWidget(self.interval_box)

Another solution could be to remove the widget that is to be hidden and insert a row in the same place using the QFormLayout functions, but while that layout is useful in many situations, it's mostly intended for fairly "static" interfaces. The alternative would be to use a QGridLayout, which allows setting more widgets on the same "cell": in that case, you can easily toggle the visibility of items, but it could create some issues, as the layout would also try to adjust its contents every time (which can be solved by using setRetainSizeWhenHidden() on the widget's size policy).
Thanks for this very well constructed and informative answer; your solution works, and it has given me ideas on how to improve it further.
Responsive issue with jQuery click slider

I have a click slider that goes back and forward through the images, but it doesn't scale or act responsive in a smaller window, because obviously the width is set absolutely in pixels in the CSS and as the variable in my function. If I change this I'm not sure how it'll work, though, as it slides back and forth by the width of each img (607px). Anyone got ideas? Or a better way to do this?

HTML:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<div id="slider">
    <ul class="slides">
        <li class="slide"><img src="images/banner1.jpg" class="img-responsive"></li>
        <li class="slide"><img src="images/banner2.jpg" class="img-responsive"></li>
        <li class="slide"><img src="images/banner3.jpg" class="img-responsive"></li>
        <li class="slide"><img src="images/banner1.jpg" class="img-responsive"></li>
    </ul>
</div>

CSS:

#slider {
    width: 607px;
    height: 248px;
    overflow: hidden;
}
#slider .slides {
    display: block;
    width: 6000px;
    height: 248px;
    margin: 0px;
    padding: 0px;
}
#slider .slide {
    float: left;
    list-style-type: none;
}

JS:

(function() {
    var width = 607;
    var slideSpeed = 300;
    var currentSlide = 1;
    var $slider = $('#slider');
    var $slideContainer = $slider.find('.slides');
    var $slides = $slideContainer.find('.slide');
    var totalLength = $slides.length;

    $('#button-next').on('click', function(){
        $slideContainer.animate({'margin-left': '-=' + width}, slideSpeed, function(){
            currentSlide++;
            if (currentSlide === $slides.length) {
                currentSlide = 1;
                $slideContainer.css('margin-left', '0');
            }
        });
    });

    $('#button-prev').on('click', function(){
        if (currentSlide === 1) {
            var pos = -1 * (width * ($slides.length - 1));
            $slideContainer.css('margin-left', pos);
            $slideContainer.animate({'margin-left': '+=' + width}, slideSpeed);
            currentSlide = $slides.length - 1;
        } else {
            $slideContainer.animate({'margin-left': '+=' + width}, slideSpeed);
            currentSlide--;
        }
    });
})();

In order to make the slider you have posted responsive, you will need to
set your fixed dimensions in your CSS to fluid widths, like this:

.slider {
    position: relative;
    padding: 10px;
}
.slider-frame {
    position: absolute;
    left: 0;
    top: 0;
    width: 100%;
    height: 100%;
    white-space: nowrap;
}
.slide {
    width: 100%;
    display: inline-block;
}
.control {
    width: 49%;
}

I wrote a very simple demo slider that hardly has any functionality, just to serve the purpose of this explanation: http://codepen.io/nicholasabrams/pen/aOLoYM Resize your screen; the slides stay the width of the screen because .slide and .slider are both set to 100% of the screen's width. Don't forget to add your own functionality for next and prev that adjusts each slide progression distance according to the width of the slides in the slider. Also see here: How to make an image slider responsive? And this is a good tutorial with reasonably good code; the slider comes out usable as well! http://www.barrelny.com/blog/building-a-jquery-slideshow-plugin-from-scratch/ Hope this helps!

UPDATE: Since the OP stated that the issue is with the JS, I dug up a simple slider I wrote a while back. Here is the JavaScript/jQuery:

$(function(){
    $.fn.someCustomSlider = function (autoplay, velocity){
        var sliderProps = [
            // props n methods, accessible through data-slider-* attributes
            {
                settings : {
                    invokeBecause : $('[data-widget~="slider"]'),
                    autoplay : autoplay,
                    speed : velocity,
                },
                bindings : {
                    slideRail : $('[data-function~="slide-rail"]'),
                    nextButton : $('[data-function~="next"]'),
                    prevButton : $('[data-function~="prev"]'),
                    playButton : $('[data-function~="play"]'),
                    pauseButton : $('[data-function~="pause"]'),
                    stopButton : $('[data-function~="stop"]') // attach this functionality to the DOM
                },
                methods : {
                    slideNext : function(){
                        slideRail.animate({left: '-=100%'}, velocity)
                    },
                    slidePrev : function(){
                        slideRail.animate({left: '+=100%'}, velocity)
                    },
                    slideRestart : function(){
                        slideRail.animate({left: '0%'}, velocity)
                    },
                    pause : function(){
                        window.sliderTimer = setInterval(slideNext, velocity)
                    }
                }
            }
        ]

        $.each(sliderProps, function(){ // iterate through all of the slider object's properties
            window.SliderProps = this; // set slider props to be accessible to the global scope
        });

        // slider props stored as vars
        var slideRail = SliderProps.bindings.slideRail;
        var play = SliderProps.bindings.playButton;
        var next = SliderProps.bindings.nextButton;
        var prev = SliderProps.bindings.prevButton;
        var pause = SliderProps.bindings.pauseButton;
        var stop = SliderProps.bindings.stopButton;
        var i = 0;

        function slideNext(){
            var slideNext = SliderProps.methods.slideNext();
        }
        function slidePrev(){
            var slidePrev = SliderProps.methods.slidePrev();
        }
        function slideStop(){
            /*slideRail.stop(); */
            window.clearInterval( sliderTimer )
        }
        function slideRestart(){
            var slidePrev = SliderProps.methods.slideRestart();
            slideRail.stop();
            window.clearInterval( sliderTimer )
        }
        function autoPlay(){
            SliderProps.methods.pause()
        }

        // element -> event delegation -> function()
        next.click(slideNext)
        prev.click(slidePrev)
        stop.click(slideRestart)
        pause.click(slideStop)
        play.click(autoPlay)
    } // close function slider()

    someCustomSlider(true, 1000);
});

http://codepen.io/anon/pen/eytrh This was a basic version that I eventually extended, but for simplicity's sake this should be just about perfect, I imagine.

Thanks for the detailed response and demo @AlphaG33k. My problem lies with my jQuery function. I understand the CSS change fine, but I'm not sure how to rewrite the function to still cycle through images without stopping etc. without using the width of the image. I'll have a proper look at those links, but should I be writing my code differently? Really new to jQuery. If you put your partially working code in a codepen demo it will be easier for the community to help you. Updated with a new demo and code that works similar to yours. You are on the right path; everyone starts out with code just like yours! Wow, thanks again for the demo and updates @AlphaG33k. It works really well; I'll try and implement this.
Does it loop through when it gets to the final image? Very much appreciated!! Sure, no problem; teaching helps us learn. I actually feel bad because I would never write code like that now! But for learning purposes it should work. Sometimes the better the module, the harder the source is to understand. No, it doesn't have the repeat built in, but that is not that hard: you count the number of slides inside of the .slide container, and then when the number of next() firings or clicks reaches the number of child slides, you know to reset the CSS back to left: 0 or whatever you wish. Don't forget to accept my answer if you like it by clicking the checkmark :)

When working with responsive design, thinking in terms of pixels is a bad practice. Give your slider and each slide a width of 100vw instead of 607px. That would make it equivalent to one "viewport width". In your JavaScript, try modifying the width variable to be the string 100vw as well; make sure to include the unit. For more information on CSS units refer here. vw has sketchy browser support; I wouldn't recommend getting into that mess just yet in a production site/app. IE still doesn't support much of the standard (see http://caniuse.com/#feat=viewport-units), plus there is no Android support in 4.2 and 4.3! How about using good old % points for old time's sake; browser support is solid enough for me to feel comfortable. Percents work, but it depends on the element's position on the page. If I were building a modular system I would use vw.
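Whichever unit you pick, the underlying fix is the same: stop baking 607 into the script and compute the offset from the container's current width at the moment of the click. A framework-free sketch (the names here are mine, not from any library), with the sizing logic in one pure function so it can be reasoned about without a DOM:

```javascript
// margin-left, in px, that brings slide number `slideIndex` into view,
// given the slider container's width as measured right now.
function slideOffset(containerWidth, slideIndex) {
  return -containerWidth * slideIndex;
}

// In the click handler you would re-measure on every click, e.g.:
//   var width = $('#slider').width();   // current, responsive width
//   $slideContainer.animate({'margin-left': slideOffset(width, currentSlide)}, 300);

console.log(slideOffset(607, 1)); // -607
console.log(slideOffset(320, 2)); // -640
```

Because the width is read per click rather than once at load, the slider keeps lining up correctly after the window is resized.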
How to use non-valid identifiers as Django form field names

I am not sure if I should post this question here or on Server Fault, since it involves both Django and nginx configuration files. I have a Django view that receives POST data from a form that contains file fields. The request goes through an nginx module (the upload module) that takes the file upload fields, stores them to a disk location, and then appends extra fields with the local path where each file can be found, along with their names and other metadata, and then strips the original file fields from the POST. The fields that are appended are unfortunately not very flexible, and their names contain either a period (.) or a dash (-). This is all well and fine, but the problem comes on the Python side. I would like to validate the incoming POST data using a form, but I don't know how to define a form that has fields with names containing periods or dashes. Is there a way to tell the Django form that the field has a different name than the variable used to store it? My workaround is to simply not use Django forms and sanitize the incoming data in another way (how?). Or, if somebody has experience using the upload module in nginx, does anyone know how to use the directives

upload_set_form_field $upload_field_name.name "$upload_file_name";
upload_set_form_field $upload_field_name.content_type "$upload_content_type";
upload_set_form_field $upload_field_name.path "$upload_tmp_path";

so that the $upload_field_name.XXXX appended fields don't use a period? (A dash works, but that again breaks Python's identifiers; any other character breaks the configuration file.) Any recommendations in either case? Thanks!

Ok, I think I found a way to answer my own question and I hope it helps someone.
The form I would use to handle the submitted POST with a field called "uploadedFile.name" would look something like this:

from django.forms import Form, fields

class UploadForm(Form):
    someField = fields.CharField()
    uploadedFile.name = fields.CharField()

Which of course does not work, since we can't use a period in the identifier (uploadedFile.name). The answer is to define the form fields using the fields dictionary of the Form class, where we can name the fields with a nice and arbitrary string:

class UploadForm(Form):
    someField = fields.CharField()

    def __init__(self, *args, **kwargs):
        # call our superclass's initializer
        super(UploadForm, self).__init__(*args, **kwargs)
        # define other fields dynamically:
        self.fields["uploadedFile.name"] = fields.CharField()

Now we can use that form in a view and access uploadedFile.name's data through cleaned_data as we would any other field:

if request.method == "POST":
    theForm = UploadForm(request.POST)
    if theForm.is_valid():
        theUploadedFileName = theForm.cleaned_data["uploadedFile.name"]

It works!
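For the "sanitize it another way" route mentioned in the question, here is a framework-free sketch of validating the nginx-appended fields without a Django form. The upload directory and the exact field names are assumptions for illustration (the names mirror the $upload_field_name.XXXX scheme above), and the character whitelist is deliberately conservative:

```python
import os
import re

# Directory the nginx upload module is assumed to write into (invented path).
UPLOAD_ROOT = os.path.realpath("/tmp/nginx_uploads")

def validate_upload(post):
    """Return a cleaned (name, path) pair, or raise ValueError."""
    name = post.get("uploadedFile.name", "")
    path = post.get("uploadedFile.path", "")
    # File name: must start with a word character; no slashes allowed at all.
    if not re.fullmatch(r"[\w][\w .-]*", name):
        raise ValueError("bad file name")
    # Path: after resolving ".." and symlinks, it must stay inside UPLOAD_ROOT.
    real = os.path.realpath(path)
    if not real.startswith(UPLOAD_ROOT + os.sep):
        raise ValueError("path escapes the upload directory")
    return name, real
```

A tampered name such as "../../etc/passwd" fails the name check, and a path pointing outside the upload directory fails the realpath check, so only the metadata nginx itself appended survives into the view.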
Is this method valid to prove $\int_{a}^{b}f\left(x\right)g\left(x\right)dx=f\left(c\right)\int_{a}^{b}g\left(x\right)dx$?

I came across this question in a book: Let $f(.)$ and $g(.)$ be two continuous functions on $\left[a,b\right]$. Assume that $g(.)$ does not change sign on $\left[a,b\right]$. Show that there exists $c\in\left[a,b\right]$ such that $$\intop_{a}^{b}f\left(x\right)g\left(x\right)dx=f\left(c\right)\intop_{a}^{b}g\left(x\right)dx$$ I attempted it in the following way: Let $h\left(x\right)=f\left(x\right)g\left(x\right)$ for all $x\in\left[a,b\right]$. Then, there exists $c\in\left[a,b\right]$ such that $\frac{1}{b-a}\intop_{a}^{b}h\left(x\right)dx=h\left(c\right)$. Also, as $g\left(x\right)$ is continuous on $\left[a,b\right]$, $g\left(x\right)$ is differentiable on $\left[a,b\right]$. Thus, there exists $c'\in\left[a,b\right]$ such that $\frac{1}{b-a}\intop_{a}^{b}g\left(x\right)dx=g\left(c'\right)$. But the problem I faced is that, in general, $c\neq c'$. Had that been the case, I could have just divided the two equations found above and concluded the result. So my question is: given the information in the question and what I have done above, is it true that $c=c'$ always? I seem to have run out of ideas here, so a little hint would be very much appreciated. P.S.: I later did this problem in the following manner. Let $I=\left[a,b\right]$ and $f\left(I\right)=\left\{f\left(x\right):x\in\left[a,b\right]\right\}$. Also, let us define $m=\inf f\left(I\right)$ and $M=\sup f\left(I\right)$.
Then, for all $x\in\left[a,b\right]$ (assuming, without loss of generality, that $g\geq0$; if $\int_a^b g\left(x\right)dx=0$, then $g\equiv0$ and the identity is trivial), $$m\leq f\left(x\right)\leq M\\ m\cdot g\left(x\right)\leq f\left(x\right)\cdot g\left(x\right)\leq M\cdot g\left(x\right)\\ m\int_a^b{g\left(x\right)}dx\leq \int_a^b{f\left(x\right)g\left(x\right)}dx\leq M\int_a^b{g\left(x\right)}dx\\ m\leq \frac{\int_a^b{f\left(x\right)g\left(x\right)}dx}{\int_a^b{g\left(x\right)}dx}\leq M$$ Thus, by the Intermediate Value Theorem, there exists $c\in\left[a,b\right]$ such that $f\left(c\right)=\frac{\int_a^b{f\left(x\right)g\left(x\right)}dx}{\int_a^b{g\left(x\right)}dx}$, and this completes the proof. But I am still in doubt regarding my first method. See the answer here: https://math.stackexchange.com/q/794025 In the first method, it should be $(b-a)h(c)$ instead of $\frac{1}{b-a} h(c)$. There is no reason that $c=c'$, so the first method cannot work. The second proof looks good. (i) Continuous functions need not be differentiable. (ii) [trivial] You've got the $(b-a)$ in the wrong place; it should be in the numerator. I don't think you can make your first method work. It's like trying to prove Cauchy's Mean Value Theorem by using the MVT separately on the numerator and denominator. @Gribouillis I'm extremely sorry for my careless mistake. I've corrected the equation now. Thanks for pointing it out.
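To see concretely that $c=c'$ can fail in the first method, here is a small numerical check (plain Python; the functions are my own choice for illustration): take $f(x)=x$ and $g(x)=x$ on $[0,1]$, so $g$ is continuous and does not change sign.

```python
import math

a, b = 0.0, 1.0

# MVT for integrals applied to h = f*g = x^2:
#   h(c) = 1/(b-a) * integral of x^2 over [0,1] = 1/3,  so  c = sqrt(1/3)
c = math.sqrt(1.0 / 3.0)

# MVT for integrals applied to g = x:
#   g(c') = 1/(b-a) * integral of x over [0,1] = 1/2,   so  c' = 1/2
c_prime = 0.5

# The two mean-value points are genuinely different:
print(c, c_prime)
assert abs(c - c_prime) > 0.05

# The point produced by the weighted mean value theorem is different again:
#   f(c'') * integral of g = integral of f*g  =>  c'' * (1/2) = 1/3,
# so c'' = 2/3, which does lie in [a, b] as the theorem guarantees.
c_second = (1.0 / 3.0) / (1.0 / 2.0)
assert a <= c_second <= b
```

So the two applications of the mean value theorem for integrals pick out different points ($1/\sqrt{3}$ versus $1/2$), which is exactly why the division step in the first method is not justified.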
common-pile/stackexchange_filtered
Perl drop down menus and Unicode I've been going around on this for some time now and can't quite get it. This is Perl 5 on Ubuntu. I have a drop down list on my web page: $output .= start_form . "Student: " . popup_menu(-name=>'student', -values=>['', @students], -labels=>\%labels, -onChange=>'Javascript:submit()') . end_form; It's just a set of names in the form "Last, First" that are coming from a SQL Server table. The labels are created from the SQL columns like so: $labels{uc($record->{'id'})} = $record->{'lastname'} . ", " . $record->{'firstname'}; The issue is that the drop down isn't displaying some Unicode characters correctly. For instance, "Søren" shows up in the drop down as "SÃ¸ren". I have in my header: use utf8; binmode(STDOUT, ":utf8"); ...and I've also played around with various takes on the "decode( )" function, to no avail. To me, the funny thing is that if I pull $labels into a test script and print the list to the console, the names appear just fine! So what is it about the drop down that is causing this? Thank you in advance. EDIT: This is the relevant functionality, which I've stripped down to this script that runs in the console and yields the correct results for three entries that have Unicode characters: #!/usr/bin/perl use DBI; use lib '/home/web/library'; use mssql_util; use Encode; binmode(STDOUT, ":utf8"); $query = "[SQL query here]"; $dbh = &connect; $sth = $dbh->prepare($query); $result = $sth->execute(); while ($record = $sth->fetchrow_hashref()) { if ($record->{'id'}) { $labels{uc($record->{'id'})} = Encode::decode('UTF-8', $record->{'lastname'} . ", " . $record->{'nickname'} . " (" . $record->{'entryid'} . 
")"); } } $sth->finish(); print "$labels{'ST123'}\n"; print "$labels{'ST456'}\n"; print "$labels{'ST789'}\n"; The difference in what the production script is doing is that instead of printing to the console like above, it's printing to HTTP: $my_output = "<p>$labels{'ST123'}</p><br> <p>$labels{'ST456'}</p><br> <p>$labels{'ST789'}</p>"; $template =~ s/\$body/$my_output/; print header(-cookie=>$cookie) . $template; This gives, i.e., strings like "Zoë" and "Søren" on the page. BUT, if I remove binmode(STDOUT, ":utf8"); from the top of the production script, then the strings appear just fine on the page (i.e. I get "Zoë" and "Søren"). I believe that the binmode( ) line is necessary when writing UTF-8 to output, and yet removing it here produces the correct results. What gives? You need to check the $record->{'lastname'} and $record->{'firstname'} utf8 flag use Encode:is_utf8(). If they are all utf8 or not, you can concat them. Please provide the output of sprintf "%vX", $value for a string that doesn't work well, and provide what you expect to see for that string. Never use Encode::is_utf8 except in debug statements. Code that relies on its result is guaranteed to be buggy. Thanks @ikegami. I get: 53.C3.B<IP_ADDRESS>E, for a string that I expect to appear as "Søren" but instead is appearing as "Søren". My test script correctly prints "Søren" and "53.F<IP_ADDRESS>E" to the console. It would seem that I'm reading from the database just fine, and that it is the HTTP response encoding that is causing the problem (as Dave is, I think, suggesting below). Problem #1: Decoding inputs 53.C3.B<IP_ADDRESS>E is the UTF-8 encoding for Søren. When you instruct Perl to encode it all over again (by printing it to handle with the :utf8 layer), you are producing garbage. You need to decode your inputs ($record->{id}, $record->{lastname}, $record->{firstname}, etc)! 
This will transform the UTF-8 bytes 53.C3.B<IP_ADDRESS>E ("encoded text") into the Unicode Code Points 53.F<IP_ADDRESS>E ("decoded text"). In this form, you will be able to use uc, regex matches, etc. You will also be able to print them out to a handle with an encoding layer (e.g. :encoding(UTF-8), or the improper :utf8). You let on that these inputs come from a database. Most DBDs have a flag that causes strings to be decoded. For example, if it's a MySQL database, you should pass mysql_enable_utf8mb4 => 1 to connect. Problem #2: Communicating encoding If you're going to output UTF-8, don't tell the browser it's ISO-8859-1! $ perl -e'use CGI qw( :standard ); print header()' Content-Type: text/html; charset=ISO-8859-1 Fixed: $ perl -e'use CGI qw( :standard ); print header( -type => "text/html; charset=UTF-8" )' Content-Type: text/html; charset=UTF-8 Absolutely correct. When I removed the binmode(STDOUT, ":utf8") and used Encode::decode('UTF-8', $record->{'lastname'} . ", " . $record->{'nickname'}) - all was well. Thank you for your help. I'm just starting to learn how encoding and character sets work so I appreciate the patience. NO!!! Keep binmode(STDOUT, ":utf8"). You need to encode your outputs! Retaining binmode(STDOUT, ":utf8") causes the output to revert to garbage though. I'm confused... Then you have an additional problem to fix you mentioned nothing about. I edited my post with a more comprehensive code snippet. I can't think of what else to provide, but I do appreciate your help. Thanks. Thank you! Inserting -type => "text/html; charset=UTF-8" did it. It was strange because we have that built into the page's template, and yet typing document.characterSet in Chrome's console still yielded windows-1252. Your fix corrects it, but I still wonder why the browser wasn't picking up our tag in the HTML template...but that's a question for another thread! 
The actual header must override the "header-equivalent" tag Hard to give a definitive solution as you don't give us much useful information. But here are some pointers that might help. use utf8 only tells Perl that your source code is encoded as UTF-8. It does nothing useful here. Reading perldoc perlunitut would be a good start. Do you know how your database tables are encoded? Do you know whether your database connection is configured to automatically decode data coming from the database into Perl characters? What encoding are you telling the browser that you have encoded your HTTP response in? I'm not a Perl developer, or even much of a web developer; this just landed in my lap, so I appreciate the direction. The database tables are encoded as "SQL_Latin1_General_CP1_CI_AS". The database connection utilizes DBI...I'm fairly certain that it is automatically decoding, since the same code is writing the proper characters to the console in my test script. That's what leads me to believe that your third question is the key here. How can I determine my HTTP response encoding? To be clear, the HTML header has a charset meta tag, which I take to mean that the browser expects UTF-8, which I think is what you meant by your third question.
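The double-encoding failure described in this thread is easy to reproduce outside Perl. Below is a minimal Python sketch (an illustration added here, not the OP's code) of the same mechanism: the bytes 53 C3 B8 72 65 6E are UTF-8 for "Søren"; decoding them once is correct, while skipping the decode and letting an output layer encode them a second time yields the mojibake seen in the drop-down.

```python
# Reproduce the thread's bug: UTF-8 bytes that are never decoded get
# encoded a second time on output, producing mojibake.
name = "Søren"
utf8_bytes = name.encode("utf-8")       # bytes 53 C3 B8 72 65 6E

# Correct path: decode the input once; the output layer encodes it once.
decoded = utf8_bytes.decode("utf-8")

# Buggy path: the bytes are treated as if they were already text
# (effectively a Latin-1 decode), so re-encoding them to UTF-8 on
# output shows two characters where there was one.
mojibake = utf8_bytes.decode("latin-1")

print(decoded)    # Søren
print(mojibake)   # SÃ¸ren
```

The same principle drives ikegami's two fixes: decode at the database boundary, encode exactly once at the HTTP boundary, and declare that encoding in the Content-Type header.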
CSS triangle changing display value with Javascript I have got this html <div id="slideMenu"> <div id="slideM1" onmouseover="colorM(1)">Something 1</div> <div id="slideM2" onmouseover="colorM(2)">Something 2</div> <div id="slideM3" onmouseover="colorM(3)">Something 3</div> <div id="slideM4" onmouseover="colorM(4)">Something 4</div> <div id="slideM5" onmouseover="colorM(5)">Something 5</div> </div> and this CSS html, body{ font-family: "Century Gothic", "Apple Gothic", AppleGothic, "URW Gothic L", "Avant Garde", Futura, sans-serif; } #slideMenu { float: right; } #slideM1:before, #slideM2:before, #slideM3:before, #slideM4:before, #slideM5:before { content: ""; position: absolute; top:0; right:210px; width: 0; height: 0; border-top: 32px solid transparent; border-right: 30px solid #602F4F; border-bottom: 32px solid transparent; display:none; } #slideM1, #slideM2, #slideM3, #slideM4, #slideM5 { background-color: silver; height: 54px; width: 200px; text-align: right; position:relative; padding: 5px; color: white; margin-bottom: 2px; } and finally this Javascript function colorM(n) { document.getElementById("slideM"+n).style.backgroundColor="#602F4F"; document.getElementById("slideM"+n+":before").style.display="block"; } Here is jsfiddle http://jsfiddle.net/xN6Da/ . As you can see CSS triangle has default value display: none;. But after it's hovered I want to change it's value to be visible. I tried to use document.getElementById("slideM"+n+":before").style.display="block"; but it still hides the triangle, so how to remove that display: none; from CSS with Javascript? Thank you Pseudo elements (the css-generated ::before and ::after) are presentation-only, and are not (currently at least) available within the DOM. To adjust the styles you'd need to access the style rules in which the properties are defined and change those. Or add a class to the 'parent' element, and use that class to influence the colour. So it's not possible to edit it while it's just one DIV? 
Add/remove a class and do it with a CSS rule; it's extremely easy. Your CSS would be simpler if you gave your slides a common class, so you wouldn't have to list out the "id" values. You cannot modify pseudo classes via Javascript. You will need to change it by adding/removing a class. For instance, by adding CSS like this: #slideM1.show:before, #slideM2.show:before, #slideM3.show:before, #slideM4.show:before, #slideM5.show:before { display: block; } and this Javascript function colorM(n) { document.getElementById("slideM"+n).style.backgroundColor="#602F4F"; document.getElementById("slideM"+n).className="show"; } Here is the fiddle. http://jsfiddle.net/xN6Da/1/ I would recommend removing the IDs and replacing them with a common Class. Currently, every time you add another element, you are going to need to create another ID and add it to the CSS. You posted a link to the original jsfiddle. http://jsfiddle.net/xN6Da/2/ I have finished this. Would you mind telling me how to remove that class again? Thank you. You just need to do something like this. document.getElementById("slideM1").className=""; Setting the className to an empty string. If you're asking how to change style rules via Javascript, this is the code I use: //* Function: putClassAttrib(myclass,myatt,myval) //* Purpose: Add/modify a rule in a given class putClassAttrib = function(myclass,myatt,myval) { // e.g. putClassAttrib(".hideme","whiteSpace","nowrap") var myrules; if (document.all) { myrules = 'rules'; } else if (document.getElementById) { myrules = 'cssRules'; } else { return "Prototype warning: no style rules changer for this browser" } for (var i=0;i<document.styleSheets.length;i++){ for (var j=0;j<document.styleSheets[i][myrules].length;j++) { if (document.styleSheets[i][myrules][j].selectorText == myclass) { document.styleSheets[i][myrules][j].style[myatt] = myval; } } } return ""; } (notice, I am an unrepentant advocate of banner style indenting ;-)... 
http://en.wikipedia.org/wiki/Indent_style#Banner_style
What is the reason for colour in 2,4-Dinitrophenylhydrazine derivatives? 2,4-Dinitrophenylhydrazine is an important laboratory reagent for the detection of the carbonyl (C=O) group. It reacts with the carbonyl group by the typical nucleophilic addition-elimination reaction, and a positive test results in the formation of bright yellow-orange coloured derivatives. I was reading from a source and it says the reason for showing colour is "extended conjugation". Okay, that may be the reason, but what exactly happens that makes them such beautiful bright yellow and orange solids? It would be useful if you could explicitly say at which level you want an explanation. It can be quickly done as "it depends on the electronic energy levels of the molecule and the allowed transitions between them" or it can require much more effort. I would assume that someone doing such a test can more or less answer himself, or not pose this question at all. Do not forget that the more important value of 2,4-dinitrophenyl hydrazones (or of the 3,5-dinitrobenzoates [for alcohols]; the benzenesulfonamides and picrates [for amines], just to mention a few examples of preparing derivatives) is / was less about a slightly varying colour and the determination of the absorption maximum (which may require a spectrometer). It is / was to obtain a crystalline compound of sharp melting point, which subsequently may be compared with reference tables. On the other hand, instead of preparing several derivatives of X, modern spectroscopic techniques have their own benefits.
Memory allocation in Ubuntu I have an Ubuntu operating system with 32 GB RAM. I am running a code that I expect needs much memory (but less than 32 GB). While running the code, I monitor the memory with the "free -m" command. I see that my code just uses around 4 GB of memory, and after that it crashes and returns a memory error. My question is: why does it not use more memory when I have free space? Does the OS put a limit on the memory usage of each process? What's the solution? Is there any configuration option to increase memory usage? Memory can be fragmented, and if you are requesting too large a block, that cannot be met, this may explain your problem. It's possible to have per-process resource limits. See the ulimit command. 4GB, is your program compiled as a 32 bit executable? :) Seeing as how it crashes when it has allocated 4 GB, it sounds very much as if you're running in 32-bit mode. Have you either installed a 32-bit variant of Ubuntu (check with uname -m), or otherwise compiled this program as a 32-bit executable (check your compiled program with the file command)? I am running in 64-bit mode. I wrote a Python code to see what actually happens: I wrote a 'while' loop which is always True, and in each iteration I appended an integer number to a list. I monitored the memory; it just used 2 GB RAM and 400,000,000 iterations, and returned "MemoryError". Is the program in the question a Python program? @MOH: Also, just to be sure, how did you verify that you're running in 64-bit mode? In the question it was a C code, but for testing I used a simple Python code; uname -m returns i686 @MOH: If uname -m returns i686, then you are indeed not running in 64-bit mode. If you were, it would say x86_64. So, is this caused by running in 32-bit? @MOH: Yes. 32-bit mode implies 32-bit pointers, which implies a 4 GB address space (some of which is used by the kernel, so even less in practice). You can't use more memory than you have address space for.
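As the comments conclude, the 4 GB ceiling is the address-space limit of a 32-bit process, not an OS quota. Alongside uname -m and file, a quick Python check (an added illustration, not the OP's code) shows whether the running interpreter itself is a 32- or 64-bit build:

```python
import struct
import sys

# Size of a C pointer in the running interpreter: a 32-bit build caps
# the process at a 4 GB address space regardless of installed RAM.
pointer_bits = struct.calcsize("P") * 8

# Equivalent check: on a 32-bit build, sys.maxsize is only 2**31 - 1.
is_64bit = sys.maxsize > 2**32

print(f"{pointer_bits}-bit interpreter")
```

On the asker's i686 system this would report a 32-bit interpreter, consistent with the MemoryError appearing well before physical RAM is exhausted.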
remote desktop overtaking someone else's disconnected session We have one user account on the remote server. When computer A initiates a remote session, then disconnects from it, and computer B initiates a remote connection, computer B is being connected to A's session. How can I make sure that only the same device can reconnect to its remote session? I'm wondering what exactly you're trying to achieve. I might be naive, but I don't understand the down-side of having a user be able to disconnect from one device and pick up the session from another. Your wording "someone else's...session" leads me to believe you have multiple people connecting to one user account? Sorry for confusion. We have 25 clients and only one server. There is no need for them to have their own accounts so we just set up one local username on the server. The problem is that we would like them to actually be owners of their sessions, and not have other people be able to connect to "someone else's session." If it's a single user account I don't think you can. You need to make sure that they log off instead of disconnecting, then they can start a new session with the right account. See here for more info. So even if I am sending an e-mail blast, and I disconnect from the session, there is no way other people can start new sessions without disturbing mine? :( You may be able to, but you would have to configure multiple sessions per user, I have added a link in my answer that might be helpful :)
How to use Service Workers in Google Chrome extensions to modify an HTTP response body? Now that Google Chrome extensions can register Service Workers, how can I use them in order to modify HTTP responses from all hosts, e.g. by replacing all occurrences of cat with dog? Below is a sample code from Craig Russell, but how to use it inside Chrome extensions and bind it to all hosts? self.addEventListener('fetch', function(event) { event.respondWith( fetch(event.request).then(function(response) { var init = { status: response.status, statusText: response.statusText, headers: {'X-Foo': 'My Custom Header'} }; response.headers.forEach(function(v,k) { init.headers[k] = v; }); return response.text().then(function(body) { return new Response(body.replace(/cat/g, 'dog'), init); }); }) ); }); Can't seem to work, I'm getting error Add/AddAll does not support schemes other than "http" or "https" on caches cache.addAll(urlsToCache) Many results here: https://www.google.com/search?q=cache.addAll but not one tutorial seem to be for Chrome Extension... ⨕ The chrome object in serviceworker is all weird. chrome.tabs not possible. Ok after some trial and error I've got it partially pieced. I've posted an answer, see below. ¶ Btw I don't think that cache is a required API here. 
For crossorigin serviceworker see https://stackoverflow.com/q/46760820/632951 Solution ≪manifest.json≫: {"manifest_version":2,"name":"","version":"0","background":{"scripts":["asd"]}} ≪asd≫: navigator.serviceWorker.register('sdf.js').then(x=>console.log('done', x)) ≪sdf.js≫: addEventListener('fetch', e=> e.respondWith(fetch/*btw xhr is undefined*/(e.request).then(r=>{ if(r.headers === undefined /*request not end with <.htm>, <.html>*/){}else{ console.assert(r.headers.get('content-type')===null)/*for some odd reason this is empty*///[ let h = new Headers() r.headers.forEach((v,k)=>h.append(k,v)) Object.defineProperty(r,'headers',{'writable':true}) r.headers = h r.headers.append('content-type','text/html'/*btw <htm> doesnt work*/) //] } return r.text().then(_=>new Response(_.replace(/cat/g,'dog'),r)) }))) Go to ≪page url≫ (≪chrome-extension://≪ext id≫/≪page path≫≫) and see the replacements. Standalone response ≪manifest.json≫ and ≪asd≫ same as above. ≪sdf.js≫: addEventListener('fetch', e=> e.respondWith(new Response('url: '+e.request.url,{headers:{'content-type':'text/html'/*, etc*/}}))) Btw Serviceworker has other events that can be delved into eg: addEventListener('message', e=>{ console.log('onmessage', e) }) addEventListener('activate', e=>{ console.log('onactivate', e) }) addEventListener('install', e=>{ console.log('oninstall', e) }) Alternatives Note that currently serviceworker [–cf webNavigation, webRequest] is the only way to do this because some lamer in Google development team has decided that its "insecure" for response-modifying webNavigation and webRequest. Note Chrome bugs: Extension ≪Reload≫ will not reload serviceworkers. You need to remove the extension and load it back in. Page refresh does not refresh, even with chrome devtools disable cache hard reload. Use normal page visit.
some pictures are not available on 'what is your best programmer joke' like this post Answer to What is your Best Programmer Joke I can't edit the answer to show a previous archived version of the site or anything like that. there is no flagging because of the status of the Question(thread) no way to comment or bring it to anyone's attention other than here. I saw a couple of others on there I know that this question is just there for historical purposes but how should we handle these, if at all? Not important. It's a question here for only historical reasons, some missing images that got link rot won't decrease the usefulness of Stack Overflow to anyone. Not at all. If I had my way, the whole thing would be dead and gone, but there was much hue-and-cry to keep it around for 'historical' purposes. There's no way those images can really be fixed, and no real benefit to doing so. Incidentally, for 'serious' posts, this is one reason why the use of images should be limited... especially externally-hosted. I agree with both of you. but the thing is, that question/answer comes up on Google if you search for programmer jokes, and if someone is going through the whole things and sees broken links, it's not going to look good for the site itself, even though this question should not represent a good template for valid questions. maybe there should be a site for "orphan" questions that have the historical-ness but don't fit with the way the site is supposed to be. @Malachi In my opinion, what you just described is a great reason to nuke the whole thing. Good Discussion, I understand the purpose of making it locked and not wanting to touch it anymore. makes sense to me. thank you. hopefully I have not duplicated a question out there, and this can be a reference if it ever comes up again. 
If you want programmer joke questions, it looks like Quora has you covered: http://www.quora.com/Jokes/What-are-the-most-popular-computer-programming-jokes?redirected_qid=1155502 , http://www.quora.com/Computer-Programming/What-are-some-good-programming-jokes , http://www.quora.com/Jokes/What-are-some-of-the-most-profound-programming-jokes-ever , http://www.quora.com/Jokes/What-are-the-best-A-programmer-had-a-problem-jokes , and there are even more ones than those. You don't. We don't want the users of their site spending their time maintaining the content of a joke thread, which is one of the reasons that it's locked in the first place. Just leave it alone. Spend your time answering good on-topic question, or editing answers with valuable information in them. If it at some point gets so bad that there really is nothing useful there then it could be deleted entirely. agreed, I only ask because of the presence that this locked question has on the internet itself, that it may represent StackOverFlow to people that don't know the site like we do @Malachi That is taken into consideration when considering whether to use a historical lock or just delete. The decision was made at some point that there was enough value added to not delete it. but should we remove the posts that were picture only where the picture is now a broken link so that it doesn't make the site look bad? maybe it doesn't make the site look bad. I just know if I go to a site I have never been before and there are broken pictures, it seems old and outdated to me and detracts from the site I am on. @Malachi As I said; we don't want to spend time curating posts like this. Either the whole thing is bad enough that we want to just delete the whole thing, or it adds enough value, as it stands, to be worth leaving up. Those are the options. FWIW I went through and just deleted the answers that had broken images. 
They're certainly not worth my time to try and find the original image and re-add it, but I think it makes our site look tacky having locked content with dead images in them. Against Presents a bad image of the Stack site from broken images & dead links, especially considering what Stack Overflow is all about. It being there sends an unclear message of what content the site actually allows. While the close message is quite clear, it's not necessarily going to be read, or even noticed as a visitor may be linked to one of the many answers beyond the close reason. Maintaining these questions will be futile as replacing broken images, links, etc will be an indefinite maintenance task as links continue to break (including fixed ones). There is no real place for these questions, and the site structure and community modding cannot manage them. For It's a huge post and so many potential keywords and images to bring incoming visitors. People finding this from Google or other sites may eventually "wander" to other parts of Stack Exchange and sign up/participate. It is a bit of fun. While Stack doesn't promote this, a little bit especially "from the past" is cheery and shows the owners/mods/staff etc are not stuffy*. They're usually from a time when it was allowed (a bit more). They're a bit of fun (I know I mentioned this one, but I thought that it was such a big one that it was worth mentioning twice). . * I didn't say they're stuffy, but there are plenty of sites and blogs slating Stack for "their hard edged ways", and fun and quirky questions being allowed to remain shows that where normal play is resumed, staff/mods are simply pushing to provide a decent and strict Q&A site, and not stuffy at all. Problem The points against are valid and quite detrimental to certain key aspects of Stack, however the points for are also quite valid. So it certainly needs addressing, not leaving, although certainly not just "nuking" (especially with the potential of inbound traffic). 
Suggestion Malachi's suggestion was a good thought and would be perfect - to have a historical place for these things - but this is likely way to much dev time for what would be a few of these historical questions. Ideally you'd have a message with fixed positioning so whatever part of the (usually huge) question the visitor hits they would see the message, however this is of course a fair bit of dev time again. Something like: This sort of content is no longer allowed to be posted at Stack Overflow! This question has only been allowed to remain for historical reasons and because it's gathered so much attention, and would normally be closed rapidly as "off topic". It is also not part of our maintenance system and so some links may be broken or answers not particularly useful (or funny). Anyone reading it can see clearly it's not allowed, staff aren't stuffy as have left it in place, it's known there are broken links or other issues (so Stack don't look like amateurs etc) and all the above mentioned "for" reasons remain intact. The same message could also be quickly applied to any other "historical" question which has the same issues of being off topic and sending out the wrong message, etc. The historical lock message is already visible at the top of the page, right before the answers start I know, I made that point myself (#3 on Against), but if you do not start at the top and are past that message (ie a link on Google or another website to a joke further down) and scroll down reading through the jokes from there, the message at the top is pointless - ie click this: http://stackoverflow.com/questions/234075/what-is-your-best-programmer-joke/1283200#234170
How fast do they spin astronauts these days? Maximum routine g-training for astronauts in the 21st century? Comments got me thinking about NASA's 20 g centrifuge. Gemini astronauts pushed to 7 or perhaps 8 g's as discussed in this answer but these days with nicely throttleable engines astronauts going to space experience no more than ballpark 3 g under normal conditions. Puzzler: What acceleration are these astronauts experiencing? Has an object ever been put in orbit where the first stage was always at maximum thrust? The NASA 20 G Centrifuge (also NASA page) potentially goes up to 20 g, but I'm guessing they don't do that so much any more. Question: How fast (to what g-level) do they spin astronauts these days? Maximum routine g-training for astronauts in the 21st century? Source Tim Peake flew to the ISS in late 2015. There is a video of him experiencing 8g in a centrifuge here: https://www.bbc.co.uk/news/av/science-environment-35030406/tim-peake-experiences-huge-g-forces-in-a-centrifuge In a normal launch the force of a soyuz is less than 4g, but an emergency might subject the crew to higher than that This 20 g centrifuge was never used to expose astronauts to 20 g. According to this NASA page "additional safety features permit human studies from 1 to 12.5-g" From The Pull of HyperGravity "To produce a centrifugal force of 2-g, the centrifuge spins about 15 revolutions a minute." To reach 20 g the centrifuge should spin a 100 times faster, 25 revolutions per second. For 4 g 1 revolution per second, for 5 g 1.5625 revolutions per second. @Uwe $\sqrt{10}$ times faster!! $F_C = m v^2/r$ @uhoh square root of 10 instead of square of 10 is much better. So only 47.4 revolutions a minute for 20 g. I stand corrected. Is it training or only a test? To get a training effect, about 2 to 4 repetitions per week should be done for at least a month or two. 
The only centrifuge training received by Shuttle astronauts was a 3g ride in the Brooks Air Force Base facility as new astronauts, followed by optional use of the facility to verify ascent/entry suit fitting. Description by Clay Anderson from here Shuttle training sent us to a San Antonio Air Force Base for a single ride in their centrifuge. The Shuttle's ascent and entry profiles were flown, to give us the exposure to what 2-3 g's would feel like. Actually no big deal, and just "checking a box." Brooks AFB closed in 2011. Anderson goes on to comment about Soyuz centrifuge training: In Russia I flew their centrifuge as well. Since I was a ShREC (Shuttle Rotating Expedition Crew Member), I only did the Soyuz re-entry centrifuge profile, in a manner similar to that of the shuttle, pulling the requisite number of g's at the appropriate times. However, we also did some separate runs, which reflected a ballistic Soyuz re-entry profile. This re-entry, a contingency; is extremely dynamic. We pulled 8 and 10 g's for short periods of time, reflecting what would be experienced in the event of a failure driving us into that mode. (excerpt from Shuttle Crew Training Catalog)
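The rpm figures debated in the comments follow from $F_C = m v^2/r$: at a fixed radius, centripetal acceleration grows with the square of the rotation rate, so the spin rate scales with the square root of the target g-level. A small Python check (added for illustration) using the quoted baseline of 15 rpm for 2 g reproduces the corrected numbers:

```python
import math

BASE_RPM, BASE_G = 15.0, 2.0  # figures quoted above for NASA's 20 G Centrifuge

def rpm_for(g_level):
    # a = omega**2 * r at fixed radius, so rpm scales with sqrt(g-level)
    return BASE_RPM * math.sqrt(g_level / BASE_G)

print(round(rpm_for(8.0), 1))    # 30.0 rpm for Tim Peake's 8 g ride
print(round(rpm_for(20.0), 1))   # 47.4 rpm, matching the corrected comment
```

This is only a scaling estimate at constant radius; a real centrifuge run would also account for the 1 g of gravity and the rider's position along the arm.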
Using login for specified user that are able to delete their own table This is my database lecturer table name persons ID FirstName LastName Email 100 alex alex - 102 ali ali - This is my table user_subject ID Subject 100 AAA 100 BBB 102 AAA 102 CCC This is my drop_subject.php <html> <head></head> <body> <?php $conn = mysql_connect('localhost','root','password'); mysql_select_db('lecturer'); $query = 'SELECT persons.ID , persons.FirstName , user_subject.subject FROM persons INNER JOIN user_subject ON persons.ID = user_subject.ID ORDER BY persons.ID'; $result = mysql_query($query) or die(mysql_error()); ?> <table width="400" border="0" align="center" cellspacing="1" cellpadding="0"> <tr> <td><form name="form1" method="post" action=""> <table width="400" border="0" cellpadding="3" cellspacing="1" bgcolor="#CCCCCC"> <tr> <td bgcolor="#FFFFFF">&nbsp;</td> <td colspan="4" bgcolor="#FFFFFF"><strong>Drop subject</strong> </td> </tr> <tr> <td align="center" bgcolor="#FFFFFF">#</td> <td align="center" bgcolor="#FFFFFF"><strong>ID</strong></td> <td align="center" bgcolor="#FFFFFF"><strong>Subject</strong></td> </tr> <?php while($row = mysql_fetch_assoc($result)) { ?> <tr> <td align="center" bgcolor="#FFFFFF"><input type="checkbox" name="data[]" value="<?php echo $row['ID'];?>" /></td> <td bgcolor="#FFFFFF"> <?php echo $row['ID']; ?> </td> <td bgcolor="#FFFFFF"> <?php echo $row['subject']; ?> </td> </tr> <?php } ?> <tr> <td colspan="3" align="left" bgcolor="#FFFFFF"><input type="submit" value="Delete Checked Rows" /></td> <td colspan="2" align="right" bgcolor="#FFFFFF"><a href="user_subject.php?pressed=subject">Back</a></td> </tr> <?php if(isset($_POST['data'])) { $del_query = "DELETE FROM user_subject WHERE No IN ("; foreach($_POST['data'] as $data) { $del_query .= "'" . (int) $data . "',"; } $del_query .= "'')"; mysql_query($del_query) or die(mysql_error()); header("Location:" . 
$_SERVER['PHP_SELF']); } mysql_close() ?> </table> </form> </tr> </table> </body> </html> The problem is when I want to drop subject if I access as Ali I am able to drop the subject of alex. What I want to do is like when alex access to the drop_subject.php the system only appear his own data which is ID 100 subject AAA and ID 100 subject BBB and he is able to drop his own subject only What should I use to restrict specific user to access their own database not everyone's database? Your code is vulnerable to SQL injection! Take a look at the line $del_query .= "'" . (int) $data . "',";. Imagine what happens if someone POSTs the following data: 123'); DROP TABLE user_subject; --. You really should get that sorted out. Yeah, I agree to fix the part which allows SQL injection, as a good practice. You can read more at http://www.php.net/manual/en/security.database.sql-injection.php @dan: (int)'123'); DROP TABLE .. would result in just 123 anyways... the string's being type-cast to an int, and the sql bits would get removed. @MarcB - you're absolutely right. I didn't notice that part. But I don't think this is an ideal way to protect against SQL injection. I don't know how php's casting works, but wouldn't some casts potentially result in NULL values that might screw up the query string? @dan: (int) could potentially result in a php null, but concatenation a php null into a string results in an empty string. Since OP's got sql-level quotes around the potentially empty string, that php null would just result in '', which would be a valid empty string in sql The steps taken should be Able to capture the user ID of the user currently accessing the page; Filter the result with a WHERE statement together with the ID: WHERE id = <the id of the user>. For example, if you pass the current access user's id via query string in the following format: drop_subject.php?user_id=100, then in your PHP code, you can first retrieve the user id by doing $user_id = $_GET["id"]. 
After that, you can use that info to filter the result by doing: $query = 'SELECT persons.ID , persons.FirstName , user_subject.subject FROM persons INNER JOIN user_subject ON persons.ID = user_subject.ID WHERE persons.ID = '.(int)$user_id.' ORDER BY persons.ID';. Please correct me if I'm wrong. I followed what you stated and I got an error, "undefined index: ID", even when I change the URL to http://localhost/Project/testing.php?user_id=100. But if I hard-code $user_id = 100; I get only user 100's data. Is there a way to make this work for one-to-many users? Thank you. Sorry for the mistake. There is a typo in my code; it should be $user_id = $_GET["user_id"]. So, if you want to show another user's data, for example User 102, simply update the query string to ?user_id=102. Is this what you are looking for? Thank you for your reply. What I am trying to do is let each user read only their own data: when I log in as Alex, I want only Alex's rows to appear in a multi-user table like user_subject. Now I am trying to figure out how to produce testing.php?user_id=100 by just clicking "Alex" in the web page. For a simple HTML sample, you can just have <a href="testing.php?user_id=100">Alex</a> and <a href="testing.php?user_id=102">Ali</a>. Of course, from there, you can use a for loop to generate a dynamic list of users also.
common-pile/stackexchange_filtered
2 Lines Causing Thousands of JavaScript Errors On this page: https://www.airsyspro.com there is a script for a sticky header that seems to have an infinite loop that is causing thousands of errors in the console. Any ideas on how to fix this issue? You'll see in the console that it appears on lines 2216 and 2245. JS is not my expertise, thanks in advance. I think it all starts with the $('.hpg-sticky-bar'); it can't find an element with that class. Therefore the 'original' class will never get added to the element. The function stickIt() wants to get the top value of an element with the class 'original'; as a result, the error occurs. Because 'scrollIntervalID' fires every 10ms, you will get the error many times. Solution: wrap this script in a check like if ($('.hpg-sticky-bar').length) {} Thanks for the response. On line 358 there is a class named .hpg-sticky-bar; is it not seeing it? Do you know why? An element with this class is not present in the source code. Try in the console, for example, jQuery('.hpg-sticky-bar').length; it returns 0. That's a good point. Check out this page: https://www.airsyspro.com/products/ The element does exist there and it looks like the error is still triggering. Sorry, I still don't see an element that has class="hpg-sticky-bar" in the source code, also on this products page. If you want the errors to stop, wrap the script in an if ($('.hpg-sticky-bar').length) statement :) Any way you'd be willing to show me how that would be applied to the code?
I'm sorry for asking, I'm very new to JS. I've added the code as an answer below.

if ($('.hpg-sticky-bar').length) {
    $('.hpg-sticky-bar').addClass('original').clone().insertAfter('.hpg-sticky-bar').addClass('cloned').css('position','fixed').css('top','0').css('margin-top','0').css('z-index','500').removeClass('original').hide();
    scrollIntervalID = setInterval(stickIt, 10);

    function stickIt() {
        var orgElementPos = $('.original').offset();
        orgElementTop = orgElementPos.top;
        if ($(window).scrollTop() >= (orgElementTop)) {
            // scrolled past the original position; now only show the cloned, sticky element.
            // Cloned element should always have same left position and width as original element.
            orgElement = $('.original');
            coordsOrgElement = orgElement.offset();
            leftOrgElement = coordsOrgElement.left;
            widthOrgElement = orgElement.css('width');
            $('.cloned').css('left',leftOrgElement+'px').css('top',0).css('width',widthOrgElement).show();
            $('.original').css('visibility','hidden');
        } else {
            // not scrolled past the menu; only show the original menu.
            $('.cloned').hide();
            $('.original').css('visibility','visible');
        }
    }
}

It can be as simple as the example above. Thank you for your expertise. You are a true professional. I can't express how much I appreciate you taking time out of your day to help me. Thank you so much! I think that the problem is this: var orgElementPos = $('.original1').offset(); There is not a single element with class 'original'. The var orgElementPos is null. Also check this: $('.msb-sticky-bar').addClass('original1') the class msb-sticky-bar is missing. On line 2208 there is .addClass('original') The sticky header works so I believe that class exists. Anything else you'd suggest? Thanks in advance. Can you check the current page? That class msb-sticky-bar is missing! Can you check that the code in 2208 is executed? Can you check if orgElementPos is null?
ForkJoin, repeat a specific observable call until desired value is retrieved before proceeding with subscription I am using ngrx store selectors on start of my component to retrieve data from my storage; these data may change at any time through store dispatches from web sockets. I am using a forkJoin on start of my component to retrieve these data from the selectors and use them appropriately. Code example my-component.ts

public ngOnInit(): void {
  forkJoin([
    this.myNgrxStore.select(storeSelector1).pipe(take(1)),
    this.myNgrxStore.select(storeSelector2).pipe(take(1)),
    this.myNgrxStore.select(storeSelector3).pipe(take(1)),
  ]).subscribe(result => {
    // proceed with data as result[0], result[1], result[2];
    // call randomMethod only if result[1] is equal to '50',
    // else try to retrieve the value from the store again till it becomes '50'
    this.randomMethod(result[0], result[1], result[2]);
  });
}

However, I want to proceed with my randomMethod in the subscription of forkJoin only when a specific value is returned from storeSelector2, for example only if the response of storeSelector2 is equal to '50'; if it is not, add a delay (of, let's say, 1000ms) and retry retrieving that value. What is the best approach to achieve my scenario? (I feel that there should be more than one.) Thanks in advance. There's something similar here: Using RxJS retryWhen. How about filtering the stream for a particular value that you are interested in? @RiteshWaghela thank you, a follow-up: if I understood correctly you propose to do something like the following: this.myNgrxStore.select(storeSelector2).pipe( filter((response) => response === 50), take(1) )? Would something like that work? @Laurence thanks for your answer, the retryWhen option seems very interesting @NickAth Yes, that should work imo, as forkJoin will emit once all of the observables have completed. Or you can also map your response from forkJoin, check the condition, and return the array when your desired value has arrived.
An example using map:

const requestArray = [];
requestArray.push(of(1));
requestArray.push(of(2));
requestArray.push(of(3));
requestArray.push(of(4));

forkJoin(requestArray).pipe(map(result => {
  if (result[1] === 2) {
    return result;
  } else {
    return [];
  }
})).subscribe(result => {
  if (result.length) {
    // do what you want
  }
});

Thank you for your code example; unfortunately this does not fit my case, since what I want is: while this.myNgrxStore.select(storeSelector2) returns a response different from 50, retry this.myNgrxStore.select(storeSelector2), and when this is achieved, proceed with the forkJoin subscription. Any idea for that?
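For the follow-up above: the idiomatic RxJS route is the one already suggested in the comments, piping filter((v) => v === 50) and take(1) on the selector so that forkJoin only completes once the desired value has been emitted. Purely as a framework-free illustration of the "delay and retry" wording of the question, here is a small polling sketch; the names waitForValue and read are hypothetical stand-ins, not part of RxJS or NgRx:

```javascript
// Framework-free sketch of "retry with a delay until the desired value arrives".
// `read` is a hypothetical synchronous accessor standing in for the store selector.
function waitForValue(read, isDesired, delayMs, maxTries) {
  return new Promise((resolve, reject) => {
    let tries = 0;
    const attempt = () => {
      const value = read();
      tries += 1;
      if (isDesired(value)) {
        resolve(value); // desired value arrived: proceed
      } else if (tries >= maxTries) {
        reject(new Error('value never reached the desired state'));
      } else {
        setTimeout(attempt, delayMs); // wait, then retry the read
      }
    };
    attempt();
  });
}
```

A subscription body could then wait on such a helper before calling randomMethod, though reacting to emissions via filter is generally preferable to polling on a timer.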
could not convert ‘0l’ from ‘long int’ to ‘MemoryManager’

MemoryManager openMemory() {
    if (...) {
        return memory_manager_instance;
    } else
        return NULL;
}

MemoryManager is the name of a user-defined C++ class. The function definition above gives me the error in the title. Basically I don't want to return an instance when the condition does not hold. Such a function definition is valid and is what I usually do in Java, but it doesn't seem to work in C++. What should I do to tackle this? Java has only pointers; that's why you can return a NULL object. In C++ your code requires a conversion constructor in MemoryManager, which doesn't exist or is explicit. You probably meant to return a pointer, not a value. It's also possible you might need to throw an exception or return a default-initialized object -- there is no way to tell from your question. Read mkb's answer so that you understand what is happening. Then, returning NULL in this particular instance is probably not what you want anyway - you should probably throw an exception (as @Gene says). Have your function return a MemoryManager * (perhaps using one of the several smart pointer classes available in the C++ library or in Boost). If MemoryManager is the name of a class, then this function as written returns a copy of memory_manager_instance. This is different from Java, where a variable of type MemoryManager would be a reference to an object. EDIT: Further, it looks like you are trying to implement a singleton. You'd want to make the default constructor, copy constructor, and assignment operator for MemoryManager private or protected. The first two are in Java as well, but not the last!
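To make the advice above concrete, here is a minimal sketch of the idioms the answers mention: returning a pointer (where nullptr plays the role of Java's null), returning std::optional (C++17, an additional modern option not named in the thread), and throwing an exception. The MemoryManager body and the available flag are hypothetical stand-ins for the asker's class and condition:

```cpp
#include <optional>
#include <stdexcept>

// Hypothetical stand-in for the asker's class.
struct MemoryManager {
    int id = 0;
};

MemoryManager instance{42};
bool available = true; // stand-in for the asker's elided condition

// Option 1: return a pointer; nullptr signals "no object".
MemoryManager* openMemoryPtr() {
    if (available) return &instance;
    return nullptr;
}

// Option 2: return std::optional to signal "no value" by value (C++17).
std::optional<MemoryManager> openMemoryOpt() {
    if (available) return instance; // returns a copy, wrapped in the optional
    return std::nullopt;
}

// Option 3: throw when the manager cannot be opened.
MemoryManager openMemoryOrThrow() {
    if (available) return instance;
    throw std::runtime_error("memory manager unavailable");
}
```

Returning by value, as in the question, always hands back a copy, so there is no object-shaped "null" to return; the pointer and optional variants encode absence explicitly in the return type.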
Are These Weaknesses Too Great For The Void? I came up with a class of mystical beings called Void. Void used to be human but were transformed into another species by the energies released by the Legendary Dark Dragon (Curse) when he died from a meteor. They look like people, sort of: they're mostly featureless (they have joints, fingers, arms, and so on, but lack the complete detail of a human being), have hands like talons, fangs instead of teeth, sunken voids for eyes (same shape, different look), and their lower half is replaced by a frill of shadows with ragged edges. The Void are like ghosts in that normal objects and weaponry pass right through them. Even most energy weapons have no effect; the most extreme example being that a nuke doesn't affect them. The reason is that they exist on a different plane as beings of darkness, which leaves them vulnerable only to elemental light, dark, fire, or electric attacks. My problem is that fire and electric attacks are rather common amongst mages in general (come on, how ubiquitous is the mage fireball or lightning bolt?) and I want the Void to be almost invulnerable, beings of incredible dark power like the Ringwraiths. However, light is inseparable from fire and electricity; the creation of either fire or electric energy results in the presence of light as well. So my question is: is a weakness to (elemental, necessarily magical in nature) fire and electric attacks too great a weakness for the Void to be the bosses I want them to be? Please Note: Voids are immune to nukes because they aren't the right kind of energy; they can only be harmed by energy that's on the right wavelength. In other words, since they are a manifestation of elemental or magical darkness, they can only be affected by magical attacks of the types listed as weaknesses above. Also, for those who wonder how many mages exist in this world, they comprise about 25% of the population.
An atom bomb creates a fireball of immense temperatures, flashes of light brighter than the Sun, and an EMP. How are shadow creatures not affected if they're vulnerable to fire, light, and electricity? I think you mean they are immune to the physical devastation of a nuke, but don't want to assume. Under the "Please Note" I edited in, I answered your question. It's not just that they're immune to the physical devastation of a nuke; they're immune to the light, heat, and electromagnetic pulse because it's not magical in nature and therefore doesn't even touch them. It's like a ghost: would a ghost be affected by a nuke? That answers my question well. Thanks. Glad it helped! I hope you answer my question. So the ordinary mob with pitchforks and flaming torches won't touch them; it requires a mage? If so, how common are mages? Mages would compose about 25% of the population. I once read a book where the only thing that could kill the main villain was a magical glass knife. Of which only one existed. So a flame or electricity made of magic would be quite common in comparison. Exactly, and my question is whether magical flame or electricity is too common to be a good weakness for what's intended to be a nigh-invincible boss species. This is a story-based question (sorry). The limited weaknesses are only too great if your adversary's strengths and weaknesses are too unbalanced. Such a creature would be invulnerable to anything a modern-day soldier could do - but they might not be invulnerable to what a good scientist could do. In fact, once the "specific wavelength" of light is found, they're suddenly completely at the mercy of EM emissions. But, as I said, this all depends on whether or not your story is written to make them actually invulnerable by denying discovery of the weakness to the adversary. Remember: when you ask about a rule, the rule exists independent of all stories. I.e., "Can my Void be invulnerable to all but one frequency of EM radiation?"
Plausible answer: "Yes, but not just a single number. That's not how nature works. It would be more realistic to have a Gaussian distribution centered on the target frequency." If you want to know if that's too strong an advantage, you must also describe the adversary so we can judge between the two rules - not the story. You could make your creatures only vulnerable to mundane fire and lightning. This way, wizards blast it with their fireballs and lightning bolts, see that they're unaffected, and conclude that they're immune to all lightning and fire. @JBH -- I usually don't disagree on such matters, but I find myself in vehement disagreement with your premise. Oftentimes I can see the argument from the other person's perspective, but in this case, I really can't see how this is a query of "actions of a character" or a matter of mere narrativity. On the contrary, as I read it, this is a query of fundamental nature, and deep underlying substructures of a world. Don't let the mages and magic fireballs cloud your vision! @elemtilas But how are we supposed to judge if the proposed creature design is "too powerful?" The only metric is one provided by the OP (and I dare someone to tell me we can judge this question by real life... grrrr....). As I said, a creature is only too powerful if its adversaries are too weak. That kind of comparison isn't worldbuilding, it's storybuilding. Think about it. "My creature can withstand nuclear blasts, is it too powerful?" answer: "only if its adversaries can't throw something more powerful than a nuke." @elemtilas -- I think the answers given thus far (my own and AngelPray's) address this concern well, though coming from different perspectives. It's not a question of what they can throw that's "more powerful than a nuke", but rather how they can access the wherewithal to sufficiently counteract the enemy.
In other words, you don't necessarily have to utterly discombobulate an enemy's subatomic particles; you just have to apply sufficient pressure to his walnuts and he can be easily overcome. @elemtilas The existing answers aren't answering the question (at least not any question I can find in the OP's post). At best, they're rationalizing why the question needn't be asked. They're little different than a frame challenge, and that's not a great example of how answerable a question is. So far I'm keeping my own counsel on this one. @JBH -- No worries! I do see how the answers can be understood in terms of a frame challenge, and I for one am okay with that. They are valid answer types. It wasn't my intention to create one so much as to work with what the OP revealed in devising a solution. You're thinking about this wrong. Voids aren't creatures of darkness, they're creatures of *darkness*. They're not weak to light, they're weak to *light*. That sounds like nonsense, so let me explain. The word darkness has two (relevant) meanings: (1) the physical absence of photons, and (2) the antithesis of "life"/"good"/holiness/the universal creative essence. So naturally they aren't weak to fire or electricity, even fire/electricity of a magical nature. Flames can burn down cities, lightning can kill. But of course both can be used to create too: warm houses, power civilizations. Physically they are complex phenomena. Magically, they involve multiple layers of abstraction to generate. Thus they are unsuitable to oppose the (mostly) pure expressions of metaphysical darkness that are void entities. Instead you need something deeper, simpler, and closer to the creative source: the Light. I don't know much about your magical system. It could be a separate element: holy. It could just be a "powered up" version of normal magical electricity or magic. Sacred flames instead of normal (even magical) fire. Bolts of divine retribution instead of normal lightning.
Note that this doesn't actually require the participation of a god or even any "objective goodness" in your universe. I mean, from the Void's perspective, mages who try killing them are evil. You just need to have a form of magical "light" (in the metaphysical sense) that is different from light in the mundane sense involving photons and such. Coo, excellent answer that! That's a good point.... I could overcome the potentially devastating weakness of fire and electricity by having the Void's weakness be "divine" magic. Thank you, this is very helpful!

Dimensionality is Key

These creatures are natural to and inhabit a different "plane", which we take to indicate a different area of dimensional space-time. For example, the inhabitants of Flatland inhabit a two-dimensional area of space-time. These creatures inhabit a three-or-more-dimensional area of space-time. (Note that we're really only interested in the spatial dimensions here.) When a 2D being meets a 3D being, he cannot conceive the true nature of the 3D being, as he can only see the 2D "shadow" of the 3D reality. Should he meet a Cube, he can only examine and come to know it as a Square, because he can only conceive of the perimeter, the angles, the lengths of the line segments, but not the depth or height. Should a 2D mage cast a fire circle (because in 2D space, they don't have balls) at a 3D Cube, the fire circle can only interact with an object or person of L x W (but not H, because in Flatland, height always equals zero) and thus literally cannot interact with the Cube at all, because the Cube has a dimension of height and thus exists outside of the 2D mage's conceptual world. It may seem to her that the fire circle fully engulfs the Cube, and surely must destroy it! But no! Once the energy is expended, the Cube remains unharmed, and indeed totally unaffected.
We suppose one might posit that, technically, 0 mm of the Cube's outer skin is affected by the fire circle -- how much burn damage would that be for a 3D being? And so it will be for these 3+ dimensional creatures. The fundamental issue here, as it is in Flatland, is that mages are applying the wrong kind of energy. Magic, as is well known, is an entity of the plane or world from which it springs. The 2D mage's magic fire circle is an entity of 2D space. An ordinary wizard’s 3D fireballs and electric attacks are entities of 3D space. Wrong kind of energy indeed! Their fireballs and lightning attacks will ultimately be of no use against a creature whose existent form is within more than three dimensions of space. It may be that a 3D attack can “push” one of these creatures out of 3D space, or far away within 3D space for a while, because the creature's shadow is cast within 3D space and thus, in some ways, may interact with it, but ultimately can have no lasting effect on the creature. So, how can a 3D mage seriously attack a 3+D entity? She is going to have to figure out a way to manifest herself within 3+D space-time. A shadow of a 3+D being can be cast upon the 3D world (plane), just as the 3D being casts a shadow upon the 2D world of Flatland. Interaction of a kind can take place. A mage will have to work out how to access a kind of magic that will reverse the process. This is very advanced and esoteric dwimmery! The mage who can successfully manifest herself within 3+D space-time can thus access & use such magical force as exists in that plane and bring it to bear on the creature, more or less on its own terms. But there is a price to be paid! Magic ... She’s already dabbling in the dangers of magical forces within a 3D world, and that's folly enough.
She should be aware of the further dangers of extending her own 3D body into 3+D space: she might flip over, losing her grasp on 3D space entirely, thus becoming lost in 3+D space; she might become splunch, with half of her being wandering disconnected in each plane; she might end up losing control of a magic she has no native understanding of and wind up smeared across seventeen discontiguous dimensions of space-time. Very dangerous indeed!
Office 365 Shared Calendar without invitations or reminders? I'd like to create a calendar in Office 365 Outlook which can be added to by several users, but produces no invitations or reminders. Is this possible to achieve? The users check this particular calendar regularly and the invitation/reminder popups become very distracting in addition to their regular personal calendars. What did O365 support say about this? After a couple of days more research plus 3 calls to O365 support... In Outlook, click on 'Folders' and highlight the group which contains the shared calendar. In the Ribbon, click 'Home > Membership > Unsubscribe'. The user will no longer receive invitations or reminders for that calendar but will still be able to view/add/edit as permissions allow. NB. The organizer of an appointment will always be automatically attending the appointment and therefore it appears in their personal calendar, meaning that they still get reminders. Currently it's not possible to prevent this behaviour, and the organizer must delete the appointment in their personal calendar in order to prevent reminders popping up. I don't see "Membership" in the calendar Home ribbon within O365? I did not get the other answer to work, but it might be about how the calendar admin can disable emails. Individual members can disable emails like this: open the calendar in Outlook; in the ribbon, Group > Group settings > No emails or events.
Example of a bijective continuous self mapping whose inverse is not continuous on a complete subspace of $\mathbb{R}$ I gave an answer to the following question Finding an (easy) example of a bijective continuous self mapping whose inverse is not continuous. In this question the OP asked for a mapping $f: (X,d) \rightarrow (X,d)$ which is bijective, continuous and not a homeomorphism (with $(X,d)$ a metric space). The famous Kavi Rama Murthy then remarked that all the counterexamples are for metric spaces which are incomplete. I gave it some thought and came up with a counterexample where the space is complete. However, I did not manage to make it work within $\mathbb{R}$. So my question is: Does there exist some closed subset $X\subseteq \mathbb{R}$ and a function $f: X \rightarrow X$ which is bijective, continuous (with respect to the subspace topology) and not a homeomorphism? My intuition tells me that it is not possible, as there are at most two noncompact connected components, thus preventing us from playing the game of gluing together connected components to make the inverse function discontinuous. Let me elaborate a bit on this thought. We note that we may wlog assume that $X$ has no unbounded connected components, simply because those would be the only noncompact connected components, and as continuous functions send compact sets to compact sets and our $f$ is bijective, we would have that it sends unbounded connected components to unbounded connected components. Either the image of the unbounded connected component covers an unbounded connected component, or we need to cover a bounded half-open interval by countably many compact disjoint intervals (which is not possible, by a Baire-category argument; see for example https://terrytao.wordpress.com/2010/10/04/covering-a-non-closed-interval-by-disjoint-closed-intervals/).
Thus, unbounded connected components get swapped or fixed, and hence the $X$ with the unbounded connected components replaced by points is a counterexample as well. Hence, $X$ can be taken to be a countable union of compact intervals. On the other hand, it is not possible that $X$ is compact (continuous functions from a compact space into a Hausdorff space are closed, which would make our function a homeomorphism). Furthermore, using again that we cannot cover a half-open interval with countably many disjoint compact intervals, we get that all that $f$ can do is permute connected components (it maps some intervals to another interval and points to points). What does "$f:\mathbb{R}\supseteq X\rightarrow X\subseteq\mathbb{R}$" mean? Is it $f:X\rightarrow X$ (which seems most likely) or $f:\mathbb{R}\rightarrow X$ (which would explain the "$\mathbb{R}\supseteq X$" versus "$X\subseteq\mathbb{R}$" notation)? Since we already know that $X\subseteq\mathbb{R}$, it only makes things messier to include that data in the description of $f$. @NoahSchweber I meant $X\rightarrow X$. I like to keep the inclusions to remind myself that we are in the reals, but if you consider it confusing, then I'll change it. Given that this is the only "formula" it is not particularly messy, is it? :) FWIW in my experience in the notation "$f: A[...]\rightarrow B[...]$" the $[...]$s are the annotations, so "$f: \mathbb{R}\supseteq X\rightarrow X\supseteq\mathbb{R}$" would mean $f:\mathbb{R}\rightarrow X$. (BTW +1, it's a good question.) @NoahSchweber That is interesting, I did not know that this is a standard way of writing. Until now I only saw things like $X\supseteq U \rightarrow V \subseteq Y$ to remind us that $U \subseteq X, V \subseteq Y$, simply to keep in mind which of the many Banach spaces we are talking about right now (and keeping the arrow between the sets we are mapping). Thanks, I like the question as well ;) Oh, interesting - I've not seen that before.
It's quite possible that's more common, my experience is definitely limited, I just find it a little hard to read. @NoahSchweber This might be a bit of a silly question, but the notation you mentioned, in which subject is it common? It seems my intuition was wrong. Indeed, such an example does exist. I always find it a bit strange when people answer their own question, but for once I'll do it myself (I did not know the answer when I posted the question and as you may see on my profile I do not use this as a cheat to gain reputation). After some more thought I realized that one of the things that could go wrong is that the inverse function "sends points to infinity". Namely, if we had $$ Y= \{ 0 \} \cup \bigcup_{n\in \mathbb{N}_{\geq 1}} \left\{ \frac{1}{2^n}\left( 1 + \frac{1}{2} \right) \right\} \cup \bigcup_{n\in \mathbb{N}_{\geq 1}} \left\{ 2^n\left( 1 + \frac{1}{2} \right) \right\}, $$ then we could do some kind of "inversion" around $1$ while fixing the origin. Namely, we want for $n\in \mathbb{N}_{\geq 1}$ $$f(0):=0, \quad f\left( 2^n\left( 1 + \frac{1}{2} \right) \right) := \frac{1}{2^{n}}\left( 1 + \frac{1}{2} \right). $$ Then clearly the inverse of this function (if it was bijective) would be discontinuous at the origin. How do we make this bijective? We apply the trick that we can "create" or "destroy" a point if we add some converging sequence for it, simply by shifting along the sequence. Hence, we define $$ X= \{ 0 \} \cup \bigcup_{n\in \mathbb{N}_{\geq 1}} \left\{ \frac{1}{2^n}, \ 2^n \right\} \cup \bigcup_{n\in \mathbb{N}_{\geq 1}} \left\{ \frac{1}{2^n}\left( 1 + \frac{1}{2^k} \right) \ : \ k\in \mathbb{N}_{\geq 1} \right\} \cup \bigcup_{n\in \mathbb{N}_{\geq 1}} \left\{ 2^n \left( 1 + \frac{1}{2^k} \right) \ : \ k\in \mathbb{N}_{\geq 1} \right\} $$ (the limit points $\frac{1}{2^n}$ and $2^n$ are included so that $X$ is closed and the map $f$ below is defined on all of $X$). We "shift into $\frac{1}{2^n}$" and "shift out of $2^n$". Namely, we define for all $n\in \mathbb{N}_{\geq 1}$ $$ f(0):= 0, \qquad f\left( \frac{1}{2^n} \right) := \frac{1}{2^n}, \qquad f\left( 2^n \right) := 2^n.
$$ and $$ f\left( \frac{1}{2^n}\left( 1 + \frac{1}{2^k} \right) \right) := \frac{1}{2^n}\left( 1 + \frac{1}{2^{k+1}} \right), \qquad f\left( 2^n\left( 1 + \frac{1}{2^k} \right) \right) = \begin{cases} 2^n \left( 1 + \frac{1}{2^{k-1}} \right),& k\neq 1, \\ \frac{1}{2^n}\left( 1 + \frac{1}{2} \right),& k=1.\end{cases}$$ Thus, we found a continuous, bijective map $f: X \rightarrow X$ which is not a homeomorphism. And $X\subseteq \mathbb{R}$ is a closed set and thus complete.
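As a numerical sanity check of the construction (not a proof), one can tabulate a finite truncation of $X$ in exact rational arithmetic and verify that $f$ is injective there, and that the points $2^n(1+\tfrac{1}{2})$, which escape to infinity, are sent to the points $\tfrac{1}{2^n}(1+\tfrac{1}{2})$ accumulating at the origin, which is exactly why $f^{-1}$ is discontinuous at $0$. The helper names a, b, f and the truncation bound N below are ours:

```python
from fractions import Fraction

def a(n, k):
    # a(n, k) = 2^-n * (1 + 2^-k), the "small" branch accumulating at 1/2^n
    return Fraction(1, 2**n) * (1 + Fraction(1, 2**k))

def b(n, k):
    # b(n, k) = 2^n * (1 + 2^-k), the "large" branch accumulating at 2^n
    return Fraction(2**n, 1) * (1 + Fraction(1, 2**k))

def f(x, N):
    """Apply f to a point of X (given as an exact Fraction); N bounds the (n, k) search."""
    if x == 0:
        return x
    for n in range(1, N + 1):
        if x == Fraction(1, 2**n) or x == 2**n:
            return x  # the limit points 1/2^n and 2^n are fixed
        for k in range(1, N + 1):
            if x == a(n, k):
                return a(n, k + 1)  # shift inwards on the small branch
            if x == b(n, k):
                # shift outwards on the large branch; k = 1 jumps across to a(n, 1)
                return a(n, 1) if k == 1 else b(n, k - 1)
    raise ValueError("not a recognised point of the truncation")

N = 6
points = [Fraction(0)]
for n in range(1, N + 1):
    points += [Fraction(1, 2**n), Fraction(2**n)]
    points += [a(n, k) for k in range(1, N + 1)]
    points += [b(n, k) for k in range(1, N + 1)]

images = [f(x, N) for x in points]
assert len(set(images)) == len(images)  # f is injective on the truncation
```

For instance, f(b(n, 1), N) returns a(n, 1) for every n, so the preimages of points arbitrarily close to 0 are unbounded.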
Is Careers for system administrators, too? As per subject: I can't tell from the FAQ whether the Careers site is only programmer-related or if it can be used by/for sysadmins. I suppose it would make sense for (serious) employers to look for professionals on Server Fault as well as on Stack Overflow... Well, there is http://careers.serverfault.com/jobs, but it doesn't look quite the same. In practice it no longer does. The primary focus of the Careers services at this point is on programmers and programming jobs. The job listings are "syndicated" both on Stack Overflow and Server Fault because of the overlap in the audience and also because it's not uncommon for system administration jobs to be posted by employers also hiring programmers. As ServerFault.com continues to grow, we'd like to eventually augment our service offering to give the system administration community a careers place of its own as well.
Hook for when a page template is changed I know that you can remove the editor section of the page editor page ( :/) depending on the template chosen, using add_action( 'load-page.php', 'hide_editor_function' ); (with proper functionality of course). The problem with this though, as you should be able to tell, is that this will only work on a page load/reload, not as soon as the template is changed. As far as I can find, there is no specific hook for this. So my question really is: is there a hook for when the user changes the page template for a page in the admin panel? And if not, what would be the best way to have 'instantaneous' hiding/revealing of the editor (and adding custom meta boxes)? Thank you for your time, Lyphiix If you want to toggle the editor "on the fly", you'll need to revert to a pure JavaScript solution, and only ever "visually" hide it (as opposed to removing it server-side):

function wpse_189693_hide_on_template_toggle() {
    $screen = get_current_screen();
    if ( $screen && $screen->id === 'page' ) : ?>
        <script>
            jQuery( "#page_template" ).change( function() {
                jQuery( "#postdivrich" ).toggle( jQuery( this ).val() === "my-template.php" );
            } ).trigger( "change" );
        </script>
    <?php endif;
}
add_action( 'admin_print_footer_scripts', 'wpse_189693_hide_on_template_toggle' );

The reason you can't keep your hide_editor_function is, whilst this will work to initially hide the editor, once the user saves and reloads the page the editor will no longer be in the source to "toggle". So it always has to be there, even if it's just hidden. Will this work for adding/removing metaboxes as well? Or should I be really stupid and reload the page on template change and do the hiding I am already doing (plus adding/removing meta boxes)? Yes, just change the jQuery selectors. I think reloading the page is overkill and asking for trouble. Yes... As I said, 'stupidly reload'. How do I find the selectors I want?
Just view source or inspect element with your browser to get the id of the metaboxes. For Gutenberg the #page_template selector no longer exists. The best way I could find to select the select box in this case is either .editor-page-attributes__template select or .editor-page-attributes__template #inspector-select-control-0. The latter seems better for specificity, but "inspector-select-control-0" seems like it's generated by the order of the select in that section, so may be less reliable in future. For anyone using the Gutenberg editor, I found that (in addition to my comment on the accepted answer) there's more work which needs doing here. Gutenberg injects children of .block-editor__container on the fly, so it's not enough to amend the selectors to catch the change event; you have to watch for the correct element being inserted. Using MutationObserver (since this is the better practice which replaces the soon-deprecated DOM Mutation Events and has full browser support), this is how I accomplished catching the template change:

$ = $ || jQuery; // Allow non-conflicting use of '$'
$(function(){
    // Create an observer to watch for Gutenberg editor changes
    const observer = new MutationObserver( function( mutationsList, observer ){
        // Template select box element
        let $templateField = $( '.editor-page-attributes__template select' );
        // Once we have the select in the DOM...
        if( $templateField.length ){
            // ...add the handler, then...
            $templateField.on( 'change', function(){
                // ...do your on-change work here
                console.log( 'The template was changed to ' + $( this ).val() );
            });
            observer.disconnect();
        }
    });
    // Trigger the observer on the nearest parent that exists at 'DOMReady' time
    observer.observe( document.getElementsByClassName( 'block-editor__container' )[0], {
        childList: true, // One of the required-to-be-true attributes (throws error if not set)
        subtree: true // Watches for the target element's subtree (where the select lives)
    });
});

Hope this helps someone!
reformat laptop using a different windows disc

My Dad's old laptop is pretty slow and old and he got a new one, so I thought I'd reformat it and reinstall Windows. However, he can't find the original disc it came with. The laptop is currently running Windows Vista, and he only has discs for Windows XP, or the Windows 8 discs that came with his new laptop. Can I use any of them, or do I need to try and find the Vista disc? (I have done this before ages ago with my old laptop, but I had the original discs then, so I don't know what happens if you use a different disc.)

You can only install Windows Vista using a Windows Vista disc. However, you can install a different version of Windows on your laptop if you have a spare copy lying around. Or install Linux, if you are into that.

So you think it will work if I just use a different disc? I know that it won't be Vista if I don't use a Vista disc. I don't especially want Vista.

Yes. Sometimes finding compatible drivers can be harder, but changing Windows versions should not be that big a deal. If you have a Windows 8 or Windows XP serial number it will work.

The laptop probably had Windows Vista OEM. The Windows Vista serial number is not going to work with a Windows 8 or Windows XP installation. From my experience it will not work with a normal version of Vista either. You will need to find a manufacturer's OEM install disc.

What does OEM mean? Can I just use a different serial number? I have a copy of Windows 7 from an old Asus laptop; it has the serial number on the package. Will that work?

OEM means Original Equipment Manufacturer. What that means is that if you had a Windows Vista OEM license on that laptop, it should stay on that machine only. The Windows 7 install/license will work great. I thought you were asking if the Vista license would work with the Windows XP/8 installation.

It doesn't matter that I already used that license on my old computer?

Yes, that does matter.

That's what I thought.
All of the discs I have have serial numbers, but they've all been used on other computers (although the computers they were originally for are pretty much all broken now). This is my dilemma. I don't see any way around it; I thought Dad would have the disc, but he has no idea what he's done with it.

If the old Asus computer is no longer in use, the serial number should work with your dad's computer. I'm making an assumption here, but the Asus laptop's serial is probably OEM too. The Asus installation disc may not work with your dad's computer.

@xxl3ww: Even if the old PC is no longer in use, an OEM Windows license is non-transferable.

Very true; I think Microsoft wants the OEM license to stay with the motherboard.
openmp: recursive task slower than sequential even with depth limit

I have a very big binary tree on which I want to do certain expensive calculations. I want to parallelize these methods using OpenMP and its task pragmas. As a test, I have parallelized the freeTree function and I have created a big example tree. In order to prevent spawning many tasks, I have limited task creation to the top two levels of the tree, so effectively only 4 tasks are created. Below is the minimal working example:

```cpp
#include <chrono>
#include <ctime>
#include <iostream>

class Node {
public:
    int data;
    Node* l;
    Node* r;
    Node(Node* left, Node* right) : l(left), r(right) {}
};

Node* createRandomTree(int depth) {
    if (depth == 0)
        return new Node(NULL, NULL);
    return new Node(createRandomTree(depth - 1), createRandomTree(depth - 1));
}

void freeTree(Node* tree) {
    if (tree == NULL) return;
    freeTree(tree->l);
    freeTree(tree->r);
    delete tree;
}

void freeTreePar(Node* tree, int n = 0) {
    if (tree == NULL) return;
    Node *l = tree->l, *r = tree->r;
    if (n < 2) {
        #pragma omp task
        freeTreePar(l, n + 1);
        #pragma omp task
        freeTreePar(r, n + 1);
    } else {
        freeTree(tree->l);
        freeTree(tree->r);
    }
    // taskwait is not necessary
    delete tree;
}

int main(int argc, char const *argv[]) {
    std::chrono::time_point<std::chrono::system_clock> start, end;
    Node* tree = createRandomTree(22);

    start = std::chrono::system_clock::now();
    #pragma omp parallel shared(tree)
    {
        #pragma omp single nowait
        freeTreePar(tree);
    }
    end = std::chrono::system_clock::now();

    std::chrono::duration<double> elapsed_seconds = end - start;
    std::time_t end_time = std::chrono::system_clock::to_time_t(end);

    std::cout << "finished computation at " << std::ctime(&end_time)
              << "elapsed time: " << elapsed_seconds.count() << "s\n";
    return 0;
}
```

When I run this code, it takes about 0.38 seconds to free the tree. However, if I just call freeTree(root) directly, it only takes 0.2 seconds.
So even though the tree is very big (2^22 elements), and even though in this particular test case the tasks are of equal size, I cannot get a performance increase using the parallel method. Am I doing something wrong? Do you know how I can improve this code? Thanks!

Some tasks are not really parallelizable, because some resources are only accessible by one thread at a time (thread safety). This is the case for dynamic memory: malloc/free is nowadays made thread safe, which means a lock is taken around each malloc/free call. Your four tasks therefore end up serializing on the allocator's lock, so you cannot easily improve this kind of code.

There are specially designed (nearly) lock-free memory allocators that can be used as drop-in replacements for the one provided by (g)libc. Some of them are listed in this question.