d10001
Rather than running the iteration inside a single new thread, iterate first and start a new task in each iteration. I hope this helps. I am posting code for your help:

for (SongDetails songs : songDetails) {
    new DownloadTask(/* pass your song object in the constructor */)
        .executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
}

and call your downloading method in doInBackground.

A: I suggest using DownloadManager or WorkManager for this. Using AsyncTask is a bad idea: it is prone to memory leaks, and by default it runs on only one thread.
d10002
The session cookie stored in the browser simply contains the session ID and does not contain any actual session data. All of that is stored solely on the server, and it would actually be a huge security issue if it were stored client-side. If you delete the session on the server, this deletes all data stored on the server for that session, and any attempt to re-authenticate using the old session ID will fail. If the browser sends an old session cookie back to the server after the session has been destroyed, it will simply be ignored by the server. Therefore there is no need to delete the session cookie from the user's browser, since this will be done automatically by the browser when the window is closed. The recommended way to kill a session is simply to do: $_SESSION = array(); However, with that said, if you want to manually force the cookie to be deleted for whatever reason:

<?php
session_start();
$_SESSION['style'] = $style;
echo "Your style is : " . $style;
unset($_SESSION['style']);
if (ini_get("session.use_cookies")) {
    $params = session_get_cookie_params();
    setcookie(session_name(), '', -1,
        $params["path"], $params["domain"],
        $params["secure"], $params["httponly"]
    );
}
session_destroy();
d10003
The problem here is "directive in modal", as in this thread: Integrating directive in Angular UI Modal. The solution is to load the directive after the modal is rendered (note that the modal instance is stored first, so closeModal can still call close() on it):

c.onWidget2 = function(template, task) {
    c.taskName = task.short_description;
    var initWidget = function() {
        spUtil.get('hrj_task_activity_scoped', {
            sys_id: task.sys_id,
            table: 'sn_hr_core_task',
            recordInfo: task.taskInfo
        }).then(function(response) {
            c.newTask2 = response;
        });
    };
    c.modalInstance = $uibModal.open({
        templateUrl: template,
        scope: $scope,
        size: 'lg'
    });
    c.modalInstance.rendered.then(initWidget);
};

//closeModal for the "x"
c.closeModal = function() {
    c.modalInstance.close();
};
d10004
You don't provide sample data, so I'm generating a sample list of 4 data.frames. lst <- lapply(1:4, function(x) data.frame(one = LETTERS[1:4], two = 1:4)) We add a third column to every data.frame in the list. lapply(lst, function(x) { x$three = letters[11:14]; x }) #[[1]] # one two three #1 A 1 k #2 B 2 l #3 C 3 m #4 D 4 n # #[[2]] # one two three #1 A 1 k #2 B 2 l #3 C 3 m #4 D 4 n # #[[3]] # one two three #1 A 1 k #2 B 2 l #3 C 3 m #4 D 4 n # #[[4]] # one two three #1 A 1 k #2 B 2 l #3 C 3 m #4 D 4 n Note that we need to return x, to get the data.frame with the added column.
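The same pattern (map over a list of tables, adding a column to each, and remember to return the modified table) can be sketched in Python; here the "data frames" are modeled as plain lists of dicts, so this is an analogy rather than a pandas translation:

```python
# sample list of four "data frames", modeled as lists of dicts
lst = [[{"one": l, "two": i} for l, i in zip("ABCD", range(1, 5))]
       for _ in range(4)]

def add_three(table):
    # build new rows with the extra column; returning the rows is the
    # Python analogue of returning x at the end of the R function
    extra = "klmn"
    return [dict(row, three=extra[i]) for i, row in enumerate(table)]

result = [add_three(t) for t in lst]  # like lapply(lst, ...)
print(result[0][0])  # {'one': 'A', 'two': 1, 'three': 'k'}
```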
d10005
Gen.delay(Gen.const(new ObjectId)) delay's argument is by-name, so every attempt to generate a value will construct a new ObjectId.
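The eager-vs-by-name distinction can be sketched outside Scala: wrap construction in a deferred factory so each draw builds a fresh object, instead of capturing one instance up front. This Python sketch is only an analogy to Gen.delay (the names eager_gen and delayed_gen are mine, not ScalaCheck's):

```python
import itertools

class ObjectId:
    # toy stand-in for BSON's ObjectId: unique value per construction
    _counter = itertools.count()
    def __init__(self):
        self.value = next(ObjectId._counter)

# eager: one instance is captured once and returned every time
_shared = ObjectId()
def eager_gen():
    return _shared

# delayed: the factory runs on every draw, like Gen.delay's by-name argument
def delayed_gen(factory=ObjectId):
    return factory()

print(eager_gen() is eager_gen())      # True  - same object every time
print(delayed_gen() is delayed_gen())  # False - a new ObjectId per draw
```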
d10006
Here is one way using the stop and change event:- $('.slider').each(function() { var $el = $(this); $el.slider({ range: "max", min: $el.data('min'), max: $el.data('max'), value: $el.data('value'), step: $el.data('step'), stop: function(event, ui) { var percent = (100 / ($(this).data('max') - $(this).data('min'))) * $(this).slider('value'); $('.slider').not(this).each(function() { $(this).slider('value', (($(this).data('max') - $(this).data('min')) / 100) * percent); }); }, change: function(event, ui) { $($el.data('target')).html('£' + ui.value); } }); }); FIDDLE A: Based on BG101's answer, this is the final result:- jQuery('.slider').each(function() { var $el = jQuery(this); $el.slider({ range: "max", min: $el.data('min'), max: $el.data('max'), value: $el.data('value'), step: $el.data('step'), slide: function(event, ui) { var percent = (100 / (jQuery(this).data('max') - jQuery(this).data('min'))) * jQuery(this).slider('value'); jQuery('.slider').not(this).each(function() { jQuery(this).slider( 'value', ((jQuery(this).data('max') - jQuery(this).data('min')) / 100) * percent ); }); jQuery('.slider').each(function() { var thisTarget = jQuery(this).data('target'); var thisValue = jQuery(this).slider('option','value'); jQuery(thisTarget+' span').html(thisValue); }); }, }); }); JSFIDDLE (just thought I'd share)
d10007
In your above examples you used matplotlib's two interfaces: pyplot vs object oriented. If you look at the source code of pyplot.scatter you'll see that even if you provide 3 arguments, plt.scatter(x, y, z, color='k'), it is actually going to call the 2D version with x, y, s=z, s being the marker size. So it appears that you have to use the object-oriented approach to achieve your goal.
d10008
You may create, let's say, a BEFORE UPDATE OF COL1, ..., COLx trigger on this table with a SIGNAL statement inside. Alternatively, you may revoke the update privilege on this table from everyone and grant update only on the subset of columns needed. A: Another option is to create a view with the subset of the columns you need to update. It may be a little more complex in case you need to rebind your program(s).
d10009
I think the most promising approach that could optimize your example is called supercompilation. There is a paper about supercompilation for lazy functional languages: https://www.microsoft.com/en-us/research/publication/supercompilation-by-evaluation/. In the future work section of the paper the authors state: The major barriers to the use of supercompilation in practice are code bloat and compilation time. There has been work on trying to integrate supercompilation into GHC, but it has not yet been successful. I don't know all the details, but there is a very technical GHC wiki page about it: https://gitlab.haskell.org/ghc/ghc/-/wikis/supercompilation.
d10010
The MAIN sub needs to be declared outside the module, but it still must be able to see process. There are multiple ways to achieve this, eg by not declaring a module at all sub process(@filenames) { for @filenames -> $filename { say "Processing '$filename'"; } } sub MAIN(*@filenames) { process(@filenames); } by making process our-scoped and calling it by its longname module main { our sub process(@filenames) { for @filenames -> $filename { say "Processing '$filename'"; } } } sub MAIN(*@filenames) { main::process(@filenames); } or by exporting process and importing it in the body of MAIN module main { sub process(@filenames) is export { for @filenames -> $filename { say "Processing '$filename'"; } } } sub MAIN(*@filenames) { import main; process(@filenames); } In my opinion the most appropriate option is to add MAIN to the module and import it into the script's mainline. This way, everything declared within the module is visible within MAIN without having to explicitly export everything: module main { sub process(@filenames) { for @filenames -> $filename { say "Processing '$filename'"; } } sub MAIN(*@filenames) is export(:MAIN) { process(@filenames); } } import main :MAIN; Note that this does not export MAIN by default, ie users of your module will only get it if they provide the :MAIN tag.
d10011
The following article should explain much of the process to you. For further reading you can also check out the PayPal developer documentation. Update: Here is an updated example for current version of ASP.NET (4.5 at the time of writing) A: Integrate PayPal into website vb.net * *Open cmd enter "http://www.catalog.update.microsoft.com/search.aspx?q=kb3140245" (install windows update for nuGetPackage) *Open cmd enter reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f /reg:32(Register cmd update) *Register nuGet update in solution of the project. Goto : https://devblogs.microsoft.com/nuget/deprecating-tls-1-0-and-1-1-on-nuget-org/ *NuGet clients and PowerShell reg. *Install Paypal nuGet package in solution *Add Paypal SDK reference in the project. *Add configurations for Paypal. *NuGet manager package JSON uninstall. *Tools -> nuget manager-> package manager console paste Install-Package Newtonsoft.Json -Version 6.0.1, enter. *Add in vb file of startupwizard the code in paypal1. https://drive.google.com/file/d/1HXChrl0XWR_sE_rZCtGAvgWu24C55KEV/view?usp=sharing
d10012
So, we'll have a lot of steps here, but each individual step should be fairly short, self-contained, reusable, and relatively understandable. The first thing we'll do is create a method that can combine expressions. What it will do is take an expression that accepts some input and generates an intermediate value. Then it will take a second expression that accepts, as input, the same input as the first, plus the intermediate result, and computes a new result. It will return a new expression taking the input of the first, and returning the output of the second.

public static Expression<Func<TFirstParam, TResult>> Combine<TFirstParam, TIntermediate, TResult>(
    this Expression<Func<TFirstParam, TIntermediate>> first,
    Expression<Func<TFirstParam, TIntermediate, TResult>> second)
{
    var param = Expression.Parameter(typeof(TFirstParam), "param");
    var newFirst = first.Body.Replace(first.Parameters[0], param);
    var newSecond = second.Body.Replace(second.Parameters[0], param)
        .Replace(second.Parameters[1], newFirst);
    return Expression.Lambda<Func<TFirstParam, TResult>>(newSecond, param);
}

To do this we simply replace all instances of the second parameter in the second expression's body with the body of the first expression. We also need to ensure both expressions use the same parameter instance for the main parameter. This implementation requires a method to replace all instances of one expression with another:

internal class ReplaceVisitor : ExpressionVisitor
{
    private readonly Expression from, to;
    public ReplaceVisitor(Expression from, Expression to)
    {
        this.from = from;
        this.to = to;
    }
    public override Expression Visit(Expression node)
    {
        return node == from ? to : base.Visit(node);
    }
}

public static Expression Replace(this Expression expression,
    Expression searchEx, Expression replaceEx)
{
    return new ReplaceVisitor(searchEx, replaceEx).Visit(expression);
}

Next we'll write a method that accepts a sequence of expressions that accept the same input and compute the same type of output. It will transform this into a single expression that accepts the same input, but computes a sequence of the output as a result, in which each item in the sequence represents the result of one of the input expressions. This implementation is fairly straightforward; we create a new array, using the body of each expression (with the parameters replaced by a single consistent one) as each item in the array.

public static Expression<Func<T, IEnumerable<TResult>>> AsSequence<T, TResult>(
    this IEnumerable<Expression<Func<T, TResult>>> expressions)
{
    var param = Expression.Parameter(typeof(T));
    var body = Expression.NewArrayInit(typeof(TResult),
        expressions.Select(selector =>
            selector.Body.Replace(selector.Parameters[0], param)));
    return Expression.Lambda<Func<T, IEnumerable<TResult>>>(body, param);
}

Now that we have all of these general-purpose helper methods out of the way, we can start working on your specific situation. The first step here is to turn your dictionary into a sequence of expressions, each accepting a MyClass and creating a StringAndBool that represents that pair. To do this we'll use Combine on the value of the dictionary, and then use a lambda as the second expression to use its intermediate result to compute a StringAndBool object, in addition to closing over the pair's key.
IEnumerable<Expression<Func<MyClass, StringAndBool>>> stringAndBools = extraFields.Select(pair => pair.Value.Combine((foo, isTrue) => new StringAndBool() { FieldName = pair.Key, IsTrue = isTrue })); Now we can use our AsSequence method to transform this from a sequence of selectors into a single selector that selects out a sequence: Expression<Func<MyClass, IEnumerable<StringAndBool>>> extrafieldsSelector = stringAndBools.AsSequence(); Now we're almost done. We now just need to use Combine on this expression to write out our lambda for selecting a MyClass into an ExtendedMyClass while using the previous generated selector for selecting out the extra fields: var finalQuery = myQueryable.Select( extrafieldsSelector.Combine((foo, extraFieldValues) => new ExtendedMyClass { MyObject = foo, ExtraFieldValues = extraFieldValues, })); We can take this same code, remove the intermediate variable and rely on type inference to pull it down to a single statement, assuming you don't find it too unweidly: var finalQuery = myQueryable.Select(extraFields .Select(pair => pair.Value.Combine((foo, isTrue) => new StringAndBool() { FieldName = pair.Key, IsTrue = isTrue })) .AsSequence() .Combine((foo, extraFieldValues) => new ExtendedMyClass { MyObject = foo, ExtraFieldValues = extraFieldValues, })); It's worth noting that a key advantage of this general approach is that the use of the higher level Expression methods results in code that is at least reasonably understandable, but also that can be statically verified, at compile time, to be type safe. There are a handful of general purpose, reusable, testable, verifiable, extension methods here that, once written, allows us to solve the problem purely through composition of methods and lambdas, and that doesn't require any actual expression manipulation, which is both complex, error prone, and removes all type safety. 
Each of these extension methods is designed in such a way that the resulting expression will always be valid, so long as the input expressions are valid, and the input expressions here are all known to be valid as they are lambda expressions, which the compiler verifies for type safety. A: I think it's helpful here to take an example extraFields, imagine how would the expression that you need look like and then figure out how to actually create it. So, if you have: var extraFields = new Dictionary<string, Expression<Func<MyClass, bool>>> { { "Foo", x => x.Foo }, { "Bar", x => x.Bar } }; Then you want to generate something like: myQueryable.Select( x => new ExtendedMyClass { MyObject = x, ExtraFieldValues = new[] { new StringAndBool { FieldName = "Foo", IsTrue = x.Foo }, new StringAndBool { FieldName = "Bar", IsTrue = x.Bar } } }); Now you can use the expression trees API and LINQKit to create this expression: public static IQueryable<ExtendedMyClass> Extend( IQueryable<MyClass> myQueryable, Dictionary<string, Expression<Func<MyClass, bool>>> extraFields) { Func<Expression<Func<MyClass, bool>>, MyClass, bool> invoke = LinqKit.Extensions.Invoke; var parameter = Expression.Parameter(typeof(MyClass)); var extraFieldsExpression = Expression.Lambda<Func<MyClass, StringAndBool[]>>( Expression.NewArrayInit( typeof(StringAndBool), extraFields.Select( field => Expression.MemberInit( Expression.New(typeof(StringAndBool)), new MemberBinding[] { Expression.Bind( typeof(StringAndBool).GetProperty("FieldName"), Expression.Constant(field.Key)), Expression.Bind( typeof(StringAndBool).GetProperty("IsTrue"), Expression.Call( invoke.Method, Expression.Constant(field.Value), parameter)) }))), parameter); Expression<Func<MyClass, ExtendedMyClass>> selectExpression = x => new ExtendedMyClass { MyObject = x, ExtraFieldValues = extraFieldsExpression.Invoke(x) }; return myQueryable.Select(selectExpression.Expand()); }
d10013
SharedPreferences.Editor.commit() returns a boolean indicating the status of the write to the actual SharedPreferences object. See if commit() returned true. Also, make sure you're not editing the same SharedPreferences using two Editors; the last editor to commit will have its changes reflected. Update Your code works fine when I run it. I don't see anything wrong in your code. Please make sure you're writing to and reading from the same SharedPreferences. A: Your problem is the declaration of the SharedPreferences; it is all declared but... not initialized! Where should the OS write your key-value data? I suggest you read this: Get a Handle to a SharedPreferences. Try this code; I tested it and it works:

SharedPreferences dialogPrefs = this.getPreferences(Context.MODE_PRIVATE);
final SharedPreferences.Editor dialogEditor = dialogPrefs.edit();
if (dialogPrefs.getBoolean("Show", true)) {
    new AlertDialog.Builder(this)
        .setTitle("Blah")
        .setMessage("Blah blah blah ")
        .setNegativeButton("Not now", null)
        .setNeutralButton("Don't show again", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                dialogEditor.putBoolean("Show", false);
                dialogEditor.commit();
            }
        })
        .setPositiveButton("Enable", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int which) {
                Log.i("TAG", "onClick: enable");
            }
        }).show();
}

A: It should be like this:

if (!dialogPrefs.getBoolean("Show", false)) { // don't show again will work

instead of:

if (dialogPrefs.getBoolean("Show", true) == true) { // this will always show dialog
d10014
You probably need a trailing slash at the end of the URL. Also, your jQuery selector is wrong: you don't need quotes within the square brackets. However, that selector is better written like this anyway: $("input#id_tag_list") or just $("#id_tag_list") A: Separate answer because I've just thought of another possibility: is your static page being served from the same domain as the Ajax call (gladis.org)? If not, the same-origin policy will prevent the Ajax response from being loaded. A: As an aside, assuming your document.ready is in your Django template, it would be a good idea to use the {% url %} tag rather than hardcoding your URL:

$(document).ready(function(){
    $("input[id='id_tag_list']").autocomplete({
        url: '{% url my_tag_lookup %}',
        dataType: 'text'
    });
});

This way the JS snippet will be rendered with the computed URL and your code will remain portable. A: I found a solution, though I still don't know why the first approach didn't work out. I just switched to a different library: http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/. This one is actually promoted by jQuery and it works ;)
d10015
Should be almost exactly the same: count(a/b[@val='tsr']/preceding-sibling::*)+1 Example usage... XSLT 1.0 <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <xsl:value-of select="count(a/b[@val='tsr']/preceding-sibling::*)+1"/> </xsl:template> </xsl:stylesheet> Output: 3
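If you need the same position computation outside XSLT, the count-preceding-siblings-plus-one logic can be reproduced with Python's standard library. ElementTree has no preceding-sibling axis, so this sketch counts siblings directly; the sample XML is my guess at the structure implied by the expression:

```python
import xml.etree.ElementTree as ET

xml = """<a>
  <b val="foo"/>
  <b val="bar"/>
  <b val="tsr"/>
  <b val="baz"/>
</a>"""

root = ET.fromstring(xml)
children = list(root)  # all children of <a>, in document order

# position = number of preceding siblings + 1, like the XPath expression
target = root.find("b[@val='tsr']")
position = children.index(target) + 1
print(position)  # 3
```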
d10016
The main thing you need to keep in mind here is that each time the page is refreshed it has no knowledge of the data that was on the previous page. As was mentioned in a previous comment, persistent storage is what you're looking for. This might come in the form of a full-on (NoSQL/RDBMS) database or in some other semi-permanent form stored only within the browser. You can either start using a full-on database or if you want something more light weight, you can use local storage or cookies. One thing to note is that, unless you implement something server side, eg: a database, the changes will be persistent only to the user and only so long as they don't clear the cache.
d10017
Try this one; it will work. "rotationY" means it will rotate in the Y direction, "rotationX" that it will rotate in the X direction.

ObjectAnimator animation = ObjectAnimator.ofFloat(view, "rotationY", 0.0f, 360f);
animation.setDuration(600);
animation.setRepeatCount(ObjectAnimator.INFINITE);
animation.setInterpolator(new AccelerateDecelerateInterpolator());
animation.start();
d10018
A batch will start whenever the request is sent and end when the last request in the batch is completed. As with any RESTful API, every request comes with a cost, meaning how much/many resources it will take to complete said request. With the batch_write() class in DynamoDB2, they are wrapping the requests in a group and creating a queue to process them, which will reduce the cost as they are no longer individual requests. The batch_write() class returns a context manager that handles the individual requests and what you get back slightly resembles a Table object but only has the put_item and delete_item requests. DynamoDB's max batch size is 25, just like you've read. From the comments in the source code: DynamoDB's maximum batch size is 25 items per request. If you attempt to put/delete more than that, the context manager will batch as many as it can up to that number, then flush them to DynamoDB & continue batching as more calls come in. You can also read about migrating, batches in particular, from DynamoDB to DynamoDB2 here.
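The flush-at-25 behaviour described above is easy to model: accumulate writes in a buffer and flush whenever it reaches the maximum batch size, with a final flush when the context manager exits. This is a simplified pure-Python sketch of the bookkeeping, not the actual boto source; the flushed list just records what a real client would send:

```python
MAX_BATCH_SIZE = 25  # DynamoDB's per-request limit

class BatchWriter:
    """Toy model of batch_write(): buffers items, flushes every 25."""
    def __init__(self):
        self.buffer = []
        self.flushed = []  # list of batches "sent" to DynamoDB

    def put_item(self, item):
        self.buffer.append(item)
        if len(self.buffer) >= MAX_BATCH_SIZE:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushed.append(self.buffer)
            self.buffer = []

    # context-manager protocol: final flush on exit, like the real class
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.flush()

with BatchWriter() as batch:
    for i in range(60):
        batch.put_item({"id": i})

print([len(b) for b in batch.flushed])  # [25, 25, 10]
```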
d10019
No, ending a thread you explicitly created is not the responsibility of the Android framework. You need to override onDestroy(). Here is the javadoc for this method: Called by the system to notify a Service that it is no longer used and is being removed. The service should clean up any resources it holds (threads, registered receivers, etc) at this point. Upon return, there will be no more calls in to this Service object and it is effectively dead. Do not call this method directly. I would also suggest that you make your thread respond to interrupts so that you can end it gracefully.
d10020
I think you are missing a closing bracket: //Add each items in the order _gaq.push(['_addItem', '650', // order ID - necessary to associate item with transaction '29', // SKU/code - required 'bags set of 4', // product name 'Cleaning Supplies', // category or variation '15.99', // unit price - required '1' ]); //right here, missing closing bracket
d10021
There's not much in the way of documentation on Multipeer Connectivity, so these answers are based on my own experiments: * *There are lots of questions inside this one question, but in a nutshell A's session(s) manage invitations that A has sent or accepted. So if B invites and A accepts, the session that A passes when accepting will then be used in the MCSessionDelegate session: peer: didChangeState: callback to say whether the connection happened. At this point the connectedPeers property of A's session will contain B (if the connection succeeded). The session that B used to send the invitation will also get the same callback once A accepts, and will contain A in connectedPeers. Neither A nor B will have any information on the other's session that managed this connection between them. And yes, if you want to send any data to the newly connected peer, you need to keep a reference to the session that handled the connection. I've yet to see any real advantage in creating a new session for each invitation. *Based on the above, B has no information on who A is connected to. So if A is connected to B,C and D, the only peer B knows about is the peer that it connected to - A. What Multipeer Connectivity offers here is the ability for B to discover C or D via A. So if the A-B connection was made over WiFi, and A-C over Bluetooth, but B does not have Bluetooth enabled, and C is on a different WiFi network to B, B and C can still discover each other through A. The process of inviting and accepting is still up to B and C to handle though. *I answered a question about session management here that may be helpful Best option for streaming data between iPhones *B doesn't know anything about A connecting to C. All B can do is discover C for itself and add C to it's own session. A: * *From quickly looking over Chris's answer it seems accurate from what I've been working on at my job. 
*I'm currently working on a game using Multipeer Connectivity and have found that (assuming everyone is on Wi-Fi), if A connects to B and A connects to C, then B will be connected to C and be able to send messages to C. The Multipeer framework is a P2P framework and automatically connects everyone together. Even if (as in my current project) you set it up as a server-client model, all peers will still be connected and you can send messages between B and C without going through A, depending on what you are doing with your app. *Go with Chris's answer. *Addressed in #2. I recommend looking at the example project in the Apple docs and carefully watching what happens in the debug area when you connect. Also, as a side note: Scenario: (A is only on Wi-Fi), (B is on Wi-Fi and Bluetooth), (C is on Bluetooth only). If A connects to B via Wi-Fi, and B connects to C via Bluetooth, A will still be connected to C, but you won't be able to send a message from A directly to C because they are on different networks. A: @SirCharlesWatson: (Assuming everyone is on wifi) if A connects to B and A connects to C, then B will be connected to C and be able to send messages to C. the Multipeer Framework is a P2P framework and automatically connects everyone together. My question is: when B is automatically connected to C, will B and C receive a state-change notification on the existing session? * *A: session:peer:B didChangeState(MCSessionStateConnected) *B: session:peer:A didChangeState(MCSessionStateConnected)
d10022
Implement the method below and set the desired colour:

func collectionView(_ collectionView: UICollectionView, willDisplay cell: UICollectionViewCell, forItemAt indexPath: IndexPath) {
    // Access the label of the cell object and set the desired colour
}

It tells the delegate that the specified cell is about to be displayed in the collection view. https://developer.apple.com/reference/uikit/uicollectionviewdelegate/1618087-collectionview
d10023
How about this?

income_tax <- function(income,
                       brackets = c(18200, 37000, 80000, 180000, Inf),
                       rates = c(0, .19, .325, .37, .45),
                       fixed = c(0, 100, 0, 0, 0)) {
  check <- diff(c(0, pmin(income, brackets)))
  sum(check * rates + fixed * (check > 0))
}

which gives

income_tax(18200)
# [1] 0
income_tax(18201)
# [1] 100.19
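For readers more comfortable with Python, here is a direct port of the same marginal-bracket logic; the bracket tops, rates, and the fixed c(0, 100, 0, 0, 0) offset are taken from the R version above:

```python
import math

def income_tax(income,
               brackets=(18200, 37000, 80000, 180000, math.inf),
               rates=(0, 0.19, 0.325, 0.37, 0.45),
               fixed=(0, 100, 0, 0, 0)):
    tax = 0.0
    lower = 0
    for top, rate, fix in zip(brackets, rates, fixed):
        # income falling inside this bracket, like diff(c(0, pmin(...)))
        portion = max(0, min(income, top) - lower)
        if portion > 0:
            tax += portion * rate + fix  # fixed charge only if bracket is used
        lower = top
    return tax

print(income_tax(18200))  # 0.0
print(income_tax(18201))  # 100.19
```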
d10024
The server time zone 'Maroc' that is being used is invalid. To see what value it is set to, use: SELECT @@global.time_zone; Try setting a default time zone with default-time-zone in the my.cnf file, e.g.: default-time-zone='+00:00' To set it for the current session, do: SET time_zone = timezonename;
d10025
Well, I have sent many emails and never got an answer. If you ask me, you are better off with other alternatives. There is an amazing Flash control I use, http://www.flash-control.net/; it does everything needed to embed Flash: XHTML-valid inclusion, an option to install Flash if it is not available, FlashVars, etc. Then you can choose from the millions of Flash MP3 players and video players on the net which use XML: you simply create the XML in code and hook up any special FlashVars with this control, and you have a great media player. Here is a nice site I always go to for such stuff: www.flashden.net. Hope this helps.
d10026
Someone reported this issue on GitHub. The maintainers aren't hosting the API on Heroku (or anywhere else) at the moment. They've made their source code for the API function available here though. I've extracted the text_to_handwriting function below:

import urllib.request
import string
import numpy as np
from PIL import Image
import cv2

char = string.ascii_lowercase
file_code_name = {}
width = 50
height = 0
newwidth = 0
arr = string.ascii_letters
arr = arr + string.digits + "+,.-? "
letss = string.ascii_letters


def getimg(case, col):
    global width, height, back
    try:
        url = (
            "https://raw.githubusercontent.com/Ankit404butfound/HomeworkMachine/master/Image/%s.png"
            % case
        )
        imglink = urllib.request.urlopen(url)
    except:
        url = (
            "https://raw.githubusercontent.com/Ankit404butfound/HomeworkMachine/master/Image/%s.PNG"
            % case
        )
        imglink = urllib.request.urlopen(url)
    imgNp = np.array(bytearray(imglink.read()))
    img = cv2.imdecode(imgNp, -1)
    cv2.imwrite(r"%s.png" % case, img)
    img = cv2.imread("%s.png" % case)
    img[np.where((img != [255, 255, 255]).all(axis=2))] = col
    cv2.imwrite("chr.png", img)
    cases = Image.open("chr.png")
    back.paste(cases, (width, height))
    newwidth = cases.width
    width = width + newwidth


def text_to_handwriting(string, rgb=[0, 0, 138], save_to: str = "pywhatkit.png"):
    """Convert the texts passed into handwritten characters"""
    global arr, width, height, back
    try:
        back = Image.open("zback.png")
    except:
        url = "https://raw.githubusercontent.com/Ankit404butfound/HomeworkMachine/master/Image/zback.png"
        imglink = urllib.request.urlopen(url)
        imgNp = np.array(bytearray(imglink.read()))
        img = cv2.imdecode(imgNp, -1)
        cv2.imwrite("zback.png", img)
        back = Image.open("zback.png")
    rgb = [rgb[2], rgb[1], rgb[0]]
    count = -1
    lst = string.split()
    for letter in string:
        if width + 150 >= back.width or ord(letter) == 10:
            height = height + 227
            width = 50
        if letter in arr:
            if letter == " ":
                count += 1
                letter = "zspace"
                wrdlen = len(lst[count + 1])
                if wrdlen * 110 >= back.width - width:
                    width = 50
                    height = height + 227
            elif letter.isupper():
                letter = "c" + letter.lower()
            elif letter == ",":
                letter = "coma"
            elif letter == ".":
                letter = "fs"
            elif letter == "?":
                letter = "que"
            getimg(letter, rgb)
    back.save(f"{save_to}")
    back.close()
    back = Image.open("zback.png")
    width = 50
    height = 0
    return save_to


text_to_handwriting("hello, world!", save_to="myimage.png")
d10027
Probably you use Bootstrap 4. For now, bootstrap-slider won't work with this version. Use Bootstrap 3, or you will have to adjust bootstrap-slider for the new Bootstrap.
d10028
Either you have an encoding problem, or a non-printing character in the parameter.
d10029
Use val and indexOf:

var hasSpace = $('#myInputId').val().indexOf(' ') >= 0;

If you want to test for other types of "spaces" (for example a tabulation), you might use a regex:

var hasSpace = /\s/g.test($('#myInputId').val());

Demonstration

A: Use contains to check for a space:

var value = $('#myInputId').val();
if (value.contains(' ')) {
    console.log("has space");
}

Or you can look for these codes, &nbsp; and \xA0:

if (value.contains('&nbsp;')) {
    console.log("has space");
}

A: Use val and check for a space:

$('#name').val().length <= 0 || $('#name').val().indexOf(' ') >= 0
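The same three checks (a plain space, any whitespace via regex, and the non-breaking space behind &nbsp;) can be sketched server-side as well; this Python version is only for illustration of the distinctions:

```python
import re

def has_plain_space(s):
    return ' ' in s  # like indexOf(' ') >= 0

def has_any_whitespace(s):
    return re.search(r'\s', s) is not None  # like /\s/.test(...)

def has_nbsp(s):
    return '\xa0' in s  # U+00A0, the character behind &nbsp;

print(has_plain_space("ab\tc"))     # False - tab is not a plain space
print(has_any_whitespace("ab\tc"))  # True
print(has_nbsp("a\xa0b"))           # True
```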
d10030
The promised behavior for iterators of a standard container does not hold for reverse iterators of that container. A reverse iterator actually stores, as a member, the normal (forward moving) iterator which comes after the element to which the reverse iterator refers when dereferenced. Then when you dereference the reverse iterator, essentially it decrements a copy of this stored normal iterator and dereferences that. So this is a problem: rit = rs.rbegin(); // rit stores rs.end() srit = rit; // srit also stores rs.end() rit++; // rit stores a normal iterator pointing to the last element rs.erase(*srit); // this deletes the last element, invalidating the normal // iterator which is stored in rit. Funnily enough, the // one stored in srit remains valid, but now *srit is a // different value Reverse iterators behave this way because there is no "before begin" iterator. If they stored the iterator to the element to which they actually refer, what would rs.rend() store? I'm sure there are ways around this, but I guess they required compromises which the standards committee was not willing to make. Or perhaps they never considered this problem, or didn't consider it significant enough.
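The stored-base mechanism described above can be modeled in a few lines of Python, with indices standing in for iterators; this is an illustration of the C++ semantics, not real Python iterator behaviour:

```python
class RevIt:
    """Toy model of a reverse iterator: stores the *base* forward index,
    which sits one past the element the reverse iterator refers to."""
    def __init__(self, seq, base):
        self.seq = seq
        self.base = base  # forward index just after the referent

    def deref(self):
        # decrement a copy of the stored base, then dereference
        return self.seq[self.base - 1]

    def advance(self):
        # rit++ moves the stored base one step backwards
        self.base -= 1

s = ['a', 'b', 'c']
rit = RevIt(s, len(s))  # rbegin(): base == end()
print(rit.deref())      # 'c' - the last element
rit.advance()           # rit++
print(rit.deref())      # 'b'
```

This also shows why erasing the element at the stored base invalidates the reverse iterator even though the referent itself was never touched.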
d10031
Assume you start with a DataFrame df = pd.DataFrame([[3, 1, 3], [3, 1, 3], [3, 1, 3], [3, 3, 3], [3, 1, 1]]) df.astype(str).apply(lambda x: ','.join(x.values), axis=1).values.tolist() Looks like: ['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1'] A: def foo(): l = [] with open("file.asd", "r") as f: for line in f: l.append(line) return l A: To turn your dataframe in to strings, use the astype function: df = pd.DataFrame([[3, 1, 3], [3, 1, 3], [3, 1, 3], [3, 3, 3], [3, 1, 1]]) df = df.astype('str') Then manipulating your columns becomes easy, you can for instance create a new column: In [29]: df['temp'] = df[0] + ',' + df[1] + ',' + df[2] df Out[29]: 0 1 2 temp 0 3 1 3 3,1,3 1 3 1 3 3,1,3 2 3 1 3 3,1,3 3 3 3 3 3,3,3 4 3 1 1 3,1,1 And then compact it into a list: In [30]: list(df['temp']) Out[30]: ['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1'] A: # Done in Jupyter notebook # add three quotes on each side of your column. # The advantage to dataframe is the minimal number of operations for # reformatting your column of numbers or column of text strings into # a single string a = """3,1,3 3,1,3 3,1,3 3,3,3 3,1,1""" b = f'"{a}"' print('String created with triple quotes:') print(b) c = a.split('\n') print ("Use split() function on the string. Split on newline character:") print(c) print ("Use splitlines() function on the string:") print(a.splitlines())
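If you don't need pandas at all, the same row-joining can be done with a plain list comprehension; the sample data mirrors the DataFrame used above:

```python
rows = [[3, 1, 3], [3, 1, 3], [3, 1, 3], [3, 3, 3], [3, 1, 1]]

# join each row's values with commas, converting numbers to strings first
joined = [','.join(map(str, row)) for row in rows]
print(joined)  # ['3,1,3', '3,1,3', '3,1,3', '3,3,3', '3,1,1']
```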
d10032
Follow these steps. Say the directory structure is this on my side, under the C drive:
components-JWSFileChooserDemoProject
    nbproject
    src
        components
            images
            jars
            JWSFileChooserDemo.java
            JWSFileChooserDemo.jnlp
    build.xml
    manifest.mf
Under the components directory create a new directory called build, so now the components directory will have five thingies instead of four, i.e. build, images, jars, JWSFileChooserDemo.java and JWSFileChooserDemo.jnlp. Now first go to the components directory. To compile, write this command:
C:\components-JWSFileChooserDemoProject\src\components>javac -classpath images\*;jars\*;build -d build JWSFileChooserDemo.java
Here inside the -classpath option you are specifying that the content of the directories images, jars and build is to be included while compiling JWSFileChooserDemo.java. The -d option basically tells where to place the .class files.
Move to the "build" folder:
C:\components-JWSFileChooserDemoProject\src\components>cd build
Run the program:
C:\components-JWSFileChooserDemoProject\src\components\build>java -cp .;..\images\*;..\jars\* components.JWSFileChooserDemo
Here inside the -cp option, . represents looking from the current position, ..\images\* means go one level up from the current location into the images directory and get all its contents, and the same goes for the ..\jars\* thingy too. Now you will see it working, and giving the following output:
EDIT 1: Since you wanted to do it without the -d option of the Java compiler - javac. Considering the same directory structure as before, move inside your components directory.
COMPILE with this command:
C:\components-JWSFileChooserDemoProject\src\components>javac -classpath images\*;jars\* JWSFileChooserDemo.java
Now manually create the package structure in the file system, i.e. create a directory components and then move your .class files created previously inside this newly created components directory, and also add the images folder to this newly created components folder. Now the outer components directory will have five thingies instead of four, i.e. components (which further contains JWSFileChooserDemo.class, JWSFileChooserDemo$1.class and the images folder), images, jars, JWSFileChooserDemo.java and JWSFileChooserDemo.jnlp.
RUN the program with this command:
C:\components-JWSFileChooserDemoProject\src\components>java -cp .;jars\* components.JWSFileChooserDemo
This will give you the previous output. Though, if you wanted to move as suggested before, again copy the images folder to the automatically generated components folder, since I just looked inside the .java and they use a relative path for it to work.
THIS PARTICULAR SOLUTION WILL WORK FOR YOUR CASE ONLY: If, after the javac command given before, you don't want to create any folder, then go one level up, i.e. outside the components directory, and use this command to run the program:
C:\components-JWSFileChooserDemoProject\src>java -cp .;components\jars\* components.JWSFileChooserDemo
If you don't wanted to
d10033
Use a struct of some sort to store the data, then use an XML or JSON serializer to store and retrieve the data into an array of the structs.
struct FrameData
{
    public int FrameNumber;
    public string ObjectName;
    public int X, Y, Z;

    public FrameData(int frameNumber, string objectName, int x, int y, int z)
    {
        this.FrameNumber = frameNumber;
        this.ObjectName = objectName;
        this.X = x;
        this.Y = y;
        this.Z = z;
    }
}
At each frame, store the data into a new FrameData object and put it into an array or a list. Then when finished, use a serializer to serialize the data.
XmlSerializer serializer = new XmlSerializer(typeof(FrameData[]));
using (FileStream fs = File.Open(filepath, FileMode.Create))
{
    serializer.Serialize(fs, frameDataArray);
}
Then to get the data again:
XmlSerializer serializer = new XmlSerializer(typeof(FrameData[]));
FrameData[] frameDataArray;
using (FileStream fs = File.Open(filepath, FileMode.Open))
{
    frameDataArray = (FrameData[])serializer.Deserialize(fs);
}
If XML serialization takes up too much space you could try using JSON instead. There are plenty of resources online to teach you about JSON serialization/deserialization; it's pretty similar to the XML version. I would recommend the NuGet library JSON.NET, it's good and simple. Can't guarantee the code I wrote will work exactly as is, but it should hopefully point you in the right direction.
d10034
It would appear that this is a known issue. Here is a link to the comment thread on github.
d10035
Prerequisite steps for loading static files from GCS
Go to GCP: Cloud Storage (GCS) and click on CREATE BUCKET (fill it out as needed). Once created, you can make it public if you want it to act like a CDN for your website (storage of your static files such as CSS, images, videos, etc.):
Go to your newly created bucket
Go to Permissions and then click Add members
Add a new member "allUsers" with role "Cloud Storage - Storage Object Viewer"
Reference: https://cloud.google.com/storage/docs/quickstart-console
Main references:
https://django-storages.readthedocs.io/en/latest/backends/gcloud.html
https://medium.com/@umeshsaruk/upload-to-google-cloud-storage-using-django-storages-72ddec2f0d05
Step 1 (easier and faster, but requires constant manual copying of files to GCS)
Configure your Django static file settings in your settings.py:
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, 'templates'),
    os.path.join(BASE_DIR, "yourapp1", "templates"),
    os.path.join(BASE_DIR, "yourapp2", "static"),
    os.path.join(BASE_DIR, "watever"),
    "/home/me/Music/TaylorSwift/",
    "/home/me/Videos/notNsfw/",
]
STATIC_ROOT = "/var/www/mywebsite/"
STATIC_URL = "https://storage.googleapis.com/<your_bucket_name>/"
Step 2
If you have HTML files or CSS files that access other static files, make sure that they reference those other static files with this updated STATIC_URL setting.
In your home.html:
{% load static %}
<link rel="stylesheet" type="text/css" href="{% static 'home/css/home.css' %}">
Then in your home.css file:
background-image: url("../assets/img/myHandsomeImage.jpg");
Your home CSS link will now translate to:
https://storage.googleapis.com/[your_bucket_name]/home/css/home.css
If you wish, you could just put the absolute path (complete URL), but such a configuration would always require you to update the used URLs manually, e.g. if you switched to development mode and wanted to access the static files locally instead of from GCS.
Step 3
Run collectstatic, which copies all files from each directory in STATICFILES_DIRS to the STATIC_ROOT directory:
python3 manage.py collectstatic
# or, if your STATIC_ROOT folder requires permissions to write to it:
# sudo python3 manage.py collectstatic

A: After searching through Stack Overflow I see this problem has been solved and I don't want this to be a duplicate. Here is the link to the Stack Overflow solution: Serve Static files from Google Cloud Storage Bucket (for Django App hosted on GCE).
d10036
I expect you are using the same formatter definition and have already tried to export the formatter on one of your team members' machines and import it on yours. Another thing you should check is the Save Actions in the preferences (Java -> Editor -> Save Actions). Maybe the settings for removing whitespace differ here.
d10037
A bit late to the party - perhaps for future post readers. You can wrap the function to disallow access. An example below:
from functools import wraps

def is_known_username(username):
    '''
    Returns a boolean if the username is known in the user-list.
    '''
    known_usernames = ['username1', 'username2']
    return username in known_usernames

def private_access():
    """
    Restrict access to the command to users allowed by the
    is_known_username function.
    """
    def deco_restrict(f):
        @wraps(f)
        def f_restrict(message, *args, **kwargs):
            username = message.from_user.username
            if is_known_username(username):
                return f(message, *args, **kwargs)
            else:
                bot.reply_to(message, text='Who are you? Keep on walking...')
        return f_restrict  # true decorator
    return deco_restrict
Then where you are handling commands you can restrict access to the command like this:
@bot.message_handler(commands=['start'])
@private_access()
def send_welcome(message):
    bot.reply_to(message, "Hi and welcome")
Keep in mind, order matters. First the message handler and then your custom decorator - or it will not work.

A: The easiest way is probably a hard-coded check on the user id.
# The allowed user id
my_user_id = 12345678

# Handle command
@bot.message_handler(commands=['picture'])
def send_picture(message):
    # Get the user id from the message
    to_check_id = message.from_user.id
    if my_user_id == to_check_id:
        response_message = 'Pretty picture'
    else:
        response_message = 'Sorry, this is a private bot!'
    # Send response message
    bot.reply_to(message, response_message)
d10038
The practical difference is where the macro "inserts" the variable (and the subsequent results) into the expressions:
(ns so.example)

(defn example-1 [s]
  (-> s (str "foo")))

(defn example-2 [s]
  (->> s (str "foo")))

(example-1 "bar") ;=> "barfoo"
(example-2 "bar") ;=> "foobar"
So (-> "bar" (str "foo")) is the same as (str "bar" "foo") and (->> "bar" (str "foo")) is the same as (str "foo" "bar"). With unary functions -> and ->> do the same thing. When you need more flexibility as to where these results should be inserted, you would use as->:
(ns so.example)

(defn example-3 [s]
  (as-> s v (str "foo" v "foo")))

(example-3 "bar") ;=> "foobarfoo"

A: Following up on as-> as an answer, purely because I can't format code in a comment. Here's a usage of as-> from our codebase at work:
(-> date
    (.getTime)
    (as-> millis (/ (- (.getTime (java.util.Date.)) millis) 1000 60 60 24))
    (long)
    (min days))
That computation could be unrolled, and ->> used to place the threaded value at the end of the - expression, but it would probably be harder to read. It would also be possible to unroll it in such a way that -> alone would be enough (by negating the threaded value and then adding it to (.getTime (java.util.Date.))), but that would make it even harder to read, I think.

A: I think there is a misunderstanding on your part about how -> works. You say:
* -> is an alternative expression for nested operations on a single object, each data step using the object as input argument, performing an independent operation.
and then about ->> you say:
* When using the thread-last ->> operator, each transformation uses the output of the preceding transformation as implicit argument
but statement 1 is not true, and statement 2 is true for both -> and ->>.
This is very easy to test, like this:
cljs.user=> (-> [2 5 4 1 3 6] reverse rest)
(3 1 4 5 2)
cljs.user=> (-> [2 5 4 1 3 6] rest reverse)
(6 3 1 4 5)
If you add sort to the end of the call chain, like in your example, you wouldn't notice this difference, because both results would be sorted. Like cfrik said, when you are passing only one argument to a function, the first argument and the last argument are the same (because there's just one), so that's why it's easy to get confused when all the functions in your call chain accept only one argument, which is the case with your example where you have used count, sort, rest, and reverse.
Another thing you might have missed from the documentation at https://clojure.org/guides/threading_macros is the fact that many functions that work with sequences, like filter, map, and reduce, take the sequence as the last argument as a convention, and that makes it possible to chain calls to them using ->>, like
(->> (range 10)
     (filter even?)
     (map #(+ 3 %))
     (reduce +))
which becomes
(reduce + (map #(+ 3 %) (filter even? (range 10))))
whereas -> is more suitable for functions like assoc and update, which (like you said) work with single objects and take the object as their first argument (and "transformations/updates" on that object as the rest of the arguments), so you can do the following
(-> person
    (assoc :hair-color :gray)
    (update :age inc))
which becomes
(update (assoc person :hair-color :gray) :age inc)
To better understand how macros work, try using macroexpand-1, like this:
user> (macroexpand-1 '(->> (range 10) (filter even?) (map #(+ 3 %)) (reduce +)))
(reduce + (map (fn* [p1__21582#] (+ 3 p1__21582#)) (filter even? (range 10))))
The argument to macroexpand-1 should be a quoted version of the function call, and the result will be the expanded function call.
d10039
It is better to pass the value of the radio button to the Export method and get the data again. In any case you might need to do some more work on that data before you export it anyway. Also you might want to check the user's permissions to export such data, and you might not want to transfer such data over the network back and forth: it is easier (faster) to send the value of a button than it is to send a collection of data which has to be exported. Not to mention that there are also security concerns depending on how you will handle the data you receive and whether it is correct or not. What if a user changes it manually in the browser before hitting the export button? Would that hurt your application's logic in some way, and would you want to protect yourself against that? Best is to use an HttpPost, to include an AntiForgeryToken, to submit the radio button value, and to check the user's permissions.
d10040
I think it can be done much shorter/easier. The way I'm selecting values from dropdown boxes:
SelectElement dropdown = new SelectElement(Driver.FindElement(By.Id(dropdownID)));
dropdown.SelectByValue(valueToBeSelected);
It's pretty simple and straightforward and it just works.
d10041
* There are global variables in Lambda which can be of help, but they have to be used wisely.
* They are usually the variables declared outside of lambda_handler.
* There are pros and cons of using them.
* You can't rely on this behavior, but you must be aware it exists. When you call your Lambda function several times, you MIGHT get the same container, to optimise run duration and setup delay (use of global variables).
* At the same time you should be aware of the issues of wrong use of it (caching issues).
* If you don't want to use ElastiCache/Redis then I guess you have very few options left... maybe DynamoDB or S3, that's all I can think of. Again, the connection to DynamoDB or S3 can be cached here. It won't be as fast as ElastiCache though.

A: In Java it's not too hard to do. Just create your cache outside of the handler:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.Map;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import com.amazonaws.services.lambda.runtime.RequestStreamHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class SampleHandler implements RequestStreamHandler {
    private static final Logger logger = LogManager.getLogger(SampleHandler.class);

    private static Map<String, String> theCache = null;

    public SampleHandler() {
        logger.info("filling cache...");
        theCache = new HashMap<>();
        theCache.put("key1", "value1");
        theCache.put("key2", "value2");
        theCache.put("key3", "value3");
        theCache.put("key4", "value4");
        theCache.put("key5", "value5");
    }

    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        logger.info("handlingRequest");
        LambdaLogger lambdaLogger = context.getLogger();

        ObjectMapper objectMapper = new ObjectMapper();
        JsonNode jsonNode = objectMapper.readTree(inputStream);
        String requestedKey = jsonNode.get("requestedKey").asText();

        if (theCache.containsKey(requestedKey)) {
            // read from the cache
            String result = "{\"requestedValue\": \"" + theCache.get(requestedKey) + "\"}";
            outputStream.write(result.getBytes());
        }

        logger.info("done with run, remaining time in ms is " + context.getRemainingTimeInMillis());
    }
}
(Run with the AWS CLI with aws lambda invoke --function-name lambda-cache-test --payload '{"requestedKey":"key4"}' out, with the output going to the file out.)
When this runs with a "cold start" you'll see the "filling cache..." message and then "handlingRequest" in the CloudWatch log. As long as the Lambda is kept "warm" you will not see the cache message again. Note that if you had hundreds of the same Lambdas running they would all have their own independent cache. Ultimately this does what you want though - it's a lazy load of the cache during a cold start and the cache is reused for warm calls.
d10042
Although not trivial, the question was not correctly formulated. I thought that an entity repository method always had to implement some kind of findBy() method and return an object or a collection of objects of the entity to which the repository belongs. Actually, an entity repository method can return anything, so this problem can be solved using a native query inside the entity repository method. For example, in ClientRepository.php:
public function findWithContractStatus($contractStatusShortname)
{
    $em = $this->getEntityManager();

    $clientQuery = "select distinct CLI.id, CLI.name,
                    COUNT(CON.id) as ncontracts, SUM(CON.amount) as amount
                    from client CLI
                    join contract CON on CON.client_id = CLI.id
                    group by CLI.id, CLI.name";

    $rsm = new ResultSetMapping();
    $rsm->addScalarResult('id', 'id');
    $rsm->addScalarResult('name', 'name');
    $rsm->addScalarResult('ncontracts', 'ncontracts');
    $rsm->addScalarResult('amount', 'amount');

    $query = $em->createNativeQuery($clientQuery, $rsm);
    return $query->getResult();
}
This will return an array with the given structure - id, name, ncontracts, amount - which can be iterated in a controller, Twig template or wherever.
d10043
Edit: Your question is still answered using MSBuild (if you are simply looking to compile outside the IDE). The IDE (Visual Studio) is simply a "fancy" way of constructing the build files that are built by MSBuild. Visual Studio isn't building the files; it simply invokes MSBuild, which ships with the .NET Framework 2.0 and up and compiles your code based on the project file that you create. If SCons can read and process an MSBuild file then I'm sure you can invoke it to build your project. But considering the fact that C# is a Microsoft language, I think you will be hard-pressed to find a value-add in not using MSBuild, since I'd assume both the language and build tool are very tuned to work together. - End Edit
You can use MSBuild to compile your C# project. If you open your .csproj file in a text editor you will see that it is an MSBuild file. If you want to write some C# outside of the IDE you can construct a build file using the .csproj file as a starting point and invoke MSBuild to compile your apps. The IDE is just a way of abstracting the editing of the MSBuild file away for you. If you are really industrious you can create a set of custom tasks to do things in your custom build process like moving files around and versioning. MSBuild Community Tasks are a great example of using custom code to do tasks for you during MSBuild.

A: Given all the other answers, what MSBuild does when either VS or MSBuild performs a build can be found in the Targets files that ship with .NET. These can be found in the Framework directory on your system. In my case:
C:\Windows\Microsoft.NET\Framework64\v3.5
contains Microsoft.Common.targets among others. This file contains the following snippet:
<!--
============================================================
                    Build
The main build entry point.
============================================================
-->
<PropertyGroup>
    <BuildDependsOn>
        BeforeBuild;
        CoreBuild;
        AfterBuild
    </BuildDependsOn>
</PropertyGroup>
<Target
    Name="Build"
    Condition=" '$(_InvalidConfigurationWarning)' != 'true' "
    DependsOnTargets="$(BuildDependsOn)"
    Outputs="$(TargetPath)"/>
This means that by redefining this Target you can make MSBuild and VS do anything you want. The top of the mentioned file contains an important message:
Microsoft.Common.targets
WARNING: DO NOT MODIFY this file unless you are knowledgeable about MSBuild and have created a backup copy. Incorrect changes to this file will make it impossible to load or build your projects from the command-line or the IDE.
This file defines the steps in the standard build process for .NET projects. It contains all the steps that are common among the different .NET languages, such as Visual Basic, C#, and Visual J#.
My suggestion would be to read all you can about MSBuild and its build file syntax and try redefining the Build target in your project(s). My impression is that after reading up on MSBuild you'll probably find an easier way to meet your requirements. You can find an example of redefining a Target like this in one of the answers of this SO question.
Edit: How to redefine a target?
Redefining is essentially defining the same target 'after' it has been defined. So for instance in your .*proj file(s), define a Build Task after the <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> line that imports all targets needed, in this case, to build a C# project. An example could be:
<Target
    Name="Build"
    Condition=" '$(_InvalidConfigurationWarning)' != 'true' "
    DependsOnTargets="BeforeBuild"
    Outputs="$(TargetPath)">
    <Exec Command="nmake" />
</Target>

A: I found a question in the same direction here, where it is suggested to edit the registry. I am pretty sure there is no other way to change the compiler used by Visual Studio, because there is no trace of csc.exe in any solution, config, or csproj file whatsoever, nor in the Visual Studio 9.0 folder / subfolders within the Program Files dir. Registry locations can be found in:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\74ACAA9F1F0087E4882A06A5E18D7D32
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\9055DA7481CC1024CB23A6109FD8FC9B
but those keys may differ depending on your installation. Conclusion: changing the compiler used by VS seems next to impossible.
Addition: The following MSDN article deals with the same question for a custom C++ compiler, and Ed Dore's answer seems to confirm my theory that there's no way to choose a custom compiler for use within VS.

A: Under 'Tools' > 'External Tools' you should be able to define an outside tool to do activities for you. The Command should be the path to the executable for your external tool. Hope this helps some.

A: You don't have to maintain different project files to build using an external tool. MSBuild is designed to build using the same project files that Visual Studio uses. Here's an article that describes it:
Customize Your Builds in Visual Studio Using the Standalone MSBuild Tool
It's for VS2005, but should apply to VS2008 as well.

A: You can build your solution from the command line like this:
C:\WINDOWS\Microsoft.NET\Framework\v3.5>msbuild.exe "C:\path\Your Solution.sln"

A: Edit your project file and update the CscToolPath keys to point to the directory containing your tool, and add CscToolExe keys that hold the name of the executable:
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|.NET 3.5' ">
    . . .
    <CscToolPath>path\to\custom\tool\directory</CscToolPath>
    <CscToolExe>exe name</CscToolExe>
    . . .
</PropertyGroup>
I have not tested this, and the CscToolExe key may cause problems, in which case I would simply rename the external tool executable to "csc.exe".

A: Looking through the answers, it seems clear to me that integrating SCons into Visual Studio in a way that is compatible with the debugger and so on is not going to happen... An option you might consider, and I understand you don't want to change build systems, but bear with me, is to use a meta-build system, i.e. 'cmake': http://www.cmake.org/
CMake doesn't actually build the project. What it does is create build files for you that you can use to build the project, and on Windows, the build files it creates for you are Visual Studio project files. You can simply load those directly into your IDE, compile, and use them normally!
CMake is, I feel, very easy to use, and provides a high level of transparency and maintainability. The exact same CMakeLists.txt files on Linux will cause Linux makefiles to be generated. On MinGW, they can generate MinGW makefiles. There are numerous generators available within CMake. The list is here:
http://www.cmake.org/cmake/help/cmake-2-8-docs.html#section_Generators
http://springrts.com is a huge opensource RTS game that used to use SCons as its cross-platform build system and now uses CMake. I understand that you don't really want to have to change build systems, so it is a medium to long term solution. CMake is in any case one more option, to add to those of using a custom build tool, using MSBuild, or running the SCons build from the command line by hand.
d10044
The conditions should be enclosed in parentheses; on the right you have square ones. And to get what you showed, you need to add a condition (df['type'] == "Original"), in my opinion.
a = df[((df['total'] > 10) & (df['type'] == "Duplicate")) | (df['type'] == "Original")]
print(a)
Output:
   total       type
0     23   Original
2     11  Duplicate
3      5   Original
4     16  Duplicate
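To see why every comparison needs its own parentheses, here is a small self-contained sketch (the DataFrame contents are invented to reproduce the output above):

```python
import pandas as pd

df = pd.DataFrame({"total": [23, 8, 11, 5, 16],
                   "type": ["Original", "Duplicate", "Duplicate",
                            "Original", "Duplicate"]})

# & and | bind tighter than > and ==, so without parentheses
# df['total'] > 10 & df['type'] == "Duplicate" would parse as
# df['total'] > (10 & df['type']) == "Duplicate" and raise an error.
mask = ((df["total"] > 10) & (df["type"] == "Duplicate")) | (df["type"] == "Original")
print(df[mask])
```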
d10045
I don't normally go for the '... in 21 days' books, but this online one seems reasonable: Teach Yourself SQL in 21 Days, Second Edition. See Where can I find training/tutorials for SQL and T-SQL?

A: One of my favorite websites to get started with SQL is SQLCourse. Good luck with your start.

A: This (w3schools) is always a nice place to start.

A: Try SQL Exercises

A: Perhaps you need to begin your own project; it is always recommended when learning something new that you fight with real problems. http://www.databasejournal.com/ is one of the good resources.
d10046
Your input field has not rendered and the script is looking for an element with its id. A simple solution is to move your script to the end of the HTML file, like this:
<p><input type="text" placeholder="Results" name="idn" id="idn_id"></p>
<script>
  var idn_text = "123";
  document.getElementById("idn_id").value = idn_text;
</script>

A: Switch the order of your elements and call the script afterwards.
<p><input type="text" placeholder="Results" name="idn" id="idn_id"></p>
<script>
  var idn_text = "123";
  document.getElementById("idn_id").value = idn_text;
</script>
d10047
Python casts whatever __contains__() returns to a boolean. That is why you cannot use "not in" or "in" when constructing peewee queries. You instead use << to signify "IN". You might try:
ignored = (Activity
           .select()
           .join(StuActIgnore)
           .join(Student)
           .where(Student.id == current_user.id))

Activity.select().where(~(Activity.name ** "%BBP") & ~(Activity.id << ignored))
See the docs on querying for more info: http://peewee.readthedocs.org/en/latest/peewee/querying.html#column-lookups
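A tiny plain-Python sketch (no peewee needed) of why `in` cannot work here: the result of `__contains__` is always forced through `bool()`, so any query-expression object an ORM might want to return is lost. The class names below are invented for illustration:

```python
class QueryNode:
    """Stand-in for the rich expression object an ORM would want to build."""
    def __bool__(self):
        return True

class Column:
    def __contains__(self, item):
        # Whatever we return, Python coerces it via bool() before
        # handing the result of `in` back to the caller.
        return QueryNode()

result = "x" in Column()
print(type(result))  # <class 'bool'> -- the QueryNode is gone
```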
d10048
To have it as a field in the first table you need to update the counter every time you insert/delete a record in the second table. Alternatively, when you need to retrieve the data, you can just query the second table, joining with the first and filtering on the id from the first table. If you don't need the data every time you retrieve a record from the first table, and if you are inserting/deleting lots of rows in the second table, then this will be the more efficient route.

A: If the run-time query as suggested by @Najzero is not preferred, you might think of creating a View with this query and getting the data from the View. Also, if you need to update the field on every insert/delete, you can consider creating triggers on the INSERT and DELETE operations.
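As a sketch of the trigger idea, here is how the counter could be kept in sync in SQLite (the table and column names are invented, and the trigger syntax differs slightly on other databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY, child_count INTEGER DEFAULT 0);
    CREATE TABLE child  (id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES parent(id));

    -- Keep the counter in sync on every insert/delete in the second table.
    CREATE TRIGGER child_ins AFTER INSERT ON child BEGIN
        UPDATE parent SET child_count = child_count + 1 WHERE id = NEW.parent_id;
    END;
    CREATE TRIGGER child_del AFTER DELETE ON child BEGIN
        UPDATE parent SET child_count = child_count - 1 WHERE id = OLD.parent_id;
    END;
""")

conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (parent_id) VALUES (1)")
conn.execute("INSERT INTO child (parent_id) VALUES (1)")
conn.execute("DELETE FROM child WHERE id = 1")
print(conn.execute("SELECT child_count FROM parent WHERE id = 1").fetchone()[0])  # 1
```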
d10049
You don't need to touch the header.inc.php, you are using CMS Made Simple, not CMS made difficult :). Go to 'Site Admin > Settings - Global Settings > General Settings tab > Global Metadata', add all your tags in there and put the smarty tag {metadata} in a page template. More details here: https://docs.cmsmadesimple.org/configuration/global-settings Hope that helps. PS plenty of fast advise on the Slack channel also.
d10050
You can use the three-argument form of lag() with partition by:
("timestamp" -
 LAG("timestamp", 1, "timestamp") OVER (PARTITION BY sensor ORDER BY "timestamp")
) as delta
For your ultimate problem, the NULL value for the first row doesn't matter. You can solve the problem using a subquery:
select *
from (select seq_id, stream_id, sensor, "timestamp", oper_value,
             lag("timestamp") over (partition by sensor order by timestamp) as prev_timestamp
      from public.mt_events
      where "type" = 'operational_value_event'
     ) t
where prev_timestamp is null or
      prev_timestamp < "timestamp" - interval '5 minute';
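The answer above targets Postgres, but the three-argument lag() can be sketched with SQLite's window functions (available since SQLite 3.25) as well; the sensor data here is invented. The third argument supplies a default for the first row of each partition, so the delta there becomes ts - ts = 0 instead of NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (sensor TEXT, ts INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 10), ("a", 25), ("b", 7), ("b", 50)])

# LAG(ts, 1, ts): the current row's own ts is used as the default
# when there is no previous row in the partition.
rows = conn.execute("""
    SELECT sensor, ts,
           ts - LAG(ts, 1, ts) OVER (PARTITION BY sensor ORDER BY ts) AS delta
    FROM events
    ORDER BY sensor, ts
""").fetchall()
print(rows)  # [('a', 10, 0), ('a', 25, 15), ('b', 7, 0), ('b', 50, 43)]
```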
d10051
""}

console.log('ClassA CSSMod', CSSModules(ClassA, styles).defaultProps);
// ClassA CSSMod undefined

ClassA.defaultProps = SomeClass.defaultProps;
console.log('ClassA CssMod after explicit copy', CSSModules(ClassA, styles).defaultProps);
// ClassA CssMod after explicit copy Object {propA: ""}
d10052
Consider some of the things you would use anonymous classes for in Java. E.g. often they are used for pluggable behaviour such as event listeners, or to parametrize a method that has a general layout.
Imagine we want to write a method that takes a list and returns a new list containing the items from the given list for which a specified condition is true. In Java we would write an interface:
interface Condition {
    boolean f(Object o);
}
and then we could write:
public List select(List list, Condition c) {
    List result = new ArrayList();
    for (Object item : list) {
        if (c.f(item)) {
            result.add(item);
        }
    }
    return result;
}
and then if we wanted to select the even numbers from a list we could write:
List even = select(mylist, new Condition() {
    public boolean f(Object o) {
        return ((Integer) o) % 2 == 0;
    }
});
To write the equivalent in Ruby it could be:
def select(list)
  new_list = []
  # note: I'm avoiding using 'each' so as to not illustrate blocks
  # using a method that needs a block
  for item in list
    # yield calls the block with the given parameters
    new_list << item if yield(item)
  end
  return new_list
end
and then we could select the even numbers with simply
even = select(list) { |i| i % 2 == 0 }
Of course, this functionality is already built into Ruby, so in practice you would just do
even = list.select { |i| i % 2 == 0 }
As another example, consider code to open a file. You could do:
f = open(somefile)
# work with the file
f.close
but you then need to think about putting your close in an ensure block in case an exception occurs whilst working with the file. Instead, you can do
open(somefile) do |f|
  # work with the file here
  # ruby will close it for us when the block terminates
end

A: The idea behind blocks is that they are highly localized code where it is useful to have the definition at the call site. You can use an existing function as a block argument. Just pass it as an additional argument, and prefix it with an &.
d10053
Artifactory returns the URL based on the filename and the path (as any web server would do). Here are two options to achieve what you need:
* Name the artifacts uniquely (timestamps are the simplest). Instead of naming the artifact mypkgfile_v1.tgz, name it mypkgfile_v1-1553038888.tgz (I used the Unix epoch time, but anything unique enough will do).
* This one is more involved but doesn't require you to change the naming scheme:
  * First, configure a custom repository layout to match your versioning.
  * Once you've done that, every time you deploy an artifact, attach a unique identifier to the artifact as a property during deployment (using matrix params, for example), deploying your artifact as mypkgfile_v1;timestamp=1553038888.
  * On retrieval, use the token for the latest release together with the timestamp you need as a matrix param: mypkgfile_v[RELEASE];timestamp=1553038888
d10054
You should force jQuery to clear the animation queue and jump to the end of the animation when using the .stop() method, i.e. .stop(true, true).
d10055
A Drawable is responsible only for drawing operations, while a View is responsible for drawing and for user interaction, such as touch events, the screen turning off, and more. A View can contain many Drawables.
d10056
There is a single content node, use content/idarticle to get the inner collection: XmlNodeList xnInhalt = xml.SelectNodes("/lagerverwaltung/article/orders/order[@id='" + id + "']/content/idarticle"); You would then modify the following code because xmlNode now refers to an idarticle. For example, string articleid = xmlNode.InnerText;
d10057
Instead of underscore you need to use \ here for continuation: python myFileA.py && \ python myFileB.py && \ python myFileC.py && \ python myFileD.py However since you have && you don't really need to use \ and can just skip it: python myFileA.py && python myFileB.py && python myFileC.py && python myFileD.py A: With bash you don't need any continuation character following &&, ||, | python myFileA.py && python myFileB.py && python myFileC.py && python myFileD.py
d10058
The cause of this has to do with the order of query execution. The query will first join all rows (regardless of a match, since it's a left join) and then it will filter out rows that don't meet the condition rating >= 1, effectively dropping any rows that didn't have a match in the first place. To correct this, you need to change your join to only join when rating >= 1. This way, you can still get all rows from the first table, and only get matches from the second table that you need. Try this: SELECT * FROM kickstarter k LEFT JOIN user_kick_ratings ukr ON k.project_id = ukr.project_id AND ukr.rating >= 1; As for the overwritten id, you should instead rewrite your query to only select relevant columns. There's no need to select project_id from both tables, since they are always the same. You should select it from the table that is guaranteed to have a value; in this case, kickstarter: SELECT k.project_id, k.stuff, ukr.rating FROM ... A: Figured it out, and it's so simple! Just use RIGHT JOIN and flip the table order. Even though it's simple, I hope this helps someone one day. SELECT * FROM user_kick_ratings RIGHT JOIN kickstarter ON kickstarter.project_id=user_kick_ratings.project_id WHERE rating>=1 A: For simple cases: Instead of doing a SELECT *, you can use SELECT table2.*, table1.* -- notice the reverse order! FROM table1 LEFT JOIN table2 ..... They overwrite each other in the order of reference. So now you get the id from table1.
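The ON-versus-WHERE behaviour described above can be reproduced in a few lines. This sketch uses Python's built-in sqlite3 module; the table and column names mirror the question, but the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE kickstarter (project_id INTEGER, name TEXT);
    CREATE TABLE user_kick_ratings (project_id INTEGER, rating INTEGER);
    INSERT INTO kickstarter VALUES (1, 'A'), (2, 'B');
    INSERT INTO user_kick_ratings VALUES (1, 5);  -- project 2 has no rating
""")

# Filtering in WHERE runs after the join, so unmatched left rows
# (whose rating is NULL) fail the condition and disappear:
where_rows = conn.execute("""
    SELECT k.project_id FROM kickstarter k
    LEFT JOIN user_kick_ratings r ON k.project_id = r.project_id
    WHERE r.rating >= 1
""").fetchall()

# Filtering in ON only restricts which right-side rows may match,
# so every kickstarter row survives:
on_rows = conn.execute("""
    SELECT k.project_id FROM kickstarter k
    LEFT JOIN user_kick_ratings r
        ON k.project_id = r.project_id AND r.rating >= 1
    ORDER BY k.project_id
""").fetchall()

print(where_rows)  # [(1,)]
print(on_rows)     # [(1,), (2,)]
```

The same two-row difference is what the question observed with MySQL.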
d10059
It turned out that when a post is published with cron, the 'publish_post' hook is also executed, so I didn't need to use it together with the 'future_to_publish' hook. The problem with both hooks, however, was that for some reason get_home_path() does not work in the same way as when the post is published immediately from the admin panel, so using the exact path to my file solved the problem.
d10060
My main point is just passing the stream into a MediaDataSource like below: public void Read() { System.Threading.Tasks.Task.Run(() => { MediaPlayer player = new MediaPlayer(); try { player.Prepared += (sender, e) => { player.Start(); }; player.SetDataSource(new StreamMediaDataSource(mmInStream)); player.Prepare(); } catch (IOException ex) { System.Diagnostics.Debug.WriteLine("Input stream was disconnected", ex); } }).ConfigureAwait(false); } And in your MediaDataSource implementation, remove the seeking operation, as it is a network stream. If you do want seeking, you would have to copy the data into a MemoryStream first; public class StreamMediaDataSource : MediaDataSource { ... public override int ReadAt(long position, byte[] buffer, int offset, int size) { // data.Seek(position, System.IO.SeekOrigin.Begin); remove seeking return data.Read(buffer, offset, size); } ... // rest of your code remains the same. }
d10061
Thanks guys for the advice. I was able to resolve the issue by adding this to my Page_Load: Dim cryRpt As New ReportDocument Dim crtableLogoninfos As New TableLogOnInfos Dim crtableLogoninfo As New TableLogOnInfo Dim crConnectionInfo As New ConnectionInfo Dim CrTables As Tables Dim CrTable As Table cryRpt.Load(Server.MapPath("PersonnelListingReport.rpt")) With crConnectionInfo .ServerName = "0.0.0.0" .DatabaseName = "TestDB" .UserID = "user" .Password = "pw" End With CrTables = cryRpt.Database.Tables For Each CrTable In CrTables crtableLogoninfo = CrTable.LogOnInfo crtableLogoninfo.ConnectionInfo = crConnectionInfo CrTable.ApplyLogOnInfo(crtableLogoninfo) Next CrystalReportViewer1.ReportSource = cryRpt CrystalReportViewer1.RefreshReport() A: Create a class for setting the logon values for the Crystal report, with a function getreport() which returns a ReportDocument for a Crystal report at a given report location Module Logonvalues Function getreport(ByVal ReportLocation As String) As ReportDocument Dim crconnectioninfo As ConnectionInfo = New ConnectionInfo() Dim cryrpt As ReportDocument = New ReportDocument() Dim crtablelogoninfos As TableLogOnInfos = New TableLogOnInfos() Dim crtablelogoninfo As TableLogOnInfo = New TableLogOnInfo() Dim CrTables As Tables cryrpt.Load(ReportLocation) cryrpt.DataSourceConnections.Clear() crconnectioninfo.ServerName = "ur servername" crconnectioninfo.DatabaseName = "ur database" crconnectioninfo.UserID = "ur database username" crconnectioninfo.Password = "ur database password" CrTables = cryrpt.Database.Tables For Each CrTable As CrystalDecisions.CrystalReports.Engine.Table In CrTables crtablelogoninfo = CrTable.LogOnInfo crtablelogoninfo.ConnectionInfo = crconnectioninfo CrTable.ApplyLogOnInfo(crtablelogoninfo) Next Return cryrpt End Function End Module Finally, we call the logon function from the form containing the CrystalReportViewer Public Sub loadreport() Crvt_ApplicationReport.ReportSource = Logonvalues.getreport("yourlocation") Crvt_ApplicationReport.SelectionFormula = "yourformula if any" Crvt_ApplicationReport.RefreshReport() End Sub
d10062
No, there is neither a compiler nor an IDE available for the iPad. You need a Mac to do iOS development, but even a cheap used Mac Mini will do (and no, you cannot do iOS development on Windows, I'm afraid). A: You are correct. Apple wants you to develop your apps on a Mac. A: Here is a link to Apple's site describing what you need. A Mac with Xcode is a requirement. A: First, to answer your "subject question": As far as I know, NO, you cannot install the Xcode development kit on an iPad and thereby produce new iPad software... Apple also would like you to buy a real Apple computer if you want to do real business with the platform. BUT With some effort and research, you can just buy an original Mac OSX 10.6 or newer. Then with some tweaking and fixes, you install this on ordinary PC hardware. This is because Apple computers today also run on Intel CPUs and PC motherboards. It's not officially supported nor "okay" from Apple's licensing point of view, but once you get it running, the computer / OS thinks it's a real Mac and then you can run and compile Mac software as if it's running 100% as a Mac. I've seen tests where the owner connected iPods and iPhones to iTunes and the App Store, which didn't see anything unusual, so the owner was able to buy movies and music and applications as normal. The same goes for installing pure Apple software such as Xcode and other Mac-only software. You can even install a boot manager and be able to run Windows 7 and Mac OSX on a partitioned hard drive, I've been told. The "thing" is called a Hackintosh. But I was warned that it is far from every piece of PC hardware that you can make run Mac OSX, so a lot of studying is needed before succeeding, I guess. A: I am not sure if this app is compatible with the iPad, but it can certainly MAKE your app. You still need the SDK to compile the code it generates, and you still need to purchase the dev program to release your app to the store. Not to mention the functionality you can add is very limited, but it is the closest to developing on the device itself that you can get. A: You could use the Notes app on an iPad, or a Javascript editing app, or a cloud hosted text editor from iPad Safari, to write HTML5/CSS/Javascript for a web app. Upload the resulting web app source text plus a manifest to some web server, go to it in Safari with your iPad, test it, and save it as a web clipping web app. That's for a web app. If you want to build native iOS/iPad apps you need an Intel Mac running OS X 10.6.x (but even a cheaper old used Mini or iMac will do, as long as it can run Snow Leopard 10.6). Or at least fast network access to a Mac. You could remote access a Mac using one of the many VNC or other remote viewing apps for the iPad, and develop native iPad apps from an iPad that way, but it would still involve a Mac.
d10063
You could get the ListView from your ListActivity with the method getListView() and then try to set your footer view before you set the adapter.
d10064
Track an indicator that you've already rendered the "first" tab. Something as simple as: $firstTab = true; Then within the loop, set it to false after rendering the "first" tab, conditionally including the active class: $firstTab = true; while($row = $result->fetch_assoc() ) { $id = $row['id']; $in = $row['initial']; $name = $row['name']; echo "<div class=\"tab-pane fade p-2 " . ($firstTab ? "active" : "") . "\" id=\"$in\" role=\"tabpanel\" aria-labelledby=\"$in-tab\">"; $firstTab = false; } A: First: write HTML as HTML, not using strings. You'll need to use an index, like: <?php $i = 0; while($row = $result->fetch_assoc() ) { $id = $row['id']; $in = $row['initial']; $name = $row['name']; ?> <div class="tab-pane fade p-2 <?= $i === 0 ? 'active' : '' ?>" id="<?= $in ?>" role="tabpanel" aria-labelledby="<?= $in ?>-tab"> ... </div> <?php $i++; } ?>
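Language aside, both answers use the same pattern: detect the first loop iteration and emit the extra class only then. A minimal Python sketch of that logic (the row data here is invented):

```python
rows = [{"initial": "home"}, {"initial": "about"}, {"initial": "contact"}]

panes = []
for i, row in enumerate(rows):
    # Only the first pane gets the extra "active" class
    classes = "tab-pane fade p-2" + (" active" if i == 0 else "")
    panes.append('<div class="%s" id="%s"></div>' % (classes, row["initial"]))

print(panes[0])  # <div class="tab-pane fade p-2 active" id="home"></div>
```

Whether you track a boolean flag or compare an index to zero is a matter of taste; both stop marking panes active after the first pass.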
d10065
I have used Clever Components' Interbase DataPump with success. I personally haven't used it with MySQL, but there shouldn't be any problems. As for performance comparison - as always, it comes down to your specific data and use cases. General wisdom is that MySQL is faster in some cases, but that comes at the cost of reliability (i.e. not using transactions). A: I also use Clever Components' Interbase DataPump, but you can also check the IBPhoenix resource site here
d10066
You are making many mistakes - up to the point that g++ does not compile the code and explains why pretty well. A pointer is an address. There is no "connecting pointer to address". ptr1 = &var1; means literally "store the address of var1 in the variable named ptr1". You use incompatible pointer types, so as soon as you dereference them (e.g. using *) you are going into undefined behaviour. I am pretty sure you can reinterpret any type of data as char* or unsigned char*; I imagine this is true for equivalent types like uint8_t, i.e. single-byte types. You, however, are going the other way: you declare 1-byte data and are pretending it's a 4-byte int. Basically you force the program to read memory outside the variable's bounds. The fact that *ptr1 and *ptr2 give the result you expect is a rather lucky coincidence. Probably the memory behind them was zeroed. For ptr3 it isn't, because you have filled it with other elements of the array (7 and 9). I believe you also use the wrong type specifier for printing. %d is for int; uint8_t should be printed with %hhu and uint64_t with %lu. I am not 100% convinced how fatal this is, because of platform-specific widths and integer promotions. You should use matching types for your pointers and variables. A: It seems that you do understand what pointers are and you can use them with basic types. There are two problems in your code. First is this part: //connecting pointer to address + 1 ptr2 = &var1 + 1; Here you assigned some address to the variable ptr2. Up to this point there is nothing dangerous about that. But then you assign a value to the memory at that address //assign value to pointer *ptr2 = var2; This is dangerous because you, as a developer, don't know what is stored at that address. Even if you are lucky right now, and that part of memory isn't being used for anything else, it will most likely change once your program gets longer, and then you will have a hard time searching for the bug. Now arrays usually are a bit confusing, because when you create an array like this: uint8_t arr[] = {7,9,11}; three things happen. * *Your program allocates a continual block of memory that fits 3 variables of type uint8_t. The 3 variables in this context are called elements. *The elements get the provided initial values 7, 9 and 11. *When used in an expression, arr evaluates to the address of the first element (the one that contains value 7), so arr effectively behaves as a uint8_t *. In order to get the last part to do what you expect, you just need to change this one line (remove the &): ptr3 = arr; EDIT: BTW watch and understand this course and you will be an expert on C memory manipulation. The video is a bit dated, but trust me, the guy is great. EDIT2: I just realised the other answer is absolutely correct; you really need to match the types.
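The width mismatch both answers describe can be illustrated without C at all. Here is a sketch using Python's stdlib struct module: reading a 4-byte integer from a buffer of 1-byte values pulls in the neighbouring bytes, which is exactly what the out-of-bounds read above does (the buffer contents stand in for whatever happens to sit in memory next to the variable):

```python
import struct

# Four adjacent bytes, standing in for uint8_t arr[] = {7, 9, 11} plus
# one byte of neighbouring memory (here a zero):
buf = bytes([7, 9, 11, 0])

# Reading a single byte at offset 0 gives the expected 7:
(as_u8,) = struct.unpack_from("<B", buf, 0)

# Pretending the same address holds a 4-byte little-endian int pulls in
# the neighbouring bytes too -- the analogue of the out-of-bounds read:
(as_u32,) = struct.unpack_from("<I", buf, 0)

print(as_u8)   # 7
print(as_u32)  # 7 + 9*256 + 11*65536 = 723207
```

In the C version the extra bytes are whatever the program happens to find past the variable, which is why the result only looks right when that memory is zeroed.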
d10067
Assumption: get-customcmdlet is returning a pscustomobject object with a property Name that is of type string. $var = 'vol' $null -ne ((get-customcmdlet).Name -split $var)[1] -as [int] This expression will return $true or $false based on whether the cast is successful. If your goal is to pad zeroes, you need to do that after-the-fact (in this case, I just captured the original string): $var = 'vol' $out = ((get-customcmdlet).Name -split $var)[1] if ($null -ne $out -as [int]) { $out } else { throw 'Failed to find appended numbers!' }
d10068
You may need to subclass UINavigationController and use your subclass instead of the standard UINavigationController. I have done this in my projects; it takes care of the individual UIViewController classes' custom orientations: .h #import <UIKit/UIKit.h> @interface UIOrientationController : UINavigationController { } @end .m @implementation UIOrientationController - (BOOL)shouldAutorotate { return [self.topViewController shouldAutorotate]; } - (NSUInteger)supportedInterfaceOrientations { return [self.topViewController supportedInterfaceOrientations]; } @end NOTE: you can extend this class by overriding more methods if it becomes a requirement in your final code.
d10069
I solved this issue following the indications provided in the article http://blog.dev-area.net/2015/08/13/android-4-1-enable-tls-1-1-and-tls-1-2/ with a few changes. SSLContext sslContext = SSLContext.getInstance("TLS"); sslContext.init(null, null, null); SSLSocketFactory noSSLv3Factory = null; if (Build.VERSION.SDK_INT <= Build.VERSION_CODES.KITKAT) { noSSLv3Factory = new TLSSocketFactory(sslContext.getSocketFactory()); } else { noSSLv3Factory = sslContext.getSocketFactory(); } connection.setSSLSocketFactory(noSSLv3Factory); This is the code of the custom TLSSocketFactory (TLS_v1_1 and TLS_v1_2 are constants holding the protocol names "TLSv1.1" and "TLSv1.2"): public static class TLSSocketFactory extends SSLSocketFactory { private SSLSocketFactory internalSSLSocketFactory; public TLSSocketFactory(SSLSocketFactory delegate) throws KeyManagementException, NoSuchAlgorithmException { internalSSLSocketFactory = delegate; } @Override public String[] getDefaultCipherSuites() { return internalSSLSocketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return internalSSLSocketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException { return enableTLSOnSocket(internalSSLSocketFactory.createSocket(s, host, port, autoClose)); } @Override public Socket createSocket(String host, int port) throws IOException, UnknownHostException { return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port)); } @Override public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException, UnknownHostException { return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port, localHost, localPort)); } @Override public Socket createSocket(InetAddress host, int port) throws IOException { return enableTLSOnSocket(internalSSLSocketFactory.createSocket(host, port)); } @Override public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException { return enableTLSOnSocket(internalSSLSocketFactory.createSocket(address, port, localAddress, localPort)); } /* * Utility methods */ private static Socket enableTLSOnSocket(Socket socket) { if (socket != null && (socket instanceof SSLSocket) && isTLSServerEnabled((SSLSocket) socket)) { // skip the fix if server doesn't provide the TLS version ((SSLSocket) socket).setEnabledProtocols(new String[]{TLS_v1_1, TLS_v1_2}); } return socket; } private static boolean isTLSServerEnabled(SSLSocket sslSocket) { for (String protocol : sslSocket.getSupportedProtocols()) { if (protocol.equals(TLS_v1_1) || protocol.equals(TLS_v1_2)) { return true; } } return false; } } You can also check the server certificate using online services like https://www.ssllabs.com/ssltest/analyze.html to be sure that the server supports the TLS versions you are enabling A: Your code already contains the line HttpsURLConnection.setDefaultSSLSocketFactory(new TLSSocketFactory()); which sets your TLSSocketFactory for all connections in your app that are established afterwards. Therefore the call to connection.setSSLSocketFactory(new TLSSocketFactory()); is redundant and should have no effect. Note: There may be an issue with already open connections that have been established before you call setDefaultSSLSocketFactory and that are cached internally and reused. Therefore it is recommended to set the default socket factory before you open any network connection. A: UPDATE: This has now been resolved; the issue was an SSL certificate issue, and I can confirm that both methods are working for Android API level 16+
d10070
You are passing the View a single NewMessageListViewData, but it has been strongly typed to accept only objects implementing IEnumerable<T> where T is NewMessageListViewData. For example, a List<NewMessageListViewData> would work. As mentioned in the comments, I would start by looking at the return type of ClubStarterKit.Web.Infrastructure.Forum.NewMessageListAction.Execute(), or, if you only intend to display a single instance of NewMessageListViewData in this View, you need to set the model in the View to reflect that.
d10071
You need to set a unique key for each item. Try this: <ul class="gameHistoryList_rou"> <li :style="{background: historyCheckColor(win)}" v-for="(win, index) of lastWins" :key="index">{{ win }}</li> </ul>
d10072
Reading between the lines, you're generating an insert command, passing hash as the value to set for your resources_guid column? If you supply any value, even null, that will be used instead of the default. To use the default, you need to not supply that parameter/column to the MySqlCommand object at all.
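The rule is easy to demonstrate: any supplied value, including NULL, beats the column DEFAULT; only omitting the column lets the DEFAULT fire. This sketch uses Python's built-in sqlite3 as a stand-in for MySQL (the semantics of DEFAULT on a nullable column are the same), with a hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE resources (
        id INTEGER PRIMARY KEY,
        resources_guid TEXT DEFAULT 'generated-guid'
    )
""")

# Omitting the column entirely lets the DEFAULT fire:
conn.execute("INSERT INTO resources (id) VALUES (1)")

# Supplying NULL explicitly overrides the DEFAULT with NULL:
conn.execute("INSERT INTO resources (id, resources_guid) VALUES (2, NULL)")

rows = conn.execute(
    "SELECT id, resources_guid FROM resources ORDER BY id"
).fetchall()
print(rows)  # [(1, 'generated-guid'), (2, None)]
```

So the fix on the C# side is to drop the parameter from the generated INSERT rather than bind a null for it.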
d10073
You can use flexbox. Then, you have to play with the borders. Here is an example <html> <header> <meta charset="utf-8" /> <title>Example App</title> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous" /> </header> <body> <div class="content d-flex justify-content-center"> <div class="left-border align-self-center"></div> <div class="text">FAST NAVIGATOR</div> <div class="right-border align-self-center"></div> </div> <div class="text-center">Your content here</div> </body> </html> <style> .content { padding: 1rem; } .left-border { border: 2px solid black; width: 100%; height: 10px; border-bottom: none; border-right: none; background: transparent; } .text { font-size: 12px; white-space: nowrap; padding-left: 16px; padding-right: 16px; margin-top:-8px; } .right-border { border: 2px solid black; width: 100%; height: 10px; border-bottom: none; border-left: none; background: transparent; } </style> A: fieldset { padding: 0; height: 8px; border-bottom: none; } <fieldset> <legend align="center">Title</legend> </fieldset>
d10074
Try googling; there are a lot of answers for this question. Here are the top three when I looked Programmatically generate video or animated GIF in Python? Generating an animated GIF in Python https://sukhbinder.wordpress.com/2014/03/19/gif-animation-in-python-in-3-steps/ A: Here is something that I wrote a while ago. I couldn't find anything better when I wrote this in the last six months. Hopefully this can get you started. It uses ImageMagick found here! import os, sys import glob dataDir = 'directory of image files' #must contain only image files #change directory to gif directory os.chdir(dataDir) #Create txt file for gif command fileList = glob.glob('*') #star grabs everything, can change to *.png for only png files fileList.sort() #writes txt file file = open('fileList.txt', 'w') for item in fileList: file.write("%s\n" % item) file.close() #verifies correct convert command is being used, then converts gif and png to new gif os.system('SETLOCAL EnableDelayedExpansion') os.system('SET IMCONV="C:\Program Files\ImageMagick-6.9.1-Q16\Convert"') # The following line is where you can set the delay of images (100) os.system('%IMCONV% -delay 100 @fileList.txt theblob.gif') New convert call For whatever reason (I didn't look at the differences), I no longer needed the setlocal and set command lines in the code. I needed them before because the convert command was not being used even with the path set. You need the new call when you use the new 7.x release of ImageMagick; see the new line of code, which replaces the last three lines. Be sure to check the box that asks to download legacy commands, e.g. convert, when installing. Everything was converted to a single magick command, but I didn't want to relearn the new conventions. os.system('convert -delay 50 @fileList.txt animated.gif')
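If you'd rather skip the intermediate text file, you can pass the sorted frame list straight on the command line. A small hedged sketch along the same lines as the answer above; it only builds the argument list, and you would run it with subprocess.run(cmd) once ImageMagick's convert command is installed:

```python
import glob
import os

def build_convert_command(data_dir, delay=100, output="animated.gif"):
    # Sort the frames by name so they appear in the intended order,
    # mirroring the fileList.sort() step above
    frames = sorted(glob.glob(os.path.join(data_dir, "*.png")))
    return ["convert", "-delay", str(delay)] + frames + [output]

cmd = build_convert_command("directory of image files", delay=50)
print(cmd[:3])  # ['convert', '-delay', '50']
```

Passing a list to subprocess.run also avoids the quoting problems that os.system has with paths containing spaces.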
d10075
A VPN should not affect your tests. Judging by the symptoms, it looks like you are using a proxy, not a VPN. If that's true, the proxy server address should be specified using the --proxy option as described below. --proxy <host> Specifies the proxy server used in your local network to access the Internet. testcafe chrome my-tests/**/*.js --proxy proxy.corp.mycompany.com testcafe chrome my-tests/**/*.js --proxy 172.0.10.10:8080 You can also specify authentication credentials with the proxy host. testcafe chrome my-tests/**/*.js --proxy username:password@proxy.mycorp.com As for the --proxy-bypass option, it has the opposite meaning. You can use it to specify resources that are not proxied, for example, the ones that are hosted inside of your local network.
d10076
I was able to fix this by upgrading cocoapods on my computer by running brew install cocoapods. I then closed out of the terminal I was using and opened a new one. This post helped me understand the issue
d10077
I have found the cause and the solution. [Cause of problem] The service is unable to determine which program should be used to run the JAR file. [Detail] I tried to debug the code. At the location where the process is started, a popup message like the one shown in the image below appeared. location : processSample.Start() * *It means that at least once, the user needs to select the program. *If we select [Java(TM) Platform SE binary] once, then after that the service always runs successfully. *This behavior occurs on Windows 10 only. *In addition to program selection, the user setting shown in the image in the question is also required to run the service. *I want to point out that in the default program settings the correct program is already selected for .jar files, as shown in the image below, but Windows 10 still asks the user to select a program once. [Solution] Run the JAR file from the Windows (C#) service with the settings below: sampleProcess.StartInfo.FileName = "javaw.exe"; sampleProcess.StartInfo.Arguments = "-jar Sample.jar"; sampleProcess.StartInfo.WorkingDirectory = @"C:\SampleFolder"; sampleProcess.StartInfo.UseShellExecute = false; sampleProcess.EnableRaisingEvents = true; sampleProcess.StartInfo.CreateNoWindow = false; Here the working directory is the location where [Sample.jar] exists. Additionally, a Path environment variable must be set in order to execute "javaw.exe". Before the fix I had the implementation below, which is not proper for every system environment: sampleProcess.StartInfo.FileName = "Sample.jar"; sampleProcess.StartInfo.WorkingDirectory = @"C:\SampleFolder"; sampleProcess.EnableRaisingEvents = true; sampleProcess.StartInfo.CreateNoWindow = false;
d10078
It seems like a minor tweak will get what you want. First, there's space between the images because you put spaces between the images. Any whitespace between the HTML elements will be rendered as a space, so remove the line breaks: <img src="dragon_float.jpg"><img src="rootbeer_float.jpg"><img src="dog_tubing.jpg"><img src="money.jpg"> Second, add h2 to the style that centers, and remove the left margin on h2: h1, h2 { text-align: center; } h2 img { display: inline; height: 100px; /* margin-left: 100px; */ } That's all that's needed. A: Maybe use: h2 img { display: inline-block; float: left; height: 100px; margin-left:100px; }
d10079
To answer your question: to use the digits as text item delimiters is just: set AppleScript's text item delimiters to {"0", "1", "2", "3", "4", "5", "6", "7", "8", "9"} You can set multiple text item delimiters at once, but the problem is that when using multiple text item delimiters you actually have no idea what was between the two text items. Also, the order in which the delimiters appear in the text is not important when using text item delimiters. Therefore I would suggest using regular expressions instead: you define a certain format instead of separating a string and guessing which character was actually the separator. tell application "Finder" to set aList to every file in folder "ImageRename" repeat with i from 1 to number of items in aList set aFile to item i of aList set fileName to name of aFile as string set newName to do shell script "perl -pe 's/^(.*) - (.*) ([0-9]{2}\\.(jpeg|png))$/\\2 - \\1 \\3/i' <<<" & quoted form of fileName if newName is not fileName then set name of aFile to newName end repeat The reason I use perl and not sed is that perl supports the i flag in the substitution, which makes the comparison of the expression case insensitive. edit (requested explanation): The format of the old string is something like: the string can start with any character (^.*) up to the literal string " - " ( - ), then followed by any character (.*) again. The string has to end with a substring starting with a space and 2 digits ( [0-9]{2}), followed by a literal period (\.), and end with either jpeg or png ((jpeg|png)$). If we put this all together we get a regex like "^.* - .* [0-9]{2}\.(jpeg|png)$", but we want to group the match into different sections and return them in a different order as our new string. Therefore we group the regular expression into 3 different sub-matches by placing parentheses: ^(.*) - (.*) ([0-9]{2}\.(jpeg|png))$ The first group will match the firstPart, the second group will match the secondPart and the third group (XX.xxx) will match the remaining part. The only thing we need to do is reorder them when we return the new string. A backslash followed by a number in the new string will be replaced by the matching group. In the substitution command this is notated as s/search/\2 - \1 \3/flags. The last part of our substitution is some flags; I use i as the flag for case-insensitive matching. Putting this all together gets me s/^(.*) - (.*) ([0-9]{2}\.(jpeg|png))$/\2 - \1 \3/i note: because \ is a special character in AppleScript, we have to write a \ down as \\ A: Just use the space as the delimiter and build the parts. Edited: to allow for spaces in the text parts. tell application "Finder" to set aList to every file in folder "ImageRename" set AppleScript's text item delimiters to " " repeat with i from 1 to number of items in aList set aFile to (item i of aList) try set fileName to name of aFile set lastParts to text item -1 of fileName set wordParts to (text items 1 thru -2 of fileName) as string set AppleScript's text item delimiters to " - " set newName to {text item 2 of wordParts, "-", text item 1 of wordParts, lastParts} set AppleScript's text item delimiters to " " set name of aFile to (newName as string) end try end repeat set AppleScript's text item delimiters to ""
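For comparison, the same substitution the perl one-liner performs can be sketched with Python's re module (re.IGNORECASE plays the role of perl's /i flag; the filenames below are made up). The extension alternation is written as a non-capturing group so that \3 covers the whole "XX.ext" tail, matching what the original pattern references:

```python
import re

# Same shape as the perl pattern: firstPart - secondPart NN.ext
pattern = re.compile(r"^(.*) - (.*) ([0-9]{2}\.(?:jpeg|png))$", re.IGNORECASE)

def swap_parts(filename):
    # Swap the first two groups, keep the "NN.ext" tail in place
    return pattern.sub(r"\2 - \1 \3", filename)

print(swap_parts("Artist - Title 01.png"))  # Title - Artist 01.png
print(swap_parts("no-match.txt"))           # unchanged: no-match.txt
```

As in the AppleScript version, names that don't match the expected format are left untouched, so non-conforming files are skipped safely.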
d10080
You have source2swagger installed locally, while your gems are installed as root. So source2swagger, which needs json, can't access those gems installed as root. I recommend always installing gems locally and avoiding sudo when installing gems. To manage local gems I suggest using RVM.
d10081
You are passing in a tuple, not a bytestring: sqlite3.Binary((a,)) Create a tuple with the result of sqlite3.Binary(), having passed in just a: (sqlite3.Binary(a),) The whole statement is then run as: c.execute("INSERT INTO authors(Name) VALUES (?)", (sqlite3.Binary(a),)) However, if this is supposed to be text, you'd normally decode the bytes to a string instead of trying to insert binary data: c.execute("INSERT INTO authors(Name) VALUES (?)", (a.decode('utf8'),)) This does assume that your text is encoded using the UTF-8 codec; adjust the decoding as needed to match your data.
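Here is the corrected call as a runnable sketch against an in-memory database (the author bytes are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (Name BLOB)")

a = b"J.R.R. Tolkien"

# sqlite3.Binary wraps the bytes; the parameter tuple wraps the wrapper:
conn.execute("INSERT INTO authors (Name) VALUES (?)", (sqlite3.Binary(a),))

(stored,) = conn.execute("SELECT Name FROM authors").fetchone()
print(bytes(stored))  # b'J.R.R. Tolkien'
```

Note the trailing comma in (sqlite3.Binary(a),): without it the parentheses are just grouping, not a one-element tuple, which is exactly the shape mix-up the original code made in reverse.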
d10082
a is in the global namespace scope. If it isn't shadowed, and assuming you have included the right header files, you can simply refer to it as a in your other file. However, if it is shadowed, or if you just want to play it safe and refer to it explicitly, you can refer to it as ::a. For example, #include <header_for_a.h> namespace B { int a;//This now shadows the "a" from the global namespace. void foo() { a = 1;//This is referring to B::a; ::a = 2; // This is referring to "a" from the global namespace. } } A: int a belongs to the global namespace. That means another variable with the same name in the global namespace could give you a linker error. If I want to use this variable a from other files, how do we specify its namespace? You can just enclose it in a namespace. Generally a namespace should have all related entities in it. So, if you think it can be put inside an already existing namespace, then go ahead.
d10083
You need to set "maxDate"; it cannot be "maxDate": '0'.
d10084
The problem is that the string you pass to Data(base64Encoded: is not actually base64-encoded; it contains some more plaintext at the front. You need to remove that and only pass the actual base64-encoded image, like so: let str = "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAMCAgICAgMCAgIDAwMDBAYEBAQEBAgGBgUGCQgKCgkICQkKDA8MCgsOCwkJDRENDg8QEBEQCgwSExIQEw8QEBD/2wBDAQMDAwQDBAgEBAgQCwkLEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBD/wAARCAABAAEDAREA..." let dataDecoded : Data = Data(base64Encoded: str)! let decodedimage = UIImage.init(data: dataDecoded) Alternatively you can programmatically split the original data at the , and take the second part as the actual data input. At that point you can remove the options: .ignoreUnknownCharacters as well since now there should not be any non-base64 chars left - and if there are you should fix the data instead of the code. Final note: depending on where you get the data from it might be a good idea to not force-unwrap anything and instead deal with a broken / missing image by e.g. displaying a placeholder image.
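The split-then-decode step is language-agnostic; a quick Python sketch with a fabricated data-URI-style string (the payload is just base64 of a short string, not a real image):

```python
import base64

# A data-URI-style string: plaintext header, comma, then the payload:
data_uri = "data:image/jpeg;base64," + base64.b64encode(b"fake image bytes").decode()

# Splitting at the first comma isolates the actual base64 payload:
header, payload = data_uri.split(",", 1)
decoded = base64.b64decode(payload)
print(decoded)  # b'fake image bytes'
```

Once the header is stripped, the payload decodes cleanly with no need to ignore unknown characters.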
d10085
Ajax is working on your computer, but not with URLs beginning with file://, because Ajax needs to request files from a server. So, if you want to use Ajax, you have to install a WAMP server and move your files into it.
d10086
I think there is no need to add a UIPanGestureRecognizer to View(C); you can track the finger position in the UILongPressGestureRecognizer handler method. Look at the sample code. Declare variables: @IBOutlet var cView: UIView? Here is the UILongPressGestureRecognizer handler method: @IBAction func handleLongPressGesture(_ gesture: UILongPressGestureRecognizer) { switch gesture.state { case .began: cView?.isHidden = false case .changed: if let cView = cView, cView.isHidden == false { let location = gesture.location(in: self.cView) print("Finger Location - (\(location.x),\(location.y))") } case .ended, .cancelled: cView?.isHidden = true default: break } } This code meets your requirements.
d10087
It is because void is a valid TypeScript type declaration. E.g. the following is valid:
var f: void;

However it is not useful as a variable type. From the language spec (http://www.typescriptlang.org/Content/TypeScript%20Language%20Specification.pdf):

NOTE: We might consider disallowing declaring variables of type Void as they serve no useful purpose. However, because Void is permitted as a type argument to a generic type or function it is not feasible to disallow Void properties or parameters.

Update: As pointed out by Ryan: https://twitter.com/SeaRyanC/status/479664200323579904

You don't get much additional type safety if f: void was disallowed. You can't use f in a meaningful way, e.g. the following are all compile errors:
var f: void;
f.bar; // error

function bar(f: {}) {}
bar(f); // error

var baz: {a?: number} = f; // error

// Only allowed cases
f = undefined;
f = null;

Curious about your real-world case though.

Update based on question update: Unfortunately I don't see a way to get the compiler to prevent this, since anything is allowed in a JS/TS if statement. Perhaps you want a boolean-only truthy/falsy restriction.
if (f()) {
}

A: I believe if you try:
var f: number = foo();

It will return the error you're looking for. I think an untyped variable defaults to "any", which makes it skip type checking. Link here:
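To illustrate the runtime side of this, here is a small self-contained sketch; logMessage is a hypothetical stand-in for a void-returning foo (not from the original question):

```typescript
// Hypothetical stand-in for a function declared to return void.
function logMessage(msg: string): void {
  // A void function simply falls off the end; its runtime value is undefined.
  void msg; // suppress "unused parameter" without side effects
}

// Assigning the result is legal TypeScript, but the value is always undefined,
// so a check like `if (result)` can never succeed at runtime.
const result: void = logMessage("hello");
console.log(result === undefined); // prints "true"
```

This is why a truthiness check on a void-returning call is almost certainly a bug, even though the compiler accepts it.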
d10088
Sample:
Task.Factory.StartNew(testMethod).ContinueWith(p =>
{
    if (p.Exception != null)
        p.Exception.Handle(x =>
        {
            Console.WriteLine(x.Message);
            return false;
        });
});

A: They are viewed as observed once you access the Exception property. See also AggregateException.Handle. You can use t.Exception.Handle instead:
t.Exception.Handle(exception =>
{
    Console.WriteLine(exception);
    return true;
});
d10089
Unfortunately there's no way to deal with this currently. C++20 solves this problem by introducing concepts, which let you constrain template parameters with abstract requirements; violating those requirements produces simple, readable errors.

For now, I dig into these lines, and I've gotten used to it. I'm currently dealing with a program with 5 template parameters in places. It's all about getting used to it and training your eyes to parse the content.

However, if you're really stuck, one solution I may suggest is that you copy all the relevant error output to some editor and do a find-and-replace to simplify individual expressions, making them smaller and smaller with every replace until they become readable for you. Good regex skills help as well. In Notepad++ (or Notepadqq on Linux), you can find regular expressions and use capture groups in the replacement, with \1 for the first capture group, \2 for the second, etc.

So, bottom line: until C++20, there's no clean solution for this except what you invent yourself.
d10090
If your aim is to reduce memory demands, then don't serialize then encrypt; instead, serialize directly to an encrypting Stream. The Stream API is designed to be chained (decorator pattern) to perform multiple transformations without excessive buffering. Likewise: deserialize from a decrypting stream; don't decrypt then deserialize.

Done this way, data is encrypted/decrypted on the fly as needed; in addition to reducing memory, it is also good for security, since the entire data never exists in decrypted form as a single buffer.

See CryptoStream on MSDN for a full example.

Some additional notes: if you do happen to use protobuf-net, there are ways of reducing any in-memory buffering by using "grouped" encoding. The default for sub-messages (including lists) is "length prefixed", and the way it usually does this is by buffering the data in memory to calculate the length. However, protobuf also supports a format that uses a start/end marker, which never requires knowing the length, so never requires buffering - and so the entire sequence can be written in a single pass direct to output (well, it does still use a buffer internally to improve IO, but it pools the buffer here, for maximum re-use). This is as simple as, for sub-objects:

[ProtoMember(11, DataFormat = DataFormat.Grouped)]
public Customer Customer {get;set;} // a sub-object (there is no significance in the 11)

A: See http://code.google.com/p/protobuf-net/wiki/Performance for a comparison of performance.
d10091
A: a
A:
B: b B
B:
C: c C c
C: c
D: D d D
D: d
X: x Y
X:
Y: y X
Y:

A: There is no such mechanical procedure, because the problem of determining whether a CFG defines a regular language is undecidable. This result is a simple application of Greibach's Theorem.
d10092
To achieve your expected result, remove display:flex:
#login {
  height: 100vh;
  width: 100%;
  display: -webkit-box;
  display: -webkit-flex;
  display: -ms-flexbox;
  -webkit-box-align: center;
  -webkit-align-items: center;
  -ms-flex-align: center;
  align-items: center;
}

http://codepen.io/nagasai/pen/zBjkWw
d10093
Try this JSON: "body": "{\n\"newVendorNames\": \"{{newVendorNames.value}}\",\n\"newVendorDocs\": \"{{newVendorDocs.value}}\",\n\"existingVendorAction\": \"{{existingVendorAction.value}}\"\n}" You forgot the \n
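If building the escaped string by hand is error-prone, JSON.stringify can produce it for you. A minimal sketch, with placeholder values standing in for the {{...}} template variables:

```typescript
// Placeholder values standing in for the {{newVendorNames.value}} etc. templates.
const payload = {
  newVendorNames: "Acme",
  newVendorDocs: "doc-1",
  existingVendorAction: "update",
};

// JSON.stringify takes care of quoting and escaping automatically.
const body = JSON.stringify(payload);
console.log(body);
// prints {"newVendorNames":"Acme","newVendorDocs":"doc-1","existingVendorAction":"update"}
```

Generating the body this way avoids missing a \n or a \" when editing the template by hand.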
d10094
From what I could gather, you're posting to the wrong URL.

In your server app, you create a POST handler for /send. However, in your React app, you post to /xxxxx/send (you obscured the xxxxx part).

I advise that you replace your
<form method="POST" className="form" action="send">
with
<form method="POST" className="form" action="http://127.0.0.1:3000/send">
and try again.
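To see why the relative action misses the server, here is a small sketch using the WHATWG URL class; the page URL and ports are assumptions for illustration:

```typescript
// A relative action such as "send" resolves against the current page URL,
// not against the Express server listening on port 3000.
const pageUrl = "http://localhost:8080/xxxxx/form"; // hypothetical dev-server page

const relativeTarget = new URL("send", pageUrl).href;
const absoluteTarget = new URL("http://127.0.0.1:3000/send").href;

console.log(relativeTarget); // prints http://localhost:8080/xxxxx/send
console.log(absoluteTarget); // prints http://127.0.0.1:3000/send
```

With the absolute action, the browser posts straight to the Express handler regardless of which page the form lives on.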
d10095
If you are trying to save the image to the images table, but 'it keeps storing images in the row(image) in products table', you are saving the wrong item. You appear to have the correct structure, with one-to-many in your models; you just need to save the Image rather than the Product.

Something like this will fix the issue:
Image::create([
    'whateverYouNeedToSave',
    'product_id' => $product_id, // <-- connect the product here, within the image
    'etc',
]);

It is actually possible to do this by creating the Product and then saving the Images as a relation, but this might not be the cleanest/easiest way to architect this; I suggest you start with saving the Image, as you will have many of them vs. just one Product at a time. Then, after this works and is clear/comfortable, you can try the more 'advanced' way if you need to.
d10096
Does this help:

Put a BindingSource on the Form (BindingSource1). Set your DataGridView's DataSource to BindingSource1. Open the designer for the Form in question, and assuming you want to show the columns for the MyObject class, enter the following:
this.BindingSource1.DataSource = typeof(YourNamespace.MyObject);
d10097
I think you could simplify your task by using a hash table ($map) where the keys are the GUIDs of each Azure group and the values are the AD groups where the Az group members need to be added. For example:

$map = @{
    'xxxxxxxxxxxx' = 'group 1', 'group 5', 'group 8' # Football
    'zzzzzzzzzzzz' = 'group 2'                       # Volleyball
    'yyyyyyyyyyyy' = 'group 3', 'group 4'            # Boys Soccer
    # and so on here
}

foreach($pair in $map.GetEnumerator()) {
    # I think you could use `DisplayName` instead of `UserPrincipalName` here
    # and have no need to parse the UPNs
    $azMembers = (Get-AzureADGroupMember -ObjectId $pair.Key).DisplayName
    foreach($adGroup in $pair.Value) {
        Add-ADGroupMember -Identity $adGroup -Members $azMembers
    }
}

As stated in the inline comment, I believe using .DisplayName would suffice since -Members takes one of these values:

 *Distinguished name
 *GUID (objectGUID)
 *Security identifier (objectSid)
 *SAM account name (sAMAccountName)

But considering this may not be the case and that doesn't work, an easier and safer way to parse the user's UserPrincipalName, rather than using .Split('@')[0], would be to use the MailAddress class. Using it, the code would look like this:

# here goes the `$map` too!
foreach($pair in $map.GetEnumerator()) {
    $azMembers = [mailaddress[]] (Get-AzureADGroupMember -ObjectId $pair.Key).UserPrincipalName
    foreach($adGroup in $pair.Value) {
        Add-ADGroupMember -Identity $adGroup -Members $azMembers.User
    }
}
d10098
A rough idea to start you:

<?php
session_start();
if( isset( $_GET['logout'] ) ) {
    session_destroy();
    header('Location: ../logout.php');
    exit;
}
if( !isset( $_SESSION['login'] ) ) {
    if( !isset( $_SERVER['PHP_AUTH_USER'] ) || !isset( $_SERVER['PHP_AUTH_PW'] ) ) {
        header("HTTP/1.0 401 Unauthorized");
        header("WWW-authenticate: Basic realm=\"Tets\"");
        header("Content-type: text/html");
        // Print HTML that a password is required
        exit;
    } else {
        // Validate the $_SERVER['PHP_AUTH_USER'] & $_SERVER['PHP_AUTH_PW']
        if( $_SERVER['PHP_AUTH_USER']!='TheUsername' || $_SERVER['PHP_AUTH_PW']!='ThePassword' ) {
            // Invalid: 401 Error & Exit
            header("HTTP/1.0 401 Unauthorized");
            header("WWW-authenticate: Basic realm=\"Tets\"");
            header("Content-type: text/html");
            // Print HTML that a username or password is not valid
            exit;
        } else {
            // Valid
            $_SESSION['login'] = true;
        }
    }
}
?>
// The rest of the page is then displayed like normal

A: I've found a way around it. I have 2 files: index.php and logout.php

Here is my 'index.php' code:

# CHECK LOGIN.
if (!isset($_SESSION["loged"])) {
    $_SESSION["loged"] = false;
} else {
    if (isset( $_SERVER['PHP_AUTH_USER'] ) && isset($_SERVER['PHP_AUTH_PW'])) {
        if (($_SERVER['PHP_AUTH_USER'] == L_USER) && (md5($_SERVER['PHP_AUTH_PW']) == L_PASS)) {
            $_SESSION["loged"] = true;
        }
    }
}
if ($_SESSION["loged"] === false) {
    header('WWW-Authenticate: Basic realm="Need authorization"');
    header('HTTP/1.0 401 Unauthorized');
    die('<br /><br />
    <div style="text-align:center;">
    <h1 style="color:gray; margin-top:-30px;">Need authorization</h1>
    </div>');
}

And here is my 'logout.php' code:

session_start();
$_SESSION["loged"] = false; // We can't use unset($_SESSION) when using HTTP_AUTH.
session_destroy();

A: You can use the meta tag http-equiv="refresh" with a very short response time (e.g. content="1"). This refresh will clear any $_POST.

if ( !isset($_SERVER['PHP_AUTH_USER']) || $_SERVER['PHP_AUTH_USER']!='myusername' || $_SERVER['PHP_AUTH_PW']!='mypassword' || isset($_POST['logout']) ) {
    header('WWW-Authenticate: Basic realm="My protected area"');
    header('HTTP/1.0 401 Unauthorized');
    echo '<html><head><title>401 Unauthorized</title><meta http-equiv="refresh" content="1"></head><body><h1>401 Unauthorized</h1><p>You are not allowed to see this page. Reload the page to try again.</p></body></html>';
    exit();
}
d10099
Turns out it had to do with the .dll files not being found, because the Path environment variable was not configured properly.
d10100
I found the solution by running a test myself. Yes, in a cluster configuration you need to monitor each master in order for failover to occur.