Q: Optimize slow ranking query

I need to optimize a query for a ranking that is taking forever (the query itself works, but I know it's awful, and as soon as I tried it with a realistic number of records it gave a timeout). I'll briefly explain the model. I have 3 tables: player, team and player_team. Obvious as it sounds, players are stored in the player table and teams in team. In my app, each player can switch teams at any time, and a log has to be maintained. However, a player is considered to belong to only one team at a given time: the current team of a player is the last one he's joined. The structure of player and team is not relevant, I think; I have an id column (PK) in each. In player_team I have:

    id (PK)
    player_id (FK -> player.id)
    team_id (FK -> team.id)

Now, each team is assigned a point for each player that has joined. So I want to get a ranking of the first N teams with the biggest number of players. My first idea was to first get the current players from player_team (that is, at most one record for each player, and that record must be the player's current team). I failed to find a simple way to do it (I tried GROUP BY player_team.player_id HAVING player_team.id = MAX(player_team.id), but that didn't cut it). I tried a number of queries that didn't work, but managed to get this working:

    SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
    FROM player_team pt
    JOIN player p ON (p.id = pt.player_id)
    JOIN team t ON (t.id = pt.team_id)
    WHERE pt.id IN (
        SELECT MAX(J.id)
        FROM player_team J
        GROUP BY J.player_id
    )
    GROUP BY pt.team_id
    ORDER BY total DESC
    LIMIT 50

As I said, it works but looks very bad and performs worse, so I'm sure there must be a better way to go. Does anyone have any ideas for optimizing this? I'm using MySQL, by the way. Thanks in advance.

Adding the EXPLAIN output.
(Sorry, not sure how to format it properly)

    id  select_type         table  type    possible_keys                                 key               key_len  ref           rows    Extra
    1   PRIMARY             t      ALL     PRIMARY                                       NULL              NULL     NULL          5000    Using temporary; Using filesort
    1   PRIMARY             pt     ref     FKplayer_pt77082,FKplayer_pt265938,new_index  FKplayer_pt77082  4        t.id          30      Using where
    1   PRIMARY             p      eq_ref  PRIMARY                                       PRIMARY           4        pt.player_id  1
    2   DEPENDENT SUBQUERY  J      index   NULL                                          new_index         8        NULL          150000  Using index

A: It's the subquery that is killing it. If you add a current field on the player_team table, giving it value = 1 if the row is current and 0 if it is old, you could simplify this a lot by just doing:

    SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
    FROM player_team pt
    JOIN player p ON (p.id = pt.player_id)
    JOIN team t ON (t.id = pt.team_id)
    WHERE pt.current = 1
    GROUP BY pt.team_id
    ORDER BY total DESC
    LIMIT 50

Having multiple entries in the player_team table for the same relationship, where the only way to distinguish which one is the 'current' record is by comparing two (or more) rows, is bad practice, I think. I have been in this situation before, and the workarounds you have to do to make it work really kill performance. It is far better to be able to see which row is current with a simple lookup (in this case, WHERE current = 1), or by moving historical data into a completely different table (depending on your situation this might be overkill).

A: Try this:

    SELECT t.*, cnt
    FROM (
        SELECT team_id, COUNT(*) AS cnt
        FROM (
            SELECT player_id, MAX(id) AS mid
            FROM player_team
            GROUP BY player_id
        ) q
        JOIN player_team pt ON pt.id = q.mid
        GROUP BY team_id
    ) q2
    JOIN team t ON t.id = q2.team_id
    ORDER BY cnt DESC
    LIMIT 50

Create an index on player_team (player_id, id) (in this order) for this to work fast.

A: I sometimes find that more complex queries in MySQL need to be broken into two pieces. The first piece pulls the required data into a temporary table, and the second piece is the query that manipulates the dataset created. Doing this definitely results in a significant performance gain.

A: This will get the current teams with colours, ordered by size (assuming a current flag column, as in the first answer):

    SELECT pt.team_id, COUNT(pt.player_id) AS total, t.color
    FROM player_team pt
    JOIN team t ON t.id = pt.team_id
    WHERE pt.current = 1
    GROUP BY pt.team_id
    ORDER BY total DESC
    LIMIT 50;

But you've not given a condition for which player should be considered owner of the team. Your current query arbitrarily shows one player as owner_uid because of the grouping, not because that player is the actual owner. If your player_team table contained an 'owner' column, you could join the above query to a query of owners. Something like:

    SELECT o.facebook_uid, a.team_id, a.color, a.total
    FROM player_team pt1
    JOIN player o ON (pt1.player_id = o.id AND pt1.owner = 1)
    JOIN (...above query...) a ON a.team_id = pt1.team_id;

A: You could add a column last_playteam_id to the player table and update it each time a player changes his team, with the PK from the player_team table. Then you can do this:

    SELECT COUNT(*) AS total, pt.team_id, p.facebook_uid AS owner_uid, t.color
    FROM player_team pt
    JOIN player p ON (p.id = pt.player_id AND p.last_playteam_id = pt.id)
    JOIN team t ON (t.id = pt.team_id)
    GROUP BY pt.team_id
    ORDER BY total DESC
    LIMIT 50

This could be fastest because you don't have to update the old player_team rows to current = 0. You could instead add a column last_team_id and keep the player's current team there; you get the fastest result for the above query, but it could be less helpful with other queries.
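The join-based rewrite suggested above (derive each player's latest player_team row first, then count per team) can be sanity-checked on a toy dataset. Here is a sketch using Python's sqlite3 with made-up rows (SQLite's syntax for this particular query matches MySQL's); in the hypothetical data, player 1 joined team 10 and later team 20, so his current team is 20:

```python
# Sketch: verifying the subquery-to-join rewrite on tiny, made-up data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE player_team (id INTEGER PRIMARY KEY, player_id INT, team_id INT);
    INSERT INTO player_team VALUES (1, 1, 10), (2, 2, 10), (3, 3, 20), (4, 1, 20);
    CREATE INDEX idx_pt ON player_team (player_id, id);  -- the suggested index
""")

rows = con.execute("""
    SELECT team_id, COUNT(*) AS cnt
    FROM (SELECT player_id, MAX(id) AS mid
          FROM player_team
          GROUP BY player_id) q
    JOIN player_team pt ON pt.id = q.mid
    GROUP BY team_id
    ORDER BY cnt DESC
""").fetchall()

print(rows)  # [(20, 2), (10, 1)] -- team 20 has players 1 and 3
```

Each player contributes exactly one row (their latest player_team entry), so the counts reflect current membership only; the derived table replaces the dependent subquery that the EXPLAIN shows scanning 150000 rows.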
{ "language": "en", "url": "https://stackoverflow.com/questions/2788806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: jquery rebinding event

I am trying to rebind the keydown event on focusout. I'm not really sure what to pass to the keydown handler when rebinding. I tried passing this, but had no luck. Anyone? Thanks

    $('input.text').bind({
        click: function(e) {
        }, focusin: function(e) {
        }, focusout: function() {
            // rebind keydown
            // $(this).bind('keydown', this);
        }, keydown: function() {
            $(this).unbind('keydown');
        }
    });

A: One possible solution is to define the event handler function before calling the bind method on the element, and then reuse it to rebind on focusout. It goes something like this (this code should work):

    var keyDownFn = function() {
        console.log('this will happen only on the first keydown event!');
        $(this).unbind('keydown');
    };

    $('input.text').bind({
        click: function(e) {},
        focusin: function(e) {},
        focusout: function() {
            $(this).bind('keydown', keyDownFn);
        },
        keydown: keyDownFn
    });

enjoy.

A: You have to save a reference to the function. By saving a reference to the function, you can rebind the original function:

    var keydown_func = function() {
        $(this).unbind('keydown');
    };

    $('input.text').bind({
        click: function(e) {
        }, focusin: function(e) {
        }, focusout: function() {
            // rebind keydown
            $(this).bind('keydown', keydown_func);
        }, keydown: keydown_func
    });
{ "language": "en", "url": "https://stackoverflow.com/questions/8028727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Difference between Garbage Collection and a For Loop destroying objects

I have this code that creates 5500 objects of a class and then outputs the total allocated bytes and total memory, so that I can see the changes in the allocated bytes. I also have a destructor (finalizer) so I can see approximately when Garbage Collection occurs. From the output on the console, I can see that the GC does not occur after each iteration of the for loop, yet I heard somewhere that a for loop constructs and destructs objects after each iteration (obviously, if the loop creates objects in its body). Now I am wondering: if the for loop's constructing and destructing of objects after each iteration is not Garbage Collection (it is not done by the process of Garbage Collection), then what is it? How does the for loop destruct the object, considering the object is stored on the heap? How is the object removed from the heap after each iteration?

    class Destruct
    {
        int x;
        int g;

        public Destruct(int i, int h)
        {
            x = i;
            g = h;
        }

        ~Destruct()
        {
            Console.WriteLine($"Destructor called for {x}");
        }

        public void Generator(int o, int l)
        {
            new Destruct(o, l);
        }
    }

    class one
    {
        static void Main()
        {
            int count = 1;
            Destruct ob = new Destruct(0, 0);
            long tot;
            long mem;
            for (int i = 1, y = 1; i <= 5500; i++, y++)
            {
                ob.Generator(i, y);
                tot = GC.GetTotalAllocatedBytes();
                mem = GC.GetTotalMemory(false);
                count++;
                Console.WriteLine(count + " " + tot + " " + mem + " " + (tot - mem));
            }
            Console.WriteLine("Done");
        }
    }
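For contrast, here is a hedged sketch in Python rather than C#: CPython reclaims objects by reference counting, unlike .NET's tracing GC, but it illustrates the same underlying point the question circles around. The loop does not "destruct" anything; the heap object simply loses its last reference when the variable is rebound on the next iteration, and it is the runtime that then reclaims it:

```python
# Sketch (Python, not C#): objects created in a loop body are reclaimed by the
# runtime when they become unreachable, not by the loop construct itself.
import weakref

class Tracked:
    pass

collected = []
for i in range(3):
    obj = Tracked()                              # a new heap object each iteration
    weakref.finalize(obj, collected.append, i)   # record when it is reclaimed
    # rebinding `obj` next iteration drops the previous object's last reference

del obj  # drop the final object's reference too
print(collected)  # [0, 1, 2] -- each reclaimed once it became unreachable
```

In .NET the same unreachable objects linger until the tracing GC decides to run, which is why the question's finalizer output does not line up with loop iterations.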
{ "language": "en", "url": "https://stackoverflow.com/questions/72243236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: getchar() in c program - Eclipse Luna - Permission Denied

Integer- and float-based programs work well in my Eclipse Luna installation, without any permission problems raised by a Windows process, but a string-based C program gives errors or sometimes builds with an empty console. I have a recent copy of Eclipse IDE for C/C++ Developers, Version: Luna Service Release 1 (4.4.1), Build id: 20140925-1800, with MinGW version 0.6.2, on a Windows 7 Ultimate 32-bit operating system.

This would be my code. This screenshot would be after the first build (Ctrl+B). The first build shows no errors/exceptions. But when I tried to run it, I got an empty console. I believe at least "Enter String" should have been printed if the error starts at getchar, but surprisingly the console is empty. I tried rebuilding it, and the message below came up. I tried running it again (Ctrl+F11); it's again empty. I am not able to replicate the Permission Denied scenario, but on monitoring the processes I found this: as soon as I end these processes, it is capable of building, but again with an empty console and no messages in the Problems tab. On occasion, the "permission denied" appears, and as soon as I end the process, it disappears.
{ "language": "en", "url": "https://stackoverflow.com/questions/26613812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why does question mark show up in web browser?

I was (re)reading Joel's great article on Unicode and came across this paragraph, which I didn't quite understand:

    For example, you could encode the Unicode string for Hello (U+0048 U+0065 U+006C U+006C U+006F) in ASCII, or the old OEM Greek Encoding, or the Hebrew ANSI Encoding, or any of several hundred encodings that have been invented so far, with one catch: some of the letters might not show up! If there's no equivalent for the Unicode code point you're trying to represent in the encoding you're trying to represent it in, you usually get a little question mark: ? or, if you're really good, a box. Which did you get? -> �

Why is there a question mark, and what does he mean by "or, if you're really good, a box"? And what character is he trying to display?

A: There is a question mark because the encoding process recognizes that the encoding can't support the character, and substitutes a question mark instead. By "if you're really good," he means "if you have a newer browser and proper font support," in which case you'll get a fancier substitution character: a box. In Joel's case, he isn't trying to display a real character; he literally included the Unicode replacement character, U+FFFD REPLACEMENT CHARACTER.

A: It's a rather confusing paragraph, and I don't really know what the author is trying to say. Anyway, different browsers (and other programs) have different ways of handling problems with characters. A question mark "?" may appear in place of a character for which there is no glyph in the font(s) being used, so that it effectively says "I cannot display the character." Browsers may alternatively use a small rectangle, or some other indicator, for the same purpose.

But the "�" symbol is the REPLACEMENT CHARACTER, which is normally used to indicate a data error, e.g. when character data has been converted from some encoding to Unicode and it contained some character that cannot be represented in Unicode. Browsers often use "�" in display for a related purpose: to indicate that character data is malformed, containing bytes that do not constitute a character in the character encoding being applied. This often happens when data in some encoding is handled as if it were in some other encoding. So "�" does not really mean "unknown character", still less "undisplayable character". Rather, it means "not a character".

A: A question mark appears when a byte sequence in the raw data does not match the data's character set, so it cannot be decoded properly. That happens if the data is malformed, if the data's charset is explicitly stated incorrectly in the HTTP headers or the HTML itself, if the charset is guessed incorrectly by the browser when other information is missing, or if the user's browser settings override the data's charset with an incompatible charset. A box appears when a decoded character does not exist in the font that is being used to display the data.

A: Just what it says: some browsers show "a weird character" or a question mark for characters outside of the currently known character set. It's their "hey, I don't know what this is" character. Get an old version of Netscape, paste some text from Microsoft Word that uses smart quotes, and you'll get question marks. http://blog.salientdigital.com/2009/06/06/special-characters-showing-up-as-a-question-mark-inside-of-a-black-diamond/ has a decent explanation.
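The two failure modes the answers distinguish (substitution when *encoding* an unsupported character, U+FFFD when *decoding* malformed bytes) can be reproduced directly with Python's codec machinery:

```python
# Sketch of the two behaviors described above, using Python's codecs.
# 1) Encoding a character the target charset lacks: '?' substitution.
s = "Héllo"
print(s.encode("ascii", errors="replace"))    # b'H?llo'

# 2) Decoding malformed bytes: U+FFFD REPLACEMENT CHARACTER.
bad = b"H\xe9llo"                             # Latin-1 bytes for "Héllo"...
print(bad.decode("utf-8", errors="replace"))  # ...read as UTF-8: 'H\ufffdllo'
```

The second case is exactly the "data in some encoding handled as if it were in some other encoding" scenario: 0xE9 is a valid Latin-1 byte but an invalid UTF-8 sequence, so the decoder emits the replacement character.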
{ "language": "en", "url": "https://stackoverflow.com/questions/11424613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: access vba array by looping through recordset

I have a loop in which I want to create arrays, then use the arrays to insert into another table.

    Band       Country
    AIR        FR
    Bon Jovi   US
    Oasis      UK
    Blur       UK
    Green Day  US
    Metalica   US

I want to loop through this recordset and create arrays, for example: arrayFR = "AIR"; arrayUK = "Blur" & vbCrLf & "Oasis"; and arrayUS = "Bon Jovi" & vbCrLf & "Green Day" & vbCrLf & "Metalica". At the same time, based on this recordset, I have created a temp table with columns FR, UK & US. I hope to use the arrays created and insert into the temp table like the view below.

    FR    UK     US
    AIR   Blur   Bon Jovi
          Oasis  Green Day
                 Metalica

I don't know how to start, as I have searched a lot of array-related pages but they don't help. Please help me, gurus! Thanks in advance!

A: Since your end result (per country) is a single field with bands delimited by CRLF, you can keep it simple. Add the following Dims:

    Dim aArray(10, 2) As String
    Dim iC As Integer
    Dim i As Integer
    iC = 0

Then add this inside your loop (change field names as required):

    Debug.Print rs1!country & vbTab & rs1!band
    If aArray(iC, 1) <> rs1!country Then
        iC = iC + 1
        If iC > 10 Then MsgBox "There are more than 10 countries! Change the code...", vbOKOnly, "Too many countries"
        aArray(iC, 1) = rs1!country
        aArray(iC, 2) = rs1!band
    Else
        aArray(iC, 2) = aArray(iC, 2) & vbCrLf & rs1!band
    End If

To see the results:

    For i = 1 To iC
        MsgBox aArray(i, 1) & vbCrLf & aArray(i, 2)
    Next i

A: Why not create your temp table by using a simple CrossTab query? If that suits your needs, it will be much easier.
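Language aside, the accumulation in the first answer is a plain group-by (collect bands per country, then join with CRLF). A compact Python sketch of the same logic, using the question's sample data:

```python
# Sketch: the same country -> bands accumulation, outside VBA.
rows = [("FR", "AIR"), ("US", "Bon Jovi"), ("UK", "Oasis"),
        ("UK", "Blur"), ("US", "Green Day"), ("US", "Metalica")]

bands = {}
for country, band in rows:
    bands.setdefault(country, []).append(band)  # group by country

# join each group with CRLF, as the VBA code does with vbCrLf
fields = {country: "\r\n".join(names) for country, names in bands.items()}
print(bands["UK"])   # ['Oasis', 'Blur']
print(fields["FR"])  # AIR
```

Note one difference: the VBA answer only opens a new array slot when the country changes, so it assumes the recordset is sorted by country; the dictionary lookup here groups correctly regardless of row order.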
{ "language": "en", "url": "https://stackoverflow.com/questions/24405409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to create a dynamic form with values in model using form.py?

I would like to create a dynamic form using forms.py where the questions and answers come from values in the columns of models. I have three models: one called Sheet, which contains many questions from the Question model (many-to-many), and the Question model can contain one or more answers from the Answer model (many-to-many). In my views.py, my function retrieves all values from the Sheet model and sends them to my HTML file. In my HTML file I have code that displays all the questions with their answers.

    class Sheet(models.Model):
        form = models.ForeignKey(Form, on_delete=models.CASCADE)
        question = models.ManyToManyField(Question, blank=True)
        sheet_name = models.CharField(max_length=256)
        creation_date = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return self.sheet_name

    class Question(models.Model):
        is_parent = models.ManyToManyField("self", blank=True)
        answer = models.ManyToManyField(Answer, blank=True)
        in_french = models.TextField(max_length=512)
        in_english = models.TextField(max_length=512)

        def __str__(self):
            return self.in_french

    class Answer(models.Model):
        in_french = models.CharField(max_length=32)
        in_english = models.CharField(max_length=32)

        def __str__(self):
            return self.in_french

The template:

    {% for a_sheet in all_sheets %}
        <b>Survey: {{ a_sheet.sheet_name }}</b>
        {% for q in a_sheet.question.all %}
            <label>Question: {{ q.in_english }}</label><br/>
            <label>Answers:</label>
            {% if q.answer.all %}
                <select>
                    {% for a in q.answer.all %}
                        <option value={{ a.id }}> {{ a.in_french }} </option>
                    {% endfor %}
                </select>
            {% else %}
                <input type="text" name="answer">
            {% endif %}
        {% endfor %}
    {% endfor %}

My question is how to translate my HTML code to put it in my forms.py file. Putting the code in a function of my forms.py file would allow me to set up conditions and retrieve the results of my form to put them in another model. This problem cannot be solved by the default forms.ModelForm, which displays the model columns in the forms. Thank you very much for your help.
{ "language": "en", "url": "https://stackoverflow.com/questions/55609215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to avoid the automatic zero value in a Kendo int textbox from Grid Edit screen or inline edit?

I am using ASP.NET MVC Kendo tools. I have an int property on my C# model side; see below:

    public class MyModel
    {
        [Required]
        public int MyIntField { get; set; }
    }

On the view side I am adding the column using the Kendo grid selector:

    ...
    .Columns(c => c.Bound(my => my.MyIntField))
    ...

That field always shows zero when an edit popup screen is opened. How can I avoid that while still keeping the automatic required UI validations, and without messily changing the DOM directly using JavaScript? I'm looking for a Kendo-selector solution or a C# attribute-side solution. But maybe I am asking for too much. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/64519187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Unit Testing Input Event in Angular6

I have an input field in the UI:

    <input matInput placeholder="BLC" id='blc' formControlName="blc" (change)="onBLCChanged($event)" />

I want to unit test this, but this field sends $event, not a plain value, to the function. How can I mock this?

    it('some text', async(async() => {
        spyOn(component, 'onBLCChanged');
        // first round of change detection
        fixture.detectChanges();
        // get ahold of the input
        let input = debugElement.query(By.css('#blc'));
        let inputElement = input.nativeElement;
        // set input value
        inputElement.value = 'test';
        inputElement.dispatchEvent(new Event('change'));
        expect(component.onBLCChanged).toHaveBeenCalledWith({});
    }));

This is not working:

    Expected spy onBLCChanged to have been called with [ Object({ }) ] but actual calls were [ [object Event] ].

Please help; onBLCChanged is the method in the .ts file.

A: Try this:

    it('some text', async(async() => {
        spyOn(component, 'onBLCChanged');
        // first round of change detection
        fixture.detectChanges();
        // get ahold of the input
        let input = debugElement.query(By.css('#blc'));
        let inputElement = input.nativeElement;
        // set input value
        inputElement.value = 'test';
        // save the event object so that you can do the comparison
        const event = new Event('change');
        inputElement.dispatchEvent(event);
        expect(component.onBLCChanged).toHaveBeenCalledWith(event);
    }));

A: Try this:

    it('some text', async(async() => {
        const spyOnFunction = spyOn(component, 'onBLCChanged');
        // first round of change detection
        fixture.detectChanges();
        // get ahold of the input
        let input = debugElement.query(By.css('#blc'));
        let inputElement = input.nativeElement;
        // set input value
        inputElement.value = 'test';
        inputElement.dispatchEvent(new Event('change'));
        expect(spyOnFunction).toHaveBeenCalled();
    }));
{ "language": "en", "url": "https://stackoverflow.com/questions/59824622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to push an image id to docker repo

When I run the command docker ps I get a list of containers like this:

    6f8d5585918f  56058f3d1997  "test"             23 hours ago  Up 23 hours                          test1
    18edba56bfeb  2482781314c7  "java -jar test2"  2 days ago    Up 2 days    0.0.0.0:8206->8484/tcp  test2

The second column is the image. I want to push the second image as test11 to our private repo. I don't have the files to build the image. How would I do this?

A: First you must tag the image ID. Then you must log in to your private Docker registry. (The correct name is registry, not repository; a Docker registry holds repositories.) Then you push the image. Substitute privateregistry with the hostname of the registry:

    docker tag 2482781314c7 privateregistry/test11
    docker login privateregistry
    docker push privateregistry/test11
{ "language": "en", "url": "https://stackoverflow.com/questions/44829368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to reshape dataframe with multiple levels

I currently have a dataframe (df1) formatted as shown below:

    ID  F1_1  F2_1r1  F2_1r2  F2_1r3  F1_2  F2_2r1  F2_2r2  F2_2r3  F1_3  F2_3r1  F2_3r2  F2_3r3
    1   10    1       1       0       15    0       1       0       30    1       0       0
    2   25    1       0       0       30    0       1       1       25    1       0       1
    3   40    0       1       0       15    0       1       0       10    0       0       1
    4   25    1       1       0       10    0       1       1       30    1       0       0

I would like to reformat it so that it is arranged as shown here in df2:

    ID  F1_value  R1  R2  R3  F1_x
    1   10        1   1   0   1
    1   15        0   1   0   2
    1   30        1   0   0   3
    2   25        1   0   0   1
    2   30        0   1   1   2
    2   25        1   0   1   3
    3   40        0   1   0   1
    3   15        0   1   0   2
    3   10        0   0   1   3
    4   25        1   1   0   1
    4   10        0   1   1   2
    4   30        1   0   0   3

A: You can use pivot_longer(), but it is easier if you rename the variables first, as below:

    x <- data.frame(
      ID = 1:4,
      A1 = c(10,25,40,25), A1.1 = c(1,1,0,1), A1.2 = c(1,0,1,1), A1.3 = c(0,0,0,0),
      B1 = c(15,30,15,10), B1.1 = c(0,0,0,0), B1.2 = c(1,1,1,1), B1.3 = c(0,1,0,1),
      C1 = c(30,25,10,30), C1.1 = c(1,1,0,1), C1.2 = c(0,0,0,0), C1.3 = c(0,1,1,0)
    )

    x %>%
      rename("A1.0" = "A1", "B1.0" = "B1", "C1.0" = "C1") %>%
      pivot_longer(`A1.0`:`C1.3`,
                   names_pattern = c("([A-C])\\d.(\\d)"),
                   names_to = c("A_C", ".value"),
                   names_prefix = "R") %>%
      rename("A1_C1_value" = "0", "R1" = "1", "R2" = "2", "R3" = "3")

    # # A tibble: 12 × 6
    #       ID A_C   A1_C1_value    R1    R2    R3
    #    <int> <chr>       <dbl> <dbl> <dbl> <dbl>
    #  1     1 A              10     1     1     0
    #  2     1 B              15     0     1     0
    #  3     1 C              30     1     0     0
    #  4     2 A              25     1     0     0
    #  5     2 B              30     0     1     1
    #  6     2 C              25     1     0     1
    #  7     3 A              40     0     1     0
    #  8     3 B              15     0     1     0
    #  9     3 C              10     0     0     1
    # 10     4 A              25     1     1     0
    # 11     4 B              10     0     1     1
    # 12     4 C              30     1     0     0

A: You can do this pretty efficiently using data.table:

    library(data.table)
    df1 <- data.table(df1)
    df2 <- melt(df1,
                measure = patterns("^F1", "r1$", "r2$", "r3$"),
                value.name = c("F1_value", "R1", "R2", "R3"),
                variable.name = "F1_x")

Producing:

        ID F1_x F1_value R1 R2 R3
     1:  1    1       10  1  1  0
     2:  2    1       25  1  0  0
     3:  3    1       40  0  1  0
     4:  4    1       25  1  1  0
     5:  1    2       15  0  1  0
     6:  2    2       30  0  1  1
     7:  3    2       15  0  1  0
     8:  4    2       10  0  1  1
     9:  1    3       30  1  0  0
    10:  2    3       25  1  0  1
    11:  3    3       10  0  0  1
    12:  4    3       30  1  0  0
{ "language": "en", "url": "https://stackoverflow.com/questions/69572063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Publishing POMs via Maven and inserting build version info

I'm building Maven projects via TeamCity/Git and trying to insert the TeamCity build number into the pom.xml that gets published to my repository upon a successful build. Unfortunately, I can't determine how to publish a pom.xml with the substitutions inserted. My pom.xml contains info like:

    <version>${build.number}</version>

where build.number is provided by TeamCity. That all builds OK, and if (say) build.number = 0.1, then the deployment is a pom.xml to a directory named 0.1. All well and good. However, the pom.xml that is deployed is the pom.xml without the substitutions made. That is, Maven runs with a pom.xml with the appropriate substitutions, but deploys the initial version, and so I get <version>${build.number}</version> in the final pom.xml. How can I get the build version number into the pom.xml?

A: I wouldn't use this approach because it makes it impossible to build a project checked out from the SCM without providing the build.number property. I don't think that is a good thing. Maybe I'm missing something, though. Actually, I don't get what you are trying to achieve exactly (why don't you write the build number into the manifest, for example?). But, according to the Maven features on the TeamCity website:

    By default, it also keeps TeamCity build number in sync with the Maven version number (...).

Couldn't that be helpful? There is another thread about this here.

A: Try using the generateReleasePoms property of the maven-release-plugin; maybe that helps a little.
{ "language": "en", "url": "https://stackoverflow.com/questions/2183015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Comparing non null-terminated char arrays

I have a compressed unsigned char array of a known size. For compression considerations I don't store the null terminator at the end. I want to compare it to another array of the same format. What would be the best way to do so? I thought of duplicating the compressed arrays into new arrays, adding the null terminator, and then comparing them using strcmp(). Any better suggestions?

A: You can use the strncmp() function from string.h:

    strncmp(str1, str2, size_of_your_string);

Here you can manually give the size of the string (or the number of characters that you want to compare). strlen() may not work to get the length of your string because your strings are not terminated with the NUL character.

UPDATE: see the code for comparison of unsigned char arrays:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char str[] = "gangadhar", str1[] = "ganga";
        if (!strncmp((char *)str, (char *)str1, 5))
            printf("first five characters of str and str1 are same\n");
        else
            printf("Not same\n");
        return 0;
    }

A: Since you know the size of the array, you could use strncmp():

    int strncmp(const char *string1, const char *string2, size_t n);

with n being the number of characters to compare. Details: http://www.tutorialspoint.com/ansi_c/c_strncmp.htm

A: Two years later, but... I believe memcmp is the function you want to use, since strncmp may stop if it finds a zero byte inside the compressed string.
{ "language": "en", "url": "https://stackoverflow.com/questions/18437430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Azure Api Error 500

I have created an API service in Azure. Making a GET request to {my_url}/api/{name_of_the_table} should return an XML document with the data of the table. However, it returns an error message:

<Error> <Message>An error has occurred.</Message> <ExceptionMessage> The 'ObjectContent`1' type failed to serialize the response body for content type 'application/xml; charset=utf-8'. </ExceptionMessage> <ExceptionType>System.InvalidOperationException</ExceptionType> <StackTrace/> <InnerException> <Message>An error has occurred.</Message> <ExceptionMessage> The operation is not supported for your subscription offer type. </ExceptionMessage> <ExceptionType>System.Data.SqlClient.SqlException</ExceptionType> <StackTrace> at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at System.Data.Entity.Infrastructure.Interception.DbCommandDispatcher.<NonQuery>b__0(DbCommand t, DbCommandInterceptionContext`1 c) at System.Data.Entity.Infrastructure.Interception.InternalDispatcher`1.Dispatch[TTarget,TInterceptionContext,TResult](TTarget target, Func`3 operation,
TInterceptionContext interceptionContext, Action`3 executing, Action`3 executed) at System.Data.Entity.Infrastructure.Interception.DbCommandDispatcher.NonQuery(DbCommand command, DbCommandInterceptionContext interceptionContext) at System.Data.Entity.SqlServer.SqlProviderServices.<>c__DisplayClass1a.<CreateDatabaseFromScript>b__19(DbConnection conn) at System.Data.Entity.SqlServer.SqlProviderServices.<>c__DisplayClass33.<UsingConnection>b__32() at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.<>c__DisplayClass1.<Execute>b__0() at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute[TResult](Func`1 operation) at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute(Action operation) at System.Data.Entity.SqlServer.SqlProviderServices.UsingConnection(DbConnection sqlConnection, Action`1 act) at System.Data.Entity.SqlServer.SqlProviderServices.UsingMasterConnection(DbConnection sqlConnection, Action`1 act) at System.Data.Entity.SqlServer.SqlProviderServices.CreateDatabaseFromScript(Nullable`1 commandTimeout, DbConnection sqlConnection, String createDatabaseScript) at System.Data.Entity.SqlServer.SqlProviderServices.DbCreateDatabase(DbConnection connection, Nullable`1 commandTimeout, StoreItemCollection storeItemCollection) at System.Data.Entity.Core.Common.DbProviderServices.CreateDatabase(DbConnection connection, Nullable`1 commandTimeout, StoreItemCollection storeItemCollection) at System.Data.Entity.Core.Objects.ObjectContext.CreateDatabase() at System.Data.Entity.Migrations.Utilities.DatabaseCreator.Create(DbConnection connection) at System.Data.Entity.Migrations.DbMigrator.EnsureDatabaseExists(Action mustSucceedToKeepDatabase) at System.Data.Entity.Migrations.DbMigrator.Update(String targetMigration) at System.Data.Entity.MigrateDatabaseToLatestVersion`2.InitializeDatabase(TContext context) at System.Data.Entity.Internal.InternalContext.<>c__DisplayClassf`1.<CreateInitializationAction>b__e() at 
System.Data.Entity.Internal.InternalContext.PerformInitializationAction(Action action) at System.Data.Entity.Internal.InternalContext.PerformDatabaseInitialization() at System.Data.Entity.Internal.LazyInternalContext.<InitializeDatabase>b__4(InternalContext c) at System.Data.Entity.Internal.RetryAction`1.PerformAction(TInput input) at System.Data.Entity.Internal.LazyInternalContext.InitializeDatabaseAction(Action`1 action) at System.Data.Entity.Internal.LazyInternalContext.InitializeDatabase() at System.Data.Entity.Internal.InternalContext.GetEntitySetAndBaseTypeForType(Type entityType) at System.Data.Entity.Internal.Linq.InternalSet`1.Initialize() at System.Data.Entity.Internal.Linq.InternalSet`1.GetEnumerator() at System.Data.Entity.Infrastructure.DbQuery`1.System.Collections.Generic.IEnumerable<TResult>.GetEnumerator() at WriteArrayOfUsuarioToXml(XmlWriterDelegator , Object , XmlObjectSerializerWriteContext , CollectionDataContract ) at System.Runtime.Serialization.CollectionDataContract.WriteXmlValue(XmlWriterDelegator xmlWriter, Object obj, XmlObjectSerializerWriteContext context) at System.Runtime.Serialization.XmlObjectSerializerWriteContext.WriteDataContractValue(DataContract dataContract, XmlWriterDelegator xmlWriter, Object obj, RuntimeTypeHandle declaredTypeHandle) at System.Runtime.Serialization.XmlObjectSerializerWriteContext.SerializeWithoutXsiType(DataContract dataContract, XmlWriterDelegator xmlWriter, Object obj, RuntimeTypeHandle declaredTypeHandle) at System.Runtime.Serialization.DataContractSerializer.InternalWriteObjectContent(XmlWriterDelegator writer, Object graph, DataContractResolver dataContractResolver) at System.Runtime.Serialization.DataContractSerializer.InternalWriteObject(XmlWriterDelegator writer, Object graph, DataContractResolver dataContractResolver) at System.Runtime.Serialization.XmlObjectSerializer.WriteObjectHandleExceptions(XmlWriterDelegator writer, Object graph, DataContractResolver dataContractResolver) at 
System.Runtime.Serialization.DataContractSerializer.WriteObject(XmlWriter writer, Object graph) at System.Net.Http.Formatting.XmlMediaTypeFormatter.WriteToStream(Type type, Object value, Stream writeStream, HttpContent content) at System.Net.Http.Formatting.XmlMediaTypeFormatter.WriteToStreamAsync(Type type, Object value, Stream writeStream, HttpContent content, TransportContext transportContext, CancellationToken cancellationToken) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Web.Http.WebHost.HttpControllerHandler.<WriteBufferedResponseContentAsync>d__1b.MoveNext() </StackTrace> </InnerException> </Error> This is the url to the web api: http://epocapi.azurewebsites.net/api/Usuarios I'm using the DreamSpark subscription (students) How can I fix this? The full exception is: "ExceptionMessage":"The operation is not supported for your subscription offer type.", "ExceptionType":"System.Data.SqlClient.SqlException", "StackTrace":" at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String 
methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at System.Data.Entity.Infrastructure.Interception.DbCommandDispatcher.<NonQuery>b__0(DbCommand t, DbCommandInterceptionContext`1 c) at System.Data.Entity.Infrastructure.Interception.InternalDispatcher`1.Dispatch[TTarget,TInterceptionContext,TResult](TTarget target, Func`3 operation, TInterceptionContext interceptionContext, Action`3 executing, Action`3 executed) at System.Data.Entity.Infrastructure.Interception.DbCommandDispatcher.NonQuery(DbCommand command, DbCommandInterceptionContext interceptionContext) at System.Data.Entity.SqlServer.SqlProviderServices.<>c__DisplayClass1a.<CreateDatabaseFromScript>b__19(DbConnection conn) at System.Data.Entity.SqlServer.SqlProviderServices.<>c__DisplayClass33.<UsingConnection>b__32() at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.<>c__DisplayClass1.<Execute>b__0() at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute[TResult](Func`1 operation) at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute(Action operation) at System.Data.Entity.SqlServer.SqlProviderServices.UsingConnection(DbConnection sqlConnection, Action`1 act) at System.Data.Entity.SqlServer.SqlProviderServices.UsingMasterConnection(DbConnection sqlConnection, Action`1 act) at System.Data.Entity.SqlServer.SqlProviderServices.CreateDatabaseFromScript(Nullable`1 commandTimeout, DbConnection sqlConnection, String createDatabaseScript) at System.Data.Entity.SqlServer.SqlProviderServices.DbCreateDatabase(DbConnection connection, Nullable`1 commandTimeout, StoreItemCollection storeItemCollection) at System.Data.Entity.Core.Common.DbProviderServices.CreateDatabase(DbConnection connection, Nullable`1 commandTimeout, StoreItemCollection storeItemCollection) at System.Data.Entity.Core.Objects.ObjectContext.CreateDatabase() at System.Data.Entity.Migrations.Utilities.DatabaseCreator.Create(DbConnection connection) at 
System.Data.Entity.Migrations.DbMigrator.EnsureDatabaseExists(Action mustSucceedToKeepDatabase) at System.Data.Entity.Migrations.DbMigrator.Update(String targetMigration) at System.Data.Entity.MigrateDatabaseToLatestVersion`2.InitializeDatabase(TContext context) at System.Data.Entity.Internal.InternalContext.<>c__DisplayClassf`1.<CreateInitializationAction>b__e() at System.Data.Entity.Internal.InternalContext.PerformInitializationAction(Action action) at System.Data.Entity.Internal.InternalContext.PerformDatabaseInitialization() at System.Data.Entity.Internal.LazyInternalContext.<InitializeDatabase>b__4(InternalContext c) at System.Data.Entity.Internal.RetryAction`1.PerformAction(TInput input) at System.Data.Entity.Internal.LazyInternalContext.InitializeDatabaseAction(Action`1 action) at System.Data.Entity.Internal.LazyInternalContext.InitializeDatabase() at System.Data.Entity.Internal.InternalContext.GetEntitySetAndBaseTypeForType(Type entityType) at System.Data.Entity.Internal.Linq.InternalSet`1.Initialize() at System.Data.Entity.Internal.Linq.InternalSet`1.GetEnumerator() at System.Data.Entity.Infrastructure.DbQuery`1.System.Collections.IEnumerable.GetEnumerator() at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty) at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty) at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType) at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType) at System.Net.Http.Formatting.BaseJsonMediaTypeFormatter.WriteToStream(Type type, Object value, Stream 
writeStream, Encoding effectiveEncoding) at System.Net.Http.Formatting.JsonMediaTypeFormatter.WriteToStream(Type type, Object value, Stream writeStream, Encoding effectiveEncoding) at System.Net.Http.Formatting.BaseJsonMediaTypeFormatter.WriteToStream(Type type, Object value, Stream writeStream, HttpContent content) at System.Net.Http.Formatting.BaseJsonMediaTypeFormatter.WriteToStreamAsync(Type type, Object value, Stream writeStream, HttpContent content, TransportContext transportContext, CancellationToken cancellationToken) --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Web.Http.WebHost.HttpControllerHandler.<WriteBufferedResponseContentAsync>d__1b.MoveNext() A: The error is occurring as a result of your code trying to create a database. The error is indicating that the edition/performance level being created is not supported by your subscription offer type - a DreamSpark subscription is limited to creating free resources. See https://azure.microsoft.com/en-us/offers/ms-azr-0144p/. Creating a database without specifying an edition will create a Standard S0 database by default, which is not free. If you create a free website or mobile service and elect to create a free Azure SQL Database as part of the template that should work for your subscription's offer type. You could then access this database from any application although it will have limited storage and performance. A: You can put the default constructor to the DTO class. Ex: public class User { public User() { } } or you can try to put the following lines at the top of your Application_Start method in the Global.asax file: GlobalConfiguration.Configuration.Formatters.JsonFormatter.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;
{ "language": "en", "url": "https://stackoverflow.com/questions/36640981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Favicon not being added to build folder during webpack build I'm trying to get an image into my dist folder but it's not showing up. I've included the copy-webpack-plugin which is what I thought would make it work. I do get a warning message during the build that says: 'WARNING in unable to locate '/Users/developer/Desktop/RecruitsPro/client/public' at '/Users/developer/Desktop/RecruitsPro/client/public' webpack.config.js var webpack = require('webpack'); var path = require('path'); var HtmlWebpackPlugin = require('html-webpack-plugin'); var CopyWebpackPlugin = require('copy-webpack-plugin'); const helpers = require('./helpers'); const VENDOR_LIBS = [ 'axios', 'ordinal', 'react', 'react-activity', 'react-avatar', 'react-dom', 'react-easy-chart', 'react-is', 'react-modal', 'react-redux', 'react-router-dom', 'react-stripe-checkout', 'react-window-size', 'redux', 'redux-form', 'redux-thunk' ]; module.exports = { entry: { bundle: './client/app/index.js', vendor: VENDOR_LIBS }, output: { path: path.join(__dirname, 'dist'), filename: '[name].[chunkhash].js' }, module: { rules: [ { use: 'babel-loader', test: /\.js$/, exclude: /node_modules/ }, { use: ['style-loader', 'css-loader'], test: /\.css$/ } ] }, plugins: [ new HtmlWebpackPlugin({ template: './client/public/index.html' }), new webpack.DefinePlugin({ 'process.env.NODE_ENV': JSON.stringify(process.env.NODE_ENV) }), new CopyWebpackPlugin([{ from: helpers.root('client/public') }]) ] }; helpers.js const path = require('path'); // Helper functions function root(args) { args = Array.prototype.slice.call(arguments, 0); return path.join.apply(path, [__dirname].concat('../', ...args)); } exports.root = root; I was expecting the picture to show up. A: For anyone who's curious I had to add this code to my module.rules array. { test: /\.png$/, loader: 'file-loader' }
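To make the fix concrete, here is a sketch of how the file-loader rule from the answer slots into the question's module.rules array (the babel and style rules are copied from the question's webpack.config.js; this assumes file-loader is installed as a dev dependency):

```javascript
// Sketch: the question's module.rules array with the file-loader rule added.
// file-loader copies matched assets (here, .png files imported from JS or
// referenced from CSS) into the output directory and returns their URL.
const rules = [
  { use: 'babel-loader', test: /\.js$/, exclude: /node_modules/ },
  { use: ['style-loader', 'css-loader'], test: /\.css$/ },
  { test: /\.png$/, loader: 'file-loader' }, // new rule from the answer
];
```

Note that file-loader only emits files that are actually imported somewhere in the bundle; for copying a folder verbatim, the copy-webpack-plugin path built by helpers.root still has to resolve to an existing directory, which is what the "unable to locate" warning is complaining about.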
{ "language": "en", "url": "https://stackoverflow.com/questions/55753480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: NSURL of an image from UIImagePicker I am trying to upload an image from my phone to my Amazon S3 bucket: func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage!, editingInfo: [NSObject : AnyObject]!) { var uploadReq = AWSS3TransferManagerUploadRequest() uploadReq.bucket = "bucketname"; uploadReq.key = "testimage.png" uploadReq.body = image; //This line needs an NSURL } How can I get the NSURL of the image the user selected from a UIImagePickerController? A: While Zhengjie's answer might work, I found a simpler way to go about it: let fileURL = NSURL(fileURLWithPath: NSTemporaryDirectory().stringByAppendingPathComponent("imagetoupload")) let data = UIImagePNGRepresentation(image) data.writeToURL(fileURL!, atomically: true) uploadReq.body = fileURL A: First you should save the image to the Documents directory, then post its URL. You can do it like this: func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage!, editingInfo: [NSObject : AnyObject]!) { let data = UIImagePNGRepresentation(image) // get the data let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory,.UserDomainMask,true) as! [String] let path = paths[0] let fileName = "testimage.png" let filePath = path.stringByAppendingPathComponent(fileName) // get the file path to save to if !data.writeToFile(filePath, options: .DataWritingAtomic, error: &error) { // save the data to filePath logger.error("Error write file: \(error)") } uploadReq.body = NSURL(fileURLWithPath: filePath) // use a file URL to post to the server } Note: the write must go to filePath (the variable defined above), and the body must be a file URL, so NSURL(fileURLWithPath:) rather than NSURL(string:). var error:NSError? is declared before the write.
{ "language": "en", "url": "https://stackoverflow.com/questions/30202777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to change the tint color of the slider in an MPVolumeView? How can I change the tint color of the slider in an MPVolumeView? Instead of blue I want to display a different color. A: You need to customize the UISlider. You can do it like this: [slider setMinimumTrackImage:[[UIImage imageNamed:@"redSlider.png"] stretchableImageWithLeftCapWidth:10.0 topCapHeight:0.0] forState:UIControlStateNormal]; Result: Here are some backgrounds for sliders and an example image of how they look: Slider backgrounds: Example: More information here. A: After you alloc volumeView = [[MPVolumeView alloc] initWithFrame:CGRectMake(40, 145, 270, 23)]; Just search the MPVolumeView subviews and get the slider: for (id current in volumeView.subviews) { if ([current isKindOfClass:[UISlider class]]) { UISlider *volumeSlider = (UISlider *)current; volumeSlider.minimumTrackTintColor = [UIColor redColor]; volumeSlider.maximumTrackTintColor = [UIColor lightGrayColor]; } } Put the colors you like in UIColor and all done. If you need to customize further, treat volumeSlider as a standard UISlider.
{ "language": "en", "url": "https://stackoverflow.com/questions/11922943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I prove a classroom scheduling problem to be NP-complete correctly? I am given a problem about scheduling n classes in k rooms at a school. It is a decision problem, because we ask whether we can arrange these n classes in those k rooms so that a given time limit t is not exceeded (the total time of the classes under a given schedule should not exceed t). I am aware that I first need to show that every solution to the problem can be verified in polynomial time, but when it comes to reducing some known NP-complete problem to the classroom scheduling problem, I do not know which NP-complete problem to pick. I was thinking about reducing from the Traveling Salesman Problem, but I am not sure how to interpret my classroom scheduling problem as a graph. My first attempt was to consider the classes as vertices, the rooms as colours, and the time for a class as a weighted edge between two classes (I am completely unsure about the latter two interpretations). But I don't know if this follows a standard pattern for some scheduling problem, and I don't even know whether the Traveling Salesman Problem is a good NP-complete problem to reduce to the classroom scheduling problem. If it is not, I would like to know examples of more suitable NP-complete problems to reduce in my case. Thanks in advance! A: You can use map coloring (graph coloring) for it. You just need to define rules for edges and nodes: nodes are classes, rooms are colors, and you connect classes that can't run at the same time. This is actually the k-coloring problem, where you need to color a specific graph with k colors while keeping the number of classes per color within a bound. However, in this special case you just need at most t per color. You can achieve this by following the standard coloring procedure and switching to a new color as soon as one has t classes. This is still an NP-complete problem. 
The only exception is when you have 1 or 2 rooms; then it is solvable in polynomial time. When you have 1 room, you just need to check whether n <= t. When you have 2 rooms, you just need to check whether the graph can be colored with 2 colors. You can achieve this with DFS (but first check whether n <= 2t), coloring odd steps with the first color and even steps with the second. If it is possible to color all nodes with this tactic, you have a positive solution. When k >= 3, it is NP-complete.
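As an illustration of the 2-room case described above (my own sketch, not part of the original answer): deciding whether the classes fit in 2 rooms is a bipartiteness test on the conflict graph, which a simple DFS 2-coloring settles in linear time.

```javascript
// Sketch: 2-coloring of a conflict graph given as an adjacency list.
// Returns an array of room assignments (0/1 per class), or null if the
// graph is not bipartite (i.e. two conflicting classes would share a room).
function twoColor(adj) {
  const n = adj.length;
  const color = new Array(n).fill(-1);
  for (let start = 0; start < n; start++) {
    if (color[start] !== -1) continue;
    color[start] = 0;
    const stack = [start];
    while (stack.length) {
      const u = stack.pop();
      for (const v of adj[u]) {
        if (color[v] === -1) {
          color[v] = 1 - color[u]; // neighbor goes to the other room
          stack.push(v);
        } else if (color[v] === color[u]) {
          return null; // odd cycle: not 2-colorable
        }
      }
    }
  }
  return color;
}
```

A path of three classes (0-1, 1-2 conflicting) is 2-colorable, while a triangle of mutually conflicting classes is not.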
{ "language": "en", "url": "https://stackoverflow.com/questions/40949894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Issues with code for google-script (from drive to sheet) I am currently looking to get file names from Google Drive into my Google Sheet. I have around 30 files in my Drive, and I want their file names to appear in my Google Sheet. All files are located in one folder. I have tried to do this through Google Apps Script, but something seems to be wrong, as I get an error. Below is the code I found through other sites, but something goes wrong.... function list_all_files_inside_one_folder_without_subfolders(){ var sh = SpreadsheetApp.getActiveSheet(); var folder = DriveApp.getFolderById('1HPv9-umg0XQ8Fa9UV8lDr6O2Y4kAIAJe'); // I change the folder ID here var list = []; list.push(['Name','ID','Size']); var files = folder.getFiles(); while (files.hasNext()){ file = files.next(); var row = [] row.push(file.getName(),file.getId(),file.getSize()) list.push(row); } sh.getRange(1,1,list.length,list[0].length).setValues(list); } A: Try this: function list_all_files_inside_one_folder_without_subfolders(){ var sh = SpreadsheetApp.getActiveSheet(); var folder = DriveApp.getFolderById('1HPv9-umg0XQ8Fa9UV8lDr6O2Y4kAIAJe'); var list = []; list.push(['Name','ID','Size']); var files = folder.getFiles(); while (files.hasNext()){ file = files.next(); list.push([file.getName(),file.getId(),file.getSize()]); } sh.getRange(1,1,list.length,list[0].length).setValues(list); }
{ "language": "en", "url": "https://stackoverflow.com/questions/65096114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: "Object reference not set to an instance of an object" in Visual Basic I am writing a simple piece of code to insert 5 values into my local database. The connection gets established fine, but when I press the button, which does the job of inserting into the DB, I get the message "Object reference not set to an instance of an object". SQL version: Microsoft SQL Server 2014 - 12.0.2000.8 (X64). Here is my code: Imports System.Data.Sql Imports System.Data.SqlClient Public Class Form1 Dim ID As Integer = 0 Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load Dim SQLCon As New SqlConnection With {.ConnectionString = "Server=DIONISIS-PC\SQLEXPRESS; Database=Testing;Trusted_Connection=True;" } Dim SQLcmd As SqlCommand Try SQLCon.Open() Label2.Text = "Connected" Catch ex As Exception Label2.Text = "ERROR" MsgBox(ex.Message) Finally SQLCon.Close() End Try End Sub Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click Dim SQLCon As New SqlConnection With {.ConnectionString = "Server=DIONISIS-PC\SQLEXPRESS; Database=Testing;Trusted_Connection=True;" } Dim SQLcmd As SqlCommand ID += 1 Dim LastName As String = TextBox1.Text Dim firstName As String = TextBox2.Text Dim Address As String = TextBox3.Text Dim city As String = TextBox4.Text Try SQLCon.Open() SQLcmd.Connection = SQLCon 'EDIT: The problem seems to be here' SQLcmd.CommandText = "INSERT INTO students([student_ID], [LastName],[FirstName],[Address],[City]) VALUES([ID], [LastName],[firstName],[Address],[city])" SQLcmd.ExecuteNonQuery() Catch ex As Exception MsgBox(ex.Message) Finally SQLCon.Close() End Try End Sub End Class A: This should work for you: SQLcmd.CommandText = String.Format("INSERT INTO students([student_ID],[LastName],[FirstName],[Address],[City]) VALUES({0},'{1}','{2}','{3}','{4}')", ID, LastName, firstName, Address, city) BUT you will be prone to SQL injection. 
The correct way to do this is described here, and it is to use SQL parameters. A: You need to instantiate SQLcmd (for example, Dim SQLcmd As New SqlCommand) before trying to set properties on it.
{ "language": "en", "url": "https://stackoverflow.com/questions/35650914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Subsetting a column I have a data frame (data10) with 128 obs and 46 variables. I am interested in isolating a single column (variable) based on a condition (trial). I wonder why I am getting 'incorrect number of dimensions' in my R console. Please find my code below. PrePost_NJ <-data10$NormalizedJerk[data10$trial=="102",] I need some education, please A: Every time you ask a question, try to include some data people can play with, so it is easier to find the correct answer. In this case, you shouldn't use a "," in the brackets: data10$NormalizedJerk is already a one-dimensional vector, so the extra comma asks for a second dimension that doesn't exist. PrePost_NJ <-data10$NormalizedJerk[data10$trial=="102"] Take a look at the 'tidyverse' package; it will make your data manipulation easier.
{ "language": "en", "url": "https://stackoverflow.com/questions/55693517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: need the logic to maintain state of checkboxes in filter checked on refresh or on going to next page I can filter data and display the output, but I cannot maintain the state of the checked filters on refresh or when going to the next page. Here is my code for the filter: $(document).ready(function(){ filter_data(); function filter_data() { $('.filter_data').html('<div id="loading" style="" ></div>'); var action = 'fetch_data'; var minimum_price = $('#hidden_minimum_price').val(); var maximum_price = $('#hidden_maximum_price').val(); var maxprice = $('#hidden_maximum_price_mob').val(); var minprice= $('#hidden_minimum_price_mob').val(); var typeofproperty = get_filter('typeofproperty'); var area = get_filter('area'); var fortypepro = get_filter('fortypepro'); $.ajax({ url:"fetch_data.php", method:"POST", data:{action:action, minimum_price:minimum_price, maximum_price:maximum_price, typeofproperty:typeofproperty, area:area, fortypepro:fortypepro,maxprice:maxprice,minprice:minprice}, success:function(data){ $('.filter_data').html(data); } }); } function get_filter(class_name) { var filter = []; $('.'+class_name+':checked').each(function(){ filter.push($(this).val()); }); return filter; } $('.common_selector').click(function(){ filter_data(); }); And here is my filtering logic: if(isset($_POST["area"])) { $area_filter = implode("','", $_POST["area"]); $query .= " AND area IN('".$area_filter."') "; } $query .= " LIMIT $this_page_first_result , $results_per_page "; $statement=mysqli_query($conn,$query); $total_row=mysqli_num_rows($statement); $output = ''; if($total_row > 0) {//output My pagination logic is this: 
$results_per_page = 7; $number_of_results=mysqli_num_rows($resul); $number_of_pages = ceil($number_of_results/$results_per_page); if (!isset($_GET['page'])) { $page = 1; } else { $page = $_GET['page']; } $this_page_first_result = ($page-1)*$results_per_page; $current_page = isset($get['page'])?$get['page'] : 1; for($i=1;$i<=$number_of_pages;$i++){ $get['page'] = $i; $qs = http_build_query($get,'','&amp;'); if($i==$current_page) {//display pagination links. I want to know how to store the checked filter data in a variable, so that I can maintain the filtered data on refresh, and also the logic for maintaining pagination after filtering (for example: if a user applies a filter on page 5, he should be redirected to page 1 with the filtered results). I know it's asking a lot, but if this is not possible, please point me in a direction where I can learn about this kind of stuff, I mean AJAX, jQuery, PHP and MySQL filtering.
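One common way to get the persistence asked for here (a sketch of my own, not from the thread; the storage key and helper names are made up) is to serialize the checked values to localStorage and re-apply them on page load. The pure round-trip logic:

```javascript
// Sketch: round-trip the checked filter values through a string so they can
// be stored in localStorage and survive a refresh or a page change.
function serializeFilters(checkedValues) {
  return JSON.stringify(checkedValues);
}

function deserializeFilters(stored) {
  try {
    return JSON.parse(stored) || []; // null/empty storage -> no filters
  } catch (e) {
    return []; // corrupted storage -> no filters
  }
}

// Wiring it into the question's page (assumes the .common_selector checkboxes):
// on change: localStorage.setItem('filters', serializeFilters(
//                $('.common_selector:checked').map(function () { return $(this).val(); }).get()));
// on load:   var saved = deserializeFilters(localStorage.getItem('filters'));
//            $('.common_selector').each(function () {
//                $(this).prop('checked', saved.indexOf($(this).val()) !== -1);
//            });
//            filter_data();
```

For the pagination part, a common approach is to reset to page 1 server-side whenever the submitted filter values differ from the previously stored ones.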
{ "language": "en", "url": "https://stackoverflow.com/questions/70902043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: react-native package android.support.v4.media does not exist I am using react-native 0.57.1 react-native track player 0.2.5 I have got following error: C:\zzzprojects\app\node_modules\react-native-track-player\android\src\main\java\guichaguri\trackplayer\metadata\Metadata.java:9: error: package android.support.v4.media does not exist import android.support.v4.media.MediaMetadataCompat; ^ C:\zzzprojects\app\node_modules\react-native-track-player\android\src\main\java\guichaguri\trackplayer\metadata\Metadata.java:10: error: package android.support.v4.media does not exist import android.support.v4.media.RatingCompat; ^ C:\zzzprojects\app\node_modules\react-native-track-player\android\src\main\java\guichaguri\trackplayer\metadata\Metadata.java:11: error: package android.support.v4.media.session does not exist import android.support.v4.media.session.MediaButtonReceiver; ^ C:\zzzprojects\app\node_modules\react-native-track-player\android\src\main\java\guichaguri\trackplayer\metadata\Metadata.java:12: error: package android.support.v4.media.session does not exist import android.support.v4.media.session.MediaControllerCompat; ^ ........... ........... .......... C:\zzzprojects\app\node_modules\react-native-track-player\android\src\main\java\guichaguri\trackplayer\metadata\components\CustomVolume.java:14: error: cannot find symbol super(canControl ? VOLUME_CONTROL_ABSOLUTE : VOLUME_CONTROL_FIXED, maxVolume, (int)(volume * maxVolume)); ^ symbol: variable VOLUME_CONTROL_ABSOLUTE location: class CustomVolume Note: Some input files use unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked for details. Note: Some messages have been simplified; recompile with -Xdiags:verbose to get full output 100 errors FAILURE: Build failed with an exception. What went wrong: Execution failed for task ':react-native-track-player:compileDebugJavaWithJavac'. Compilation failed; see the compiler error output for details. Try: Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to get full insights. Get more help at https://help.gradle.org BUILD FAILED in 43s 36 actionable tasks: 25 executed, 11 up-to-date Could not install the app on the device, read the error above for details. Make sure you have an Android emulator running or a device connected and have set up your Android development environment: https://facebook.github.io/react-native/docs/getting-started.html I have tried a lot of methods; I even initialized a new project and just installed react-native, and it is giving the same error again. ... Thanks, please answer soon.
{ "language": "en", "url": "https://stackoverflow.com/questions/52847554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to change key names in laravel collection? I have one model instance in which i have many to many relationship data and now everything is in good condition the only thing which i am having trouble is their keys name . As I want to send better camelCase keys json in response but i am not able to change it my collection object while converting it in array array:28 [ "id" => 268897045 "name" => "remitterNew22June" "name_last" => "singh" "beneficiary_json" => null "flag" => 0 "consumed_on_paytm" => null "consumed_on_fino" => null "consumed_on_icici" => null "consumed_on_federal" => null "consumed_on_airtel" => "0.00" "paytm_key" => "" "ipaytm_otp_state" => "" "paytm_status" => 0 "prabhumoney_key" => "0" "beneficiaries" => array:2 [ 0 => array:28 [ "id" => 11969779 "account" => "ATROUNTNUMBEE" "name" => "firstName lastName" "address" => null "last_success_dt" => null "created_dt" => null "modifide_dt" => "2022-06-17 17:36:26" "rl_remitter_id" => null "rl_bene_id" => null "rl_verification" => 0 "sm_remitter_id" => null "sm_bene_ids" => null "sm_verification" => 0 "user_id" => "295906" "verification" => 0 "remitter_id" => 0 "india" => 1 "nepal" => 0 "mode" => 1 "paytm_key" => "" "bene_id" => "32639604bffe8c8c79361d7fce5e24ca" "pivot" => array:2 [ "remitter_mobile" => "9808908901" "bene_id" => "32639604bffe8c8c79361d7fce5e24ca" ] ] 1 => array:28 [ "id" => 11969780 "account" => "ATROUNTNUMBEE" "name" => "firstName lastName" "address" => null "last_success_dt" => null "created_dt" => null "modifide_dt" => "2022-06-22 12:47:02" "rl_remitter_id" => null "rl_bene_id" => null "rl_verification" => 0 "sm_remitter_id" => null "sm_bene_ids" => null "sm_verification" => 0 "user_id" => "295906" "verification" => 0 "remitter_id" => 0 "india" => 1 "nepal" => 0 "mode" => 1 "paytm_key" => "" "bene_id" => "179dce7acecd983e64f2a0bd0155f444" "pivot" => array:2 [ "remitter_mobile" => "9808908901" "bene_id" => "179dce7acecd983e64f2a0bd0155f444" ] ] ] ] but this was in collection and 
converting it to an array leaves all the keys in snake_case. My prime focus is to change those keys. Let's pick name_last in the outer array: I want to rename it to lastName. I know I could do it with a SQL alias or the query builder, but I don't want to ruin my collection. So the point is: is there any way to handle this issue? I have found one method, but I am not sure whether this is the right way: $remitterDetails->beneficiaries = $remitterDetails->beneficiaries->map(function ($beneficiary) { return [ 'account' => $beneficiary->account, 'name' => $beneficiary->name, 'gender' => $beneficiary->gender, 'beneficiaryId' => $beneficiary->bene_id, ]; }); So in short, I have one model instance with one many-to-many relationship's data bound to it, and I just want to return this data in a modified form in which I can rename the keys of both the outer array and the inner beneficiaries array. Thank you!!
{ "language": "en", "url": "https://stackoverflow.com/questions/72712715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Basic math equation to Java code So I have a math equation that I need to use in Java, but for some reason my code is giving me small errors :( The math equation is described on this web page in the section "extra credit". My current code outputs 4000 and the answer is 4005. What am I doing wrong? My test class looks like this: public class MainActivity { public static void main(String[] args) throws Exception{ double baseMaterial =556; int me =5; int ml = 10; int extraMaterial = 3444; System.out.println(""+calculateMiniralTotal(baseMaterial,me,ml,extraMaterial)); } public static double calculateMiniralTotal(double perfekt,int me,int ml,int extraMaterial) { double s = (perfekt + (perfekt * (10 / (ml + 1)) / 100)); s = Math.round(s); double r = s + (perfekt * (0.25 - (0.05 * me))); r = Math.round(r); double q = extraMaterial + (extraMaterial * (0.25 - (0.05 * me))); q = Math.round(q); //double r=q; r = r + q; return Math.round(r); } } A: You are performing integer division with (10 / (ml + 1)) / 100, which in Java must result in another int. Your ml is 10, and in Java, 10 / 11 is 0, not 0.909..., so nothing is added to s. Use a double literal or cast to double to force floating-point computation. double s = (perfekt + (perfekt * (10.0 / (ml + 1)) / 100)); or double s = (perfekt + (perfekt * ( (double) 10 / (ml + 1)) / 100)); Making either change makes the output: 4005.0 A: When you divide an int by another int you get an int back. public class Main { public static void main(String[] args) throws Exception { double baseMaterial = 556; int me = 5; int ml = 10; int extraMaterial = 3444; System.out.println("" + calculateMiniralTotal(baseMaterial, me, ml, extraMaterial)); } public static double calculateMiniralTotal(double perfekt, int me, int ml, int extraMaterial) { double s = (perfekt + (perfekt * (10.0 / (ml + 1)) / 100.0)); // <-- changed from 10 to 10.0 and 100 to 100.0. 
This way they are doubles too s = Math.round(s); double r = s + (perfekt * (0.25 - (0.05 * me))); r = Math.round(r); double q = extraMaterial + (extraMaterial * (0.25 - (0.05 * me))); q = Math.round(q); // double r=q; r = r + q; return Math.round(r); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/24639550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How Can I Play Audio On Document Ready MVC5 Application This code is not working; it generates an error on document ready. When I play the audio on a button click it works, but on the document ready event it gives an error. A: You probably can't do it. Web browsers explicitly block this behavior because users don't expect it and find it annoying. See Google's Chrome blog post, Mozilla's Firefox documentation, and Apple's WebKit/Safari blog post about these rules. If you will be deploying your app in a corporate environment where you have control over the web browsers, you can probably configure the browsers on the computers to allow autoplay for your site. See the links above for details.
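If you still want to attempt playback on page load and degrade gracefully when it is blocked, modern browsers return a promise from play() whose rejection signals the block. A general browser-side sketch (my own, not specific to MVC; the element id is hypothetical):

```javascript
// Sketch: try to play, and run a fallback when the browser blocks autoplay.
// audio is any object with a play() method (an <audio> element in the page);
// onBlocked is called if play() returns a rejected promise.
function tryAutoplay(audio, onBlocked) {
  var attempt = audio.play();
  if (attempt && typeof attempt.catch === 'function') {
    attempt.catch(onBlocked);
  }
}

// In the page, fall back to playing on the first user click:
// tryAutoplay(document.getElementById('myAudio'), function () {
//   document.addEventListener('click', function () {
//     document.getElementById('myAudio').play();
//   }, { once: true });
// });
```

The typeof check also keeps the code safe in older browsers where play() returns undefined instead of a promise.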
{ "language": "en", "url": "https://stackoverflow.com/questions/72596418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Virtual file not found using servicestack 4.0.5 after adding Telerik OpenAccess datacontext I'm testing out the new 4.0.5 Service stack (previously I was using version 3), and starting afresh I just can't seem to get my service to start when I add Telerik OpenAccess. I'm using Telerik's OpenAccess to talk to a MSSQL database again, which all works fine using version 3.x - As soon as I add in the Telerik Domain model I get a "Virtual File Not Found" Virtual file not found Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.FileNotFoundException: Virtual file not found Source Error: Line 23: ' Fires when the application is started Line 24: Dim apphost = New VPNTestAppHost() Line 25: apphost.Init() Line 26: End Sub Here is the full IIS error screen Source File: c:\users\tw\documents\visual studio 2013\Projects\VPN_ApiTest\VPN_ApiTest\Global.asax.vb Line: 25 Stack Trace: [FileNotFoundException: Virtual file not found] ServiceStack.VirtualPath.ResourceVirtualDirectory.CreateVirtualFile(String resourceName) +303 System.Linq.WhereSelectListIterator`2.MoveNext() +108 System.Linq.Buffer`1..ctor(IEnumerable`1 source) +216 System.Linq.<GetEnumerator>d__0.MoveNext() +106 System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection) +392 System.Collections.Generic.List`1.AddRange(IEnumerable`1 collection) +10 ServiceStack.VirtualPath.ResourceVirtualDirectory.InitializeDirectoryStructure(IEnumerable`1 manifestResourceNames) +593 ServiceStack.VirtualPath.ResourceVirtualDirectory..ctor(IVirtualPathProvider owningProvider, IVirtualDirectory parentDir, Assembly backingAsm, String directoryName, IEnumerable`1 manifestResourceNames) +227 ServiceStack.VirtualPath.ResourceVirtualDirectory..ctor(IVirtualPathProvider owningProvider, IVirtualDirectory parentDir, Assembly backingAsm) +131 
ServiceStack.VirtualPath.ResourceVirtualPathProvider.Initialize() +141 ServiceStack.VirtualPath.ResourceVirtualPathProvider..ctor(IAppHost appHost, Assembly backingAssembly) +138 ServiceStack.ServiceStackHost.<Init>b__4(Assembly x) +58 ServiceStack.EnumerableExtensions.Map(IEnumerable`1 items, Func`2 converter) +337 ServiceStack.ServiceStackHost.Init() +778 VPN_ApiTest.Global_asax.Application_Start(Object sender, EventArgs e) in c:\users\tw\documents\visual studio 2013\Projects\VPN_ApiTest\VPN_ApiTest\Global.asax.vb:25 [HttpException (0x80004005): Virtual file not found] System.Web.HttpApplicationFactory.EnsureAppStartCalledForIntegratedMode(HttpContext context, HttpApplication app) +9935033 System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +118 System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +172 System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +336 System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +296 [HttpException (0x80004005): Virtual file not found] System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +9913572 System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +101 System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +254 WEB.CONFIG <?xml version="1.0" encoding="utf-8"?> <!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=169433 --> <configuration> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.5" /> <httpRuntime requestPathInvalidCharacters="" targetFramework="4.5" /> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add path="*" name="ServiceStack.Factory" 
type="ServiceStack.HttpHandlerFactory, ServiceStack" verb="*" preCondition="integratedMode" resourceType="Unspecified" allowPathInfo="true" /> </handlers> <httpProtocol> <customHeaders> <add name="Access-Control-Allow-Origin" value="*" /> <add name="Access-Control-Allow-Headers" value="Content-Type, Accept" /> </customHeaders> </httpProtocol> </system.webServer> <connectionStrings> <add name="XYZ" connectionString="data source=abc.xyz.com;initial catalog=xyz;persist security info=True;user id=sa;password=abc" providerName="System.Data.SqlClient" /> </connectionStrings> </configuration> Global.Asax Imports System.Web.SessionState Imports ServiceStack Imports ServiceStack.Auth Public Class Global_asax Inherits System.Web.HttpApplication Public Class VPNTestAppHost Inherits AppHostBase Public Sub New() MyBase.New("VPN Test Web Services", GetType(LanguageLibrary).Assembly) End Sub Public Overrides Sub Configure(container As Funq.Container) SetConfig(New HostConfig With {.DebugMode = True}) Plugins.Add(New AuthFeature(Function() New AuthUserSession(), New Auth.IAuthProvider() {New BasicAuthProvider()})) End Sub End Class Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs) ' Fires when the application is started Dim apphost = New VPNTestAppHost() apphost.Init() End Sub Can anybody point me in the right direction? Many thanks Terry
{ "language": "en", "url": "https://stackoverflow.com/questions/20996763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can Solr or ElasticSearch be configured to use HDFS as their persistence layer in a way that also supports MapReduce? I have a large index over which I need to perform near-real-time updates and full-text search, but I also want to be able to run map-reduce jobs over that data. Is it possible to do this without having to maintain two separate copies of the data? (e.g. one copy in Solr, another in HDFS). It looks like Solr can be configured to use HDFS for storage, but it doesn't look like this plays well with map-reduce, since it's just storing the index in HDFS in a way that would be difficult to read from Hadoop map-reduce. For ElasticSearch, there is es-hadoop, but this is geared towards reading and writing to ElasticSearch from within Hadoop, but doesn't seem to solve the problem of getting data into HDFS in near-real-time or avoiding having two copies of the data. Has anyone faced a similar problem or possibly found other tools that might help solve the problem? Or is it standard practice to have a separate copy of your data for map-reduce jobs? Thanks! A: If you are talking about keeping the option to store in HDFS (to run MapReduce) in future and then performing indexing with Solr, then I think you can follow the steps below. For real-time streaming (e.g. Twitter), you need to store the data as it arrives. One option is to send it to Kafka and utilize Storm. From there you can store it in HDFS and in Solr in parallel; Storm has the concept of bolts, which will do exactly that. Once the data is in HDFS, you can use MapReduce. Once it is in Solr, you can perform search. If you want both copies to stay in sync, you can try some event processing which listens for data insertion into HDFS (or its stack) and performs the indexing in Solr. Please go through the Kafka and Storm documentation to get the basic idea. Alternatives can be Flume or Spark; not sure about those.
{ "language": "en", "url": "https://stackoverflow.com/questions/30721396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to link/nest 2 dropdowns in a form in Adalo I have 2 collections: collection 1 is called “Category” with almost 10 records, and collection 2 is called “Subcategory” with almost 100 records. These 2 collections are currently linked. I have a form that allows a user to create a profile; the form has 2 dropdowns, one for Category and one for Subcategory. I want that when the user picks a category from the first dropdown, only the relevant subcategories of that item show in the second dropdown. At the moment both dropdowns show the full list and there’s no way I can filter either of them. The select filter for the subcategory dropdown doesn’t give me an option to filter it based on the content of the category dropdown. I tried other components, but they just take you down a chain of pages with no end.
{ "language": "en", "url": "https://stackoverflow.com/questions/75482318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Add TabBar on SearchDelegate Flutter I am trying to add a tab bar on Search Delegate. I want to show the results of products and user's profiles on the same search screen. I was trying to get a controller from the parent class and implement in delegate class but nothing happend(I wasn't getting any error) Currently, my problem is I want to rebuild the build suggestion widget but I don't have any idea how to do that. Because the tab button widget has a different build context I can't rebuild the Suggestion widget. hope you understand please ignore my grammar mistakes class CustomSearchDelegate extends SearchDelegate { final TabController tabController; CustomSearchDelegate(this.tabController); var currentIndex = 0; List<Widget> tabs = [Text("Tab 1"), Text("Tab 2")]; @override List<Widget> buildActions(BuildContext context) { return [ Icon(Icons.clear), ]; // TODO: implement buildActions throw UnimplementedError(); } @override Widget buildLeading(BuildContext context) { return Container(); // TODO: implement buildLeading throw UnimplementedError(); } @override Widget buildResults(BuildContext context) { return Container(); // TODO: implement buildResults throw UnimplementedError(); } @override PreferredSizeWidget buildBottom(BuildContext context) { return PreferredSize( preferredSize: const Size.fromHeight(50.0), child: StatefulBuilder( builder: (context, setState) => DefaultTabController( length: 2, child: TabBar( onTap: (index) { setState(() { currentIndex = index; }); }, indicatorColor: Colors.red, tabs: [ Padding( padding: const EdgeInsets.symmetric(vertical: 15), child: Text( "Products", style: Theme.of(context).textTheme.headline6, ), ), Padding( padding: const EdgeInsets.symmetric(vertical: 15), child: Text( "Users", style: Theme.of(context).textTheme.headline6, ), ), ], ), ), )); } @override Widget buildSuggestions(BuildContext context) { final categoriesList = Provider.of<PopularServiceList>(context).dummy.where((element) { return 
element.title.toLowerCase().contains(query.toLowerCase()); }); return StatefulBuilder( builder: (context, setState) { return tabs[currentIndex]; }, ); throw UnimplementedError(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/68112389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Combining echo and cat on Unix Really simple question: how do I combine echo and cat in the shell? I'm trying to write the contents of a file into another file with a prepended string. If /tmp/file looks like this: this is a test I want to run this: echo "PREPENDED STRING" cat /tmp/file | sed 's/test/test2/g' > /tmp/result so that /tmp/result looks like this: PREPENDED STRINGthis is a test2 Thanks. A: This should work: echo "PREPENDED STRING" | cat - /tmp/file | sed 's/test/test2/g' > /tmp/result A: Or just use only sed sed -e 's/test/test2/g' -e 's/^/PREPEND STRING/' /tmp/file > /tmp/result A: Or also: { echo "PREPENDED STRING" ; cat /tmp/file | sed 's/test/test2/g'; } > /tmp/result A: Try: (printf "%s" "PREPENDED STRING"; sed 's/test/test2/g' /tmp/file) >/tmp/result The parentheses run the command(s) inside a subshell, so that the output looks like a single stream for the >/tmp/result redirect. A: If this is ever for sending an e-mail, remember to use CRLF line-endings, like so: echo -e 'To: cookimonster@kibo.org\r' | cat - body-of-message \ | sed 's/test/test2/g' | sendmail -t Notice the -e-flag and the \r inside the string. Setting To: this way in a loop gives you the world's simplest bulk-mailer. A: Another option: assuming the prepended string should only appear once and not for every line: gawk 'BEGIN {printf("%s","PREPEND STRING")} {gsub(/test/, "&2")} 1' in > out
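For comparison outside the shell, the same transform — substitute in the body and prepend a string — can be sketched in Python. This is only an illustration of what the pipeline does (like the shell versions above, the prepended string ends with a newline here):

```python
def prepend_and_substitute(text, prefix, old, new):
    # Mirror: echo "$prefix" | cat - file | sed "s/$old/$new/g"
    # The substitution runs over every line, including the prepended one.
    lines = (prefix + "\n" + text).splitlines()
    return "\n".join(line.replace(old, new) for line in lines)

result = prepend_and_substitute("this is a test", "PREPENDED STRING", "test", "test2")
# Two lines: the prefix, then the substituted file body.
```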
{ "language": "en", "url": "https://stackoverflow.com/questions/3005457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: How do I use Json.NET to parse json in PowerShell? I want to parse JSON in PowerShell but I can't use the new v3 functions that are available in PowerShell. My first thought was to load the JSON.Net assembly and use that to parse the JSON string but it doesn't work as I expect it to. I have this JSON: $json = "{""Name"": ""Apple"", ""Price"": 3.99, ""Sizes"": [ ""Small"", ""Medium"", ""Large""]}" I load the JSON.NET assembly with this code: [Reflection.Assembly]::LoadFile("$currentPath\Newtonsoft.Json.dll") and try to parse it with $result = [Newtonsoft.Json.JsonConvert]::DeserializeObject($json) Now I expect that $result["Name"] is Apple but I get nothing there. Any ideas? The code `$result.ContainsKey("Name")` returns `True` but `$result.GetValue("Name")` returns `null`. A: If you get here and are using PowerShell 5.0, it's available in the PowerShell Gallery Install-Module Newtonsoft.Json Import-Module Newtonsoft.Json $json = '{"test":1}' [Newtonsoft.Json.Linq.JObject]::Parse($json) A: maybe this is what you're after : http://poshcode.org/2930 function Convert-JsonToXml { PARAM([Parameter(ValueFromPipeline=$true)][string[]]$json) BEGIN { $mStream = New-Object System.IO.MemoryStream } PROCESS { $json | Write-Stream -Stream $mStream } END { $mStream.Position = 0 try { $jsonReader = [System.Runtime.Serialization.Json.JsonReaderWriterFactory]::CreateJsonReader($mStream,[System.Xml.XmlDictionaryReaderQuotas]::Max) $xml = New-Object Xml.XmlDocument $xml.Load($jsonReader) $xml } finally { $jsonReader.Close() $mStream.Dispose() } } } function Write-Stream { PARAM( [Parameter(Position=0)]$stream, [Parameter(ValueFromPipeline=$true)]$string ) PROCESS { $bytes = $utf8.GetBytes($string) $stream.Write( $bytes, 0, $bytes.Length ) } } $json = '{ "Name": "Apple", "Expiry": 1230422400000, "Price": 3.99, "Sizes": [ "Small", "Medium", "Large" ] }' Add-Type -AssemblyName System.ServiceModel.Web, System.Runtime.Serialization $utf8 = [System.Text.Encoding]::UTF8 (convert-jsonToXml
$json).innerXML Output : <root type="object"><Name type="string">Apple</Name><Expiry type="number">1230422 400000</Expiry><Price type="number">3.99</Price><Sizes type="array"><item type="s tring">Small</item><item type="string">Medium</item><item type="string">Large</it em></Sizes></root> If you want the name node : $j=(convert-jsonToXml $json) $j.SelectNodes("/root/Name") or $j |Select-Xml -XPath "/root/Name" |select -ExpandProperty node A: Ok, so here is how I did it so it works down to at least PowerShell v2 on Windows 2008. First, load the Json.NET assembly for the version you would like to use, I took the .NET 3.5 version: [Reflection.Assembly]::LoadFile("Newtonsoft.Json.dll") I had the JSON in a file since it was used in a deployment configuration I wrote, so I needed to read the file and then parse the json $json = (Get-Content $FileName | Out-String) # read file $config = [Newtonsoft.Json.Linq.JObject]::Parse($json) # parse string Now to get values from the config you need to to use the Item method which seems defined by PowerShell on hashtables/dictionaries. So to get an item that is a simple string you would write: Write-Host $config.Item("SomeStringKeyInJson").ToString() If you had an array of things you would need to do something like $config.Item("SomeKeyToAnArray") | ForEach-Object { Write-Host $_.Item("SomeKeyInArrayItem").ToString() } To access nested items you write $config.Item("SomeItem").Item("NestedItem") That's how I solved parsing JSON with Json.NET in PowerShell. A: None of the answers here so far made it straightforward to deserialize, edit, and serialize without Token PropertyName in state ArrayStart would result in an invalid JSON object errors. This is what ended up working for me. 
Assuming $json contains a valid json string this deserializes to a hashtable: $config = [Newtonsoft.Json.JsonConvert]::DeserializeAnonymousType($json, @{}) Or, if you need to maintain key order: $config = [Newtonsoft.Json.JsonConvert]::DeserializeAnonymousType($json, [ordered]@{}) Then you can work with the data as a normal hashtable, and serialize it again. $sb = [System.Text.StringBuilder]::new() $sw = [System.IO.StringWriter]::new($sb) $jw = [Newtonsoft.Json.JsonTextWriter]::new($sw) $jw.Formatting = [Newtonsoft.Json.Formatting]::Indented $s = [Newtonsoft.Json.JsonSerializer]::new() $s.Serialize($jw, $config) $json = $sb.ToString()
{ "language": "en", "url": "https://stackoverflow.com/questions/13968004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Change style of Google Play Services AccountPicker dialog I am showing the AccountPicker dialog from Google Play Services with this code: String[] accountTypes = new String[]{"com.google"}; Intent intent = AccountPicker.newChooseAccountIntent(null, null, accountTypes, false, null, null, null, null); startActivityForResult(intent, REQUEST_CODE_PICK_ACCOUNT); It appears as a dark themed dialog even though I am using AppCompat v21 with Theme.AppCompat.Light. Is it possible to style the dialog? Preferably as a Material dialog on Lollipop but at least make it a light dialog to match my app. A: I think, no need to "hack". It can be achieved easier: ... String[] accountTypes = new String[]{GoogleAuthUtil.GOOGLE_ACCOUNT_TYPE}; Intent intent = AccountPicker.newChooseAccountIntent(null, null, accountTypes, false, description, null, null, null); // set the style if ( isItDarkTheme ) { intent.putExtra("overrideTheme", 0); } else { intent.putExtra("overrideTheme", 1); } intent.putExtra("overrideCustomTheme", 0); try { startActivityForResult(intent, YOUR_REQUEST_CODE_PICK_ACCOUNT); } catch (ActivityNotFoundException e) { ... } ... A: I had the same problem, but I finally found the solution. Take a look to AccountPicker.class, where are methods: newChooseAccountIntent() and zza(); You have to change AccountPicker.newChooseAccountIntent(null, null, accountTypes, false, null, null, null, null); to AccountPicker.zza(null, null, accountTypes, false, null, null, null, null, false, 1, 0); Last two arguments are for "overrideTheme" and "overrideCustomTheme". So set the first one to 1 and it will override the theme to light. :-) Hope it helps. A: My solution is Intent intent = AccountPicker.a(null, null,accountTypes, true, null, null, null, null, false, 1, 0);
{ "language": "en", "url": "https://stackoverflow.com/questions/28237798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: gnuplot unable to run in XCode 4.3.1 I'm trying to get gnuplot up and running for a C++ environment in XCode. I'm using the following tutorial to achieve my result: http://www.calozgroup.org/Shulabh/MediaWikiS/index.php5?title=Gnuplot_in_C%2B%2B_using_XCode * *I have created the libgnuplot.a and imported this along with the gnuplot_i.h into my project. *Copied /opt/local/bin/gnuplot into /usr/local/bin *I have created the ~/MacOSX/environment.plist file with the following content: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DT$ <plist version="1.0"> <dict> <key>GNUTERM</key> <string>aqua</string> <key>PATH</key> <string>/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin</string> </dict> </plist> The error occurs when running the following sample code: char myStr[] = "text"; char myEqu[] = "sin(x)"; gnuplot_ctrl *h; h = gnuplot_init(); gnuplot_plot_equation(h, myEqu, myStr); gnuplot_close(h); Error: cannot find gnuplot in your PATH(lldb) Furthermore I was getting the following compiler warnings: Conversion from string literal to 'char *' is deprecated gnuplot_setstyle(handle, "lines"); I tried to modify these to: char line_string[] = "lines"; gnuplot_setstyle(handle, line_string); A: I'm no expert on these things, but try adding: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DT$ <plist version="1.0"> <dict> <key>GNUTERM</key> <string>aqua</string> <key>PATH</key> <string>/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/local/bin</string> </dict> </plist> (added /opt/local/bin to PATH) A: I reinstalled gnuplot and it suddenly worked. A: I had the same problem. gnuplot worked from the terminal, but not from c++. The c++ code called getenv("PATH") and searched for "gnuplot", throwing an exception when it could not be found. After copying /opt/local/bin/gnuplot into /usr/local/bin, it didn't work as in your case.
But then I decided to also copy it into /usr/bin and this seems to work. labjunky
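What gnuplot_i does when it throws here is essentially a manual walk over the PATH entries, which is why the fixes above all amount to making sure some directory on the app's PATH actually contains the gnuplot binary. A rough sketch of that lookup, in Python for illustration only:

```python
import os

def find_in_path(program, path_string):
    """Walk each PATH entry and return the path of the first
    executable with the given name, else None."""
    for directory in path_string.split(os.pathsep):
        candidate = os.path.join(directory, program)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None  # the library's "cannot find gnuplot in your PATH" case
```

If /opt/local/bin is missing from the PATH a GUI app inherits, a loop like this never sees the MacPorts binary — hence either editing environment.plist or copying the binary into a directory that is already listed.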
{ "language": "en", "url": "https://stackoverflow.com/questions/9863882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Check time changing automatically There is a streaming channel that broadcasts 24 hours a day, and I want to display it on my website only during certain hours of the day. I use code like the one below: $now = date('G',time()); $start = 13; $end = 14; if($now >= $start && $now <= $end){ echo "streaming video"; } else{ echo "image"; } It works when I refresh/load the page, but I want the program to check the time and react without the page having to be reloaded. So, when the time reaches 14:00 the video should no longer be displayed and the image should be shown instead, and when the time reaches 13:00 the video should be displayed instead of the image. Can this be done with PHP, or is AJAX (or something else) needed? Waiting for some help. A: Create a javascript function that performs an ajax call to a time endpoint, which can decide whether to set the stream or an image: <div id="stream-content"></div> setInterval(function() { $.ajax({ url : "/gettime.php", type: 'GET', success: function(data){ //check time, if correct, set video to the div #stream-content otherwise, set an image as the content } }); }, 1000);
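One detail worth double-checking in the condition itself: with $now >= $start && $now <= $end, the video branch is taken for the whole of hour 14, i.e. until 14:59, while the question asks for the switch at 14:00. A half-open comparison gives the intended window; sketched in Python for illustration:

```python
def show_video(hour, start=13, end=14):
    # Half-open interval: video from 13:00 up to, but not including, 14:00.
    # The PHP snippet's `hour <= end` would keep the video on until 15:00.
    return start <= hour < end

assert show_video(13)
assert not show_video(14)  # switches to the image exactly at 14:00
```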
{ "language": "en", "url": "https://stackoverflow.com/questions/26480302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How to change checkbox to required html format in pdfmake? I want to change the below html to required format as in below image, <div id="pdfTable" #pdfTable style="display: none;"> <table class="table table-borderless mb-0" style="font-size:14px;"> <thead> <label for="" style="text-align: center;">Recommendation: </label> <tr style=""> <td *ngFor="let recommendation of recommendations" style="border:0;width:25%"> <span *ngIf="recoName == recommendation.listTypeValueName"> <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYA alt=""> {{recommendation.listTypeValueName}} </span> <p *ngIf="recoName != recommendation.listTypeValueName"><img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAA alt=""> {{recommendation.listTypeValueName}} </p> </td> </tr> </thead> </table> </div> Required Format What I am getting from above html
{ "language": "en", "url": "https://stackoverflow.com/questions/75432136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Apply Folder Icon Change I am attempting to change the icon of a folder. The code below does all what I found online says to do but the icon never changes. Am I maybe not "Applying" the change? string createdFile = Path.Combine(@"C:\Users\np\Desktop\PUTEST", "desktop.ini"); if (File.Exists(createdFile)) { var di = new DirectoryInfo(createdFile); di.Attributes &= ~FileAttributes.ReadOnly; File.Delete(createdFile); File.Create(createdFile).Dispose(); } else { File.Create(createdFile).Dispose(); } //string iconPath = @"%SystemRoot%\system32\SHELL32.dll"; string iconPath = Environment.ExpandEnvironmentVariables(@"%SystemRoot%\system32\SHELL32.dll"); string iconIndex = "-183"; using (TextWriter tw = new StreamWriter(createdFile)) { tw.WriteLine("[.ShellClassInfo]"); tw.WriteLine("IconResource=" + iconPath + "," + iconIndex); //tw.WriteLine("IconFile=" + iconPath); //tw.WriteLine("IconIndex=" + iconIndex); } File.SetAttributes(createdFile, System.IO.FileAttributes.ReadOnly); File.SetAttributes(createdFile, System.IO.FileAttributes.System); File.SetAttributes(createdFile, System.IO.FileAttributes.Hidden); A: When crafting a file like this it's always good to do so using Explorer or Notepad first, then write/adjust your code to match whatever was produced. Otherwise, it's harder to figure out if the problem is with your file or your code. I believe the minimum requirements to make this work is Desktop.ini must be marked System and the parent directory must be marked ReadOnly (System may work there as well, but I know ReadOnly definitely does). So, your code is working with the right attributes, but there are still a few problems. Your if ... else ... block is saying "If a file exists at this path, create a directory at that path, then delete the file at that path, then create a file at that path." Of course, the directory should not and cannot have the same path as the file. 
I assume you are deleting and recreating the file to clear the contents when it already exists, however File.Create() overwrites (truncates) existing files, making the calls to both File.Delete() and File.Exists() unnecessary. More importantly is this line... di.Attributes &= ~FileAttributes.ReadOnly; ...with which there are two problems. First, you are ANDing the directory's attributes with the negation of ReadOnly, which has the effect of removing ReadOnly and keeping the other attributes the same. You want to ensure ReadOnly is set on the directory, so you want to do the opposite of the code you used: OR the directory's attributes with ReadOnly (not negated)... di.Attributes |= FileAttributes.ReadOnly; Also, you need that attribute set regardless of whether you created the directory or not, so that line should be moved outside of the if ... else .... Another issue is the successive calls to File.SetAttributes(). After those three calls the file's attributes will be only Hidden, since that was the value of the last call. Instead, you need to combine (bitwise OR) those attributes in a single call. A couple of other minor tweaks... * *As you know since you are calling Dispose() on it, File.Create() returns a FileStream to that file. Instead of throwing it away, you could use it to create your StreamWriter, which will have to create one, anyways, under the covers. Better yet, call File.CreateText() instead and it will create the StreamWriter for you. *Environment variables are supported in Desktop.ini files, so you don't have to expand them yourself. This would make the file portable between systems if, say, you copied it from one system to another, or the directory is on a network share accessed by multiple systems with different %SystemRoot% values. Incorporating all of the above changes your code becomes... 
// Create a new directory, or get the existing one if it exists DirectoryInfo directory = Directory.CreateDirectory(@"C:\Users\np\Desktop\PUTEST"); directory.Attributes |= FileAttributes.ReadOnly; string filePath = Path.Combine(directory.FullName, "desktop.ini"); string iconPath = @"%SystemRoot%\system32\SHELL32.dll"; string iconIndex = "-183"; using (TextWriter tw = File.CreateText(filePath)) { tw.WriteLine("[.ShellClassInfo]"); tw.WriteLine("IconResource=" + iconPath + "," + iconIndex); //tw.WriteLine("IconFile=" + iconPath); //tw.WriteLine("IconIndex=" + iconIndex); } File.SetAttributes(filePath, FileAttributes.ReadOnly | FileAttributes.System | FileAttributes.Hidden); One catch is that the above code throws an exception if you run it twice in succession. This is because the File.Create*() methods fail if the input file is Hidden or ReadOnly. We could use new FileStream() as an alternative, but that still throws an exception if the file is ReadOnly. Instead, we'll just have to remove those attributes from any existing input file before opening it... 
// Create a new directory, or get the existing one if it exists DirectoryInfo directory = Directory.CreateDirectory(@"C:\Users\np\Desktop\PUTEST"); directory.Attributes |= FileAttributes.ReadOnly; string filePath = Path.Combine(directory.FullName, "desktop.ini"); FileInfo file = new FileInfo(filePath); try { // Remove the Hidden and ReadOnly attributes so file.Create*() will succeed file.Attributes = FileAttributes.Normal; } catch (FileNotFoundException) { // The file does not yet exist; no extra handling needed } string iconPath = @"%SystemRoot%\system32\SHELL32.dll"; string iconIndex = "-183"; using (TextWriter tw = file.CreateText()) { tw.WriteLine("[.ShellClassInfo]"); tw.WriteLine("IconResource=" + iconPath + "," + iconIndex); //tw.WriteLine("IconFile=" + iconPath); //tw.WriteLine("IconIndex=" + iconIndex); } file.Attributes = FileAttributes.ReadOnly | FileAttributes.System | FileAttributes.Hidden; I changed from using File to FileInfo since that makes this a little easier.
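The attribute handling in this answer is plain bit-flag arithmetic, and the two bugs it calls out — clearing a flag with &= ~ instead of setting it with |=, and successive assignments overwriting each other — can be shown language-neutrally. A small Python sketch (the flag values match the .NET FileAttributes constants used above):

```python
from enum import IntFlag

class FileAttributes(IntFlag):
    READONLY = 0x1
    HIDDEN = 0x2
    SYSTEM = 0x4

attrs = FileAttributes.HIDDEN
attrs |= FileAttributes.READONLY       # set ReadOnly, keep other flags
assert FileAttributes.READONLY in attrs

attrs &= ~FileAttributes.READONLY      # the original code's mistake: this clears it
assert FileAttributes.READONLY not in attrs

# Combining in one expression, like the single File.SetAttributes call:
combined = FileAttributes.READONLY | FileAttributes.SYSTEM | FileAttributes.HIDDEN
assert combined == FileAttributes(0x7)
```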
{ "language": "en", "url": "https://stackoverflow.com/questions/57713671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Dealing with character variables containing semicolons in CSV files I have a file separated by semicolons in which one of the variables of type character contains semicolons inside it. The readr::read_csv2 function splits the contents of those variables that have semicolons into more columns, messing up the formatting of the file. For example, when using read_csv2 to open the file below, Bill's age column will show jogging, not 41. File: name;hobbies;age Jon;cooking;38 Bill;karate;jogging;41 Maria;fishing;32 Considering that the original file doesn't contain quotes around the character type variables, how can I import the file so that karate and jogging belong in the hobbies column? A: read.csv() You can use the read.csv() function. But there would be some warning messages (or use suppressWarnings() to wrap around the read.csv() function). If you wish to avoid the warning messages, use the scan() method in the next section. library(dplyr) read.csv("./path/to/your/file.csv", sep = ";", col.names = c("name", "hobbies", "age", "X4")) %>% mutate(hobbies = ifelse(is.na(X4), hobbies, paste0(hobbies, ";" ,age)), age = ifelse(is.na(X4), age, X4)) %>% select(-X4) scan() You can first scan() the CSV file as a character vector, then split the string with pattern ; and change it into a dataframe. After that, do some mutate() to identify your target column and remove unnecessary columns. Finally, use the first row as the column name.
library(tidyverse) library(janitor) semicolon_file <- scan(file = "./path/to/your/file.csv", character()) semicolon_df <- data.frame(str_split(semicolon_file, ";", simplify = T)) semicolon_df %>% mutate(X4 = na_if(X4, ""), X2 = ifelse(is.na(X4), X2, paste0(X2, ";" ,X3)), X3 = ifelse(is.na(X4), X3, X4)) %>% select(-X4) %>% janitor::row_to_names(row_number = 1) Output name hobbies age 2 Jon cooking 38 3 Bill karate;jogging 41 4 Maria fishing 32 A: Assuming that you have the columns name and age with a single entry per observation and hobbies with possible multiple entries the following approach works: * *read in the file line by line instead of treating it as a table: tmp <- readLines(con <- file("table.csv")) close(con) *Find the position of the separator in every row. The entry before the first separator is the name the entry after the last is the age: separator_pos <- gregexpr(";", tmp) name <- character(length(tmp) - 1) age <- integer(length(tmp) - 1) hobbies <- vector(length=length(tmp) - 1, "list") *fill the three elements using a for loop: # the first line are the colnames for(line in 2:length(tmp)){ # from the beginning of the row to the first";" name[line-1] <- strtrim(tmp[line], separator_pos[[line]][1] -1) # between the first ";" and the last ";". 
# Every ";" is a different elemet of the list hobbies[line-1] <- strsplit(substr(tmp[line], separator_pos[[line]][1] +1, separator_pos[[line]][length(separator_pos[[line]])]-1),";") #after the last ";", must be an integer age[line-1] <- as.integer(substr(tmp[line],separator_pos[[line]][length(separator_pos[[line]])]+1, nchar(tmp[line]))) } *Create a separate matrix to hold the hobbies and fill it rowwise: hobbies_matrix <- matrix(NA_character_, nrow = length(hobbies), ncol = max(lengths(hobbies))) for(line in 1:length(hobbies)) hobbies_matrix[line,1:length(hobbies[[line]])] <- hobbies[[line]] *Add all variable to a data.frame: df <- data.frame(name = name, hobbies = hobbies_matrix, age = age) > df name hobbies.1 hobbies.2 age 1 Jon cooking <NA> 38 2 Bill karate jogging 41 3 Maria fishing <NA> 32 A: You could also do: read.csv(text=gsub('(^[^;]+);|;([^;]+$)', '\\1,\\2', readLines('file.csv'))) name hobbies age 1 Jon cooking 38 2 Bill karate;jogging 41 3 Maria fishing 32 A: Ideally you'd ask whoever generated the file to do it properly next time :) but of course this is not always possible. Easiest way is probably to read the lines from the file into a character vector, then clean up and make a data frame by string matching. library(readr) library(dplyr) library(stringr) # skip header, add it later dataset <- read_lines("your_file.csv", skip = 1) dataset_df <- data.frame(name = str_match(dataset, "^(.*?);")[, 2], hobbies = str_match(dataset, ";(.*?);\\d")[, 2], age = as.numeric(str_match(dataset, ";(\\d+)$")[, 2])) Result: name hobbies age 1 Jon cooking 38 2 Bill karate;jogging 41 3 Maria fishing 32 A: Using the file created in the Note at the end 1) read.pattern can read this by specifying the pattern as a regular expression with the portions within parentheses representing the fields. 
library(gsubfn) read.pattern("hobbies.csv", pattern = '^(.*?);(.*);(.*)$', header = TRUE) ## name hobbies age ## 1 Jon cooking 38 ## 2 Bill karate;jogging 41 ## 3 Maria fishing 32 2) Base R Using base R we can read in the lines, put quotes around the middle field and then read it in normally. L <- "hobbies.csv" |> readLines() |> sub(pattern = ';(.*);', replacement = ';"\\1";') read.csv2(text = L) ## name hobbies age ## 1 Jon cooking 38 ## 2 Bill karate;jogging 41 ## 3 Maria fishing 32 Note Lines <- "name;hobbies;age Jon;cooking;38 Bill;karate;jogging;41 Maria;fishing;32 " cat(Lines, file = "hobbies.csv")
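All of these answers rely on the same observation: with one name field before the first semicolon and one age field after the last, only the outermost separators are structural. That split-first/split-last idea ports to any language; a Python sketch using the example file's rows:

```python
def parse_row(line):
    # Split on the first ';' (name) and the last ';' (age);
    # everything in between is the hobbies field, embedded ';' included.
    name, rest = line.split(";", 1)
    hobbies, age = rest.rsplit(";", 1)
    return {"name": name, "hobbies": hobbies, "age": int(age)}

rows = [parse_row(line) for line in
        ["Jon;cooking;38", "Bill;karate;jogging;41", "Maria;fishing;32"]]
```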
{ "language": "en", "url": "https://stackoverflow.com/questions/71135244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Implementing Tree Structure in a UITableView I want to implement a tree structure in a UITableView something like Item-1 subItem-1 subItem-2 Item-2 Item-3 subItem-1 subItem-2 but I don't want to use custom cells for this. I have done a lot of Googling for this but I got one link which said I would have to use custom cells. If anyone has any ideas, please share them with me. Thanks in advance. A: A table view isn't really designed to display hierarchical views. What's supposed to happen is that you drill down (push a new view controller on the stack) to get to the next level of the hierarchy. However, if your hierarchy is only as deep as you suggest, then you could have your items as headers and subitems as rows. If you don't want to create a custom view (why not?) you'd be limited to just text for your top-level items.
{ "language": "en", "url": "https://stackoverflow.com/questions/12704091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Make Horizontal Indeterminate ProgressBar a Fixed Width I currently have a horizontal ProgressBar defined in XML as below, which I am using to show the progress of an item loading: <ProgressBar android:id="@+id/loading_progressbar" style="@style/Widget.AppCompat.ProgressBar.Horizontal" android:layout_width="match_parent" android:layout_height="wrap_content" android:indeterminate="true" android:indeterminateBehavior="repeat" android:indeterminateDuration="1000" android:indeterminateDrawable="@drawable/custom_loading_progressbar" /> Here is the custom drawable to allow me to colour the ProgressBar how I wish: <?xml version="1.0" encoding="UTF-8"?> <layer-list xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@android:id/background"> <shape android:shape="rectangle"> <solid android:color="@color/black" /> <size android:height="2dp" /> </shape> </item> <item android:id="@android:id/secondaryProgress"> <clip> <shape android:shape="rectangle"> <solid android:color="@color/black" /> <size android:height="2dp" /> </shape> </clip> </item> <item android:id="@android:id/progress"> <clip> <shape android:shape="rectangle"> <solid android:color="@color/white" /> <size android:height="2dp" /> </shape> </clip> </item> </layer-list> The issue I have with the current indeterminate implementation is that it pulses between being a small width and a larger width. This can be seen in the second ProgressBar at this link: https://materialdoc.com/images/linear-progress-1.gif I purely want to be able to specify a width of 100dp and allow it to keep cycling through its repeats until complete. Is there any way to set a fixed width for an indeterminate ProgressBar?
{ "language": "en", "url": "https://stackoverflow.com/questions/49382166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to validate click one per browser? How can I restrict rating so that a user can rate only once per browser? I need to add validation to a rating component in React where a client can rate only once per browser. Here is an example of the rating component: export default function App() { const [value, setValue] = React.useState(0); return ( <div> <Box component="fieldset" mb={3} borderColor="transparent"> <Typography component="legend">Rate</Typography> <Rating name="pristine" value={value} onClick={(e) => setValue(e.target.value)} /> </Box> </div> ); } Here is a sandbox link of the component: sandbox link A: One possible solution you could think of is to use localStorage. For every user who has already rated, you store a boolean in local storage. Then next time, it is validated first. Something like this: export default function App() { const [value, setValue] = React.useState(0); const hasRated = localStorage.getItem("hasRated"); const handleRate = (e) => { setValue(e.target.value); localStorage.setItem("hasRated", true); }; return ( <div> <Box component="fieldset" mb={3} borderColor="transparent"> <Typography component="legend">Rate</Typography> <Rating disabled={hasRated} name="pristine" value={value} onClick={handleRate} /> </Box> </div> ); } Please note that this is only a demonstration example, so you may need to improve it based on your needs. Sandbox Example:
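The guard in the answer above can be sketched independently of React. This is a minimal sketch with our own names (`storage`, `rate` — not part of any real API); the Map stands in for window.localStorage so the logic can run anywhere:

```javascript
// Minimal sketch of the "rate once per browser" guard.
// `storage` stands in for window.localStorage.
const storage = new Map();

function rate(value) {
  if (storage.get("hasRated")) {
    return false;                    // this browser has already rated
  }
  storage.set("rating", String(value));
  storage.set("hasRated", "true");   // remember for next time
  return true;
}
```

In a browser you would swap storage.get/set for localStorage.getItem/setItem, exactly as the answer does. Note that localStorage is per-browser, not per-user: clearing site data resets the guard.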
{ "language": "en", "url": "https://stackoverflow.com/questions/69204621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Predicting images Jupyter notebook For my first Machine Learning experience I have a basic classification to do. I have 3 different folders: train_path = './dataset/pneumonia/train/' test_path = './dataset/pneumonia/test/' val_path = './dataset/pneumonia/val/' Each folder: os.listdir(train_path) returns ['NORMAL', 'PNEUMONIA'] In each set: Training set: Normal: 949, Pneumonia: 949. Test set: Normal: 317, Pneumonia: 317. Validation set: Normal: 317, Pneumonia: 317. I use tensorflow: from tensorflow.keras.preprocessing.image import ImageDataGenerator image_gen = ImageDataGenerator( rotation_range=10, # rotate the image 10 degrees width_shift_range=0.10, # Shift the pic width by a max of 10% height_shift_range=0.10, # Shift the pic height by a max of 10% rescale=1/255, # Rescale the image by normalizing it. shear_range=0.1, # Shear means cutting away part of the image (max 10%) zoom_range=0.1, # Zoom in by 10% max horizontal_flip=True, # Allow horizontal flipping fill_mode='nearest' # Fill in missing pixels with the nearest filled value ) image_gen.flow_from_directory(train_path) image_gen.flow_from_directory(test_path) I create a model (basic model): model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=(image_width, image_height, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3), input_shape=(image_width, image_height, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(128, (3, 3), input_shape=(image_width, image_height, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(256, (3, 3), input_shape=(image_width, image_height, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(512, (3, 3), input_shape=(image_width, image_height, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(512, activation='relu')) # Dropouts help reduce overfitting by randomly turning
neurons off during training. # Here we say randomly turn off 50% of neurons. model.add(Dropout(0.5)) # Last layer, remember it's binary so we use sigmoid model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) Then I train it: train_image_gen = image_gen.flow_from_directory( train_path, target_size=image_shape[:2], color_mode='rgb', batch_size=batch_size, class_mode='binary' ) results = model.fit_generator(train_image_gen, epochs=20, validation_data=test_image_gen, callbacks=[early_stop, board]) So far so good, results are correct: pred_probabilities = model.predict_generator(test_image_gen) predictions = pred_probabilities > 0.5 confusion_matrix(test_image_gen.classes, predictions) I obtain rather good results. My issue is when I want to predict images it returns results which are far from being correct: val_image_gen = image_gen.flow_from_directory( val_path, target_size=image_shape[:2], color_mode='rgb', class_mode='binary', ) pred_probabilities = model.predict_generator(val_image_gen) predictions = pred_probabilities > 0.5 Here is the output I obtain: precision recall f1-score support 0 0.51 0.57 0.53 317 1 0.51 0.44 0.47 317 accuracy 0.51 634 macro avg 0.51 0.51 0.50 634 weighted avg 0.51 0.51 0.50 634 The confusion matrix on this data set is the following: [[180 137] [176 141]] A: A few issues with your code: You are using the test set for validation and the validation set for testing. This may or may not be a problem, depending on your data and how it was split. Augmentation should be applied only to the training set. Use a separate instance of ImageDataGenerator(rescale=1/255) for testing and validation. Your test results look like they were obtained from an untrained model. Check whether the model object you are running the test on is the same one you were training. You may want to use the model.save() and load_model() functions to preserve model weights after training.
A: I replaced: val_image_gen = image_gen.flow_from_directory( val_path, target_size=image_shape[:2], color_mode='rgb', class_mode='binary', ) by: val_image_gen = image_gen.flow_from_directory( val_path, target_size=image_shape[:2], color_mode='rgb', batch_size=batch_size, class_mode='binary', shuffle=False ) I obtain nice results: [[269 48] [ 3 314]]
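The shuffle=False fix matters because flow_from_directory reports labels (.classes) in directory order, while a shuffling generator yields samples in a different order; pairing predictions with .classes then scrambles the confusion matrix. A Keras-free sketch of that mismatch (the permutation and label layout below are invented purely for illustration):

```python
# Toy illustration of why shuffle=True breaks confusion_matrix(gen.classes, preds).
labels_dir_order = [0] * 5 + [1] * 5        # what generator.classes would report

# A fixed permutation standing in for the generator's shuffled yield order.
order = [3, 7, 1, 9, 0, 5, 2, 8, 4, 6]

# Even a perfect model emits predictions in yield order, not directory order.
preds_yield_order = [labels_dir_order[i] for i in order]

# Naive pairing with .classes: spurious "errors" appear.
shuffled_mismatches = sum(p != y for p, y in zip(preds_yield_order, labels_dir_order))

# With shuffle=False the yield order IS the directory order, so pairing is exact.
unshuffled_mismatches = sum(p != y for p, y in zip(labels_dir_order, labels_dir_order))
```

Here a perfect classifier appears to get 4 of 10 samples wrong purely because of the ordering, which is the same effect as the bad [[180 137] [176 141]] matrix in the question.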
{ "language": "en", "url": "https://stackoverflow.com/questions/61598707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: want to change the cursor on mouse move on JTree listed elements I want to change the cursor to a hand cursor on mouse move over a JTree component when the cursor is on listed elements only, not for the whole component. The code below is for a JList component. I want the same for the JTree, but JTree does not have getCellBounds(). Is there any other way? final JList list = new JList(new String[] {"a","b","c"}); list.addMouseMotionListener(new MouseMotionListener() { @Override public void mouseMoved(MouseEvent e) { final int x = e.getX(); final int y = e.getY(); // only display a hand if the cursor is over the items final Rectangle cellBounds = list.getCellBounds(0, list.getModel().getSize() - 1); if (cellBounds != null && cellBounds.contains(x, y)) { list.setCursor(new Cursor(Cursor.HAND_CURSOR)); } else { list.setCursor(new Cursor(Cursor.DEFAULT_CURSOR)); } } @Override public void mouseDragged(MouseEvent e) { } }); A: You are looking for something like this, I guess: import java.awt.Cursor; import java.awt.Rectangle; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import javax.swing.JFrame; import javax.swing.JScrollPane; import javax.swing.JTree; import javax.swing.SwingUtilities; import javax.swing.tree.DefaultMutableTreeNode; import javax.swing.tree.DefaultTreeModel; import javax.swing.tree.TreePath; public class TestTreeSelection { protected void initUI() { final DefaultMutableTreeNode root = new DefaultMutableTreeNode("Root"); fillTree(root, 5, "Some tree label"); final DefaultTreeModel model = new DefaultTreeModel(root); final JTree tree = new JTree(model); tree.addMouseMotionListener(new MouseAdapter() { @Override public void mouseMoved(MouseEvent e) { boolean inside = false; TreePath path = tree.getPathForLocation(e.getX(), e.getY()); if (path != null) { Rectangle pathBounds = tree.getPathBounds(path); inside = pathBounds.contains(e.getPoint()); } if (inside) { tree.setCursor(Cursor.getPredefinedCursor(Cursor.HAND_CURSOR)); } else {
tree.setCursor(Cursor.getDefaultCursor()); } } }); JFrame f = new JFrame(); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.add(new JScrollPane(tree)); f.setSize(400, 600); f.setLocationRelativeTo(null); f.setVisible(true); } private void fillTree(DefaultMutableTreeNode parent, int level, String label) { for (int i = 0; i < 5; i++) { DefaultMutableTreeNode node = new DefaultMutableTreeNode(label + " " + i); parent.add(node); if (level > 0) { fillTree(node, level - 1, label); } } } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { new TestTreeSelection().initUI(); } }); } } A: This code works: tree_object.addMouseMotionListener(new MouseMotionListener() { @Override public void mouseDragged(MouseEvent e) { } @Override public void mouseMoved(MouseEvent e) { TreePath tp = ((JTree)e.getSource()).getPathForLocation(e.getX(), e.getY()); if(tp != null) { ((JTree)e.getSource()).setCursor(new Cursor(Cursor.HAND_CURSOR)); } else { ((JTree)e.getSource()).setCursor(new Cursor(Cursor.DEFAULT_CURSOR)); } } });
{ "language": "en", "url": "https://stackoverflow.com/questions/14522221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Flutter Web: Divider color not showing when scrolling page In my Flutter project, the Divider widget color is not showing when scrolling. The issue exists only in the Flutter web project. A: For the last two years I have been using the divider code below. It works well on all platforms. To fix your issue, please provide your code. Widget commonDividerWidget(Color color) { return Divider( thickness: 1.0, color: color, ); }
{ "language": "en", "url": "https://stackoverflow.com/questions/74274747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Scala how to convert Map to varargs of tuples? In the context of Scala Play 2.2.x testing I have a Map[String, String] which I need to pass to a function that accepts (String, String)* i.e. a varargs of (String, String) tuple. e.g. val data : Map[String, String] = Map("value" -> "25", "id" -> "", "columnName" -> "trades") route(FakeRequest(POST, "/whatever/do").withFormUrlEncodedBody(data)) but this gives a type mismatch because withFormUrlEncodedBody accepts only a (String, String)* type. A: Simply: def foo(names: (String, String)*) = names.foreach(println) val folks = Map("john" -> "smith", "queen" -> "mary") foo(folks.toSeq:_*) // (john,smith) // (queen,mary) Where _* is a hint to compiler. A: Oh found the answer e.g. route(FakeRequest(POST, "/wharever/do").withFormUrlEncodedBody(data.toList: _*)) or route(FakeRequest(POST, "/wharever/do").withFormUrlEncodedBody(data.toSeq: _*)) or route(FakeRequest(POST, "/wharever/do").withFormUrlEncodedBody(data.toArray: _*))
{ "language": "en", "url": "https://stackoverflow.com/questions/24012146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: javascript showing old events I am having some trouble with javascript event handling. In my application I have an event which makes external xml requests and gives a response to my event listener. This is allowed to happen many times to the same event listener. My event listener is repeating functions for old copies of my event and repeating all of my functions on each of them. Here is my code: document.addEventListener("data", getRemoteDataEvent, false); function getRemoteDataEvent(event){ console.log(event); if(event.success===false){ console.log(event); alert("error obtaining remote data"); } else if(event.response != "<query_result></query_result>"){ var xml = $.parseXML(event.response); parseThis(xml); } else if(event.response == "<query_result></query_result>"){ console.log(event.response); alert("Sorry, we have not yet come to your area"); } } Has anyone else run into this issue before? Edit: updated to show the remaining bits of my function. Nothing too important here; however, do you see me missing a step in handling this? A: You're probably adding the event listener each time you send the request. If you do, you should remove the listener when it finally runs, or just add it once.
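The diagnosis in the answer can be reproduced with a plain EventTarget (available in modern browsers and in Node 15+); the event name and counters here are ours, chosen to mirror the question:

```javascript
// Bug pattern: a fresh listener function is registered before every request,
// so a single "data" event fires every stale copy.
const buggy = new EventTarget();
let buggyCalls = 0;
for (let i = 0; i < 3; i++) {
  buggy.addEventListener("data", () => { buggyCalls += 1; }); // new closure each time
}
buggy.dispatchEvent(new Event("data")); // buggyCalls is now 3, not 1

// Fix: register one named function once. addEventListener ignores exact
// duplicates (same function reference, type, and capture flag), so even a
// repeated add of the same reference is harmless.
const fixed = new EventTarget();
let fixedCalls = 0;
function onData() { fixedCalls += 1; }
for (let i = 0; i < 3; i++) {
  fixed.addEventListener("data", onData); // same reference: added only once
}
fixed.dispatchEvent(new Event("data")); // fixedCalls is now 1
```

Alternatively, call removeEventListener("data", getRemoteDataEvent, false) inside the handler if it should only ever run once.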
{ "language": "en", "url": "https://stackoverflow.com/questions/24787301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: load page at anchor, less 175px When loading a page from a link using an anchor (i.e., website.com/#midPage), I need the anchor to actually load 175px below the top of the page. Is there a convenient way to do this? JavaScript or jQuery is fine by me, but I don't know what method to use.
{ "language": "en", "url": "https://stackoverflow.com/questions/10742302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Call two relational tables in a single id in Laravel. (check photos) I am trying to combine the responses below into a single entry. Here I got two keys, but how will I make it a single key? In my controller I specified an id for two different resources, for author and authorprofile. But how can I use a single resource, since author and author profile are related tables? Please check the resource attributes in the screenshot. A: Hey, it's better to use an Eloquent relation. If you have an Author model and an AuthorProfile model, put the method below in the Author model: public function Authorprofile(){ return $this->belongsTo(AuthorProfile::class, 'authorprofile_id'); } Then use it in the controller like: public function show($id){ $author = Author::with('Authorprofile')->find($id); } Hope it helps
{ "language": "en", "url": "https://stackoverflow.com/questions/58533698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ValueError: could not broadcast input array from shape in python 2.7 I'm a newbie in using python 2.7. I run this command python plot_bands_gnu.py but get Traceback (most recent call last): File "plot_bands_gnu.py", line 16, in <module> ebands [:,ib] = databands[idx1:idx2,1] ValueError: could not broadcast input array from shape (50) into shape (60) How to solve this?
{ "language": "en", "url": "https://stackoverflow.com/questions/51834598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Show Currency Symbol after values I am using CultureInfo methods to successfully format all different currencies into their correct format. But on some exceptions, such as EUR and SEK currencies I need to be able to add them after the value. At the moment my CultureInfo is formatting them in the following way: "SEK 1.00,00" when it needs to be "1.00,00 SEK". Any help is appreciated. A: All you need is to change the NumberFormatInfo.CurrencyPositivePattern and NumberFormatInfo.CurrencyNegativePattern properties for the culture. Just clone the original culture: CultureInfo swedish = new CultureInfo("sv-SE"); swedish = (CultureInfo)swedish.Clone(); swedish.NumberFormat.CurrencyPositivePattern = 3; swedish.NumberFormat.CurrencyNegativePattern = 3; and then var value = 123.99M; var result = value.ToString("C", swedish); should give you desired result. This should get you: 123,99 kr A: Be careful about the CurrencyNegativePattern This code CultureInfo swedish = new CultureInfo("sv-SE"); swedish = (CultureInfo)swedish.Clone(); swedish.NumberFormat.CurrencyPositivePattern = 3; swedish.NumberFormat.CurrencyNegativePattern = 3; Will give you 134,99 kr. kr.134,99kr.- Changing CurrencyNegativePattern to 8 swedish.NumberFormat.CurrencyNegativePattern = 8; Will give you 134,99 kr. -134,99 kr. More info https://msdn.microsoft.com/en-us/library/system.globalization.numberformatinfo.currencynegativepattern(v=vs.110).aspx A: string.Format(CultureInfo.CurrentCulture, "{0:C}", moneyValue.DataAmount)
{ "language": "en", "url": "https://stackoverflow.com/questions/5272760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Ajax doesn't work in IE if URL contains Arabic character In my Blogger website I load posts from JSON feed, The JSON link looks like this. http://technopress-demo.blogspot.com/feeds/posts/default/-/LABEL NAME?alt=json-in-script&max-results=5 This is the code that I use to get posts from the URL above. $.ajax({url:""+window.location.protocol+"//"+window.location.host +"/feeds/posts/default/-/"+LABEL NAME +"?alt=json-in-script&max-results=5", type:'get',dataType:"jsonp",success:function(data){} The problem is that when I change 'LABEL NAME' with an Arabic label the posts didn't load. I tested it with English label and it's working fine, but I have problem with Arabic ones. I have tried this to decode URL but it's not working. $.ajax({url:""+window.location.protocol+"//"+window.location.host +"/feeds/posts/default/-/"+encodeURIComponent(LABEL NAME) +"?alt=json-in-script&max-results=5", type:'get',dataType:"jsonp",success:function(data){} This is a live demo of the problem. A: IE has problems with not properly encoded URLS, it has also problems with simple <a href containing unencoded chars. LABEL%20NAME instead of LABEL NAME should work. With JSONP, jQuery generates a <script src="http://technopress-demo.blogspot.com/feeds/posts/default/-/LABEL NAME?alt=json-in-script&max-results=5"> which has the unencoded char in it. Instead of encodeURIComponent(LABEL NAME), use quotation marks: encodeURIComponent("LABEL NAME") Important: Save your files UTF-8 encoded. (pic from blog.flow.info) Example which works in IE (copied from Firefox+Firebug): A: In your live demo, removing .shortext { text-indent: -9999px; } From the css makes it look OK in IE for me. The div with id="recent" and class="recent shortext" seems to have different markup in FF.
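The accepted fix boils down to percent-encoding the label before it is interpolated into the JSONP script URL; encodeURIComponent handles both spaces and Arabic characters. A standalone sketch (the helper name and Arabic label value are ours, for illustration):

```javascript
// Percent-encode the label so the generated <script src=...> URL is valid
// in every browser, including IE.
function feedUrl(label, maxResults) {
  return "/feeds/posts/default/-/" + encodeURIComponent(label) +
         "?alt=json-in-script&max-results=" + maxResults;
}

var spaced = feedUrl("LABEL NAME", 5);   // the space becomes %20
var arabic = feedUrl("تقنية", 5);        // Arabic becomes UTF-8 %-escapes
```

This is why the answer stresses UTF-8 file encoding: encodeURIComponent escapes the label's UTF-8 bytes, so the server must interpret them the same way.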
{ "language": "en", "url": "https://stackoverflow.com/questions/23582205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: My code is freezing and using too much CPU power. (Python) I am attempting to make a sort of AI-like script which will figure out your phrase. It is still a work in progress and is not yet complete. Each guess should print the following info: The text produced: ......... The string generated by the script. The length of the text: ... The length of the previously mentioned string. The score of the text: ..... The script will score the text based on how long it is and what letters are in it. The target text: ................. The text it is trying to produce. For some reason, after the first string the code freezes. I am hoping someone can help. Thanks in advance! import random posAlpha = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "!", "?", ",", ".", " ", "&", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] def scoreText(input, goal): output = 0-abs(len(input)-len(goal)) for i in range(len(min(input, goal, key=len))): if not (input[i] in goal): output+=0 elif not (input[i] == goal[i]): output += 1 else: output += 2 if output < 1: return 1 else: return output goal = input("Target phrase: ") score = 0 Ai="" for i in range(random.randrange(round(len(goal)*0.5), len(goal)*2)): Ai = Ai+ random.choice(posAlpha) score = scoreText(Ai, goal) print("Result: " + Ai) print("Result Length: " + str(len(Ai))) print("Goal: " + goal) print("Score: " + str(score)) print() while True: oldAi = Ai Ai = "" for i in range(len(Ai)-1): if (score == 1 or random.randrange(1, score) == 1): Ai = Ai+random.choice(posAlpha) else: Ai = Ai+oldAi[i] while True: lenVariation=random.randrange(-4, 4) if not (len(Ai)+lenVariation > len(goal)*2 or len(Ai)+lenVariation<round(len(goal)*0.5)): if lenVariation > 0: for i in range(lenVariation): Ai = Ai+ random.choice(posAlpha) break elif
lenVariation < 0: for i in range(lenVariation): AI=Ai[:-1] break score = scoreText(Ai, goal) print("Result: " + Ai) print("Result Length: " + str(len(Ai))) print("Goal: " + goal) print("Score: " + str(score)) print() input("Goal Achived!!") A: It is because the execution is stuck in the second infinite loop. The condition (len(Ai)+lenVariation > len(goal)*2 or len(Ai)+lenVariation<round(len(goal)*0.5)) is met every time after the first execution, so the if statement is never evaluated to True and the while loop is never exited. Also, note that your break statements only exit the for loop and not the while loop, so statements after the second break are never executed.
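The second point in the answer — that break leaves only the innermost loop — is worth a minimal standalone demonstration (the function name is ours, not from the question's code):

```python
def break_leaves_inner_loop_only():
    """Show that `break` exits the innermost loop; the outer loop needs a flag."""
    outer_iterations = 0
    done = False
    while not done:
        outer_iterations += 1
        for i in range(3):
            if i == 1:
                done = True   # signal the outer while loop explicitly
                break         # exits only this for loop, not the while
    return outer_iterations
```

Without the `done = True` flag the while condition never changes and the loop spins forever, which is exactly the freeze and the CPU usage described in the question.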
{ "language": "en", "url": "https://stackoverflow.com/questions/71111604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Voice profile in Google Assistant Does Google Assistant provide individual voice identification, just like Alexa's voice profiles? Thank you A: The Google Assistant does not provide user recognition as a core part of the platform or as part of the Assistant SDK.
{ "language": "en", "url": "https://stackoverflow.com/questions/50901973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: Java - gov.nasa.jpf.jvm.Verify Pathfinder Package Doesn't Exist I am attempting to use Java Pathfinder and I have Pathfinder working. import gov.nasa.jpf.jvm.Verify; user.java:2: package gov.nasa.jpf.jvm does not exist import gov.nasa.jpf.jvm.Verify; I need to use the Verify.random function. Can anyone tell me how to resolve this problem? I don't really understand how the importation of what I am assuming is a URL works. A: Having used Java Pathfinder some time back, I know that it's not an applet as the other answer worries. You are getting this error because the Java Pathfinder jar files are not on your classpath. Here is a complete Java Pathfinder Getting Started tutorial that could help others coming to this old thread. A: It's not a URL at all (except in the case of applets, and you didn't mention an applet). You need to put the Pathfinder jar on the classpath when compiling and running, via -cp to javac and java. If it's an applet, it's more complex.
{ "language": "en", "url": "https://stackoverflow.com/questions/5850840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: sed line matching with exclamation I am trying to understand this sed line matching part. What does it do exactly? The pattern is supposed to match comment lines starting with ## and attempt to remove the comment characters at the front of the line. pn_ere="^[[:space:]]*([#;!]+|@c|${cmt})[[:space:]]+" sed -E -n " /$beg_ere/ { :L1 n /$end_ere/z /$pn_ere/!z s/// ; p tL1 } "
{ "language": "en", "url": "https://stackoverflow.com/questions/75317859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: railflow cli command ignores the cucumber steps with data table. and only shows one line We just started using TestRail with Railflow, and I am using the Railflow CLI command to create test cases that are written in Cucumber/Gherkin style. Test results are converted into JSON files, and the Railflow CLI reads those JSON files and creates test cases in TestRail. Up to this point, everything works fine. However, I recently realized that test scenarios where I use a data table are not being transferred to my test case in TestRail. Has anyone had a similar issue, or can anyone suggest a solution? Here is the Cucumber step: Then I verify "abc" table column headers to be | columnName | | Subject | | Report Data | | Action | | ER Type | In TestRail, it only includes the header, which is " Then I verify "abc" table column headers to be " Any suggestion is appreciated. A: We are constantly improving Railflow and reports handling, so we are more than happy to add support for the Cucumber data tables. Please contact the support team via our website Update: This is now implemented and available in Railflow NPM CLI v. 2.1.12
{ "language": "en", "url": "https://stackoverflow.com/questions/72833323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does google map geo location require secure ssl connection? I am using the Google Maps geolocation API. I first tested it locally, and it's working fine on both Firefox and Chrome. When I moved it to the live site (which is not https), it's working fine on Firefox but not working on Chrome. Getting this error: Geocoder failed Here is my code: <script type="text/javascript" src="http://maps.googleapis.com/maps/api/js?sensor=false"></script> <script type="text/javascript"> var geocoder; if (navigator.geolocation) { navigator.geolocation.getCurrentPosition(successFunction, errorFunction); } //Get the latitude and the longitude; function successFunction(position) { var lat = position.coords.latitude; var lng = position.coords.longitude; } function errorFunction() { alert("Geocoder failed"); } </script> Does Chrome need a secure SSL connection for geolocation? A: For Chrome version >= 50, the Geolocation API requires a secure origin. But it should work fine on your localhost. Secure origins are origins that match the following (scheme, host, port) patterns: (https, *, *) (wss, *, *) (*, localhost, *) (*, 127/8, *) (*, ::1/128, *) (file, *, —) (chrome-extension, *, —) You can read more about it here. Update: Firefox, iOS Safari, Chrome Android, and Samsung Internet now also require secure contexts.
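The (scheme, host) patterns in the answer can be expressed as a small predicate. This is our paraphrase of Chrome's rules, not a browser API; the 127/8 and ::1/128 CIDR forms are simplified to their common cases and ports are ignored:

```javascript
// Rough classifier for the secure-origin patterns listed above (illustrative only).
function isSecureOrigin(scheme, host) {
  if (scheme === "https" || scheme === "wss") return true;   // (https|wss, *, *)
  if (scheme === "file" || scheme === "chrome-extension") return true;
  if (host === "localhost" || host === "::1") return true;   // any scheme
  if (/^127\.\d+\.\d+\.\d+$/.test(host)) return true;        // 127/8 loopback
  return false;
}
```

So http on localhost keeps working during development, while a plain-http live site is rejected by Chrome's geolocation, which matches the behaviour described in the question.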
{ "language": "en", "url": "https://stackoverflow.com/questions/37959791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Rails/coffeescript/datatables, after changing views, need to refresh to reload I'm sorry for asking this here as I typically find everything I need to ask, but I think my problem is I do not know really WHAT to search for to get the answer. Here goes. I am playing with datatables to provide sorted/ajaxified/searchable tables. Everything loads correctly when I hit the site. It is fine after I add data to the table or sort or search. But, if I navigate from the page, say back to the home page, and then back to the page with the table, the table is still there, but the datatables stuff is gone. I assume this is a matter of me JUST starting to work with JS/Ajax and the like for my Rails app and I probably am just in need of something very simple. The specific code I'm using is: $ -> $("#customers").dataTable sDom: "<'row-fluid'<'span6'l><'span6'f>r>t<'row-fluid'<'span6'i><'span6'p>>" sPaginationType: "bootstrap" If I watch the page load, I think I see the issue. When it loads initially, or any time I send it back after adding a new customer, I see all the JS and CSS files load. But, when I hit back, it's a cached call and it's not loading any of those again, so, I'm assuming I'm not "initializing" the JS again. Any help would be appreciated. A: So, my problem was really not knowing enough to know exactly what to ask. The problem here wasn't dataTables or JS or Ajax or any of that. It was Turbolinks in Rails 4. Because it caches to make things seem fast, whenever I'd leave the page and come back to it, I'd have to refresh to get the underlying javascript to initialize. Turbolinks has an option you can pass into your script to make a document ready on page:restore. That worked. If anyone else runs into this maybe they'll find this helpful.
{ "language": "en", "url": "https://stackoverflow.com/questions/16454934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: struct with member function as parameter I am a beginner in C++ and stack exchange. I am working on an Interface class that gets keyboard input and checks to see whether it is correct through looping through an array of structs which contains strings to compare to and strings to output depending if it is equal to the compare string or not. If the input is correct, it will print the string within the struct and a function within the structure is called and does some action. interface.hpp #include <string> class Input_Interface { struct command_output { std::string command; std::string success_string; std::string failure_string; void output_function(); } bool stop_loop = false; void Clear(); void Quit_loop(); }; interface.cpp #include <iostream> void Input_Interface::Quit_loop() { stop_loop = true; // ends loop and closes program } void Input_Interface::clear() { // does some action } Input_Interface::command_output clear_output{"CLEAR", "CLEARED", "", Input_Interface::Clear()}; Input_Interface::command_output quit_output{"QUIT", "GOODBYE", "", Input_Interface::Quit_loop()}; Input_Interface::command_output output_arr[]{clear_output, quit_output}; void Input_Interface::begin() { while (stop_loop == false) { Input_Interface::get_input(); //stores input into var called input_str shown later this->compare_input(); } } void Input_Interface::compare_input() { for (unsigned int i=0; i<2; i++) { if (this->input_str == output_arr[i].command) { std::cout << output_arr[i].success_string << std::endl; output_arr[i].output_function(); break; } } // ... 
goes through the else branch, printing "invalid" if the input did not match any of the command strings in the command_output struct array. My issue is with these lines: Input_Interface::command_output clear_output{"CLEAR", "CLEARED", "", Input_Interface::Clear()}; //error: call to non-static function without an object argument Input_Interface::command_output quit_output{"QUIT", "GOODBYE", "", Input_Interface::Quit_loop()}; //error: call to non-static function without an object argument I know that this is passed implicitly to member functions of the class, but I don't know how to go about fixing this problem. I'm not really sure if the problem is the scope resolution operator inside the struct object or not, because I can use it outside of parameters just fine. Any help would be greatly appreciated. A: You should do something as shown below: #include <string> struct Input_Interface { struct command_output { std::string command; void (*output_function)(); }; static void Clear(); static void Quit_loop(); }; int main() { Input_Interface::command_output t = {"CLEAR", Input_Interface::Clear}; return 0; } Live example here Although I would suggest using a functor object over a function pointer.
{ "language": "en", "url": "https://stackoverflow.com/questions/27170673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Packages incompatible with tensorflow and keras I am running the mask-rcnn GitHub repository. I have installed the packages as per the repository's requirements, but if I import keras, it gives me an error. My installed packages on the notebook are shown below: ipykernel==5.5.3 ipyparallel==6.3.0 ipython==7.16.1 ipython-genutils==0.2.0 ipywidgets==6.0.0 jedi==0.18.0 Jinja2==2.10 joblib==1.0.1 jupyterlab-pygments==0.1.2 Keras==2.2.5 Keras-Applications==1.0.8 keras-nightly==2.5.0.dev2021032900 Keras-Preprocessing==1.1.2 keyring==10.6.0 keyrings.alt==3.0 opencv-python==4.5.2.54 Pillow==8.2.0 prometheus-client==0.11.0 prompt-toolkit==3.0.18 protobuf==3.17.2 ptyprocess==0.7.0 system-service==0.3 systemd-python==234 tensorboard==2.5.0 tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.0 tensorflow==2.5.0 tensorflow-estimator==2.5.0 termcolor==1.1.0 terminado==0.10.0 Code: import keras keras.__version__ Output: AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2' Mask RCNN Repository link https://github.com/matterport/Mask_RCNN
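No answer was posted in this thread; for context, a common cause of this AttributeError is pairing the standalone Keras 2.2.x line (which targets the TF 1.x API) with TensorFlow 2.x. Below is an illustrative sketch of that version heuristic — the helper name and the 2.3 cutoff are my assumptions, not something from the Mask_RCNN repo:

```python
def keras_matches_tf(keras_version: str, tf_version: str) -> bool:
    """Rough compatibility heuristic: standalone Keras releases before
    2.3 targeted the TF 1.x API, while 2.3+ added TF 2.x support."""
    k_major, k_minor = (int(p) for p in keras_version.split(".")[:2])
    tf_major = int(tf_version.split(".")[0])
    if tf_major >= 2:
        return (k_major, k_minor) >= (2, 3)
    return True

# The pairing from the question's pip list:
print(keras_matches_tf("2.2.5", "2.5.0"))  # False -> mismatched pairing
```

If the heuristic flags a mismatch, the usual remedies are downgrading TensorFlow to a 1.x release that Keras 2.2.x supports, or using one of the community forks of Mask_RCNN ported to TF 2.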
{ "language": "en", "url": "https://stackoverflow.com/questions/67891942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to stream a response from a Twisted server? Issue My problem is that I can't write a server that streams the response that my application sends back. The response is not retrieved chunk by chunk, but as a single block once the iterator has finished iterating. Approach When I write the response with the write method of Request, Twisted understands that what we send is a chunk. I checked if there was a buffer size used by Twisted, but the message size check seems to be done in doWrite. After spending some time debugging, it seems that the reactor only reads and writes at the end. If I understood correctly how a reactor works with Twisted, it writes and reads when the file descriptor is available. What is a file descriptor in Twisted? Why is it not available after writing the response? Example I have written a minimal script of what I would like my server to look like. It's an "ASGI-like" server that runs an application and iterates over a function that returns a very large string: # async_stream_server.py import asyncio from twisted.internet import asyncioreactor twisted_loop = asyncio.new_event_loop() asyncioreactor.install(twisted_loop) import time from sys import stdout from twisted.web import http from twisted.python.log import startLogging from twisted.internet import reactor, endpoints CHUNK_SIZE = 2**16 def async_partial(async_fn, *partial_args): async def wrapped(*args): return await async_fn(*partial_args, *args) return wrapped def iterable_content(): for _ in range(5): time.sleep(1) yield b"a" * CHUNK_SIZE async def application(send): for part in iterable_content(): await send( { "body": part, "more_body": True, } ) await send({"more_body": False}) class Dummy(http.Request): def process(self): asyncio.ensure_future( application(send=async_partial(self.handle_reply)), loop=asyncio.get_event_loop() ) async def handle_reply(self, message): http.Request.write(self, message.get("body", b"")) if not message.get("more_body", False):
http.Request.finish(self) print('HTTP response chunk') class DummyFactory(http.HTTPFactory): def buildProtocol(self, addr): protocol = http.HTTPFactory.buildProtocol(self, addr) protocol.requestFactory = Dummy return protocol startLogging(stdout) endpoints.serverFromString(reactor, "tcp:1234").listen(DummyFactory()) asyncio.set_event_loop(reactor._asyncioEventloop) reactor.run() To execute this example: * *in a terminal, run: python async_stream_server.py * *in another terminal, run: curl http://localhost:1234/ You will have to wait a while before you see the whole message. Details $ python --version Python 3.10.4 $ pip list Package Version Editable project location ----------------- ------- -------------------------------------------------- asgiref 3.5.0 Twisted 22.4.0 A: You just need to sprinkle some more async over it. As written, the iterable_content generator blocks the reactor until it finishes generating content. This is why you see no results until it is done. The reactor does not get control of execution back until it finishes. That's only because you used time.sleep to insert a delay into it. time.sleep blocks. This -- and everything else in the "asynchronous" application -- is really synchronous and keeps control of execution until it is done. If you replace iterable_content with something that's really asynchronous, like an asynchronous generator: async def iterable_content(): for _ in range(5): await asyncio.sleep(1) yield b"a" * CHUNK_SIZE and then iterate over it asynchronously with async for: async def application(send): async for part in iterable_content(): await send( { "body": part, "more_body": True, } ) await send({"more_body": False}) then the reactor has a chance to run in between iterations and the server begins to produce output chunk by chunk.
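The blocking-vs-async distinction in this answer can be sketched without Twisted at all, using only the standard library's asyncio (a minimal illustration, not the server itself):

```python
import asyncio

async def iterable_content():
    # Async generator: each await hands control back to the event loop,
    # giving it a chance to do other work (e.g. flush a chunk to the
    # socket) between yields.
    for _ in range(5):
        await asyncio.sleep(0.01)  # stand-in for slow chunk production
        yield b"a" * 4

async def application():
    chunks = []
    async for part in iterable_content():
        chunks.append(part)  # in the server, this would be send(...)
    return chunks

chunks = asyncio.run(application())
print(len(chunks))  # 5
```

A time.sleep call in the same spot would freeze the loop for the whole duration, which is exactly why the original server emitted everything in one block at the end.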
{ "language": "en", "url": "https://stackoverflow.com/questions/71930649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to transfer score from Game Scene to GameViewController I created a share button to share my score on Facebook, Twitter and more. The button is created in GameViewController, and my score and high score are created in GameScene. I managed to share a score, but I don't know how to transfer my score from GameScene to GameViewController. The score and high score in GameScene: func addPointsLabels() { pointsLabel = MLPointsLabel(num: 0) pointsLabel.fontColor = UIColor.whiteColor() pointsLabel.fontName = "Avenir Next Bold" pointsLabel.fontSize = 40.0 pointsLabel.position = CGPointMake(self.view!.frame.size.width/2, self.view!.frame.size.height/2 + 180) pointsLabel.name = "pointsLabel" addChild(pointsLabel) highscoreLabel = MLPointsLabel(num: 0) highscoreLabel.fontColor = UIColor(red: 0.0/255.0, green: 161.0/255.0, blue: 156.0/255.0, alpha: 1.0) highscoreLabel.name = "highscoreLabel" highscoreLabel.fontSize = 25.0 highscoreLabel.fontName = "Avenir Next Bold" highscoreLabel.position = CGPointMake(self.view!.frame.size.width/2 + 90, self.view!.frame.size.height/2 + 100) addChild(highscoreLabel) } GameViewController: class scene: SKScene { var currentScore: Int = 0 var highScore: Int = 0 func updateScore(withScore score: Int) { currentScore = score highScore = currentScore > score ? currentScore : score } } class GameViewController: UIViewController, MyGameDelegate { var scene: GameScene! var ShareButton = UIButton() var myDelegate : MyGameDelegate! override func viewDidLoad() { super.viewDidLoad() // Configure the view let skView = view as!
SKView //skView.multipleTouchEnabled = false // Create and configure the scene scene = GameScene(size: skView.bounds.size) scene.scaleMode = .AspectFill // NSLog("width: %f", skView.bounds.size.width) // NSLog("height: %f", skView.bounds.size.height) // Present the scene skView.presentScene(scene) } func addShareButton() { ShareButton.hidden = false } override func viewDidLayoutSubviews() { super.viewDidLayoutSubviews() // create Share Button ShareButton = UIButton.init(frame: CGRectMake(self.view!.frame.size.width/2 - 80, self.view!.frame.size.height/2 + 60, 60, 60)) ShareButton.setImage(UIImage(named: "ShareButton.png"), forState: UIControlState.Normal) ShareButton.addTarget(self, action: "pressedShareButton:", forControlEvents: .TouchUpInside) self.view!.addSubview(ShareButton) } func pressedShareButton(sender: UIButton!) { // Now you can get your score and high score like this: let currentScore = scene.pointsLabel let highScore = scene.highscoreLabel UIGraphicsBeginImageContextWithOptions(view!.frame.size, false, 0.0) view!.drawViewHierarchyInRect(view!.frame, afterScreenUpdates: true) let image = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext(); let myText = "WOW! I made \(currentScore) points playing #RushSamurai! Can you beat my score?
https://itunes.apple.com/us/app/rush-samurai/id1020813520?ls=1&mt=8" let activityVC:UIActivityViewController = UIActivityViewController(activityItems: [myText,image], applicationActivities: nil) //New Excluded Activities Code if #available(iOS 9.0, *) { activityVC.excludedActivityTypes = [UIActivityTypeAirDrop, UIActivityTypeAddToReadingList, UIActivityTypeAssignToContact, UIActivityTypeCopyToPasteboard, UIActivityTypeMail, UIActivityTypeMessage, UIActivityTypeOpenInIBooks, UIActivityTypePostToTencentWeibo, UIActivityTypePostToVimeo, UIActivityTypePostToWeibo, UIActivityTypePrint] } else { // Fallback on earlier versions activityVC.excludedActivityTypes = [UIActivityTypeAirDrop, UIActivityTypeAddToReadingList, UIActivityTypeAssignToContact, UIActivityTypeCopyToPasteboard, UIActivityTypeMail, UIActivityTypeMessage, UIActivityTypePostToTencentWeibo, UIActivityTypePostToVimeo, UIActivityTypePostToWeibo, UIActivityTypePrint ] } activityVC.popoverPresentationController?.sourceView = view activityVC.popoverPresentationController?.sourceRect = ShareButton.frame presentViewController(activityVC, animated: true, completion: nil) } MLPointsLable : class MLPointsLabel: SKLabelNode { var number = 0 //var gameoverscore = 0 init(num: Int) { super.init() fontColor = UIColor.whiteColor() fontName = "Avenir Heavy" fontSize = 24.0 number = num text = "\(num)" } required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") } func increment() { number++ text = "\(number)" } func setTo(num: Int) { self.number = num text = "\(self.number)" } A: Since your GameViewController is presenting your GameScene you can just hold a reference to it and get the score and high score from properties in your GameScene. Something like this: class GameViewController: UIViewController { var gameScene: GameScene! override func viewDidLoad() { super.viewDidLoad() // Hold a reference to your GameScene after initializing it. gameScene = SKScene(...) 
} } class GameScene: SKScene { var currentScore: Int = 0 var highScore: Int = 0 func updateScore(withScore score: Int) { currentScore = score highScore = currentScore > highScore ? currentScore : highScore } } Update: You could use these values in your pressedShareButton like this: func pressedShareButton(sender: UIButton!) { let currentScore = scene.currentScore let highScore = scene.highScore ... let myText = "WOW! I made \(currentScore) points playing #RushSamurai! Can you beat my score? https://itunes.apple.com/us/app/rush-samurai/id1020813520?ls=1&mt=8" ... }
{ "language": "en", "url": "https://stackoverflow.com/questions/35536235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Form data autosuggestion I have a button (Add More) which adds a new row to my form/table. I want to add data autosuggestion to the INPUT fields, so I've tried the code below. It's working, but only for the first row; it's not working for the next rows. Can anyone tell me what I am doing wrong? You can check it in my jsFiddle too. <table class="table" id="dataTable" name="table"> <TD><INPUT type="text" id="tags" /></TD> </table> <br> <button type="button" class="btn btn-primary" value="Add Row" onclick="addRow('dataTable')">Add More</button> /*** Form autosuggestion **/ $(function() { var availableTags = [ "ActionScript", "AppleScript", "Asp", "BASIC", "C", "C++", "Clojure", "COBOL", "ColdFusion", ]; $( "#tags" ).autocomplete({ source: availableTags }); }); /*** Adding new row **/ function addRow(tableID) { var table = document.getElementById(tableID); var rowCount = table.rows.length; var row = table.insertRow(rowCount); var colCount = table.rows[0].cells.length; for(var i=0; i<colCount; i++) { var newcell = row.insertCell(i); if(i==1){newcell.innerHTML = (rowCount+1)} else{ newcell.innerHTML = table.rows[0].cells[i].innerHTML; } switch(newcell.childNodes[0].type) { case "text": newcell.childNodes[0].value = ""; break; case "test": newcell.childNodes[0].value=""; break; case "checkbox": newcell.childNodes[0].checked = false; break; case "select-one": newcell.childNodes[0].selectedIndex = 0; break; } } } A: The reason this is not working is that you have given the same id to two text fields, and in HTML you can't have duplicate ids, as that doesn't make sense.
Here you can do two things: * *Give a class to the new element and get the element by class *Or, when you create an element, create an autosuggest object for that field at that time Here is a fiddle for you HTML CODE: <script src="//code.jquery.com/jquery-1.10.2.js"></script> <script src="//code.jquery.com/ui/1.11.4/jquery-ui.js"></script> <table class="table" id="dataTable" name="table"> <TD> <INPUT type="text" id="tags" class="autosuggest" /> </TD> </table> <br> <button type="button" class="btn btn-primary" value="Add Row" onclick="addRow('dataTable')"> Add More </button> JAVASCRIPT CODE: function createAutoSuggest(element) { // Created a common function for you var availableTags = [ "ActionScript", "AppleScript", "Asp", "BASIC", "C", "C++", "Clojure", "COBOL", "ColdFusion", ]; $(element).autocomplete({ source: availableTags }); } $(function() { createAutoSuggest($(".autosuggest")); }); function addRow(tableID) { var table = document.getElementById(tableID); var rowCount = table.rows.length; var row = table.insertRow(rowCount); var colCount = table.rows[0].cells.length; for (var i = 0; i < colCount; i++) { var newcell = row.insertCell(i); if (i == 1) { newcell.innerHTML = (rowCount + 1) } else { newcell.innerHTML = table.rows[0].cells[i].innerHTML; } switch (newcell.childNodes[0].type) { case "text": newcell.childNodes[0].value = ""; break; case "test": newcell.childNodes[0].value = ""; break; case "checkbox": newcell.childNodes[0].checked = false; break; case "select-one": newcell.childNodes[0].selectedIndex = 0; break; } if ($(newcell).find('.autosuggest').length > 0) { createAutoSuggest($(newcell).find('.autosuggest'));// This is the change you basically need } } } A: You can use live to attach events to dynamically created textboxes. Here is the example
{ "language": "en", "url": "https://stackoverflow.com/questions/37671313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Change mouseover colour of all controls globally WPF Buttons, column headers, comboboxes etc. all go a nice Microsoft blue when you mouseover them. I'd much rather they went a nice shade of my corporate green. Is there any way to change this colour, either globally for my application or per window? Failing that, will I have to change this for each individual control type? A: You have to define a Style for each control. This is because the visuals and visual states are defined by the internal ControlTemplate of each control. But you can significantly reduce the amount of work by reusing templates and cascading styles. To allow easy color theming and centralized customization, you should create a resource that defines all relevant colors of your theme: ColorResources.xaml <ResourceDictionary> <!-- Colors --> <Color x:Key="HighlightColor">DarkOrange</Color> <Color x:Key="DefaultControlColor">LightSeaGreen</Color> <Color x:Key="ControlDisabledTextColor">Gray</Color> <Color x:Key="BorderColor">DarkGray</Color> <Color x:Key="MouseOverColor">LightSteelBlue</Color> <!-- Brushes --> <SolidColorBrush x:Key="HighlightBrush" Color="{StaticResource HighlightColor}" /> <SolidColorBrush x:Key="ControlDisabledTextBrush" Color="{StaticResource ControlDisabledTextColor}" /> <SolidColorBrush x:Key="BorderBrush" Color="{StaticResource BorderColor}" /> <SolidColorBrush x:Key="MouseOverBrush" Color="{StaticResource MouseOverColor}" /> </ResourceDictionary> You may add the following styles and templates to the App.xaml file: App.xaml <Application> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/File/Path/To/ColorResources.xaml" /> </ResourceDictionary.MergedDictionaries> ... 
</ResourceDictionary> </Application.Resources> </Application> To override the color of the selected row in a GridView or DataGrid you simply need to override the default brush SystemColors.HighlightBrush used by these controls: <SolidColorBrush x:Key="{x:Static SystemColors.HighlightBrushKey}" Color="{StaticResource HighlightColor}" /> To override the default color of controls like the column headers of e.g. DataGrid you simply need to override the default brush SystemColors.ControlBrush used by these controls: <SolidColorBrush x:Key="{x:Static SystemColors.ControlBrushKey}" Color="{StaticResource DefaultControlColor}" /> For simple ContentControls like Button or ListBoxItem you can share a common ControlTemplate. This shared ControlTemplate will harmonize the visual states: <ControlTemplate x:Key="BaseContentControlTemplate" TargetType="ContentControl"> <Border BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}"> <ContentPresenter/> </Border> <ControlTemplate.Triggers> <Trigger Property="IsEnabled" Value="False"> <Setter Property="Foreground" Value="{StaticResource ControlDisabledTextBrush}" /> </Trigger> <Trigger Property="IsMouseOver" Value="True"> <Setter Property="Opacity" Value="0.7" /> <Setter Property="Background" Value="{StaticResource MouseOverBrush}" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> This base template will be applied using base styles. This allows simple chaining (style inheritance) and reuse of the BaseContentControlTemplate for different controls like Button or ListBoxItem: <Style x:Key="BaseContentControlStyle" TargetType="ContentControl"> <Setter Property="Template" Value="{StaticResource BaseContentControlTemplate}" /> </Style> Some ContentControls like Button might need additional states like Pressed. 
You can extend the basic visual states by creating a base style that e.g. targets ButtonBase and can be used with any control that derives from ButtonBase: <Style x:Key="BaseButtonStyle" TargetType="ButtonBase" BasedOn="{StaticResource BaseContentControlStyle}"> <Style.Triggers> <Trigger Property="IsPressed" Value="True"> <Setter Property="Background" Value="{StaticResource HighlightBrush}" /> </Trigger> </Style.Triggers> </Style> To apply these base styles you have to target the controls explicitly. You use these styles to add more specific visual states or layout, e.g. ListBoxItem.IsSelected, ToggleButton.IsChecked or DataGridColumnHeader: <!-- Buttons --> <Style TargetType="Button" BasedOn="{StaticResource BaseButtonStyle}" /> <Style TargetType="ToggleButton" BasedOn="{StaticResource BaseButtonStyle}"> <Style.Triggers> <Trigger Property="IsChecked" Value="True"> <Setter Property="Background" Value="{StaticResource HighlightBrush}" /> </Trigger> </Style.Triggers> </Style> <!-- ListBox --> <Style TargetType="ListBoxItem" BasedOn="{StaticResource BaseContentControlStyle}"> <Style.Triggers> <Trigger Property="IsSelected" Value="True"> <Setter Property="Background" Value="{StaticResource HighlightBrush}" /> </Trigger> </Style.Triggers> </Style> <!-- ListView --> <Style TargetType="ListViewItem" BasedOn="{StaticResource {x:Type ListBoxItem}}" /> <!-- GridView Since GridViewColumnHeader is also a ButtonBase we can extend existing style --> <Style TargetType="GridViewColumnHeader" BasedOn="{StaticResource BaseButtonStyle}" /> <Style x:Key="BaseGridViewStyle" TargetType="ListViewItem"> <Style.Triggers> <Trigger Property="IsSelected" Value="True"> <Setter Property="Background" Value="{StaticResource HighlightBrush}" /> </Trigger> <Trigger Property="IsMouseOver" Value="True"> <Setter Property="Background" Value="{StaticResource {x:Static SystemColors.HighlightBrushKey}}" /> </Trigger> </Style.Triggers> </Style> <!-- DataGrid Since DataGridColumnHeader is also a 
ButtonBase we can extend existing style --> <Style TargetType="DataGridColumnHeader" BasedOn="{StaticResource BaseButtonStyle}"> <Setter Property="BorderThickness" Value="1,0" /> <Setter Property="BorderBrush" Value="{StaticResource BorderBrush}" /> </Style> Other more complex composed controls like ComboBox, TreeView or MenuItem require overriding the control's template individually. Since these controls are composed of other controls, you usually have to override the styles for these controls too. E.g., ComboBox is composed of a TextBox, a ToggleButton and a Popup. You can find their styles at Microsoft Docs: Control Styles and Templates. This is a very simple and basic way to add theming to your application. Knowledge of the inheritance tree of the controls helps to create reusable base styles. Reusing styles helps to reduce the effort to target and customize each control. Having all visual resources like colors or icons defined in one place makes it easy to modify them without having to know/modify each control individually. A: If you have a look at the default styles (https://learn.microsoft.com/en-us/dotnet/framework/wpf/controls/button-styles-and-templates), you will find out that each style uses static resources for declaring the colors. So to achieve this, you would have to overwrite the static resources where the colors are stored. Unfortunately, this is not possible in WPF (was already asked before: Override a static resource in WPF). So your only solution would be to write a separate Style for each control and to re-declare the colors.
{ "language": "en", "url": "https://stackoverflow.com/questions/62054640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why is there an NumberFormatException? I keep on running into a problem with NumberFormatException. There is a problem with the readLine since I am reading the wrong line at the wrong time. I am writing a program which takes in the amount of money a person has and then calculates how much they will have after they gift some money to other people. The entire problem can be found at the link below. http://train.usaco.org/usacoprob2?a=RX1a1QYsOyX&S=gift1 BufferedReader f = new BufferedReader(new FileReader("gift1.in")); PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("gift1.out"))); int ppl = Integer.parseInt(f.readLine()); out.println(ppl); HashMap<String, Integer> name = new HashMap<String, Integer>(); String[] names = new String[ppl]; for(int i = 0; i < ppl; i++ ){ String n = f.readLine(); names[i] = n; name.put(n, 0); } for (int i = 0; i < ppl; i++){ String person = f.readLine(); StringTokenizer st = new StringTokenizer(f.readLine()); int mon_lost = Integer.parseInt(st.nextToken()); int ppl_given = Integer.parseInt(st.nextToken()); name.put(person, (mon_lost%ppl_given)-mon_lost); for(int j =0; j< ppl_given-1; i++){ name.put(f.readLine(), mon_lost/ppl_given); } } for(int i = 0; i < ppl; i++){ out.println(names[i] +" "+ name.get(names[i])); } f.close(); out.close(); A: It isn't the BufferedReader. You can read millions of lines per second with BufferedReader.readLine(). It's your code. For example, the f.read() calls are not correct. They will deliver character values, not digit values, and from the next line, without consuming the line terminator, so you will get a blank line on the next readLine(), and you will be totally out of sync with your input. Basically your code doesn't even work yet, so timing it now is futile. A: Your first for-loop is probably testing with the wrong value. 
Instead of : for(int i = 0; i < ppl-1; ++i ) { try using: for(int i = 0; i < ppl; ++i ) { [UPDATE] Since you are looping too few times in this loop, the subsequent loop will be using the remaining input data inappropriately. For example, ppl_given would be some garbage number, which could be very big.
{ "language": "en", "url": "https://stackoverflow.com/questions/47425196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: project xxxx is associated to an unknown SonarQube server(). Please fix project association or add server in SonarQube plugin preferences When I open any Java class in the Eclipse editor, it shows the popup message below. A: Right-click your project and choose SonarQube. Then click on Remove the SonarQube server nature EDIT Another option is to go to Windows -> Preferences -> SonarQube -> Server and to remove or fix your server there. A: Another solution: I had two Sonar plugins (SonarQube and SonarLint). The first posted solution didn't work for me, but the following did: in the Eclipse installation directory, open the file: <yourEclipseInstallDirectory>\configuration\customization.ini and delete what is after the = sign. org.sonar.ide.eclipse.core/servers/default/url=<deleteWhatIsHere>
{ "language": "en", "url": "https://stackoverflow.com/questions/33075153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Matplotlib scatter plot with two colors for only two series in dataframe I have a dataframe with two series and no categories. I want to plot the scatter plot for x and y with different colors. I tried plt.scatter(obs_flow, sim_flow, c=('red','blue'), alpha=0.3) but it gave an error: ValueError: 'c' argument has 2 elements, which is inconsistent with 'x' and 'y' with size 3954. How do I define the colors in this case when no category is available? Here's what my dataframe looks like: (image) EDIT: The plot without any color argument passed looks like this: (image) Can we have different colors for observed and simulated points?
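This thread has no posted answer; one common way around the ValueError (an illustrative sketch of my own, with made-up sample data standing in for the question's series) is to call scatter once per series so each call takes a single color, and let a legend distinguish them:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the question's obs_flow / sim_flow series:
obs_flow = [1.0, 2.5, 3.2, 4.8]
sim_flow = [1.1, 2.4, 3.5, 4.6]
x = range(len(obs_flow))

fig, ax = plt.subplots()
# One scatter call per series, each with one color -- no length mismatch:
obs_pts = ax.scatter(x, obs_flow, c="red", alpha=0.3, label="observed")
sim_pts = ax.scatter(x, sim_flow, c="blue", alpha=0.3, label="simulated")
ax.legend()
print(len(obs_pts.get_offsets()), len(sim_pts.get_offsets()))  # 4 4
```

Alternatively, a single scatter call works if c is a per-point sequence whose length matches the data, which is what the error message is asking for.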
{ "language": "en", "url": "https://stackoverflow.com/questions/62859156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Angular 8+, make component factories in lazy module available in global service I have written a ModalService to wrap both MatDialog and MatSnackBar, to avoid injecting both in each of my components, and this service offers methods that wrap the material ones. So you can write modalService.dialog(MyDialogComponent, { data: {} }); instead of matDialog.open(MyDialogComponent, { data: {} }); ModalService is declared in ModalModule, and provided in its forRoot method, so in my AppModule I import it: ModalModule.ts: @NgModule({ imports: [MatDialogModule, ...], ... }) public class ModalModule { static forRoot() { return { ngModule: ModalModule, providers: [{ provide: ModalService, useClass: ModalService }] }; } } AppModule.ts: @NgModule({ imports: [ModalModule.forRoot(), ...], ... }) public class AppModule {} I have other modules, some of which are lazily loaded, which import the ModalModule and declare some entry components to open in a dialog: LazyModule.ts @NgModule({ imports: [MatDialogModule, ModalModule, ...], // <--- ModalModule without forRoot() declarations: [MyDialogComponent, MyLazyComponent], entryComponents: [MyDialogComponent], }) public class LazyModule {} Now, if in MyLazyComponent I try to use ModalService to open MyDialogComponent, I get an error saying that no factory has been found for MyDialogComponent, whereas when using dialog.open, it works as expected. MyLazyComponent: modalService.dialog(MyDialogComponent); // <--- error matDialog.open(MyDialogComponent); // <--- fine While debugging with dev tools I noticed that somewhere inside the code of MatDialog, matDialog has a reference to the injector of the LazyModule, which has a ComponentFactoryResolver which correctly contains the factory for MyDialogComponent, whereas ModalService has a reference to the AppModule's injector, which doesn't contain those factories. I understand the reason why: I've provided the service in the AppModule and not in the LazyModule. 
What I'm not getting is how to fix this, or why MatDialog has a reference to the injector of the LazyModule. A: As stated in the comments to my question, MatDialog is provided in MatDialogModule's decorator, therefore for each lazy module a new instance of MatDialog is created, with visibility on that module's components. After all, a dialog service doesn't need to be a singleton and this approach is fine, so I've ended up providing my modal service in the module's metadata too, instead of just in the forRoot method.
{ "language": "en", "url": "https://stackoverflow.com/questions/58878405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: One line of text refuses to change color I was doing a project recreating the google homepage in HTML/CSS and for the most part it came out fine, except that for some reason I can't get the text inside the "Sign In" link to change color. <div id="contents"> <div id="navbar"> <ul> <li id="signin"><a href="https://accounts...">Sign In</a></li> I call it out in CSS with these lines. #navbar a { text-decoration: none; width: 200px; color: #4c4c4c; font-size: 14px; } #signin { background-color: blue; font-weight: 800; color: #ffffff !important; padding: 7px 12px; } I added !important thinking it would change, but it didn't do a thing. Note that this is just some of the code; there are other items on the nav bar, which all have the correct color. I got the background-color to change but not the actual text color. A: As the other commenters were alluding to, you just need a separate CSS rule (#signin a) for the nested anchor tag. #navbar a { text-decoration: none; width: 200px; color: #4c4c4c; font-size: 14px; } #signin { background-color: blue; font-weight: 800; color: #ffffff; padding: 7px 12px; } #signin a { color: #ffffff; } <div id="contents"> <div id="navbar"> <ul> <li id="signin"><a href="https://accounts...">Sign In</a></li> </ul> </div> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/26265947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to use Google Cloud Functions / Tasks / PubSub for Batch Processing? We are currently using RabbitMQ with Celery on some VMs for this: * *We have a batch of tasks we want to process in parallel (e.g. process some files concurrently or run some machine learning inference for images) *When the batch is done we callback our App, get the results and start some other batch of tasks which might depend on the results of the previously executed tasks So we have the requirements: * *Our App needs to know when a batch is done *Our App needs the gathered task results across the batch *It might kill the App if we do a callback to the App in every single task that succeeds Now we try to use Google Cloud for this and we would like to move away from VMs to something like Google Cloud Tasks or Pub/Sub in combination with Google Cloud Functions. Is there any best practice setup for our problem in Google Cloud? A: I think you need an architect to re-design your solution to lift it into the cloud. It is a good time to check whether you want to move to managed products or would prefer to run the same setup in the cloud. Talking about the products: * *RabbitMQ should be replaced with Pub/Sub, which fits pretty well (there are also options if you would like to keep using RabbitMQ). Pub/Sub should be the best choice if you want to move most of your solution to the Google Cloud, and in the long term it could bring more benefits in the Google Cloud ecosystem. *Dataflow is a good batch processor. Here is an example of PubSub - Dataflow: Quickstart: stream processing with Dataflow. There are Google-provided batch templates or you can create one: traditional or Flex. Don't rush into picking a solution. It is well worth checking all your business and technical requirements and exploring the benefits of each product (managed or not) of the Google Cloud. The more detailed your requirements are, the better you can design your solution. 
A: Google Cloud offers, today, only one workflow manager, named Cloud Composer (based on the Apache Airflow project) (I don't take into account the AI Platform workflow manager (AI Pipeline)). This managed solution allows you to perform the same things as you do today with Celery: * *An event occurs *A Cloud Function is called to process the event *The Cloud Function triggers a DAG (Directed Acyclic Graph - a workflow in Airflow) *A step in the DAG runs a lot of sub-processes (Cloud Function/Cloud Run/anything else), waits for them to end, and continues to the next step... 2 warnings: * *Composer is expensive (about $400 per month, for the minimal config) *DAGs are acyclic; no loops are allowed Note: A new workflow product should come to GCP. No ETA for now, and at the beginning parallelism won't be managed. IMO, this solution is the right one for you, but not in the short term, maybe in 12 months. About the message queue, you can use PubSub, which is very efficient and affordable. Alternative You can build your own system following this process: * *An event occurs *A Cloud Function is called to process the event *The Cloud Function creates as many PubSub messages as batches are required. *For each message generated, you write an entry into Firestore with the initial event and the messageId *The generated messages are consumed (by Cloud Function, Cloud Run or anything else) and at the end of the process, the Firestore entry is updated saying that the sub-process has been completed *You plug a Cloud Function on the Firestore onWrite event. The function checks if all the sub-processes for an initial event are completed. If so, go to the next step... We have implemented a similar workflow in my company, but it's not easy to maintain and to debug when a problem occurs. Otherwise, it works great.
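The fan-out/fan-in bookkeeping in the "Alternative" above can be illustrated independently of the cloud services. Below is a minimal in-memory sketch (all names such as `Tracker`, `fan_out` and `mark_done` are hypothetical); in the real setup the dictionary would live in Firestore, and `_on_write` would be the Cloud Function plugged on the Firestore write event:

```python
import uuid

class Tracker:
    """In-memory stand-in for the Firestore collection tracking sub-processes."""

    def __init__(self):
        self.entries = {}           # message_id -> {"event": ..., "done": bool}
        self.completed_events = []  # events whose whole batch finished

    def fan_out(self, event, num_batches):
        """Create one tracked message per batch (the 'as many messages as batches' step)."""
        ids = []
        for _ in range(num_batches):
            message_id = str(uuid.uuid4())
            self.entries[message_id] = {"event": event, "done": False}
            ids.append(message_id)
        return ids

    def mark_done(self, message_id):
        """A consumer finished its batch: update the entry, then run the on-write check."""
        self.entries[message_id]["done"] = True
        self._on_write(self.entries[message_id]["event"])

    def _on_write(self, event):
        """Fires on every entry update; moves on only when nothing is pending."""
        pending = [e for e in self.entries.values()
                   if e["event"] == event and not e["done"]]
        if not pending:
            self.completed_events.append(event)  # "go to the next step"

tracker = Tracker()
ids = tracker.fan_out("event-1", 3)
for message_id in ids:
    tracker.mark_done(message_id)
print(tracker.completed_events)  # -> ['event-1']
```

The important property is that the "next step" fires exactly once, on the write that completes the last pending sub-process.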
{ "language": "en", "url": "https://stackoverflow.com/questions/62532376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: GeoFire query completion - Swift In my app, I use GeoFire to query users around user location. It is a Tinder-like app, with cards, etc... I use KolodaView for cards. Query function : func queryUsers(searchRadius: Int, center: CLLocation) { print("Query started") let geofireRef = Database.database().reference().child("usersLocations") let geoFire = GeoFire(firebaseRef: geofireRef) let circleQuery = geoFire?.query(at: center, withRadius: Double(searchRadius)) _ = circleQuery?.observe(.keyEntered, with: { (result, location) in print("User \(result!) found") print("Adding user \(result!)") addUser(userID: result!, completionHandler: { (success) in print("User added") }) }) } Add user function : func addUser(userID: String, completionHandler:@escaping (Bool) -> ()) { let foundUsers = UserDefaults.standard.array(forKey: "foundUsers") let databaseRef = Database.database().reference() databaseRef.observeSingleEvent(of: .value, with: { (snapshot) in if (foundUsers?.count)! < 20 { // USERS let users = (snapshot.value as? NSDictionary)?["users"] as! NSDictionary // USER let user = users[userID] as! 
NSDictionary getFirstUserPicture(userID: userID, completionHandler: { (data) in if let data = data { DispatchQueue.main.async { user.setValue(data, forKey: "picture") // APPEND FOUND USERS ARRAY var foundUsers = UserDefaults.standard.array(forKey: "foundUsers") foundUsers?.append(user) // STORE FOUND USERS ARRAY UserDefaults.standard.set(foundUsers, forKey: "foundUsers") UserDefaults.standard.synchronize() // UPDATE CARD VIEW if foundUsers?.count == 1 { NotificationCenter.default.post(name: .loadCardView, object: nil) } else { NotificationCenter.default.post(name: .loadCardsInArray, object: nil) } // COMPLETION completionHandler(true) } } }) } }) { (error) in print(error.localizedDescription) } } When I launch the app, the queryUsers function is called, the query starts Output User 1XXXXXXXXXXXXXXXX found Adding user 1XXXXXXXXXXXXXXXX User 2XXXXXXXXXXXXXXXX found Adding user 2XXXXXXXXXXXXXXXX User added User added * *User is found *Adding user (call addUser function) The problem is that it didn't wait the addUser completion to call addUser for the second user found. The result is that in my KolodaView, there is the second user found two times because I think the call of addUser for the second user found uses first user found parameters. Is it possible to wait for the first addUser completion and start again the query ? Or just "pause" the query after the first user was found, and start it again after the completion of the first addUser call ? Thanks for your help Update 1 I tried the @Rlee128 solution but nothing changed, I have the same output :( : // MARK: UPDATE USER LOCATION - FUNCTION func updateUserLocation() { let databaseRef = Database.database().reference().child("usersLocations") let geoFire = GeoFire(firebaseRef: databaseRef) let userID = UserDefaults.standard.string(forKey: "userID")! 
let locationManager = CLLocationManager() locationManager.delegate = self locationManager.startUpdatingLocation() geoFire?.setLocation(locationManager.location, forKey: userID, withCompletionBlock: { (error) in if error != nil { print(error as Any) } else { // Location updated } }) } // MARK: LOCATION MANAGER DELEGATE - FUNCTION func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) { manager.stopUpdatingLocation() manager.delegate = nil } A: In the location manager's didUpdateLocations delegate method, set manager.delegate = nil after you call manager.stopUpdatingLocation(). Let me know if you want me to show you how it looks in my code. func findLocation() { locationManager = CLLocationManager() locationManager.delegate = self locationManager.desiredAccuracy = kCLLocationAccuracyBest locationManager.requestAlwaysAuthorization() if CLLocationManager.locationServicesEnabled() { locationManager.startUpdatingLocation() } } func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) { let userLocation = locations[0] as CLLocation manager.stopUpdatingLocation() manager.delegate = nil let loc = CLLocation(latitude: userLocation.coordinate.latitude, longitude: userLocation.coordinate.longitude) getLocation(location: loc) }
{ "language": "en", "url": "https://stackoverflow.com/questions/44085598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Form POST or sessions? If you have an item where you allow users to add comments, how can you pass which item the user is replying to? I've thought of using a hidden field in a form, however this can be easily changed using plugins such as Firebug: <form method="post" action="blah"> <input type="hidden" name="item_id" value="<?php echo $item_id; ?>"> <!-- other form data here --> <input type="submit" name="submit"> </form> Or just simply using a session: $_SESSION['item_id'] = $item_id Is there a safe way to send the item data in a form? Edit: This is after validation,... I do implement some XSS protection (form tokens etc). The reason I was asking was just to know what the best practice is. I thought of doing something like $_SESSION['item_id'] = $id //this is set when they visit the current item then in the form have a hidden field: <input type="hidden" name="item_id" value="<?php echo $id?>"> Finally, check the session matches the id clicked: if ($_SESSION['item_id'] !== $item_id) //the value posted in the form { die('There\'s got to be a morning after If we can hold on through the night We have a chance to find the sunshine Let\'s keep on looking for the light'); } However, after reading some of your comments I guess this is a bad idea? To be fair (@Surreal Dreams): it isn't that big a deal if they do change the id; as I've said, I was just looking for the best practice. Cheers. A: Using a session the way you suggested would screw up cases where (1) a visitor opens several different articles in multiple tabs, and (2) tries to write a reply on any tab other than the one that was opened last. The user might even write two replies simultaneously in different tabs; I sometimes do that on StackOverflow. Web developers so easily forget that today's visitors may have several browser tabs open at the same time. Really, we don't use IE6 anymore. 
A solution would be to make $_SESSION['item_id'] an array of recently viewed article IDs, but then you won't be able to stop some Firebug user (or any other tech-savvy person) from replying to a previously viewed article. Adding time limits won't change anything, either. But why would somebody intentionally change the ID of the post to which they're replying, except to troll or spam the site? And if somebody really wanted to screw your site, they can easily get around any protection by making their bot request the appropriate page just before posting a spam comment. You'd be much better off investing in a better CSRF token generator, spam filter, rate limiter, etc. A: Honestly, you're probably okay using the hidden form element. If you're really concerned about someone changing it, you could always base64() encode it to make it harder to change. However, you could always set a session variable on the page and then, when the form is submitted, call that value back. Form <? session_start(); //Make a random ID for this form instance $form_id = rand(1, 500); //Set session variable for this form $_SESSION[$form_id]['item_id'] = $item_id; ?> <form method="post" action="process.php?n=<?=$form_id?>"> <!-- form data here --> <input type="submit" name="submit"> </form> Process <? session_start(); //Process only if the number submitted matches the SESSION variable if(array_key_exists($_GET['n'], $_SESSION)) { //Process tasks echo $_SESSION[$_GET['n']]['item_id']; //Unset session variable when done processing unset($_SESSION[$_GET['n']]); } ?> 
If there is a genuine concern about people being able to post comments on another object than the one they were initially visiting, consider storing the IDs in $_SESSION using an array, and having them post the ID back using a hidden form. If the value posted back is not in the array, then it is obviously a post that the user was not looking at. To increase tamper-proofness, consider hashing the ID (with a generous salt, of course), and storing those hashes in the array. Do note that I feel you may be trying to solve the wrong problem here; why not validate whether the person has access to make a comment on the post he is commenting on? If you have access, then you should be allowed to comment - if you want to tamper with the ID and make your comment end up under the wrong post.. well, that's basically your problem. I mean, the user could also go to the other post, and make the wrong comment there manually... so what is the issue? A: You could have a form like this <? $saltedhash = md5("MYSEED" . $item_id); ?> <form method="post" action="blah"> <!-- form data here --> <input type="hidden" name="item_id" value="<? echo $item_id ?>"> <input type="hidden" name="item_hash" value="<? echo $saltedhash ?>"> <input type="submit" name="submit"> </form> This way, you can always check that the passed item_id matches its corresponding hash and see if the user changed it. However, as pointed out by others, this won't prevent the user from posting on different items if they can get the hash from somewhere... An access control mechanism would be preferable. A: Using $_SESSION to store the post ID is the ideal solution since it does avoid the ability to modify that value. That being said, what are the benefits to someone doing that? As well, many comment systems have an approval process that requires an admin to approve the comment before posting anyway. But yes, I would recommend sticking with the session value.
{ "language": "en", "url": "https://stackoverflow.com/questions/4629628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Will I Be Able to Manipulate Excel Data in an AWS Python Lambda Function? I will be writing a function in AWS Lambda using Python 2.7 that does the following: * *Grabs a stored Excel file from Amazon S3. *Opens the Excel file. *Manipulates rows of data in multiple Excel sheets that will eventually be inserted into an RDS instance. *Repeats the process once a day at the same time. Locally, I have written and tested the script with no issues. My only concern is whether or not the script will open the Excel file if it's not stored locally. Can anyone shed light? Here is some of my code (there's much more); Excel functions are used with the xlrd package: import boto from boto.s3.key import Key import datetime import requests import xlrd """ Get the Key object of the given key, in the bucket """ k2 = Key(bucketName,"srcFileName.xlsx") """Get the contents of the key into a file """ k2.get_contents_to_filename("destFileName.xlsx") """Open the file.""" fl = xlrd.open_workbook("destFileName.xlsx") """Access the sheet specified""" sh = fl.sheet_by_name(sheet_names[0]) When the code finishes running the k2.get_contents_to_filename() function, it writes the Excel file to the current working directory on my computer. When I open the file to manipulate using the functions in variables 'fl' and 'sh', is it calling the file stored locally, or has the Python session stored it in memory to be accessed? If it can only be accessed from local storage, how do I handle opening the file for manipulation in Lambda? Thanks.
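As a side note on the snippet: get_contents_to_filename writes a real file to local disk, and open_workbook then reads from whatever path it is given, so the two must agree. A minimal stand-alone sketch of that flow with an explicit destination path (the `fake_get` function is a hypothetical stand-in for the S3 key; in Lambda the writable scratch location is /tmp):

```python
import os
import tempfile

def download_and_open(get_contents_to_filename, dest_dir):
    """Download to an explicit local path, then open from that same path."""
    dest_path = os.path.join(dest_dir, "destFileName.xlsx")
    get_contents_to_filename(dest_path)   # e.g. k2.get_contents_to_filename(dest_path)
    with open(dest_path, "rb") as fh:     # xlrd.open_workbook(dest_path) in the real script
        return fh.read()

def fake_get(path):
    """Hypothetical stand-in for the S3 key: just writes some bytes to disk."""
    with open(path, "wb") as fh:
        fh.write(b"fake xlsx bytes")

data = download_and_open(fake_get, dest_dir=tempfile.mkdtemp())
print(data)  # -> b'fake xlsx bytes'
```

The same pattern works inside Lambda as long as dest_dir points somewhere writable.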
{ "language": "en", "url": "https://stackoverflow.com/questions/44032391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Pyignite: connection timeout error when trying to load JSON into cache using put_all() The code works when tried with fewer values, but gives a connection timeout error when tried with even 30k entries. Sample code with sample JSON: a = {} for i in range(10000): a.update({"test"+str(i):((MapObject.HASH_MAP, {"key_1": ((1,["value_1",1.0]),MapObject), "key_2": ((1, [["value_2_1","1.0"],["value_2_2","0.25"]]),CollectionObject), "key_3": ((1, [["value_3_1","1.0"],["value_3_2","0.25"]]),CollectionObject), "key_4": ((1, [["value_4_1","1.0"],["value_4_2","0.25"]]),CollectionObject), 'key_5': False, "key_6":"value_6"}),MapObject)}) test_cache.put_all(a) A: Tried this: a = {} for i in range(10000): a.update({"test" + str(i): ((MapObject.HASH_MAP, {"key_1": ((1, ["value_1", 1.0]), CollectionObject), "key_2": ((1, [["value_2_1", "1.0"], ["value_2_2", "0.25"]]), CollectionObject), "key_3": ((1, [["value_3_1", "1.0"], ["value_3_2", "0.25"]]), CollectionObject), "key_4": ((1, [["value_4_1", "1.0"], ["value_4_2", "0.25"]]), CollectionObject), 'key_5': False, "key_6": "value_6"}), MapObject)}) start = time.time() cache.put_all(a) print(f'duration {time.time() - start}') On the master branch it works as expected, took about 7 sec. to complete against 4 Ignite nodes on an average laptop. We will release 0.4.0 soon, stay tuned!
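One generic mitigation worth trying (this is a sketch of the batching idea only, not pyignite-specific advice; the `FakeCache` class is a hypothetical stand-in for the real cache object) is to split the large dictionary into smaller batches so that no single put_all call has to ship 30k entries at once:

```python
def chunked(mapping, batch_size):
    """Yield successive dicts with at most batch_size entries each."""
    batch = {}
    for key, value in mapping.items():
        batch[key] = value
        if len(batch) == batch_size:
            yield batch
            batch = {}
    if batch:
        yield batch

class FakeCache:
    """Hypothetical stand-in for the pyignite cache object."""
    def __init__(self):
        self.data = {}
    def put_all(self, mapping):
        self.data.update(mapping)

cache = FakeCache()
a = {"test" + str(i): i for i in range(30000)}
for batch in chunked(a, 1000):
    cache.put_all(batch)  # 30 short requests instead of one huge one

print(len(cache.data))  # -> 30000
```

Smaller batches keep each request short-lived, which is usually gentler on connection timeouts at the cost of more round-trips.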
{ "language": "en", "url": "https://stackoverflow.com/questions/65303573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's a quick way to convert an IEnumerable to a List in C# 2.0? We all know the slow way: foreach.. A: How about: IEnumerable<T> sequence = GetSequenceFromSomewhere(); List<T> list = new List<T>(sequence); Note that this is optimised for the situation where the sequence happens to be an IList<T> - it then uses IList<T>.CopyTo. It'll still be O(n) in most situations, but potentially a much faster O(n) than iterating :) (It also avoids any resizing as it creates it.) A: You want to use the .AddRange method on a generic List. List<string> strings = new List<string>(); strings.AddRange(...); .. or the constructor .. new List<string>(...); A: The List constructor is a good bet. IEnumerable<T> enumerable = ...; List<T> list = new List<T>(enumerable);
{ "language": "en", "url": "https://stackoverflow.com/questions/1758464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: could not resolve state "home.app.detail" from state "home.apps.list", UI-Router I have a page showing the list of applications that I want to be able to go to the page of the details of the app, when I click on each one of them. Here is my config: module bandar { 'use strict'; export class RouterConfig { /** @ngInject */ constructor($stateProvider: ng.ui.IStateProvider, $urlRouterProvider: ng.ui.IUrlRouterProvider, $locationProvider: ng.ILocationProvider) { $stateProvider .state('home', { url: '', abstract: true, templateUrl: 'app/components/main/main.html', controller: 'MainController', controllerAs: 'mainCtrl' }) .state('home.apps', { url: '/apps', abstract: true, templateUrl: 'app/components/apps/apps.html', controller: 'AppsController', controllerAs: 'appsCtrl', }) .state('home.apps.list', { url: '', templateUrl: 'app/components/apps/list.html', }) .state('home.app.detail', { url: '/app/:package_name', templateUrl: 'app/components/apps/app.html', controller: 'AppController', controllerAs: 'appCtrl', }); $urlRouterProvider.otherwise('/apps'); /*$locationProvider.html5Mode(true).hashPrefix('');*/ } } } And here is the part of the list template which is anchoring to the app's details page: <a ui-sref="home.app.detail({package_name: app.package_name})">{{app.title}}</a> But when I hit it in my browser, the following error occurs in the console: Error: Could not resolve 'home.app.detail' from state 'home.apps.list' at Object.transitionTo (angular-ui-router.js:3140) at Object.go (angular-ui-router.js:3068) at angular-ui-router.js:4181 at angular.js:17682 at completeOutstandingRequest (angular.js:5387) at angular.js:5659 I guess the problem is UI-Router thinks that I'm pointing at the state relatively, but I wanna do it in the absolute way. A: The problem is parent name 'home.app' instead of 'home.apps' // wrong .state('home.app.detail', { ... // should be .state('home.apps.detail', { ... because parent is .state('home.apps', { ... 
EXTEND: in case this should not be a child of 'home.apps', we have two options 1) do not inherit at all .state('detail', { ... 2) introduce the parent(s) used in the dot-state-name notation // exists already .state('home', { ... // this parent must be declared to be used later .state('home.app', { // now we can use parent 'home.app' because it exists .state('home.app.detail', {
{ "language": "en", "url": "https://stackoverflow.com/questions/31312233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why are the setters in JavaBeans public (JSP EL)? Why is it required that the setters in JavaBeans are public, even though editing an object's properties from EL expressions is unusual, since changing the state of a property is the task of the Controller (in case you are using the MVC pattern)? Does someone know this? Thanks in advance! A: If you use MVC it is recommended to encapsulate the setter (make it private). This is because MVC prescribes that your view does NOT change the model; the controller should do this. You can use ${model.property = 100}, which requires a public setter, although in MVC it is recommended to make the setter private.
{ "language": "en", "url": "https://stackoverflow.com/questions/26930720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: jQuery Jcrop plugin loads image multiple times I ran into an issue with the jQuery Jcrop plugin. I'm using it to allow the user to crop an image, and when he/she is not satisfied with that, to reset to the original image. I use PHP to store both the original image and the cropped image. When the user hits the reset button the original image loads from the server and Jcrop is reapplied. My issue is that when the user clicks the "reset image" button, both the cropped image and the original image are displayed. A new tag with no id or class is created by Jcrop, and it is not removed when I destroy Jcrop. Below is my code for the reset button: $("#reset_crop").click(function(){ refreshImg($("#profpic"), "files/"+fileName); /* the refreshImg function appends Date() to the end of the image URL to bypass the cache and changes the src attr to the given file name. */ $(this).attr("disabled","disabled"); jcrop.destroy(); $("#profpic").on('load',function(){ jcrop = createJcrop(); }); }); My Jcrop settings are: function createJcrop(){ return $.Jcrop("#profpic",{ boxWidth:345, trueSize:[width, height], width:345, onSelect: showCoords, onChange: showCoords }); } The function of the crop button is: $("#crop").click(function(){ jcrop.release(); var newfile; jcrop.destroy(); $.ajax({ url:'crop.php', async:false, data:{fileName:fileName,width:coordinates[0],height:coordinates[1],x:coordinates[2],y:coordinates[3]}, success:function(data){ //console.log(data); newfile = data; }, error:function(){ alert("something went wrong"); } }); refreshImg($("#profpic"), "files/"+newfile); $("#reset_crop").removeAttr('disabled'); $(this).attr("disabled","disabled"); });
{ "language": "en", "url": "https://stackoverflow.com/questions/20350859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: HOC for JSX element I'm trying to make a HOC function that would wrap a list item, do some conditional checking, and return this JSX element if the condition passes, or wrapped by another component if it fails. Here is the part of the code inside the render method: const workspaceListItem = ( <React.Fragment> <ListItem button onClick={() => this.handleOpening(workspace)}> <Avatar> <WorkIcon /> </Avatar> <ListItemText inset primary={workspace.name} secondary={`Created: ${workspace.createdTime.split("T")[0]}`} /> {expandButton} </ListItem> <Collapse in={isOpen} timeout="auto" unmountOnExit> {groupList} </Collapse> </React.Fragment> ); const WithToolTipWorkspace = withToolTip(workspaceListItem); return WithToolTipWorkspace; I assign this JSX element to the workspaceListItem variable, then I call my HOC withToolTip() and pass workspaceListItem as an argument. Here is the withToolTip definition: import React from "react"; import Tooltip from "@material-ui/core/Tooltip"; function withToolTip(WrappedComponent) { return function WrappedWithToolTip(props) { return props.parent.children === undefined || props.parent.children.length === 0 ? ( <Tooltip title="Children of this element does not exist"> {WrappedComponent} </Tooltip> ) : ( { WrappedComponent } ); }; } export default withToolTip; When I compile it I get a React error Functions are not valid as a React child. This may happen if you return a Component instead of <Component /> from render. Or maybe you meant to call this function rather than return it. Can anyone explain to me what I'm doing wrong? I'm a beginner when it comes to React and still learning. Thank you for any tips, I would really appreciate it. 
Edit: I did what estus suggested like so: return <WithToolTipWorkspace parent={workspace} />; and inside the HOC: import React from "react"; import Tooltip from "@material-ui/core/Tooltip"; function withToolTip(WrappedComponent) { return function WrappedWithToolTip(props) { console.log(props); return props.parent.children === undefined || props.parent.children.length === 0 ? ( <Tooltip title="Children of this element does not exist"> <WrappedComponent {...props} /> </Tooltip> ) : ( <WrappedComponent {...props} /> ); }; } export default withToolTip; Now the error changed to this one: Warning: React.createElement: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: <Fragment />. Did you accidentally export a JSX literal instead of a component? Most probably because I'm passing a JSX element to the HOC and not a component, but how can I do it otherwise? A: The error explains the problem: This may happen if you return a Component instead of <Component /> from render An element should be created from WrappedComponent. The HOC likely needs to pass props to it as well: return props.parent.children === undefined || props.parent.children.length === 0 ? ( <Tooltip title="Children of this element does not exist"> <WrappedComponent {...props} /> </Tooltip> ) : ( <WrappedComponent {...props} /> );
{ "language": "en", "url": "https://stackoverflow.com/questions/56223814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Purpose of spark worker directories in /work/app-xxxxxxx/{0, 1, 2, ...} and periodic cleanup I'm running a Spark 3.4 long running structured streaming job. Whenever the job starts, an application directory of the form app-xxxxxxxxxx is created for the job in the work directory. However within that directory, additional directories are created, the first being named 0, the second named 1 and so on. My first question is, why are these directories being created? Over the course of the structured streaming job, the micro batch may get triggered 20 times but only 4 of these sub directories under the app-xxxxxxxxxx directory are created, the point being that the creation of these sub directories doesn't correspond to execution of a micro batch. So, I'm not sure why they're being created. My second related question is, how can I configure Spark to delete these folders after a certain amount of time? Each contains the application .jar and stderr and stdout files, so over time they take up a significant amount of space. My understanding is that setting spark.worker.cleanup.enabled=true only enables cleanup for stopped applications. However in my case, I have a long running application which I would like to enable cleanup for. A: You are talking about the work directory and the configuration spark.worker, so my assumption is you are running the streaming job in Spark's standalone mode (not using a cluster manager such as YARN because things are quite different there). According to the documentation on Spark Standalone Mode the work directory is described as: Directory to run applications in, which will include both logs and scratch space (default: SPARK_HOME/work). Here scratch space means that it is "including map output files and RDDs that get stored on disk. This should be on a fast, local disk in your system. It can also be a comma-separated list of multiple directories on different disks." 
In the work folder you will find, for each application, the .jar libraries such that the executors have access to the libraries. In addition, it contains some temporary data based on the processing logic and actual data (not on the amount of processing triggers). The sub-folders 0, 1 are incremental for different jobs/stages or runs of the same application. (To be frank, I am not fully knowledgeable about those sub-folders.) The cleaning of this folder can be adjusted by the following three configurations for the SPARK_WORKER_OPTS as described here: spark.worker.cleanup.enabled - Default: false: Enable periodic cleanup of worker / application directories. Note that this only affects standalone mode, as YARN works differently. Only the directories of stopped applications are cleaned up. This should be enabled if spark.shuffle.service.db.enabled is "true" spark.worker.cleanup.interval - Default: 1800 (30 minutes): Controls the interval, in seconds, at which the worker cleans up old application work dirs on the local machine. spark.worker.cleanup.appDataTtl - Default: 604800 (7 days, 7 * 24 * 3600): The number of seconds to retain application work directories on each worker. This is a Time To Live and should depend on the amount of available disk space you have. Application logs and jars are downloaded to each application work dir. Over time, the work dirs can quickly fill up disk space, especially if you run jobs very frequently.
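As a concrete illustration of wiring those three settings together, they are typically passed to a standalone worker via SPARK_WORKER_OPTS in conf/spark-env.sh (a sketch; the interval and TTL values shown are just the defaults, tune them to your disk budget):

```shell
# conf/spark-env.sh on each standalone worker.
# Clean finished applications' work dirs: check every 30 min,
# delete anything older than 7 days.
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=604800"
```

Keep in mind the caveat above: only directories of stopped applications are cleaned, so a long-running streaming application's own directory keeps growing until the job is restarted.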
{ "language": "en", "url": "https://stackoverflow.com/questions/64666764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: LINQ complex query navigation property I have a problem with getting data from my database with LINQ. I have two tables, Team and TeamMember, which are related by a 1-N relationship. I am using Entity Framework and I have one entity for each of the tables with a property for each column. Also, in the Team entity there is a TeamMember navigation property as a result of this relationship. I want to do a query where I can get all my Teams with their TeamMembers. result = (from t in this.context.Teams orderby t.Name select t) .Include("TeamMembers") That works fine. I get a collection of Team entities with the Team.TeamMember property populated with the data of the members of each team. The problem is when I want to do a more complex query, like filtering the query for the TeamMembers. For example, both tables have a column EndDateTime. If I want to get all the teams and team members which are not ended (their EndDateTime is null) I don't know how to do it. With this query I will filter just the teams, but not the team members. result = (from t in this.context.Teams where t.EndDateTime == null orderby t.Name select t) .Include("TeamMembers") .ToList(); Any idea? I kind of "solved" it by doing the filter of the members after the query, on the collection. Like this: //Filter out the end-dated care coordinators var careCoordinatorsToDelete = new List<CareCoordinator>(); foreach (var team in result) { careCoordinatorsToDelete.Clear(); foreach (var careCoordinator in team.CareCoordinators) { if (careCoordinator.EndDateTime != null) careCoordinatorsToDelete.Add(careCoordinator); } foreach (var toDelete in careCoordinatorsToDelete) { team.CareCoordinators.Remove(toDelete); } } But I don't think this is a good solution at all. A: As I've pointed out, I think this is a duplicate. 
But summarising the answers, you simply need to include the Where clause on the child as part of the Select statement (by using it as part of an anonymous type), enumerate the query, and then retrieve the objects you want. Because you've selected the TeamMembers that you want into another property they will be retrieved from the database and constructed in your object graph. result = (from t in this.context.Teams where t.EndDateTime == null orderby t.Name select new { Team = t, Members = t.TeamMembers.Where(tm => tm.EndDateTime == null) }) .ToList() .Select(anon => anon.Team) .ToList(); A: this should work: var result = this.context.Teams.Where(t=>t.EndDateTime==null).Select(t=> new { Name = t.Name, PropertyX = t.PropertyX... //pull any other needed team properties. CareCoordinators = t.CareCoordinators.Where(c=>c.EndDateTime==null) }).ToList(); this returns a list of anonymous objects.
{ "language": "en", "url": "https://stackoverflow.com/questions/7676364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Getting DualShock 4 positions I'm trying to make a simple app that can be controlled by my gamepad (DualShock 4), and I'm stuck at getting the current positions of the axes and buttons of my gamepad. Below is the code I already tried: class MainActivity : AppCompatActivity(), OnGenericMotionListener{ override fun onGenericMotion(p0: View?, p1: MotionEvent?): Boolean { if (p1!!.getSource() and InputDevice.SOURCE_CLASS_JOYSTICK !== 0 && p1!!.getAction() === MotionEvent.ACTION_MOVE) { val motionRanges = p1!!.getDevice().getMotionRanges() for (mr in motionRanges) { val axis = mr.getAxis() if (p1.getAxisValue(axis) > 0.5 || p1.getAxisValue(axis) < -0.5) { Log.d("JOY_DEBUG", "Axis found: " + MotionEvent.axisToString(axis)) } } } else { Log.i("JOY_DEBUG", "Not a joystick event.") } return true } override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) } } The method "onGenericMotion" seems not to be working, because it is not called, even when I did this: class MainActivity : AppCompatActivity(), OnGenericMotionListener{ override fun onGenericMotion(p0: View?, p1: MotionEvent?): Boolean { Log.i("JOY_DEBUG", "method was called") if (p1!!.getSource() and InputDevice.SOURCE_CLASS_JOYSTICK !== 0 && p1!!.getAction() === MotionEvent.ACTION_MOVE) { val motionRanges = p1!!.getDevice().getMotionRanges() for (mr in motionRanges) { val axis = mr.getAxis() if (p1.getAxisValue(axis) > 0.5 || p1.getAxisValue(axis) < -0.5) { Log.d("JOY_DEBUG", "Axis found: " + MotionEvent.axisToString(axis)) } } } else { Log.i("JOY_DEBUG", "Not a joystick event.") } return true } override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) } } Then in Logcat I do not see anything. Please help
{ "language": "en", "url": "https://stackoverflow.com/questions/60344302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to convert string to a function name? How would I go about converting x = 'abs' into abs so that I could do z = abs(-5) = 5? Or, more generally, where x = 'randfunc' and 'randfunc' can be any input string naming a function. >> x x = abs >> x(-5) Subscript indices must either be real positive integers or logicals. A: Use str2func: x = 'abs'; fh = str2func(x); fh(-5) % Prints 5
{ "language": "en", "url": "https://stackoverflow.com/questions/30557919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Creating Message Object Parser I'm receiving results from a web service like this: result.body returns: [2] pry(#<User::EmailSettingsController>)> result.body => {"RESULT"=> {"MESSAGES"=> [{"MESSAGE"=> {"TYPE"=>"E", "ID"=>"HRRCF_WD_UI", "NUMBER"=>"025", "MESSAGE"=>"U kunt maximaal \"5\" jobagents creëren 1", "LOG_NO"=>"", "LOG_MSG_NO"=>"000000", "MESSAGE_V1"=>"5", "MESSAGE_V2"=>"1", "MESSAGE_V3"=>"", "MESSAGE_V4"=>"", "PARAMETER"=>"", "ROW"=>"0", "FIELD"=>"", "SYSTEM"=>""}}, {"MESSAGE"=> {"TYPE"=>"E", "ID"=>"HRRCF_WD_UI", "NUMBER"=>"025", "MESSAGE"=>"U kunt maximaal \"5\" jobagents creëren 2", "LOG_NO"=>"", "LOG_MSG_NO"=>"000000", "MESSAGE_V1"=>"5", "MESSAGE_V2"=>"2", "MESSAGE_V3"=>"", "MESSAGE_V4"=>"", "PARAMETER"=>"", "ROW"=>"0", "FIELD"=>"", "SYSTEM"=>""}}, {"MESSAGE"=> {"TYPE"=>"E", "ID"=>"HRRCF_WD_UI", "NUMBER"=>"025", "MESSAGE"=>"U kunt maximaal \"5\" jobagents creëren 3", "LOG_NO"=>"", "LOG_MSG_NO"=>"000000", "MESSAGE_V1"=>"5", "MESSAGE_V2"=>"3", "MESSAGE_V3"=>"", "MESSAGE_V4"=>"", "PARAMETER"=>"", "ROW"=>"0", "FIELD"=>"", "SYSTEM"=>""}}]}} Is it possible to create something like ParseMessageObject(result.body) that returns a list, so that I can do something like this: message_list = ParseMessageObject(result.body) message_list.each do |message| puts message.message puts message.type end I have no idea if this is possible or how to do this; any suggestions to get me started are welcome! 
EDIT 1: Created my class in lib: class MessageParser def self.parse(result) end end A: This should basically do what you want, using a simple Struct to create a message class which has accessors for each of the keys in your message hash: class MessageParser Message = Struct.new(:type, :id, :number, :message, :log_no, :log_msg_no, :message_v1, :message_v2, :message_v3, :message_v4, :parameter, :row, :field, :system) attr_reader :messages def initialize(data) @data = data.fetch("MESSAGES",[]) @messages = [] parse_data end private def parse_data @data.each do | msg | message = Message.new msg.fetch("MESSAGE",{}).each do |key, value| message[key.downcase.to_sym] = value end @messages << message end end end parser = MessageParser.new(result.body["RESULT"]) parser.messages.each do |message| puts message.message puts message.type end A: Something like this should work: class ParsedMessages include Enumerable attr_reader :messages def initialize(data) @messages = extract_messages_from_data(data) end def extract_messages_from_data(data) # TODO: Parse data and return message objects end def each(&block) @messages.each(&block) end end Now you can use all methods from Enumerable on ParsedMessages, like each, find, map, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/22406199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I set an NSDate object to midnight? I have an NSDate object and I want to set it to an arbitrary time (say, midnight) so that I can use the timeIntervalSince1970 function to retrieve data consistently without worrying about the time when the object is created. I've tried using an NSCalendar and modifying its components by using some Objective-C methods, like this: let date: NSDate = NSDate() let cal: NSCalendar = NSCalendar(calendarIdentifier: NSGregorianCalendar)! let components: NSDateComponents = cal.components(NSCalendarUnit./* a unit of time */CalendarUnit, fromDate: date) let newDate: NSDate = cal.dateFromComponents(components) The problem with the above method is that you can only set one unit of time (/* a unit of time */), so you could only have one of the following be accurate: * *Day *Month *Year *Hours *Minutes *Seconds Is there a way to set hours, minutes, and seconds at the same time and retain the date (day/month/year)? A: Your statement The problem with the above method is that you can only set one unit of time ... is not correct. NSCalendarUnit conforms to the RawOptionSetType protocol which inherits from BitwiseOperationsType. This means that the options can be bitwise combined with & and |. In Swift 2 (Xcode 7) this was changed again to be an OptionSetType which offers a set-like interface, see for example Error combining NSCalendarUnit with OR (pipe) in Swift 2.0. Therefore the following compiles and works in iOS 7 and iOS 8: let date = NSDate() let cal = NSCalendar(calendarIdentifier: NSCalendarIdentifierGregorian)! 
// Swift 1.2: let components = cal.components(.CalendarUnitDay | .CalendarUnitMonth | .CalendarUnitYear, fromDate: date) // Swift 2: let components = cal.components([.Day , .Month, .Year ], fromDate: date) let newDate = cal.dateFromComponents(components) (Note that I have omitted the type annotations for the variables, the Swift compiler infers the type automatically from the expression on the right hand side of the assignments.) Determining the start of the given day (midnight) can also done with the rangeOfUnit() method (iOS 7 and iOS 8): let date = NSDate() let cal = NSCalendar(calendarIdentifier: NSCalendarIdentifierGregorian)! var newDate : NSDate? // Swift 1.2: cal.rangeOfUnit(.CalendarUnitDay, startDate: &newDate, interval: nil, forDate: date) // Swift 2: cal.rangeOfUnit(.Day, startDate: &newDate, interval: nil, forDate: date) If your deployment target is iOS 8 then it is even simpler: let date = NSDate() let cal = NSCalendar(calendarIdentifier: NSCalendarIdentifierGregorian)! let newDate = cal.startOfDayForDate(date) Update for Swift 3 (Xcode 8): let date = Date() let cal = Calendar(identifier: .gregorian) let newDate = cal.startOfDay(for: date) A: Here's an example of how you would do it, without using the dateBySettingHour function (to make sure your code is still compatible with iOS 7 devices): NSDate* now = [NSDate date]; NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]; NSDateComponents *dateComponents = [gregorian components:(NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit) fromDate:now]; NSDate* midnightLastNight = [gregorian dateFromComponents:dateComponents]; Yuck. There is a reason why I prefer coding in C#... Anyone fancy some readable code..? DateTime midnightLastNight = DateTime.Today; ;-) A: Swift iOS 8 and up: People tend to forget that the Calendar and DateFormatter objects have a TimeZone. 
If you do not set the desired timzone and the default timezone value is not ok for you, then the resulting hours and minutes could be off. Note: In a real app you could optimize this code some more. Note: When not caring about timezones, the results could be OK on one device, and bad on an other device just because of different timezone settings. Note: Be sure to add an existing timezone identifier! This code does not handle a missing or misspelled timezone name. func dateTodayZeroHour() -> Date { var cal = Calendar.current cal.timeZone = TimeZone(identifier: "Europe/Paris")! return cal.startOfDay(for: Date()) } You could even extend the language. If the default timezone is fine for you, do not set it. extension Date { var midnight: Date { var cal = Calendar.current cal.timeZone = TimeZone(identifier: "Europe/Paris")! return cal.startOfDay(for: self) } var midday: Date { var cal = Calendar.current cal.timeZone = TimeZone(identifier: "Europe/Paris")! return cal.date(byAdding: .hour, value: 12, to: self.midnight)! } } And use it like this: let formatter = DateFormatter() formatter.timeZone = TimeZone(identifier: "Europe/Paris") formatter.dateFormat = "yyyy/MM/dd HH:mm:ss" let midnight = Date().midnight let midnightString = formatter.string(from: midnight) let midday = Date().midday let middayString = formatter.string(from: midday) let wheneverMidnight = formatter.date(from: "2018/12/05 08:08:08")!.midnight let wheneverMidnightString = formatter.string(from: wheneverMidnight) print("dates: \(midnightString) \(middayString) \(wheneverMidnightString)") The string conversions and the DateFormatter are needed in our case for some formatting and to move the timezone since the date object in itself does not keep or care about a timezone value. Watch out! The resulting value could differ because of a timezone offset somewhere in your calculating chain! A: Swift 5+ let date = Calendar.current.date(bySettingHour: 0, minute: 0, second: 0, of: Date()) A: Yes. 
You don't need to fiddle with the components of the NSCalendar at all; you can simply call the dateBySettingHour method and use the ofDate parameter with your existing date. let date: NSDate = NSDate() let cal: NSCalendar = NSCalendar(calendarIdentifier: NSGregorianCalendar)! let newDate: NSDate = cal.dateBySettingHour(0, minute: 0, second: 0, ofDate: date, options: NSCalendarOptions())! For Swift 3: let date: Date = Date() let cal: Calendar = Calendar(identifier: .gregorian) let newDate: Date = cal.date(bySettingHour: 0, minute: 0, second: 0, of: date)! Then, to get your time since 1970, you can just do let time: NSTimeInterval = newDate.timeIntervalSince1970 dateBySettingHour was introduced in OS X Mavericks (10.9) and gained iOS support with iOS 8. Declaration in NSCalendar.h: /* This API returns a new NSDate object representing the date calculated by setting hour, minute, and second to a given time. If no such time exists, the next available time is returned (which could, for example, be in a different day than the nominal target date). The intent is to return a date on the same day as the original date argument. This may result in a date which is earlier than the given date, of course. */ - (NSDate *)dateBySettingHour:(NSInteger)h minute:(NSInteger)m second:(NSInteger)s ofDate:(NSDate *)date options:(NSCalendarOptions)opts NS_AVAILABLE(10_9, 8_0); A: Just in case someone is looking for this: Using SwiftDate you could just do this: Date().atTime(hour: 0, minute: 0, second: 0) A: In my opinion, the solution, which is easiest to verify, but perhaps not the quickest, is to use strings. 
func set( hours: Int, minutes: Int, seconds: Int, ofDate date: Date ) -> Date { let dateFormatter = DateFormatter() dateFormatter.dateFormat = "yyyy-MM-dd" let newDateString = "\(dateFormatter.string(from: date)) \(hours):\(minutes):\(seconds)" dateFormatter.dateFormat = "yyyy-MM-dd HH:mm:ss" return dateFormatter.date(from: newDateString)! } A: func resetHourMinuteSecond(date: NSDate, hour: Int, minute: Int, second: Int) -> NSDate{ let nsdate = NSCalendar.currentCalendar().dateBySettingHour(hour, minute: minute, second: second, ofDate: date, options: NSCalendarOptions(rawValue: 0)) return nsdate! } A: Use the current calendar to get the start of the day for the current time. let today = Calendar.current.startOfDay(for: Date())
{ "language": "en", "url": "https://stackoverflow.com/questions/26189656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: How to integrate Stripe with Apple iOS App store purchases (subscriptions)? What workflow have others used to cleanly integrate subscriptions across Stripe and Apple Purchases (subscriptions), and eventually Android? I have a web app that uses Stripe Subscriptions (monthly and annual plans to pay for the app) and am launching an iOS app which requires in-app purchases (Guideline 3.1.1 by Apple). How do you keep the two in sync, knowing that all the webhook and subscription data/database connections are tied to Stripe (i.e. Stripe is the source of truth)? Option 1) Create a Stripe Customer & Subscription with a 100% off coupon so the subscription stays live, and then use Apple webhooks to cancel or change the status of the Stripe data. (So the app/database is unaware of Apple/Android.) Option 2) Create parallel functionality from Apple, with Apple webhooks to the app/database around subscriptions, so it's independent from Stripe. Option 3) ??? Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/73198734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I make the origin of aliases in Cypress more apparent from the it/spec files? My team is using aliases to set some important variables which are used within the it('Test',) blocks. For example, we may be running the following command in a before step: cy.setupSomeDynamicData() Then the setupSomeDynamicData() method exists in another file (ex: commands.js), and the setupSomeDynamicData() method may set up a couple of aliases: cypress/support/commands.js setupSomeDynamicData() { cy.createDynamicString(first).as('String1') cy.createDynamicString(second).as('String2') cy.createDynamicString(third).as('String3') } Now we go back to our spec/test file, and start using these aliases: cypress/e2e/smallTest.cy.js it('A small example test', function () { cy.visit(this.String1) // do some stuff... cy.get(this.String2) // do some stuff... cy.visit(this.String3) // do some stuff... }) The problem is that unless you're the person who wrote the code, it's not obvious where this.String1, this.String2, or this.String3 are coming from, nor when they were initialized (from the perspective of smallTest.cy.js), since the code that initializes the aliases is being executed in another file. In the example, it's quite easy to Ctrl+F the codebase and search for these aliases, but you have to really start doing some reverse engineering once you have more complex use cases. I guess this feels like some sort of readability/maintainability problem, because once you set up enough of these, and the example I provided starts to get more complex, finding out where these aliases are created can be inconvenient. The this.* syntax makes it feel like you'd find these alias variables somewhere within the same file in which they're being used, but when you don't see any sign of them it becomes evident that they've just magically been initialized (somewhere/somehow), and then the hunt begins. 
Some solutions that come to mind (which may be bad ideas) are: * *Create JS objects with getters/setters. This way, it'll be a bit easier to trace where the variable you're using was "set" *Not use aliases, and instead, use global variables that can be imported into the spec/test files so it's clear where they are coming from then run a before/after hook to clear these variables so that the reset-per-test functionality remains. *Name the variables in a way where it's obvious that they are aliased and then spread the word/document this method within my team so that anytime they see this.aliasedString2 then they know it's coming from some method that performs these alias assignments. I'm sure there may be a better way to handle this so just thought I'd post this question.
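A minimal sketch of the first idea above — a plain object with explicit getters/setters so every read and write has one grep-able origin. All names here are hypothetical, not a Cypress API:

```javascript
// testData.js -- hypothetical module: one obvious home for shared test state
const store = new Map();

const testData = {
  set(key, value) {
    store.set(key, value); // single choke point: easy to log or breakpoint
    return value;
  },
  get(key) {
    if (!store.has(key)) {
      throw new Error(`testData "${key}" was never initialized`);
    }
    return store.get(key);
  },
  reset() {
    store.clear(); // call from beforeEach to mimic per-test alias lifetime
  },
};

module.exports = testData;
```

A spec would then read testData.get('String1') instead of this.String1, so a plain text search for testData leads straight back to this module, and an uninitialized key fails loudly instead of silently being undefined.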
{ "language": "en", "url": "https://stackoverflow.com/questions/74935349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you import a Gradle project in Eclipse? I have installed https://marketplace.eclipse.org/content/gradle-integration-eclipse-0, which didn't change anything in Eclipse's import dialog (no specialized "Gradle" import option shows up to select after the installation). I have run gradle eclipse in the shell, in case that's required, then tried to import the project as a general existing project. That left a lot of compile errors that shouldn't be there, after that hacky import. This is more annoying than sbt :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/35971580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: jQuery setTimeout() callback not working I am having trouble figuring out how the $("identifier").remove works with setTimeout(). The code below tries to animate a snowflake dropping down //construct a html string var html_str = "<img class='snowflakes' src = 'snowflake1.png' style='position: absolute; left: " + String(pos_x) + "px'> " //Append the element to field var flake = $(html_str).appendTo('#field'); flake.animate({top: String(FIELD_SIZE-FLAKE_SIZE)+'px'}, drop_speed, //callback function when finished animating function(){ setTimeout(function(){flake.remove();},1000); } ); I don't get how setTimeout(function(){flake.remove();},1000); //this works setTimeout(flake.remove,1000); //but this doesn't remove the element It seemed to me both should perform the same function. What is going on here? A: The second one didn't work because it was executed in the global context. Here is an article regarding the this context in the function passed to setTimeout, as per MDN (Check 'The "this" problem)' Your code, if written this way, would work: setTimeout(flake.remove.bind(flake),1000);
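The receiver loss the answer describes is easy to reproduce outside jQuery and setTimeout; a minimal sketch with a stand-in object (hypothetical, not jQuery's actual remove):

```javascript
// Stand-in for the jQuery object: remove() only works if `this` is the object.
const flake = {
  removed: false,
  remove() {
    this.removed = true;
  },
};

// setTimeout(flake.remove, 1000) passes a bare function reference: when the
// timer fires, it is called with no receiver, so `this` is not `flake`.
const bare = flake.remove;

// setTimeout(function () { flake.remove(); }, 1000) passes a wrapper that
// re-establishes the receiver at call time -- this is why the first form works.
const wrapped = function () { flake.remove(); };
wrapped();
console.log(flake.removed); // true

// The answer's fix, without a wrapper: pre-bind the receiver.
const flake2 = { removed: false, remove: flake.remove };
const bound = flake2.remove.bind(flake2);
bound();
console.log(flake2.removed); // true
```

Either the wrapper or bind works; the only broken variant is handing the engine the unbound function and hoping it remembers where it came from.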
{ "language": "en", "url": "https://stackoverflow.com/questions/42883797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Copying Python scripts from a local to a remote machine I have two computers, both Windows 64-bit machines. Call the local computer machine A and the remote computer machine B. I have a script.py file on machine A. Without leaving machine A, I want to: * *Copy script.py onto machine B; *Run script.py on machine B; *Get the output on machine A. I'm having problems with step 1. Steps 2 and 3 are solved already. I've configured both computers so that I can successfully run Invoke-Command from PowerShell 6. The PSSessions established between machine A and machine B are functional. I can successfully run a script.py that is on machine B from machine A: Invoke-Command -HostName $hostname -UserName pshell -ScriptBlock {c:\Users\pshell\Anaconda3\python.exe script.py}; and get the output back to machine A. However, I can't find the commands to copy script.py from machine A to machine B. I think it's a relatively easy task, but I can't find the relevant commands. Any indications/suggestions not including third-party software/packages are welcome. A: From Is there a SCP alternative for PowerShell?: the little tool pscp.exe, which comes with PuTTY, can solve your task 1. For an example with PowerShell see this answer. A: Copy-Item -Path "ENTER_PATH_HERE" -Destination "ENTER_PATH_HERE" -ToSession $session -Recurse
{ "language": "en", "url": "https://stackoverflow.com/questions/57389849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Scaling FastApi and Redis Pub/Sub for chat trying to build a scalable chat app with FastApi and Redis pub/sub. Suppose we have 10 processes running FastApi app. Each process will create 1 connection pool to Redis at startup. Redis instance allows max 10 connections. Each user has its own redis channel where all notifications (chat messages, app notifications etc) are coming. When the user connects to a websocket 2 tasks are launched, 1 that listens to websocket, and 1 that listens to redis user channel. Below is a simplified thing we have now. resources.py redis = None async def startup_event(): global redis redis = aioredis.from_url(url=REDIS_URL, password=REDIS_PASSWORD, encoding='utf-8', decode_responses=True) async def get_redis() -> Redis: return redis views.py import orjson as json channel = 'user:channel' async def listen_socket( websocket: WebSocket, redis: Redis, ): while True: try: data = await websocket.receive_bytes() except: await redis.publish(channel, json.dumps({'type': 'disconnect_user'})) return None async def listen_redis( websocket: WebSocket, redis: Redis, ): ps = redis.pubsub() await ps.psubscribe(channel) async for data in ps.listen(): if data['type'] == 'pmessage': data = json.loads(data['data']) event_type = data.get('type') if event_type == 'disconnect_user': return None elif event_type == 'echo': await websocket.send_bytes(json.dumps(data)) @router.websocket('/', name='ws', ) async def process_ws( websocket: WebSocket, redis: Redis = Depends(get_redis), ): await websocket.accept() await asyncio.gather( listen_redis( websocket=websocket, redis=redis, ), listen_socket( websocket=websocket, redis=redis, ), ) * *This line async for data in ps.listen(): blocks the connection and this particular connection cannot serve clients on different threads of the same process, not even the current client. Is this true? If yes then this approach is absolutely not scalable, because we cannot afford 1 Redis connection per user. 
*What would solve the above issue? Two Redis connections per process? One connection pool and one connection dedicated to consuming the Redis pub/sub channel? In this case publishing would be done to a process channel, not to a specific user channel. We would need a thread that consumes the pub/sub channel and routes messages to the user websockets connected to that process. Is this correct? Am I overthinking? Are there better approaches? Thank you so much for the help! A: I recommend that you don't try to implement the functionality you describe manually, just by using FastAPI and Redis. It is a path of pain and suffering that is unjustified and highly ineffective. Just use Centrifugo and you'll be happy. A: I recommend using queues to scale your real-time application, e.g. RabbitMQ, or even RPUSH and LPOP with Redis lists if you want to stay with Redis. This approach is much easier to implement than pub/sub and scales well. Handling and sharing events bidirectionally with pub/sub and WebSockets is a pain in most languages.
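The "one dedicated pub/sub consumer per process" variant in the question can be sketched without any Redis dependency: a single router task consumes one process-wide channel and forwards each message to the in-process queue of the target user. Below is a minimal asyncio simulation — the channel layout and names are assumptions, and a real deployment would swap the process Queue for an actual redis.pubsub() subscription on a per-process channel (e.g. "proc:1") and the user queues for websocket sends:

```python
import asyncio
import json

async def channel_router(process_channel, local_users):
    """One task per process: consume the process-wide channel, route per user."""
    while True:
        raw = await process_channel.get()
        if raw is None:                     # shutdown sentinel
            return
        msg = json.loads(raw)
        queue = local_users.get(msg["user_id"])
        if queue is not None:               # this user is connected to this process
            await queue.put(msg)            # real code: websocket send instead

async def demo():
    # Stand-ins: process_channel plays the role of the single pub/sub
    # subscription; local_users maps a connected user to the queue their
    # websocket task drains.
    process_channel = asyncio.Queue()
    local_users = {"alice": asyncio.Queue()}
    router = asyncio.create_task(channel_router(process_channel, local_users))

    # Publishers (other processes, in reality) address a user, not a connection.
    await process_channel.put(json.dumps({"user_id": "alice", "body": "hi"}))
    await process_channel.put(json.dumps({"user_id": "bob", "body": "dropped"}))

    delivered = await asyncio.wait_for(local_users["alice"].get(), timeout=1)
    await process_channel.put(None)          # stop the router
    await router
    return delivered

delivered = asyncio.run(demo())
print(delivered)  # {'user_id': 'alice', 'body': 'hi'}
```

With this layout each process needs at most two Redis connections (pool plus subscriber) regardless of how many users are attached; publishing to a user then means publishing to the channel of whichever process they are connected to, or broadcasting to all process channels.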
{ "language": "en", "url": "https://stackoverflow.com/questions/73540966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SWIG_NewPointerObj and values always being nil I'm using SWIG to wrap C++ objects for use in Lua, and I'm trying to pass data to a method in my Lua script, but it always comes out as 'nil'. void CTestAI::UnitCreated(IUnit* unit){ lua_getglobal(L, "ai"); lua_getfield(L, -1, "UnitCreated"); swig_module_info *module = SWIG_GetModule( L ); swig_type_info *type = SWIG_TypeQueryModule( module, module, "IUnit *" ); SWIG_NewPointerObj(L,unit,type,0); lua_epcall(L, 1, 0); } Here is the Lua code: function AI:UnitCreated(unit) if(unit == nil) then game:SendToConsole("I CAN HAS nil ?") else game:SendToConsole("I CAN HAS UNITS!!!?") end end unit is always nil. I have checked, and in the C++ code the unit pointer is never invalid/null. I've also tried: void CTestAI::UnitCreated(IUnit* unit){ lua_getglobal(L, "ai"); lua_getfield(L, -1, "UnitCreated"); SWIG_NewPointerObj(L,unit,SWIGTYPE_p_IUnit,0); lua_epcall(L, 1, 0); } with identical results. Why is this failing? How do I fix it? A: When you use the colon in function AI:UnitCreated(unit), it creates a hidden self parameter that receives the AI instance. It actually behaves like this: function AI.UnitCreated(self, unit) So when calling that function from C, you need to pass both parameters: the ai instance and the unit parameter. Since you passed only one parameter, self was set to it and unit was set to nil.
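The hidden self in the answer is the same mechanism as an explicit receiver parameter in other languages; here is a small Python analogue of the Lua colon definition (the classes are hypothetical, purely to illustrate the argument shift):

```python
import inspect

class Unit:
    pass

class AI:
    # Mirrors `function AI:UnitCreated(unit)`: the receiver is a real first
    # parameter, even though sugar can hide it at the definition site.
    def unit_created(self, unit):
        return "nil unit" if unit is None else "got a unit"

# The function actually takes two parameters, receiver first:
params = list(inspect.signature(AI.unit_created).parameters)
print(params)  # ['self', 'unit']

# The answer's fix, translated: when calling through the plain function,
# pass the instance explicitly so the payload does not land in `self`.
ai, unit = AI(), Unit()
print(AI.unit_created(ai, unit))  # got a unit
```

Passing only one argument to the plain function would fill self and leave unit empty; Lua silently fills the missing slot with nil, which is exactly the symptom in the question.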
{ "language": "en", "url": "https://stackoverflow.com/questions/2406410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Group a flat object list into a nested list by specific object property using RxJS I have a collection like this: FlatObject [ { id:"1", name:"test1", group: "A" }, { id:"2", name:"test2", group: "B" }, { id:"3", name:"test3", group: "B" }, { id:"4", name:"test4", group: "A" }, ] And I want to get, using an Observable with RxJS, a dictionary grouped by group, something like this: NestedObjects [{ "group": "A", "objectProps": [{ "id": "1", "name": "test1" }, { "id": "4", "name": "test4" }] }, { "group": "B", "objectProps": [{ "id": "2", "name": "test2" }, { "id": "3", "name": "test3" }] }] The closest operator I have found is reduce, or just using do; I was thinking of doing something like the following code, where I have side effects on a collection object. let collectionNestedObjects: NestedObjects[]; ..... .map((response: Response) => <FlatObject[]>response.json().results) .reduce(rgd, rwgr => { // Pseudo code // Create NestedObject with group // Check if collectionNestedObjects has an object with that group name Yes: Create an objectProps and add it to the objectProps collection No: Create a new NestedObject in collectionNestedObjects and Create an objectProps and add it to the objectProps collection } ,new ReadersGroupDetail()); Is there another operator that makes this projection clean and avoids side effects? 
A: You can use .map() operator and map to the type you want: const data: Observable<NestedObject[]> = getInitialObservable() .map((response: Response) => <FlatObject[]>response.json().results) .map((objects: FlatObject[]) => { // example implementation, consider using hashes for faster lookup instead const result: NestedObjects[] = []; for (const obj of objects) { // get all attributes except "group" into "props" variable const { group, ...props } = obj; let nestedObject = result.find(o => o.group === group); if (!nestedObject) { nestedObject = { group, objectProps: [] }; result.push(nestedObject); } nestedObject.objectProps.push(props); } return result; });
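The projection inside the answer's map() is ordinary JavaScript; stripped of the Observable plumbing, the same grouping can be sketched with reduce (sample data taken from the question):

```javascript
const flat = [
  { id: "1", name: "test1", group: "A" },
  { id: "2", name: "test2", group: "B" },
  { id: "3", name: "test3", group: "B" },
  { id: "4", name: "test4", group: "A" },
];

// Pass 1: index the remaining props by group (O(n), no repeated find()).
const byGroup = flat.reduce((acc, { group, ...props }) => {
  (acc[group] = acc[group] || []).push(props);
  return acc;
}, {});

// Pass 2: flatten the index into the NestedObjects shape.
const nested = Object.keys(byGroup).map((group) => ({
  group,
  objectProps: byGroup[group],
}));

console.log(JSON.stringify(nested[0]));
// {"group":"A","objectProps":[{"id":"1","name":"test1"},{"id":"4","name":"test4"}]}
```

This is the hash-based lookup the answer's comment suggests; dropped into the .map() operator it keeps the Observable pipeline free of external side effects.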
{ "language": "en", "url": "https://stackoverflow.com/questions/46550051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: I don't understand the concept of High Memory and I'd like to I have gathered this much. "High Memory is memory for which logical addresses do not exist, because it is beyond the address range set aside for kernel virtual addresses." It seems to me there would be overhead for creating mappings to high memory. Is high memory a set area in the physical memory of the machine? Where does it start and end, typically? And most importantly - why have it at all? Why not have the normal 3 GB/ 1 GB split with mappings/kernel code in that 1 GB? A: There might be more memory available than the CPU is currently able to address. The same limit exists for a userland process, which is able to address only a subset of the memory according to its mapping table. Look at PAE extensions, for example: you can have up to 64GB of RAM, but the kernel or any process can access only up to 4GB of memory.
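The numbers in the answer follow directly from address-width arithmetic; a quick check:

```python
# A 32-bit virtual address names 2**32 bytes: the classic 4 GB per-process limit.
addressable = 2 ** 32
print(addressable // 2 ** 30)   # 4 (GiB)

# PAE widens *physical* addresses to 36 bits without widening pointers, which
# is how a 32-bit kernel can drive 64 GB of installed RAM while any single
# address space still tops out at 4 GB.
installable = 2 ** 36
print(installable // 2 ** 30)   # 64 (GiB)
```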
{ "language": "en", "url": "https://stackoverflow.com/questions/12565567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Regarding Structs and Using Them With Arrays I'm trying to write a program that will take every line from a text file and load the value into an array. For some reason, however, when I try to create a dynamic array and try to put information in any position beyond 0, the information from position zero gets copied over, and I can't seem to understand why. Specifically in this program it's in the readInventory function I have written. Basically, why can't I copy one struct to the other? Sample from file A009 Strawberries_Case 0 12.50 8 4028 STRAWBERRIES_PINT 0 0.99 104 4383 MINNEOLAS 1 0.79 187.3 4261 Rice_1_LB_Bag 0 0.49 107 Code from program #include <iostream> #include <string> #include <cstring> #include <iomanip> #include <fstream> using namespace std; struct Product { string PLU; string name; int salesType; double unitPrice/*rice per pound*/; double inventory; }; struct ItemSold { string PLU; string name; double cost; }; Product *inventoryLevels = new Product[100]; ItemSold *itemsSold = new ItemSold[100]; bool readInventory(string filename, int &numberOfItems); double checkout(int inventoryLength); double price(string PLU, double units); int typeCheck(string PLU, int inventoryLength); string nameCheck(string PLU, int inventoryLength); int main() { int numberOfItems = 0; string filename = "products.txt"; int total; if (readInventory(filename, numberOfItems)) { cout << "Inventory file has errors, please make changes before continuing" << endl << endl; } total = checkout(numberOfItems); cout << total; system("pause"); } double checkout(int inventoryLength) { // Function that will be used to perform the checkout by the user string PLU = "1"; double units/*pounds*/; int salesType; int counter = 0; int temp; double total = 0; while (PLU != "0") { cout << "Enter a PLU: "; cin >> PLU; itemsSold[counter].PLU = PLU; if (PLU == "0") { // do nothing } else { itemsSold[counter].name = nameCheck(PLU, inventoryLength); if (typeCheck(PLU, inventoryLength) == 0) { cout << " 
Enter the number of units being bought: "; cin >> units; while (units > inventoryLevels[counter].inventory) { cout << "You have entered in more units than we have on hand \n Please reduce the number of units being bought\n"; cout << " Enter the number of units being bought: "; cin >> units; } itemsSold[counter].cost = price(PLU, units); inventoryLevels[counter].inventory -= units; } else { cout << "Enter the number of pounds of the item being bought: "; cin >> units; itemsSold[counter].cost = price(PLU, units); while (units > inventoryLevels[counter].inventory) { cout << "You have entered in more pounds than we have on hand \n Please reduce the number of pounds being bought\n"; cout << "Enter the number of pounds of the item being bought: "; cin >> units; } inventoryLevels[counter].inventory -= units; } counter++; } } temp = counter; while (temp >= 0) { total += itemsSold[temp].cost; temp--; } return total; } string nameCheck(string PLU, int inventoryLength) { for (int k = 0; k < inventoryLength; k++) { if (inventoryLevels[k].PLU == PLU) { return inventoryLevels[k].name; } } return "We are currently out of stock of this item."; } int typeCheck(string PLU, int inventoryLength) { for (int k = 0; k < inventoryLength ; k++) { if (inventoryLevels[k].PLU == PLU) { return inventoryLevels[k].salesType; } } } double price(string PLU, double units) { // double price; for (int k = 0; k < 100; k++) { if (inventoryLevels[k].PLU == PLU) { price = units * (inventoryLevels[k].unitPrice); return price; } } } bool readInventory(string filename, int &numberOfItems) { // File object fstream inventory; // Some temp variable used to validate information is still in file while it is being transfered to array //string temp; // Open the inventory file inventory.open(filename); // Will temporarily hold the properties of an item until loaded onto the array Product temp; // Counter will allow for a new item to be stored onto the next available location in the array int counter = 0; // Will 
demonstrate whether or not there is an error int error = 0; // Store items and their properties in the global array while (inventory >> temp.PLU >> temp.name >> temp.salesType >> temp.unitPrice >> temp.inventory) { // Checks to see if they if ((temp.PLU.at(0) > 57) || (temp.PLU.at(1) > 57) || (temp.PLU.at(2) > 57) || (temp.PLU.at(3) > 57)) { error++; } else { inventoryLevels[numberOfItems].PLU = temp.PLU; inventoryLevels[numberOfItems].name = temp.name; inventoryLevels[numberOfItems].salesType = temp.salesType; inventoryLevels[numberOfItems].unitPrice = temp.unitPrice; inventoryLevels[numberOfItems].inventory = temp.inventory; numberOfItems++; counter++; } } // If there is no error return true if (error == 0) { return false; } // If there is an error return false else if (error > 0) { return true; } } A: When you assign values here, while (inventory >> temp.PLU >> temp.name >> temp.salesType >> temp.unitPrice >> temp.inventory) Am I right to assume that the input file is in the format (since you're assigning each line to the variables? line 1: Some string you want assigned to PLU line 2: Some string you want assigned to name line 3: Some Int you want assigned to salestype .......... .......... line n:string PLU
{ "language": "en", "url": "https://stackoverflow.com/questions/29112501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Combo box with ability to constrain displayed items (WinForms)

Hi, I need a combo box control (.NET, WinForms) and I want to be able to constrain the available combo box values by typing a string of characters, which applies a contains/like search to filter the available values in the combo box. I mean entering "Un" will show me "United Kingdom", "United States"...

Can you please advise any existing implementations?

A: This feature is provided by System.Windows.Forms.ComboBox. Check out the AutoCompleteMode property.

To do the things you described, you need to set the Items property of the ComboBox to hold all your options ("United Kingdom", "United States", etc.). Then, change the AutoCompleteMode to "SuggestAppend" and change the AutoCompleteSource to "ListItems".
{ "language": "en", "url": "https://stackoverflow.com/questions/4540411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Only accept certain certificates

I have created my own CA, and I want to use its certificates to communicate with a server using SSLSockets. I can do that with the truststore I am currently using, but I would like to be more restrictive, so that my server only accepts connections from the clients I explicitly decide, which must own certificates signed by my CA (right now, anyone with a certificate signed by my CA is granted access). The goal behind this is to be able to revoke certificates by eliminating some certificates from the server's truststore.

Imagine there are two devices, A and B, both with certificates signed by my CA. I only want to grant access to A, not B. If I only have A's certificate in the server's truststore, I get a BadCertificate exception for both of the clients; the moment I add my CA's pem file, both A and B are granted access, regardless of whether A's or B's certificates are explicitly added to the truststore.

Any ideas or alternatives to this approach? Thanks.

A: The revocation part of a PKI infrastructure (e.g. what you get if you have your own CA) is usually done with CRLs (certificate revocation lists) or OCSP (online certificate status protocol).

If this is too much effort for a small PKI with only a few clients, you can also hard code into your application the fingerprints of the certificates you accept (whitelist) or which got revoked (blacklist), and check on each connect whether the certificate you got matches a fingerprint. Of course, you then need to update the application on each revocation (blacklist) or whenever you issue a new certificate (whitelist), so this does not scale very well. But the same problem occurs with CRLs, which need to be distributed to each client. OCSP scales much better because the clients try to retrieve the revocation status on connect, but then you need to set up an OCSP responder.
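The fingerprint whitelist described above can be sketched in a few lines. Python is used here purely for illustration (the question itself is about Java SSLSockets, where the same check would typically live in a custom X509TrustManager after normal CA validation); the certificate bytes and whitelist contents are hypothetical:

```python
import hashlib

def fingerprint(der_bytes):
    # SHA-256 fingerprint (hex) of a DER-encoded certificate.
    return hashlib.sha256(der_bytes).hexdigest()

def is_allowed(der_bytes, allowed_fingerprints):
    # Accept a peer only if its certificate's fingerprint is on the
    # whitelist -- even if its CA signature would otherwise check out.
    return fingerprint(der_bytes) in allowed_fingerprints
```

Revoking device B then means nothing more than leaving B's fingerprint out of the whitelist, at the cost of redeploying the list whenever a certificate is issued or revoked.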
{ "language": "en", "url": "https://stackoverflow.com/questions/24099219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is @UIViewController referenced in UIView?

I was looking through UIView.h and I noticed on line 123 a

    @class UIViewController;

I see there is a

    UIViewController *_viewDelegate;

on line 133, but I don't understand why it would have a delegate?

A: At a guess, that'll be so that the controller can deliver viewDidLayoutSubviews and similar messages.

A: The delegate is used when the view controller adds a subview to its own view or adds a view to a window. It is also used so that a UIView can call nextResponder.
{ "language": "en", "url": "https://stackoverflow.com/questions/27284189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Algorithms for establishing baselines from time series data

In my app I collect a lot of metrics: hardware/native system metrics (such as CPU load, available memory, swap memory, network IO in terms of packets and bytes sent/received, etc.), JVM metrics (garbage collections, heap size, thread utilization, etc.), as well as app-level metrics (instrumentations that only have meaning to my app, e.g. # orders per minute, etc.).

Throughout the week, month, and year I see trends/patterns in these metrics. For instance, when cron jobs all kick off at midnight I see CPU and disk thrashing as reports are being generated, etc.

I'm looking for a way to assess/evaluate metrics as healthy/normal vs unhealthy/abnormal that takes these patterns into consideration. For instance, if CPU spikes around (+/- 5 minutes) midnight each night, that should be considered "normal" and not set off alerts. But if CPU pins during a "low tide" in the day, say between 11:00 AM and noon, that should definitely cause some red flags to trigger.

I have the ability to store my metrics in a time-series database, if that helps kickstart this analytical process at all, but I don't have the foggiest clue as to what algorithms, methods and strategies I could leverage to establish these cyclical "baselines" that act as a function of time. Obviously, such a system would need to be pre-seeded or even trained with historical data that was mapped to normal/abnormal values (which is why I'm leaning towards a time-series DB as the underlying store), but this is new territory for me and I don't even know what to begin Googling so as to get back meaningful/relevant/educated solution candidates in the search results. Any ideas?

A: You could categorize each metric (CPU load, available memory, swap memory, network IO) with the day and time as good or bad for each metric. Come up with a set of data for a given time frame containing metric values and whether they are good or bad.

Train a model using 70% of the data, with the good/bad answers included. Then test the trained model on the other 30% of the data, without the answers, to see if it predicts the expected results (good/bad). You could use a classification algorithm.
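A very simple way to operationalize the per-time-of-day "baseline" idea — short of training a full classifier — is to bucket historical samples by hour and flag readings that fall far outside that hour's distribution. The sketch below uses only the Python standard library; the data and the 3-sigma threshold are illustrative assumptions, not a recommendation for any particular metric:

```python
import statistics
from collections import defaultdict

def build_baseline(samples):
    # samples: iterable of (hour, value) pairs taken from known-good history.
    # Returns {hour: (mean, stdev)} describing "normal" for each hour of day.
    by_hour = defaultdict(list)
    for hour, value in samples:
        by_hour[hour].append(value)
    return {h: (statistics.mean(v), statistics.pstdev(v))
            for h, v in by_hour.items()}

def is_abnormal(baseline, hour, value, n_sigmas=3.0):
    # Flag a reading that deviates more than n_sigmas from that hour's norm.
    mean, stdev = baseline[hour]
    return abs(value - mean) > n_sigmas * max(stdev, 1e-9)
```

With this, a 92% CPU reading at midnight (when the cron jobs always spike CPU) is within the hour-0 baseline and stays quiet, while the same reading at 11 AM falls far outside the hour-11 baseline and raises a flag. A trained classifier, as suggested above, generalizes this by learning the boundary from labeled examples instead of a fixed sigma threshold.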
{ "language": "en", "url": "https://stackoverflow.com/questions/50456409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Set dynamic ENV in Dockerfile (and I can modify the Dockerfile only)

At work we are able to build an image from a Dockerfile, but we do not have access to the docker command. My goal is to be able to set environment variables based on my commit message or branch. Here is my try:

    RUN branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p' | awk -Frelease/ '/release/{print $2}') \
        && if [[ "$branch" != qa* ]]; then branch=$(git log -1 --pretty | grep release\/ | awk -Frelease/ '/release/{print $2}' | awk -F: '{print $1}'); fi
    ENV EXPORT_ENV $branch
    CMD ["npm", "run", "start"]

However, in my npm run start script, process.env.EXPORT_ENV comes back as undefined.

A: On second thought, let me expand on my comment: you can't set Dockerfile environment variables to the result of commands in a RUN statement. Variables set in a RUN statement are ephemeral; they exist only while the RUN statement is active.

If you don't have access to the host environment (to pass arguments to the docker build command), you're not going to be able to do exactly what you want. However, you can add an ENTRYPOINT script to your container that will set up dynamic environment variables before the main process runs. That is, if you have in your Dockerfile:

    ENTRYPOINT ["/docker-entrypoint.sh"]

And in /docker-entrypoint.sh you have:

    #!/bin/bash
    branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p' | awk -Frelease/ '/release/{print $2}') \
        && if [[ "$branch" != qa* ]]; then branch=$(git log -1 --pretty | grep release\/ | awk -Frelease/ '/release/{print $2}' | awk -F: '{print $1}'); fi
    export EXPORT_ENV="$branch"
    exec "$@"

Then the EXPORT_ENV environment variable would be available in the environment of your CMD process.
{ "language": "en", "url": "https://stackoverflow.com/questions/71245589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }