doc_23536900
I managed to get the first invalid form field, but I am unable to determine which accordion page contains this field.

A: If you have the first invalid form field, you can use closest to find its accordion card via the class 'collapse'. Add 'show' to its classes so it will open, then you can scroll to the invalid element via scrollIntoView:

handleSubmit = (event) => {
    const form = event.currentTarget;
    if (form.checkValidity() === false) {
        let firstInvalidFieldInTheForm = form.querySelectorAll(':invalid')[0];
        let ancestorAccordion = firstInvalidFieldInTheForm.closest(".collapse");
        ancestorAccordion.classList.add("show");
        firstInvalidFieldInTheForm.scrollIntoView();
        event.preventDefault();
        event.stopPropagation();
    } else {
        this.handleSaveButtonClick();
    }
    this.setState({validated: true});
};
doc_23536901
With the code below I've gotten through the first few steps - which were hard enough for me. It successfully creates the DataURI (data:image/png;base64,iVBOR...) and adds it to the value of hidden field _fid_86 when the placesig function is called. What I need help with:

1) How do I convert the DataURI to an image that can be included in the form data when it is submitted?
2) How can I connect that image to the hidden input? Will the same method work, or does it need to be a file input?

I'm posting this form to QuickBase from a webapp. Ideally, I'd like to be able to accomplish this without any server-side involvement, so that the app will be self-sufficient and not rely on PHP or other hosted scripts. If that's not possible, I could do it with PHP, but I still do not know how to make that happen. I've spent days on this, and I know my lack of programming knowledge is to blame. A solution to this would be HUGE for me. Thanks!!

<head>
<script src="jquery-1.8.2.min.js"></script>
<script src="jquery.mobile-1.2.0.min.js"></script>
<script src="jSignature.js"></script>
</head>
<body>
<form name=qdbform method=POST onsubmit='return validateForm(this)' encoding='multipart/form-data' encType='multipart/form-data' action=https://www.quickbase.com/db/...?act=API_AddRecord>
<script type="text/javascript">
var $sigDiv = null;
$(document).ready(function() {
    $sigDiv = $("#signature1").jSignature({'UndoButton':false, color:"#000000", lineWidth:2});
});
function placesig() {
    var datapair = $sigDiv.jSignature("getData", "image");
    var i = new Image();
    i.src = "data:" + datapair[0] + "," + datapair[1];
    var src = $(i).attr('src');
    $("#_fid_86").val(src);
};
</script>
<div id="signature1"></div>
<input data-theme="a" type="button" value="Place Image" onclick="placesig();"/>
<input type="hidden" id="_fid_86" name="_fid_86" />
<input type="submit" value="Save">
</form>

A: It's not possible to do this without server-side code.
jSignature will generate a Data URL, but this cannot be passed as part of a multipart form the way file uploads are handled. Instead, jSignature will set a textbox with the data URL, and on the server you can read this and produce a file. This is actually covered in one of the examples (in PHP, even) called jSignature_Tools_Base30.php. Personally, I would go with the SVG example (less data across the wire and more flexibility for scaling).
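The server-side step described above - reading the data URL out of the text field and producing an image file - is, at its core, just base64 decoding. A minimal sketch in Python for illustration (the answer's actual example uses PHP; the helper name here is mine):

```python
import base64

def data_url_to_bytes(data_url):
    """Split a data URL like 'data:image/png;base64,iVBOR...'
    into its MIME type and the raw decoded bytes."""
    header, encoded = data_url.split(",", 1)
    mime = header[len("data:"):].split(";")[0]
    return mime, base64.b64decode(encoded)

# Build a tiny fake data URL and round-trip it.
payload = base64.b64encode(b"fake-png-bytes").decode()
mime, raw = data_url_to_bytes("data:image/png;base64," + payload)
print(mime)  # image/png
```

The `raw` bytes would then be written to a file (or forwarded as a file upload) by the server.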
doc_23536902
I need to make an API call synchronized. In order to make this call synchronized, I defined a ManualResetEvent and called WaitOne on it. WaitOne is called after the call to the external COM object. In some circumstances it never returns. That's why I must define a timeout for that Wait call. But I can't pass a constant into the Wait method, because if the call was successful, this API receives events from the COM object, and in each event handler the timeout passed into WaitOne should be reset. Consider the example:

private ManualResetEvent operationIsInProcess;
private static readonly IComObject sender;
private int timeout = 30000;

public void Start()
{
    sender.OnExchange += SenderOnOnExchange;
}

private void StartOperation()
{
    sender.StartAsyncExchange();
    operationIsInProcess.WaitOne(timeout);
}

private void SenderOnOnExchange()
{
    // somehow we need to reset or update that timeout on WaitOne
    // operationIsInProcess.Update(timeout);
}

I'm just wondering whether anybody has faced this problem or not. I'm sure this should be a common situation. As I understand it, there is no "out of the box" solution, so I have to build my own synchronization primitive - or maybe someone has already done it?

Update.
I wanted something like this (implemented by myself):

public class UpdateableSpin
{
    private readonly object lockObj = new object();
    private bool shouldWait;
    private long taskExecutionStartingTime;

    public UpdateableSpin(bool initialState)
    {
        shouldWait = initialState;
    }

    public void Wait(TimeSpan executionTimeout, int spinDuration = 0)
    {
        UpdateTimeout();
        while (shouldWait && DateTime.UtcNow.Ticks - taskExecutionStartingTime < executionTimeout.Ticks)
        {
            lock (lockObj)
            {
                Thread.Sleep(spinDuration);
            }
        }
    }

    public void UpdateTimeout()
    {
        lock (lockObj)
        {
            taskExecutionStartingTime = DateTime.UtcNow.Ticks;
        }
    }

    public void Reset()
    {
        lock (lockObj)
        {
            shouldWait = true;
        }
    }

    public void Set()
    {
        lock (lockObj)
        {
            shouldWait = false;
        }
    }
}

A: You could enter a loop and restart the wait:

while (true)
{
    if (!operationIsInProcess.WaitOne(timeout))
    {
        // timed out
        break;
    }
    else
    {
        // Reset the signal.
        operationIsInProcess.Reset();
    }
}

Then set your event in the event handler:

private void SenderOnOnExchange()
{
    operationIsInProcess.Set();
}
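The "restart the wait on each event" idea from the answer is language-independent. A Python sketch of the same inactivity-timeout pattern using threading.Event (class and method names are mine, not from the thread): each event received pushes the deadline forward, and the wait only times out after a quiet period.

```python
import threading
import time

class UpdateableWait:
    """Wait that times out only after `timeout` seconds of inactivity;
    each call to touch() (one per COM event, say) pushes the deadline."""
    def __init__(self, timeout):
        self.timeout = timeout
        self._done = threading.Event()
        self._deadline = time.monotonic() + timeout
        self._lock = threading.Lock()

    def touch(self):
        # Called from each event handler: reset the inactivity deadline.
        with self._lock:
            self._deadline = time.monotonic() + self.timeout

    def finish(self):
        # Called when the operation completes.
        self._done.set()

    def wait(self):
        """Return True if finished, False on inactivity timeout."""
        while True:
            with self._lock:
                remaining = self._deadline - time.monotonic()
            if remaining <= 0:
                return False
            if self._done.wait(remaining):
                return True

w = UpdateableWait(timeout=0.2)
threading.Timer(0.05, w.finish).start()  # simulate the operation completing
print(w.wait())  # True: finish() arrived before the deadline
```

As in the C# answer, the wait is simply re-entered with the remaining time after every wake-up.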
doc_23536903
So 3,1,4,6... to ,0,0,0,1,0,0,0,0,0,0,1,4,6... (there's "" at the first position, so 11 tokens in total). I thought it would be an easy job, but it wasn't.

import java.io.*;

public class T {
    public static void main(String[] args) {
        File file = new File("./src/dataset/mnist_train.csv");
        File wfile = new File("./src/dataset/conv_mnist_train2.txt");
        try {
            BufferedReader bufferedReader = new BufferedReader(new FileReader(file));
            BufferedWriter fileWriter = new BufferedWriter(new FileWriter(wfile));
            String line;
            String[] numbers;
            int g = 0, cnt = 0, cnt2 = 0;
            while ((line = bufferedReader.readLine()) != null) {
                cnt2++;
                numbers = line.split(",");
                for (String i : numbers) {
                    if (g == 0) {
                        for (int j = 0; j < 10; ++j) {
                            if (j == Integer.parseInt(i))
                                fileWriter.write("," + 1);
                            else {
                                fileWriter.write("," + 0);
                                cnt++;
                            }
                        }
                        g++;
                    } else {
                        fileWriter.write("," + i);
                        cnt++;
                    }
                }
                fileWriter.newLine();
                System.out.println(numbers.length + " " + cnt + " " + cnt2);
                g = 0;
                cnt = 0;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

g, cnt, cnt2 are numbers I used for debugging, but I didn't find any problem here; it naturally converted each line with 785 tokens into a new line with 795 tokens.

import java.io.*;

public class Tes {
    public static void main(String[] args) {
        File file = new File("./src/dataset/conv_mnist_train2.txt");
        try {
            BufferedReader bufferedReader = new BufferedReader(new FileReader(file));
            String line;
            int g = 0;
            while ((line = bufferedReader.readLine()) != null) {
                g++;
                String[] N = line.split(",");
                if (N.length != 795) {
                    System.out.println(N.length + " " + g);
                    for (String i : N)
                        System.out.print(i + " ");
                    System.out.println();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

But what happened is that when I ran my second program, which shouldn't print anything, it printed a result and said my 59994th row consists of only 311 tokens. But from my first program, I confirmed that my 59994th row has 795 tokens. I don't know what's going on here.
I also tried to use FileWriter and FileReader instead of BufferedWriter/BufferedReader, but that didn't solve the problem. Could somebody tell me what's going on, and how to fix it?

A: The problem was that I didn't close the reader/writer. I didn't know that could end up causing such a serious error.
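For illustration, the same conversion - replacing the leading label with a 10-element one-hot vector - can be sketched in Python, where the with statement makes it hard to forget the close (and therefore flush) that truncated the last rows in the Java version (file names are from the thread; the helper is mine):

```python
def one_hot_line(line):
    """Turn '3,1,4,6' into ',0,0,0,1,0,0,0,0,0,0,1,4,6'."""
    fields = line.split(",")
    label = int(fields[0])
    one_hot = ["1" if j == label else "0" for j in range(10)]
    return "," + ",".join(one_hot + fields[1:])

# `with` closes both files even on error, which is exactly the step
# the original Java code was missing:
# with open("mnist_train.csv") as src, open("conv_mnist_train2.txt", "w") as dst:
#     for line in src:
#         dst.write(one_hot_line(line.rstrip("\n")) + "\n")

print(one_hot_line("3,1,4,6"))  # ,0,0,0,1,0,0,0,0,0,0,1,4,6
```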
doc_23536904
<?php
if (!file_exists('http://domain.com/tourism/data/new')) {
    if (mkdir('http://domain.com/tourism/data/new', 0777, true)) {
        echo "success";
    } else {
        echo "error";
    }
}
?>

A:

<?php
// Desired folder structure, as a relative path
$str = './depth1/depth2/depth3/';

// To create the nested structure, the $recursive parameter
// to mkdir() must be specified.
if (!mkdir($str, 0777, true)) {
    die('Failed to create folders...');
}
// ...
?>

Make sure you have permission to create the folder.

A: If your script needs to store files, common practice is to designate a dedicated folder for this. This folder will be outside the web root (not available over http/https) and will have its permissions set in such a way that the web server can write to it. You can do this with (for example):

chgrp https $FOLDER
chmod g+w $FOLDER

Consequently, you cannot use http:// in your mkdir. You should use a filesystem path, for example /srv/my-script/data. It is good practice to make this path configurable, for example by putting it in a configuration file.

[…] create folder in the source code of php web application on server.

This is a very bad idea, and might leave you vulnerable to attacks. Attackers may be able to upload their own PHP code.

A: To do this, the 'user' executing the script will need write permissions. This is usually the web-server user (in many Linuxes, 'www-data'). This may or may not be advisable. Most folks advise at least making the location outside the web root.
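The same two points - use a filesystem path, never a URL, and create the hierarchy recursively - sketched in Python for comparison (the directory names are illustrative; a temporary directory stands in for the configured storage path):

```python
import os
import tempfile

# A filesystem path, never 'http://...', and a recursive create.
# exist_ok=True plays the role of the file_exists() guard in the PHP code.
base = tempfile.mkdtemp()
target = os.path.join(base, "tourism", "data", "new")
os.makedirs(target, exist_ok=True)
print(os.path.isdir(target))  # True
```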
doc_23536905
You can see it here: http://denverbarr.com

For some reason the placeholders aren't behaving correctly, in that they have a weird padding that was not set. And for some reason I can't get the name input to validate. Here is the code I've been using:

HTML CODE

<div class="six col text-center" id="contact">
  <div id="contactform">
    <fieldset>
      <div class="row">
        <div class="twelve col">
          <input placeholder="Name" id="user_name" name="user_name" required="true" size="30" type="text" value="">
        </div>
        <div class="twelve col">
          <input id="firm" name="firm" placeholder="Firm" size="30" type="text" value="">
        </div>
        <div class="twelve col">
          <input id="email" name="email" placeholder="Email" required="true" size="30" type="email" value="">
        </div>
        <div class="twelve col">
          <input id="phone" name="phone" placeholder="Phone" required="true" size="30" type="tel" value="">
        </div>
        <div class="twelve col">
          <textarea cols="40" id="message" name="message" placeholder="Message" required="true" rows="5"> </textarea>
        </div>
        <div class="twelve col">
          <input type="submit" id="submit_btn" value="Submit" />
        </div>
      </div>
    </fieldset>
    <div id="contact_results"></div>
  </div>
</div>

$(document).ready(function() {
  $("#submit_btn").click(function() {
    var proceed = true;
    $("#contactform input[required=true], #contactform textarea[required=true]").each(function(){
      $(this).css('border-color','');
      if(!$.trim($(this).val())){ //if this field is empty
        $(this).css('border-color','red'); //change border color to red
        proceed = false; //set do not proceed flag
      }
      var email_reg = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,4})?$/;
      if($(this).attr("type")=="email" && !email_reg.test($.trim($(this).val()))){
        $(this).css('border-color','red'); //change border color to red
        proceed = false; //set do not proceed flag
      }
    });
    if(proceed) //everything looks good! proceed...
    {
      post_data = {
        'user_name' : $('input[name=user_name]').val(),
        'user_email' : $('input[name=email]').val(),
        'phone' : $('input[name=phone]').val(),
        'firm' : $('input[name=firm]').val(),
        'msg' : $('textarea[name=message]').val()
      };
      $.post('emal.php', post_data, function(response){
        if(response.type == 'error'){ //load json data from server and output message
          output = '<div class="error">'+response.text+'</div>';
        }else{
          output = '<div class="success">'+response.text+'</div>';
          //reset values in all input fields
          $("#contactform input[required=true], #contactform textarea[required=true]").val('');
          $("#contactform").slideUp(); //hide form after success
        }
        $("#contactform #contact_results").hide().html(output).slideDown();
      }, 'json');
    }
  });

  $("#contactform input[required=true], #contactform textarea[required=true]").keyup(function() {
    $(this).css('border-color','');
    $("#result").slideUp();
  });
});

And the PHP side:

<?php
if($_POST)
{
    $to_email = "info@denverbarr.com"; //Recipient email, Replace with own email here

    //check if its an ajax request, exit if not
    if(!isset($_SERVER['HTTP_X_REQUESTED_WITH']) AND strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) != 'xmlhttprequest')
    {
        $output = json_encode(array( //create JSON data
            'type'=>'error',
            'text' => 'Sorry Request must be Ajax POST'
        ));
        die($output); //exit script outputting json data
    }

    //Sanitize input data using PHP filter_var().
    $user_name = filter_var($_POST["user_name"], FILTER_SANITIZE_STRING);
    $user_email = filter_var($_POST["user_email"], FILTER_SANITIZE_EMAIL);
    $phone = filter_var($_POST["phone"], FILTER_SANITIZE_NUMBER_INT);
    $firm = filter_var($_POST["firm"], FILTER_SANITIZE_STRING);
    $message = filter_var($_POST["msg"], FILTER_SANITIZE_STRING);

    //additional php validation
    if(strlen($use_name)<4){ // If length is less than 4 it will output JSON error.
        $output = json_encode(array('type'=>'error', 'text' => 'Name is too short or empty!'));
        die($output);
    }
    if(!filter_var($user_email, FILTER_VALIDATE_EMAIL)){ //email validation
        $output = json_encode(array('type'=>'error', 'text' => 'Please enter a valid email!'));
        die($output);
    }
    if(!filter_var($phone, FILTER_SANITIZE_NUMBER_FLOAT)){ //check for valid numbers in phone number field
        $output = json_encode(array('type'=>'error', 'text' => 'Enter only digits in phone number'));
        die($output);
    }
    if(strlen($message)<3){ //check empty message
        $output = json_encode(array('type'=>'error', 'text' => 'Too short message! Please enter something.'));
        die($output);
    }

    //email body
    $message_body = $message."\r\n\r\n-".$user_name."\r\nEmail : ".$user_email."\r\nPhone Number : ".$phone_number;

    //proceed with PHP email.
    $headers = 'From: '.$user_name.'' . "\r\n" .
        'Reply-To: '.$user_email.'' . "\r\n" .
        'X-Mailer: PHP/' . phpversion();
    $send_mail = mail($to_email, $subject, $message_body, $headers);

    if(!$send_mail)
    {
        //If mail couldn't be sent output error. Check your PHP email configuration (if it ever happens)
        $output = json_encode(array('type'=>'error', 'text' => 'Could not send mail! Please check your PHP mail configuration.'));
        die($output);
    }else{
        $output = json_encode(array('type'=>'message', 'text' => .$user_name .'
Thank you for your email'));
        die($output);
    }
}
?>

I'm going to assume the issue is here:

$("#submit_btn").click(function() {
  var proceed = true;
  $("#contactform input[required=true], #contactform textarea[required=true]").each(function(){
    $(this).css('border-color','');
    if(!$.trim($(this).val())){ //if this field is empty
      $(this).css('border-color','red'); //change border color to red
      proceed = false; //set do not proceed flag
    }
    var email_reg = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,4})?$/;
    if($(this).attr("type")=="email" && !email_reg.test($.trim($(this).val()))){
      $(this).css('border-color','red'); //change border color to red
      proceed = false; //set do not proceed flag
    }
  });

because it's the highlighting that is tripping up. It should be highlighting the Name, email, phone, and message boxes.

A: CSS padding issue: you have a CSS mistake for the placeholder - you have missed line-height. You have to add line-height: 20px; to the #contact input, select rule. This rule is in the main.css file, at around line 842.

Validation issue: on $("#submit_btn").click, if you alert the value of the user_name text box, it gives the value 'Name'. That is why your validation passes. Try adding the below in your $("#submit_btn").click handler and check:

alert($('#user_name').val())

or, to alert every input:

alert($(this).val())

A: You could start by fixing those errors (sorry, can't post a screenshot):

Uncaught TypeError: Cannot read property 'addEventListener' of null    placeholder.js:70
Uncaught SyntaxError: missing ) after argument list    main.js:18
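The client-side email pattern used in the click handler can be checked in isolation. Here is the same regex translated verbatim into Python; note that the trailing `?` makes the empty string match, which is why the separate "is it empty" check in the handler is still necessary:

```python
import re

# Same pattern as the jQuery validation above.
email_reg = re.compile(r"^([\w\-.]+@([\w\-]+\.)+[\w\-]{2,4})?$")

print(bool(email_reg.match("info@denverbarr.com")))  # True
print(bool(email_reg.match("not-an-email")))         # False
print(bool(email_reg.match("")))                     # True (hence the extra empty check)
```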
doc_23536906
id, text and tags - in the schema.xml I set the following:

<uniqueKey>id</uniqueKey>
<defaultSearchField>text</defaultSearchField>
<solrQueryParser defaultOperator="AND"/>
<copyField source="tags" dest="text"/>

However, when I search for a word that only appears as a tag, the document is not found. My question here is: does copyField happen before any analyzer runs (index and query), as described here, or just before the query analyzer?

EDIT

The analyzer definition:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory" />
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1" preserveOriginal="1" />
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.LowerCaseFilterFactory" />
    <filter class="solr.SnowballPorterFilterFactory" language="German" />
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1" preserveOriginal="1" />
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true" />
    <filter class="solr.LowerCaseFilterFactory" />
    <filter class="solr.SnowballPorterFilterFactory" language="German" />
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>

and the field-type definitions (they are pretty much as in the default configs):

<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>
<fieldType name="int" class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>

and last the field
definitions:

<fields>
  <field name="id" type="string" indexed="true" stored="true" required="true" />
  <field name="text" type="text" indexed="true" stored="false" multiValued="true" />
  <field name="tags" type="text" indexed="false" stored="false" />
</fields>
<uniqueKey>id</uniqueKey>
<defaultSearchField>text</defaultSearchField>
<solrQueryParser defaultOperator="AND"/>
<copyField source="tags" dest="text"/>

A: The copyField is done when a document is indexed, so it happens before the index analyzer. It is really as if you had put the same input text into two different fields. But after that, it all depends on the analyzers you defined for both fields.

A: If you search q=tags:xyz then xyz will not be found, because you configured the tags field not to be indexed. If you do a default search, yes, it should search the copy field. However, according to the Solr wiki:

Any number of declarations can be included in your schema, to instruct Solr that you want it to duplicate any data it sees in the "source" field of documents that are added to the index

I think that having 'tags' not indexed would also cause the copied content of 'tags' to not be indexed.

A: I haven't tried using the copyField to append additional text to an existing field. I suppose Solr could concatenate it, or add it as a second value. But here are a couple of ideas to try:

* Experiment with a document where the text field is blank, perhaps not even mentioned as a field under the fields structure. Does it seem to make a difference, when tags make it into the main text, whether text starts out as totally blank or not?
* Declare a second field, call it text2, and then ALSO copy tags into text2 via a second copyField directive. This text2 field won't have anything else in it, presumably not even mentioned in your fields, so for sure it should get the content.

In both cases you'd check results with the schema browser, as before. I'd be very curious to hear how you find out!
doc_23536907
The closest is "Create a JSON tree in Node.js from MongoDB", but it still doesn't work as expected. Or maybe I can't wrap my head around this problem... I have a schema whose key components for my problem look like this:

var userSchema = new Schema({
    _id: {type: Number},
    children: [{type: Number, ref: 'User'}]
});

Each user may have three children users, so it can go infinitely deep. Fortunately, I only have to cover two scenarios:

* build a JSON tree from a specific user, up to 3 nestings
* calculate data for 10 nestings from a specific root

I tried to write a recursive function like this in my express.js API:

api.get('/user/tree/:user_id', function (req, res) {
    var user_id = req.params.user_id;
    var depth = 0;
    var root = {};

    function walker(parent) {
        if (depth >= 3) {
            return res.send('whole data, not just last user'); // this is wrong. it will try to res.send for each iteration of forEach, and it sends only the last user.
        }
        depth += 1;
        _.forEach(parent.mlm.childs, function (userid, index) {
            User.findOneAsync({_id: userid}).then(function(user) {
                parent.mlm.childs[index] = user;
                walker(parent.mlm.childs[index]);
            });
        });
    }

    User.findOneAsync({_id: user_id}).then(function(user) {
        root = user;
        walker(user, root);
    });
});

But of course it only traverses the tree, instead of traversing it and creating the whole JSON. I'm stuck on how to access the root and send the whole tree. The problem of sending many res.send calls can be solved by counting iterations and sending only when forEach has ended, I guess. Thanks for any help.

A: OK, I found a solution.
api.get('/user/tree/:user_id', function (req, res) {
    var user_id = req.params.user_id;
    var tree = {};
    var counter = 0;
    var gloCounter = 0;

    function walker(parent) {
        gloCounter += 1;
        if (parent.mlm.childs.length === 0 && gloCounter > counter) {
            res.send(tree);
            return;
        }
        _.forEach(parent.mlm.childs, function (child, i) {
            counter += 1;
            User.findOneAsync({_id: child})
                .then(function(child) {
                    parent.mlm.childs[i] = child;
                    var newParent = parent.mlm.childs[i];
                    walker(newParent);
                }).catch(function(err) {
                    console.log('error: ' + err);
                });
        });
    }

    User.findOneAsync({_id: user_id})
        .then(function(user) {
            tree = user;
            walker(tree);
        }).catch(function(err) {
            console.log('err: ' + err);
        });
});

It works as expected - it traverses the whole structure and creates the JSON that is sent back. It uses lodash and Bluebird promises, for those who would solve a similar problem in the future and don't understand what is happening with all those "Async" suffixes and _.forEach.
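Stripped of the async plumbing, the traversal-plus-assembly above is an ordinary recursive embed: replace each child id with the child's document, down to a depth limit. A synchronous Python sketch for illustration (the dict stands in for the User collection; data and names are mine):

```python
# A dict standing in for the MongoDB User collection.
users = {
    1: {"_id": 1, "children": [2, 3]},
    2: {"_id": 2, "children": [4]},
    3: {"_id": 3, "children": []},
    4: {"_id": 4, "children": []},
}

def build_tree(user_id, depth=0, max_depth=3):
    """Replace child ids with embedded child documents, up to max_depth."""
    user = dict(users[user_id])  # copy so the "collection" stays untouched
    if depth < max_depth:
        user["children"] = [build_tree(c, depth + 1, max_depth)
                            for c in user["children"]]
    return user

tree = build_tree(1)
print(tree["children"][0]["children"][0]["_id"])  # 4
```

In the Node version the only extra difficulty is knowing when every findOneAsync has resolved, which is what the counter pair in the answer handles.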
doc_23536908
I don't get those options in the .NET Core project. I do see a way to add a connected service, and I tried adding one using WCF, but when I add the reference it shows the methods appended with "body", "response", and "requestbody". How can I add an older legacy web reference in a .NET Core project and access it?
doc_23536909
Here are my enums:

src/shared/models/report/data-type.enum.ts

export enum DataType {
  DASHBOARD = 'DASHBOARD',
  REPORT = 'REPORT',
  APP = 'APP',
}

In my component, I'd like to compare whether it's an APP or some other value. If it's the APP enum, I'd like to show another button in my frontend.

src/shared/components/report-modal/report-modal.component.ts

readonly isApplication = this.DataType.APP === true

Here is the code of the HTML component:

src/shared/components/report-modal/report-modal.component.html

<div *ngIf="!hasNoAccess && isApplication" >
  <a class="button"
     [class.button--primary]="hasAccess"
     [class.button--secondary]="!hasAccess"
     [attr.href]="report.link"
     aria-describedby="access-btn-hint"
     target="_blank">{{'REPORT.GO_TO_APP'|translate}}</a>
</div>
<div *ngIf="!hasNoAccess && !isApplication" >
  <a class="button"
     [class.button--primary]="hasAccess"
     [class.button--secondary]="!hasAccess"
     [attr.href]="report.link"
     aria-describedby="access-btn-hint"
     target="_blank">{{'REPORT.GO_TO_REPORT'|translate}}</a>
</div>

How can I make that work? THX
doc_23536910
I get the error: function "pod" not defined, which makes sense because I really have no such function. The "pod" is coming from a JSON file which I convert into a ConfigMap, and Helm is reading this value as a template function rather than as a plain string that is part of the JSON. This is a snippet of my ConfigMap:

# Generated from 'pods' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
# Do not change in-place! In order to change this file first read following link:
# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "services-health" | trunc 63 | trimSuffix "-" }}
  labels:
    {{- if $.Values.grafana.sidecar.dashboards.label }}
    {{ $.Values.grafana.sidecar.dashboards.label }}: "1"
    {{- end }}
    app: {{ template "prometheus-operator.name" $ }}-grafana
{{ include "prometheus-operator.labels" $ | indent 4 }}
data:
  services-health.json: |-
    {
      "annotations": {
        "list": [
          {
            "builtIn": 1,
            "datasource": "-- Grafana --",
            "enable": true,
            "hide": true,
            "iconColor": "rgba(0, 211, 255, 1)",
            "name": "Annotations & Alerts",
            "type": "dashboard"
          }
        ]
      },
      "targets": [
        {
          "expr": "{__name__=~\"kube_pod_container_status_ready\", container=\"aggregation\",kubernetes_namespace=\"default\",chart=\"\"}",
          "format": "time_series",
          "instant": false,
          "intervalFactor": 2,
          "legendFormat": "{{pod}}",
          "refId": "A"
        }
      ]
    }
{{- end }}

The error I get is coming from this line:

"legendFormat": "{{pod}}",

And this is the error:

helm upgrade --dry-run prometheus-operator-chart /home/ubuntu/infra-devops/helm/vector-chart/prometheus-operator-chart/
Error: UPGRADE FAILED: parse error in "prometheus-operator/templates/grafana/dashboards/services-health.yaml": template:
prometheus-operator/templates/grafana/dashboards/services-health.yaml:1213: function "pod" not defined

I tried to escape it, but nothing worked. Has anyone got an idea how I can work around this issue?

A: Move your dashboard JSON to a separate file; let's say you name it dashboard.json. Then in your ConfigMap file, instead of listing the JSON inline, reference the dashboard.json file as follows:

data:
  services-health.json: |-
{{ .Files.Get "dashboard.json" | indent 4 }}

That will solve the problem!

A: In my experiments, I replaced "legendFormat": "{{ pod }}", with "legendFormat": "{{ "{{ pod }}" }}", and it was very happy to return the syntax I needed (specifically for the grafana-operator GrafanaDashboard CRD).

A: Escaping gotpl placeholders is possible using backticks. For example, in your scenario, instead of using {{ pod }} you could write {{` {{ pod }} `}}.

A: Keeping the JSON file out of the ConfigMap and sourcing it within the ConfigMap works, but make sure to keep the JSON file out of the templates directory while using it with Helm, or else Helm will try to parse the {{ pod }} inside it.
doc_23536911
<pagination total-items="bigTotalItems" ng-model="bigCurrentPage" max-size="maxSize" class="pagination-sm" boundary-links="true" rotate="false" num-pages="numPages"></pagination>

Per this Stack Overflow question, "What are the advantages of using data- rather than x- prefix for custom attributes?", a better way of representing this directive would be as follows, which conforms to the HTML5 specification:

<div data-pagination data-total-items="totalItems" data-ng-model="currentPage" data-max-size="5" class="pagination-sm" data-boundary-links="true" data-rotate="false" data-ng-change="pageChanged()"></div>

A: Both are the same -> Angular directives. Use the shorter one.

A: HTML validation isn't all that important; sometimes validation errors can be safely ignored. However, just by adding a 'data-' prefix on all attributes, the editor will stop complaining about invalid HTML. Read more about it here.

A: data-ng-model="user.name" and ng-model="user.name" provide you with the same outcome. You can use either. You can replace 'data' with 'x' and get the same outcome as well. You add the 'data' prefix to have the attributes validated by HTML5 validators.
doc_23536912
The artefacting (image below) is only present when the last row of the video is within the Chrome viewport (so it disappears if the page is scrolled up). It manifests itself as stretching of the center row of pixels downwards, and appears to only affect some color channels. I have attempted changing the bitrate and cutting the last row from the source, thinking the issue could be on the server side, without any impact. The fact that the issue depends on the position in the viewport makes me suspect a glitch in Chrome itself. I have also attempted to force hardware decoding off in chrome://flags, and it does not solve the issue. Please submit your hypotheses on what could be the cause of this issue. Thanks.

Update #1

Here is the ffmpeg command line and logs:

export DISPLAY=:0 && ffmpeg -f x11grab -framerate 60 -video_size 1920x1080 -i :0.0+0,0 \
  -draw_mouse 0 -f dash -utc_timing_url https://time.akamai.com/?iso -streaming 1 \
  -seg_duration 2 -frag_duration 0.033 -fflags nobuffer -fflags flush_packets \
  -c:v h264 -preset ultrafast data/stream.mpd

And the logs:

ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora
--enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared libavutil 56. 31.100 / 56. 31.100 libavcodec 58. 54.100 / 58. 54.100 libavformat 58. 29.100 / 58. 29.100 libavdevice 58. 8.100 / 58. 8.100 libavfilter 7. 57.100 / 7. 57.100 libavresample 4. 0. 0 / 4. 0. 0 libswscale 5. 5.100 / 5. 5.100 libswresample 3. 5.100 / 3. 5.100 libpostproc 55. 5.100 / 55. 5.100 [x11grab @ 0x561ca34b9980] Stream #0: not enough frames to estimate rate; consider increasing probesize Input #0, x11grab, from ':0.0+0,0': Duration: N/A, start: 1618941693.853256, bitrate: N/A Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 60 fps, 1000k tbr, 1000k tbn, 1000k tbc Stream mapping: Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264)) Press [q] to stop, [?] 
for help [libx264 @ 0x561ca34c5300] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512 [libx264 @ 0x561ca34c5300] profile High 4:4:4 Predictive, level 4.2, 4:4:4 8-bit [libx264 @ 0x561ca34c5300] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=0 ref=1 deblock=0:0:0 analyse=0:0 me=dia subme=0 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=1 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=6 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=0 keyint=250 keyint_min=25 scenecut=0 intra_refresh=0 rc=crf mbtree=0 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=0 [dash @ 0x561ca34c3740] No bit rate set for stream 0 [dash @ 0x561ca34c3740] Opening 'data/init-stream0.m4s' for writing Output #0, dash, to 'data/stream.mpd': Metadata: encoder : Lavf58.29.100 Stream #0:0: Video: h264 (libx264), yuv444p, 1920x1080, q=-1--1, 60 fps, 15360 tbn, 60 tbc Metadata: encoder : Lavc58.54.100 libx264 Side data: cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1 [dash @ 0x561ca34c3740] Opening 'data/chunk-stream0-00001.m4s.tmp' for writing frame= 34 fps=0.0 q=15.0 size=N/A time=00:00:00.43 bitrate=N/A dup=5 drop=0 speed=0.836x frame= 65 fps= 64 q=15.0 size=N/A time=00:00:00.95 bitrate=N/A dup=5 drop=0 speed=0.929x frame= 96 fps= 62 q=15.0 size=N/A time=00:00:01.46 bitrate=N/A dup=5 drop=2 speed=0.955x frame= 126 fps= 62 q=15.0 size=N/A time=00:00:01.96 bitrate=N/A dup=5 drop=3 speed=0.962x frame= 157 fps= 62 q=15.0 size=N/A time=00:00:02.48 bitrate=N/A dup=5 drop=3 speed=0.973x frame= 188 fps= 61 q=15.0 size=N/A time=00:00:03.00 bitrate=N/A dup=5 drop=3 speed=0.98x frame= 217 fps= 61 q=15.0 size=N/A time=00:00:03.48 bitrate=N/A dup=5 drop=3 speed=0.977x frame= 247 fps= 61 q=15.0 size=N/A time=00:00:03.98 bitrate=N/A dup=6 drop=3 speed=0.976x [dash @ 
0x561ca34c3740] Opening 'data/stream.mpd.tmp' for writing [dash @ 0x561ca34c3740] Opening 'data/chunk-stream0-00002.m4s.tmp' for writing frame= 279 fps= 61 q=15.0 size=N/A t

A: Add the -vf format=yuv420p output option for YUV 4:2:0 chroma subsampling. This is the only widely supported chroma subsampling scheme for H.264. Your input pixel format is bgr0. Your output is yuv444p. ffmpeg tries to preserve as much fidelity as it can, so it automatically converts the input to the pixel format, among those supported by the selected encoder, that most closely resembles the source. In this case that is yuv444p (YUV 4:4:4), which is not universally supported.
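As an illustration, a sketch of the corrected command line — the input options here are assumptions reconstructed from the log above (x11grab capture of :0.0 at 60 fps, libx264, DASH muxer), not the asker's exact invocation:

```shell
# Assumed reconstruction of the capture command, with the pixel-format fix added
ffmpeg -f x11grab -framerate 60 -i :0.0+0,0 \
  -c:v libx264 -vf format=yuv420p \
  -f dash data/stream.mpd
```

With format=yuv420p in the filter chain, libx264 encodes 4:2:0 instead of 4:4:4, which browser and hardware decoders actually support.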
doc_23536913
The face detection routines in CoreImage naturally work faster on smaller images so I have been investigating using the aspectRatioThumbnail to generate the face data with the plan to scale it up to draw on the fullScreenImage representation. The reason I am doing this is that I have potentially 20-30 images to process so I want to reduce the task time. This may be a simple math problem but I am getting inaccurate results trying to map a point in one image to another. 90 x 120 image - CGPoint(64, 50) rightEyePosition to 480 x 640 image - CGPoint(331, 303) rightEyePosition (480 /90) * 64 = 341.333 - but it should be 331, yes? Am I doing it wrong? Update - a few more tests later. So perhaps it is just that the face data result varies slightly because of the different image resolutions? That would make sense: that there is not a scalable relationship between data results. I still wonder though: is my scaling math wrong above? Using CIDetectorAccuracyHigh useImageOptions: 0 ------------ aspectRatioThumbnail 90.000000 120.000000 orientation: 0 2013-01-18 12:33:30.378 SeqMeTestBed[9705:907] aspectRatioThumbnail: features { bounds = "{{23, 16}, {56, 56}}"; hasLeftEyePosition = 1; hasMouthPosition = 1; hasRightEyePosition = 1; leftEyePosition = "{43, 59}"; mouthPosition = "{51, 31}"; rightEyePosition = "{64, 50}"; } ------------ fullScreenImage 480.000000 640.000000 orientation: 0 2013-01-18 12:33:33.029 SeqMeTestBed[9705:907] fullScreenImage: features { bounds = "{{135, 81}, {298, 298}}"; hasLeftEyePosition = 1; hasMouthPosition = 1; hasRightEyePosition = 1; leftEyePosition = "{228, 321}"; mouthPosition = "{290, 156}"; rightEyePosition = "{331, 303}"; } ------------ fullResolutionImage 640.000000 480.000000 orientation: 0 2013-01-18 12:33:35.745 SeqMeTestBed[9705:907] fullResolutionImage: features { bounds = "{{195, 105}, {366, 366}}"; hasLeftEyePosition = 1; hasMouthPosition = 1; hasRightEyePosition = 1; leftEyePosition = "{356, 411}"; mouthPosition = "{350, 201}"; 
rightEyePosition = "{455, 400}"; // code used // - (void)detectFacialFeatures { NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil]; CIDetector* faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions]; NSDictionary *imageOptions = nil; UIImage *tmpImage; NSNumber* orientation; CIImage *ciImage; NSArray *array; NSMutableDictionary* featuresDictionary; Boolean useImageOptions = NO; printf("Using CIDetectorAccuracyHigh \n"); printf("useImageOptions: %d\n", useImageOptions); //-----------------aspectRatioThumbnail tmpImage = [[UIImage alloc] initWithCGImage:self.asset.aspectRatioThumbnail]; orientation = [NSNumber numberWithInt:tmpImage.imageOrientation]; printf("------------ aspectRatioThumbnail %f %f orientation: %d\n", tmpImage.size.width, tmpImage.size.height, [orientation integerValue]); ciImage = [CIImage imageWithCGImage:tmpImage.CGImage]; if (ciImage == nil) printf("----------!!!aspectRatioThumbnail: ciImage is nil \n"); imageOptions = [NSDictionary dictionaryWithObjectsAndKeys:orientation, CIDetectorImageOrientation, CIDetectorAccuracyHigh, CIDetectorAccuracy, nil]; if (useImageOptions) { array = [faceDetector featuresInImage:ciImage]; } else { array = [faceDetector featuresInImage:ciImage options:imageOptions]; } featuresDictionary = [self convertFeaturesToDictionary:array]; NSLog(@"aspectRatioThumbnail: features %@", featuresDictionary); //-----------------fullScreenImage tmpImage = [[UIImage alloc] initWithCGImage:self.asset.defaultRepresentation.fullScreenImage]; orientation = [NSNumber numberWithInt:tmpImage.imageOrientation]; printf("------------ fullScreenImage %f %f orientation: %d\n", tmpImage.size.width, tmpImage.size.height, [orientation integerValue]); ciImage = [CIImage imageWithCGImage:tmpImage.CGImage]; if (ciImage == nil) printf("----------!!!fullScreenImage: ciImage is nil \n"); imageOptions = [NSDictionary 
dictionaryWithObjectsAndKeys:orientation, CIDetectorImageOrientation, CIDetectorAccuracyHigh, CIDetectorAccuracy, nil]; if (useImageOptions) { array = [faceDetector featuresInImage:ciImage]; } else { array = [faceDetector featuresInImage:ciImage options:imageOptions]; } featuresDictionary = [self convertFeaturesToDictionary:array]; NSLog(@"fullScreenImage: features %@", featuresDictionary); //-----------------fullResolutionImage tmpImage = [[UIImage alloc] initWithCGImage:self.asset.defaultRepresentation.fullResolutionImage]; orientation = [NSNumber numberWithInt:tmpImage.imageOrientation]; printf("------------ fullResolutionImage %f %f orientation: %d\n", tmpImage.size.width, tmpImage.size.height, [orientation integerValue]); ciImage = [CIImage imageWithCGImage:tmpImage.CGImage]; if (ciImage == nil) printf("----------!!!fullResolutionImage: ciImage is nil \n"); imageOptions = [NSDictionary dictionaryWithObjectsAndKeys:orientation, CIDetectorImageOrientation, CIDetectorAccuracyHigh, CIDetectorAccuracy, nil]; if (useImageOptions) { array = [faceDetector featuresInImage:ciImage]; } else { array = [faceDetector featuresInImage:ciImage options:imageOptions]; } featuresDictionary = [self convertFeaturesToDictionary:array]; NSLog(@"fullResolutionImage: features %@", featuresDictionary); } - (NSMutableDictionary*)convertFeaturesToDictionary:(NSArray*)foundFaces { NSMutableDictionary * faceFeatures = [[NSMutableDictionary alloc] init]; if (foundFaces.count) { CIFaceFeature *face = [foundFaces objectAtIndex:0]; NSNumber* hasMouthPosition = [NSNumber numberWithBool:face.hasMouthPosition]; NSNumber* hasLeftEyePosition = [NSNumber numberWithBool:face.hasLeftEyePosition]; NSNumber* hasRightEyePosition = [NSNumber numberWithBool:face.hasRightEyePosition]; [faceFeatures setValue:hasMouthPosition forKey:@"hasMouthPosition"]; [faceFeatures setValue:hasLeftEyePosition forKey:@"hasLeftEyePosition"]; [faceFeatures setValue:hasRightEyePosition forKey:@"hasRightEyePosition"]; NSString * 
boundRect = NSStringFromCGRect(face.bounds); // NSLog(@"------------boundRect %@", boundRect); [faceFeatures setValue:boundRect forKey:@"bounds"]; if (hasMouthPosition){ NSString * mouthPosition = NSStringFromCGPoint(face.mouthPosition); [faceFeatures setValue:mouthPosition forKey:@"mouthPosition"]; } if (hasLeftEyePosition){ NSString * leftEyePosition = NSStringFromCGPoint(face.leftEyePosition); [faceFeatures setValue:leftEyePosition forKey:@"leftEyePosition"]; } if (hasRightEyePosition){ NSString * rightEyePosition = NSStringFromCGPoint(face.rightEyePosition); [faceFeatures setValue:rightEyePosition forKey:@"rightEyePosition"]; } } return faceFeatures; } A: Your math is correct based on the assumption that the thumbnail image retains all the facial data required for detection. That assumption does not hold since in a thumbnail image it is harder, even for a human, to recognize a face. Thus, with the higher resolution image the engine should return a more accurate face location that should be more tightly bound to the actual face. By simply scaling the values from the thumbnail image, it should still generally match to the detected face but you should definitely expect lower accuracy.
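The mapping arithmetic in the question can be checked on its own. A plain-Python sketch (not CoreImage) of the thumbnail-to-full-size scaling, using the values logged above:

```python
def scale_point(point, src_size, dst_size):
    """Map a point detected in an image of src_size into an image of dst_size."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return (point[0] * sx, point[1] * sy)

# rightEyePosition from the 90x120 thumbnail, mapped to the 480x640 image
scaled = scale_point((64, 50), (90, 120), (480, 640))
# roughly (341.3, 266.7), versus the detector's own (331, 303) on the full
# image: the gap comes from the detector results, not from the arithmetic.
```

This supports the answer's point: the scaling formula is fine, but the detector reports genuinely different feature positions at different resolutions, so scaled thumbnail results are only approximate.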
doc_23536914
{ searchdata = (from po in db.POModels from ac in db.AccountMasterModels.Where(c => c.AccountTypeID == 1) join pd in db.PODetailsModels on po.POID equals pd.POID where (pd.Remarks.Contains(Search) && (po.PODate >= FromDate && po.PODate <= ToDate)) && (po.FirmID == userinfo.FirmID) select new { PONumber =po.PONumber, PODate = po.PODate, CompanyName = ac.CompanyName, Remarks = po.Remarks, } ).ToList<POModels>(); ##here is the error ## } I am unable to return this to the list. How do I do this in this case? Please help! I also tried this else if (searchtype == "Remarks") { searchdata = (from po in db.POModels from ac in db.AccountMasterModels.Where(c => c.AccountTypeID == 1) join pd in db.PODetailsModels on po.POID equals pd.POID where (pd.Remarks.Contains(Search) && (po.PODate >= FromDate && po.PODate <= ToDate)) && (po.FirmID == userinfo.FirmID) select po).ToList(); It doesn't work either. A: Don't put <POModels> into your ToList method — just call ToList(). And if you want a list of POModel instead of an anonymous type, construct a POModel in the select: select new POModel { PONumber = po.PONumber, PODate = po.PODate, CompanyName = ac.CompanyName, Remarks = po.Remarks, } ).ToList(); But your POModel class probably doesn't contain a CompanyName property, in which case you should use the anonymous type: select new { PONumber = po.PONumber, PODate = po.PODate, CompanyName = ac.CompanyName, Remarks = po.Remarks, } ).ToList(); A: You have to use an anonymous object or define your own type.
Example public class NewPPModel { public int PONumber { get; set; } public DateTime PODate { get; set; } public string CompanyName { get; set; } public string Remarks { get; set; } } Then: searchdata = (from po in db.POModels from ac in db.AccountMasterModels.Where(c => c.AccountTypeID == 1) join pd in db.PODetailsModels on po.POID equals pd.POID where (pd.Remarks.Contains(Search) && (po.PODate >= FromDate && po.PODate <= ToDate)) && (po.FirmID == userinfo.FirmID) select new NewPPModel { PONumber =po.PONumber, PODate = po.PODate, CompanyName = ac.CompanyName, Remarks = po.Remarks } ).ToList();
doc_23536915
I'm developing a land-use model with a forested World and turtles that have the ability to convert the forest into crop land. Turtles (in this specific case companies) have the ability to move to a destination-patch within their range of mobility and clear the forest in a radius around them (to turn it into cropland). The goal is to have the companies choose their destination-patches based on the predicted profit from converting the patches around it, i.e., making economical decisions as to where to move in the landscape. Profit (or land-rent in my model) is a function of cost of conversion, maintenance cost, and potential penalties, subtracted from the patch-yield. Thus, the ideal destination-patch is a patch whose cluster of patches around it has the highest sum of predicted profits. I made a little figure to help visualizing the concept: concept of cluster profit in radius What I did so far So far, I have the following procedures relevant to seeking the maximum land-rent patch: 1) moving the companies to the patch with the maximum predicted land rent 2)reporting the maximum predicted land rent, using a to-report function. I've also tried a ask-patches function instead, but to no avail. Patches have a penalty associated to them, depending on whether they are part of a protected area, and can be owned-by a certain actor (depending on where they are located and who converts them). Problem/Goal What I need is a structure that asks each patch within a given radius of the turtle (company) to calculate the land-rent for each patch in another given radius. In other words, I want the turtle to be able to say: if I go to this patch xy which is in my radius of movement, I get the maximum land-rent out of converting all the patches around that patch xy. The code below does not produce any error messages, but from the turtles behavior, it doesn't seem it's running correctly either. 
Turtles move across the World randomly, and directly run into areas that are protected (incurring heavy fines for encroaching), causing them to go bankrupt. patches-own owned-by ;; "R" indicated it's unoccupied forest protected-area ;; whether the patch is part of a protected area encroachment-fine ;; the $-amount a turtle is fined for converting this patch of forest GUI inputs company-conversion-radius ;in what radius around themselves companies can convert land to move-to-max-rent-C ifelse any? patches in-radius (company-conversion-radius * 2 - 1) with [owned-by = "R"] [ ;here, companies 'scan' their environment for any patches that have forest (expressed through owned-by = "R"), if there are forested patches, companies move to the destination-patch that promises the highest profit (land-rent) let destination-C max-rent-C move-to destination-C ] ;; if no forest patch within their scanning-radius, they face the nearest forest patch anywhere and move towards it [ face min-one-of patches with [owned-by = "R"] [distance myself] move-to patch-ahead company-conversion-radius ] end to-report max-rent-C ask patches in-radius (company-conversion-radius * 2 - 1) [ let available-conversion-patches count patches in-radius company-conversion-radius with [owned-by = "R"] report max-one-of patches in-radius company-conversion-radius with [owned-by = "R"] [;;formula for calculating land rent] ] end I found this thread ask turtle to perform calculations from patch set, but it seemed not to quite answer my problem as it only ask for calculations around the turtle, not around the patches the turtle can reach. A: assuming that the radius is the same for all companies, I would probably have each patch own its value as a variable. then you can just choose the one with the highest in a seperate function. also it seems like the movemtent range for companies is now equal to the conversion range. 
I would expect these to be different things, so I would use different names even if they have the same value. I split it up in my code to make it clearer. You should also be aware that what you are asking can be computationally expensive: if N companies are asking M patches what the combined value of P surrounding patches is, you are doing N * M * P calculations. If your program is running slow, this is probably causing it. Generally I would think that your code would look something like: patches-own [ rent-value ;; the value of this specific tile HQ-value ;; the value of making this a destination patch ] to update-value ask patches [ ;; every patch updates its HQ-value by summing the rent-value of the patches within its conversion radius set HQ-value sum [rent-value] of patches in-radius company-conversion-radius ;; you could incorporate your protections and penalties here too ] end This should cause ALL patches to update their HQ-value. If you have few companies and many patches, it will be faster to ask turtles to do the following: to update-value ask turtles [ ask patches in-radius company-movement-range [ set HQ-value sum [rent-value] of patches in-radius company-conversion-radius ;... A: You're almost there (I think). I can't run your code, but from reading it, the max-rent-C procedure will correctly identify the patch in a given radius that returns the best profit. Your problem is that you aren't calling that procedure the right way. Imagine a turtle is at patch A, and has the opportunity to move to patch B but wants to select the B that gives the most profit. What that turtle has to do is to imagine itself at all the possible patch Bs and calculate the profit from all the patches around it while at that location. This is what you said in your question, but restating it for clarity. So instead of asking from A for the maximum, it has to evaluate each candidate patch and pick the one that yields the maximum.
Instead of: let destination-C max-rent-C try: let potential-destinations patches in-radius (company-conversion-radius * 2 - 1) with [owned-by = "R"] let destination-C max-one-of potential-destinations [max-rent-C]
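The decision procedure under discussion — pick the reachable patch whose surrounding cluster maximizes summed rent — can also be sketched outside NetLogo. A Python illustration with a hypothetical grid (names and data are illustrative, not from the model):

```python
import math

def cells_in_radius(center, cells, radius):
    """All grid cells within Euclidean distance `radius` of `center`."""
    return [c for c in cells if math.dist(c, center) <= radius]

def best_destination(origin, rent, move_range, convert_radius):
    """Pick the reachable cell whose surrounding cluster has the highest
    total predicted rent. `rent` maps (x, y) -> predicted land rent."""
    def cluster_profit(cell):
        return sum(rent[c] for c in cells_in_radius(cell, rent, convert_radius))
    candidates = cells_in_radius(origin, rent, move_range)
    return max(candidates, key=cluster_profit)

# Tiny hypothetical landscape: one high-rent cell far from the origin.
rent = {(0, 0): 1, (1, 0): 2, (5, 5): 10}
best_destination((0, 0), rent, move_range=10, convert_radius=1)  # -> (5, 5)
```

Note the nested radii: the outer loop runs over cells the company can reach, the inner sum over cells it could convert from there — exactly the N * M * P cost mentioned above.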
doc_23536916
https://xx.yyy.ir/xx/ff/addUser?name=%d8%b3%d9%84%d8%a7%d9%85 But when I use Uri to convert it to a URL and send it result = "https://xx.yyy.ir/xx/ff/addUser?name=%d8%b3%d9%84%d8%a7%d9%85" var client = new HttpClient { BaseAddress = new Uri(result.ToString()), }; var response = await client.GetAsync(""); it sends this request: https://xx.yyy.ir/xx/ff/addUser?name=سلام Why does this happen? How can I prevent it? A: This is what's causing your problem: new Uri(result.ToString()) Let's try to do this in a proper manner and see what happens. var builder = new UriBuilder("https://xx.yyy.ir/xx/ff/addUser") { Port = -1 }; var query = HttpUtility.ParseQueryString(builder.Query); query["name"] = "سلام"; builder.Query = query.ToString(); using var httpClient = new HttpClient(); var response = await httpClient.GetAsync(builder.ToString()); builder.ToString() returns https://xx.yyy.ir/xx/ff/addUser?name=%d8%b3%d9%84%d8%a7%d9%85 So basically, the above code boils down to this: using var httpClient = new HttpClient(); var response = await httpClient.GetAsync("https://xx.yyy.ir/xx/ff/addUser?name=%d8%b3%d9%84%d8%a7%d9%85"); Tested and verified on my computer.
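As a language-neutral sanity check (Python here, since the percent-encoding rule is the same everywhere): the UTF-8 percent-encoded form of سلام is exactly the query string seen in the working URL, hex-digit case aside.

```python
from urllib.parse import urlencode

# Percent-encode the UTF-8 bytes of the Arabic value, as a query-string builder does.
query = urlencode({"name": "سلام"})
print(query)  # name=%D8%B3%D9%84%D8%A7%D9%85
```

This is the encoding the UriBuilder/ParseQueryString approach above produces, and the one the plain `new Uri(result)` path loses.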
doc_23536917
I have encountered couple of problems: * *I have a class which extends SeekBar, and I have implemented onDraw and onMeasure methods as well, but I am not able to view that in layout editor in eclipse, here is the code for the custom view class: package com.custom.android.views; import android.content.Context; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import android.graphics.Path; import android.graphics.Path.Direction; import android.graphics.PathMeasure; import android.util.AttributeSet; import android.view.MotionEvent; import android.view.View; import android.widget.SeekBar; import android.widget.Toast; public class CustomSeekBar extends SeekBar { public CustomSeekBar(Context context) { super(context); // TODO Auto-generated constructor stub } public CustomSeekBar(Context context, AttributeSet attrs) { this(context, attrs,0); } public CustomSeekBar(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); } @Override public void draw(Canvas canvas) { // TODO Auto-generated method stub super.draw(canvas); } @Override protected synchronized void onMeasure(int widthMeasureSpec, int heightMeasureSpec) { // TODO Auto-generated method stub super.onMeasure(widthMeasureSpec, heightMeasureSpec); } } Here is my layout xml : <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" tools:context=".MainActivity" > <com.custom.android.views.CustomSeekBar android:layout_width="match_parent" android:layout_height="wrap_content" android:id="@+id/seekBar"/> </RelativeLayout> * *If I use canvas class to draw an arc or any shape, would that be a good 
starting point? What exactly is wrong with the Eclipse ADT and how could I use the onDraw method to give shape to that seekbar? A: Drawing a ProgressBar with any shape is pretty easy. With the SeekBar you have some complexity, since you have to achieve three different things: * *Draw the line *Draw the draggable thumb, if you want. *Handle the user interaction You have to think of it as an arc that is drawn inside a rectangle. So point 3 could be easy: just let the user move the finger in a horizontal line, or exactly over the arc, but considering only the x coordinate of the touch event. What does this mean, in short? OK, good news: you don't have to do anything, since that's the normal behavior of the base SeekBar. For the second point, you can choose an image for the handler, and write it in the corresponding position with a little maths. Or you can forget the handler for now, and just draw the seek bar as a line representing the full track, and another line over it representing the progress. When you have this working, if you want you can add the handler. And for the first point, this is the main one, but it's not hard to achieve.
You can use this code: UPDATE: I made some improvements in the code — the paints are now set up in an init method called from every constructor, so the view also works when inflated from XML, and the invalidate() call was removed from onDraw (invalidating from inside onDraw forces an endless redraw loop):

public class ArcSeekBar extends SeekBar {

    private Paint mBasePaint;
    private Paint mProgressPaint;
    private RectF mOval = new RectF(5, 5, 550, 550);
    private int defaultmax = 180;
    private int startAngle = 180;
    private int strokeWidth = 10;
    private int trackColor = 0xFF000000;
    private int progressColor = 0xFFFF0000;

    public ArcSeekBar(Context context) {
        super(context);
        init();
    }

    public ArcSeekBar(Context context, AttributeSet attrs) {
        super(context, attrs);
        init();
    }

    public ArcSeekBar(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        init();
    }

    private void init() {
        mBasePaint = new Paint();
        mBasePaint.setAntiAlias(true);
        mBasePaint.setColor(trackColor);
        mBasePaint.setStrokeWidth(strokeWidth);
        mBasePaint.setStyle(Paint.Style.STROKE);
        mProgressPaint = new Paint();
        mProgressPaint.setAntiAlias(true);
        mProgressPaint.setColor(progressColor);
        mProgressPaint.setStrokeWidth(strokeWidth);
        mProgressPaint.setStyle(Paint.Style.STROKE);
        setMax(defaultmax); // degrees
    }

    public void setOval(RectF mOval) { this.mOval = mOval; }
    public void setStartAngle(int startAngle) { this.startAngle = startAngle; }
    public void setStrokeWidth(int strokeWidth) { this.strokeWidth = strokeWidth; }
    public void setTrackColor(int trackColor) { this.trackColor = trackColor; }
    public void setProgressColor(int progressColor) { this.progressColor = progressColor; }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawArc(mOval, startAngle, getMax(), false, mBasePaint);
        canvas.drawArc(mOval, startAngle, getProgress(), false, mProgressPaint);
    }
}

Of course, you can and you should make everything configurable, by means of the constructor, or with some setters for the start and end angles, dimensions of the containing rectangle, stroke widths, colors, etc.
Also, note that the arc is drawn from 0 to getProgress, this number being an angle relative to the x axis, growing clockwise, so if it goes from 0 to 90 degrees, it will be something like: Of course you can change this: canvas.drawArc accepts any number as an angle, and it is NOT treated modulo 360, but you can do the maths and have it starting and ending at any point you want. In my example the beginning is at the 9 of a clock, and it sweeps 180 degrees, to the 3 of the clock. UPDATE I uploaded a running example to github
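The "little maths" for placing a draggable thumb, mentioned for point 2 above, amounts to converting progress into an angle and then into coordinates on the arc. A language-neutral sketch (function and parameter names are illustrative, not from the answer's code):

```python
import math

def thumb_position(progress_degrees, start_angle, cx, cy, radius):
    # Canvas.drawArc measures angles in degrees, clockwise from 3 o'clock, and
    # Android's y axis points down, so the y-down form of the circle equation
    # maps straight onto canvas coordinates.
    angle = math.radians(start_angle + progress_degrees)
    return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

# Arc from 9 o'clock (180 deg), centered at (100, 100), radius 50:
thumb_position(0, 180, 100, 100, 50)   # start of track, roughly (50, 100)
thumb_position(90, 180, 100, 100, 50)  # halfway: top of the arc, roughly (100, 50)
```

Draw the thumb bitmap centered on the returned point each time the progress changes.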
doc_23536918
I'm looping to get a specific city: cities.find(city => city.name === currentAddress.city.name) Which is O(n) Is there a more efficient way to do so than go through all 1500 elements every time? A: find() will break when it encounters a match. So it really boils down to how often you need to do this search. If you do it a lot, create a hashmap using city names as keys to allow an O(1) lookup. You can use a regular object or a Map const cityMap = new Map(cities.map(city => [city.name, city])); Usage: // returns undefined or city object const cityDetails = cityMap.get(currentAddress.city.name)
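A self-contained sketch of the lookup-table approach, with made-up city data standing in for the 1500-element list:

```javascript
// Hypothetical data shaped like the question's city list.
const cities = [
  { name: "Lyon", population: 513000 },
  { name: "Nantes", population: 309000 },
];

// Build the map once (O(n)); every subsequent lookup is O(1).
const cityMap = new Map(cities.map(city => [city.name, city]));

console.log(cityMap.get("Lyon").population); // 513000
console.log(cityMap.get("Atlantis"));        // undefined
```

If the list changes, rebuild the map (or update it with cityMap.set) rather than falling back to find().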
doc_23536919
here is my code: $range = range('A', 'Z'); $quota = 100; $quotaperclass = 25; for ($x = 0; $x < $quota/$quotaperclass ; $x++) { // Automated Create Class $sql= "INSERT INTO class(class_id,class_name,prodi_name) values('A25$x','".$range[$x]."','TK')"; $subquery = "select student_id from alocatedclass"; //Check student already in class $sql = "SELECT * FROM student WHERE student.prodi_name='TK' and student.student_id not in ({$subquery}) ORDER by student.student_id ASC LIMIT $quotaperclass"; $result = mysql_query($sql); while($data = mysql_fetch_array($result)){ // Should insert data to every class $sql = "insert into alocatedclass(student_id,class_id) values ('$data[student_id]','A25$x')"; } } The automated class creation works well, but inserting the student data into each class does not work. I want something like: Create Class A insert student 1 insert student 2 insert student 3 ..... insert student 25 Create Class B insert student 26 insert student 27 insert student 28 ..... insert student 50 Does anyone have a solution or better code? Let me know. If students could be assigned randomly, that would be even better. And please explain the code, because I am a PHP beginner. Thanks. A: Replace the line $sql = "insert into alocatedclass(student_id,class_id) values ('$data[student_id]','A25$x')"; with $sql = "insert into alocatedclass(student_id,class_id) values (".$data['student_id'].",'A25".$x."')"; Also note that inside the loop the INSERT string is only built, never executed — you need to call mysql_query($sql); right after it (in the code as posted, the class-creation INSERT at the top of the loop is likewise overwritten before it is ever run).
doc_23536920
Example Bar Chart Made in Excel
doc_23536921
I have a list of colours... Example: Blue Red Green Yellow Purple etc. I am using the following formula to detect if one of these colours has been used: =IF(SUMPRODUCT(--ISNUMBER(SEARCH(Table1[Colors],A1)))>0,"Cannot include a colour","") Where Table1[Colours] contains my list of colour text strings and A1 contains my first product description. (Dave Bruns @ ExcelJet has a great read for anyone wanting to use SUMPRODUCT/ISNUMBER/SEARCH combinations.) If a product description contains a colour specified in my Table1[Colours] list the formula produces "Cannot include a colour" to remind the user this is not allowed. Example: "Garmin Forerunner 10 Running Sportswatch Green" The first issue i am faced with is that my current formula procs when the product description contains a compound word containing a colour... Example: "Blackberry Z10 Smartphone" This inaccurately invalidates the description because the string "Black" in this text is not being used to describe the colour of the product. As the title suggests, my main issue lies with outsmarting 'complicated compound' words... ... my Table1[Colours] list does not simply contain the basic Primary, Secondary and Tertiary colours, but also more exotic ones like Coral, Fuchsia and Tan. This causes complication when the product description contains a word like "Stand". Why is this a problem you may be thinking? Stand contains one of my exotic colours 'Tan' S-Tan-d Unfortunately this also causes my formula to proc. (Annoying right?) The solution I am looking for is an addition to my existing formula =IF(SUMPRODUCT(--ISNUMBER(SEARCH(Table1[Colors],A1)))>0,"Cannot include a colour","") which accounts for the possible occurrence of a 'complicated compound' be this by a counter list of acceptable words (e.g. Table2[Exceptions] or by wild carding the search to match the exact colour with no Prefix or Suffix (this option would have to allow for the possibility of a dual colour separated by a / e.g. 
"Black/Red", so wild carding with certain punctuation exceptions?) ...Its all just a bit horrible and inconvenient. Any advice is appreciated. Thanks, Mr. J A: You need to search for the word boundaries. If you add a space to the beginning and end of the color, and also to the beginning and end of your description, that should do the trick depending on your data. So the search part of your formula could read: SEARCH(" " &Table1[Colors]& " "," "&A1&" ") Or, for your entire formula: =IF(SUMPRODUCT(--ISNUMBER(SEARCH(" "&Table1[Colors]&" "," "&A1&" ")))>0,"Cannot include a colour","") If you have hyphenated colors, e.g: blue-green, or a color like Cherry3, you would need to list them separately in your table. EDIT: As your comment suggests a much more complex situation, I would suggest a User Defined Function (UDF) for ease of maintenance. The following UDF can accept, as findtext a range, a single string, or an array constant consisting of several strings. If you use an array constant, you must use the semicolon ; and not a comma as the separator. A usage example: =IF(reMatch(Table1[Colors],A1),"Cannot include a colour","") The code uses the Regex token for Word Boundary. A word boundary is the point at which a character in the set [0-9A-Za-z_] is adjacent to any character not in that set, or next to the beginning or end of the string. That should cover all of your IF function examples, and more. 
Option Explicit Function reMatch(FindText As Variant, WithinText As String) As Boolean 'FindText could be a Range, an array constant, or a single item 'If FindText is an array constant, separate elements by semicolons, not commas Dim RE As Object Dim I As Long Dim C As Range Dim vFind As Variant reMatch = False Set RE = CreateObject("vbscript.regexp") With RE .Global = True .IgnoreCase = True vFind = FindText 'will create array if FindText is a range If IsArray(vFind) Then For I = 1 To UBound(vFind) .Pattern = "\b" & vFind(I, 1) & "\b" If .Test(WithinText) = True Then reMatch = True Exit Function End If Next I Else .Pattern = "\b" & vFind & "\b" If .Test(WithinText) = True Then _ reMatch = True End If End With End Function EDIT: As written, FindText can be a range of cells; however, that range must be a single column vertical range. If it is a horizontal range, the function will return the #VALUE! error. If necessary, the UDF could be modified to handle that by testing vFind and ensuring it is a 2D array. This would also enable the use of array constants with comma separators (the additional code is that which is seen between the first and last lines of code below. ... vFind = FindText 'will create array if FindText is a range 'make sure vFind is 2D (if array) On Error Resume Next J = UBound(vFind, 2) If Err.Number <> 0 Then vFind = WorksheetFunction.Transpose(vFind) On Error GoTo 0 If IsArray(vFind) Then ... A: =SUMPRODUCT( ($G$2:$G$159 >= $E$164) * ($G$2:$G$159 <= $F$164 ) * (EXACT($E165,$F$2:$F$159))) Better use "EXACT" instead of is number
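For comparison, the matching logic of the VBA UDF can be sketched outside Excel. This Python version (illustrative names, not part of the workbook) relies on the same \b word-boundary token the regex answer describes:

```python
import re

def re_match(find_terms, within_text):
    """Mirror of the VBA reMatch UDF: case-insensitive whole-word match."""
    return any(
        re.search(r"\b" + re.escape(term) + r"\b", within_text, re.IGNORECASE)
        for term in find_terms
    )

colours = ["Blue", "Red", "Tan", "Black"]
re_match(colours, "Aluminium Stand")           # False: "Tan" inside "Stand"
re_match(colours, "Blackberry Z10 Smartphone") # False: "Black" inside "Blackberry"
re_match(colours, "Case Black/Red")            # True: "/" is a word boundary
```

Note how the boundary token handles the dual-colour "Black/Red" case for free, since "/" is not a word character.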
doc_23536922
UI controls for previous and next page and also the input for page number so that user can jump right into the specified page?
doc_23536923
if (senderLabel.text = [tempMsg objectForKey:@"sender"]) { [cell.msgText setText:[tempMsg objectForKey:@"message"]]; [messTableView setSeparatorStyle:UITableViewCellSeparatorStyleNone]; [cell setSelectionStyle:UITableViewCellSelectionStyleNone]; //[cell setFrame:CGRectMake(0,0, size.width, size.height)]; [cell.msgText setFrame:CGRectMake(15,3, size.width, result)]; UIImage* balloon = [[UIImage imageNamed:@"grey.png"] stretchableImageWithLeftCapWidth:24 topCapHeight:15]; UIImageView *newImage = [[UIImageView alloc] initWithFrame:CGRectMake(5, 5, size.width+10, result+10)]; UIView *newView =[[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, cell.frame.size.width, cell.frame.size.height)]; [newImage setImage:balloon]; [newView addSubview:newImage]; [cell setBackgroundView:newView]; }else { [cell.msgText setText:[tempMsg objectForKey:@"message"]]; [messTableView setSeparatorStyle:UITableViewCellSeparatorStyleNone]; [cell setSelectionStyle:UITableViewCellSelectionStyleNone]; //[cell setFrame:CGRectMake(0,0, size.width, size.height)]; [cell.msgText setFrame:CGRectMake(15,3, size.width, result)]; //propriété du texte UIImage* balloon = [[UIImage imageNamed:@"green.png"] stretchableImageWithLeftCapWidth:24 topCapHeight:15]; UIImageView *newImage = [[UIImageView alloc] initWithFrame:CGRectMake(5, 5, size.width+10, result+10)]; UIView *newView =[[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, cell.frame.size.width, cell.frame.size.height)]; [newImage setImage:balloon]; [newView addSubview:newImage]; [cell setBackgroundView:newView]; } return cell; } A: if (senderLabel.text = [tempMsg objectForKey:@"sender"]) is not correct. You need to use isEqual in your comparison. 
if ([senderLabel.text isEqual:[tempMsg objectForKey:@"sender"]]) A: First of all, here if (senderLabel.text = [tempMsg objectForKey:@"sender"]) you are making an assignment, not a comparison; a comparison would be if (senderLabel.text == [tempMsg objectForKey:@"sender"]) However, that would check whether it is the exact same object in memory, which is probably not what you want. If you want to check whether the strings have the same text, you have to use [string isEqualToString:anotherString] A: To get an index path you can create one like this: NSIndexPath *indexpath = [NSIndexPath indexPathForRow:0 inSection:0]; and for comparison: if ([string compare:anotherString] == NSOrderedSame)
doc_23536924
I have a panel that I can turn on and off. Within this panel I have a div with a style attached to it. My problem is that when I set the panel's Visible property to false, the styled div is still there. What is the solution to this? Regards

<div id="ctl00_FullContentRegion_xFormRightPanel"> <div class="contactform form-orange" style="float: right; margin-left: 10px; width: 462px;"> </div> </div>

HTML:

<asp:Panel ID="xFormRightPanel" runat="server"> <div class="contactform form-orange" style="float: right; margin-left: 10px; width: 462px;"> <EPiServer:Property ID="Property3" PropertyName="XformRight" runat="server" /> </div> </asp:Panel>

A: Instead of changing its visibility, change its CSS display property, i.e. panelName.style.display = "none" // check your syntax. Items with the hidden visibility property are not shown but still take up space on the page, whereas items with the display:none property are hidden completely and take no space.

A: Is this happening on a postback? If so, make the panel invisible from the server side: <asp:Panel ID="xFormRightPanel" visible="false" runat="server"> </asp:Panel> This ensures that nothing within the panel is rendered on the browser at all. From what I can tell you may already be doing this, so just run over these points: * *Have you double-checked where you've closed the panel? *Does it contain the elements you expect? *You're not looking at another element (somewhere else in the HTML) that you've mistaken for this?
doc_23536925
Error Message: Manifest merger failed : Attribute application@appComponentFactory value=(android.support.v4.app.CoreComponentFactory) from [com.android.support:support-compat:28.0.0] AndroidManifest.xml:22:18-91 is also present at [androidx.core:core:1.0.0] AndroidManifest.xml:22:18-86 value=(androidx.core.app.CoreComponentFactory). Manifest: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" package="com.kanwarpreet.dealmybook"> <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"> <activity android:name=".activities.SplashActivity" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".activities.LoginActivity" /> <activity android:name=".activities.RegisterActivity" /> <activity android:name=".activities.HomeActivity" android:label="@string/title_activity_home" android:theme="@style/AppTheme.NoActionBar" /> <activity android:name=".activities.BookDetailsActivity" android:label="@string/title_activity_book_details" android:theme="@style/AppTheme.NoActionBar"/> <activity android:name=".activities.AddBookActivity" /> </application> </manifest> Build.Gradle: apply plugin: 'com.android.application' android { compileSdkVersion 28 defaultConfig { applicationId "com.kanwarpreet.dealmybook" minSdkVersion 21 targetSdkVersion 28 versionCode 1 versionName "1.0" testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 
'com.android.support:appcompat-v7:28.0.0' implementation 'com.android.support.constraint:constraint-layout:1.1.3' implementation 'com.android.support:support-v4:28.0.0' implementation 'com.google.android.material:material:1.0.0' implementation 'com.jakewharton:butterknife:10.1.0' annotationProcessor 'com.jakewharton:butterknife-compiler:10.1.0' testImplementation 'junit:junit:4.12' androidTestImplementation 'com.android.support.test:runner:1.0.2' androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2' }

A: After hours of struggling, I solved it by including the following within app/build.gradle:

android { compileOptions { sourceCompatibility JavaVersion.VERSION_1_8 targetCompatibility JavaVersion.VERSION_1_8 } }

Put these flags in your gradle.properties:

android.enableJetifier=true android.useAndroidX=true

Changes in build.gradle:

implementation 'androidx.appcompat:appcompat:1.0.2' implementation 'androidx.constraintlayout:constraintlayout:1.1.3' implementation 'androidx.legacy:legacy-support-v4:1.0.0' implementation 'com.google.android.material:material:1.1.0-alpha04'

Refer to: https://developer.android.com/jetpack/androidx/migrate

A: The error explicitly says: [com.android.support:support-compat:28.0.0] AndroidManifest.xml:22:18-91 is also present at [androidx.core:core:1.0.0] AndroidX is the latest support library from Google; it contains all the components from the older appcompat versions. Do NOT use any numbered appcompat-v* artifact. Instead, use the equivalent component from the AndroidX libraries. Remove the numbered support libraries from your Gradle file and from your code wherever they are imported, then sync your Gradle. The component similarity table can be found here. Also, follow the steps mentioned in Migrating to AndroidX. Again, stop using any of the previous numbered appcompat versions; there's only AndroidX now. Hope this helps.
A: You have to move to AndroidX, because your project is already using features from it, so you need to migrate to AndroidX. Follow these snippets; look at the second snippet in particular.

A: Reason for this error: after the upgrade, androidx.core:core is accessed somewhere while your project is still not using AndroidX. So classes like CoreComponentFactory and many others are now found in two places, androidx.core:core and com.android.support:support-compat. That's why this error occurred. What is the solution? You should migrate to AndroidX. If you don't know about AndroidX, please read "What is AndroidX?" How to migrate your project: since Android Studio 3.2 (September 2018), there is a direct option to migrate an existing project to AndroidX. It refactors all packages automatically. Before you migrate, it is strongly recommended to back up your project. Existing project: * *Android Studio > Refactor Menu > Migrate to AndroidX... *It will run an analysis and open a Refactor window at the bottom. Accept the changes to be made. New project: put these flags in your gradle.properties: android.enableJetifier=true android.useAndroidX=true Check @Library mappings for the equivalent AndroidX packages. Check @Official page of Migrate to AndroidX.

A: Put these flags in your gradle.properties: android.enableJetifier=true android.useAndroidX=true

A: One suggestion to find the exact reason is to open the manifest file; at the bottom you will see a Merged Manifest tab where you will see the exact reason for the failure. See the image below.

A: Just add these lines to gradle.properties: android.enableJetifier=true android.useAndroidX=true

A: # Project-wide Gradle settings.
# IDE (e.g. Android Studio) users: Gradle settings configured through the IDE will override any settings specified in this file.
# For more details on how to configure your build environment visit
# http://www.gradle.org/docs/current/userguide/build_environment.html
# Specifies the JVM arguments used for the daemon process.
# The setting is particularly useful for tweaking memory settings.
org.gradle.jvmargs=-Xmx1536m
android.enableJetifier=true
android.useAndroidX=true
# When configured, Gradle will run in incubating parallel mode.
# This option should only be used with decoupled projects. More details, visit
# http://www.gradle.org/docs/current/userguide/multi_project_builds.html#sec:decoupled_projects
# org.gradle.parallel=true

A: I also faced this problem because I was using some external libraries in my project, and one of them had not been converted to AndroidX.

A: Add the code below to android/build.gradle under buildscript ext: googlePlayServicesVersion = "16.0.0" googlePlayServicesVisionVersion = "17.0.2" and the code below to gradle.properties: android.enableJetifier=true android.useAndroidX=true

A: I let Android Studio convert my RelativeLayout views to ConstraintLayout, so Android Studio added the com.android.support... dependency while I had added the androidx... one; when I removed the second one the error was gone. dependencies { implementation "androidx.constraintlayout:constraintlayout:2.1.0" implementation 'com.android.support.constraint:constraint-layout:2.0.4' } This was my error: Manifest merger failed : Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from [androidx.core:core:1.3.2] AndroidManifest.xml:24:18-86 is also present at [com.android.support:support-compat:28.0.0] AndroidManifest.xml:22:18-91 value=(android.support.v4.app.CoreComponentFactory). Suggestion: add 'tools:replace="android:appComponentFactory"' to <application> element at AndroidManifest.xml:7:3-26:17 to override.
A: implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'com.android.support:appcompat-v7:28.0.0' implementation 'com.android.support:support-v4:28.0.0' implementation 'com.android.support:recyclerview-v7:28.0.0' implementation 'com.android.support:design:28.0.0' testImplementation 'junit:junit:4.12' androidTestImplementation 'com.android.support.test:runner:1.0.2' androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'

A: In the manifest, add tools:replace="android:theme" to your application element.

A: I resolved the problem by removing implementation 'com.android.support.constraint:constraint-layout:2.0.4' from the app build.gradle and using implementation 'androidx.constraintlayout:constraintlayout:1.1.3' implementation 'androidx.legacy:legacy-support-v4:1.0.0' implementation 'com.google.android.material:material:1.1.0-alpha04' instead.
doc_23536926
<?php $data = ['first_name' => 'ben'] ?> <?php $sql = "INSERT INTO names (first_name) values (?);" ?> <?php $statement = $pdo->prepare($sql); ?> <?php $statement->execute([$data]); ?> A: PDO has two different ways to bind parameters. The first is positional. In this case, the array you pass to execute() should be an indexed array, with values in the same order that you want them to bind to the question marks: $sql = "INSERT INTO table (col1, col2) values (?, ?)"; $data = ['value for col1', 'value for col2']; Note the values must be in the same order that they're going to be used: $data = ['value for col2', 'value for col1']; // This won't work, wrong order! The alternative (and in my opinion, superior) method is to use named parameters. Here, you need to use an associative array with a key named the same as your parameter placeholder. $sql = "INSERT INTO table (col1, col2) values (:col1, :col2)"; $data = ['col1' => 'value for col1', 'col2' => 'value for col2']; The order of these now does not matter because they're keyed by the array name instead of the position: $data = ['col2' => 'value for col2', 'col1' => 'value for col1']; // Still good! Your problem (in addition to the extra array wrap that @Sammitch pointed out) is that you have mixed these two techniques together in an incompatible way -- you're using positional parameters, but have provided an associative array. So, in your case, you either need to use positional parameters and an indexed array: $data = ['ben']; $sql = "INSERT INTO names (first_name) values (?);"; $statement = $pdo->prepare($sql); $statement->execute($data); Or named parameters and an associative array: $data = ['first_name' => 'ben']; $sql = "INSERT INTO names (first_name) values (:first_name);"; $statement = $pdo->prepare($sql); $statement->execute($data);
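The positional-versus-named distinction described above is not unique to PDO. As an illustrative aside, Python's sqlite3 module supports the same two binding styles (`?` with an indexed sequence, `:name` with a mapping), which makes the idea easy to try outside PHP; the table and values below are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE names (first_name TEXT)")

# Positional style: '?' placeholders, bound by order from a sequence
# (the analogue of PDO's indexed array with '?').
conn.execute("INSERT INTO names (first_name) VALUES (?)", ["ben"])

# Named style: ':name' placeholders, bound by key from a mapping
# (the analogue of PDO's associative array with ':first_name').
conn.execute(
    "INSERT INTO names (first_name) VALUES (:first_name)",
    {"first_name": "alice"},
)

rows = [r[0] for r in conn.execute("SELECT first_name FROM names ORDER BY first_name")]
print(rows)
```

As with PDO, the two styles should not be mixed within one statement.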
doc_23536927
Reproducible example:

lvls <- c('a ', 'b ', 'c ')
set.seed(314)
raw <- data.frame(a = factor(sample(lvls,100, replace=T)), b = sample(1:100,100))
proc <- raw %>% mutate_each(funs(ifelse(is.factor(.), factor(as.character(trimws(.)), labels=unique(as.character(.))), .)))
str(proc)

gives

'data.frame': 100 obs. of 2 variables: $ a: int 1 1 1 1 1 1 1 1 1 1 ... $ b: int 31 31 31 31 31 31 31 31 31 31 ...

Which is wrong on two levels. The factor has no labels. Only the first observation is repeated 100 times.

A: mutate_if is your friend. If you don't care if you convert to character, you can just use raw %>% mutate_if(is.factor, trimws) which suggests that you can just reconvert to factor: raw %>% mutate_if(is.factor, funs(factor(trimws(.)))) If you want to maintain the type, you can use the more convoluted raw %>% mutate_if(is.factor, funs(`levels<-`(., trimws(levels(.))))) The base R equivalent would be raw[] <- lapply(raw, function(x){if (is.factor(x)) {levels(x) <- trimws(levels(x))} ; x}) though if it's a single variable and you know which, base is pretty clean: levels(raw$a) <- trimws(levels(raw$a))

Edit: Now forcats::fct_relabel (part of the tidyverse) makes changing levels with a function easier: raw %>% mutate_if(is.factor, fct_relabel, trimws) or for a single variable, raw %>% mutate(a = fct_relabel(a, trimws)) It will accept anonymous functions as well, including purrr-style ~trimws(.x) if you like.

A: something along these lines? l = lapply(raw, function(x) {if(is.factor(x)){x <- trimws(x)};x}) head(as.data.frame(l)) # a b #1 a 31 #2 a 55 #3 c 68 #4 a 18 #5 a 72 #6 a 64
doc_23536928
In my cPanel home page, I have an icon to set up Ruby on Rails, and I also want to use Ruby on Rails for my new websites. But I could not find any information about its version. So how can I find the installed Ruby on Rails version inside my cPanel? A: Try Rails::VERSION::STRING. It returns the current version of the rails gem.
doc_23536929
transactions.php <table border=0 cellspacing=0 width=100%> <tr> <td colspan="2">&nbsp;</td> </tr> <tr> <td width="30%" class="Mellemrubrikker">Transaction Number::</td> <td width="70%">24752734576547IN</td> </tr> <tr> <td width="30%" class="Mellemrubrikker">Weight:</td> <td width="70%">0.85 kg</td> </tr> <tr> <td width="30%" class="Mellemrubrikker">Length:</td> <td width="70%">543 mm.</td> </tr> <tr> <td width="30%" class="Mellemrubrikker">Height:</td> <td width="70%">156 mm.</td> </tr> <tr> <td width="30%" class="Mellemrubrikker">Width:</td> <td width="70%">61 mm.</td> </tr> <tr> <td colspan="2">&nbsp;</td> </tr> </table> index.php <?php $url = "http://localhost/htmlparse/transactions.php"; $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC); $output = curl_exec($ch); $info = curl_getinfo($ch); curl_close($ch); //print_r($output); echo $output; ?> This code gets whole html content from transactions.php . How to get data between <table> as an array value ? A: Try simple html dom from http://simplehtmldom.sourceforge.net/ If you don't mind to use python or perl you can use beautifulsoup or WWW-Mechanize A: I would use the Document Object Model rather than writing your own parsing code or (God forbid!) regular expressions. Here's an example in PHP: PHP Parse HTML code
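One of the answers above already points toward Python and BeautifulSoup; as a rough sketch of the same idea using only Python's standard-library html.parser, the text of each <td> can be collected and paired up row by row. The sample HTML here is a trimmed copy of the transactions.php table; in practice you would feed in the fetched page body instead:

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the text content of every <td> cell into self.cells."""
    def __init__(self):
        super().__init__()
        self.cells = []
        self._in_td = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
            self.cells.append("".join(self._buf).strip())

    def handle_data(self, data):
        if self._in_td:
            self._buf.append(data)

html = """
<table>
 <tr><td>Transaction Number::</td><td>24752734576547IN</td></tr>
 <tr><td>Weight:</td><td>0.85 kg</td></tr>
 <tr><td>Length:</td><td>543 mm.</td></tr>
</table>
"""

parser = TableExtractor()
parser.feed(html)

# Pair label cells with value cells: [('Weight:', '0.85 kg'), ...]
pairs = list(zip(parser.cells[0::2], parser.cells[1::2]))
print(pairs)
```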
doc_23536930
import numpy as np import matplotlib.pyplot as plt from math import * K=np.array([np.random.choice([1,0]) for i in range(20)]) print(K) The OUTPUT gives: [1 1 0 1 1 0 0 1 1 0 0 0 1 0 0 1 0 0 0 0] I want to fill this list like above where the positions for ones and zeros are random but the above method does not render them to be equal in number. I understand why this happens. The "choice" randomly chooses from 1 and 0 so there is no reason for them to be equal in number in the list. But if I want them to be randomly chosen and still be equal in number (10 ones and 10 zeros for the above case) what do I do? A: You can do it like this: import numpy as np import matplotlib.pyplot as plt from math import * K = np.zeros(20, dtype=int) K[:10] = 1 np.random.shuffle(K) print(K) i.e. first create an array with equal numbers of contiguous 1s and 0s, then randomise the order. A: Build a list comprised of an equal number of zeroes and ones. Take a pseudo-random sample. Construct array. For example: from random import sample from numpy import array N = 20 list_ = [0] * (N//2) + [1] * (N//2) K = array(sample(list_, k=N)) print(K) Sample output: [1 1 0 1 0 0 1 0 0 0 0 1 0 1 1 1 1 0 1 0]
doc_23536931
MainActivity.kt

import androidx.appcompat.app.AppCompatActivity import android.os.Bundle import com.test.testapp.classes.ExampleClass import kotlinx.android.synthetic.main.activity_main.* class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) testMessage.text = "1" ExampleClass.writeText("2") } }

ExampleClass.kt

import com.test.testapp.MainActivity import kotlinx.android.synthetic.main.activity_main.* class ExampleClass { companion object{ fun writeText(textValue:String) { MainActivity().testMessage.text = textValue } } }

Android Studio error message: FATAL EXCEPTION: main Process: com.test.testapp, PID: 15819 java.lang.RuntimeException: Unable to start activity ComponentInfo{com.test.testapp/com.test.testapp.MainActivity}: java.lang.NullPointerException: Attempt to invoke virtual method 'android.content.pm.ApplicationInfo android.content.Context.getApplicationInfo()' on a null object reference

A: With the code MainActivity()... you aren't getting the Activity that has been loaded; you are constructing a new Activity that hasn't been shown yet, so the view doesn't exist. There are various ways to achieve what you want, even if the flow is questionable. Example: class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) testMessage.text = "1" ExampleClass.writeText("2",this) } } class ExampleClass { companion object{ fun writeText(textValue:String,mainActivity:MainActivity) { mainActivity.testMessage.text = textValue } } } I don't know exactly why you want to do that, but if you want to pass data between activities, fragments or services, check https://developer.android.com/guide/components/intents-filters

A: Inside the writeText(textValue: String) method you create a new instance of MainActivity (with MainActivity()), in which the view is null, instead of getting the existing one. You should not use a companion object for this. Could you explain the situation and why you need it? If it is necessary, you can do it like this: import androidx.appcompat.app.AppCompatActivity import android.os.Bundle import com.test.testapp.classes.ExampleClass import kotlinx.android.synthetic.main.activity_main.* class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) testMessage.text = "1" ExampleClass.writeText("2", testMessage) } } ExampleClass.kt import com.test.testapp.MainActivity import kotlinx.android.synthetic.main.activity_main.* import android.widget.TextView class ExampleClass { companion object{ fun writeText(textValue:String, textView: TextView) { textView.text = textValue } } }

A: Try like this: pass the MainActivity object to ExampleClass. class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) testMessage.text = "1" ExampleClass.writeText("2",this) } } And use the MainActivity object to access its properties. class ExampleClass { companion object{ fun writeText(textValue:String,mainActivity: MainActivity) { mainActivity.testMessage.text = textValue } } }
doc_23536932
{ "Version" : "2012-10-17", "Statement" : [ { "Sid" : "policyForSomething", "Effect" : "Deny", "Condition": { "StringNotEquals": { "aws:PrincipalArn": [ "arn:aws:sts::**********:assumed-role/####/USERG", "arn:aws:sts::**********:assumed-role/####/USER1", "arn:aws:sts::**********:assumed-role/####/USER2", "arn:aws:sts::**********:assumed-role/####/USER3", "arn:aws:sts::**********:assumed-role/####/USER4" ] } }, "Action" : "secretsmanager:*", "Resource" : "arn:aws:secretsmanager:us-west-2:*******:secret:/*" }] }

When I check it using the New Policy wizard, I don't see any error. But when I put it in the Resource Policy area for Secrets Manager, it always complains "This Resource policy contains a syntax error". Other than the fact that "AWS UI and error messages aren't always helpful", could anyone help me understand why this is an issue?
(So you can safely use '*' there) * *Principal with Allow: { "Version": "2012-10-17", "Statement": [{ "Sid": "policyForSomething", "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:sts::**********:assumed-role/####/USERG", "arn:aws:sts::**********:assumed-role/####/USER1", "arn:aws:sts::**********:assumed-role/####/USER2", "arn:aws:sts::**********:assumed-role/####/USER3", "arn:aws:sts::**********:assumed-role/####/USER4" ] }, "Action": "secretsmanager:*", "Resource": "*" }] } *NotPrincipal with Deny: { "Version": "2012-10-17", "Statement": [{ "Sid": "policyForSomething", "Effect": "Deny", "NotPrincipal": { "AWS": [ "arn:aws:sts::**********:assumed-role/####/USERG", "arn:aws:sts::**********:assumed-role/####/USER1", "arn:aws:sts::**********:assumed-role/####/USER2", "arn:aws:sts::**********:assumed-role/####/USER3", "arn:aws:sts::**********:assumed-role/####/USER4" ] }, "Action": "secretsmanager:*", "Resource": "*" }] } Reference: * *https://docs.amazonaws.cn/en_us/IAM/latest/UserGuide/reference_policies_grammar.html
doc_23536933
struct Foo { void do_this(int x) {} }; struct Bar { void do_that(int x) {} }; struct FooBarUtil { static int get_this_n_that() { return 0; } }; struct FooBar { template<typename FOO_TYPE, typename BAR_TYPE> static void foo_bar(FOO_TYPE foo, BAR_TYPE bar) { auto x = FooBarUtil::get_this_n_that(); foo->do_this(x); bar->do_that(x); } }; int main() { auto foo = std::make_shared<Foo>(); auto bar = std::make_shared<Bar>(); FooBar::foo_bar(foo, bar); } I would like to make FooBarUtil a template argument of FooBar::foo_bar(...) for ease of making unit tests so I have changed the relevant code to: template<typename FOO_TYPE, typename BAR_TYPE, typename FOO_BAR_UTIL_TYPE> static void foo_bar(FOO_TYPE foo, BAR_TYPE bar) { auto x = FOO_BAR_UTIL_TYPE::get_this_n_that(); foo->do_this(x); bar->do_that(x); } I then have to update the usage code as well: FooBar::foo_bar<Foo, Bar, FooBarUtil>(foo, bar); In fact, my foo_bar would need more number of template arguments thus making the caller code too lengthy (due to naming all template arguments). I would expect something like: FooBar::foo_bar(foo, bar, FooBarUtil); But it definitely doesn't work I know. Are there any ways to workaround this? A: If you're OK with having to name FooBarUtil at the call site, just make it the first template parameter. Optional and deduced template parameters come after the ones you need to specify: template<typename FOO_BAR_UTIL_TYPE, typename FOO_TYPE, typename BAR_TYPE> static void foo_bar(FOO_TYPE foo, BAR_TYPE bar); Which you call with FooBar::foo_bar<FooBarUtil>(...).
doc_23536934
Today I was having a cup of coffee, opened Android Studio and opened my project. Then I saw this. It was fine yesterday, but today I suddenly came across this ... I have tried cleaning my project and rebuilding it again, but it does not work. Please help... I am 32 now; I guess after 51 years I will be 83.. not sure I will still know who I am, and only if I survive.
doc_23536935
the featgen.py code is below: import os import sys import cPickle as pickle import numpy as np import pandas as pd from pandas import DataFrame from pprint import pprint import csv import talib from talib import abstract from talib import common from talib import func from featsel import Feature_Sel class Feature_Gen(): ############################################################################### def __init__(self, csv_path = './data/ZJIFMI201210-201410.csv', pkl_path = './data/data.pkl', resample_time = "10min"): self.csv_path = csv_path self.resample_time = resample_time self.pkl_path = os.path.join("data", "data_{}.pkl".format(resample_time)) ############################################################################### def feature_gen(self): if os.path.exists(self.pkl_path): print 'read data from:', self.pkl_path data = pd.read_pickle(self.pkl_path) else: print 'read data from:', self.csv_path lines = sum(1 for _ in csv.reader(open(self.csv_path))) rs_num = 10000 col_names = ['Date', 'Time', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adjust'] rs = pd.read_csv( self.csv_path, header = None, index_col = 0, names = col_names, parse_dates = {'Timestamp':['Date', 'Time']}, # skiprows = lines-rs_num ) # print rs.head(10) # print rs.tail(10) ############################################################################# ## resample if self.resample_time is None: data = rs.ix[:, 0:5] # print data.shape # OHLCV save_pkl_path = os.path.split(self.pkl_path)[0]+'/data.pkl' else: ***tt1 = rs.Close.resample(self.resample_time, how = 'ohlc')*** Volume = rs.Volume.resample(self.resample_time, how = 'sum') tt1['volume'] = Volume data = tt1.dropna() # print data.shape # ohlcv print 'resample_time:', self.resample_time save_pkl_path = os.path.split(self.pkl_path)[0]+'/data_'+self.resample_time+'.pkl' print save_pkl_path ##################################### Feature Selection ##################################### 
###################################################################### need to discuss '''you can switch the feature selection function here''' data = Feature_Sel(data).feature_sel5() ############################################################################### # data = data.dropna() # data = data.fillna(method="bfill") ############################################################################### with open(save_pkl_path, "wb") as fp: pickle.dump(data, fp) ############################################################################### # import pylab as pl # pl.plot(data.values[:, 3]) # pl.savefig("./data/Close.png") return data if __name__ == '__main__': '''First, search the *pkl file, if not exist, search the *csv file.''' '''Default Parameters: csv_path = './data/ZJIFMI201210-201410.csv', pkl_path = './data/data.pkl', resample_time = None''' data = Feature_Gen().feature_gen() print data When run the code under windows, it returned below error: File "C:\Anaconda\sigming-task1-DEV\featgen.py", line 57, in feature_gen tt1 = rs.Close.resample(self.resample_time, how = 'ohlc') File "C:\Anaconda\lib\site-packages\pandas\core\generic.py", line 3032, in resample return sampler.resample(self).finalize(self) File "C:\Anaconda\lib\site-packages\pandas\tseries\resample.py", line 105, in resample raise TypeError('Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex') TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex Any good suggestions?
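No answer is recorded above, but the traceback itself narrows it down: resample() only works when the object's index is a DatetimeIndex, TimedeltaIndex or PeriodIndex, so rs evidently ended up with a plain index on that machine; the usual remedy is to make sure the parsed Timestamp column really is the (datetime) index, e.g. via pd.to_datetime plus set_index, before calling resample. Independently of pandas, the 10-minute bucketing that the OHLC resample performs can be sketched in plain Python; the tick data below is invented for illustration:

```python
from datetime import datetime, timedelta

def floor_to_bin(ts, minutes=10):
    """Truncate a timestamp down to the start of its N-minute bucket."""
    return ts.replace(minute=ts.minute - ts.minute % minutes,
                      second=0, microsecond=0)

def ohlc(rows, minutes=10):
    """rows: iterable of (datetime, price) pairs, assumed time-sorted.
    Returns {bucket_start: (open, high, low, close)}, roughly what the
    Close.resample(..., how='ohlc') call above computes per bucket."""
    buckets = {}
    for ts, price in rows:
        key = floor_to_bin(ts, minutes)
        if key not in buckets:
            buckets[key] = [price, price, price, price]  # o, h, l, c
        else:
            b = buckets[key]
            b[1] = max(b[1], price)   # high
            b[2] = min(b[2], price)   # low
            b[3] = price              # close = last price seen
    return {k: tuple(v) for k, v in buckets.items()}

t0 = datetime(2014, 10, 1, 9, 0)
ticks = [(t0 + timedelta(minutes=m), p)
         for m, p in [(0, 100.0), (3, 102.5), (7, 99.0), (9, 101.0), (12, 101.5)]]
print(ohlc(ticks))
```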
doc_23536936
If I have a form open and, within that form, I pop open a modal (with its own post action), the route that fires is the one for the open form, not the modal. The simplest example is an update form that includes a grid with a delete action on each line item. After prompting for the delete confirmation, the route that fires is the form's update, not the grid's line item delete. Any ideas? Do I have to move back to 2.0.3? Mahalo. Best, Joe
doc_23536937
model.Save(modelFilePath); Now I want to load it again and, e.g., continue training or just evaluate samples. I can see two ways this could be done. One works but is impractical; the second does not work.

* *I build the whole structure of my neural network from scratch again and then call the following method on it: model.Restore(modelFilePath); Indeed, this works.

*I create my model using the following static method: Function.Load(modelFilePath, DeviceDescriptor.GPUDevice(0)); This does not work.

After these steps I just create a trainer for the model, create a minibatchSource, and try to train the model the same way I did before saving it. But with the second strategy I get the following exception:

System.ArgumentOutOfRangeException: 'Values for 1 required arguments 'Input('features', [28 x 28 x 1], [, #])', that the requested output(s) 'Output('aggregateLoss', [], []), Output('lossFunction', [1], [, #]), Output('aggregateEvalMetric', [], [])' depend on, have not been provided. [CALL STACK] > CNTK::Internal:: UseSparseGradientAggregationInDataParallelSGD - CNTK::Function:: Forward - CNTK:: CreateTrainer - CNTK::Trainer:: TotalNumberOfSamplesSeen - CNTK::Trainer:: TrainMinibatch (x2) - CSharp_CNTK_Trainer_TrainMinibatch__SWIG_0 - 00007FFA34AE8967 (SymFromAddr() error: The specified module could not be found.)

It says that the input features have not been provided. I am using the input both when training and when creating the model from scratch: var input = CNTKLib.InputVariable(_imageDimension, DataType.Float, _featureName); var scaledInput = CNTKLib.ElementTimes(Constant.Scalar<float>(0.002953125f, _device), input); ... So I thought I would have to replace the input of the loaded model with the one I create for training and use when building the model from scratch, although the input is not different. But I got stuck trying this because I could not retrieve the input of the model object, which I would need for the replacement (I think).
model.FindByName(inputLayerName); just returns null, although I can clearly see in the debugger that the name matches the layer name in the model's "Inputs" list. Consequently, I do not know how to properly load a saved model. I hope someone can help me.

A: Luckily I just found the answer myself. I'll post it here because there are probably other CNTK beginners who might stumble over this issue or generally want to know how to load a model properly. The problem was that I did not use the same input object for training and for model creation. In other words, if I create my model with the mentioned static method, I still have to ensure that the input object in the model and the one used for training are the same. This should be possible in the following ways: * *Replace the input of the loaded model with your own input object and use that one for training as well. I did not test this, but it should work. *Extract the input of the loaded model and use that one for training. I just tested this and it works.
Here is the code I use: var labels = CNTKLib.InputVariable(new int[] {_classesNumber}, DataType.Float, _labelNa Variable input; Function model; if (File.Exists(_modelFile)) { model = Function.Load(_modelFile, DeviceDescriptor.GPUDevice(0)); input = model.Arguments.Single(a => a.Name == _featureName); } else { input = CNTKLib.InputVariable(_imageDimension, DataType.Float, _featureName); model = BuildNetwork(input); } var trainer = CreateTrainer(model, labels); IList<StreamConfiguration> streamConfigurations = new StreamConfiguration[] { new StreamConfiguration(_featureName, _imageSize), new StreamConfiguration(_labelName, _classesNumber) }; var minibatchSource = MinibatchSource.TextFormatMinibatchSource( Path.Combine(_ressourceFolder, _trainingDataFile), streamConfigurations, MinibatchSource.InfinitelyRepeat); TrainModel(minibatchSource, trainer, labels, input); One mistake I made in the beginning, too, was to use Variable layer = model.FindByName(inputLayerName) although I had to use Variable layer = model.Arguments.Single(a => a.Name == inputLayerName);
doc_23536938
From: [{ id: 1, name: 'House sth', desc: 'lorem ipsum', type: 'house' }, { id: 2, name: 'Building sth', desc: 'lorem ipsum', type: 'building' }, { id: 3, name: 'House two', desc: 'lorem ipsum', type: 'house' }]

To this result: { house: 2, building: 1 } (two of type === 'house' and one of type === 'building')

The following was my approach, but of course the values are NaN:

useEffect(() => { listings.map((listing, index) => { setPropertyCount(prevState => ({ ...prevState, [listing.type]: prevState[listing.type] + 1, })); }); }, []);
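No answer is recorded above, but two facts are certain: prevState[listing.type] is undefined the first time a type is seen, and undefined + 1 is NaN in JavaScript, so the accumulator needs a fallback like (prevState[listing.type] || 0) + 1; the whole count is also cheaper to compute in one pass than through repeated state updates. The counting itself is just a fold over the list; as a language-neutral sketch (in Python, with data mirroring the question's):

```python
from collections import Counter

listings = [
    {"id": 1, "name": "House sth",    "type": "house"},
    {"id": 2, "name": "Building sth", "type": "building"},
    {"id": 3, "name": "House two",    "type": "house"},
]

# One pass over the list; missing keys implicitly start at 0, the same
# effect as (acc[l.type] || 0) + 1 in a JavaScript reduce.
counts = Counter(item["type"] for item in listings)
print(dict(counts))
```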
doc_23536939
productList = (List<Object[]>) session.createSQLQuery(
    "SELECT User.username, User.email, Orders.p_id, Orders.o_id, Product.listed_price " +
    "FROM Orders " +
    "INNER JOIN User ON User.u_id = Orders.u_id " +
    "INNER JOIN Product ON Product.p_id = Orders.p_id " +
    "WHERE Product.p_id = '" + p_id + "' " +
    "ORDER BY User.username").list();

I have 3 tables: User, Product and Orders.

USER:
+------+----------+----------+---------+-------+------+
| u_id | username | password | contact | email | city |
+------+----------+----------+---------+-------+------+

PRODUCT:
+------+----------+--------------+------+-------------+
| p_id | category | listed_price | qty  | description |
+------+----------+--------------+------+-------------+

ORDERS:
+------+--------+------+------+-----------+
| o_id | date   | u_id | p_id | order_qty |
+------+--------+------+------+-----------+

I want to inner join the User table with Product, along with o_id (from ORDERS), in the following shape:

+----------+-------+------+------+--------------+
| username | email | p_id | o_id | listed_price |
+----------+-------+------+------+--------------+

In my action/view class I declare public List<Object[]> productList; to receive the object list from the controller class above.

public List<Object[]> productList;

public String listAllProduct() {
    HttpServletRequest request = (HttpServletRequest) ActionContext.getContext().get(ServletActionContext.HTTP_REQUEST);
    productList = orderDaoFactory.listProduct(Integer.parseInt(request.getParameter("p_id")));
    System.out.println("\t" + productList.get(0).toString());
    return SUCCESS;
}

In my JSP page I use this list of Object arrays, productList, with the iterator tag to iterate over all customers who have ordered the product.
<s:iterator value="productList">
  <tr>
    <td><h4><s:property value="username"/></h4></td>
    <td><h4><s:property value="email"/></h4></td>
    <td><h4><s:property value="p_id"/></h4></td>
    <td><h4><s:property value="o_id"/></h4></td>
    <td><h4><s:property value="listed_price"/></h4></td>
  </tr>
</s:iterator>

Challenges/Issues: I get no output at all, and no error when debugging either. I am now considering O/R mapping to associate the objects via joins instead. Please point out where I am going wrong; any suggestions are appreciated.

A:

productList = (List<Object[]>) session.createSQLQuery(
    "SELECT User, Orders, Product " +
    "FROM User, Orders, Product " +
    "INNER JOIN User ON User.u_id = Orders.u_id " +
    "INNER JOIN Product ON Product.p_id = Orders.p_id " +
    "WHERE Product.p_id = '" + p_id + "' " +
    "ORDER BY User.username").list();

Then you will get three objects per row: in each list entry, Object[0]=Users, Object[1]=Orders, Object[2]=Product. Now iterate over it.

after editing:

productList = (List<Object[]>) session.createQuery(
    "SELECT User, Orders, Product " +
    "FROM User, Orders, Product " +
    "INNER JOIN User ON User.u_id = Orders.u_id " +
    "INNER JOIN Product ON Product.p_id = Orders.p_id " +
    "WHERE Product.p_id = '" + p_id + "' " +
    "ORDER BY User.username").list();

what is the problem with this syntax?

A: Use HQL instead of SQL if you want to query your objects directly. Use session.createQuery(String) with the following query:

SELECT u.username, u.email, o.p_id, o.o_id, p.listed_price
FROM Orders as o
INNER JOIN User as u WITH u.u_id = o.u_id
INNER JOIN Product as p WITH p.p_id = o.p_id
WHERE p.p_id = ???
ORDER BY u.username
doc_23536940
A: It is possible. You have to use a WCF service for the inter-process communication. The service does not need to stop. I recommend that you take a look at WCF and do some tutorials on asp.net. In most asp.net tutorials they show how you can attach textboxes to a model.
doc_23536941
As shown in the figure (source code is given below), I get the orange line as a result, whereas I expect something like the green one. The sum of squared errors is 3918 (orange) vs. 377 (green). This is a factor-of-10 difference and should be far above any default tolerance values or machine precision. Strangely, I have similar data points, which are approximately shifted up or down with respect to the blue points. With those, the regression behaves very well. I could tweak the kwargs of the curve_fit function, since I have some more information about the problem. But why is the "free" regression with the default values so bad? Are there numerical instabilities during the iterative solving process, leading to far-off local minima? I would be very pleased if someone comes up with an explanation and a solution.

from numpy import array, linspace, piecewise
from scipy import optimize
import matplotlib.pyplot as plt

def piecewise_linear(x, x0, y0, k1, k2):
    return piecewise(x, [x < x0],
                     [lambda x: k1*x + y0 - k1*x0,
                      lambda x: k2*x + y0 - k2*x0])

x = array([130., 125., 115., 110., 105., 95., 85., 75., 65., 55., 45., 35., 25., 15., 5., 0.])
y = array([5., 35., 70., 90., 100., 115., 125., 135., 145., 155., 165., 175., 189., 199., 209., 213.])

xs = linspace(min(x), max(x), num=100)
ps = optimize.curve_fit(piecewise_linear, x, y)
# result: ps = array([-124.37010926, 393.31034095, -90.9686379, -1.35547041])
# i.e. x_0 is supposedly in the second quadrant
# OptimizeWarning: Covariance of the parameters could not be estimated

fig, ax = plt.subplots()
ax.plot(x, y, marker='o', linestyle='None')
ax.plot(xs, piecewise_linear(xs, *ps[0]), label='calculated')

# quick guess / approximate expectation
expected_ps = array([110, 90, -1.1, -4])
ax.plot(xs, piecewise_linear(xs, *expected_ps), label='approx. expected')
ax.legend()
plt.show()

sum_of_squared_errors = lambda params: sum((piecewise_linear(x, *params) - y)**2)
print('errors estimate:', sum_of_squared_errors(ps[0]))       # 3918
print('errors expected:', sum_of_squared_errors(expected_ps)) # 377
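For what it's worth, `curve_fit` starts every parameter at 1.0 when no initial guess is given, which for this model puts the breakpoint far from the data and can strand the optimizer in a poor local minimum. Supplying a rough starting point through the `p0` argument usually recovers the expected fit; the guess below is simply the "approximate expectation" values from the question, so treat this as a sketch rather than a general recipe:

```python
import numpy as np
from scipy import optimize

def piecewise_linear(x, x0, y0, k1, k2):
    # Two line segments meeting at (x0, y0).
    return np.piecewise(x, [x < x0],
                        [lambda x: k1*x + y0 - k1*x0,
                         lambda x: k2*x + y0 - k2*x0])

x = np.array([130., 125., 115., 110., 105., 95., 85., 75., 65., 55., 45., 35., 25., 15., 5., 0.])
y = np.array([5., 35., 70., 90., 100., 115., 125., 135., 145., 155., 165., 175., 189., 199., 209., 213.])

# Same call as before, but seeded with a rough guess instead of the all-ones default.
ps, _ = optimize.curve_fit(piecewise_linear, x, y, p0=[110, 90, -1.1, -4])
sse = np.sum((piecewise_linear(x, *ps) - y) ** 2)
print(ps, sse)  # the fit now tracks the green line, with a much smaller error
```

With the seeded start, the fitted breakpoint lands near the visible kink in the data instead of in the second quadrant.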
doc_23536942
Possible Duplicate: Problem running Thinking Sphinx with Rails 2.3.5

I'm running Rails 2.3.5. Every time I run rake ts:start or rake ts:rebuild, the rake task quits with the following:

Sphinx cannot be found on your system. You may need to configure the following settings in your config/sphinx.yml file:
* bin_path
* searchd_binary_name
* indexer_binary_name
rake aborted! key not found

I have Sphinx running, and I believe the sphinx.yml in config is correct:

bin_path: /usr/bin/searchd
searchd_binary_name: searchd
indexer_binary_name: sphinx-indexer

Sphinx seems to be running (started with the command: service searchd start). The error I get when I browse to a page that uses search is:

ThinkingSphinx::SphinxError in Jobs#index
Showing app/views/jobs/index.html.erb where line #30 raised:
unknown local index 'job_core' in search request

Rather than a connection error? Probably a multitude of problems here, but I'm stuck. Alternatively, I could rewrite the code I'm amending to use a different search function; if so, what's best?

A: bin_path should not include the actual binary names (as you're setting them with searchd_binary_name and indexer_binary_name) - so try it with just /usr/bin.

A: Comparing your sphinx.yml config to mine, it looks like my values for bin_path, searchd_binary_name and indexer_binary_name are expressed as strings, but that doesn't seem to matter. My indexer binary however is indexer rather than sphinx-indexer:

development:
  min_infix_len: 3
  config_file: "./config/development.sphinx.conf"
  searchd_log_file: "./log/searchd.log"
  query_log_file: "./log/searchd.query.log"
  pid_file: "./log/searchd.development.pid"
  bin_path: "/usr/local/bin"
  searchd_binary_name: "searchd"
  indexer_binary_name: "indexer"

So it may be worth just checking you've specified the correct binary names. This is in addition to what Pat said re not including the searchd binary name in the bin_path.
doc_23536943
https://regex101.com/r/eZ1gT7/945 For example: Testing | Hello World | Another test I want to be able to get Hello World. A: If it's always "Something | Something | Something", then .*\s\|(.*)\s\|.* would get whatever is in between the two |. https://regex101.com/r/eZ1gT7/947
doc_23536944
The TimePickerFragment class:

import java.util.Calendar;

import com.actionbarsherlock.app.SherlockDialogFragment;

import android.app.Dialog;
import android.app.TimePickerDialog;
import android.content.Context;
import android.os.Bundle;
import android.text.format.DateFormat;
import android.widget.TimePicker;

public class TimePickerFragment extends SherlockDialogFragment implements TimePickerDialog.OnTimeSetListener {

    private TimePickedListener mListener;
    static int hour;
    static int minute;
    static Context mContext;

    public static TimePickerFragment newInstance(Context context, TimePickedListener listener, Calendar now) {
        TimePickerFragment dialog = new TimePickerFragment();
        mContext = context;
        hour = now.get(Calendar.HOUR_OF_DAY);
        minute = now.get(Calendar.MINUTE);
        return dialog;
    }

    @Override
    public Dialog onCreateDialog(Bundle savedInstanceState) {
        return new TimePickerDialog(mContext, this, hour, minute, DateFormat.is24HourFormat(getActivity()));
    }

    @Override
    public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
        // when the time is selected, send it to the activity via its callback interface method
        Calendar c = Calendar.getInstance();
        c.set(Calendar.HOUR_OF_DAY, hourOfDay);
        c.set(Calendar.MINUTE, minute);
        mListener.onTimePicked(c);
    }

    public static interface TimePickedListener {
        public void onTimePicked(Calendar time);
    }
}

Selecting the time inside the main fragment:

public void selectTime(final TextView lblTime, final int position) {
    hideRight();
    timeFrag = TimePickerFragment.newInstance(getActivity(), new TimePickedListener() {
        @Override
        public void onTimePicked(Calendar time) {
            lblTime.setText(DateFormat.format("h:mm a", time));
        }
    }, now);
    timeFrag.show(getActivity().getSupportFragmentManager(), "timePicker");
}

The log output:

07-02 17:28:04.214: E/XXX(10341): Uncaught exception is:
07-02 17:28:04.214: E/XXX(10341): java.lang.NullPointerException
07-02 17:28:04.214: E/XXX(10341): at com.common.TimePickerFragment.onTimeSet(TimePickerFragment.java:47)
07-02 17:28:04.214: E/XXX(10341): at android.app.TimePickerDialog.tryNotifyTimeSet(TimePickerDialog.java:130)
07-02 17:28:04.214: E/XXX(10341): at android.app.TimePickerDialog.onClick(TimePickerDialog.java:115)
07-02 17:28:04.214: E/XXX(10341): at com.android.internal.app.AlertController$ButtonHandler.handleMessage(AlertController.java:166)
07-02 17:28:04.214: E/XXX(10341): at android.os.Handler.dispatchMessage(Handler.java:99)
07-02 17:28:04.214: E/XXX(10341): at android.os.Looper.loop(Looper.java:137)
07-02 17:28:04.214: E/XXX(10341): at android.app.ActivityThread.main(ActivityThread.java:4928)
07-02 17:28:04.214: E/XXX(10341): at java.lang.reflect.Method.invokeNative(Native Method)
07-02 17:28:04.214: E/XXX(10341): at java.lang.reflect.Method.invoke(Method.java:511)
07-02 17:28:04.214: E/XXX(10341): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
07-02 17:28:04.214: E/XXX(10341): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:558)
07-02 17:28:04.214: E/XXX(10341): at dalvik.system.NativeStart.main(Native Method)

A: You forgot to initialize private TimePickedListener mListener;. Change your newInstance method to:

public static TimePickerFragment newInstance(Context context, TimePickedListener listener, Calendar now) {
    TimePickerFragment dialog = new TimePickerFragment();
    mContext = context;
    mListener = listener;
    hour = now.get(Calendar.HOUR_OF_DAY);
    minute = now.get(Calendar.MINUTE);
    return dialog;
}

A: You didn't initialize private TimePickedListener mListener; anywhere, so it references null.

public static TimePickerFragment newInstance(Context context, TimePickedListener listener, Calendar now) {
    TimePickerFragment dialog = new TimePickerFragment();
    mContext = context;
    hour = now.get(Calendar.HOUR_OF_DAY);
    minute = now.get(Calendar.MINUTE);
    mListener = listener; // you missed this line
    return dialog;
}
doc_23536945
val contentByName: Map[String, Set[String]] = Map( ("idsFromDb1" -> Set("1", "2", "3")) , ("idsFromDb2" -> Set("2", "3", "6", "7")) , ("idsFromDb3" -> Set("4", "5", "6", "9", "10")) ) def foldOp(x: Set[String], y: (String, Set[String])): Set[String] = if (y._2.filter(_.size != 1).size == 0) x ++ y._2 else x val all0 = contentByName.foldLeft(Set[String]())(foldOp) val all1 = contentByName.foldLeft(Set[String]())((x: Set[String], y: (String, Set[String]))) => if (y._2.filter(_.size != 1).size == 0) x ++ y._2 else x) I would like to avoid defining the method foldOp() if possible, and just inline it. However, I have tried all sorts of variations and have not been successful. For example, the all1 line shows the following errors (ScalaIDE in Eclipse in Scala Worksheet): * *';' expected but ')' found. *not found: value x *not found: value x *not found: value x *not a legal formal parameter I googled and searched StackOverflow explicitly (I found it pretty difficult figure out how to search for this). I didn't find anything So, any assistance is greatly appreciated, including identifying this as a duplicate and pointing me to it (which might give me an understanding how to better search in the future). Thank you. UPDATE: Turns out I had an extra close parenthesis in my all1 case just prior to the =>. I almost always build these kinds of constructions up on multiple lines, and didn't in this case. Grrr. Additionally, I learned about a new approach, "case", to this from fotNelton. So, this was still very productive for me. 
A: You could do it like this: val contentByName: Map[String, Set[String]] = Map( "idsFromDb1" -> Set("1", "2", "3"), "idsFromDb2" -> Set("2", "3", "6", "7"), "idsFromDb3" -> Set("4", "5", "6", "9", "10") ) contentByName.foldLeft(Set.empty[String]) { case (acc, (key, value)) => if (acc.size == 1) acc else acc ++ value } Note that I'm using pattern match syntax for the "inline" method (better call it anonymous function), because that makes understanding the fold so much easier. EDIT: Meanwhile, OP has changed the semantics of the fold, but that shouldn't affect the methodology here.
doc_23536946
screen_space_vertex4 = local_vertex4 * world_view_projection_matrix

// Homogeneous coordinates to Cartesian coordinates.
screen_space_vertex4.x /= screen_space_vertex4.w
screen_space_vertex4.y /= screen_space_vertex4.w
screen_space_vertex4.z /= screen_space_vertex4.w

This puts screen_space_vertex4.z in the -1..1 range. I am trying to get the world-space distance between my camera and my vertex, but a simple linear mapping of near and far to -1..1 doesn't work. What is the math for this? I know near, far, fov and ratio are involved, but I can't find the proper equation.
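Assuming a standard OpenGL-style perspective projection (depth mapped to -1..1), the NDC depth depends only on near, far and the eye-space distance along the view axis — fov and aspect ratio cancel out of the z/w row, which is why the mapping is non-linear in depth but needs neither of them. A sketch of the mapping and its inverse:

```python
def eye_depth_to_ndc(d, near, far):
    # d is the positive eye-space distance along the view axis (near <= d <= far).
    return (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * d)

def ndc_to_eye_depth(ndc_z, near, far):
    # Inverse mapping: recover the eye-space distance from the -1..1 depth value.
    return (2.0 * far * near) / (far + near - ndc_z * (far - near))

near, far = 1.0, 100.0
print(ndc_to_eye_depth(-1.0, near, far))  # 1.0   (the near plane)
print(ndc_to_eye_depth(1.0, near, far))   # 100.0 (the far plane)
```

Note that this recovers the distance along the view direction, not the Euclidean camera-to-vertex distance; for the latter, the x and y eye-space coordinates are also needed.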
doc_23536947
Here's my code:

import json
import sqlite3

with open('file.json') as f:
    data = json.load(f)

# Open the file containing the SQL database.
with sqlite3.connect("ans.sqlite") as conn:
    # Create the table if it doesn't exist.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS donors(
            id varchar(40),
            channel varchar(9),
            value real,
            PRIMARY KEY (id)
        );"""
    )
    # Insert each entry from json into the table.
    keys = ["id", "channel", "value"]
    for entry in data:
        # Each key will default to None if it doesn't exist in the json entry.
        values = [entry.get(key, None) for key in keys]
        cmd = """INSERT OR IGNORE INTO donors VALUES(?, ?, ?);"""
        conn.execute("""DROP TABLE IF EXISTS donors;""")  # returns error if table doesn't exist
        conn.execute(cmd, values)
    conn.commit()
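One thing that stands out in the script above: `DROP TABLE IF EXISTS donors` runs inside the insert loop, right after the table was created, so every `INSERT` after the first drop fails with "no such table". If a fresh table is wanted on each run, the drop belongs once, before the `CREATE`. A sketch with an in-memory database and made-up entries (the data and table layout are placeholders matching the question):

```python
import sqlite3

# Placeholder data standing in for the contents of file.json.
data = [
    {"id": "a1", "channel": "web", "value": 10.0},
    {"id": "b2", "channel": "email"},  # missing "value" -> stored as NULL
]

with sqlite3.connect(":memory:") as conn:
    # Drop once, *before* creating the table -- not inside the insert loop,
    # where it deletes the table between inserts.
    conn.execute("DROP TABLE IF EXISTS donors")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS donors(
               id varchar(40),
               channel varchar(9),
               value real,
               PRIMARY KEY (id)
           )"""
    )
    keys = ["id", "channel", "value"]
    for entry in data:
        values = [entry.get(key) for key in keys]
        conn.execute("INSERT OR IGNORE INTO donors VALUES(?, ?, ?)", values)
    conn.commit()
    rows = conn.execute("SELECT COUNT(*) FROM donors").fetchone()[0]

print(rows)  # 2
```

With the drop hoisted out of the loop, both entries land in the table instead of the inserts erroring out.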
doc_23536948
"a" < "b" #=> true "a" > "b" #=> false "a" < "B" #=> false "A" < "B" #=> true "A" < "b" #=> true "A" < "z" #=> true "z" < "A" A: When checking for the condition it is converting into ASCII codes and then comparing the result. Here is the link "a" < "b" => true When it converts so a = 97 & b = 98 In ASCII And 97 < 98 which is true "a" > "b" => false When it converts so a = 97 & b = 98 In ASCII And 97 > 98 which is false "a" < "B" => false When it converts so a = 97 & B = 66 In ASCII And 97 < 66 which is false "A" < "B" => true When it converts so A = 65 & B = 66 In ASCII And 65 < 66 which is true "A" < "b" => true When it converts so A = 65 & b = 98 In ASCII And 65 < 98 which is true I hope you got my point A: As far as I can tell, both standard Ruby and Rubinius compare strings as they are saved in memory. In C-Ruby with: retval = memcmp(ptr1, ptr2, lesser(len1, len2)) and in Rubinius with: @data.compare_bytes(other.__data__, @num_bytes, other.bytesize) There are some additional checks (e.g. if other is also a String or if encodings are compatible), but when comparing "a" and "b", Ruby basically compares "a".bytes and "b".bytes. String#bytes returns an integer Array. In Ruby, Arrays aren't comparable by default, so you can launch class Array include Comparable end before playing with "a".bytes < "b".bytes in the console. Arrays, as strings, are compared according to the lexicographical order. As an example: class Array; include Comparable; end p "a".bytes # [97] p "b".bytes # [98] p "a".bytes < "b".bytes # true p "a" < "b" # true p "B".bytes # [66] p "a".bytes < "B".bytes # false When comparing ASCII strings, it fits the description provided by @AniketShivamTiwari. Finally, this behaviour isn't specific to Ruby. In a Linux folder, uppercase filenames are sorted before the lowercase ones (at least when LC_COLLATE="C"). A: It's a comparison of the byte value of the character. 
You can view the raw byte-value of your char with bytes method: 'B'.bytes => [66] 'a'.bytes => [97] So now you can see why 'B' is less than 'a'
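A practical consequence of this byte-value ordering is that every uppercase letter sorts before every lowercase one, which surprises people sorting mixed-case lists. When a case-insensitive order is wanted, `sort_by` with `downcase` (or `String#casecmp`) sidesteps the byte values:

```ruby
words = ["banana", "Cherry", "apple"]

# Default sort compares bytes, so "C" (67) sorts before "a" (97):
p words.sort                # => ["Cherry", "apple", "banana"]

# Case-insensitive ordering:
p words.sort_by(&:downcase) # => ["apple", "banana", "Cherry"]

# casecmp compares ignoring case, returning -1, 0 or 1:
p "a".casecmp("B")          # => -1
```

The byte-wise default is also why `"z" < "A"` is false: 122 is not less than 65.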
doc_23536949
(* generate points on a circle *)
pts = Table[{a Cos[t], a Sin[t], 0}, {t, 0, 2 Pi, 0.1}];
(* add last segment *)
pts = Append[pts, {a, 0, 0}];
(* build tr... *)
(* ... *)
(* draw *)
Graphics3D[GeometricTransformation[Line[pts], tr]]

Is there a better way to create the table so that the first point is repeated? The Append[] above looks bad. I am not using Circle[] because I need to transform the circle in a Graphics3D[]. I am not using ParametricPlot3D because, to my knowledge, I can't put that inside a GeometricTransformation[]. Thanks for any suggestions. Regards

A: Well, how about

segs = 64.;
pts = Table[{a Cos[t], a Sin[t], 0}, {t, 0, 2 Pi, 2 Pi/segs}];

which creates a list with segs+1 points, the last of which is the same as the first?

A: You could draw the curve as a faceless polygon:

pts = Table[{a Cos[t], a Sin[t], 0}, {t, 0, 2 Pi, 0.1}];
Graphics3D[GeometricTransformation[{FaceForm[], EdgeForm[Thin], Polygon[pts]}, tr]]

or

Graphics3D[{FaceForm[], EdgeForm[Thin], GeometricTransformation[Polygon[pts], tr]}]

A: If Append "looks bad," perhaps this is more aesthetic?:

pts = {##, #} & @@ pts

Or, if you are of a more obscure persuasion, perhaps:

ArrayPad[pts, {0, 1}, "Periodic"]
doc_23536950
Expected identifier or '(' " I've re-started the project, modeled after my old homework assignments that have used functions and am unable to alleviate the issue. Any and all suggestions/recommendations are welcomed after 2 days of stressing on this. Here's the beginning of my code to just before main. I can include more if needed. Many thanks! #include <stdio.h> /* constants */ #define STD_HOURS 40.0 /* hours per week */ #define SIZE 5 /* employees to process */ #define OT 1.5 /* for overtime calculation */ #define AVERAGE 5 /* for obtaining averages */ /* function to obtain hours worked for employees */ void getHours (long int clock_number[]); float hours_worked[]; /* array for hours worked */ { printf("Enter the number of hours worked for employee #%d: ", d + 1); scanf("%f", &hours_worked[d]); return (getHours); } /* function call to calculate overtime hours */ void overtime_grosspay (float hours_worked[]); float d; /* overtime calculation variable */ float hourly_wage[]; /* initialize array */ float hours_worked[]; /* array for hours worked */ float overtime_hours[]; /* array for overtime pay */ float gross_pay[]; /* array for gross pay per employee */ { overtime_hours[d] = hours_worked[d] - STD_HOURS; gross_pay[d] = hourly_wage[d] * STD_HOURS + (hourly_wage[d] * overtime_hours[d] * OT); return (overtime_grosspay); } /* function call to calculate gross pay */ void regular_grosspay (float hours_worked[]); float d; /* regular gross variable */ float hourly_wage[]; /* initialize array */ float hours_worked[]; /* array for hours worked */ float overtime_hours[]; /* array for overtime pay */ float gross_pay[]; /* array for gross pay per employee */ { overtime_hours[d] = 0; gross_pay[d] = hours_worked[d] * hourly_wage[d]; return (regular_grosspay); } /* function call to print */ int main() { A: You have: /* function to obtain hours worked for employees */ void getHours (long int clock_number[]); float hours_worked[]; /* array for hours worked */ { printf("Enter the 
number of hours worked for employee #%d: ", d + 1); scanf("%f", &hours_worked[d]); return (getHours); } The void getHours(…) line ends with a semicolon; that is a function declaration. The float hours_worked[]; line is an array definition, but it doesn't specify the array size and is not prefixed with extern so it is invalid. The { therefore has no business in the code; it isn't part of a function definition. As a function definition, there's no d in scope, and returning a pointer to the function isn't going to work (wrong type, amongst other things — the function isn't supposed to return a value at all!), and there's no way to make the data available to the calling code. You probably need: /* function to obtain hours worked for one employee */ float getHours (long clock_number) { float hours_worked; printf("Enter the number of hours worked for employee #%ld: ", clock_number); scanf("%f", &hours_worked); return (hours_worked); } There will then be changes required to how you use this function. I've not even looked at the code beyond it. A: Change the following: void getHours (long int clock_number[]); float hours_worked[]; /* array for hours worked */ { to: void getHours (long int clock_number[]) { float hours_worked[]; /* array for hours worked */ (notice how in addition to moving the { the ; after the function declaration has been removed)
doc_23536951
Can you suggest a quick and dirty way to get this code running in .Net 2.0? public static HaarClassifierCascade Parse(XDocument xDoc) { HaarClassifierCascade cascade = null; XElement stages_fn; XElement seq_fn = null; /* sequence */ XElement fn; int n; int i = 0, j = 0, k = 0, l = 0; int parent, next; stages_fn = xDoc.Descendants(stageId).First(); n = stages_fn.Elements().Count(); cascade = new HaarClassifierCascade(n); seq_fn = xDoc.Descendants(sizeId).First(); string[] size = seq_fn.Value.Split(' '); int.TryParse(size[0], out cascade.OriginalWindowSize.Width); int.TryParse(size[1], out cascade.OriginalWindowSize.Height); XElement stage_fn = (XElement)stages_fn.FirstNode; while (null != stage_fn) { XElement trees_fn = stage_fn.Element(treeId); n = trees_fn.Elements().Count(); cascade.StageClassifiers[i].Classifiers = new List<HaarClassifier>(n); for (j = 0; j < n; j++) { cascade.StageClassifiers[i].Classifiers.Add(new HaarClassifier()); cascade.StageClassifiers[i].Classifiers[j].HaarFeatures = null; } cascade.StageClassifiers[i].Count = n; j = 0; XElement tree_fn = (XElement)trees_fn.FirstNode; while (null != tree_fn) { HaarClassifier classifier; int lastIndex; classifier = cascade.StageClassifiers[i].Classifiers[j]; classifier.Count = tree_fn.Elements().Count(); classifier.HaarFeatures = new List<HaarFeature>(classifier.Count); for (k = 0; k < classifier.Count; k++) { classifier.HaarFeatures.Add(new HaarFeature()); classifier.Left.Add(0); classifier.Right.Add(0); classifier.Threshold.Add(0); classifier.Alpha.Add(0); } classifier.Alpha.Add(0); lastIndex = 0; k = 0; XNode node_fn = tree_fn.FirstNode; while (null != node_fn) { if (!(node_fn is XElement)) goto next_node_fn; XElement feature_fn; XElement rects_fn; feature_fn = ((XElement)node_fn).Element(featureId); rects_fn = feature_fn.Element(rectsId); l = 0; XNode rect_fn = rects_fn.FirstNode; while (null != rect_fn) { if (!(rect_fn is XElement)) goto next_rect_fn; { string[] rectangleParams = 
((XElement)rect_fn).Value.Split(' '); Rectangle rectangle = new Rectangle(); rectangle.X = int.Parse(rectangleParams[0]); rectangle.Y = int.Parse(rectangleParams[1]); rectangle.Width = int.Parse(rectangleParams[2]); rectangle.Height = int.Parse(rectangleParams[3]); classifier.HaarFeatures[k].Rectangles[l] = new HaarRectangle(); classifier.HaarFeatures[k].Rectangles[l].Weight = float.Parse(rectangleParams[4]); classifier.HaarFeatures[k].Rectangles[l].Rectangle = rectangle; } l++; next_rect_fn: rect_fn = (XElement)rect_fn.NextNode; } for (; l < 3; ++l) classifier.HaarFeatures[k].Rectangles[l] = new HaarRectangle(); fn = feature_fn.Element(tiltedId); int.TryParse(fn.Value, out classifier.HaarFeatures[k].Tilted); fn = ((XElement)node_fn).Element(thresholdId); classifier.Threshold[k] = float.Parse(fn.Value); fn = ((XElement)node_fn).Element(left_nodeId); if (null != fn) /* left node */ classifier.Left[k] = int.Parse(fn.Value); else { fn = ((XElement)node_fn).Element(left_valId); classifier.Left[k] = -lastIndex; classifier.Alpha[lastIndex++] = float.Parse(fn.Value); } fn = ((XElement)node_fn).Element(right_nodeId); if (null != fn) /* right node */ classifier.Right[k] = int.Parse(fn.Value); else { fn = ((XElement)node_fn).Element(right_valId); classifier.Right[k] = -lastIndex; classifier.Alpha[lastIndex++] = float.Parse(fn.Value); } k++; next_node_fn: node_fn = (XElement)node_fn.NextNode; } j++; tree_fn = (XElement)tree_fn.NextNode; } fn = stage_fn.Element(stageThresholdId); cascade.StageClassifiers[i].Threshold = float.Parse(fn.Value); parent = i - 1; next = -1; fn = stage_fn.Element(parentId); parent = int.Parse(fn.Value); fn = stage_fn.Element(nextId); next = int.Parse(fn.Value); cascade.StageClassifiers[i].Parent = parent; cascade.StageClassifiers[i].Next = next; cascade.StageClassifiers[i].Child = -1; if (parent != -1 && cascade.StageClassifiers[parent].Child == -1) cascade.StageClassifiers[parent].Child = i; i++; stage_fn = (XElement)stage_fn.NextNode; } return 
cascade; } A: You could try to compile the mono sources in your .Net 2.0 project: https://github.com/mono Here are the sources for mono's implementation of System.Xml.Linq: https://github.com/mono/mono/tree/c7c906d69ac9e360ce3e7d517258b8eea2b962b2/mcs/class/System.Xml.Linq This ought to be feasible in theory since .Net 3 shares the same runtime with .Net 2. However I doubt this will be quick... A: XDocument and XElement run on top of LINQ (Language INtegrated Query). Therefor the code itself cannot run within the .NET 2.0 context. You could however try using Xml Serialization or XmlReader.
doc_23536952
Now I'm trying to get this collection to be called only once and then storing it in localStorage for read. for this I am trying to use this adapter (https://github.com/jeromegn/Backbone.localStorage) but I do not understand how. Sample code // models define([ 'underscore', 'backbone' ], function(_, Backbone) { var AzModel = Backbone.Model.extend({ defaults: { item: '', img:"img/gi.jpg" }, initialize: function(){ } }); return AzModel; }); // Collection define(['jquery', 'underscore', 'backbone', 'models/az'], function($, _, Backbone, AzModel) { var AzCollection = Backbone.Collection.extend({ localStorage: new Backbone.LocalStorage("AzStore"), // Unique name within your app. url : "json/azlist.json", model : AzModel parse : function(response) { return response; } }); return AzCollection; }); define(['jquery', 'underscore', 'backbone', 'collections/azlist', 'text!templates/karate/az.html'], function($, _, Backbone, AzList, AzViewTemplate) { var AzView = Backbone.View.extend({ id:"az", initialize: function() { this.collection = new AzList(); var self = this; this.collection.fetch().done(function() { //alert("done") self.render(); }); }, render : function() { var data = this.collection; if (data.length == 0) { // Show's the jQuery Mobile loading icon $.mobile.loading("show"); } else { $.mobile.loading("hide"); console.log(data.toJSON()); this.$el.html(_.template(AzViewTemplate, {data:data.toJSON()})); // create jqueryui $(document).trigger("create"); } return this; } }); return AzView; }); Does someone can point me the way. A: The Backbone local storage adapter overrides Collection.sync, the function which is used when you fetch the collection, or save models within the collection. If you set the Collection.localStorage property, it redirects the calls to the local storage instead of the server. This means you can have either or -- read and write to local storage or server -- but not both at the same time. 
This leaves you two options: * *Do the initial fetch, which populates the data from the server, and only then set the localStorage property: var self = this; self.collection.fetch().done(function() { self.collection.localStorage = new Backbone.LocalStorage("AzStore"); self.render(); }); *Set the Collection.localStorage property as you do now, and fetch the initial dataset manually using Backbone.ajaxSync, which is the alias given to Backbone.sync by the localstorage adapter: Backbone.ajaxSync('read', self.collection).done(function() { self.render(); } The latter option might be preferrable, because it doesn't prevent you from loading the data from the server later on, if required. You could quite neatly wrap the functionality as a method on the collection: var AzCollection = Backbone.Collection.extend({ localStorage: new Backbone.LocalStorage('AzStore'), refreshFromServer: function() { return Backbone.ajaxSync('read', this); } }); When you want to load data from the server, you can call that method: collection.refreshFromServer().done(function() { ... }); And when you want to use the local storage, you can use the native fetch: collection.fetch().done(function() { ... }); Edited to correct mistake in sample code for the benefit of drive-by googlers.
doc_23536953
switchTheme(themeCode: string) { document.body.className = ''; document.querySelector('body').classList.add(themeCode); } But I can't remove a class from HTML tag as below. switchTheme(themeCode: string) { document.html.className = ''; document.querySelector('html').classList.add(themeCode); } It gives following error in the first line of the function. Property 'html' does not exist on type 'Document'. Any help? A: That's because document does not have this html property. That's not a typescript issue, it's javascript, try to run this in your console: console.log(document.html); And you'll get undefined. To get a reference to the html part of the DOM you need to use the document.documentElement property (the type definition, MDN): console.log(document.documentElement);
doc_23536954
This is the HTML code: <div class="content-dx"> <div id="messages"></div> <input type="text" id="messageBox" maxlength="100" placeholder="Type your Message here"/> <button id="send" onclick="Invio()"><i class="fa fa-paper-plane"></i></button> </div> "message" is the box that contains the chat, the messages that arrive CSS code (if it can be useful, but I don't think): #messages{ background-color:yellow; margin-bottom: 32px; height: 86%; overflow: auto; text-align: left; overflow-x: hidden; margin-left: 5%; width: 90%; box-shadow: -1px 4px 28px 0px rgba(0,0,0,0.75); } What should I do to make the scroll automatically go down when there are messages? I tried some codes, they work BUT when I want to go back to reread the messages, it won't let me go. A: You can scroll to the bottom of the messages container with this code window.scrollTo(0,document.querySelector("#messages").scrollHeight); You need to tie this code to your event which controls received messages. I couldn't write that part of the code because your question was missing the event section. A: function scrollToBottom(div) { div.scrollTop = div.scrollHeight } const messages = document.querySelector("#messages") scrollToBottom(messages) A: Try this in javascript part: const messageBox = document.querySelector("#messages"); messageBox.animate({scrollTop:messageBox.scrollHeight}); If this failed, then use this one: const messageBox = document.querySelector("#messages"); messageBox.animate({scrollTop:messageBox[0].scrollHeight}); One of these will work. Check and give me the feedback whether it works or not.
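The "won't let me go back" effect usually comes from scrolling to the bottom on every update, regardless of where the reader is. A common fix is to check first whether the reader is already near the bottom and only then stick to it. Here the decision is extracted into a plain function so the threshold is explicit (the names and the 40px threshold are illustrative):

```javascript
// True when the reader is within `threshold` px of the bottom of the box.
function shouldAutoScroll(scrollTop, scrollHeight, clientHeight, threshold = 40) {
  return scrollHeight - scrollTop - clientHeight <= threshold;
}

// Usage sketch, on each incoming message:
//   const box = document.querySelector("#messages");
//   const stick = shouldAutoScroll(box.scrollTop, box.scrollHeight, box.clientHeight);
//   box.appendChild(newMessageNode);
//   if (stick) box.scrollTop = box.scrollHeight;

console.log(shouldAutoScroll(680, 1000, 300)); // reader at the bottom -> true
console.log(shouldAutoScroll(100, 1000, 300)); // scrolled back up    -> false
```

Capturing the decision *before* appending the new message matters: appending changes `scrollHeight`, so checking afterwards would always report the reader as scrolled up.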
doc_23536955
<?php session_start(); require_once("SubmitController/sale_property_controller.php"); $objproperty = new sale_property_controller(); if (count($_FILES['upload']['name']) > 0) { //Loop through each file for ($i = 0; $i < count($_FILES['upload']['name']); $i++) { //Get the temp file path $tmpFilePath = $_FILES['upload']['tmp_name'][$i]; //Make sure we have a filepath if ($tmpFilePath != "") { //save the filename $shortname = date('d-m-Y-H-i-s') . '-' . $_FILES['upload']['name'][$i]; //save the url and the file $filePath = "../img/saleproperty/" . $shortname; //Upload the file into the temp dir if (move_uploaded_file($tmpFilePath, $filePath)) { $_SESSION['Property_images'][] = $shortname; } } } } if(!$_SESSION['Property_images']){}else{ foreach ($_SESSION['Property_images'] as $items => &$item ) { $property_id=$_GET['id']; $objproperty->Updateproimg($property_id, $item); } } ?> this is my function function Updateproimg($property_id, $item) { $sql="update images_property set images='".$item."' where property_id='".$property_id."' "; $this->update($sql); } A: I feel you have to take count on count($_FILES['upload']) instead of $_FILES['upload']['name']; $count = count($_FILES['upload']) for($i=0; $i<=$count; $i++) { $tmpFilePath = $_FILES['upload'][$i]['tmp_name']; } Are you trying multiple file upload?
doc_23536956
I am using var dummydate: any = new Date().toUTCString(); to convert it to UTC. But if I then change dummydate from a string back to a Date, this does not preserve the UTC time zone: after the conversion you get the date in your local time zone. Hence setMinutes is applied in the local time zone, not in UTC. Himani

A: For date manipulation in JavaScript / TypeScript you can use Moment.js: MomentJS MomentJS Manipulation You can then do something like:

moment().add(7, 'days');

More info in this post: How to add 30 minutes to a JavaScript Date object?
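A Moment-free sketch of the same idea: a Date is internally just a UTC timestamp, and only the local accessors (like setMinutes) apply a time-zone conversion. Adding minutes via millisecond arithmetic (or the setUTCMinutes/getUTCMinutes pair) therefore never touches the local zone:

```javascript
// Adds minutes to a Date without any local-time conversion: the Date's
// internal value is a UTC epoch timestamp, so plain millisecond math is
// time-zone-neutral.
function addMinutesUtc(date, minutes) {
  return new Date(date.getTime() + minutes * 60 * 1000);
}

const d = new Date(Date.UTC(2020, 0, 1, 10, 30)); // 2020-01-01T10:30:00Z
console.log(addMinutesUtc(d, 45).toISOString());  // "2020-01-01T11:15:00.000Z"
```

The round trip through toUTCString() and back is what loses the UTC view; keeping the value as a Date and using UTC-based operations avoids that entirely.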
doc_23536957
My javascript <script type="text/javascript"> $(document).ready(function(){ for (i = 11; i < 19; i++) { $("#container").append("<div class=\"round-button\"><div class=\"round-button-circle\"><label><input type=\"checkbox\" value=" + i + "><span>" + i + "</span></label></div></div>"); } }); </script> The html is simply <div id="#container"></div> Update: The fiddle is working, but not working in html. What am I doing wrong? <!DOCTYPE html> <head> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1"> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <meta http-equiv="Content-Style-Type" content="text/css" /> <link rel="stylesheet" type="text/css" media="screen" href="css/styles.css" /> <link rel="stylesheet" type="text/css" href="css/bootstrap.min.css" crossorigin="anonymous"> <link rel="stylesheet" href="css/font-awesome.min.css"> <link rel="stylesheet" href="css/select2.css"> <link rel="stylesheet" href="css/select2-bootstrap.css"> </head> <body> <div id="container"> </div> <script type="text/javascript"> $(document).ready(function() { for (i = 11; i < 19; i++) { $("#container").append("<div class=\"round-button\"><div class=\"round-button-circle\"><label><input type=\"checkbox\" value=" + i + "><span>" + i + "</span></label></div></div>"); } }); </script> </body> </html> The css is in styles.css A: Remove # from id of div, Just give id to div like <div id="container"></div> Jsfiddle: http://jsfiddle.net/qtju592w/
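One additional thing worth checking in the standalone page (an assumption, since the answer only addresses the id): jsfiddle loads jQuery automatically, but the HTML above never includes it, so $ would be undefined outside the fiddle. jQuery has to be loaded before the inline script, e.g.:

```html
<!-- assumed fix: include jQuery (any 1.x build) before the inline script -->
<script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
```

The browser console would confirm this with an error like "$ is not defined".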
doc_23536958
I am trying to send data to a socket using AContext.Connection.Socket.Write(string). The data contains German characters like äöüßÄÜ. When the string I send is longer than 530 bytes, some of the special characters are replaced with blanks. Some are not. When I shorten the string length to 530, all characters are working fine. I tried to write the output in blocks of length 500, it works only when I do a Sleep(1) between each write. I would really like to send the data with one Write() command. How can I do this? This is the non-working code (encoding is set to IndyTextEncoding(437) in the OnConnect event): sOut := 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'+ 'jölhniönjhiouöinjhioö-jhnioönhoinhiuhbgnujnhb-öihn-öouhn'; //sout := leftstr(sout, 500); AContext.Connection.Socket.Write(sOut);
doc_23536959
Here are my two tables:

MachinesSummary

ID  Machine1  Machine2  ShareCount
----------------------------------
1   A         J         NULL
2   K         S         NULL
3   A         E         NULL
4   J         A         NULL
5   Y         U         NULL
6   S         W         NULL
7   G         A         NULL
8   W         S         NULL

The other table is MachineDetails:

ProcessNo  Machine
------------------
1          A
1          H
1          W
2          A
2          J
2          W
3          Y
3          K
4          J
4          A

I want to update ShareCount in the MachinesSummary table with the count of processes that both Machine1 and Machine2 share. For record 1 in the MachinesSummary table, I want the number of processes both machines share in MachineDetails, which is 1 in this case, while for record 4 the ShareCount is 2. I tried this:

UPDATE M
SET ShareCount = COUNT(DISTINCT X.ProcessNo)
FROM (SELECT ProcessNo, ',' + STRING_AGG(Machine,',') + ',' Machines
      FROM MachineDetails
      GROUP BY ProcessNo) X
INNER JOIN MachinesSummary M ON X.Machines LIKE '%'+ M.Machine1 + '%'
                            AND X.Machines LIKE '%'+ M.Machine2 + '%'

But I wonder if there is an easier, higher-performance way. The MachineDetails table has 250 million rows.

A: Well, I would use a self-join to get the number of combinations:

UPDATE M
SET ShareCount = num_processes
FROM MachinesSummary M JOIN
     (SELECT md1.Machine as machine1, md2.Machine as machine2,
             COUNT(*) as num_processes
      FROM MachineDetails md1 JOIN
           MachineDetails md2
           ON md1.processno = md2.processno
      GROUP BY md1.Machine, md2.Machine
     ) md
     ON md.machine1 = M.Machine1 AND md.machine2 = M.Machine2;

A: I would use an updatable CTE here:

WITH cte AS (
    SELECT Machine, COUNT(*) AS cnt
    FROM MachineDetails
    GROUP BY Machine
),
cte2 AS (
    SELECT ShareCount, COALESCE(t1.cnt, 0) AS m1_cnt, COALESCE(t2.cnt, 0) AS m2_cnt
    FROM MachinesSummary ms
    LEFT JOIN cte t1 ON t1.Machine = ms.Machine1
    LEFT JOIN cte t2 ON t2.Machine = ms.Machine2
)
UPDATE cte2
SET ShareCount = m1_cnt + m2_cnt;

The logic of the first CTE involving the MachineDetails table is to get the counts for every machine. The second CTE joins this counts CTE to the MachinesSummary table twice, once for each of machine 1 and 2.
Then, we update this second CTE and assign the sum of counts.
doc_23536960
var the_data = { title: 'This is title', description: 'this is description' } axios.post('/api/snippets/insert', the_data) .then(function (response) { console.log(response); }) .catch(function (error) { console.log(error); }); On the API end, I am using a simple PHP script and printing whole $_POST request data using this code var_dump($_POST); But this is returning empty array. A: I was running into this as well. Axios does not send the POST data in the form you're expecting. You need something like http://github.com/ljharb/qs and then use axios.post('/api/snippets/insert', Qs.stringify(the_data)). Please note this build on cdnjs uses Qs, not qs. Alternatives to qs would be e.g. JSON.stringify() or jQuery's $.param().
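A library-free sketch of the same fix: URLSearchParams produces the application/x-www-form-urlencoded body that PHP populates $_POST from, and axios serializes a URLSearchParams instance with the matching content type (by default axios sends a plain object as JSON, which never reaches $_POST):

```javascript
// URLSearchParams is available in browsers and in Node.js.
const the_data = { title: 'This is title', description: 'this is description' };
const body = new URLSearchParams(the_data);

console.log(body.toString());
// "title=This+is+title&description=this+is+description"

// axios.post('/api/snippets/insert', body)  // sent as urlencoded form data
```

Alternatively, the PHP side could keep accepting JSON by reading `json_decode(file_get_contents('php://input'), true)` instead of $_POST.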
doc_23536961
Is there a way to disable all the links in print preview, i.e. so they are no longer clickable?

A: I know this isn't disabling the links, but it is along the lines of print preview and links. Here's an option you can add to your CSS so links will print much nicer: http://davidwalsh.name/optimize-your-links-for-print-using-css-show-url

A: I wouldn't disable the links so much as style them the same way as regular text, since the intent of clicking Print Preview is to prepare to print the page, where the links won't work anyway. If you're still set on disabling links, you can try this jQuery code. Assume that your body has a class called print on it:

$("body.print a").each(function() {
  $(this).replaceWith($(this).text());
});

The above is pseudocode and was off the top of my head at 8:30 AM. I would just save yourself the trouble and make links look like surrounding text.
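If the goal is simply that links neither stand out nor respond during print preview, a CSS-only sketch is possible (assuming the preview pane honors print styles, as modern browsers do):

```css
/* In print preview, render links as plain text and make them inert.
   pointer-events only matters while previewing on screen; on paper the
   links cannot be clicked anyway. */
@media print {
  a, a:visited {
    color: inherit;
    text-decoration: none;
    pointer-events: none;
  }
}
```

This avoids mutating the DOM, so the page behaves normally again once the preview is closed.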
doc_23536962
I think it's a problem of configuration. I'm using Eureka for service discovery and Zuul as the gateway and entry point. When the user requests a protected service, he should be redirected to my auth-service (OAuth2/JWT). The token he gets after login should then be stored by Zuul (right?). Actually Zuul doesn't get the token, or doesn't store it. Do I have to do this on my own, or should Zuul and OAuth manage this and I just have bad configurations? Could someone show me how to configure this architecture, or point me to a current working guide for Spring Boot 2.0.3? I'm really frustrated and need help. I'm new to Spring, but have to learn it for work, and at the moment I'm just overwhelmed.

Additional info: I haven't created any views yet. I just defined some default controllers which return Strings and are secured by @PreAuthorize.

Gateway-Service:

GatewayServiceApplication.java

@SpringBootApplication
@EnableZuulProxy
@EnableDiscoveryClient
@Configuration
public class GatewayServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayServiceApplication.class, args);
    }
}

SecurityConfig.java

@Configuration
@EnableOAuth2Sso
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    public void configure(HttpSecurity http) throws Exception {
        http.antMatcher("/**")
            .httpBasic().disable()
            .authorizeRequests()
            .antMatchers("/", "/core/", "/core/login**", "/oauth/authorize", "/core/oauth/authorize", "/login")
            .permitAll()
            .anyRequest()
            .authenticated()
            .and()
            .formLogin().permitAll();
    }
}

Here I had a lot of different antMatchers.
application.properties

server.port=8000
spring.application.name=gateway-service
eureka.client.service-url.defaultZone=http://localhost:8001/eureka/
security.oauth2.sso.login-path=http://localhost:8000/core/login
security.oauth2.client.client-id=zuul
security.oauth2.client.client-secret=zuul
security.oauth2.client.access-token-uri=http://localhost:8000/core/oauth/token
security.oauth2.client.user-authorization-uri=http://localhost:8000/core/oauth/authorize
#security.oauth2.resource.user-info-uri=http://localhost:8000/core/user/me
security.oauth2.resource.user-info-uri=http://localhost:8000/core/secured
spring.thymeleaf.cache=false

I suspect the mistake is here.

Core-Service

CoreApplication.java

@SpringBootApplication
@EnableDiscoveryClient
@EnableResourceServer
public class CoreApplication {
    public static void main(String[] args) {
        SpringApplication.run(CoreApplication.class, args);
    }
}

AuthServerConfig

@Configuration
@EnableAuthorizationServer
public class AuthServerConfig extends AuthorizationServerConfigurerAdapter {

    @Autowired
    private BCryptPasswordEncoder passwordEncoder;

    @Override
    public void configure(AuthorizationServerSecurityConfigurer oauthServer) throws Exception {
        oauthServer.tokenKeyAccess("permitAll()")
            .checkTokenAccess("isAuthenticated()");
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory()
            .withClient("zuul")
            .secret(passwordEncoder.encode("zuul"))
            .authorizedGrantTypes("authorization_code")
            .scopes("user_info")
            .autoApprove(true)
            .redirectUris("http://localhost:8000/core/secured");
    }
}

SecurityConfig

@Configuration
@Order(1)
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .withUser("john")
            .password(passwordEncoder().encode("123"))
            .roles("USER");
    }

    @Bean
    public BCryptPasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}
application.properties

server.port=8003
spring.application.name=core
eureka.client.service-url.defaultZone=http://localhost:8001/eureka

OK, I changed a lot of things here in particular, so I have probably broken much of what the older guides described. (Sorry, my English is bad!)
doc_23536963
Setup I create fakeRequest,fakeRequestNoHeaders thus: // create fake request fakeRequest := new(web.Request) fakeRequest.Request = httptest.NewRequest("GET", fakeServer.URL, nil) fakeRequestNoHeaders := new(web.Request) fakeRequestNoHeaders.Request = fakeRequest.Request // give fakeRequest some headers fakeRequest.Header.Add("Authorization", "Bearer ksjaf;oipyu7") fakeRequest.Header.Add("Scope", "test") Sanity Test I expect, of course, that fakeRequest.Header != fakeRequestNoHeaders.Header. I write that test: t.Run("HeadersSanityTest", func(t *testing.T) { assert.NotEqualf(t, fakeRequest.Header, fakeRequestNoHeaders.Header, "fakeRequest,fakeRequestNoHeaders share the same header state") Result of test It fails. Why is this and how can I achieve what I'm trying? UPDATE: I found the culprit: the underlying http.Request, returned by httptest.NewRequest, is actually a pointer. Header simply belongs to that Request. The problem now reduces down to "How to deep-copy that Request." A: The issue was, indeed, not with the Header field, but instead the Request field, which was a pointer. (Oh no! I accidentally shallow-copied) The Solution I recalled, in one of my earlier tests, a method that I wrote specifically to get around this: func makeBasicRequest() *web.Request { baseReq := httptest.NewRequest(http.MethodPost, "[some url here]", nil) req := new(web.Request) req.Request = baseReq return req } I basically just brought it into this test, and used it, hitting it once per fake request that I needed.
doc_23536964
If the status of this CTime object is null, the return value is an empty string. * *But how to set this status, is it even possible? *If it isn't possible, I guess boost::optional<CTime> would be a good alternative? A: CTime is just a wrapper for a __time64_t. When you call format it does this: inline CString CTime::Format(_In_z_ LPCTSTR pFormat) const { if(pFormat == NULL) { return pFormat; } TCHAR szBuffer[maxTimeBufferSize]; struct tm ptmTemp; if (_localtime64_s(&ptmTemp, &m_time) != 0) { AtlThrow(E_INVALIDARG); } if (!_tcsftime(szBuffer, maxTimeBufferSize, pFormat, &ptmTemp)) { szBuffer[0] = '\0'; } return szBuffer; } So the system function you want to look at is _tcsftime. And this is where I think the documentation is not very accurate. If the _localtime64_s fails you'll get an exception so a 'null' time can't really be passed to _tcsftime. You'll only get a NULL if _tcsftime fails but that won't be because of a 'null' time. So, in short, use something like you suggest of boost::optional to represent null.
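On the boost::optional idea: yes, an optional wrapper makes the "no time" state explicit instead of relying on a magic value. A sketch using std::optional (C++17), with std::time_t standing in for CTime since CTime itself is MFC-only; the pattern is identical with boost::optional<CTime> in MFC code:

```cpp
// std::time_t is a stand-in for CTime here; the point is the optional
// wrapper, which gives Format-like code an explicit empty case.
#include <ctime>
#include <optional>
#include <string>

std::string formatOrEmpty(const std::optional<std::time_t>& t) {
    if (!t) {
        return "";  // the "null time" case the docs hint at
    }
    char buf[32];
    std::tm* tmv = std::gmtime(&*t);
    std::strftime(buf, sizeof buf, "%Y-%m-%d", tmv);
    return buf;
}
```

Callers then write `std::optional<std::time_t> when;` and leave it disengaged until a real time is known, rather than testing a sentinel.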
doc_23536965
static int *p[] = {a, a+1, a+2, a+3, a+4};
int **ptr = p;
ptr++;
printf("\n%d %d %d", ptr-p, *ptr-p, **ptr);

Output -> 1 1 1

I am not able to understand how the expression "ptr-p" is yielding the value 1 in all cases (I understand the working of **ptr).

A:

* ptr - p - when you initialized ptr, it gets the value of p. Both of them are essentially pointers to pointers to integers - int ** (I believe that you understand that arrays can be treated as pointers in some cases, including this one; not in all cases though...). Since you incremented that value of ptr, its value is greater than p by exactly 1, thus ptr - p = 1.
* **ptr - since ptr was incremented, it now points to the second item of p (i.e. p[1]), which is a+1. By performing double-dereferencing you eventually get: **ptr = *p[1] = *(a+1) = a[1] = 1

While I can explain the first and last result (ptr-p, **ptr), I can't understand how the middle result (*ptr - p) even compiles - it's an arithmetical operation between 2 different types (int *, int **) so the compiler raises an error. Hope that at least the first part makes sense...
doc_23536966
function removeParam(name, value) {
    var newUrl = window.location.href.split("?")[0],
        sourceURL = window.location.href,
        param,
        params_arr = [],
        queryString = (sourceURL.indexOf("?") !== -1) ? sourceURL.split("?")[1] : "";
    if (queryString !== "") {
        params_arr = queryString.split("&");
        for (var i = params_arr.length - 1; i >= 0; i -= 1) {
            param = params_arr[i].split("&")[0];
            if (param.indexOf(name) !== -1) {
                if (params_arr[i].indexOf("%s") !== -1) {
                    params_arr[i] = param.replace("%s" + value, "");
                    params_arr[i] = param.replace(value, "");
                } else {
                    params_arr[i] = param.replace(name + "=" + value, "");
                }
            }
        }
        if (params_arr[0] !== "") {
            newUrl = newUrl + "?" + params_arr.join("&");
        }
    }
    window.history.pushState(null, "", newUrl);
}

Remove value C from var:

Source URL: /?var=A%sB%sC
URL-should-be: /?var=A%sB

I can have multiple parameters with corresponding values, like: Source URL: /?var=A%sB%sC&var2=D%sE%sF. The catch is that if I add the second line: params_arr[i] = param.replace(value, ""); it will remove only C, without the separator %s. How can I fix it? I added this second replace because I also need to remove values from the head of the list.

A: This has been answered before; try this:

function removeURLParameter(url, parameter) {
    // prefer to use l.search if you have a location/link object
    var urlparts = url.split('?');
    if (urlparts.length >= 2) {
        var prefix = encodeURIComponent(parameter) + '=';
        var pars = urlparts[1].split(/[&;]/g);
        // reverse iteration as may be destructive
        for (var i = pars.length; i-- > 0;) {
            // idiom for string.startsWith
            if (pars[i].lastIndexOf(prefix, 0) !== -1) {
                pars.splice(i, 1);
            }
        }
        url = urlparts[0] + (pars.length > 0 ? '?' + pars.join('&') : "");
        return url;
    } else {
        return url;
    }
}

https://stackoverflow.com/a/1634841/3511012
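For the narrower goal in the question, dropping one value from a "%s"-separated list inside a single parameter, a sketch that splits on the separator instead of patching the string with replace (function name and the literal "%s" handling are assumptions matching the URLs shown):

```javascript
// Removes one value from a sep-delimited list stored in a single query
// parameter; drops the parameter entirely if no values remain.
function removeListValue(query, name, value, sep = '%s') {
  return query.split('&').map(pair => {
    const [key, val = ''] = pair.split('=');
    if (key !== name) return pair;               // other params untouched
    const kept = val.split(sep).filter(v => v !== value);
    return kept.length ? key + '=' + kept.join(sep) : null;
  }).filter(Boolean).join('&');
}

console.log(removeListValue('var=A%sB%sC&var2=D%sE%sF', 'var', 'C'));
// "var=A%sB&var2=D%sE%sF"
```

Splitting and rejoining sidesteps the head/tail separator bookkeeping that the double replace was trying to do, and works for values at any position.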
doc_23536967
Below is my input. <?xml version="1.0" encoding="utf-8"?> <OrderStatusUpdate fileType="Order Status Update" fileStartTime="2020-11-11 22:36:08 " fileEndTime=" 2020-11-11 23:36:25"> <MessageHeader> <Standard>eBay_Enterprise</Standard> <HeaderVersion>EWS_eb2c_1.1</HeaderVersion> <VersionReleaseNumber>EWS_eb2c_1.1</VersionReleaseNumber> <SourceData> <SourceId>OMS</SourceId> <SourceType>OrderManagementSystem</SourceType> </SourceData> <DestinationData> <DestinationId>EE_OrderRTStatusXML</DestinationId> <DestinationType>MAILBOX</DestinationType> </DestinationData> <EventType>OrderStatus</EventType> <MessageData> <MessageId>20201111233723</MessageId> <CorrelationId>0</CorrelationId> </MessageData> <CreateDateAndTime>2020-11-11 23:37:23</CreateDateAndTime> </MessageHeader> <OrderStatusEvents> <OrderStatusEvent> <StoreCode>R21_US</StoreCode> <OrderId>101155883040</OrderId> <ExternalOrderId/> <WebOrderId>00201W008786173</WebOrderId> <OrderSource type="OrderClassifier">STORE-ORDER</OrderSource> <OrderLineId>1</OrderLineId> <OriginalOrderId/> <OriginalWebOrderId/> <OriginalOrderLineId/> <ItemId>88-0027615863</ItemId> <OrderStatusDetails> <OrderStatusDetail> <OrderStatusEventTimeStamp>2020-11-11 23:06:29</OrderStatusEventTimeStamp> <StatusName>Fulfilled and Invoiced</StatusName> <ReturnReason/> <Qty>1</Qty> <CancelReason/> <CancelReasonText/> </OrderStatusDetail> </OrderStatusDetails> <OrderShipmentDetails> <OrderShipmentDetail> <ShippingCarrier>FEDX</ShippingCarrier> <ShippingServiceLevel>RUEGND</ShippingServiceLevel> <ShippingTrackingNumber>124518478574</ShippingTrackingNumber> <ShippingTimestamp>2020-11-11 22:48:00</ShippingTimestamp> </OrderShipmentDetail> </OrderShipmentDetails> </OrderStatusEvent> <OrderStatusEvent> <StoreCode>R21_US</StoreCode> <OrderId>101155883040</OrderId> <ExternalOrderId/> <WebOrderId>00201W008786173</WebOrderId> <OrderSource type="OrderClassifier">STORE-ORDER</OrderSource> <OrderLineId>2</OrderLineId> <OriginalOrderId/> <OriginalWebOrderId/> 
<OriginalOrderLineId/> <ItemId>88-0027501642</ItemId> <OrderStatusDetails> <OrderStatusDetail> <OrderStatusEventTimeStamp>2020-11-11 23:06:29</OrderStatusEventTimeStamp> <StatusName>Fulfilled and Invoiced</StatusName> <ReturnReason/> <Qty>1</Qty> <CancelReason/> <CancelReasonText/> </OrderStatusDetail> </OrderStatusDetails> <OrderShipmentDetails> <OrderShipmentDetail> <ShippingCarrier>FEDX</ShippingCarrier> <ShippingServiceLevel>RUEGND</ShippingServiceLevel> <ShippingTrackingNumber>124518478574</ShippingTrackingNumber> <ShippingTimestamp>2020-11-11 22:48:00</ShippingTimestamp> </OrderShipmentDetail> </OrderShipmentDetails> </OrderStatusEvent> <OrderStatusEvent> <StoreCode>R21_US</StoreCode> <OrderId>101156041120</OrderId> <ExternalOrderId/> <WebOrderId>00201W008787110</WebOrderId> <OrderSource type=""/> <OrderLineId>1</OrderLineId> <OriginalOrderId/> <OriginalWebOrderId/> <OriginalOrderLineId/> <ItemId>88-0027627207</ItemId> <OrderStatusDetails> <OrderStatusDetail> <OrderStatusEventTimeStamp>2020-11-11 23:06:10</OrderStatusEventTimeStamp> <StatusName>Fulfilled and Invoiced</StatusName> <ReturnReason/> <Qty>1</Qty> <CancelReason/> <CancelReasonText/> </OrderStatusDetail> </OrderStatusDetails> <OrderShipmentDetails> <OrderShipmentDetail> <ShippingCarrier>UPS</ShippingCarrier> <ShippingServiceLevel>PBX04</ShippingServiceLevel> <ShippingTrackingNumber>1ZEW3573YW09535217</ShippingTrackingNumber> <ShippingTimestamp>2020-11-11 22:54:00</ShippingTimestamp> </OrderShipmentDetail> </OrderShipmentDetails> </OrderStatusEvent> <OrderStatusEvent> <StoreCode>R21_US</StoreCode> <OrderId>101156041120</OrderId> <ExternalOrderId/> <WebOrderId>00201W008787110</WebOrderId> <OrderSource type=""/> <OrderLineId>2</OrderLineId> <OriginalOrderId/> <OriginalWebOrderId/> <OriginalOrderLineId/> <ItemId>88-0027627223</ItemId> <OrderStatusDetails> <OrderStatusDetail> <OrderStatusEventTimeStamp>2020-11-11 23:06:10</OrderStatusEventTimeStamp> <StatusName>Fulfilled and Invoiced</StatusName> 
<ReturnReason/> <Qty>1</Qty> <CancelReason/> <CancelReasonText/> </OrderStatusDetail> </OrderStatusDetails> <OrderShipmentDetails> <OrderShipmentDetail> <ShippingCarrier>UPS</ShippingCarrier> <ShippingServiceLevel>PBX04</ShippingServiceLevel> <ShippingTrackingNumber>1ZEW3573YW09535217</ShippingTrackingNumber> <ShippingTimestamp>2020-11-11 22:54:00</ShippingTimestamp> </OrderShipmentDetail> </OrderShipmentDetails> </OrderStatusEvent> </OrderStatusEvents> </OrderStatusUpdate> Expected output should be. <?xml version='1.0' encoding='UTF-8'?> <OrderStatuses> <MESSAGES> <COMMANDSTATUS ID="SHIPPED" DESCRIPTION="Goods Shipped"> <ORDER O_ID="W008786173" TRACKING_URL="https://www.fedex.com/apps/fedextrack/?tracknumbers=124518478574"> <ORDER_LINE OL_ID="1" SKU="27615863" QUANTITY="1"/> <ORDER_LINE OL_ID="2" SKU="27501642" QUANTITY="1"/> </ORDER> </COMMANDSTATUS> </MESSAGES> </OrderStatuses> But I am getting below output. <?xml version="1.0" encoding="UTF-8"?> <OrderStatuses> <MESSAGES> <COMMANDSTATUS ID="SHIPPED" DESCRIPTION="Goods Shipped"> <ORDER O_ID="W008786173" TRACKING_URL="https://www.fedex.com/apps/fedextrack/?tracknumbers=124518478574"> <ORDER_LINE OL_ID="1" SKU="27615863" QUANTITY="1"/> <ORDER_LINE OL_ID="2" SKU="27501642" QUANTITY="1"/> </ORDER> </COMMANDSTATUS> </MESSAGES> <MESSAGES> <COMMANDSTATUS ID="SHIPPED" DESCRIPTION="Goods Shipped"> <ORDER O_ID="W008787110"/> </COMMANDSTATUS> </MESSAGES> </OrderStatuses> What exactly I need is if OrderSource tag has value of 'STORE-ORDER', then only it should add MESSAGES tag and data in OrderStatuses root tag otherwise it shouldn't add MESSAGES tag. Below is my XSLT. 
<?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" indent="yes" /> <xsl:param name="orderSource" /> <xsl:template match="/"> <OrderStatuses> <xsl:for-each-group select="/OrderStatusUpdate/OrderStatusEvents/OrderStatusEvent" group-by="/OrderStatusUpdate/OrderStatusEvents/OrderStatusEvent/WebOrderId"> <xsl:choose> <xsl:when test="./OrderSource = 'STORE-ORDER'"> <MESSAGES> <COMMANDSTATUS> <xsl:choose> <xsl:when test="upper-case(./OrderStatusDetails/OrderStatusDetail/StatusName) = 'FULFILLED AND INVOICED' and ./OrderSource = 'STORE-ORDER'"> <xsl:attribute name="ID">SHIPPED</xsl:attribute> <xsl:attribute name="DESCRIPTION">Goods Shipped</xsl:attribute> </xsl:when> <xsl:when test="upper-case(./OrderStatusDetails/OrderStatusDetail/StatusName) = 'RETURN RECEIVED' and ./OrderSource = 'STORE-ORDER'"> <xsl:attribute name="ID">RETURNED</xsl:attribute> <xsl:attribute name="DESCRIPTION">Goods Returned</xsl:attribute> </xsl:when> <xsl:when test="upper-case(./OrderStatusDetails/OrderStatusDetail/StatusName) = 'CANCELLED' and ./OrderSource = 'STORE-ORDER'"> <xsl:attribute name="ID">CANCELLED</xsl:attribute> <xsl:attribute name="DESCRIPTION">Goods Cancelled</xsl:attribute> </xsl:when> </xsl:choose> <ORDER> <xsl:choose> <xsl:when test="substring(current-grouping-key(), 0, 6) = '00201' and ./OrderSource = 'STORE-ORDER'"> <xsl:attribute name="O_ID"><xsl:value-of select='substring(current-grouping-key(), 6)' /></xsl:attribute> </xsl:when> <xsl:otherwise> <xsl:attribute name="O_ID"><xsl:value-of select='current-grouping-key()' /></xsl:attribute> </xsl:otherwise> </xsl:choose> <xsl:for-each select="current-group()[WebOrderId = current-grouping-key()]"> <xsl:choose> <xsl:when test="upper-case(./OrderStatusDetails/OrderStatusDetail/StatusName) = 'FULFILLED AND INVOICED' and ./OrderSource = 'STORE-ORDER'"> <xsl:attribute 
name="TRACKING_URL">https://www.fedex.com/apps/fedextrack/?tracknumbers=<xsl:value-of select='./OrderShipmentDetails/OrderShipmentDetail/ShippingTrackingNumber' /></xsl:attribute> </xsl:when> </xsl:choose> </xsl:for-each> <xsl:for-each select="current-group()[WebOrderId = current-grouping-key()]"> <xsl:choose> <xsl:when test="./OrderSource = 'STORE-ORDER'"> <ORDER_LINE> <xsl:attribute name="OL_ID"><xsl:value-of select='OrderLineId' /></xsl:attribute> <xsl:choose> <xsl:when test="substring(ItemId, 0, 6) = '88-00'"> <xsl:attribute name="SKU"><xsl:value-of select='substring(ItemId, 6)' /></xsl:attribute> </xsl:when> <xsl:otherwise> <xsl:attribute name="SKU"><xsl:value-of select='ItemId' /></xsl:attribute> </xsl:otherwise> </xsl:choose> <xsl:attribute name="QUANTITY"><xsl:value-of select='./OrderStatusDetails/OrderStatusDetail/Qty' /></xsl:attribute> </ORDER_LINE> </xsl:when> </xsl:choose> </xsl:for-each> </ORDER> </COMMANDSTATUS> </MESSAGES> </xsl:when> </xsl:choose> </xsl:for-each-group> </OrderStatuses> </xsl:template> </xsl:stylesheet> Let me know how can I achieve this. Thanks in advance! A: Just a shot in the dark... <xsl:for-each-group select="OrderStatusUpdate/OrderStatusEvents/OrderStatusEvent" group-by="OrderSource"> <xsl:if test="current-grouping-key()"> process items... </xsl:if> </xsl:for-each-group>
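A likely root cause, inferred from the output but not tested against the full stylesheet: the group-by expression uses an absolute path, so every event is keyed by every WebOrderId in the file and lands in both groups; the first item of each group happens to be a STORE-ORDER event, which is why the empty second MESSAGES appears. Filtering the non-store events in the select and making group-by relative addresses both points:

```xml
<!-- Sketch: only STORE-ORDER events are grouped, and each event is keyed
     by its own WebOrderId. The OrderSource tests inside the body then
     become redundant. -->
<xsl:for-each-group
    select="/OrderStatusUpdate/OrderStatusEvents/OrderStatusEvent[OrderSource = 'STORE-ORDER']"
    group-by="WebOrderId">
  <!-- existing MESSAGES/COMMANDSTATUS/ORDER body goes here unchanged -->
</xsl:for-each-group>
```

With a relative group-by, current-group() also no longer needs the `[WebOrderId = current-grouping-key()]` filter.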
doc_23536968
What we have noticed is that some CF 9 counters accessed via perfmon are now giving crazy values. I recall something very similar happening when we upgraded to CF 8, but that was addressed by the following hotfix: http://kb2.adobe.com/cps/404/kb404026.html (discussion here) In that issue the cfperfmon_8.dll was replaced. Three of our recently upgraded Windows Server 2008 CF 9 servers are displaying a very similar issue. In my Windows event log I'm seeing a lot of these issues: The data buffer created for the "ColdFusion 9 Application Server" service in the "C:\Windows\system32\cfperfmon_9.dll" library is not aligned on an 8-byte boundary. This may cause problems for applications that are trying to read the performance data buffer. Contact the manufacturer of this library or service to have this problem corrected or to get a newer version of this library. Is there a similar fix available for CF 9?
doc_23536969
Thanks in advance. A: The most convenient solution is to use REGIONPROPS. In your example: stats = regionprops(image, 'area', 'centroid') For every feature, there is an entry in the structure stats with the area (i.e. # of voxels) and the centroid. A: I think that what you are looking for is called bwlabeln. It allows you to find blobs in 3D space, just like bwlabel does in 2D. Afterwards, you can use regionprops to find out the properties of the data. Taken directly from help: bwlabeln Label connected components in binary image. L = bwlabeln(BW) returns a label matrix, L, containing labels for the connected components in BW. BW can have any dimension; L is the same size as BW. The elements of L are integer values greater than or equal to 0. The pixels labeled 0 are the background. The pixels labeled 1 make up one object, the pixels labeled 2 make up a second object, and so on. The default connectivity is 8 for two dimensions, 26 for three dimensions, and CONNDEF(NDIMS(BW),'maximal') for higher dimensions.
doc_23536970
When I want to call function of object, appears drop-down menu with list of possible functions. Each function have icon. (Qt intellisense) Is there any place, where I can find definition of each icon? A: There you go: http://doc.qt.io/qtcreator/creator-completing-code.html The detail of each and every symbol. Qt documentation just rocks ;) Go crazy!
doc_23536971
Here is the list of commands:

$> groupadd mysql
$> useradd -r -g mysql -s /bin/false mysql
$> cd /usr/local
$> tar zxvf /path/to/mysql-VERSION-OS.tar.gz
$> ln -s full-path-to-mysql-VERSION-OS mysql
$> cd mysql
$> mkdir mysql-files
$> chown mysql:mysql mysql-files
$> chmod 750 mysql-files
$> bin/mysqld --initialize --user=mysql
$> bin/mysql_ssl_rsa_setup
$> bin/mysqld_safe --user=mysql & # Next command is optional

When I run the command bin/mysql_ssl_rsa_setup, I get this error:

[ERROR] Could not find OpenSSL on the system

I ran this command to install OpenSSL:

yum install openssl-devel

It looks OK:

Total 6.0 MB/s | 6.5 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : libcom_err-1.43.5-8.3.alios7.x86_64 1/28
Updating : keyutils-libs-1.5.8-3.4.alios7.x86_64 2/28
Updating : libsepol-2.5-10.1.alios7.x86_64 3/28
Updating : libselinux-2.5-14.1.1.alios7.x86_64 4/28
Updating : 1:openssl-libs-1.0.2k-23.1.alios7.x86_64 5/28
Updating : krb5-libs-1.15.1-51.1.alios7.x86_64 6/28
Installing : libkadm5-1.15.1-51.1.alios7.x86_64 7/28
Installing : libsepol-devel-2.5-10.1.alios7.x86_64 8/28
Installing : keyutils-libs-devel-1.5.8-3.4.alios7.x86_64 9/28
Updating : libss-1.43.5-8.3.alios7.x86_64 10/28
Installing : libcom_err-devel-1.43.5-8.3.alios7.x86_64 11/28
Updating : e2fsprogs-libs-1.43.5-8.3.alios7.x86_64 12/28
Installing : pcre-devel-8.32-15.1.alios7.x86_64 13/28
Installing : libselinux-devel-2.5-14.1.1.alios7.x86_64 14/28
Installing : libverto-devel-0.2.5-4.1.alios7.x86_64 15/28
Installing : krb5-devel-1.15.1-51.1.alios7.x86_64 16/28
Installing : zlib-devel-1.2.7-16.2.alios7.x86_64 17/28
Installing : 1:openssl-devel-1.0.2k-23.1.alios7.x86_64 18/28
Updating : e2fsprogs-1.43.5-8.3.alios7.x86_64 19/28
Cleanup : 1:openssl-libs-1.0.2k-12.1.alios7.x86_64 20/28
Cleanup : krb5-libs-1.15.1-19.1.alios7.x86_64 21/28
Cleanup : e2fsprogs-1.43.5-8.alios7.x86_64 22/28
Cleanup : e2fsprogs-libs-1.43.5-8.alios7.x86_64 23/28
Cleanup : libss-1.43.5-8.alios7.x86_64 24/28
Cleanup : libselinux-2.5-12.1.alios7.x86_64 25/28
Cleanup : libsepol-2.5-8.1.1.alios7.x86_64 26/28
Cleanup : libcom_err-1.43.5-8.alios7.x86_64 27/28
Cleanup : keyutils-libs-1.5.8-3.1.alios7.x86_64 28/28
Verifying : 1:openssl-devel-1.0.2k-23.1.alios7.x86_64 1/28
Verifying : e2fsprogs-1.43.5-8.3.alios7.x86_64 2/28
Verifying : krb5-libs-1.15.1-51.1.alios7.x86_64 3/28
Verifying : 1:openssl-libs-1.0.2k-23.1.alios7.x86_64 4/28
Verifying : libss-1.43.5-8.3.alios7.x86_64 5/28
Verifying : keyutils-libs-1.5.8-3.4.alios7.x86_64 6/28
Verifying : krb5-devel-1.15.1-51.1.alios7.x86_64 7/28
Verifying : libcom_err-1.43.5-8.3.alios7.x86_64 8/28
Verifying : zlib-devel-1.2.7-16.2.alios7.x86_64 9/28
Verifying : libverto-devel-0.2.5-4.1.alios7.x86_64 10/28
Verifying : libselinux-devel-2.5-14.1.1.alios7.x86_64 11/28
Verifying : libcom_err-devel-1.43.5-8.3.alios7.x86_64 12/28
Verifying : libsepol-devel-2.5-10.1.alios7.x86_64 13/28
Verifying : libsepol-2.5-10.1.alios7.x86_64 14/28
Verifying : pcre-devel-8.32-15.1.alios7.x86_64 15/28
Verifying : libkadm5-1.15.1-51.1.alios7.x86_64 16/28
Verifying : libselinux-2.5-14.1.1.alios7.x86_64 17/28
Verifying : keyutils-libs-devel-1.5.8-3.4.alios7.x86_64 18/28
Verifying : e2fsprogs-libs-1.43.5-8.3.alios7.x86_64 19/28
Verifying : libss-1.43.5-8.alios7.x86_64 20/28
Verifying : keyutils-libs-1.5.8-3.1.alios7.x86_64 21/28
Verifying : libselinux-2.5-12.1.alios7.x86_64 22/28
Verifying : libcom_err-1.43.5-8.alios7.x86_64 23/28
Verifying : libsepol-2.5-8.1.1.alios7.x86_64 24/28
Verifying : e2fsprogs-1.43.5-8.alios7.x86_64 25/28
Verifying : e2fsprogs-libs-1.43.5-8.alios7.x86_64 26/28
Verifying : krb5-libs-1.15.1-19.1.alios7.x86_64 27/28
Verifying : 1:openssl-libs-1.0.2k-12.1.alios7.x86_64 28/28

Installed:
openssl-devel.x86_64 1:1.0.2k-23.1.alios7

Dependency Installed:
keyutils-libs-devel.x86_64 0:1.5.8-3.4.alios7
krb5-devel.x86_64 0:1.15.1-51.1.alios7
libcom_err-devel.x86_64 0:1.43.5-8.3.alios7
libkadm5.x86_64 0:1.15.1-51.1.alios7
libselinux-devel.x86_64 0:2.5-14.1.1.alios7
libsepol-devel.x86_64 0:2.5-10.1.alios7
libverto-devel.x86_64 0:0.2.5-4.1.alios7
pcre-devel.x86_64 0:8.32-15.1.alios7
zlib-devel.x86_64 0:1.2.7-16.2.alios7

Dependency Updated:
e2fsprogs.x86_64 0:1.43.5-8.3.alios7
e2fsprogs-libs.x86_64 0:1.43.5-8.3.alios7
keyutils-libs.x86_64 0:1.5.8-3.4.alios7
krb5-libs.x86_64 0:1.15.1-51.1.alios7
libcom_err.x86_64 0:1.43.5-8.3.alios7
libselinux.x86_64 0:2.5-14.1.1.alios7
libsepol.x86_64 0:2.5-10.1.alios7
libss.x86_64 0:1.43.5-8.3.alios7
openssl-libs.x86_64 1:1.0.2k-23.1.alios7

Complete!

But the issue is not solved. Please guide me on how to solve it. Thanks a lot.

A: I found this command:

sudo yum install openssl

and it solved my issue. Result:

Is this ok [y/d/N]: y
Downloading packages:
(1/2): openssl-1.0.2k-23.1.alios7.x86_64.rpm | 493 kB 00:00:00
(2/2): make-3.82-21.1.alios7.x86_64.rpm | 419 kB 00:00:00
-------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.9 MB/s | 912 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 1:make-3.82-21.1.alios7.x86_64 1/2
Installing : 1:openssl-1.0.2k-23.1.alios7.x86_64 2/2
Verifying : 1:make-3.82-21.1.alios7.x86_64 1/2
Verifying : 1:openssl-1.0.2k-23.1.alios7.x86_64 2/2

Installed:
openssl.x86_64 1:1.0.2k-23.1.alios7

Dependency Installed:
make.x86_64 1:3.82-21.1.alios7
doc_23536972
A: Here's a link that gives an example of how to implement the shunting-yard algorithm for expression parsing: https://eddmann.com/posts/shunting-yard-implementation-in-java/ You can use the ideas in it to build your expression parser and read the stack of commands, using switch statements to perform the operation entered. Good luck on the project! A: If you mean what I think you do, you just need to make a function with a parameter, do the calculation and return the result.
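For reference, the linked approach can be sketched in a few lines of Python (a minimal illustration with hypothetical helper names, handling only the four binary operators and parentheses):

```python
# Operator table: precedence and the operation itself
OPS = {"+": (1, lambda a, b: a + b),
       "-": (1, lambda a, b: a - b),
       "*": (2, lambda a, b: a * b),
       "/": (2, lambda a, b: a / b)}

def to_rpn(tokens):
    """Shunting-yard: convert an infix token list to postfix (RPN) order."""
    output, stack = [], []
    for tok in tokens:
        if tok in OPS:
            # Pop operators of higher-or-equal precedence before pushing
            while stack and stack[-1] in OPS and OPS[stack[-1]][0] >= OPS[tok][0]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()  # discard the "("
        else:
            output.append(float(tok))
    while stack:
        output.append(stack.pop())
    return output

def eval_rpn(rpn):
    """Evaluate a postfix token list with a value stack."""
    stack = []
    for tok in rpn:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok][1](a, b))
        else:
            stack.append(tok)
    return stack[0]

print(eval_rpn(to_rpn(["3", "+", "4", "*", "2"])))  # 11.0
```

The two-phase split (parse to RPN, then evaluate with a stack) is exactly where the "switch on the operator" step mentioned above lives: it is the dispatch inside eval_rpn.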
doc_23536973
#include <iostream>
using namespace std;

int main() {
    // int& a = 3; <- Doesn't compile. Expression must be lvalue.
    const auto& c = 1 + 2; // c is a constant reference to an int. (?)
                           // Compiles fine. 1+2 is an rvalue? What's going on?
    cout << c << endl;
    return 0;
}

I don't understand why the compiler won't raise a compilation error. Since auto "forces" c to be a reference to a constant int, and references are bound to lvalues, how come this works?

A: This would indeed not work without the const -- you would get a compilation error. But the const is there, i.e. you are not going to modify what c is referencing. For this case, there is additional wording in the standard that the temporary value c is referencing (the result of 1 + 2) will have its lifetime extended to the end of the lifetime of the reference. This is quite unrelated to auto. It's the const that is making the difference here.
doc_23536974
http://www.eclipse.org/articles/article.php?file=Article-EclipseDbWebapps/index.html in order to set up a Derby database server, and everything works fine. I created the DB and could easily access it. However, these instructions use JSP to access my DB, and I want to change it so that I can access the DB through my custom Java classes, but I can't create any connections to the DB. I simply tried:

Connection con = DriverManager.getConnection("jdbc:derby://localhost:1527/features", "root", "root");

Note: here my DB is named features. I get the error:

java.sql.SQLException: No suitable driver found for jdbc:derby://localhost:1527/features

I tried loading the class for the driver:

Class.forName("org.apache.derby.jdbc.ClientDriver");

I get the error:

java.lang.ClassNotFoundException: org.apache.derby.jdbc.ClientDriver

I don't know where to find and put org.apache.derby.jdbc.ClientDriver. How come in the instructions they only add a context.xml under META-INF and everything works? What am I missing?

A: Did you: Copy the file derbyclient.jar from that folder to your TOMCAT_ROOT/lib folder (if you're using Tomcat 5.x, install into TOMCAT_ROOT/common/lib)? This installs the Derby JDBC driver into Tomcat for use in a DataSource. You need derbyclient.jar on the classpath.
doc_23536975
File "G:/PVH_work/PVH_program/ParkTheReal.py", line 395, in <lambda> Add_user = ttk.Button(frame_27, text="Add User", command=lambda: Add_user(frame_27, data_dictionary)).grid(row=1, column=0) TypeError: 'NoneType' object is not callable Here is the function Edit_user_admin: def Edit_user_admin(form_item, data_dictionary, row_num): form_item.grid_forget() frame_27 = Frame(gui) frame_27.grid() MyProfile = ttk.Button(frame_27, text="My profile", command=lambda: My_profile_admin(frame_27, data_dictionary, row_num)).grid(row=0, column=0) TrainingRecord = ttk.Button(frame_27, text="Training Record", command=lambda: Training_record_admin(frame_27, data_dictionary, row_num)).grid(row=0, column=1) Compare = ttk.Button(frame_27, text="Compare", command=lambda: Compare_admin(frame_27, data_dictionary, row_num)).grid(row=0, column=2) EditUsers = ttk.Button(frame_27, text="Edit Users", command=lambda: Edit_user_admin(frame_27, data_dictionary, row_num)).grid(row=0, column=3) Team = ttk.Button(frame_27, text="View/Edit Team", command=lambda: Team_admin(frame_27, data_dictionary, row_num)).grid(row=0, column=4) Logout = ttk.Button(frame_27, text="Logout", command=lambda: Logout(frame_27)).grid(row=0, column=5) Add_user = ttk.Button(frame_27, text="Add User", command=lambda: Add_user(frame_27, data_dictionary, row_num)).grid(row=1, column=0) Edit_user = ttk.Button(frame_27, text="Edit User", command=lambda: Edit_user(frame_27, data_dictionary, row_num)).grid(row=1, column=1) Remove_user = ttk.Button(frame_27, text="Remove User", command=lambda: Remove_user(frame_27, data_dictionary, row_num)).grid(row=1, column=2) And here is the function Add_user: def Add_user(form_item, data_dictionary, row_num): form_item.grid_forget() frame_28 = Frame(gui) frame_28.grid() #Declare variables for creating a new user account __Username = StringVar() __Name = StringVar() __Age = StringVar() __Email = StringVar() __DoB = StringVar() MyProfile = ttk.Button(frame_28, text="My profile", 
command=lambda: My_profile_admin(frame_28, data_dictionary, row_num)).grid(row=0, column=0) TrainingRecord = ttk.Button(frame_28, text="Training Record", command=lambda: Training_record_admin(frame_28, data_dictionary, row_num)).grid(row=0, column=1) Compare = ttk.Button(frame_28, text="Compare", command=lambda: Compare_admin(frame_28, data_dictionary, row_num)).grid(row=0, column=2) EditUsers = ttk.Button(frame_28, text="Edit Users", command=lambda: Edit_user_admin(frame_28, data_dictionary, row_num)).grid(row=0, column=3) Team = ttk.Button(frame_28, text="View/Edit Team", command=lambda: Team_admin(frame_28, data_dictionary, row_num)).grid(row=0, column=4) Logout = ttk.Button(frame_28, text="Logout", command=lambda: Logout_so(frame_28)).grid(row=0, column=5) I have had this error on other functions but I found that adding a '_' to the function I'm trying to call's name, and then adding them same name extension to the command worked. A: You assigned None to Add_user; ttk.Button.grid() returns None: Add_user = ttk.Button(...).grid(row=1, column=0) You should not use the same name for the button reference and the function; Python will use the local variable in this case, not the global function. Use a different name, and call .grid() separately: add_user_button = ttk.Button( frame_27, text="Add User", command=lambda: Add_user(frame_27, data_dictionary, row_num)) add_user_button.grid(row=1, column=0) The same applies to the other buttons. If, however, you are not using the add_user_button reference anywhere else, you can make it one line, but you don't have to bother about assigning the result: ttk.Button( frame_27, text="Add User", command=lambda: Add_user(frame_27, data_dictionary, row_num) ).grid(row=1, column=0)
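The underlying pitfall is not specific to tkinter: any method that returns None (as Button.grid() does) will clobber a name if you assign the chained result. A hypothetical non-GUI sketch of the same mistake:

```python
class Widget:
    """Hypothetical stand-in for a ttk.Button; grid() configures and returns None."""
    def grid(self, **kw):
        return None  # just like ttk.Button.grid()

# Chained assignment: the name receives grid()'s return value, i.e. None
Add_user = Widget().grid(row=1, column=0)
print(Add_user)  # None -> later, Add_user(...) raises "'NoneType' object is not callable"

# Keep the reference first, then lay the widget out in a separate statement
add_user_button = Widget()
add_user_button.grid(row=1, column=0)
print(type(add_user_button).__name__)  # Widget
```

This is also why renaming the variable (the '_' trick mentioned above) appears to fix things: the function name is no longer shadowed by a local variable holding None.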
doc_23536976
Then I try to insert the data into the table: LOAD DATA LOCAL INPATH 'path/data' insert into table test partition (idx=1) But then I get the following error: ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(134)) - NoSuchObjectException(message:partition values=[1]) at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionWithAuth(ObjectStore.java:1427) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111) at com.sun.proxy.$Proxy4.getPartitionWithAuth(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition_with_auth(HiveMetaStore.java:2025) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102) at com.sun.proxy.$Proxy5.get_partition_with_auth(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partition_with_auth.getResult(ThriftHiveMetastore.java:6924) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partition_with_auth.getResult(ThriftHiveMetastore.java:6908) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:104) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745)

What's the solution for this?

A: You need to either pre-create the partitions via ADD PARTITION, or use dynamic partitioning.

Pre-create partitions:

ALTER TABLE table_name ADD PARTITION (partCol = 'value1') LOCATION 'loc1';

Using dynamic partitions: see the Hive documentation on dynamic partitions.
doc_23536977
what i am trying to do is when the user click on Edit button then make it inline editing the repeater row. END UPDATE onItemCommand i have added DataBind() rpt.DataSource = mydatasource; rpt.DataBind(); after i do that my page is not in edit mode and it blow away and everyting is refreshed i have on page_load if (!IsPostBack) { rpt.DataSource = mydatasource; rpt.DataBind(); } end update I've used repeaters many times without problems but something is going on here. I have a repeater and I'm subscribing to the itemDatabound event, But when i click the button (which is a linkbutton inside my repeater itemtemplate) it does not go to the ItemDataBound <asp:Repeater ID="rpt" runat="server" OnItemCommand="rpt_OnItemCommand" OnItemDataBound="rpt_OnItemDataBound"> <ItemTemplate> <li> <asp:Label ID="Label" runat="server" /> <asp:LinkButton ID="LinkButton1" runat="server" CommandName="edit" CommandArgument='<%# Eval("MyID") %>' Text='<%# Eval("Title") %>' /> </li> </ItemTemplate> </asp:Repeater> protected void rpt_OnItemCommand(object source, RepeaterCommandEventArgs e) { if (e.CommandName == "delete") { //Data.Contacts.RemoveAt(e.Item.ItemIndex); } else if (e.CommandName == "edit") { EditIndex = e.Item.ItemIndex; } else if (e.CommandName == "save") { // } } protected void rpt_OnItemDataBound(object sender, RepeaterItemEventArgs e) { if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem) { if (e.Item.ItemIndex == EditIndex) { // never come to this line.... after the user click on LinkButton } } } A: Don't know if this helps but you must call DataBind() in order for for the OnItemDataBound event to fire. Also my guess is you are trying to set the EditIndex in the OnItemCommand and then use the value in the OnDataBind event. The events fire in the order OnItemDataBound then OnItemCommand so the EditIndex wouldn't be correct anyway in that situation. Add rpt.DataBind to the OnItemCommand. 
This workded when I tried it from your code, NOTE that you will be binding twice if you aren't using !IsPostBack for original data bind. rpt.DataSource = strings; if (!IsPostBack) { rpt.DataBind(); } protected void rpt_OnItemCommand(object source, RepeaterCommandEventArgs e) { if (e.CommandName == "delete") { //Data.Contacts.RemoveAt(e.Item.ItemIndex); } else if (e.CommandName == "edit") { EditIndex = e.Item.ItemIndex; } else if (e.CommandName == "save") { // } rpt.DataBind(); } A: You must change your rpt_OnItemCommand function. protected void rpt_OnItemCommand(object source, RepeaterCommandEventArgs e) { if (e.CommandName == "delete") { //Data.Contacts.RemoveAt(e.Item.ItemIndex); } else if (e.CommandName == "edit") { EditIndex = e.Item.ItemIndex; } else if (e.CommandName == "save") { // } else if (e.CommandName == "Complete") { // your function goes here } } A: I'm a little confused, but from the example above it looks like you've got it backwards. The button click would never fire the ItemDataBound event. The ItemDataBound event is only called after each item is bound to the repeater. The button click should fire the ItemCommand event however, and if that's not happening I would check to make sure you've actually assigned the ItemCommand handler, and also make sure that the command name is valid. On a side note, this behavior can also happen when the repeater is bound at every postback. Make sure that you're binding the repeater when !Page.IsPostBack. A: Why do you think that the ItemDataBound is raised when you click your LinkButton? ItemDataBound is only fired when Repeater.DataBind() is called. Actually the repeater's ItemCommand event is raised instead.
doc_23536978
Configured Cluster It was working fine till we started configuring security forest on cluster. Now we are not able to access MarkLogic Admin Interface. Log file shows as follows ErrorLog.txt 2019-03-28 08:31:28.713 Warning: Forest Security fast query timestamp (15536998638611159) lags commit timestamp (15537609012057850) by 61037344 ms 8001_AccessLog.txt IP - User [28/Mar/2019:16:05:35 +0000] "GET / HTTP/1.1" 500 1978 - "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.121 Safari/537.36" 8000_ErrorLog.txt 2019-03-28 15:10:52.020 Info: <error:error xsi:schemaLocation="http://marklogic.com/xdmp/error error.xsd" xmlns:error="http://marklogic.com/xdmp/error" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> 2019-03-28 15:10:52.020 Info:+ <error:code>XDMP-SECDB</error:code> 2019-03-28 15:10:52.020 Info:+ <error:name/> 2019-03-28 15:10:52.020 Info:+ <error:xquery-version>1.0-ml</error:xquery-version> 2019-03-28 15:10:52.020 Info:+ <error:message>Security database unavailable</error:message> 2019-03-28 15:10:52.020 Info:+ <error:format-string>XDMP-SECDB: Security database unavailable: XDMP-FORESTMNT: Forest forest-security3 not mounted: disconnected</error:format-string> 2019-03-28 15:10:52.020 Info:+ <error:retryable>false</error:retryable> 2019-03-28 15:10:52.020 Info:+ <error:expr/> 2019-03-28 15:10:52.020 Info:+ <error:data> 2019-03-28 15:10:52.020 Info:+ <error:datum>XDMP-FORESTMNT</error:datum> 2019-03-28 15:10:52.020 Info:+ <error:datum>forest-security3</error:datum> 2019-03-28 15:10:52.020 Info:+ <error:datum>disconnected</error:datum> 2019-03-28 15:10:52.020 Info:+ </error:data> 2019-03-28 15:10:52.020 Info:+ <error:stack> 2019-03-28 15:10:52.020 Info:+ <error:frame> 2019-03-28 15:10:52.020 Info:+ <error:uri>/qconsole</error:uri> 2019-03-28 15:10:52.020 Info:+ <error:xquery-version>1.0-ml</error:xquery-version> 2019-03-28 15:10:52.020 Info:+ </error:frame> 2019-03-28 15:10:52.020 Info:+ </error:stack> 
2019-03-28 15:10:52.020 Info:+</error:error> Disk space /dev/sda2 30G 19G 11G 65% / devtmpfs 2.0G 0 2.0G 0% /dev tmpfs 2.0G 0 2.0G 0% /dev/shm tmpfs 2.0G 9.1M 2.0G 1% /run tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup /dev/sda1 497M 105M 392M 22% /boot /dev/sdb1 7.8G 36M 7.3G 1% /mnt/resource tmpfs 394M 0 394M 0% /run/user/1000 tmpfs 394M 0 394M 0% /run/user/994 Memory PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4526 root 20 0 396544 24816 5608 S 1.3 0.6 0:33.77 python 3897 root 16 -4 222944 13608 1440 S 0.3 0.3 0:03.93 auoms 4760 omsagent 20 0 1396640 55524 6108 S 0.3 1.4 0:08.87 omsagent 4963 daemon 20 0 3449488 197692 35600 S 0.3 4.9 0:16.46 MarkLogic 11441 idmladm+ 20 0 162012 2296 1596 R 0.3 0.1 0:00.12 top 1 root 20 0 128104 6724 4180 S 0.0 0.2 0:12.60 systemd I/O 09:00:01 AM CPU %user %nice %system %iowait %steal %idle 09:10:01 AM all 2.07 0.00 1.09 1.19 0.00 95.65 09:20:01 AM all 2.09 0.00 1.01 1.10 0.00 95.79 09:30:01 AM all 2.09 0.00 1.04 1.29 0.00 95.58 Average: all 2.08 0.00 1.05 1.19 0.00 95.67 A: It looks like you expanded the security forest across additional nodes, and forest-security3 is not online. Usually the Security database only resides on a single node in the cluster, with a replica on another node to provide a failover capability for logging in to administer the system. The reason your forest is offline could be a resource contention issue (insufficient compute, memory and/or disk IO) issue, as that will sometimes cause timestamp lags. You can try restarting MarkLogic on all the nodes to see if that will allow it to recover from the error.
doc_23536979
<form action="/foo" method="get"><input type="hidden" id="start_date" name="start_date" value=""/> <input type="hidden" id="end_date" name="end_date" value=""/> <div id="control"> <div id="accordion"> <div class="accordion-title"><img alt="Application_form" border="0" src="/images/icons/application_form.gif?1277517563" />&nbsp;&nbsp;Formatting:</div> <div class="accordion-body"> <table border="0" width="100%"> <tr> <td width="30%">Order By</td> <td> <select name="order_by"> <option value="dates">Dates</option> <option value="activities">Activities</option> </select> </td> </tr> </table> </div> </div> </form> When I viewed this in IE8's Developer Tool, in the HTML view, the select tag has been altered to this: <select name="order_by" style="visibility: hidden;" __msh_save_visibility="inherit"> Where is this coming from? A: Use developer tools or firebug to check the cssproperties. Its probably an inheritance issue in that ff ignores A: Turns out these styles are being applied by the calendar.js script. There are calendar elements after this select tag in the form, but it seems odd to me that this script is changing tags outside of where the calendar objects are. I'm pursuing this in another question.
doc_23536980
Another plugin folks here mentioned in fuzzy finder textmate plugin. Unfortunately, this plugin doesn't work with current version of vim-fuzzy finder, or so it appears to me. Any suggestions? TIA Oliver A: Use ** to have it recurse down directories. A: I use tag mode provided by fuzzyfinder to simulate behavior of Textmate. in short, generate an extra tags file with file's base name as tag, then you can locate any files in the tags file directly by file's base name. The only drawback is you need to update the file tags file, this is a script for that. I have been using this method for several months and it works almost perfect. I summarize my method here A: I wanted to contribute to jamessan's answer. It is true that using **/ before your search will do a recursive search in your directory. However, I've found that it's more useful to have the recursive search enabled by default. In order to do that, you can add ** to your mapping (mine is ]) (you have to escape the * otherwise it won't work) map <leader>] :FuzzyFinderFile \*\*\/<CR> A: Haven't used Textmate, but LustyExplorer could be what you're looking for. Demo here.
doc_23536981
I need it to wait for the $spawn socket to have new data (can be equal i.e. would like it to read both times when it receives "FE") before it sends again. How do I get it to block when there isn't any new data? $i=0; while ($i<10){ //Read and Parse Client input $input = strtoupper(bin2hex(socket_read($spawn, 1))); echo $input . "\n"; //Deal with requests if ($input == "FE"){ //Server Ping socket_write($spawn, $ping_packet); echo "Ping Attempt \n"; } elseif ($input == "02"){ //Handshake Request socket_write($spawn, $handshake_packet); echo "Handshake Attempt \n"; } elseif($input == "01"){ //Login Request socket_write($spawn, $kick_packet); echo "Login Attempt \n"; } $i++; } A: You can set the socket to be blocking by using socket_set_block. See socket_select for how to handle a large number of sockets at the same time. A: socket_read will return an empty string if no data is ready to be read. To make a blocking read, use socket_recv with the flag MSG_WAITALL.
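For comparison, here is the blocking-read idea in Python rather than PHP (an illustrative sketch using a connected socket pair, not the question's server loop): with the socket in blocking mode, the read simply parks until data arrives, so the loop never spins on empty reads.

```python
import socket

server, client = socket.socketpair()  # connected pair, standing in for $spawn
server.setblocking(True)              # blocking mode: recv waits for data

client.sendall(b"\xfe")               # the "FE" ping byte from the protocol
data = server.recv(1)                 # blocks until at least one byte arrives
print(data.hex().upper())             # FE

server.close()
client.close()
```

The design choice mirrors the answer above: make the read itself block (socket_set_block / MSG_WAITALL in PHP) instead of polling, and the "wait for new data before sending again" behavior falls out for free.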
doc_23536982
One idea I had was to keep a static list of weak pointers to every shared_ptr constructed. I could then periodically check to see how many of the weak pointers are still valid. The problem here is, how do I automatically add a weak pointer to the list every time a shared_ptr is created? Will a custom allocator work? Does anyone know of a reasonable way to do this?

A: You'll need to create a wrapper or factory through which you get all your shared_ptrs, so that at the same time you can do your side accounting:

#include <memory>
#include <utility>

template <class T, class... Args>
std::shared_ptr<T> make_recorded(Args&&... args)
{
    std::shared_ptr<T> ptr = std::make_shared<T>(std::forward<Args>(args)...);
    // add your annotation/tracking here
    return ptr;
}
doc_23536983
For example, in 1st page I have 2 session variables: Array ( [session_id] => 655dasdasfdce7cfe9dd6asd9faead406 [ip_address] => ip_address [user_agent] => Mozilla/5.0 (Windows NT 6.3; WOW64; rv:44.0) Gecko/2010301 Firefox/44.0 [last_activity] => 14696222 [user_data] => [csrf] => Array ( [token] => xyz ) [logged_in] => Array ( [id] => 1 [cus_email] => email@email.com [company_id] => 1 ) [third_value] => Array ( [example] => 1 ) ) When I add third session variable. It's showing me on first page. But, not in second. Is it memory issue or time_out? memory_limit is 128M in php.ini config.php $config['sess_cookie_name'] = 'ci_session'; $config['sess_expiration'] = 7200; $config['sess_expire_on_close'] = TRUE; $config['sess_encrypt_cookie'] = FALSE; $config['sess_use_database'] = FALSE; $config['sess_table_name'] = 'ci_sessions'; $config['sess_match_ip'] = FALSE; $config['sess_match_useragent'] = TRUE; $config['sess_time_to_update'] = 300;
doc_23536984
DataSet oDs = new DataSet(); DataTable odt = new DataTable(); odt.Columns.Add(new DataColumn("FILE_ID", typeof(string))); odt.Columns.Add(new DataColumn("ID", typeof(string))); oDs.Tables.Add(odt); oDs.AcceptChanges(); for (int i = 1; i < 3; i++) { DataRow oDr = oDs.Tables[0].NewRow(); oDr["FILE_ID"] = "a" + i; oDr["ID"] = "b" + i; oDs.Tables[0].Rows.Add(oDr); } for (int i = 1; i < 3; i++) { DataRow oDr = oDs.Tables[0].NewRow(); oDr["FILE_ID"] = "c" + i; oDr["ID"] = "d" + i; oDs.Tables[0].Rows.Add(oDr); } oDs.AcceptChanges(); DataTable odt1 = new DataTable(); odt1.Columns.Add(new DataColumn("FILE_ID", typeof(string))); odt1.Columns.Add(new DataColumn("ID", typeof(string))); oDs.Tables.Add(odt1); oDs.AcceptChanges(); for (int i = 1; i < 3; i++) { DataRow oDr = oDs.Tables[1].NewRow(); oDr["FILE_ID"] = "a" + i; oDr["ID"] = "b" + i; oDs.Tables[1].Rows.Add(oDr); } for (int i = 1; i < 3; i++) { DataRow oDr = oDs.Tables[1].NewRow(); oDr["FILE_ID"] = "c" + i; oDr["ID"] = "d" + i; oDs.Tables[1].Rows.Add(oDr); } oDs.AcceptChanges(); I need a LINQ query by which I can find if the combination of the values of rows (FILE_ID+ID) are unique & if they are the same in both the datatables A: This gets you the non-unique values: var notUnique = odt.AsEnumerable() .GroupBy(x => (string) x["FILE_ID"] + x["ID"]) .Where(g => g.Count() > 1); Finding the values that are in one table but not the other is found here Compares Your particular case would look like this: var differentRows = odt.AsEnumerable().Where( o => odt1.AsEnumerable().All( o1 => ((string) o["FILE_ID"] + o["ID"]) != ((string) o1["FILE_ID"] + o1["ID"]))) .Union(odt1.AsEnumerable().Where( o1 => odt.AsEnumerable().All(o => ((string)o["FILE_ID"] + o["ID"]) != ((string)o1["FILE_ID"] + o1["ID"])))); Keep in mind this is like the "except" method where duplicate records wont present themselves as a difference. But since you are checking for dupes above I will assume no further checking is req'd.
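The same composite-key logic, sketched in Python rather than C# (hypothetical row tuples standing in for the DataTable rows): group rows by the (FILE_ID, ID) pair to find duplicates, and take the symmetric difference of the two row sets to find mismatches between the tables.

```python
from collections import Counter

# Rows as (FILE_ID, ID) tuples, mirroring the two DataTables built above
t0 = [("a1", "b1"), ("a2", "b2"), ("c1", "d1"), ("c2", "d2")]
t1 = [("a1", "b1"), ("a2", "b2"), ("c1", "d1"), ("c2", "d2")]

def non_unique(rows):
    """Combinations of (FILE_ID, ID) that appear more than once in one table."""
    counts = Counter(rows)
    return [key for key, n in counts.items() if n > 1]

def different_rows(rows_a, rows_b):
    """Rows present in exactly one of the two tables (symmetric difference)."""
    return set(rows_a) ^ set(rows_b)

print(non_unique(t0))          # [] -> every combination is unique
print(different_rows(t0, t1))  # set() -> the two tables match
```

One design note: keying on the tuple (FILE_ID, ID) is slightly safer than the string concatenation used in the LINQ version, since "a" + "bc" and "ab" + "c" concatenate to the same key but are different tuples.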
doc_23536985
Could you look at the code and see if it is reasonable and sound in its approach? (I haven't added Progress Updates etc.)

Public Class BgwArgs 'args object to pass to BGW
    Public sSql As String 'query to be run by DoWork
    Public dbTable As DataTable 'return data to this table
    Public lst As ListBox 'populate this control with returned data
End Class

Private Sub btnStart_Click(sender As System.Object, e As System.EventArgs) Handles btnStart.Click
    'get new BGW, may be multiple BGW's running at a time
    Dim myBgw As BackgroundWorker = fGetBgw()
    Dim sArgs As New BgwArgs 'set Args
    sArgs.sSql = "BHRow" 'just a test string, would really be a query
    sArgs.lst = lstBH 'control to populate
    myBgw.RunWorkerAsync(sArgs)
End Sub

Private Function fGetBgw() As BackgroundWorker
    'create New backgroundworker and addhandlers
    Dim newBgw As New BackgroundWorker
    newBgw.WorkerReportsProgress = True
    newBgw.WorkerSupportsCancellation = True
    AddHandler newBgw.DoWork, AddressOf WorkerDoWork
    AddHandler newBgw.ProgressChanged, AddressOf WorkerProgressChanged
    AddHandler newBgw.RunWorkerCompleted, AddressOf WorkerCompleted
    Return newBgw
End Function

Private Sub WorkerDoWork(sender As Object, e As System.ComponentModel.DoWorkEventArgs)
    Thread.Sleep(3000) 'kill time as a test
    Dim sArgs As BgwArgs = e.Argument
    pGetDbTable(sArgs) 'pass in Args to DataQuery
    e.Result = sArgs 'set result for use in Completed event
End Sub

Private Sub WorkerCompleted(sender As Object, e As System.ComponentModel.RunWorkerCompletedEventArgs)
    Dim thisBgw As BackgroundWorker = CType(sender, BackgroundWorker)
    If e.Cancelled = True Then
    ElseIf e.Error IsNot Nothing Then
        MsgBox(e.Error.Message)
    Else
        'read e.Result only here: it throws if the worker was cancelled or faulted
        Dim sArgs As BgwArgs = CType(e.Result, BgwArgs)
        Dim tbl As DataTable = sArgs.dbTable
        Dim lb As ListBox = sArgs.lst
        For Each row As DataRow In tbl.Rows 'update UI, bind data here, etc.
            lb.Items.Add(row("col1").ToString)
        Next
        RemoveHandler thisBgw.DoWork, AddressOf WorkerDoWork
        RemoveHandler thisBgw.ProgressChanged, AddressOf WorkerProgressChanged
        RemoveHandler thisBgw.RunWorkerCompleted, AddressOf WorkerCompleted
    End If
End Sub

Private Sub pGetDbTable(sArgs As BgwArgs)
    Dim tbl As New DataTable
    'would really run a SQL query here... and return the DataTable
    With tbl
        .Columns.Add("col1")
        Dim i As Integer
        For i = 0 To 200
            .Rows.Add(sArgs.sSql & i)
        Next
        sArgs.dbTable = tbl
    End With
End Sub
doc_23536986
The method get(Class, Serializable) in the type Session is not applicable for the arguments (Class, int)? public class Test { public static void main(String[] args) { Configuration config = new Configuration().configure(); SessionFactory factory = config.buildSessionFactory(); Session session = factory.openSession(); Transaction trx = session.beginTransaction(); Sample sample = new Sample(); sample = (Sample)session.get(Sample.class, 1); trx.commit(); System.out.println("success"); session.close(); } } public class Sample { private Integer id; private String name; public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } } A: You should use ... sample = (Sample)session.get(Sample.class, new Integer(1)); ... instead of simple int.
doc_23536987
My first question: is it possible to use Python 3.6 for SFTP file transfer? If so, will paramiko work? If the above will work, why am I receiving the following errors when attempting to install PyCrypto?

error: [WinError 2] The system cannot find the file specified
**Failed building wheel for pycrypto**

My second question: if paramiko will not work with Python 3.6, are there any alternatives, or must I revert back to a previous Python version for SFTP file transfer?

A: Yes, SFTP file transfer is possible through Python. Python has a nice package for this, pysftp.

Step 1: pip install pysftp

Step 2: Example of how to transfer files:

import pysftp

with pysftp.Connection('hostname', username='me', password='secret') as sftp:
    with sftp.cd('public'):             # temporarily chdir to public
        sftp.put('/my/local/filename')  # upload file to public/ on remote
    sftp.get('remote_file')             # get a remote file
doc_23536988
<list lazy="false" table="news_attachment" name="attachments"> <cache usage="nonstrict-read-write"/> <key column="news"/> <index column="index_attachment"/> <many-to-many class="mypackage.Archive" column="attachment" unique="true"/> </list> I am trying to swap two elements with: Archive archive0 = news.getAttachments().get(0); Archive archive1 = news.getAttachments().get(1); news.getAttachments().set(0, archive1); news.getAttachments().set(1, archive0); But, in commit time, I get an awful org.hibernate.exception.ConstraintViolationException: Duplicate entry '163703' for key 'attachment' And, for my surprise, mysql updates are: 479 Query update news_attachment set attachment=163703 where news=53306 and index_attachment=0 479 Query update news_attachment set attachment=163703 where news=53306 and index_attachment=0 (the surprise is id and index are always the same). But, if I create a new List, set the objects in the new order and execute the setter, everything works fine. List<Archive> list = new ArrayList<Archive>(); list.add(news.getAttachments().get(1)); list.add(news.getAttachments().get(0)); news.setAttachments(list); mysql.log outputs the inserts as I expected, different ids and indexes. Are operations on the original list, not recommended?
doc_23536989
For the part that inserts the audio at video time 0, I am using this:

AVMutableCompositionTrack *a_compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[a_compositionVideoTrack insertTimeRange:video_timeRange ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:kCMTimeZero error:nil];

My challenge now is to let the user pick the video time range he wants the audio in! I have no idea how this works with CMTimeMake, and whether there is a smooth picker already done. Thanks for helping!

A: CMTimeMake(value, timescale)

value - the number of time units
timescale - the number of those units per second

So the time in seconds is value / timescale:

CMTimeMake(1, 30) // 1/30 of a second (e.g. one frame at 30 fps)
CMTimeMake(30, 1) // 30 seconds
CMTimeMake(3000, 100) // also 30 seconds

The last two represent the same absolute time, but with different granularity, which is important when you deal with audio and video file processing.
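Since a CMTime is just rational arithmetic, the value/timescale relationship can be checked with a quick sketch (plain Python here, standing in for what CMTimeGetSeconds would report; the helper name is hypothetical):

```python
from fractions import Fraction

def cmtime_seconds(value, timescale):
    """Seconds represented by CMTimeMake(value, timescale): value / timescale."""
    return Fraction(value, timescale)

print(cmtime_seconds(1, 30))      # 1/30 -> one unit at timescale 30 (one 30fps frame)
print(cmtime_seconds(30, 1))      # 30   -> thirty seconds
print(cmtime_seconds(3000, 100))  # 30   -> same absolute time, finer granularity
```

A user-picked range then maps naturally onto this: convert the chosen start and duration in seconds into (value, timescale) pairs at a timescale fine enough for your media (e.g. 600, a common multiple of typical frame rates).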
doc_23536990
For some reason if you have an img inside a div, the div is like 3.5px taller than the image. However if you set the image as a block element this extra height disappears. Basic HTML: <div id="wrapper"> <img src="http://www.basini.com/wp-content/uploads/2013/02/seeing-in-the-dark.jpg" width="300" height="230" /> </div> And the CSS: #wrapper { background: orange; } #wrapper img { /* display: block; this will remove the extra height */ } I have set up a jsfiddle to demonstrate the effect Why does this happen and why does making the 'img' a block element fix it? Are there any other solutions? A: It's due to the line-height of the wrapping div of the img tag. To fix it, you can set line-height:0 to the div, float the img or display:block the img. Better explained: How to control line height in <p> styled as inline A: By default, an image is rendered inline, like a letter. It sits on the same line that a, b, c and d sit on. There is space below that line for the descenders you find on letters like j, p and q. You can adjust the vertical-align of the image to position it elsewhere. A: have you tried to reset all styles? before applying new styles?
doc_23536991
<div class="project name">..</div>
<div data-project-id="1987" class="sidebar_project">...</div>
<div data-project-id="3087" class="sidebar_project">...</div>
<div data-project-id="8903" class="sidebar_project">...</div>
<div data-project-id="223570" class="sidebar_project">...</div>
<div class="project name">..</div>
<div data-project-id="1846" class="sidebar_project">...</div>
<div data-project-id="0935" class="sidebar_project">...</div>
<div data-project-id="84735" class="sidebar_project">...</div>
<div class="project name">..</div>
<div data-project-id="11135" class="sidebar_project">...</div>

I can easily select all sidebar_projects and hide them, but what I want is a button that, on click, hides/shows all sidebar_projects from its project name div to the next project name div. Thanks
A: Try this:

$('.project.name').on('click', function(e){
    $(this).nextUntil('.project.name', '.sidebar_project').toggle();
});

I didn't test this, but I think it will work fine for you. More about nextUntil() see here
A: Perhaps a little something like this:

$('div:not(.sidebar_project)').click(function() {
    $(this).nextUntil(':not(.sidebar_project)').toggle();
});

Demo: http://jsfiddle.net/nnnnnn/6gMnr/2/ This assumes that my comment above was correct, i.e., the heading elements don't have the classes "project" and "name", but actually they have different classes with the names of different projects, thus making it necessary to select them on the basis of their not having the "sidebar_project" class. Obviously the problem with this is that you might have other divs on the page, so I hope the divs shown in your question are within some containing object such that you can do this:

var $container = $("selector for container here");
$container.find('div:not(.sidebar_project)').click(function() {
    $(this).nextUntil(':not(.sidebar_project)').toggle();
});
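The grouping logic that nextUntil implements can be modeled without a DOM; here is a plain-JavaScript sketch (the isHeader flag is my stand-in for "has the project-name class", not jQuery API):

```javascript
// Collect the items that follow `start` until the next header,
// mirroring $(header).nextUntil('.project.name', '.sidebar_project').
function collectUntilNextHeader(items, start) {
  const group = [];
  for (let i = start + 1; i < items.length; i++) {
    if (items[i].isHeader) break; // stop at the next project-name div
    group.push(items[i]);
  }
  return group;
}

const items = [
  { isHeader: true,  id: 'project A' },
  { isHeader: false, id: '1987' },
  { isHeader: false, id: '3087' },
  { isHeader: true,  id: 'project B' },
  { isHeader: false, id: '1846' },
];
console.log(collectUntilNextHeader(items, 0).map(x => x.id)); // ['1987', '3087']
```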
doc_23536992
This API provides access to a catalog of scholarly papers, authors, institutions, etc. It is easy enough to make a query for an institution's information. Here is an example from the API docs: https://api.openalex.org/I19820366 I am trying to figure out how to get a specific institution's ID. In the docs, there is a statement that this ID number is Microsoft Academic Graph's institutional ID: https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/ But I have been unable to figure out anything in Microsoft Academic Graph either. How can I find the institutional ID for a specific institution? A: I am a beginner, and I also just started to retrieve data from the OpenAlex API. For me, the easiest way was either to use the ROR id or the GRID id, which you can both look up for any institution either here: https://www.grid.ac/ or here: https://ror.org/ Then you use either the ROR or GRID id as an identifier (https://docs.openalex.org/about-the-data/institution#ids) and that identifier as a filter, as specified in the API documentation. Be aware that, except for the institution ID that you want to find, all the other institutional IDs, like ROR or GRID, have to be put in your request as a full URL. Take the example of the Johns Hopkins University. It's not enough to put their ROR like this: "00za53h95"; you have to put it in the API request like this: "https://ror.org/00za53h95" (without the quotes) or else it won't work. In my example, a request could look like this: https://api.openalex.org/institutions/https://ror.org/00za53h95 This will deliver a nice JSON file with all the info you need, including the institution's ID in the database. Save the information as a file by using a cURL GET request, or just do it via your browser and get the result as a webpage; both work. 
If you do the latter, you should follow the suggestion of the OpenAlex team and install a browser plugin like JSONVue, that will make the experience of reading the result on your screen so much better. Hope that helps. A: You can use the OpenAlex API to search for the ID like this: https://api.openalex.org/institutions?filter=display_name.search:University%20of%20Virginia A: In addition to Heather's comment, I'd like to add that we can now go to https://explore.openalex.org/ and search for any entity. Start typing "Johns Hopkins University" and you'll get to this page: https://explore.openalex.org/institutions/I145311948 which has all the identifiers (including the openalex id I145311948) and additional information about this institution.
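Putting the ID rules above together: a small sketch (Python; the function name is my own) that builds the institution-lookup URL from a bare ROR id, since the API wants the full ROR URL rather than the short form:

```python
def openalex_institution_url(ror_id: str) -> str:
    # The institutions endpoint accepts a ROR identifier only as the
    # full https://ror.org/... URL, not as the bare short id.
    return f"https://api.openalex.org/institutions/https://ror.org/{ror_id}"

print(openalex_institution_url("00za53h95"))
# https://api.openalex.org/institutions/https://ror.org/00za53h95
```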
doc_23536993
Reproducible example: df <- data.frame(id=c(1, 2, 3, 4, 5), staple_1=c("potato", "potato","rice","fruit","coffee"), staple2_half1=c("yams","beer","potato","rice","yams"), staple2_half2=c("potato","rice","yams","rice","yams"), staple_3=c("rice","peanuts","fruit","fruit","rice")) potato<-c("potato") yams<-c("yams") staples<-c("potato","cassava","rice","yams") gives: id staple_1 staple2_half1 staple2_half2 staple_3 1 potato yams potato rice 2 potato beer rice peanuts 3 rice potato yams fruit 4 fruit rice rice fruit 5 coffee yams yams rice Now I want to create 2 additional columns summing the counts of "potato" and "yams", but by modifying the following code so that any counts from a "half" column (staple2_half1 and staple2_half2) only count as 0.5 instead of 1. Incorrect result using original answer: df$staples <- apply(df, 1, function(x) sum(staples %in% x)) df$potato<- apply(df, 1, function(x) sum(potato %in% x)) df$yams<- apply(df, 1, function(x) sum(yams %in% x)) Gives: id staple_1 staple2_half1 staple2_half2 staple_3 staples potato yams 1 potato yams potato rice 3 1 1 2 potato beer rice peanuts 2 1 0 3 rice potato yams fruit 3 1 1 4 fruit rice rice fruit 1 0 0 5 coffee yams yams rice 2 0 1 Desired result based on weighted count: id staple_1 staple2_half1 staple2_half2 staple_3 staples potato yams 1 potato yams potato rice 3 1.5 0.5 2 potato beer rice peanuts 1.5 1 0 3 rice potato yams fruit 2 0.5 0.5 4 fruit rice rice fruit 1 0 0 5 coffee yams yams rice 2 0 1 A: If you apply the %in% function over the columns of df[, -1], you get a matrix of true and false values. Then to do a weighted sum, you can multiply this matrix by a vector of weights. 
words <- data.frame(staples, potato, yams) weights <- 1 - 0.5*grepl('half', names(df[, -1])) df[names(words)] <- lapply(words, function(x) apply(df[, -1], 2, `%in%`, x) %*% weights) df # id staple_1 staple2_half1 staple2_half2 staple_3 staples potato yams # 1 1 potato yams potato rice 3.0 1.5 0.5 # 2 2 potato beer rice peanuts 1.5 1.0 0.0 # 3 3 rice potato yams fruit 2.0 0.5 0.5 # 4 4 fruit rice rice fruit 1.0 0.0 0.0 # 5 5 coffee yams yams rice 2.0 0.0 1.0 Example of what the output of apply(df[, -1], 2, ... looks like apply(df[, -1], 2, `%in%`, potato) # staple_1 staple2_half1 staple2_half2 staple_3 # [1,] TRUE FALSE TRUE FALSE # [2,] TRUE FALSE FALSE FALSE # [3,] FALSE TRUE FALSE FALSE # [4,] FALSE FALSE FALSE FALSE # [5,] FALSE FALSE FALSE FALSE apply(df[, -1], 2, `%in%`, potato) %*% weights # [,1] # [1,] 1.5 # [2,] 1.0 # [3,] 0.5 # [4,] 0.0 # [5,] 0.0 A: Lots of ways to do this, but here's one using the tidyverse. By "gathering" the data so the staples are all in one column, I think it's easier to apply the correct weight. 
library(tidyverse) df <- data.frame(id=c(1, 2, 3, 4, 5), staple_1=c("potato", "potato","rice","fruit","coffee"), staple2_half1=c("yams","beer","potato","rice","yams"), staple2_half2=c("potato","rice","yams","rice","yams"), staple_3=c("rice","peanuts","fruit","fruit","rice")) potato<-c("potato") yams<-c("yams") staples<-c("potato","cassava","rice","yams") freqs <- df %>% mutate_if(is.factor, as.character) %>% # avoids a warning about converting types gather("column", "item", -id) %>% mutate(scalar = if_else(str_detect(column, "half"), 0.5, 1)) %>% group_by(id) %>% summarize( staples = sum(item %in% staples * scalar), potato = sum(item %in% potato * scalar), yams = sum(item %in% yams * scalar) ) left_join(df, freqs, by = "id") #> id staple_1 staple2_half1 staple2_half2 staple_3 staples potato yams #> 1 1 potato yams potato rice 3.0 1.5 0.5 #> 2 2 potato beer rice peanuts 1.5 1.0 0.0 #> 3 3 rice potato yams fruit 2.0 0.5 0.5 #> 4 4 fruit rice rice fruit 1.0 0.0 0.0 #> 5 5 coffee yams yams rice 2.0 0.0 1.0 Created on 2018-12-11 by the reprex package (v0.2.1)
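Both answers build a weighted sum over a boolean membership matrix. As a sketch of the same idea in Python/pandas (my own translation of the sample data, not part of either answer):

```python
import pandas as pd

# Same sample data as the R question.
df = pd.DataFrame({
    'id': [1, 2, 3, 4, 5],
    'staple_1': ['potato', 'potato', 'rice', 'fruit', 'coffee'],
    'staple2_half1': ['yams', 'beer', 'potato', 'rice', 'yams'],
    'staple2_half2': ['potato', 'rice', 'yams', 'rice', 'yams'],
    'staple_3': ['rice', 'peanuts', 'fruit', 'fruit', 'rice'],
})
food_cols = [c for c in df.columns if c != 'id']
# Half weight for the "half" columns, mirroring the R weights vector.
weights = pd.Series({c: 0.5 if 'half' in c else 1.0 for c in food_cols})

def weighted_count(items):
    # Boolean membership matrix, multiplied column-wise by the weights.
    return df[food_cols].isin(items).mul(weights).sum(axis=1)

df['staples'] = weighted_count(['potato', 'cassava', 'rice', 'yams'])
df['potato'] = weighted_count(['potato'])
df['yams'] = weighted_count(['yams'])
print(df)
```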
doc_23536994
The client certificate is immutable. My question is how I can enable the client certificate for a GKE cluster. A: As per the GKE documentation: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the --[no-]issue-client-certificate flag. Clusters have basic authentication and client certificate issuance disabled by default. As per @Dawid, you can create a cluster with Client certificate > Enable using the command below; after that, modification is not possible on that cluster.

gcloud container clusters create YOUR-CLUSTER --machine-type=custom-2-12288 --issue-client-certificate --zone us-central1-a

As a workaround, if you want to enable the client certificate on an existing cluster, you can clone (duplicate) the cluster from the command line with --issue-client-certificate at the end of the command, as follows:

gcloud beta container --project "xxxxxxxx" clusters create "high-mem-pool-clone-1" --zone "us-central1-f" --username "admin" --cluster-version "1.16.15-gke.6000" --release-channel "None" --machine-type "custom-2-12288" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-stackdriver-kubernetes --no-enable-ip-alias --network "projects/xxxxxxx/global/networks/default" --subnetwork "projects/xxxxxxxx/regions/us-central1/subnetworks/default" --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --issue-client-certificate
doc_23536995
Thanks
doc_23536996
Table: emp

Pid | Address          | City | datetime                  | Edate      | level
1   | Homeless         | Chen | 2014-11-13 09:32:14.000   | 2013-02-10 | 3
1   | 3913 W. Strong   | Chen | 2011-03-044 19:04:10.000  | 2014-02-04 | 7
1   | 1100 W MALLON    | Chen | 2014-11-13 09:32:14.000   | 2013-02-10 | 5
2   | 610 W GARLAND #3 | Hyd  | 2013-11-13 09:32:14.000   | 2014-04-02 | 4
3   | banvanu          | chen | 2015-03-044 06:04:10.000  | 2015-05-06 | 6
3   | naneku           | chen | 2015-03-044 06:04:10.000  | 2015-06-09 | 4

Based on the above table, I want output like below:

Pid | Address          | City | datetime                  | Edate      | level
1   | 1100 W MALLON    | Chen | 2014-11-13 09:32:14.000   | 2013-02-10 | 5
2   | 610 W GARLAND #3 | Hyd  | 2013-11-13 09:32:14.000   | 2014-04-02 | 4
3   | naneku           | chen | 2015-03-044 06:04:10.000  | 2015-06-09 | 4

We need to get address and city from the same table based on the following conditions: first, check max(datetime) per pid; if the max(datetime) values are the same for a pid, then check max(edate) for that pid; if we again get the same value, then check max(level); then retrieve address and city for that pid. I tried this:

select * from (select *,row_number()over(partition by id ,order by datetime,edate,level)as rno from emp) where rno=1

but the above query does not give the expected result. Please tell me how to write a query to achieve this task in SQL Server.
A: You need to use descending order in the window function:

select [Pid], [Address], [City], [datetime], [Edate], [level]
from (
    select *
        , rn = row_number() over (partition by [pid] order by [datetime] desc, [edate] desc, [level] desc)
    from emp
) a
where rn = 1;

A: Try this:

WITH Empgroup AS (
    SELECT Pid, [Address], City, [DateTime], EDate, [Level],
        ROW_NUMBER() OVER (PARTITION BY Pid ORDER BY [DateTime] DESC, EDate DESC, [Level] DESC) AS RN
    FROM Emp
)
SELECT Pid, [Address], City, [DateTime], EDate, [Level]
FROM Empgroup
WHERE RN = 1

The functional difference is in the ordering of the ROW_NUMBER(). 
I also elected to use a Common Table Expression to make the query clearer.
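Both answers rest on ROW_NUMBER() with a descending three-level ORDER BY, which is easy to verify locally. A sketch using Python's sqlite3 (window functions need SQLite ≥ 3.25; I normalized the malformed sample dates and renamed the datetime column to dt):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (Pid INTEGER, Address TEXT, City TEXT, dt TEXT, Edate TEXT, level INTEGER);
INSERT INTO emp VALUES
  (1, 'Homeless',         'Chen', '2014-11-13 09:32:14', '2013-02-10', 3),
  (1, '3913 W. Strong',   'Chen', '2011-03-04 19:04:10', '2014-02-04', 7),
  (1, '1100 W MALLON',    'Chen', '2014-11-13 09:32:14', '2013-02-10', 5),
  (2, '610 W GARLAND #3', 'Hyd',  '2013-11-13 09:32:14', '2014-04-02', 4),
  (3, 'banvanu',          'chen', '2015-03-04 06:04:10', '2015-05-06', 6),
  (3, 'naneku',           'chen', '2015-03-04 06:04:10', '2015-06-09', 4);
""")
rows = con.execute("""
SELECT Pid, Address FROM (
  SELECT *, ROW_NUMBER() OVER (
      PARTITION BY Pid ORDER BY dt DESC, Edate DESC, level DESC) AS rn
  FROM emp
) AS t
WHERE rn = 1
ORDER BY Pid
""").fetchall()
print(rows)  # [(1, '1100 W MALLON'), (2, '610 W GARLAND #3'), (3, 'naneku')]
```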
doc_23536997
Here is my User pojo class package com.kalam.model; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.EnumType; import javax.persistence.Enumerated; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import org.hibernate.validator.constraints.NotEmpty; @Entity @Table(name="User") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long userId; @Column @NotEmpty(message="username can't be blank") private String UserName = ""; @Column @NotEmpty(message="password can't be blank") private String Password = ""; @Column @NotEmpty(message="Mobile can't be blank") private int MobileNo; @Enumerated(EnumType.ORDINAL) private Role userRole; public Long getUserId() { return userId; } public void setUserId(Long userId) { this.userId = userId; } public String getUserName() { return UserName; } public void setUserName(String userName) { UserName = userName; } public String getPassword() { return Password; } public void setPassword(String password) { Password = password; } public int getMobileNo() { return MobileNo; } public void setMobileNo(int mobileNo) { MobileNo = mobileNo; } public Role getUserRole() { return userRole; } public void setUserRole(Role userRole) { this.userRole = userRole; } } Here is User.jsp (Registration form) <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <%@ taglib prefix="form" uri="http://www.springframework.org/tags/form"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Registration</title> </head> <body> <div align="center"> <form:form action="/addUser" method="post" modelAttribute="userForm"> <table border="0"> <tr> <td colspan="2" align="center"><h2>Spring MVC Form Demo - Registration</h2></td> </tr> <tr> <td>User 
Name:</td> <td><form:input path="UserName" /></td> </tr> <tr> <td>Password:</td> <td><form:password path="Password" /></td> </tr> <tr> <td>Mobile No:</td> <td><form:input path="MobileNo" /></td> </tr> <tr> <td colspan="2" align="center"><input type="submit" value="Register" /></td> </tr> </table> </form:form> </div> </body> </html> And finally the Controller class. package com.kalam.controller; import javax.validation.Valid; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; import org.springframework.stereotype.Controller; import org.springframework.ui.ModelMap; import org.springframework.validation.BindingResult; import org.springframework.web.bind.annotation.ModelAttribute; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestMethod; import com.kalam.daoimpl.EmployeeDaoImpl; import com.kalam.model.Employee; import com.kalam.model.User; import com.kalam.service.EmployeeService; import com.kalam.serviceimpl.EmployeeServiceImpl; @Controller public class KalamController { //@Autowired //EmployeeService employeeService; @RequestMapping("/kalam") public String showMessage(ModelMap map) { map.put("dollar", "50 US $"); return "KalamWorld"; } @RequestMapping("/insertData") public void InserData() { Employee emp= new Employee(); emp.setEmpID(11); emp.setEmpName("On Target"); emp.setEmpSalary(20000); emp.setAddress("Mumbai"); ApplicationContext context=new ClassPathXmlApplicationContext("hibernate-cfg.xml"); EmployeeDaoImpl dao= (EmployeeDaoImpl)context.getBean("employeeDaoImpl"); dao.addEmployee(emp); System.out.println("Data successfully inserted"); // employeeService.addEmployee(emp); } @RequestMapping(value ="/",method = RequestMethod.GET) public String login(ModelMap model) { User userFrom=new User(); model.put("userFrom", userFrom); return "User"; } 
@RequestMapping(value = "/addUser", method = RequestMethod.POST) public String addEmployee(@Valid @ModelAttribute("userFrom")User user, ModelMap model) { model.addAttribute("name", user.getUserName()); model.addAttribute("Id", user.getUserId()); model.addAttribute("Mobile", user.getMobileNo()); return "success"; } @RequestMapping(value ="/welcome",method = RequestMethod.GET) public String welcome(ModelMap model) { return "User"; } } I wanted the User.jsp page to appear as the first page when I run the application. Hence I'm trying to call the @RequestMapping(value ="/",method = RequestMethod.GET) handler. But when I try to run the program, I'm getting the following error. java.lang.IllegalStateException: Neither BindingResult nor plain target object for bean name 'userForm' available as request attribute at org.springframework.web.servlet.support.BindStatus.<init>(BindStatus.java:141) at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getBindStatus(AbstractDataBoundFormElementTag.java:168) at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getPropertyPath(AbstractDataBoundFormElementTag.java:188) at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.getName(AbstractDataBoundFormElementTag.java:154) at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.autogenerateId(AbstractDataBoundFormElementTag.java:141) at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.resolveId(AbstractDataBoundFormElementTag.java:132) at org.springframework.web.servlet.tags.form.AbstractDataBoundFormElementTag.writeDefaultAttributes(AbstractDataBoundFormElementTag.java:116) at org.springframework.web.servlet.tags.form.AbstractHtmlElementTag.writeDefaultAttributes(AbstractHtmlElementTag.java:422) at org.springframework.web.servlet.tags.form.InputTag.writeTagContent(InputTag.java:142) at 
org.springframework.web.servlet.tags.form.AbstractFormTag.doStartTagInternal(AbstractFormTag.java:84) at org.springframework.web.servlet.tags.RequestContextAwareTag.doStartTag(RequestContextAwareTag.java:80) at org.apache.jsp.WEB_002dINF.views.User_jsp._jspx_meth_form_005finput_005f0(User_jsp.java:220) at org.apache.jsp.WEB_002dINF.views.User_jsp._jspx_meth_form_005fform_005f0(User_jsp.java:158) at org.apache.jsp.WEB_002dINF.views.User_jsp._jspService(User_jsp.java:104) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70) at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:439) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339) at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:743) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:485) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:410) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:337) at org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:209) at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:266) at 
org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1225) at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1012) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:852) at javax.servlet.http.HttpServlet.service(HttpServlet.java:624) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837) at javax.servlet.http.HttpServlet.service(HttpServlet.java:731) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:506) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:962) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1115) at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:745) Apr 16, 2018 9:25:25 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet [dispatcher] in context with path [/KalamProject] threw exception [An exception occurred processing JSP page /WEB-INF/views/User.jsp at line 20 17: </tr> 18: <tr> 19: <td>User Name:</td> 20: <td><form:input path="UserName" /></td> 21: </tr> 22: 23: <tr> Please help me to resolve the error. Thanks in advance A: You need to initialize a User object and put it in the model before returning the view name in the welcome handler: @RequestMapping(value ="/welcome", method = RequestMethod.GET) public String welcome(ModelMap model) { User user = new User(); model.put("userForm", user); return "User"; } Or you can do this to get what you are trying to achieve: @RequestMapping(value ="/welcome", method = RequestMethod.GET) public String welcome() { return "redirect:/"; } NOTE: In your login handler, model.put("userFrom", userFrom); should be model.put("userForm", userFrom); as in User.jsp ...modelAttribute="userForm">. Otherwise it will also show the same error.
doc_23536998
After some tries I can't work out why it doesn't work: my attribute works well, but my table does not show anything. Here's my .js: .controller('DispatcherFilterController', [ '$scope', function($scope,{ $scope.dispatcherSearch=[{ id: 1, name: 'out1', description :'desc1', vat_number :'378297', dispatch_type :'daily', output : 'out1' }, { id: 2, name: 'out2', description :'desc2', vat_number :'3782f97', dispatch_type :'daily', output : 'out2' }, { id: 3, name: 'out3', description :'desc3', vat_number :'fssfes', dispatch_type :'daily', output : 'out3' }];}]) and here is my HTML: <div class="table-responsive"> <table class="table" ng-controller="DispatcherFilterController"> <thead> <tr> <th class="col-order"><a class="sort asc" href="#" title="">{{'NAME' | translate}}</a> </th> <th class="col-order"><a class="sort asc" href="#" title="">{{'DESCRIPTION' | translate}}</a> </th> <th class="col-order"><a class="sort asc" href="#" title="">{{'VAT_NUMBER' | translate}}</a> </th> <th class="col-order"><a class="sort asc" href="#" title="">{{'DISPATCH_TYPE' | translate}}</a> </th> <th class="col-order"><a class="sort asc" href="#" title="">{{'OUTPUT' | translate}}</a> </th> <th class="colf-cmd"></th> </tr> </thead> <tbody> <tr ng-repeat="row in dispatcherSearch"> <td>{{row.name}}</td> <td>{{row.description}}</td> <td>{{row.vat_number}}</td> <td>{{row.dispatch_type}}</td> <td>{{row.output}}</td> <td class="colf-cmd"> <div class="form-inline pull-right"> <div class="form-group"> <div class="form-btn-container"> <button type="button" class="btn btn-primary pull-right" ng-click="spot()">{{'SPOT' | translate}}</button> </div> </div> <div class="form-group"> <div class="form-btn-container"> <button type="button" class="btn btn-primary pull-right" ng-click="periodic()">{{'PERIODIC' | translate}}</button> </div> </div> </div> </td> </tr> </tbody> </table> Where did I go wrong? A: There was a syntax error in your controller code. 
Also, you haven't given any code for the translate filter, so I removed that too; we have a working solution here. Controller .controller('DispatcherFilterController', ['$scope', function($scope) { $scope.dispatcherSearch = [{ id: 1, name: 'out1', description: 'desc1', vat_number: '378297', dispatch_type: 'daily', output: 'out1' }, { id: 2, name: 'out2', description: 'desc2', vat_number: '3782f97', dispatch_type: 'daily', output: 'out2' }, { id: 3, name: 'out3', description: 'desc3', vat_number: 'fssfes', dispatch_type: 'daily', output: 'out3' }]; }]); A: You forgot to complete the controller function. Here is the corrected controller code: Jsfiddle Js code var app = angular.module('myApp', []); app.controller('DispatcherFilterController', ['$scope', function($scope) { $scope.dispatcherSearch = [{ id: 1, name: 'out1', description: 'desc1', vat_number: '378297', dispatch_type: 'daily', output: 'out1' }, { id: 2, name: 'out2', description: 'desc2', vat_number: '3782f97', dispatch_type: 'daily', output: 'out2' }, { id: 3, name: 'out3', description: 'desc3', vat_number: 'fssfes', dispatch_type: 'daily', output: 'out3' }]; } ]); This will work.
doc_23536999
from Input_DF col1 col2 col3 Course_66 0\nCourse_67 1\nCourse_68 0 a c Course_66 1\nCourse_67 0\nCourse_68 0 a d to Output_DF Course_66 Course_67 Course_68 col2 col3 0 0 1 a c 0 1 0 a d Please, note that col1 contains one long string. Please, any help would be very appreciated. Many Thanks in advance. Best Regards, Carlo A: Use: #first split by whitespaces to df df1 = df['col1'].str.split(expand=True) #for each column split by \n and select first value df2 = df1.apply(lambda x: x.str.split(r'\\n').str[0]) #for columns select only first row and select second splitted value df2.columns = df1.iloc[0].str.split(r'\\n').str[1] print (df2) 0 Course_66 Course_67 Course_68 0 0 0 1 1 0 1 0 #join to original, remove unnecessary column df = df2.join(df.drop('col1', axis=1)) print (df) Course_66 Course_67 Course_68 col2 col3 0 0 0 1 a c 1 0 1 0 a d Another solution with list comprehension: L = [[y.split('\\n')[0] for y in x.split()] for x in df['col1']] cols = [x.split('\\n')[1] for x in df.loc[0, 'col1'].split()] df1 = pd.DataFrame(L, index=df.index, columns=cols) print (df1) Course_66 Course_67 Course_68 0 0 0 1 1 0 1 0 EDIT: #split values by whitespaces - it split by \n too df1 = df['course_vector'].str.split(expand=True) #select each pair columns df2 = df1.iloc[:, 1::2] #for columns select each unpair value in first row df2.columns = df1.iloc[0, 0::2] #join to original df = df2.join(df.drop('course_vector', axis=1)) A: Since your data are ordered in value, key pairs, you can split on newlines and multiple spaces with regex to get a list, and then take every other value starting at the first position for values and the second position for labels and return a Series object. By applying, you will get back a DataFrame from these multiple series, which you can then combine with the original DataFrame. 
import pandas as pd df = pd.DataFrame({'col1': ['0\nCourse_66 0\nCourse_67 1\nCourse_68', '0\nCourse_66 1\nCourse_67 0\nCourse_68'], 'col2': ['a', 'a'], 'col3': ['c', 'd']}) def to_multiple_columns(str_list): # take the numeric values for each series and column labels and return as a series # by taking every other value return pd.Series(str_list[::2], str_list[1::2]) # split on newlines and spaces splits = df['col1'].str.split(r'\n|\s+').apply(to_multiple_columns) output = pd.concat([splits, df.drop('col1', axis=1)], axis=1) print(output) Output: Course_66 Course_67 Course_68 col2 col3 0 0 0 1 a c 1 0 1 0 a d
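One more way to attack this layout (a sketch of my own, assuming col1 holds name-then-value pairs as shown in Input_DF): str.extractall pulls every name/value pair with a regex, and pivot spreads the courses into columns.

```python
import pandas as pd

# Mirrors Input_DF: col1 holds "Course_XX value" pairs in one long string.
df = pd.DataFrame({
    'col1': ['Course_66 0\nCourse_67 1\nCourse_68 0',
             'Course_66 1\nCourse_67 0\nCourse_68 0'],
    'col2': ['a', 'a'],
    'col3': ['c', 'd'],
})

# One row per (course, flag) match; drop the per-match index level.
pairs = (df['col1']
         .str.extractall(r'(?P<course>Course_\d+)\s+(?P<flag>\d+)')
         .reset_index(level='match', drop=True))

# Each (row, course) pair is unique, so pivot can spread courses into columns.
wide = pairs.pivot(columns='course', values='flag').astype(int)

out = wide.join(df.drop(columns='col1'))
print(out)
```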