Q: PHP cannot connect to LDAP Oracle Directory Server Enterprise Edition
Been playing with this for days and cannot get PHP to bind to LDAP on Oracle's DSEE.

function test(){
    // LDAP variables
    $ldaphost = "xxx.xxxxx.com";
    $ldapport = 636;
    $ldaprdn = 'cn=xyxyxyxy,ou=Accounts,dc=xxx,dc=xxxxx,dc=com';
    $ldappass = 'vcvcvcvcvc';
    ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, 7); // isn't helping
    // Connecting to LDAP
    $ldapconn = ldap_connect($ldaphost, $ldapport) or die("Could not connect to $ldaphost");
    if ($ldapconn) {
        // binding to ldap server
        $ldapbind = ldap_bind($ldapconn, $ldaprdn, $ldappass);
        // verify binding
        if ($ldapbind) {
            echo "LDAP bind successful...";
        } else {
            echo "LDAP bind failed...";
        }
    }
}

I get the error:

Message: ldap_bind() [function.ldap-bind]: Unable to bind to server: Can't contact LDAP server

Tearing my hair out on this one. I just can't get the thing to bind. I have tried a straight telnet to the host on port 636 and am not being blocked by any firewall. Peculiarly, I am not getting any extra debug info from LDAP_OPT_DEBUG_LEVEL, either on screen or in my logs.

A: start_tls() and ldaps are mutually exclusive, meaning you cannot issue start_tls() on the SSL port (standard 636) or initiate ldaps on an unencrypted port (standard 389). start_tls() initiates a secure connection on the unencrypted port after the connection is established, so you would issue it before the bind takes place to make the bind encrypted. Another common pair of ports is 3268 (unencrypted) and 3269 (SSL), which might be enabled on your server.

ldap_set_option(NULL, LDAP_OPT_DEBUG_LEVEL, 7); logs to your web server's error log, depending on your log level, or to stdout (from the PHP CLI). To get more information here, check your web server's log level setting, or simply run your PHP script from the command line.

To successfully use the SSL port, you need to specify the ldaps:// prefix; on the unencrypted port this is not necessary (a ldap:// prefix works).
Looking at your code, this could also be a protocol version issue, as PHP by default uses version 2. To solve this, you can issue:

ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);

before you attempt to bind. You can also have a look at the code in "Problems with secure bind to Active Directory using PHP", which I successfully use on CentOS 5 but which is having problems on Ubuntu.

If your server has an open unencrypted port, it's a good idea to do an unencrypted test bind against it to rule out any connectivity issues. To check whether the port is open, you can check whether telnet connects to it, e.g.:

telnet my.server.com 3268

If the port is open, then you should be able to bind using it.

*Edit: If the SSL certificate is deemed invalid, the connection will fail; if this is the case, setting the debug level to 7 will announce it. To get around this specific problem you need to ignore the certificate's validity. On Windows you can do this by issuing putenv('LDAPTLS_REQCERT=never'); in your PHP code. On *nix you need to edit your /etc/ldap.conf to contain TLS_REQCERT never

A: Port 636 is the SSL-enabled port and requires an SSL-enabled connection. You should try to connect on port 389, or change your code to include the secure layer (much more complex). Kind regards, Ludovic

A: To connect using SSL you should try

$ldapconn = ldap_connect('ldaps://'.$ldaphost);

This will automatically connect on port 636, which is the default LDAPS port. Depending on your server installation and configuration, it may be that connections are only allowed on port 389 (using no encryption or TLS) or only on port 636 using SSL encryption, although your server might expose other ports as well. So in general you need to know which port you are going to connect to and which encryption method the server requires (none, SSL, or TLS).

A: Is the certificate of the LDAP server signed by a valid CA?
Maybe your client just rejects the certificate!
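The port/encryption pairing rules these answers describe can be summarized in one small helper. The following Python sketch is purely illustrative (the function and its return strings are invented for this example, not part of any LDAP library):

```python
def check_ldap_combo(scheme, port, start_tls=False):
    """Validate a connection recipe against the pairing rules above:
    ldaps:// belongs on the SSL ports (636, 3269), plain ldap:// on the
    unencrypted ports (389, 3268), and StartTLS may only be issued on an
    unencrypted connection, before the bind."""
    ssl_ports = {636, 3269}
    plain_ports = {389, 3268}
    if scheme == "ldaps" and start_tls:
        return "invalid: StartTLS and ldaps are mutually exclusive"
    if scheme == "ldaps" and port in plain_ports:
        return "invalid: ldaps:// on an unencrypted port"
    if scheme == "ldap" and port in ssl_ports:
        return "invalid: plain ldap:// on an SSL port"
    return "ok"
```

Running the question's failing setup through this rule table (plain ldap_connect against port 636) lands squarely in the third case, which matches the "Can't contact LDAP server" symptom.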
{ "language": "en", "url": "https://stackoverflow.com/questions/5888968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: RAR file downloaded with C# becoming corrupt
I'm downloading a .rar file from a GitHub release, but the downloaded file is becoming corrupt. The file itself is fine; it just doesn't work when I download it with code. This is my current code:

WebClient wc = new WebClient();
Uri fileLink = new Uri("https://api.github.com/repos/{my name}/{my repo}/releases/latest");
wc.DownloadFileCompleted += new AsyncCompletedEventHandler(UpdateCompleted);
wc.DownloadFileAsync(fileLink, $@"{appRoot}\updated.rar");

A: My best guess is that the file that you want to download is not a .rar but some text (HTML or JSON) that contains information for the real download. Can you open the downloaded file with Notepad to see what it contains? According to https://docs.github.com/en/rest/releases/releases#get-the-latest-release, that endpoint returns a JSON document with some assets that contain the real download link.

A: You need to keep the WebClient instance alive during the download. What does that mean? If the WebClient instance is only held in the wc variable, and wc is local to whatever method contains the code in your question, then as soon as that method returns, the wc variable goes out of scope. If there is no other field/property/whatever holding a reference to the WebClient instance, the WebClient instance becomes subject to the next garbage collection. Normally, you cannot predict when a garbage collection happens (well, you can, but it's a complicated topic for another day...). If it so happens that a garbage collection happens while the download is still in progress, well, the WebClient instance will be finalized and destroyed anyway, effectively aborting the download and leaving you with a partially downloaded file. Another possibility is that your program just exits without actually waiting for the download to complete, again leaving you with a partially downloaded file. Are there other possibilities of what might have gone wrong with your download?
Sure, there are. But for me, it's enough tea-leaf reading for a day, so I leave the troubleshooting and debugging of your program and the inspection of the downloaded data to you...
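The first answer's guess — that the "corrupt .rar" is really the JSON body of the releases API — is easy to verify programmatically rather than by opening the file in Notepad. A hypothetical sketch in Python: a RAR archive starts with the signature bytes Rar!\x1a\x07 (for both the RAR4 and RAR5 formats), so sniffing the first few bytes tells you what actually came back.

```python
def classify_download(first_bytes: bytes) -> str:
    """Guess what the server actually returned, based on the leading bytes.
    A real RAR archive begins with the signature b'Rar!\\x1a\\x07'; the
    GitHub 'latest release' API endpoint returns JSON instead."""
    if first_bytes.startswith(b"Rar!\x1a\x07"):
        return "rar"
    stripped = first_bytes.lstrip()
    if stripped.startswith(b"{") or stripped.startswith(b"["):
        return "probably JSON (API metadata, not the asset itself)"
    if stripped.startswith(b"<"):
        return "probably HTML (an error or landing page)"
    return "unknown"
```

In this case the classifier would report JSON, because the URL in the question points at the release metadata endpoint, whose assets list contains the real browser_download_url to fetch.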
{ "language": "en", "url": "https://stackoverflow.com/questions/72239014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-4" }
Q: Elasticsearch ILM index data is not shifting/migrating
We have created ILM (index lifecycle management) policies to automate index rollover, using a matching index template and a bootstrap index to enable a write index with an alias. Please find the API calls below.

Policies:

{ "policy": { "phases": { "hot": { "min_age": "0ms", "actions": { "rollover": { "max_size": "50mb", "max_primary_shard_size": "50mb", "max_age": "30m" }, "set_priority": { "priority": 200 } } }, "cold": { "min_age": "5m", "actions": { "searchable_snapshot": { "snapshot_repository": "found-snapshots", "force_merge_index": true }, "set_priority": { "priority": 0 } } }, "frozen": { "min_age": "10m", "actions": { "searchable_snapshot": { "snapshot_repository": "found-snapshots", "force_merge_index": true } } }, "delete": { "min_age": "1h", "actions": { "delete": { "delete_searchable_snapshot": true } } } } } }

Template:

PUT _index_template/sree { "index_patterns":["sree-*"], "template":{ "settings":{ "number_of_shards":1, "number_of_replicas":1, "index.lifecycle.name":"sree", "index.lifecycle.rollover_alias":"sree" } } }

Bootstrap index:

PUT sree-000001 { "aliases":{ "sree":{ "is_write_index":true } } }

Note: I'm using Logstash to send data to Elasticsearch via the elasticsearch output plugin. Here is the output plugin configuration:

output { elasticsearch { hosts => "https:xyz:9243" user => "elastic" password => "xyz" index => "sree-" ilm_enabled => true } }

With the above setup, the index and index alias are created successfully, and the indices move to the next phases (cold/frozen), but the data is not shifting/migrating. Can anyone please help us resolve this issue?
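When an ILM index stalls, one quick sanity check is that the min_age thresholds of later phases never decrease (Elasticsearch moves an index from hot toward delete in order, each phase gated by its min_age). As a hypothetical cross-check, not part of the original question, the policy above can be validated with a few lines of Python:

```python
def to_ms(age: str) -> int:
    """Convert an Elasticsearch duration string like '0ms', '5m', '1h'
    to milliseconds. Only the common suffixes are handled in this sketch."""
    units = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
    for suffix in ("ms", "s", "m", "h", "d"):  # try 'ms' before bare 's'/'m'
        if age.endswith(suffix) and age[: -len(suffix)].isdigit():
            return int(age[: -len(suffix)]) * units[suffix]
    raise ValueError(f"unrecognized duration: {age}")

def phase_order_ok(policy: dict) -> bool:
    """Check that min_age is non-decreasing across the phases that are
    present, in lifecycle order (hot -> warm -> cold -> frozen -> delete)."""
    order = ["hot", "warm", "cold", "frozen", "delete"]
    phases = policy["policy"]["phases"]
    ages = [to_ms(phases[p]["min_age"]) for p in order if p in phases]
    return all(a <= b for a, b in zip(ages, ages[1:]))
```

For the policy in the question the sequence is 0ms, 5m, 10m, 1h, which is monotonically increasing, so the stall would have to come from something else (snapshot repository permissions, or the Logstash output writing to a concrete index name instead of the rollover alias, are common suspects).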
{ "language": "en", "url": "https://stackoverflow.com/questions/68638716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: If a process mallocs memory and then forks, will the child process have properly malloced memory?
For the following code:

main() {
    int *p = (int *)malloc(2*sizeof(int));
    if(fork())
        wait();
    else
        *p = 10;
}

I want to know whether, when we fork, the child process receives the malloced block in its process space too. That is, in the above code, is it safe to say:

*p = 10;

A: Yes, the child will have properly malloc()ed memory.

First, know that there are two memory managers in place:

* One is the Linux kernel, which allocates memory pages to processes. This is done through the sbrk() system call.
* On the other hand, malloc() uses sbrk() to request memory from the kernel and then manages it, by breaking it into chunks, remembering how the memory has been divided, and later marking chunks as available when free()ed (and at times performing something similar to garbage collection).

That said, what malloc() does with memory is completely transparent to the Linux kernel. It's effectively just a linked list or two, which you could have implemented yourself. What the Linux kernel sees as your memory are the pages assigned to your process and their contents.

When you call fork() (emphasis mine):

* The child process is created with a single thread--the one that called fork(). The entire virtual address space of the parent is replicated in the child, including the states of mutexes, condition variables, and other pthreads objects; the use of pthread_atfork(3) may be helpful for dealing with problems that this can cause.
* The child inherits copies of the parent's set of open file descriptors. Each file descriptor in the child refers to the same open file description (see open(2)) as the corresponding file descriptor in the parent. This means that the two descriptors share open file status flags, current file offset, and signal-driven I/O attributes (see the description of F_SETOWN and F_SETSIG in fcntl(2)).
* The child inherits copies of the parent's set of open message queue descriptors (see mq_overview(7)). Each descriptor in the child refers to the same open message queue description as the corresponding descriptor in the parent. This means that the two descriptors share the same flags (mq_flags).
* The child inherits copies of the parent's set of open directory streams (see opendir(3)). POSIX.1-2001 says that the corresponding directory streams in the parent and child may share the directory stream positioning; on Linux/glibc they do not.

So fork() not only copies the entire virtual address space, but also all mutexes, file descriptors, and basically every kind of resource the parent has opened. Part of the virtual address space copied is the linked list(s) of malloc(). So after a fork(), the malloc()ed memories of both processes are equal, and the bookkeeping malloc() keeps about what memory is allocated is also the same. However, they now live on separate memory pages.

Side information: one might think that fork() is a very expensive operation. However (from the man page):

Under Linux, fork() is implemented using copy-on-write pages, so the only penalty that it incurs is the time and memory required to duplicate the parent's page tables, and to create a unique task structure for the child.

This basically says that on fork()ing no actual copying is done, but the pages are marked to be copied if the child tries to modify them. Effectively, if the child only reads from that memory, or completely ignores it, there is no copy overhead. This is very important for the common fork()/exec() pattern.
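The separate-pages behavior described above can be observed directly. This Unix-only Python sketch (using os.fork, analogous to the C code in the question, with a pipe standing in for wait-based synchronization) shows the child writing to inherited heap memory without affecting the parent's copy:

```python
import os

def fork_heap_demo():
    """Allocate heap memory before forking; the child gets its own logical
    copy (copy-on-write pages), so a write in the child is invisible to
    the parent. Returns (parent's view, child's reported view)."""
    data = [0, 0]            # heap allocation made before fork()
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:             # child
        os.close(r)
        data[0] = 10         # safe: triggers copy-on-write of the page
        os.write(w, repr(data).encode())
        os._exit(0)
    os.close(w)              # parent
    child_view = os.read(r, 64).decode()
    os.close(r)
    os.waitpid(pid, 0)
    return data, child_view
```

The parent still sees [0, 0] afterward while the child reported [10, 0]: both processes had a valid, independently writable copy of the malloc'd block, exactly as the answer states for the C case.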
{ "language": "en", "url": "https://stackoverflow.com/questions/23608033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can Azure IoT Hub be used to read (GET) data from some devices?
In my case I have 1000+ devices that store activity internally. I need to send an HTTP GET request to each device to get that data in CSV or JSON format and save it in storage hosted on Azure. Can IoT Hub request data using a GET request, and can it be scheduled to read daily/weekly? What other Azure services would you suggest to facilitate these scheduled reads?

A: You have not mentioned which Azure IoT Hub scale tier is used. Basically there are two price groups, Basic and Standard, with significantly different cost and capabilities. The Basic tier offers only services for one-way communication between the devices and Azure IoT Hub. Based on that, the following scenarios can be used for your business case:

1. Basic tier (non-event-driven solution)

The device periodically pushes telemetry and non-telemetry messages, as needed, to the Azure IoT Hub, where the non-telemetry messages are routed to an Azure Function via a Service Bus queue/topic. The responsibility of this non-telemetry pipe is to persist the real device state in a database. Note that 6M messages will cost only $50/month. The back-end application can query this database for device state at any time.

2. Standard tier (event-driven solution)

In this scenario you can use the Device Twin of the Azure IoT Hub to store the real device state in the cloud back end (described by @HelenLo). The device can be triggered to update its state (reported properties) by a C2D message, by a change to a desired property, by a method invocation, or by a device-side edge trigger. Azure IoT Hub has the capability to run your scheduled jobs for multiple devices. In this solution, the back-end application can call a job for ExportDevicesAsync to blob storage at any time; see more details here. Note that 6M messages will cost $250/month.
As you can see, each of the above scenarios needs a different device logic model, based on the communication capabilities between the devices and Azure IoT Hub and back. Note that there are some limitations on these communications; see more details here.

A: You can consider using the Device Twin of IoT Hub: https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins

Use device twins to:

* Store device-specific metadata in the cloud. For example, the deployment location of a vending machine.
* Report current state information such as available capabilities and conditions from your device app. For example, a device is connected to your IoT hub over cellular or WiFi.
* Synchronize the state of long-running workflows between device app and back-end app. For example, when the solution back end specifies the new firmware version to install, and the device app reports the various stages of the update process.
* Query your device metadata, configuration, or state.

A: IoT Hub provides you with the ability to connect your devices over various protocols. Preferred protocols are messaging protocols such as MQTT or AMQP, but HTTPS is also supported. With IoT Hub, you do not request data from the device, though; the device sends the data to IoT Hub. You have two options to implement that with IoT Hub:

* The device connects to IoT Hub whenever it has some data to be sent, and pushes the data up to IoT Hub.
* The device does not send any data on its own, but stays always (or at least regularly) connected to IoT Hub. You then send a cloud-to-device message over IoT Hub to the device, requesting the data to be sent. The device then sends the data the same way it would in the first option.

When the data has been sent to IoT Hub, you need to push it somewhere it is persistently stored - IoT Hub only keeps messages for 1 day by default.
Options for this are:

* Create a blob storage account and push to it directly from IoT Hub using a custom endpoint. This would probably be the easiest and cheapest. Depending on how you need to access your data, a blob might not be the best option, though.
* Create a function app, create a function with an EventHubTrigger, connect it to IoT Hub, and let the function process incoming data by outputting it into any kind of data sink, such as SQL, CosmosDB, Table Storage...
{ "language": "en", "url": "https://stackoverflow.com/questions/49785684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: lockDays in Litepicker JS
Using Litepicker (https://github.com/wakirin/litepicker/) I'm having trouble adding lockDays to my calendar. I'm retrieving some data from my DB, and this is the JSON answer:

{ "error":0, "reservation_counter":1, "reservations":[ { "start_date":"2022-02-28 00:00:00", "end_date":"2022-03-02 00:00:00" } ] }

Then I format the date fields with this JS script:

const obj = JSON.parse(result);
if(parseInt(obj.error) === 0) {
    var lockedArray = [];
    if(parseInt(obj.reservation_counter) > 0){
        for (let i = 0; i < parseInt(obj.reservation_counter); i++) {
            let subArray = [obj.reservations[i].start_date.slice(0, 10), obj.reservations[i].end_date.slice(0, 10)];
            lockedArray.push(subArray);
        }
    }
}

The output of this code, as a string, is [["2022-02-28", "2022-03-02"]], so it is compliant with the Litepicker format. To render the calendar, I use this script:

$('input[name="litepicker"]').daterangepicker({
    parentEl: "#calendarelement-div",
    opens: 'center',
    inlineMode: false,
    minDate: new Date(),
    autoUpdateInput: true,
    singleMode: false,
    locale: { format: "DD/MM/YYYY", separator: " - " },
    lockDaysFormat: 'YYYY-MM-DD',
    lockDays: lockedArray,
    disallowLockDaysInRange: true,
    highlightedDays: lockedArray
});

The calendar is fully working, but it does not lock the dates. I tried with hand-coded dates, but nothing. This is an image of the result: The expected result is that the dates between 2022-02-28 and 2022-03-02 are locked, as in the JSON data.
I post the entire code to show the chronological order of the code $.ajax({ url: 'php/getbedroomavailability.php', type: 'GET', data: { space: space, offer: offer, guests: guests}, dataType: "text", success: function(result){ console.log(result); const obj = JSON.parse(result); if(parseInt(obj.error) === 0) { // No error var lockedArray = []; if(parseInt(obj.reservation_counter) > 0){ // Some reservations for (let i = 0; i < parseInt(obj.reservation_counter); i++) { let subArray = [obj.reservations[i].start_date.slice(0, 10), obj.reservations[i].end_date.slice(0, 10)]; lockedArray.push(subArray); } } $('input[name="litepicker"]').daterangepicker({ parentEl: "#calendarelement-div", opens: 'center', inlineMode: false, minDate: new Date(), autoUpdateInput: true, singleMode: false, locale: { format: "DD/MM/YYYY", separator: " - " }, lockDaysFormat: 'YYYY-MM-DD', lockDays: lockedArray, disallowLockDaysInRange: true, highlightedDays: lockedArray }); document.getElementById("ava-box").style.display = "block"; document.getElementById("over-shadow").style.display = "block"; simulateClick(document.getElementById("litepicker")); document.getElementById("selectfareckt").onclick = function () { goCheckout(space, offer); } } else { alert(obj.error_msg); } } });
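For reference, the behavior the question expects from lockDays together with disallowLockDaysInRange amounts to an inclusive interval-overlap test against each locked range. A language-neutral sketch of that rule in Python (a hypothetical helper for reasoning about the data, not Litepicker code):

```python
from datetime import date

def overlaps_locked(start: str, end: str, locked_ranges) -> bool:
    """Return True if the requested [start, end] selection intersects any
    locked range. Dates are ISO 'YYYY-MM-DD' strings, matching the
    [["2022-02-28", "2022-03-02"]] shape built from the DB response;
    ranges are inclusive on both ends."""
    s, e = date.fromisoformat(start), date.fromisoformat(end)
    for lock_start, lock_end in locked_ranges:
        # Two inclusive intervals overlap iff each starts before the
        # other one ends.
        if date.fromisoformat(lock_start) <= e and s <= date.fromisoformat(lock_end):
            return True
    return False
```

With the lockedArray from the question, a selection of 2022-03-01 through 2022-03-05 should be rejected, while 2022-03-03 through 2022-03-05 should be allowed; any picker that accepts the first selection is not applying the locked ranges at all.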
{ "language": "en", "url": "https://stackoverflow.com/questions/71293197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SQLAlchemy relationship between inherited tables throws primary key error I'm trying to set up a joined inheritance in SQLAlchemy which is working fine. My schema design requires a one-to-many relationship between two inherited table. My actual working example is quite complex but I was able to reproduce the issue with the SQLAlchemy joined inheritance tutorial code. """Joined-table (table-per-subclass) inheritance example.""" from sqlalchemy import Column from sqlalchemy import create_engine from sqlalchemy import ForeignKey from sqlalchemy import inspect from sqlalchemy import Integer from sqlalchemy import or_ from sqlalchemy import String from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship from sqlalchemy.orm import Session from sqlalchemy.orm import with_polymorphic Base = declarative_base() class Resource(Base): __tablename__ = "resource" id = Column(Integer, primary_key=True) # Permissions and other common columns. # Left out for simplicity type = Column(String(50)) __mapper_args__ = { "polymorphic_identity": "resource", "polymorphic_on": type, } class ChildResource(Resource): __tablename__ = "child_resource" id = Column(ForeignKey("resource.id"), primary_key=True) name = Column(String(30)) parent_id = Column(ForeignKey('parent_resource.id'), nullable=False) parent = relationship('ParentResource', back_populates='children', foreign_keys=parent_id) __mapper_args__ = {"polymorphic_identity": "child_resource"} class ParentResource(Resource): __tablename__ = "parent_resource" id = Column(ForeignKey("resource.id"), primary_key=True) name = Column(String(30)) children = relationship('ChildResource', back_populates='parent', foreign_keys='ChildResource.id') __mapper_args__ = {"polymorphic_identity": "parent_resource"} if __name__ == "__main__": engine = create_engine("sqlite://", echo=True) Base.metadata.create_all(engine) session = Session(engine) res_child = ChildResource( name="My child shared resource", ) 
res_parent = ParentResource(
    name="My parent shared resource"
)
res_parent.children.append(res_child)
session.add(res_child)
session.add(res_parent)
session.commit()

So I have a ParentResource and a ChildResource. Both inherit from a common Resource class (in real life the common base is necessary; it contains many more columns). Between ParentResource and ChildResource there is a one-to-many relationship. The tables are created correctly in SQLite and Postgres, but when I try to add one parent and one child object to the session I get the following error:

sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: resource.id [SQL: INSERT INTO resource (id, type) VALUES (?, ?)] [parameters: (1, 'child_resource')] (Background on this error at: http://sqlalche.me/e/gkpj)

When I check the SQLAlchemy echo I see the following:

2020-06-17 07:28:42,276 INFO sqlalchemy.engine.base.Engine ()
2020-06-17 07:28:42,276 INFO sqlalchemy.engine.base.Engine COMMIT
2020-06-17 07:28:42,280 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2020-06-17 07:28:42,280 INFO sqlalchemy.engine.base.Engine INSERT INTO resource (type) VALUES (?)
2020-06-17 07:28:42,280 INFO sqlalchemy.engine.base.Engine ('parent_resource',)
2020-06-17 07:28:42,281 INFO sqlalchemy.engine.base.Engine INSERT INTO parent_resource (id, name) VALUES (?, ?)
2020-06-17 07:28:42,281 INFO sqlalchemy.engine.base.Engine (1, 'My parent shared resource')
2020-06-17 07:28:42,281 INFO sqlalchemy.engine.base.Engine INSERT INTO resource (id, type) VALUES (?, ?)
2020-06-17 07:28:42,281 INFO sqlalchemy.engine.base.Engine (1, 'specific_resource_1')
2020-06-17 07:28:42,281 INFO sqlalchemy.engine.base.Engine ROLLBACK

It looks like res_child and res_parent get the same primary key, which of course breaks the PK constraint. What am I doing wrong?

A: As above_c_level suggested, the minimal solution is to change the primary key column name in the base class.
The mistake I made was that both the base class and the subclasses had an "id" property, which was overridden by the subclasses. You can find the working code sample below.

"""Joined-table (table-per-subclass) inheritance example."""
from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import ForeignKey
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import or_
from sqlalchemy import String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import Session
from sqlalchemy.orm import with_polymorphic

Base = declarative_base()

class Resource(Base):
    __tablename__ = "resource"
    resource_id = Column(Integer, primary_key=True)
    # Permissions and other common columns.
    # Left out for simplicity
    type = Column(String(50))
    __mapper_args__ = {
        "polymorphic_identity": "resource",
        "polymorphic_on": type,
    }

class ChildResource(Resource):
    __tablename__ = "child_resource"
    id = Column(ForeignKey("resource.resource_id"), primary_key=True)
    name = Column(String(30))
    parent_id = Column(ForeignKey('parent_resource.id'), nullable=False)
    parent = relationship('ParentResource', back_populates='children', foreign_keys=parent_id)
    __mapper_args__ = {"polymorphic_identity": "child_resource",
                       "inherit_condition": id == Resource.resource_id}

class ParentResource(Resource):
    __tablename__ = "parent_resource"
    id = Column(ForeignKey("resource.resource_id"), primary_key=True)
    name = Column(String(30))
    children = relationship('ChildResource', back_populates='parent', foreign_keys='ChildResource.id')
    __mapper_args__ = {"polymorphic_identity": "parent_resource",
                       "inherit_condition": id == Resource.resource_id}

if __name__ == "__main__":
    engine = create_engine("sqlite://", echo=True)
    Base.metadata.create_all(engine)
    session = Session(engine)
    res_child = ChildResource(
        name="My child shared resource",
    )
    res_parent = ParentResource(
        name="My parent shared resource"
    )
    res_parent.children.append(res_child)
    session.add(res_child)
    session.add(res_parent)
    session.commit()
{ "language": "en", "url": "https://stackoverflow.com/questions/62422207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there an equivalent to Java -Xmx for Python?
I am used to running .command scripts in the terminal that use Java, which allocates an amount of memory via the -Xmx and -Xms arguments. I am now using a .sh script with Python and can't seem to figure out what the equivalent is.

EDIT: Let me explain a little more. Is it possible to do that in the shell script, or am I stuck decompiling Python scripts?
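For context: CPython has no -Xmx/-Xms style flags; the interpreter simply asks the OS for memory as it needs it. On Unix-like systems, a rough analog is to cap the process's address space, either from the shell script with ulimit -v (so the Python code stays untouched) or from inside the process via the standard resource module. The sketch below illustrates the in-process variant (the function name is invented for this example, and this is Unix-only):

```python
import resource

def cap_address_space(max_bytes):
    """Rough Unix analog of Java's -Xmx: lower the soft limit on the
    process's virtual address space so oversized allocations raise
    MemoryError instead of growing without bound. Returns the new
    (soft, hard) limits."""
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
    return resource.getrlimit(resource.RLIMIT_AS)
```

From a .sh script, the zero-code equivalent is to run `ulimit -v 1048576` (the value is in KiB, so this is roughly 1 GiB) before invoking python. Note this caps total virtual address space, not just the heap, so it is a blunter instrument than -Xmx.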
{ "language": "en", "url": "https://stackoverflow.com/questions/22887400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Not able to install Tomcat 9 in NetBeans 8.0.2
Need some help. It's my first time with NetBeans, and I am trying to register Tomcat 9 with NetBeans 8.0.2. I am successfully using it with Eclipse. I googled, of course, but none of the solutions seemed to work. My CATALINA_HOME is set and points to the Tomcat location. In Path I added %CATALINA_HOME%\bin. But when I try to add the server in NetBeans, it gives the error: "The specified Server Location (Catalina Home) folder is not valid." Can someone please help me? Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/51016525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using request count for hpa I am trying to deploy HorizontalPodAutoscaler on GKE referring the following link #HPA.yaml apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler metadata: name: hello-hpa spec: minReplicas: 1 maxReplicas: 5 metrics: - external: metricName: loadbalancing.googleapis.com|https|backend_request_count metricSelector: matchLabels: resource.labels.forwarding_rule_name: k8s-fw-default-hello-hpa--fbeacb94cfa9120e targetAverageValue: "1" type: External scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hello-hpa Here k8s-fw-default-hello-hpa--fbeacb94cfa9550e is the name of the forwarding rule in ingress. The above works for us but we would like to automate the deployment process where even when there is a change in the Ingress/HPA object it should work without human intervention. AFAIK the forwarding rule name changes per deployment and this requires us to go and change the HPA object accordingly. Is there some way to get that value as a reference to point to something that is same across deployments? Where can I describe the external metrics to see if we can use some other selector? Referenced this and this but did not find anything It would be great if I could use the service name (or may be service name and project id pair) to filter. example- matchLabels: resource.labels.forwarding_rule_name: k8s-fw-default-hello-hpa.* OR matchLabels: resource.labels.backend_name: hello-hpa OR matchLabels: resource.labels.backend_name: hello-hpa resource.labels.project_id: hello-project
{ "language": "en", "url": "https://stackoverflow.com/questions/57691427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cloudfront CloudFormation We have option to get the value of DomainName in cloudformation template while creating a CloudFront Distribution using Fn::GetAtt function. But I could not find anywhere that how we get Origin's Id and DefaultCacheBehaviour's TargetOriginId dynamically? Can I just use Ref to my S3 and ELB? This is my code, I have used some parameters also and changed the Cloudfront code as well. Please check it once whether it is correct or not. And it is throwing me an error called "Property validation failure: [Encountered unsupported properties in {/DistributionConfig/Origins/1/S3OriginConfig}: [HTTPSPort, HTTPPort, OriginProtocolPolicy]]" { "AWSTemplateFormatVersion": "2010-09-09", "Parameters": { "ClientName": { "Type": "String", "Description": "Name of the Client" }, "EnvName": { "Type": "String", "Description": "Name of the Environment" } }, "Resources": { "distd2v0l803ay8odocloudfrontnet": { "Type": "AWS::CloudFront::Distribution", "Properties": { "DistributionConfig": { "Enabled": true, "DefaultRootObject": "index.html", "PriceClass": "PriceClass_All", "CacheBehaviors": [ { "TargetOriginId": { "Ref": "elbhtlbetaelb" }, "PathPattern": "/app*", "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": [ "HEAD", "DELETE", "POST", "GET", "OPTIONS", "PUT", "PATCH" ], "CachedMethods": [ "HEAD", "GET" ], "ForwardedValues": { "QueryString": true, "Cookies": { "Forward": "all" } } }, { "TargetOriginId": { "Ref": "elbhtlbetaelb" }, "PathPattern": "/api*", "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": [ "HEAD", "DELETE", "POST", "GET", "OPTIONS", "PUT", "PATCH" ], "CachedMethods": [ "HEAD", "GET" ], "ForwardedValues": { "QueryString": true, "Cookies": { "Forward": "all" } } } ], "DefaultCacheBehavior": { "TargetOriginId": { "Ref": "s3htlbeta" }, "ViewerProtocolPolicy": "allow-all", "MinTTL": 0, "AllowedMethods": [ "HEAD", "DELETE", "POST", "GET", "OPTIONS", "PUT", "PATCH" ], "CachedMethods": [ "HEAD", "GET" ], 
"ForwardedValues": { "Cookies": { "Forward": "none" } } }, "Origins": [ { "DomainName": { "Fn::GetAtt": [ "s3htlbeta", "DomainName" ] }, "Id": { "Ref": "s3htlbeta" }, "S3OriginConfig": { "OriginAccessIdentity": "origin-access-identity/cloudfront/EYD1QGO9CUDA2" } }, { "DomainName": { "Fn::GetAtt": [ "elbhtlbetaelb", "DNSName" ] }, "Id": { "Ref": "elbhtlbetaelb" }, "S3OriginConfig": { "HTTPPort": "80", "HTTPSPort": "443", "OriginProtocolPolicy": "http-only" } } ], "Restrictions": { "GeoRestriction": { "RestrictionType": "none", "Locations": [] } }, "ViewerCertificate": { "CloudFrontDefaultCertificate": "true", "MinimumProtocolVersion": "TLSv1" } } } }, "s3htlbeta": { "Type": "AWS::S3::Bucket", "Properties": { "AccessControl": "Private", "VersioningConfiguration": { "Status": "Suspended" } } } }, "Description": "xxx-beta cloudformation template" } A: The DistributionConfig/Origins/ID field should just be a text name, it doesn't need to reference anything. ie. Set DistributionConfig/Origins/ID to a string e.g. 'MyOriginBucket' Then your CacheBehaviour TargetOriginId is also a string set to 'MyOriginBucket' The only Ref required to your new bucket is in Origins/DomainName. The purpose of the TargetOriginId is to point to the origin ID that you specified in the list of Origins, not point to the bucket name.
{ "language": "en", "url": "https://stackoverflow.com/questions/53576157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Print in terminal after freopen("file.txt", "wb", stdout); (or similar solution to preserve a printing function)
I want to store the printed output of a function using freopen("file.txt", "wb", stdout); (or another solution, but the function matrix_output_printf() should remain untouched). However, I want to be able to print something further after having called fclose(stdout);, as shown below, but the "hello" of printf("hello"); does not appear in the terminal. Is it possible to print "hello" in another way (on a macOS system)?

/*
 * main.c
 * https://stackoverflow.com/questions/75349098/stream-the-output-of-a-void-function-using-printf/75352112#75352112
 */
#include <stdio.h>
#include <stdlib.h>

void matrix_output();

void matrix_output_printf(){
    for (int i = 0; i < 3; i++){
        for (int j = 0; j < 3; j++){
            printf("%d\t", i+j);
        }
        printf("\n");
    }
}

int main () {
    freopen("file.txt", "wb", stdout);
    matrix_output_printf();
    fclose(stdout);
    printf("hello");
    return 0;
}

Edit: The code provided in the answer is not working on my macOS computer ('io.h' file not found). Is there another solution, portable to UNIX-like systems?
A: Tried and tested on my Windows box: #include <stdio.h> #include <stdlib.h> #include <io.h> void matrix_output_printf() { for( int i = 0; i < 3; i++ ) { for( int j = 0; j < 3; j++ ) printf( "%d\t", i+j ); printf( "\n" ); } } int main( void ) { int saved = _dup( fileno( stdout ) ); freopen( "file.txt", "w", stdout ); matrix_output_printf(); fclose( stdout ); _dup2( saved, 1 ); printf( "hello\n" ); return 0; } A: If stdout was originally tied to the console, you can reopen the stream to /dev/tty: #include <stdio.h> void matrix_output_printf(void) { for (int i = 0; i < 3; i++) { for (int j = 0; j < 3; j++) { printf("%d\t", i + j); } printf("\n"); } } int main() { freopen("file.txt", "w", stdout); matrix_output_printf(); freopen("/dev/tty", "w", stdout); printf("hello"); return 0; } This should work on most unix systems, such as macOS, but if stdout was redirected to another file or device, reopening it with freopen won't work. You could try and duplicate the system handle with dup() and use fdopen() to open a standard stream after closing stdout, but there is no guarantee that the stream returned by fdopen will be the same as stdout. Using freopen for your purpose is a hack. 
A much better approach is to rewrite matrix_output_printf to take a FILE * argument for the output stream, or a const char *filename for the name of the file to produce: void matrix_output_printf(const char *filename) { FILE *fp = stdout; if (filename) { fp = fopen(filename, "w"); if (fp == NULL) { /* handle error */ return; } } for (int i = 0; i < 3; i++) { for (int j = 0; j < 3; j++) { fprintf(fp, "%d\t", i + j); } fprintf(fp, "\n"); } if (filename) fclose(fp); } A: Another option: * *Flush the stream *Duplicate stdout's file descriptor before calling the function, duplicate a file descriptor open to the desired file to the stream's original file descriptor value *Call the function *Restore the original stdout descriptor Similar to this: #include <stdio.h> #include <stdlib.h> #include <fcntl.h> #include <sys/stat.h> #include <unistd.h> void matrix_output_printf(void){ for (int i = 0; i < 3; i++){ for (int j = 0; j < 3; j++){ printf("%d\t", i+j); } printf("\n"); } } int main() { int fd = open("file.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); int savedStdoutFD = dup(fileno(stdout)); fflush(stdout); dup2(fd, STDOUT_FILENO); close(fd); matrix_output_printf(); fflush(stdout); dup2(savedStdoutFD, STDOUT_FILENO); close(savedStdoutFD); printf("hello\n"); return 0; } Note that this code is inherently unsafe to use in a multithreaded process. @chqrlie had the best answer: "A much better approach is to rewrite matrix_output_printf to take a FILE * argument for the output stream, or a const char *filename for the name of the file to produce".
{ "language": "en", "url": "https://stackoverflow.com/questions/75355851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why did css 'position: absolute' stop working? In my site, I have a language bar floating on the top right of the screen. html: <form id='form2' name='form2' method='get' action=''> <div id='langs' class='langs'> <button type='submit' name='lang' value='PT'>PT</button> <button type='submit' name='lang' value='EN'>EN</button> </div></form> css: .langs { background-color: #90A090; position:absolute; right:4; top:4; } So far so good. But when I updated the website to include a login system from here, something went wrong, and the language bar doesn't float anymore. Instead, it is at the end of the page. There's no css added in any file. And every other aspect of the css still works (like changing the background color). The file that was index.php is now main.php (index.php is now used for the login screen). The css is all inside the main.php file. What can be causing the change in behavior? A: You're missing the units on the values. Try: .langs { background-color: #90A090; position:absolute; right:4px; /* right:0; is ok but right:4; will fail */ top:4px; } But you are expected to set position: relative on the parent, for better results xD form#form2 { position: relative; }
{ "language": "en", "url": "https://stackoverflow.com/questions/33289473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Delphi - Two Modal forms I have a main form (A) that calls a modal form (B), and (B) calls another modal form (C), and they work fine. The problem: when I added a new modal form (D) and made it called from (B), then when I close form (D), form (B) also closes!! Although I made sure the close button's ModalResult = mrNone. Please advise. Code: Form A calling B B := TB.Create(self); B.ShowModal; Form B Calling C C := TC.Create(self); C.ShowModal; Form B Calling D D := TD.Create(self); D.ShowModal; I use Delphi 2010 More code added This is how I free the form that causes the problem and makes the caller close: procedure TForm2.FormClose(Sender: TObject; var Action: TCloseAction); begin action := cafree; end; procedure TForm2.FormDestroy(Sender: TObject); begin Form2 := nil; end; This is how I show the modal form procedure Tmymodalfrm.Button1Click(Sender: TObject); begin form2 := Tform2.Create(self); form2.ShowModal; end; And after tracing with the call stack I get the code that originally created form B, which is perfectly normal: B := TB.Create(self); B.ShowModal; and I am going crazy soon :) A: Found the problem: the button that calls the form had ModalResult = mrClose!!
{ "language": "en", "url": "https://stackoverflow.com/questions/12505572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to get the Google Custom Search (V2) to execute immediately with a pre-loaded search string? I've been tasked with adding GCS to a website. After I followed the instructions to create my free GCS (http://www.google.com/cse/sitesearch/create), and pasted the supplied snippet into the appropriate place, the search box & button components render OK and the user is able to enter a search string, run the search and see the results. So far so good. However, when the components render for the first time I want to be able to pass a pre-entered string into the box and programmatically have the search executed immediately. This bit is not working. The code I currently have in place is as follows, consisting of the supplied snippet plus some extra code derived from my reading of the Custom Search Element Control API doc (https://developers.google.com/custom-search/docs/element) and intended to implement the 'execute immediate': <div class="content-container"> <script type="text/javascript"> (function() { var cx = '(my search id)'; var gcse = document.createElement('script'); gcse.type = 'text/javascript'; gcse.async = true; gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') + '//www.google.com/cse/cse.js?cx=' + cx; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(gcse, s); })(); </script> <gcse:search> gname="2DSearch"</gcse:search> <script type="text/javascript"> var element = google.search.cse.element.getElement("2DSearch"); element.prefillQuery(primarySearch); element.execute(primarySearch); </script> </div> primarySearch is the string I want to automatically search on. When the components render, the string 'gname="2DSearch"' appears briefly then disappears again just before the search components appear, then nothing else happens. 
There appear to be some similarities here with this question (unanswered): https://stackoverflow.com/questions/15871911/passing-optional-search-parameters-into-google-custom-search-query I have searched the Web in vain for a number of hours for anything else relevant. Can anybody tell me why it's not working and/or what I need to do? My apologies, I have done a lot of programming but am virtually illiterate when it comes to HTML & JavaScript. Thanks Jim I discovered that the Chrome console is showing the following error: Uncaught ReferenceError: google is not defined My code now looks like this: <div class="content-container"> <script type="text/javascript"> (function() { var cx = '013736134253840884188:fxsx6zqql_y'; var gcse = document.createElement('script'); gcse.type = 'text/javascript'; gcse.async = true; gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') + '//www.google.com/cse/cse.js?cx=' + cx; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(gcse, s); })(); </script> </div> <div class="gcse-search" data-gname="2DSearch"></div> <div class="content-container"> <script type="text/javascript" src="https://www.google.com/jsapi"></script> <script type="text/javascript"> var element = google.search.cse.element.getElement("2DSearch"); element.prefillQuery(primarySearch); element.execute(primarySearch); </script> </div> In the console again I'm now also seeing the following: XMLHttpRequest cannot load (insert here the jsapi link above that I'm not allowed to post). Origin (insert here the URL for my localhost) is not allowed by Access-Control-Allow-Origin. 
There are numerous references to similar errors to this all over the Net, each one slightly different, with solutions proposed referring to JSON, jQuery, AJAX, etc., but nothing that I have found seems directly relevant to what I am trying to do (i.e. make available to my code the file or library in which 'google' is defined), and nothing that I have tried has worked. Talk about trying to find your way through a coalmine with a candle... :) Cheers A: Can you pass the search term via the URL? <script> (function() { var cx = 'YOURID'; var gcse = document.createElement('script'); gcse.type = 'text/javascript'; gcse.async = true; gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') + '//www.google.com/cse/cse.js?cx=' + cx; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(gcse, s); })(); </script> <gcse:searchbox queryParameterName="term"></gcse:searchbox> <gcse:searchresults></gcse:searchresults> If you call your "search" page via yourdomain.com/search?term=searchword the search results appear immediately. A: <gcse:search gname='abcd'></gcse:search> And when the page has loaded: google.search.cse.element.getElement('abcd').execute(query); A: I've got it working with the gcse callback option (I also changed my layout in the CSE Control Panel to prevent the default overlay). <script> function gcseCallback() { if (document.readyState != 'complete') return google.setOnLoadCallback(gcseCallback, true); google.search.cse.element.render({gname:'gsearch', div:'results', tag:'searchresults-only', attributes:{linkTarget:''}}); var element = google.search.cse.element.getElement('gsearch'); element.execute('this is my query'); }; window.__gcse = { parsetags: 'explicit', callback: gcseCallback }; (function() { var cx = 'YOUR_ENGINE_ID'; var gcse = document.createElement('script'); gcse.type = 'text/javascript'; gcse.async = true; gcse.src = (document.location.protocol == 'https:' ? 
'https:' : 'http:') + '//www.google.com/cse/cse.js?cx=' + cx; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(gcse, s); })(); </script> <div id="results"></div>
{ "language": "en", "url": "https://stackoverflow.com/questions/16849144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Elasticsearch 6.2 (java/gradle) integration test I have been spending days trying to find a solution for this "simple" test: * *inside the test, stand up an ES node that will be used for the test only *connect to that ES instance with a transport client and add a new index. (That is just the start of all the integration tests, but I need to see how to do one, if anyone has an example.) Note: we have to continue to use the transport client; we know it is going away. We are upgrading from ES version 2 to version 6 and trying to keep the integration tests as unchanged as possible. This is exactly the same issue as this person had in a lower version; we had it that way, but it is not supported anymore in 6, so how do I accomplish the same now: Start elasticsearch within gradle build for integration tests I keep finding comments that an "existing gradle plugin" will do the job - what is that plugin and how do I use it? Or, an example with ESIntegTestCase would be perfect too, anything that works. Thank you!
{ "language": "en", "url": "https://stackoverflow.com/questions/51271057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best way to create a unique number for each many to many relationship I have a table of Students and a table of Courses that are connected through an intermediate table to create a many-to-many relationship (i.e. a student can enroll in multiple courses and a course can have multiple students). The problem is that the client wants a unique student ID per course. For example: rowid Course Student ID (calculated) 1 A Ben 1 2 A Alex 2 3 A Luis 3 4 B Alex 1 5 B Gail 2 6 B Steve 3 The IDs should be numbered from 1, and a student can have a different ID for different courses (Alex for example has ID=2 for course A, but ID=1 for Course B). Once an ID is assigned it is fixed and cannot change. I implemented a solution by ordering on the rowid of the through table "SELECT Student from table WHERE Course=A ORDER BY rowid" and then returning a number based on the order of the results. The problem with this solution is that if a student leaves a course (is deleted from the table), the numbers of the other students will change. Can someone recommend a better way? If it matters, I'm using PostgreSQL and Django. Here's what I've thought of: * *Creating a column for the ID instead of calculating it. When a new relationship is created, assigning an ID based on the max(id)+1 of the students in the course *Adding a column "disabled" and setting it True when a student leaves the course. This would involve changing all my code to make sure that only active students are used I think the first solution is better, but is there a more "database-centric" way where the database can calculate this for me automatically? A: If you want to have stable IDs, you certainly need to store them in the table. You'll need to assign a new sequential ID for every student that joins a course, and just delete it if the student leaves, without touching the others. If you have concurrent access to your tables, don't use MAX(id), as two queries can select the same MAX(id) before inserting it into the table. 
Instead, create a separate table to be used as a sequence, lock each course's row with SELECT FOR UPDATE, then insert the new student's ID and update the row with a new ID in a single transaction, like this: Courses: Name NextID ------- --------- Math 101 Physics 201 Attendants: Student Course Id ------- ------ ---- Smith Math 99 Jones Math 100 Smith Physics 200 BEGIN TRANSACTION; SELECT NextID INTO @NewID FROM Courses WHERE Name = 'Math' FOR UPDATE; INSERT INTO Attendants (Student, Course, Id) VALUES ('Doe', 'Math', @NewID); UPDATE Courses SET NextID = @NewID + 1 WHERE Name = 'Math'; COMMIT; A: Your first suggestion seems good: have a last_id field in the course table that you increase by 1 any time you enroll a student in that course. A: Creating a column for the ID instead of calculating it. When a new relationship is created assigning an ID based on the max(id)+1 of the students in the course That's how I'd do it. There is no point in calculating it. And the IDs shouldn't change just because someone dropped out. Adding a column "disabled" and setting it True when a student leaves the course. Yes, that would be a good idea. Another one is creating another table of the same structure, where you'll store dropped students. Then of course you'll have to select max(id) from the union of these two tables. A: I think there are two concepts that you need to help you out here. * *Sequences, where the database gets the next value for an ID for you automatically *Composite keys, where more than one column can be combined to make the primary key of a table. From a quick google it looks like Django can handle sequences but not composite keys, so you will need to emulate that somehow. 
However, you could equally have two foreign keys and a sequence for the course/student relationship. As for how to handle deletions, it depends on what you need from your app; you may find that a status field would help, since you may want to differentiate between students who left and those that were kicked out, or get statistics on how many students leave different courses.
{ "language": "en", "url": "https://stackoverflow.com/questions/565102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Django view that rewrites URL to contain username as a query parameter? I want to do simple analytics over which user is doing what on my Django site. I think the simplest way is to rewrite all URLs so that they contain "?username=foo". This will only happen on GET urls that don't already have any URL query parameters. How do I do this? Can I rewrite the URL to include a new query parameter from a view? A: Why would you want to add a GET parameter to each view when the request already contains the user? Also, you should probably use the middleware layer to log user actions. A: I don't think you would have to change anything except the templates for this. If your existing URL is like http://server/myapp/view1 then accessing a URL like http://server/myapp/view1?username=foo will also work. So your existing views will work as is. However, you will have to change the templates that render the links per the new scheme. For example, for the above view, if you are doing {% url 'view1' %} to get a link, then you will have to change it to {% url 'view1'%}?username={{user}}.
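To sketch the middleware suggestion from the first answer: the class and logger names below are my own invention; only the get_response/__call__ shape is the standard Django middleware protocol.

```python
import logging

logger = logging.getLogger("analytics")

class UserActionLoggingMiddleware:
    """Log which user hit which URL, instead of rewriting query strings."""

    def __init__(self, get_response):
        self.get_response = get_response  # Django passes the next handler in

    def __call__(self, request):
        user = getattr(request, "user", None)
        username = getattr(user, "username", "") or "anonymous"
        logger.info("user=%s method=%s path=%s",
                    username, request.method, request.path)
        return self.get_response(request)
```

Registered by adding its dotted path to MIDDLEWARE in settings.py, this records every request without touching a single URL or template.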
{ "language": "en", "url": "https://stackoverflow.com/questions/12526385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: LDA - Assigning keywords to topics I have unstructured data of about 150k documents. I am trying to group these documents using an unsupervised learning algorithm. Currently I am using LDA (Latent Dirichlet allocation) in gensim Python. For LdaModel I have passed num_topics=20. Hence all of my 150k documents fall into 20 topics. Now that I have these groups, I have 2 questions: * *How should I assign new documents to these topics? The approach I am taking is: calculate the sum of the word scores of the document per topic and assign the document to the topic with the highest score. However this is not giving me good results. Is there any better way to get this? *How do I assign the main keywords that denote the topic? A: How should I assign new documents to these topics? Once you have a trained model you can query the model for your document with: doc_bow = model.id2word.doc2bow(doc.split()) # convert to bag of words format first doc_topics, word_topics, phi_values = model.get_document_topics(doc_bow, per_word_topics=True) This code is going to provide you with per-doc and per-word information about the level of belonging to a particular topic. This means the per-word calculations are done for you automatically. How do I assign the main keywords that denote the topic? It is difficult to understand what you mean. The keywords denoting a topic, along with their weights, are the actual LDA model that you got from the training using a corpus. I suppose you may be interested in reviewing the following notebook [*] for more information on how to query the model for specific information regarding a document (per-word topic information, etc.). [*] from which I took the excerpt of the code above
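For the second question: gensim exposes each topic's top words directly — lda_model.show_topic(topic_id, topn=10) returns (word, probability) pairs — and the usual convention is to label a topic by its highest-weight words. A small pure-Python helper (so it runs without gensim; the sample pairs below are invented stand-ins for show_topic output):

```python
def topic_keywords(word_probs, topn=5):
    """Top-n words from (word, probability) pairs, i.e. the shape that
    gensim's lda_model.show_topic(topic_id, topn=...) returns."""
    ranked = sorted(word_probs, key=lambda wp: wp[1], reverse=True)
    return [word for word, _ in ranked[:topn]]

# Hypothetical pairs standing in for show_topic() output:
pairs = [("match", 0.08), ("team", 0.12), ("score", 0.05), ("player", 0.10)]
keywords = topic_keywords(pairs, topn=2)  # ["team", "player"]
```

Running topic_keywords over show_topic for each of the 20 topic ids gives a keyword label per topic.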
{ "language": "en", "url": "https://stackoverflow.com/questions/43656999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Git push atomically, only if the target branch is at the expected SHA Is there a way to push to a branch only if it hasn't changed since I last fetched it? More specifically, I'd like to delete branches that have already been merged, but only if no one else pushes new commits to them between when I check that the branch has been merged and when I push the deletion. From the git push documentation, I see that it does support a --atomic option, but that option is for making updates to multiple branches atomically with respect to each other, rather than for ensuring that my update to a remote branch will be atomic with respect to other users' updates. I.e., I want to delete merged branches on a remote git repository without any risk of deleting anyone else's work, even if they are working concurrently. A: Yes, this is possible by using the --force-with-lease option. For example: $ git push --force-with-lease=refs/heads/foo origin :refs/heads/foo This is designed for force pushes, but works for deletes as well. The option has several variants and can also take an explicit expected value (--force-with-lease=<refname>:<expect>, e.g. --force-with-lease=refs/heads/foo:1234abcd), in which case the push only succeeds if the remote ref is still at exactly that commit.
{ "language": "en", "url": "https://stackoverflow.com/questions/65430425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: git - cannot push to repository - bug with hashmap When I try to push to a repository, I get this message: BUG: remote.c:236: hashmap_put overwrote entry after hashmap_get returned NULL Any idea what is wrong? Google shows nothing... When I try to push to another repository on the same server, it works correctly. The problem is only with one repo. Local Git version: 2.36.0.windows.1 Server Git version: 2.19.2 A: Git has self-detected an internal error. Report this to the Git mailing list (git@vger.kernel.org). The output from git config --list --show-origin may also be useful to the Git maintainers, along with the output of git ls-remote on the remote in question (origin, probably). (The bug itself is in your Windows Git; the server version of Git should be irrelevant, but it won't hurt to mention that too.) A: After reporting the problem to the git team, it turned out to be caused by a branch with an empty name "". After removing this from .git/config, pushing works again. The problem has also been passed to the git team and will probably be fixed in a future version.
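For reference, the kind of malformed section the second answer describes would look something like this in .git/config (the remote and merge values here are illustrative assumptions — the point is the empty branch name in the section header):

```ini
[branch ""]
	remote = origin
	merge = refs/heads/master
```

Deleting the whole section (or giving the branch a real name) removes the empty-name entry that triggered the BUG message.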
{ "language": "en", "url": "https://stackoverflow.com/questions/72293994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to get stderr using Popen + vim I am trying to capture error messages from gvim using subprocess.Popen(), but not having much luck. import sys import subprocess import time ppp = subprocess.Popen( ["vim", "-g", "--nofork", "--some_option_that_does_not_exist", "blah"], stderr=subprocess.PIPE ) time.sleep(1) poll = ppp.poll() err = '' if poll != None and poll != 0: err = ppp.communicate() print '-------------------------------->>POLL\n', print poll print '-------------------------------->>PPP\n', print ppp print '-------------------------------->>err\n', print err print '--------------------------------<<\n\n', print 'Trying to get:\n' subprocess.Popen( ["vim", "-g", "--nofork", "--some_option_that_does_not_exist", "sasa"] ) Result: $ ./poperr.py -------------------------------->>POLL 1 -------------------------------->>PPP <subprocess.Popen object at 0xb73effec> -------------------------------->>err (None, '') --------------------------------<< Trying to get: VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 26 2012 16:45:52) Unknown option argument: "--some_option_that_does_not_exist" More info with: "vim -h" The point is to start vim in the background and on success continue with the script; on error, log the error message and abort the script. Guess there are some quirks using vim. Have tested various things with no luck. Any ideas? vim -h: -g Run using GUI (like "gvim") -f or --nofork Foreground: Don't fork when starting GUI Edit: To emphasize: It is the (None, '') part that is the issue. As one can see, stderr is an empty string: ''. If there is no error it will be None as well (like stdout). The reason for the sleep is to give the sub-process a chance to succeed or fail. However, the real script keeps polling for status on the process to detect if it has ended during its lifetime. As such the sleep is not that important, but it reduces stress on the host as a lot of other tasks are skipped. 
… and it makes for a more reproducible snippet here. I increased the sleep time in the sample to one second to perhaps make it clearer. Another example would be this. In this case errorfile.txt is always empty. ppp = subprocess.Popen( ["vim", "-g", "--nofork", "--some_option_that_does_not_exist", "blah"], stderr=open('errorfile.txt', 'a') ) A: ppp.communicate() will not be called unless the process terminates in less than 0.01 seconds, which evidently is not happening. Note that ppp.communicate() will wait for the process to finish, so I'm not sure why you do time.sleep() and ppp.poll(). The following consistently works: import subprocess stdout, stderr = subprocess.Popen( ["vim", "-g", "--nofork", "--nonexistent_option", "blah"], stderr=subprocess.PIPE ).communicate() print '---stdout---\n%s\n---stderr---\n%s\n---expected---' % (stdout, stderr) subprocess.Popen( ["vim", "-g", "--nofork", "--nonexistent_option", "blah"] ).wait()
{ "language": "en", "url": "https://stackoverflow.com/questions/20446313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's the gain I can have with blocks over regular methods? I am a Java programmer, and I am learning Ruby... But I don't get what those blocks of code can gain me... Like, what's the purpose of passing a block as an argument? Why not have 2 specialized methods instead that can be reused? Why have some code in a block that cannot be reused? I would love some code examples... Thanks for the help! A: Consider some of the things you would use anonymous classes for in Java. e.g. often they are used for pluggable behaviour such as event listeners, or to parametrize a method that has a general layout. Imagine we want to write a method that takes a list and returns a new list containing the items from the given list for which a specified condition is true. In Java we would write an interface: interface Condition { boolean f(Object o); } and then we could write: public List select(List list, Condition c) { List result = new ArrayList(); for (Object item : list) { if (c.f(item)) { result.add(item); } } return result; } and then if we wanted to select the even numbers from a list we could write: List even = select(mylist, new Condition() { public boolean f(Object o) { return ((Integer) o) % 2 == 0; } }); To write the equivalent in Ruby it could be: def select(list) new_list = [] # note: I'm avoiding 'each' so as to not illustrate blocks # using a method that needs a block for item in list # yield calls the block with the given parameters new_list << item if yield(item) end return new_list end and then we could select the even numbers with simply even = select(list) { |i| i % 2 == 0 } Of course, this functionality is already built into Ruby, so in practice you would just do even = list.select { |i| i % 2 == 0 } As another example, consider code to open a file. You could do: f = open(somefile) # work with the file f.close but you then need to think about putting your close in an ensure block in case an exception occurs whilst working with the file. 
Instead, you can do open(somefile) do |f| # work with the file here # ruby will close it for us when the block terminates end A: The idea behind blocks is that they are highly localized code where it is useful to have the definition at the call site. You can use an existing function as a block argument: just pass it as an additional argument, and prefix it with an &.
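The "&" trick from the last answer looks like this in practice (the method name even? here is just for illustration — Ruby's Integer already ships its own even?):

```ruby
# A plain method that could serve as a block's body.
def even?(n)
  n % 2 == 0
end

# & converts a Method object (or a Proc/Symbol) into a block at the call site:
evens = [1, 2, 3, 4].select(&method(:even?))   # => [2, 4]

# Symbols work too: &:even? calls Integer#even? on each element.
also_evens = [1, 2, 3, 4].select(&:even?)      # => [2, 4]
```

So a block doesn't have to be one-off throwaway code; any reusable method can be handed to a block-taking method this way.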
{ "language": "en", "url": "https://stackoverflow.com/questions/4783166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: HighChart - Stacked Bar chart - To show dash line over the bar chart but it is not visible on Left side of the bar I have attached a screenshot to show the dashed line on the left side of the chart. It would be helpful if we could achieve this functionality; I tried increasing the width of the border, but it is only slightly visible. Do you have a proper approach to achieve this? Code Snippet : Highcharts.chart('container', { chart: { type: 'bar', charWidth: 520, chartHeight:300, margin: [70,0,0,0] }, xAxis: { categories: ['Jan'], visible: false }, yAxis: { min: 0, visible: false }, plotOptions: { series:{ stacking:'normal' }, dataLabels: { enabled : false, } }, series: [{ name: 'John', data: [{ y: 15 }] }, { name: 'Jane', data: [{ y: 22 }] }, { name: 'Joe', data: [{ y: 33, }] }, { stacking: false, data: [55], grouping: false, dashStyle:'ShortDash', color: 'transparent', borderWidth: 2, borderColor: 'red', }] }); Thanks for the response A: The left side of the bar is connected to the axis, which makes the left border less visible. There are some possible solutions to this issue. * *You can set the minimum value of the yAxis to -0.1 and set the startOnTick property to false. Then the left border is visible (it's not directly connected to the axis). Demo: https://jsfiddle.net/BlackLabel/pqy84hvs/ API references: https://api.highcharts.com/highcharts/yAxis.startOnTick https://api.highcharts.com/highcharts/yAxis.min yAxis: { min: -0.1, visible: false, startOnTick: false } *You can set the borderWidth property to 3. Then the border is visible. Demo: https://jsfiddle.net/BlackLabel/01m6p47f/ API references: https://api.highcharts.com/highcharts/series.bar.borderWidth { name: 'Joe', borderWidth: 3, borderColor: 'red', data: [{ y: 33, }] } *You can also use SVG Renderer and render the border yourself. 
Docs: https://api.highcharts.com/class-reference/Highcharts.SVGRenderer Example demo of using SVG Renderer: https://jsfiddle.net/BlackLabel/2koczuq0/
{ "language": "en", "url": "https://stackoverflow.com/questions/68897331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to connect socket_io_client via https in flutter? I work with socket_io_client. How do I connect to an https socket in my application? Below is a code example: Socket socket = io('https://abcd.co.in:1000005', OptionBuilder().setTransports(['websocket']).build()); socket.connect(); socket.onConnect((_) { print('connect'); socket.emit("Client", [MyConstants.CtName]); socket.emit("room", [MyConstants.CtName]); }); I am using socket_io_common: ^2.0.0 because my flutter channel is stable 2.2.3. I am also using socket_io_client: any. It works fine with my other http socket URL.
{ "language": "en", "url": "https://stackoverflow.com/questions/72086390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can't hear sound from bluetooth speaker I'm using a Raspberry Pi with Ubuntu 22.10 and Java 19. I'm trying to play a sound with the following code: private static void testMixers() throws IOException, UnsupportedAudioFileException { try (AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(new BufferedInputStream(Objects.requireNonNull(Listener.class.getResourceAsStream("/alarmSound.wav"))))) { final Scanner scanner = new Scanner(System.in); for (Mixer.Info info : AudioSystem.getMixerInfo()) { System.out.println("Testing sound for mixer \"" + info + "\"..."); try (Clip clip = AudioSystem.getClip(info)) { clip.open(audioInputStream); clip.setFramePosition(0); clip.start(); System.out.println("Did you hear any sound? (true|false)"); if (Boolean.parseBoolean(scanner.nextLine())) return; } catch (Exception e) { System.out.println(e); } } } System.out.println("Mixer not found!"); } alarmSound.wav is under src/main/resources (this is a Maven project). AudioSystem.getMixerInfo() gives me the following Mixer.Info[]: * *Port Headphones [hw:0], version 5.19.0-1006-raspi *Port vc4hdmi0 [hw:1], version 5.19.0-1006-raspi *Port vc4hdmi1 [hw:2], version 5.19.0-1006-raspi *Headphones [default], version 5.19.0-1006-raspi *Headphones [plughw:0,0], version 5.19.0-1006-raspi *vc4hdmi0 [plughw:1,0], version 5.19.0-1006-raspi *vc4hdmi1 [plughw:2,0], version 5.19.0-1006-raspi Only the Headphones mixers don't give the error: java.lang.IllegalArgumentException: Line unsupported: interface Clip supporting format PCM_SIGNED unknown sample rate, 16 bit, stereo, 4 bytes/frame, big-endian I didn't test the headphone jack. This is completely different from Ubuntu 22.04, which gave more Mixer.Infos. Is there any workaround? It seems that every Ubuntu version requires a different workaround to be able to play a sound somewhere other than through the headphones (from bluetooth speakers or via HDMI, for example).
{ "language": "en", "url": "https://stackoverflow.com/questions/74281825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I use html area tag to create different shape text containers? Is it possible to create a text container which is not a square/rectangle? Can I use the area tag for this? Any example snippet will be appreciated. A: No. All elements are rectangles by definition. Even the <area> tag wouldn't get past that.
{ "language": "en", "url": "https://stackoverflow.com/questions/20359304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting the Value which is already selected in drop down menu by Selenium Webdriver I have a drop-down menu with a pre-selected value. How do I get the selected option's text and print it to the console? The drop-down menu's HTML is: <select id="ctl00_ContentPlaceHolder1_ddlCalculation" class="normalText" style="width:100%;" onchange="javascript:LoadMethods(this.value); CallonChange(this.value,'spn_ddlCalculation'); return false;" disabled="disabled" name="ctl00$ContentPlaceHolder1$ddlCalculation"> <option value="0" selected="selected">--- Select ---</option> <option value="f">Formula Based</option> <option value="m">Formula Based with Matrix Table</option> <option value="q">Quantity Based</option> <option value="t">Time Based</option> </select> A: Please try the following and let me know if it works: Select dropdown = new Select(driver.findElement(By.id("ctl00_ContentPlaceHolder1_ddlCalculation"))); WebElement option = dropdown.getFirstSelectedOption(); String content = option.getText(); System.out.println("selected Value " + content);
{ "language": "en", "url": "https://stackoverflow.com/questions/22091892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: how to write value to file with node.js I have multiple requests from a client; each request contains data: { key: 'string', public_url: 'string'| null , value: 'string', created_at:'string', updated_at: 'string', content_type: 'string', size: 'int', checksum: 'string', theme_id: 'int', warnings: [] } Say I have 243 requests; for each request I need to write the data to a file. I have tried many ways, like fs.writeFile, fs.writeFileSync, fs.createWriteStream, promises, etc. const sourceStoreShopName = session.shop.split('.')[0] const fetchData = await restClient.get({ path: 'assets', theme_id: id, query: { ['asset[key]']: assetPath, }, }) const asset = fetchData.body.asset const filename = asset.key.split('/') const dir = filename.length > 2 ? `${process.cwd()}/tmp/${sourceStoreShopName}/${filename[0]}/${filename[1]}` : `${process.cwd()}/tmp/${sourceStoreShopName}/${filename[0]}` if (!fs.existsSync(dir)) { fs.mkdirSync(dir, { recursive: true }) } const ws = fs.createWriteStream(path.join(dir, filename.slice(-1)[0])) const API_LIMIT = fetchData.headers.get('x-shopify-shop-api-call-limit').split('/') return asset } I want to copy the content from value to a file. It works for one file, but with more than two I get the error below. The error: 1:23:07 PM [vite] http proxy error: Error: socket hang up at connResetException (node:internal/errors:704:14) at Socket.socketOnEnd (node:_http_client:505:23) at Socket.emit (node:events:525:35) at endReadableNT (node:internal/streams/readable:1359:12) at process.processTicksAndRejections (node:internal/process/task_queues:82:21) Any ideas how to solve this problem?
{ "language": "en", "url": "https://stackoverflow.com/questions/73486213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best method to get double value from datetime in jquery What is the best way to convert a datetime to a double in jQuery? I have used Date.parse, but it is not correct in some cases. Also, I am unable to pass a datetime format to that method. - Viji A: In JavaScript, you can simply do variablename.toFixed(2); for that. Example: var num1 = 12312.12312 console.log(num1.toFixed(2)); // Will give 12312.12
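For the question as asked (datetime string to number), the usual approach is the Date's millisecond timestamp rather than toFixed. A sketch, assuming ISO-8601 input, since Date.parse is only specified to parse that format reliably (which is likely why it "is not correct" for other formats):

```javascript
// Convert a datetime string to a double: milliseconds since the Unix
// epoch. Date.parse is only reliable for ISO-8601 strings, so other
// formats should be normalized to ISO-8601 first.
function dateTimeToDouble(isoString) {
  var ms = Date.parse(isoString); // NaN if the string is unparseable
  if (isNaN(ms)) {
    throw new Error('Unparseable datetime: ' + isoString);
  }
  return ms; // a plain JS number (IEEE-754 double)
}
```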
{ "language": "en", "url": "https://stackoverflow.com/questions/16914918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: javascript program to check if a string is a palindrome not returning false I wrote this bit of code as part of an exercise to check whether or not a string is a palindrome. The program works correctly in terms of checking the string, but it does not return false when the string is not a palindrome. What am I doing wrong? Thanks //convert the string to array var stringArr = [ ]; var bool; function palindrome(str) { // make lowercase var lowerCase = str.toLowerCase(); //remove numbers, special characters, and white spaces var noNumbers = lowerCase.replace(/[0-9]/g, ''); var noSpecials = noNumbers.replace(/\W+/g, " "); var finalString = noSpecials.replace(/\s/g, ''); stringArr = finalString.split(""); if (stringArr.sort(frontToBack)==stringArr.sort(backToFront)) { bool = true; } else { bool= false; } return bool; } function frontToBack (a,b) {return a-b;} function backToFront (a,b) {return b-a;} palindrome("eye"); A: if (stringArr.sort(frontToBack)==stringArr.sort(backToFront)) { is your problem. In JavaScript, the sort method updates the value of the variable you are sorting. So in your comparison, once both sorts have run, both end up with the same value (since the second sort effectively overrides the first). For example: var a = [1,7,3]; a.sort(); console.log(a); // will print 1,3,7 Edit: had a quick test, I think eavidan's suggestion is probably the best one. Edit2: Just put together a quick version of a hopefully working palindrome function :) function palindrome(str) { return str.split("").reverse().join("") == str;} A: It is because string subtraction yields NaN, which means both sorted arrays are the same as the original. Even if you did convert to ASCII coding, you sort the entire string, then for instance the string abba would be sorted front to back as aabb and back to front as bbaa. (edit: and also what Carl wrote about sort changing the original array.
Still - sort is not the way to go here) What you should do is just reverse the string (using reverse on the array) and compare. A: You might do as follows; var isPalindrome = s => { var t = s.toLowerCase() .replace(/\s+/g,""); return [].slice.call(t) .reverse() .every((b,i) => b === t[i]); }; console.log(isPalindrome("Was it a car or a cat I saw")); console.log(isPalindrome("This is not a palindrome")); A: function pal() { var x=document.getElementById("a").value; //input String var y=""; //blank String for (i=x.length-1;i>=0;i--) //string run from backward { y=y+x[i]; //store string last to first one by one in blank string } if(x==y) //compare blank and original string equal or not { console.log("Palindrome"); } else { console.log("Not Palindrome "); } }
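Combining the question's normalization with the reverse-and-compare approach from the answers gives a sort-free version. A sketch; the normalization is simplified to "keep ASCII letters only", which is roughly what the question's chain of replace calls achieves:

```javascript
// Palindrome check without sort: normalize, then compare the string
// to its own reverse, as the answers above suggest.
function isPalindrome(str) {
  // lowercase and strip everything that is not a letter
  var normalized = str.toLowerCase().replace(/[^a-z]/g, '');
  return normalized === normalized.split('').reverse().join('');
}
```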
{ "language": "en", "url": "https://stackoverflow.com/questions/39730416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I add meta data to the head section of the page from a wordpress plugin? I am writing a wordpress plugin and need to add some meta tags to the head section of the page from within my plugin. Does anyone know how I would do this? Thanks A: Yes, you can add an action hook to wp_head like this: add_action('wp_head', 'myCallbackToAddMeta'); function myCallbackToAddMeta(){ $contents = 'your, keywords, here'; echo "\t<meta name='keywords' content='$contents' />\n"; }
{ "language": "en", "url": "https://stackoverflow.com/questions/5805888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Change button background programmatically I'm creating a Tic-Tac-Toe app using Android Studio, this is the layout <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/activity_main" android:layout_width="match_parent" android:layout_height="match_parent" android:paddingBottom="@dimen/activity_vertical_margin" android:paddingLeft="@dimen/activity_horizontal_margin" android:paddingRight="@dimen/activity_horizontal_margin" android:paddingTop="@dimen/activity_vertical_margin" tools:context="com.company.alex.tic_tac_toe.MainActivity" android:layout_margin="16dp" android:layout_marginBottom="16dp"> <Button android:id="@+id/B2" android:background="#33b5e5" android:layout_centerVertical="true" android:layout_centerHorizontal="true" android:layout_width="80dp" android:layout_height="80dp" android:textSize="@dimen/activity_vertical_margin" tools:textStyle="normal" tools:textSize="36sp" /> <Button android:layout_height="80dp" android:layout_above="@+id/B2" android:layout_centerHorizontal="true" android:id="@+id/A2" android:layout_marginBottom="16dp" android:background="#33b5e5" android:layout_width="80dp" /> <Button android:layout_alignBottom="@+id/B2" android:layout_toEndOf="@+id/B2" android:id="@+id/B3" android:background="#33b5e5" android:layout_marginLeft="16dp" android:layout_width="80dp" android:layout_height="80dp" tools:textSize="@android:dimen/thumbnail_width" /> <Button android:id="@+id/B1" android:layout_below="@+id/A2" android:layout_toStartOf="@+id/B2" android:layout_marginRight="16dp" android:background="#33b5e5" android:layout_width="80dp" android:layout_height="80dp" /> <Button android:layout_alignTop="@+id/A2" android:layout_alignEnd="@+id/B1" android:id="@+id/A1" android:background="#33b5e5" android:layout_width="80dp" android:layout_height="80dp" /> <Button android:id="@+id/C2" 
android:background="#33b5e5" android:layout_below="@+id/B2" android:layout_toEndOf="@+id/B1" android:layout_marginTop="16dp" android:layout_width="80dp" android:layout_height="80dp" /> <Button android:layout_alignTop="@+id/C2" android:layout_toStartOf="@+id/C2" android:id="@+id/C1" android:background="#33b5e5" android:layout_marginRight="16dp" android:layout_width="80dp" android:layout_height="80dp" /> <Button android:layout_alignTop="@+id/C2" android:layout_toEndOf="@+id/C2" android:id="@+id/C3" android:layout_marginLeft="16dp" android:background="#33b5e5" android:layout_width="80dp" android:layout_height="80dp" /> <Button android:text="NUOVA PARTITA" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentTop="true" android:layout_alignParentStart="true" android:id="@+id/bNewGame" android:layout_alignParentEnd="true" android:background="#f17a0a" tools:textSize="24sp" /> <Button android:layout_height="80dp" android:id="@+id/A3" android:background="#33b5e5" android:layout_width="80dp" android:layout_alignTop="@+id/A2" android:layout_alignStart="@+id/B3" /> <TextView android:layout_height="wrap_content" android:layout_alignParentBottom="true" android:layout_marginBottom="18dp" android:id="@+id/punteggio" android:layout_width="80dp" android:layout_alignParentEnd="true" android:layout_alignParentStart="true" android:textAppearance="@style/TextAppearance.AppCompat" /> <ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" app:srcCompat="@drawable/grigio" android:id="@+id/imageView5" android:layout_alignBottom="@+id/C2" android:layout_alignEnd="@+id/B3" android:layout_alignTop="@+id/A2" android:layout_alignStart="@+id/B1" android:background="@drawable/grigio" /> </RelativeLayout> I would like to change the background color of the three buttons when a player wins. 
I check for the winner in this part of the code: { boolean there_is_a_winner = false; //first check the horizontal lines if (a1.getText() == a2.getText() && a2.getText() == a3.getText() && !a1.isClickable()) there_is_a_winner = true; else if (b1.getText() == b2.getText() && b2.getText() == b3.getText() && !b1.isClickable()) there_is_a_winner = true; else if (c1.getText() == c2.getText() && c2.getText() == c3.getText() && !c1.isClickable()) there_is_a_winner = true; //now check the vertical lines if (a1.getText() == b1.getText() && b1.getText() == c1.getText() && !a1.isClickable()) there_is_a_winner = true; else if (a2.getText() == b2.getText() && b2.getText() == c2.getText() && !b2.isClickable()) there_is_a_winner = true; else if (a3.getText() == b3.getText() && b3.getText() == c3.getText() && !c3.isClickable()) there_is_a_winner = true; //now check the diagonals if (a1.getText() == b2.getText() && b2.getText() == c3.getText() && !a1.isClickable()) there_is_a_winner = true; else if (a3.getText() == b2.getText() && b2.getText() == c1.getText() && !b2.isClickable()) there_is_a_winner = true; If I put "Button11.setBackgroundColor(Color.RED);" under the first if, the program crashes. How can I do this?
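The repeated if-chains can be factored into a helper that returns which line won, so the caller knows exactly which three buttons to recolor. A plain-Java sketch over a char[][] board (the Android Button/Color calls are left out; a crash at setBackgroundColor usually means the Button reference was never initialized with findViewById):

```java
// Find the winning line on a 3x3 board. Returns the three winning
// cell coordinates as {row,col} pairs, or null if there is no winner.
// The caller can then call setBackgroundColor(Color.RED) on exactly
// those three buttons.
class WinChecker {
    static int[][] winningLine(char[][] b) {
        int[][][] lines = {
            // rows
            {{0,0},{0,1},{0,2}}, {{1,0},{1,1},{1,2}}, {{2,0},{2,1},{2,2}},
            // columns
            {{0,0},{1,0},{2,0}}, {{0,1},{1,1},{2,1}}, {{0,2},{1,2},{2,2}},
            // diagonals
            {{0,0},{1,1},{2,2}}, {{0,2},{1,1},{2,0}},
        };
        for (int[][] line : lines) {
            char first = b[line[0][0]][line[0][1]];
            if (first != ' '
                && first == b[line[1][0]][line[1][1]]
                && first == b[line[2][0]][line[2][1]]) {
                return line;
            }
        }
        return null; // no winner yet
    }
}
```

Note also that a1.getText() == a2.getText() compares CharSequence references, not text; on Android, prefer a1.getText().toString().equals(a2.getText().toString()).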
{ "language": "en", "url": "https://stackoverflow.com/questions/42906869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: React gives me red underline errors but there is no error VS Code shows errors on lines that contain no actual errors. A: Please rename your file to "todo.jsx". Explanation: VS Code and other IDEs choose your parser based on the file extension. For VS Code it looks like you are creating a "normal" JavaScript file, but plain JavaScript does not know JSX tags, so you get an error message. A small addition: if you ever work with TypeScript in React, the same applies: instead of the .ts extension you should choose the .tsx extension. A: It looks like your file is named ToDO.js. Since you're using JSX syntax, the file should be given a .jsx extension, so that VS Code knows how its syntax should be parsed (and, as a result, what errors to display). So, rename it to ToDO.jsx.
{ "language": "en", "url": "https://stackoverflow.com/questions/70510413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tkinter Wait for Key to Be Pressed I am using Python 3 and Tkinter, and I have a wait() function, waiting until the right arrow key or left arrow key is pressed; however, it just freezes everything, and I have to force stop the program. from tkinter import * right = left = False def setLeft(event): global left left = True print('Left!') def setRight(event): global right right = True print('Right!') def wait(): global right, left left = right = 0 while not (left or right): pass print(right) #0 for left, 1 for right left = right = 0 root = Tk() root.bind('<Left>', setLeft) root.bind('<Right>', setRight) Is there a way for the wait() function to work like it is supposed to, or do I need to find a different way? A: Yes, you need a different way, because of the event-driven nature of Tkinter GUI programming. When some event leads you to your wait() function, there it is: you're stuck in an infinite loop, and you can't get out via events anymore! As @Bryan Oakley pointed out - the GUI is constantly in a waiting state by default, once you reach mainloop(). And I think you're trying to suppress all other events (or at least the navigation keys) while the user navigates through a tree, except these two (Left and Right). So here is a small example: import tkinter as tk # Main - application class App(tk.Tk): # init of our application def __init__(self, *args, **kwargs): tk.Tk.__init__(self, *args, **kwargs) self.minsize(width=450, height=100) self.wm_title('Just another example from SO') self.wait_state = False self.init_events() # switch state def switch_wait_state(self, event): self.wait_state = not self.wait_state print('Wait state switched to: %s ' % self.wait_state) # init events def init_events(self): self.bind('<Key>', self.wait) self.bind('<Control-s>', self.switch_wait_state) # waiter(listener) for keypress def wait(self, event): if self.wait_state and any(key == event.keysym for key in ['Left', 'Right']): print('I have successfully waited until %s keypress!'
% event.keysym) self.do_smth(event.keysym) else: print('Wait state: %s , KeyPress: %s' % (self.wait_state, event.keysym)) self.do_nhth() @staticmethod def do_smth(side): print("Don't be rude with me, Im trying my best on a %s side!" % side) @staticmethod def do_nhth(): pass app = App() app.mainloop()
{ "language": "en", "url": "https://stackoverflow.com/questions/42126361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Get name of All Constraints from access table in java How can I get all the constraints, like primary keys and foreign keys, defined in an MS Access database? For MySQL, we use describe table_name Similarly, what is the query for MS Access, for use in a Java program? A: import java.sql.*; public class DescQueryOutput{ public static void main(String args[]) { try { Class.forName("com.mysql.jdbc.Driver"); Connection con = DriverManager.getConnection ("jdbc:mysql://localhost:3306/test","root","root"); Statement stmt = con.createStatement(); String query = "DESC newaccount"; ResultSet rs = stmt.executeQuery(query); System.out.println("COLUMN NAME\tDATATYPE\tNULL\tKEY\tDEFAULT\tEXTRA"); while (rs.next()) { System.out.print(rs.getString(1)+"\t"); System.out.print(rs.getString(2)+"\t"); System.out.print(rs.getString(3)+"\t"); System.out.print(rs.getString(4)+"\t"); System.out.print(rs.getString(5)+"\t"); System.out.println(rs.getString(6)); } } catch (SQLException e) { e.printStackTrace(); } catch (ClassNotFoundException e) { e.printStackTrace(); } finally { } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/15355448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: A basic vanilla javascript AJAX loader I am trying to write a simple function to return data from an ajax call. Here is what I have var mytext = ""; function load(url){ var xhr = new XMLHttpRequest(); xhr.open('GET', url); xhr.send(); xhr.onloadend = function(e){ return xhr.responseText; } } var mytext = load('window.html'); console.log(mytext); I am stuck. How do I get the returned value? It's a function in a function and I am lost :( A: There are several ways to do that. Since you seem to be new to javascript, you can start with a callback: function load(url, callback){ var xhr = new XMLHttpRequest(); xhr.onloadend = function(e){ callback(xhr); }; xhr.open('GET', url); xhr.send(); } load('/', function (data) { console.log(data); }); In this example, callback is a function, and we pass xhr as a parameter to that function while calling. A: My advice is to make use of the Promise and fetch APIs. function ajax(options) { return new Promise(function (resolve, reject) { fetch(options.url, { method: options.method, headers: options.headers, body: options.body }).then(function (response) { response.json().then(function (json) { resolve(json); }).catch(err => reject(err)); }).catch(err => reject(err)); }); } You can use it like so: const ajaxResponse = await ajax({url: '/some/api/call', method: 'get'}); In case you don't already know, await can only be used inside async functions. If you don't want to use async functions, do the following: ajax({url: '/some/api/call', method: 'get'}).then(data => { // process data here }); Explanation: JavaScript is a single-threaded language. This means everything runs in a blocking manner. If your Ajax call takes 3 seconds, then JavaScript will pause for 3 seconds. Luckily, the XMLHttpRequest and fetch APIs combat this issue by using asynchronous functions, meaning code can continue running while the Ajax call is awaiting a response.
In your code, you're not getting a response from your function because the Ajax call doesn't stop the execution, meaning by the time the call has been made, there's nothing to return yet and by the time the call is finished, the function's call is long in the past too. You can tell JavaScript however to keep track of this asynchronous task through Promises. When your task is finished, the Promise's then function is called with the data from the Ajax call. JavaScript also provides syntactic sugar to make reading asynchronous code easier. When we use an async function, what we're actually doing is creating a regular function, whose body is wrapped in a Promise. This also means that when you want to wait for the result of a previous Promise, you can prepend await in front of the Promise reference and await the completion of the Promise. So you may have code that looks like this: const call = await ajax({ ... }); console.log(call); which actually translates to the following: ajax({ ... }).then(call => { console.log(call); });
{ "language": "en", "url": "https://stackoverflow.com/questions/61136997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I use the CodeIgniter MY_Model class for an update with multiple where conditions? I'm using the CodeIgniter MY_Model class to update my table data. The query I need is: UPDATE `pay_project` SET `awarded` = '129' WHERE `pay_id` = 1 and `status` ='y' For this purpose I tried the MY_Model update function: $this->pay_project_model->update_by(array('pay_id'=>1,'status'=>'y'),$updateJob); But the code is not working. If I try update() instead of update_by(), it produces: UPDATE `pay_project` SET `awarded` = '129' WHERE 'id'=array() Please help me solve this. I also tried update_many(), and it does not work either. The model I am using is https://github.com/jamierumbelow/codeigniter-base-model A: Try this with Active Record; it'll let you know what you were doing, along with the query: function update($data = array(), $where = '') { $where = array('pay_id'=>1,'status'=>'y'); $data = array('awarded' => '129'); $this->db->where($where); $res = $this->db->update($this->main_table, $data); $rs = $this->db->affected_rows(); return $rs; } A: Here is the query you need in your model function. class my_model extends CI_Model { function __construct(){ parent::__construct(); } /*Model function*/ function update_by($condition = array(),$updateJob=array()) { if($condition && $updateJob) { $this->db->where($condition); $this->db->update('pay_project', $updateJob ); } } } Now you can use your existing code from the controller for your desired purpose.
$updateJob = ['awarded' => '129']; $this->pay_project_model->update_by(array('pay_id'=>1,'status'=>'y'), $updateJob); A: Use CodeIgniter Active Record for database operations: $data = array( 'awarded' => '129' ); $where = array( 'pay_id'=>1, 'status'=>'y' ); $this->db->where($where); $this->db->update('pay_project', $data); For documentation, refer to this link: codeigniter active records update A: Use this $query = $this->db->query("UPDATE pay_project SET awarded = '129' WHERE pay_id = '1' and status ='y' "); $result = $query->result_array(); return $result; A: If you are using the CodeIgniter MY_Model class, you need to change your model like this for the update_by() function to work: class Pay_project_model extends MY_Model { protected $table='pay_project'; function update_by($where = array(),$data=array()) { $this->db->where($where); $query = $this->db->update($this->table,$data); return $query; } } The MY_Model class has no built-in option to update by multiple where conditions, so you need to write the update_by() function yourself. Simply copy-paste the function into your class, and it will work perfectly.
{ "language": "en", "url": "https://stackoverflow.com/questions/31284275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to resolve 404 Error for an AJAX call in Django I am getting a 404 Error for an AJAX call in Django Here's my code: jQuery Script: $('#cluster-id').change( function(){ $.ajax({ url: '/ajax_demo/', type: 'get', dataType: 'json', complete: function( jqXHR, textStatus ){ console.log("complete " + textStatus); }, success: function( data, jqXHR, textStatus ){ console.log("success " + data); }, error: function(jqXHR, textStatus, errorThrown ){ console.log("error: " + errorThrown ); } }); I am trying to get the data (in json) from this function: from django.http import JsonResponse def ajax_demo( request ): cluster = request.GET.get('cluster', None) data = { 'name' : 'abc' } return JsonResponse(data) I have set the url correctly too: urlpatterns = [ url( r'^$', views.main_page_demo, name = 'main_page_demo' ), url( r'^demo/', views.ajax_demo, name = 'ajax_demo'), ] I am new to AJAX and jQuery and I don't know what the issue is. root url urlpatterns =[ url(r'^admin/', include(admin.site.urls)), url(r'^$', include( 'app.urls', namespace = "mainPage")), A: You have several obvious errors here. Firstly, you are posting to /ajax_demo/, but you don't have that as a URL; only /demo/. You need to either change the URL pattern or the URL you are posting to. Secondly, you have included the app patterns incorrectly; you must not use a terminal $ with include(). It should be: url(r'^', include( 'app.urls', namespace = "mainPage")), Thirdly, you are not including any data with your request. You need a data key: $.ajax({ url: '/demo/', type: 'get', data: {"cluster": "value"} ...
{ "language": "en", "url": "https://stackoverflow.com/questions/43178026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pointer to function instead of if in C I'm making a little calculator in C with only the "+, -, /, *, %" operators, and honestly, it works fine. But I would like to rewrite it with pointers to functions - I think the syntax will be cleaner - and I don't know where to start or how to use them. I would like to improve. Can someone explain, please? #include <unistd.h> #include <stdio.h> int ft_atoi(char *str); void ft_putnbr(int nb); int check_by_zero(int a, int b, char oper) { int res; if (oper == '/') { if (b == 0) { write(1, "Stop : division by zero\n", 24); res = -1; } else res = a / b; } if (oper == '%') { if (b == 0) { write(1, "Stop : modulo by zero\n", 22); res = -1; } else res = a % b; } return (res); } int do_result(int a, int b, char oper) { int res; if (oper == '-') res = a - b; if (oper == '+') res = a + b; if (oper == '/' || oper == '%') res = check_by_zero(a, b, oper); if (oper == '*') res = a * b; return (res); } int main(int ac, char **av) { int value1; int value2; int res; char c; if (ac != 4) return (0); value1 = ft_atoi(av[1]); value2 = ft_atoi(av[3]); c = av[2][0]; if ((!(c == '+' || c == '-' || c == '*' || c == '/' || c == '%')) || av[2][1] != 0) res = 0; else res = do_result(value1, value2, c); if (res != -1) { ft_putnbr(res); write(1, "\n", 1); } return (0); } EDIT: Can't use standard functions. Only mine. This is the reason why I use write instead of printf. A: To make it with a pointer to function, create a function for each operator. int hh_div(int a, int b) { // TBD: add some checks return a / b; } int hh_add(int a, int b) { // TBD: add some checks return a + b; } Then use an array of function pointers indexed by char oper.
int do_it(char oper, int a, int b) { static int (*const all[256])(int, int) = { // ['/'] = hh_div, // ['+'] = hh_add, // // etc }; int (*f)(int a, int b) = all[(unsigned char) oper]; if (f == NULL) { Handle_NotDefined(); } return f(a, b); } Code could use a big if() or switch(), but with a nice function look-up table, what is a 1 or 2 k-byte table between friends? Of course the table could be smaller with some pre-tests. Ref: Some 2 operand operators: % ^ & * + - | \ < > /
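Filling in the sketch above, a complete lookup-table dispatcher could look like the following. This is a sketch in the same style as the answer's hh_div/hh_add; hh_sub, hh_mul, hh_mod, and the zero checks are my additions, and the error handling (return -1, message on stderr) mirrors the question's convention rather than any fixed API:

```c
#include <stdio.h>

/* One small function per operator, all sharing the same signature
   so they fit in a single function-pointer table. */
static int hh_add(int a, int b) { return a + b; }
static int hh_sub(int a, int b) { return a - b; }
static int hh_mul(int a, int b) { return a * b; }

static int hh_div(int a, int b) {
    if (b == 0) { fprintf(stderr, "Stop : division by zero\n"); return -1; }
    return a / b;
}

static int hh_mod(int a, int b) {
    if (b == 0) { fprintf(stderr, "Stop : modulo by zero\n"); return -1; }
    return a % b;
}

/* Dispatch on the operator character via a 256-entry table of
   function pointers, filled with designated initializers; every
   other entry is implicitly NULL. */
int do_it(char oper, int a, int b) {
    static int (*const all[256])(int, int) = {
        ['+'] = hh_add,
        ['-'] = hh_sub,
        ['*'] = hh_mul,
        ['/'] = hh_div,
        ['%'] = hh_mod,
    };
    int (*f)(int, int) = all[(unsigned char) oper];
    if (f == NULL) {
        fprintf(stderr, "Unknown operator '%c'\n", oper);
        return -1;
    }
    return f(a, b);
}
```

The designated initializers (['+'] = hh_add) are C99; each unlisted slot stays NULL, which is what the f == NULL check relies on.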
{ "language": "en", "url": "https://stackoverflow.com/questions/68462910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to convert latitude or longitude to meters? If I have a latitude or longitude reading in standard NMEA format is there an easy way / formula to convert that reading to meters, which I can then implement in Java (J9)? Edit: Ok seems what I want to do is not possible easily, however what I really want to do is: Say I have a lat and long of a way point and a lat and long of a user is there an easy way to compare them to decide when to tell the user they are within a reasonably close distance of the way point? I realise reasonable is subject but is this easily do-able or still overly maths-y? A: Here is the R version of b-h-'s function, just in case: measure <- function(lon1,lat1,lon2,lat2) { R <- 6378.137 # radius of earth in Km dLat <- (lat2-lat1)*pi/180 dLon <- (lon2-lon1)*pi/180 a <- sin((dLat/2))^2 + cos(lat1*pi/180)*cos(lat2*pi/180)*(sin(dLon/2))^2 c <- 2 * atan2(sqrt(a), sqrt(1-a)) d <- R * c return (d * 1000) # distance in meters } A: The earth is an annoyingly irregular surface, so there is no simple formula to do this exactly. You have to live with an approximate model of the earth, and project your coordinates onto it. The model I typically see used for this is WGS 84. This is what GPS devices usually use to solve the exact same problem. NOAA has some software you can download to help with this on their website. A: There are many tools that will make this easy. See monjardin's answer for more details about what's involved. However, doing this isn't necessarily difficult. It sounds like you're using Java, so I would recommend looking into something like GDAL. It provides java wrappers for their routines, and they have all the tools required to convert from Lat/Lon (geographic coordinates) to UTM (projected coordinate system) or some other reasonable map projection. UTM is nice, because it's meters, so easy to work with. However, you will need to get the appropriate UTM zone for it to do a good job. 
There are some simple codes available via googling to find an appropriate zone for a lat/long pair. A: For approximating short distances between two coordinates I used formulas from http://en.wikipedia.org/wiki/Lat-lon: m_per_deg_lat = 111132.954 - 559.822 * cos( 2 * latMid ) + 1.175 * cos( 4 * latMid); m_per_deg_lon = 111132.954 * cos ( latMid ); . In the code below I've left the raw numbers to show their relation to the formula from wikipedia. double latMid, m_per_deg_lat, m_per_deg_lon, deltaLat, deltaLon,dist_m; latMid = (Lat1+Lat2 )/2.0; // or just use Lat1 for slightly less accurate estimate m_per_deg_lat = 111132.954 - 559.822 * cos( 2.0 * latMid ) + 1.175 * cos( 4.0 * latMid); m_per_deg_lon = (3.14159265359/180 ) * 6367449 * cos ( latMid ); deltaLat = fabs(Lat1 - Lat2); deltaLon = fabs(Lon1 - Lon2); dist_m = sqrt ( pow( deltaLat * m_per_deg_lat,2) + pow( deltaLon * m_per_deg_lon , 2) ); The wikipedia entry states that the distance calcs are within 0.6m for 100km longitudinally and 1cm for 100km latitudinally but I have not verified this as anywhere near that accuracy is fine for my use. A: Here is a javascript function: function measure(lat1, lon1, lat2, lon2){ // generally used geo measurement function var R = 6378.137; // Radius of earth in KM var dLat = lat2 * Math.PI / 180 - lat1 * Math.PI / 180; var dLon = lon2 * Math.PI / 180 - lon1 * Math.PI / 180; var a = Math.sin(dLat/2) * Math.sin(dLat/2) + Math.cos(lat1 * Math.PI / 180) * Math.cos(lat2 * Math.PI / 180) * Math.sin(dLon/2) * Math.sin(dLon/2); var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); var d = R * c; return d * 1000; // meters } Explanation: https://en.wikipedia.org/wiki/Haversine_formula The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. A: One nautical mile (1852 meters) is defined as one arcminute of longitude at the equator. 
However, you need to define a map projection (see also UTM) in which you are working for the conversion to really make sense. A: There are quite a few ways to calculate this. All of them use aproximations of spherical trigonometry where the radius is the one of the earth. try http://www.movable-type.co.uk/scripts/latlong.html for a bit of methods and code in different languages. A: Given you're looking for a simple formula, this is probably the simplest way to do it, assuming that the Earth is a sphere with a circumference of 40075 km. Length in km of 1° of latitude = always 111.32 km Length in km of 1° of longitude = 40075 km * cos( latitude ) / 360 A: 'below is from 'http://www.zipcodeworld.com/samples/distance.vbnet.html Public Function distance(ByVal lat1 As Double, ByVal lon1 As Double, _ ByVal lat2 As Double, ByVal lon2 As Double, _ Optional ByVal unit As Char = "M"c) As Double Dim theta As Double = lon1 - lon2 Dim dist As Double = Math.Sin(deg2rad(lat1)) * Math.Sin(deg2rad(lat2)) + _ Math.Cos(deg2rad(lat1)) * Math.Cos(deg2rad(lat2)) * _ Math.Cos(deg2rad(theta)) dist = Math.Acos(dist) dist = rad2deg(dist) dist = dist * 60 * 1.1515 If unit = "K" Then dist = dist * 1.609344 ElseIf unit = "N" Then dist = dist * 0.8684 End If Return dist End Function Public Function Haversine(ByVal lat1 As Double, ByVal lon1 As Double, _ ByVal lat2 As Double, ByVal lon2 As Double, _ Optional ByVal unit As Char = "M"c) As Double Dim R As Double = 6371 'earth radius in km Dim dLat As Double Dim dLon As Double Dim a As Double Dim c As Double Dim d As Double dLat = deg2rad(lat2 - lat1) dLon = deg2rad((lon2 - lon1)) a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) + Math.Cos(deg2rad(lat1)) * _ Math.Cos(deg2rad(lat2)) * Math.Sin(dLon / 2) * Math.Sin(dLon / 2) c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a)) d = R * c Select Case unit.ToString.ToUpper Case "M"c d = d * 0.62137119 Case "N"c d = d * 0.5399568 End Select Return d End Function Private Function deg2rad(ByVal deg As Double) As 
Double Return (deg * Math.PI / 180.0) End Function Private Function rad2deg(ByVal rad As Double) As Double Return rad / Math.PI * 180.0 End Function A: To convert latitude and longitude into an x and y representation you need to decide what type of map projection to use. As for me, Elliptical Mercator seems to work very well. Here you can find an implementation (in Java too). A: Here is a MySQL function: SET @radius_of_earth = 6378.137; -- In kilometers DROP FUNCTION IF EXISTS Measure; DELIMITER // CREATE FUNCTION Measure (lat1 REAL, lon1 REAL, lat2 REAL, lon2 REAL) RETURNS REAL BEGIN -- Multiply by 1000 to convert kilometers to meters RETURN 2 * @radius_of_earth * 1000 * ASIN(SQRT( POW(SIN((lat2 - lat1) / 2 * PI() / 180), 2) + COS(lat1 * PI() / 180) * COS(lat2 * PI() / 180) * POW(SIN((lon2 - lon1) / 2 * PI() / 180), 2) )); END; // DELIMITER ; A: Why limit it to one degree? The formula is based on the proportion: distance[m] : distance[deg] = max circumference[m] : 360[deg] Let's say you are given an angle for a latitude and one for longitude, both in degrees: (longitude[deg], latitude[deg]) For the latitude, the max circumference is always the one passing through the poles. In a spherical model, with radius R (in meters) the max circumference is 2 * pi * R and the proportion resolves to: latitude[m] = ( 2 * pi * R[m] * latitude[deg] ) / 360[deg] (note that the deg units cancel, and what remains is meters on both sides). For the longitude the max circumference is proportional to the cosine of the latitude (as you can imagine, running in a circle around the north pole is shorter than running in a circle around the equator), so it is 2 * pi * R * cos(latitude[rad]). Therefore longitude distance[m] = ( 2 * pi * R[m] * cos(latitude[rad]) * longitude[deg] ) / 360[deg] Note that you will have to convert the latitude from deg to rad before computing the cos.
Omitting details, for anyone who is just looking for the formulas: lat_in_m = 111132.954 * lat_in_deg lon_in_m = 111132.954 * cos(lat_in_rad) * lon_in_deg (111132.954 is already the length in meters of one degree of latitude, so no further division by 360 is needed.) A: If it's sufficiently close you can get away with treating them as coordinates on a flat plane. This works at, say, street or city level if perfect accuracy isn't required and all you need is a rough guess on the distance involved to compare with an arbitrary limit. A: Here is a version in Swift: func toDegreeAt(point: CLLocationCoordinate2D) -> CLLocationDegrees { let latitude = point.latitude let earthRadiusInMetersAtSeaLevel = 6378137.0 let earthRadiusInMetersAtPole = 6356752.314 let r1 = earthRadiusInMetersAtSeaLevel let r2 = earthRadiusInMetersAtPole let beta = latitude * Double.pi / 180 // latitude is in degrees; sin/cos expect radians let earthRadiuseAtGivenLatitude = ( ( pow(pow(r1, 2) * cos(beta), 2) + pow(pow(r2, 2) * sin(beta), 2) ) / ( pow(r1 * cos(beta), 2) + pow(r2 * sin(beta), 2) ) ) .squareRoot() let metersInOneDegree = (2 * Double.pi * earthRadiuseAtGivenLatitude * 1.0) / 360.0 let value: CLLocationDegrees = self / metersInOneDegree return value } A: Original poster asked "If I have a latitude or longitude reading in standard NMEA format is there an easy way / formula to convert that reading to meters" I haven't used Java in a while so I did the solution here in "PARI". Just plug your point's latitude and longitude into the equations below to get the exact arc lengths and scales in meters per (second of Longitude) and meters per (second of Latitude). I wrote these equations for the free-open-source-mac-pc math program "PARI".
You can just paste the following into it and the I will show how to apply them to two made up points: \\=======Arc lengths along Latitude and Longitude and the respective scales: \p300 default(format,"g.42") dms(u)=[truncate(u),truncate((u-truncate(u))*60),((u-truncate(u))*60-truncate((u-truncate(u))*60))*60]; SpinEarthRadiansPerSec=7.292115e-5;\ GMearth=3986005e8;\ J2earth=108263e-8;\ re=6378137;\ ecc=solve(ecc=.0001,.9999,eccp=ecc/sqrt(1-ecc^2);qecc=(1+3/eccp^2)*atan(eccp)-3/eccp;ecc^2-(3*J2earth+4/15*SpinEarthRadiansPerSec^2*re^3/GMearth*ecc^3/qecc));\ e2=ecc^2;\ b2=1-e2;\ b=sqrt(b2);\ fl=1-b;\ rfl=1/fl;\ U0=GMearth/ecc/re*atan(eccp)+1/3*SpinEarthRadiansPerSec^2*re^2;\ HeightAboveEllipsoid=0;\ reh=re+HeightAboveEllipsoid;\ longscale(lat)=reh*Pi/648000/sqrt(1+b2*(tan(lat))^2); latscale(lat)=reh*b*Pi/648000/(1-e2*(sin(lat))^2)^(3/2); longarc(lat,long1,long2)=longscale(lat)*648000/Pi*(long2-long1); latarc(lat1,lat2)=(intnum(th=lat1,lat2,sqrt(1-e2*(sin(th))^2))+e2/2*sin(2*lat1)/sqrt(1-e2*(sin(lat1))^2)-e2/2*sin(2*lat2)/sqrt(1-e2*(sin(lat2))^2))*reh; \\======= To apply that to your type of problem I will make up that one of your data points was at [Latitude, Longitude]=[+30, 30] and the other at [Latitude, Longitude]=[+30:00:16.237796,30:00:18.655502]. To convert those points to meters in two coordinates: I can setup a system of coordinates in meters with the first point being at the origin: [0,0] meters. Then I can define the coordinate x-axis as due East-West, and the y-axis as due North-South. Then the second point's coordinates are: ? [longarc(30*Pi/180,30*Pi/180,((18.655502/60+0)/60+30)*Pi/180),latarc(30*Pi/180,((16.237796/60+0)/60+30)*Pi/180)] %9 = [499.999998389040060103621525561027349597207, 499.999990137812119668486524932382720606325] Warning on precision: Note however: Since the surface of the Earth is curved, 2-dimensional coordinates obtained on it can't follow the same rules as cartesian coordinates such as the Pythagorean Theorem perfectly. 
Also, lines pointing due North-South converge in the Northern Hemisphere. At the North Pole it becomes obvious that North-South lines won't serve well for lines parallel to the y-axis on a map. At 30 degrees Latitude with 500 meter lengths, the x-coordinate changes by 1.0228 inches if the scale is set from [0,+500] instead of [0,0]: ? [longarc(((18.655502/60+0)/60+30)*Pi/180,30*Pi/180,((18.655502/60+0)/60+30)*Pi/180),latarc(30*Pi/180,((16.237796/60+0)/60+30)*Pi/180)] %10 = [499.974018595036400823218815901067566617826, 499.999990137812119668486524932382720606325] ? (%10[1]-%9[1])*1000/25.4 %12 = -1.02282653557713702372872677007019603860352 ? The error there of 500 meters / 1 inch is only about 1/20000, good enough for most diagrams, but one might want to reduce the 1 inch error. For a completely general way to convert lat,long to orthogonal x,y coordinates for any point on the globe, I would choose to abandon aligning coordinate lines with East-West and North-South, except still keeping the center y-axis pointing due North. For example you could rotate the globe around the poles (around the 3-D Z-axis) so the center point in your map is at longitude zero. Then tilt the globe (around the 3-D y-axis) to bring your center point to lat,long = [0,0]. On the globe, points at lat,long = [0,0] are farthest from the poles and have a lat,long grid around them that is most orthogonal, so you can use these new "North-South", "East-West" lines as coordinate x,y lines without incurring the stretching that would have occurred doing that before rotating the center point away from the pole. Showing an explicit example of that would take a lot more space. A: You need to convert the coordinates to radians to do the spherical geometry. Once converted, you can then calculate a distance between the two points. The distance can then be converted to any measure you want. A: Based on the average distance per degree on the Earth.
1° = 111km; converting this to radians and dividing to get meters takes a magic number for the RAD, in meters: 0.000008998719243599958; then: const RAD = 0.000008998719243599958; Math.sqrt(Math.pow(lat1 - lat2, 2) + Math.pow(long1 - long2, 2)) / RAD; A: If you want a simple solution then use the Haversine formula as outlined by the other comments. If you have an accuracy-sensitive application, keep in mind that the Haversine formula does not guarantee an accuracy better than 0.5%, as it assumes the Earth is a sphere. To account for the Earth being an oblate spheroid, consider using Vincenty's formulae. Additionally, I'm not sure what radius we should use with the Haversine formula: {Equator: 6,378.137 km, Polar: 6,356.752 km, Volumetric: 6,371.0088 km}.
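Tying the answers above together, the two recurring approaches (the haversine great-circle distance and the flat meters-per-degree approximation) can be sketched in Python. This is an illustrative sketch only: it assumes the spherical mean radius of 6,371 km (other answers use 6,378,137 m), and the function names are my own.

```python
import math

EARTH_RADIUS_M = 6_371_000  # spherical mean radius (an assumption; see radius note above)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def flat_approx_m(lat1, lon1, lat2, lon2):
    """Flat-plane approximation using the meters-per-degree series quoted above;
    adequate for short (street/city scale) distances."""
    lat_mid = math.radians((lat1 + lat2) / 2)
    m_per_deg_lat = 111132.954 - 559.822 * math.cos(2 * lat_mid) + 1.175 * math.cos(4 * lat_mid)
    m_per_deg_lon = 111132.954 * math.cos(lat_mid)
    return math.hypot((lat2 - lat1) * m_per_deg_lat, (lon2 - lon1) * m_per_deg_lon)
```

Over city-scale distances the two agree to well under 1%; for sub-meter accuracy over long baselines, Vincenty's formulae on an ellipsoid are the usual choice, as noted above.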
{ "language": "en", "url": "https://stackoverflow.com/questions/639695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "175" }
Q: Fine Uploader - get image date Is there a way to obtain (on the client side) the creation/last modification date of an image uploaded via Fine Uploader? Looking for a solution for JPEGs (where EXIF is available) and PNGs (where EXIF is not available). A: It is not possible to determine these values for non-JPEGs. For JPEGs, you can use a client-side EXIF parser, such as https://github.com/jseidelin/exif-js, to extract this data.
{ "language": "en", "url": "https://stackoverflow.com/questions/22655338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Making a form submit NOT reload the page I am still learning proper and more efficient ways of writing code - at the moment I am stuck on this current issue. I have a form that currently works fine - writes to the SQL database, then reloads the current page. I would like the form to NOT reload the page. I know there is a way to do this with jQuery and Ajax -- honestly I'm not skilled yet with these. * *If there's a way to do this with PHP/HTML - I would love guidance on this method. *If the only answer is JavaScript and Ajax - I would love guidance for this specific instance of changing this form to not reload the page. Screen shot of what the button looks like -- three colors to represent 3 stages of tracking payment on invoices. Red turns to yellow when clicked, yellow to green, green back to red (in case of a mistaken click). Here's the front-end code for my form: <form action="jump_invoicepaid.php" method="post"> <input type="hidden" name="invoicepaid" value="'.$projectid.'">'); if ($projectpaid == 2) { echo('<button type="submit" name="action" class="greenbutton" value="unpaid">Paid</button></form>'); } else if ($projectpaid == 1) { echo('<button type="submit" name="action" class="yellowbutton" value="paid">Paid</button></form>'); } else { echo('<button type="submit" name="action" class="redbutton" value="sent">Paid</button></form>'); } Here's the back-end code for my form: if ($_POST['action'] == 'paid') { ///marking project as paid include_once('toolbox.php'); $projectid = $_POST['invoicepaid']; mysqli_query($con, "UPDATE kmis_projects SET project_paid='2' WHERE project_id= '$projectid'"); header('Location: ./?page=pastevents'); } else if ($_POST['action'] == 'sent') { ///marking project as sent include_once('toolbox.php'); $projectid = $_POST['invoicepaid']; mysqli_query($con, "UPDATE kmis_projects SET project_paid='1' WHERE project_id= '$projectid'"); header('Location: ./?page=pastevents'); } else if ($_POST['action'] == 'unpaid') { ///marking project as unpaid
include_once('toolbox.php'); $projectid = $_POST['invoicepaid']; mysqli_query($con, "UPDATE kmis_projects SET project_paid='0' WHERE project_id= '$projectid'"); header('Location: ./?page=pastevents'); } I do want to learn jQuery and ajax fully, it's next on my to learn list. I know they would make this issue a lot easier to solve. UPDATE -- I spent some time with the link posted, suggesting it was a duplicate question of the link. I do see why... but it's not the same at the heart of what I'm asking or the answer that was given in the link. I think the link could be "an" answer to the issue - but I was interested in any ways of solving the issue without having to use jQuery and AJAX -- and if that was unavoidable, then specifically how AJAX would fit into my code and this specific situation. I equally think this is unique from the link, given that my buttons are populated with 3 different "if" checks, to write a different value depending on what button is populated and clicked. UPDATE 2 -- Here's the current code I'm working with. The ajax isn't sending name or value of button though it seems. <script> $( document ).ready(function() { $("button").click(function(e) { e.preventDefault(); // prevent page refresh $.ajax({ type: "POST", url: "jump_invoicepaid.php", data: $(this).serialize(), // serialize form data success: function(data) { window.location.replace("?page=pastevents"); }, error: function() { // Error ... 
} }); }); }); </script> <form action=""> <input type="hidden" name="invoicepaid" value="'.$projectid.'"> if ($projectpaid == 2) { echo('<button type="submit" name="action" class="greenbutton" value="unpaid">Paid</button></form>'); } else if ($projectpaid == 1) { echo('<button type="submit" name="action" class="yellowbutton" value="paid">Paid</button></form>'); } else { echo('<button type="submit" name="action" class="redbutton" value="sent">Paid</button></form>'); } A: In order to execute the server-side script (PHP) from the client side (static HTML and JavaScript), you need to use the Ajax technology. In essence, Ajax will allow you to send and/or retrieve data from the server "behind the scenes" without affecting your page. JavaScript, a client-side scripting language used to add interactivity to your webpage, already supports Ajax. However, the syntax is rather verbose, because you need to account for differences across browsers and their versions (Internet Explorer version 5 up to 11 in particular have their own implementation different from Chrome or Mozilla). You, as a developer, can avoid these technicalities by using a front-end framework such as jQuery (other good examples are Angular.js, Vue.js, and others). jQuery is essentially a library of code written in vanilla JavaScript that simplifies common tasks such as querying DOM. It also provides expressive syntax for using Ajax. It will resolve browser incompatibilities internally. This way you can focus on your high-level logic, rather than low-level issues like that. Once again, jQuery code is also JavaScript code. So place the following <script> tag, as you would normally do with JS, somewhere in your page. <script> $( document ).ready(function() { $("form").submit(function(e) { e.preventDefault(); // prevent page refresh $.ajax({ type: "POST", url: "jump_invoicepaid.php", data: $(this).serialize(), // serialize form data success: function(data) { // Success ... }, error: function() { // Error ... 
} }); }); }); </script> First, this code will run once the page is loaded. Second, it will attach a submit event handler to every <form> in your HTML page. When you click on the <button type='submit'>, this event will fire. e.preventDefault() will halt the normal submission of the form and prevent page refresh. Note that e is the event object that contains info about the submit event (such as which element it originated from). Next, we are sending the actual Ajax request using $.ajax method from jQuery library. It is a generic method, and we could as well have used $.post, since we are specifically doing the POST request. It will be received by your PHP file jump_invoicepaid.php. Now, in your PHP file, you have the following line: header('Location: ./?page=pastevents'); which forces a redirect to the home page with a GET parameter. If you want to have the same behavior, then you would need to remove that line, and put window.location.replace("/?page=pastevents") to your JavaScript code: // in your Ajax request above... success: function(data) { window.location.replace("/?page=pastevents"); }, This would refresh the page however, because you are essentially requesting the home page with a GET method. I am guessing that this is done in order to update information on the page. You could do so without redirecting, by adding / changing / removing elements on your page (i.e. in DOM). This way, you would send data from the server back to the client, and if the response is received successfully, in your JavaScript you can obtain that response and manipulate your webpage accordingly. If the latter interests you, then consider using JSON format. You just need to do json_encode in your PHP, and then on the client-side, parse the JSON string using JSON.parse(...). Read on here for example. Hope this helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/39694245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Impersonate user while creating Organization Service Proxy with XRM Tooling Connector I am connecting to CRM Online v9.0 using the XRM Tooling Connector. I am trying to update a Contact using the WebAPI. Once the contact is updated, a workflow is triggered that sends an Email to the Contact. However, I am getting the error "User does not have privilege send as". When I check in the code, CallerID for the service object is set to {00000000-0000-0000-0000-000000000000}. I think this is the reason why my workflow does not execute successfully. Can someone please help me with this? Thank you in advance. A: I think this is a Security issue. There are specific security privileges required to act on behalf of or send emails on behalf of other users. These privileges are on the Business Management tab in the Security Role. In addition to this, the impersonated user must have also authorised emails to be sent on their behalf. This setting is located in the impersonated user's personal settings.
{ "language": "en", "url": "https://stackoverflow.com/questions/51095298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to specify covariates that don't interact with factors when using rstatix package? When adding covariates to a between-subjects ANOVA model specified using rstatix package syntax, can one specify these covariates as having no interaction with the main effects? For example, results from running the "RUN1" code below results in both main effects for the covariates (e.g., "CV1") but also interactions with the "Date" variable (e.g., "CV1:Date) displayed. However, some covariate interactions with the time variable may make no theoretical sense (for example, an individual's gender typically wouldn't change after 6 measurements 1 week apart), therefore, I would like to try and exclude these interactions from the model. "RUN2" and "RUN3" are some attempts so far to resolve this issue, but didn't work. Does anybody know how this could be achieved? My data: MyData <- structure(list(ID = structure(c(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L), levels = c("5f609c2408813b0009898419", "5f9aacf32ab79c000bb1d56e", "5f9feef22ab79c000bb264c8", "5fa48df02ab79c000bb2ec4a", "5fa7d6c12ab79c000bb3495d", "5fac7ad22ab79c000bb3d6af", "6003785e2ab79c000978297b", "6003a1132ab79c0009782c8e", "6007a18d2ab79c000978526d", "600b9db52ab79c000bcf6d2e", "600e2b582ab79c000bcfeebe", "6010990c2ab79c000bd0698f", "6017a8992ab79c000b55eb27", "601b29eb2ab79c000b57a8d1", "60ff895fadbe1d0009fd07b2"), class = "factor"), Date = structure(c(3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L), levels = c("1", "2", "3"), class = "factor"), MeanValue = c(1.57142857142857, 3, 0.857142857142857, 1.28571428571429, 1.42857142857143, 1, 0.714285714285714, 0.142857142857143, 1.71428571428571, 0.285714285714286, 1.14285714285714, 1, 
1.42857142857143, 0.428571428571429, 1.14285714285714, 1.14285714285714, 2.71428571428571, 1, 1.71428571428571, 0.857142857142857, 1.71428571428571, 0.857142857142857, 0.571428571428571, 1.57142857142857, 2.14285714285714, 1, 1.28571428571429, 1.71428571428571, 2.57142857142857, 3, 1.14285714285714, 2.57142857142857, 1.14285714285714, 1.42857142857143, 1.57142857142857, 1.57142857142857, 0.571428571428571, 0.142857142857143, 2.14285714285714, 0.428571428571429, 0.714285714285714, 0.714285714285714, 1.28571428571429, 3, 0.714285714285714), CV1 = c(43, 56, 73, 43, 49, 52, 52, 33, 35, 45, 51, 60, 45, 44, 59, 43, 56, 73, 43, 49, 52, 52, 33, 35, 45, 51, 60, 45, 44, 59, 43, 56, 73, 43, 49, 52, 52, 33, 35, 45, 51, 60, 45, 44, 59), CV2 = c("1", "2", "2", "1", "1", "2", "2", "2", "2", "2", "2", "1", "1", "2", "1", "1", "2", "2", "1", "1", "2", "2", "2", "2", "2", "2", "1", "1", "2", "1", "1", "2", "2", "1", "1", "2", "2", "2", "2", "2", "2", "1", "1", "2", "1"), CV3 = c(0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0), CV4 = c(5, 5, 1, 1, 2, 5, 5, 4, 5, 5, 3, 5, 5, 5, 5, 5, 5, 1, 1, 2, 5, 5, 4, 5, 5, 3, 5, 5, 5, 5, 5, 5, 1, 1, 2, 5, 5, 4, 5, 5, 3, 5, 5, 5, 5), CV5 = c(0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1), CV6 = c(0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1)), row.names = c(NA, -45L), class = c("tbl_df", "tbl", "data.frame")) And the commands I ran: # Load packages library(tidyverse) library(rstatix) # RUN1: between subjects ANOVA using rstatix res.aov <- anova_test(data = MyData, dv = MeanValue, wid = ID, within = Date, covariate=c("CV1","CV2", "CV4","CV5")) get_anova_table(res.aov) # RUN2: specify formula res.aov <- anova_test(data = MyData, dv = MeanValue, wid = ID, within = Date, formula = MeanValue ~ 
Date + Error(ID/Date), covariate=c("CV1","CV2", "CV4","CV5")) get_anova_table(res.aov) # RUN3: specify formula adding covariates directly in res.aov <- anova_test(data = MyData, dv = MeanValue, wid = ID, within = Date, formula = MeanValue ~ Date + CV1 + CV2 + CV4 + CV5 + Error(ID/Date)) get_anova_table(res.aov)
{ "language": "en", "url": "https://stackoverflow.com/questions/72628427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why does a dialog seemingly have its own thread? I'm new to advanced programming - but from what I've read, all my Android programs are on one thread. This means only one line of code or method/function can be executed at a time before moving on to the next line (that's what I thought anyway). However, I'm using a custom dialog to develop this application and yet, the program continues even after the dialog has run. I'd like my program to wait for the dialog to close so that I can receive the input and manipulate it. This seems fairly straightforward when programming in Java (e.g. the Scanner tool waits for the user input before proceeding, as opposed to running the code following it while it waits for user input). How can I do this? A: Everything does happen on one thread unless you explicitly tell it not to. However, showing a Dialog happens asynchronously. Basically, when you ask the Dialog to show, it adds that information to a list of UI events that are waiting to happen, and it will happen at a later time. This has the effect that the code after asking the Dialog to show will execute right away. To have something execute after a Dialog choice is made, add an OnDismissListener to the dialog and do whatever it is you want to do in onDismiss.
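The "list of UI events that are waiting to happen" described in the answer is just a message queue. Here is a toy model in Python (not Android code, purely an illustration of why the line after showing a dialog runs before the dismiss callback on a single thread):

```python
# Toy model of a single-threaded UI loop: "showing" a dialog only queues work,
# so the statement after show_dialog() runs first and the dismiss callback later.
event_queue = []
order = []

def show_dialog(on_dismiss):
    # Queue the dialog work instead of running it now (like Android's UI event queue).
    event_queue.append(on_dismiss)

show_dialog(lambda: order.append("onDismiss"))
order.append("code after show()")  # executes immediately, before the dialog "happens"

for callback in event_queue:  # the single UI thread drains its queue later
    callback()
```

The same single thread runs everything; the ordering, not an extra thread, is what makes the dialog look asynchronous.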
{ "language": "en", "url": "https://stackoverflow.com/questions/5227111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to dynamically attach attribute to mock-wrapped instance I need to be able to attach an attribute to a mock-spy during a test. That is, I want to be able to do this: [ins] In [1]: class Foo: ...: def __init__(self): pass [ins] In [2]: foo = Foo() [ins] In [3]: from unittest import mock [nav] In [4]: mock_foo = mock.Mock(wraps=foo) [ins] In [5]: mock_foo.a = 1 [ins] In [6]: foo.a # not assigned to original instance --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-6-b58746ef2d07> in <module> ----> 1 foo.a AttributeError: 'Foo' object has no attribute 'a' I tried to overload the __setattr__ but wasn't successful. Is this possible? If not, why? I'm trying to change the type of an instance >>> bar = Bar() >>> foo = Foo() >>> mock_foo = mock.Mock(wraps=foo, spec=Bar) >>> >>> if isinstance(mock_foo, Bar): >>> mock_foo.fooled_you = 'hah' >>> >>> foo.fooled_you hah A: Overloading Mock's __setattr__ actually does work okay: [ins] In [1]: class MyMock(mock.Mock): ...: def __init__(self, *args, **kwargs): ...: super().__init__(*args, **kwargs) ...: ...: def __setattr__(self, attr_name, attr_value): ...: setattr(self._mock_wraps, attr_name, attr_value) [nav] In [2]: foo = Foo() [nav] In [3]: mock_foo = MyMock(wraps=foo) [ins] In [4]: mock_foo.y = 4 [ins] In [5]: foo.y 4
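If relying on Mock internals like `_mock_wraps` feels fragile, a dependency-light alternative is a small hand-rolled proxy. This is a sketch (the class and attribute names are made up, not part of unittest.mock): it records method calls on an inner Mock but passes attribute reads and writes straight through to the wrapped instance.

```python
from unittest import mock

class SpyProxy:
    """Records method calls like a Mock, but reads/writes attributes
    straight through to the wrapped instance."""
    def __init__(self, wrapped):
        # bypass our own __setattr__ for internal state
        object.__setattr__(self, "_wrapped", wrapped)
        object.__setattr__(self, "_mock", mock.Mock(wraps=wrapped))

    def __getattr__(self, name):  # only called when normal lookup fails
        attr = getattr(object.__getattribute__(self, "_wrapped"), name)
        if callable(attr):
            # route calls through the mock so they are recorded
            return getattr(object.__getattribute__(self, "_mock"), name)
        return attr

    def __setattr__(self, name, value):
        # forward every assignment to the real object
        setattr(object.__getattribute__(self, "_wrapped"), name, value)
```

Because assignments land on the wrapped object itself, `proxy.a = 1` makes `wrapped.a` visible both through the proxy and directly on the original instance, which is the behavior the question asks for.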
{ "language": "en", "url": "https://stackoverflow.com/questions/72326859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C# - TableLayoutPanel Cutting Off Label String I'm creating a table layout panel to display the values from a dictionary, but the table layout panel keeps cutting the Label controls I put into the cells off at 14 characters. I've tried to fiddle with the ColumnStyles of my table layout panel but none of the options will make the Label control actually 'fit' into the cell. I've tried all the available column style SizeTypes: Auto-Size (labels with text values are cropped at 14 characters ("1234567890ABCD") every time, though columns with no controls present (spacers) are shrunk to nothing) Percentage (no effect whatsoever - no columns got wider, even when I weighted the column types (value, key, space) to be different sizes). Absolute (makes the columns x pixels wide, but the labels are STILL cut off at 14 characters - even if the cell is 1,000 pixels wide) I've also tried messing with the Size property of the label, but I can't edit that because I "Cannot modify the return value of 'System.Windows.Forms.Control.Size' because it is not a variable" (whatever that means). So, having exhausted all those options, how do I make the full label appear in the table cell without being cut off at 14 characters? Here's the code that's generating the table layout panel. It's using a custom class I built (GridDisplay) that keeps a list of objects (GridDisplayCell) that contain a Control, a row number, a column number, and a few other fields. The class lets me add/remove/move/insert controls to the list and then build the table layout all in one go with the Generate() function (rather than determine it's size in advance or re-size it as I add items). 
private void FillInCustomerData() { GridDisplay grid = new GridDisplay(tl_TopLeft); int rowMax = 8; int columnLabelIndex = 0; int curRow = 0; int curCol = 0; foreach (var item in DD.AllCustomerData["BasicInfo"]) //Dictionary<string, object> { if (curRow == rowMax) { curRow = 0; curCol = columnLabelIndex + 2; //1 for key column, 1 for value column } var keyLabel = new Label(); keyLabel.Text = item.Key; var valueLabel = new Label(); valueLabel.Text = (item.Value == null || item.Value.ToString() == "") ? "NA" : "1234567890ABDCDEF"; //item.Value.ToString() var key = grid.AddItem(new GridDisplayCell(item.Key, keyLabel), item.Key, curRow, curCol); // Function Definition: GridDisplay.AddItem(GridDisplayCell(string cellName, Control control), string cellName, int rowNumber, int colNumber) var value = grid.AddItem(new GridDisplayCell(item.Key + "Value", valueLabel), item.Key + "Value", curRow, curCol+1); curRow++; } grid.WrapMode = false; grid.AutoSize = true; grid.Generate(); //experimenting with column sizes. NOT WORKING foreach (ColumnStyle cs in grid.Table.ColumnStyles) { cs.SizeType = SizeType.AutoSize; } } And here's the chunk of my generate function which actually adds the controls to the TableLayoutPanel: (_cells is the list of GridDisplayCells, and AutoSize is a property of GridDisplay in this case (not the TableLayoutPanel's AutoSize property)) foreach (var cellItem in _cells) { if (AutoSize == false && ValidateSize(cellItem.Value.Column, cellItem.Value.Row, false) == false) { continue; //the cell was outside the range of the control, so we don't add it. } _table.Controls.Add(cellItem.Value.CellControl, cellItem.Value.Column, cellItem.Value.Row); } Any help is appreciated. A: Fixed the problem. I needed to set the Label's AutoSize property to true.
{ "language": "en", "url": "https://stackoverflow.com/questions/12611468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Error while installing UML Lab in Eclipse I was trying to install UML Lab in Eclipse Juno, but no matter how much I try I always get this error message: "An error occurred while collecting items to be installed". UML Lab is just not getting installed. Please help me out; I need to complete my work. A: What error do you get? You can find this information in the details or in the Error Log view. Just a guess: is the update site of Eclipse Juno activated? Have a look at "Window -> Preferences -> Install/Update -> Available Software Sites". There should be an active entry pointing to "http://download.eclipse.org/releases/juno". Alternatively you could install the UML Lab Standalone RCP (http://www.uml-lab.com/download/). Best regards Manuel
{ "language": "en", "url": "https://stackoverflow.com/questions/14610385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does putExtra() work? How do its parameters work? And what do the two parameters mean? I have been trying my hand at Android development and was following along with the guide given at http://developer.android.com/. I came across putExtra() and was wondering if someone could explain what this function does and how the following code works. For sending: Intent intent = new Intent(this, DisplayMessageActivity.class); EditText editText = (EditText) findViewById(R.id.edit_message); String message = editText.getText().toString(); intent.putExtra(EXTRA_MESSAGE, message); For receiving: Intent intent = getIntent(); String message = intent.getStringExtra(MainActivity.EXTRA_MESSAGE); How are the two functions, getStringExtra() and putExtra(), working in this code? A: Intent intent = new Intent(this, DisplayMessageActivity.class); // sets target activity EditText editText = (EditText) findViewById(R.id.edit_message); // finds edit text String message = editText.getText().toString(); // pull edit text content intent.putExtra(EXTRA_MESSAGE, message); // shoves it in intent object Let's say this = Activity A, and your DisplayMessageActivity = Activity B In this scenario, you're getting the edit text content from Activity A and you're communicating its content over to Activity B using the Intent object. Activity B, being interested in the value, must pull it out of the intent object, so it would do the following, usually in its onCreate(): Intent intent = getIntent(); String message = intent.getStringExtra(MainActivity.EXTRA_MESSAGE); This is generally the happy (and common) case. However, if Activity B had existed in the backstack, and Activity A wanted to clear the backstack to reach Activity B again, Activity B would have a new intent delivered in its onNewIntent(Intent theNewIntent) method, which you would have to override in Activity B to see this new intent. Or else you would be stuck dealing with the original intent Activity B had first received.
UPDATED: Sounds like you're interested in the internals of intents as well as how you get the "EXTRA_MESSAGE" part of the intent. Intents store key-value pairs, so if you want to get the key part, something like the following would work: for (String key : bundle.keySet()) { Object value = bundle.get(key); Log.d(TAG, String.format("%s %s (%s)", key, value.toString(), value.getClass().getName())); } A quick overview of the internals is that Intents use Android's IPC (inter-process communication). Essentially, the only data types that are OS-friendly are primitive types (int, long, float, boolean, etc.), which is why putExtra() allows you to store primitives only. However, putExtra() also tolerates Parcelables, and any object defining itself as a Parcelable basically defines how the Java object trickles down to its primitives, allowing the intent to deal with those friendly data types once more, so no magic there. This matters because Intents act as wrappers for the Binder layer. The Binder layer is the underlying structure of an Intent object, and its implementation lives in the native layer of Android (the C/C++ parts). Effectively, the native layer handles the marshalling/unmarshalling back up to the Java layer, where your Activity B gets the data. I realize this simplification might be skipping too many details, so reference this pdf for better understanding. A: If I'm not mistaken, it's just a key-value pair (https://searchenterprisedesktop.techtarget.com/definition/key-value-pair). It just indicates that this id (key) is 2 (value). From the other activity you can get the value by looking it up with the key (id), i.e. in Activity B: Intent intent = getIntent(); String id = intent.getStringExtra("id"); REFER TO How do I get extra data from intent on Android?
{ "language": "en", "url": "https://stackoverflow.com/questions/24233194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Error when logging in on deezer developer portal When I try to get access to the deezer developer portal, I'm redirected to a blank page with this text: You have to login to accept the terms and conditions of the simple API. And a button to log in, which when pressed comes back to this page. Any suggestion? Thanks.
{ "language": "en", "url": "https://stackoverflow.com/questions/51912868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's wrong with my insertion sort? I'm trying to sort a large datafile using an Insertion-based sorting algorithm, the code runs fine but the output is incorrect. I've studied it over and over to absolutely no avail, can anyone see where I've gone wrong? public void sort(Comparable[] items) { for (int i = 1; i < items.length; i++) { Comparable temp = items[i]; int j = i - 1; while (j >= 0 && items[j].compareTo(items[j]) > 0) { items[j + 1] = items[j]; j = j - 1; } items[j] = temp; } } An example datafile I have produced is... 2 1 3 5 9 6 7 4 8 And obviously the output should be 1,2,3,4... - but instead I get 1 3 5 9 6 7 4 8 8 A: items[j].compareTo(items[j]) should be items[j].compareTo(temp), otherwise you're just comparing the item against itself - you need to be comparing it against the object you want to insert. Then items[j] = temp; will also cause an ArrayIndexOutOfBoundsException because, at the end of the loop, items[j] is smaller than temp, or j == -1, so we need to insert at the position after that - the simplest fix is just changing that to items[j+1] = temp;. A: Algorithm: for i ← 1 to length(A) j ← i while j > 0 and A[j-1] > A[j] swap A[j] and A[j-1] j ← j - 1 Translated to Java: import java.util.*; class InsertionSortTest { public static int[] insertionSort(int[] A) { for (int i = 1; i < A.length; i++) { int j = i; while (j > 0 && A[j-1] > A[j]) { int t = A[j]; A[j] = A[j-1]; A[j-1] = t; j--; } } return A; } public static void main (String[] args) { int[] arr = { 5, 3, 0, 2, 1, 4 }; System.out.println(Arrays.toString(insertionSort(arr))); } }
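Putting the accepted answer's two fixes together (compare against the value being inserted, and write it back at `j + 1`), the corrected algorithm can be sketched for illustration in Python:

```python
def insertion_sort(items):
    """In-place insertion sort with both fixes applied:
    compare shifted elements against temp (not against themselves),
    and insert temp at j + 1 rather than j."""
    for i in range(1, len(items)):
        temp = items[i]
        j = i - 1
        # Shift larger elements one slot to the right.
        while j >= 0 and items[j] > temp:
            items[j + 1] = items[j]
            j -= 1
        # j now points at the first element <= temp (or is -1),
        # so temp belongs at j + 1.
        items[j + 1] = temp
    return items

print(insertion_sort([2, 1, 3, 5, 9, 6, 7, 4, 8]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```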
{ "language": "en", "url": "https://stackoverflow.com/questions/22222388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why is Next.js adding a ?ts query parameter to static JS files, preventing loading them from cache? On every page load, Next.js requests the same JS files with a query parameter such as ?ts=1234. E.g: /_next/static/chunks/pages/_app.js?ts=1671033077175 /_next/static/chunks/main.js?ts=1671033077175 This is obviously done to prevent reusing these files from the browser cache. But I want to reuse them since they are the same until a rebuild is deployed. Why is this parameter added? If it is to prevent using outdated code after a rebuild, then why not have a build version as the value instead of a timestamp? Are there any available configuration options for caching static JS files? A: The timestamp query parameter is added only in development mode, i.e. if you start the next.js app with: npx next dev To start a production build, run: npx next build npx next start This will allow caching of all JS files, as well as make them much leaner and faster by removing all of the development and debug tools.
{ "language": "en", "url": "https://stackoverflow.com/questions/74801302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Sort linq query result, when it has "First() and into Group" var qryLatestInterview = from rows in dt.AsEnumerable() group rows by new { PositionID = rows["msbf_acc_cd"], CandidateID = rows["msbf_fac_tp"] } into grp select grp.First(); I want to sort the above results using msbf_fac_dt, which is a DateTime column, so I made the following changes var qryLatestInterview = from rows in dt.AsEnumerable() orderby rows["msbf_fac_dt"] ascending group rows by new { PositionID = rows["msbf_acc_cd"], CandidateID = rows["msbf_fac_tp"], FacilityDate = rows["msbf_fac_dt"] } into grp select grp.First(); But this does not sort by the msbf_fac_dt column; what can I do here? A: You can order the group before selecting the first record: var qryLatestInterview = from rows in dt.AsEnumerable() group rows by new { PositionID = rows["msbf_acc_cd"], CandidateID = rows["msbf_fac_tp"], } into grp select grp.OrderBy(x=> x["msbf_fac_dt"]).First(); A: If I understand correctly, you want to get the most recent facilitated record from this list; you could do the ordering after grouping and take the first record. dt.AsEnumerable() .GroupBy(row => new { PositionID = row["msbf_acc_cd"], CandidateID = row["msbf_fac_tp"] }) .Select(x => x.OrderByDescending(o => o["msbf_fac_dt"]).FirstOrDefault()) .ToList();
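The "order inside each group, then take the first element" idea is not LINQ-specific; as a rough illustration, here is the same shape in Python (the sample rows and column names just mirror the question):

```python
from itertools import groupby

rows = [
    {"msbf_acc_cd": "P1", "msbf_fac_tp": "C1", "msbf_fac_dt": "2016-05-02"},
    {"msbf_acc_cd": "P1", "msbf_fac_tp": "C1", "msbf_fac_dt": "2016-05-01"},
    {"msbf_acc_cd": "P2", "msbf_fac_tp": "C2", "msbf_fac_dt": "2016-04-30"},
]

key = lambda r: (r["msbf_acc_cd"], r["msbf_fac_tp"])
# groupby needs its input pre-sorted by the grouping key.
# min-by-date per group == "OrderBy ascending then First" in the LINQ answer.
first_per_group = [
    min(group, key=lambda r: r["msbf_fac_dt"])
    for _, group in groupby(sorted(rows, key=key), key=key)
]
print(first_per_group)
```

Swapping `min` for `max` gives the `OrderByDescending(...).FirstOrDefault()` variant from the second answer.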
{ "language": "en", "url": "https://stackoverflow.com/questions/37428694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: HTML won't link up to CSS when I open it again My problem is that I linked the CSS correctly when I created my HTML file, but when I tried to edit the CSS the other day, it just didn't update my HTML file, and I did save it of course. One way I can get around this is by deleting the CSS in my htdocs and creating a new stylesheet with the exact same contents, then linking the new one to my HTML. Can you guys tell me why this is happening and how I can prevent it from happening again? A: I think the problem is the cache. You can try running the site in an incognito/private window of your browser. You can also inspect the page and see if the new styles are loaded. You can also try the Empty Cache and Hard Reload option when you right-click the reload button in the Chrome browser while inspecting. A: One way you can prevent it is by putting all your CSS in the actual HTML file, like so: <head><style></style></head>
{ "language": "en", "url": "https://stackoverflow.com/questions/41255496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I format "message.content" on a discord bot? I want to format what my discord bot is saying, but with "message.content". For example, if you were typing normally in discord and you wanted to make it a code chunk, you'd go "insert message here", but when I try to do it with "message.content", it doesn't work, since it turns it into text. message.channel.send(message.content) if(client.on) { client.once('message', function (message) { if (message.content.startsWith("||tb ")) { message.channel.send(message.content) } }); } The "``"s in this case are meant to make it a code chunk. Is there any way around this? A: If you want to get the user's message and then repost it as a code block, you can use Formatters to do that. All you have to do is get the user's message and then repost it by doing this: const { Formatters } = require('discord.js') // ... client.on('message', function (message) { if (message.content.startsWith("||tb ")) { message.channel.send(Formatters.codeBlock(message.content)) } }); // ... You can also additionally specify to the Formatter which language the code is in. If you want to learn more about .codeBlock(), you can go here => codeBlock | Formatters Easier Solution An easier solution to format the message content would be to use triple backticks like this ```. Then, additionally, if you want to specify the programming language as well, you can just add it after the triple backticks. An example: client.on('message', function (message) { if (message.content.startsWith("||tb ")) { message.channel.send('```' + message.content + '```') } });
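The "easier solution" is just string concatenation around the content. As a language-neutral sketch of that wrapping (shown here in Python rather than JavaScript, purely for illustration):

```python
def code_block(content, language=""):
    """Wrap text in Discord-style triple-backtick fences.
    An optional language tag after the opening fence enables
    syntax highlighting, e.g. code_block(src, "js")."""
    return f"```{language}\n{content}\n```"

print(code_block("print('hi')", "py"))
```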
{ "language": "en", "url": "https://stackoverflow.com/questions/72156925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Get Time for page to load in jQuery? Get Time for page to load in jQuery? A: What is the end of page load? window.onload? var start = new Date(); $(window).load(function() { $('body').html(new Date() - start); }); jsFiddle. If you're supporting newer browsers, you can swap the new Date() with Date.now(). A: With large pages or pages containing inline JavaScript, it is a good idea to monitor how long pages take to load. The code below is different from waiting until onload is fired and only measures page load time. <html> <head> <script type="text/javascript"> var t = new Date(); </script> … (your page) … <script type="text/javascript"> new Image().src = '/load.gif?' + (new Date().getTime() - t.getTime()); </script> </body> </html> load.gif can be a generic 1x1 pixel GIF. You can extract the data from your log files using grep and sed. Also check: Page load time with jQuery; Javascript: time until page load. One good diagnostic tool to help measure page load time is jQTester. It is a plugin that has you place small amounts of code at the top and bottom of your page. When the page finishes loading, you get a notification saying how long it took the page to load.
{ "language": "en", "url": "https://stackoverflow.com/questions/5188648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Add buttons Back, Forward, Play / Pause and a form for setInterval() I'm trying to make a simple slideshow with jQuery, but I need to add some buttons for going back to the previous slide, forward to the next, playing / pausing the slideshow, and an input form to change the interval. Here's my code for the slideshow: var tempo = 3000; var temporizzatore = function() { var elemento_visibile = $("ul#slider li").not(".hidden"); elemento_visibile.addClass("hidden"); if(elemento_visibile.is(':last-child')) { $("ul#slider li:first-child").removeClass("hidden"); } else { elemento_visibile.next().removeClass("hidden"); } } window.setInterval(temporizzatore, tempo);
{ "language": "en", "url": "https://stackoverflow.com/questions/31632533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How to find the most recent partition in HIVE table I have a partitioned table - with 201 partitions. I need to find the latest partition in this table and use it to post-process my data. The query to list all partitions is: use db; show partitions table_name; I need a query to find the latest of these partitions. The partitions are in the format ingest_date=2016-03-09 I tried using max() which gave me a wrong result. I do not want to traverse through the entire table by doing select max(ingest_date) from db.table_name; This would give me the expected output, but kill the whole point of having partitions in the first place. Is there a more efficient query to get the latest partition for a Hive table? A: If you know your table location in HDFS, this is the quickest way, without even opening the Hive shell. You can check your table location in HDFS using the command: show create table <table_name> then hdfs dfs -ls <table_path> | sort -k6,7 | tail -1 It will show the latest partition location in HDFS. A: You can use "show partitions": hive -e "set hive.cli.print.header=false;show partitions table_name;" | tail -1 | cut -d'=' -f2 This will give you "2016-03-09" as output. A: If you want to avoid running "show partitions" in the Hive shell as suggested above, you can apply a filter to your max() query. That will avoid doing a full table scan, and results should be fairly quick! select max(ingest_date) from db.table_name where ingest_date>date_add(current_date,-3) will only scan 2-3 partitions. A: It looks like there is no way to query for the last partition via the Hive (or beeline) CLI that checks only metadata (as one should expect). For the sake of completeness, the alternative I would propose to the bash parsing answer is the one directly querying the metastore, which can be easily extended to more complex functions of the ingest_date rather than just taking the max. 
For instance, for a MySQL metastore I've used: SELECT MAX(PARTITIONS.PART_NAME) FROM DBS INNER JOIN TBLS ON DBS.DB_ID = TBLS.DB_ID INNER JOIN PARTITIONS ON TBLS.TBL_ID = PARTITIONS.TBL_ID WHERE DBS.NAME = 'db' AND TBLS.TBL_NAME = 'my_table' Then the output will be in the format partition_name=partition_value.
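If all you have is the textual output of `show partitions`, picking the latest partition is just a string maximum over the partition values; here is an illustrative parser assuming the `ingest_date=YYYY-MM-DD` naming from the question (ISO dates compare correctly as plain strings):

```python
def latest_partition(partition_lines, key="ingest_date"):
    """Return the max partition value from `show partitions` output,
    or None if no matching partitions are found."""
    values = [
        line.split("=", 1)[1].strip()
        for line in partition_lines
        if line.startswith(key + "=")
    ]
    return max(values) if values else None

lines = ["ingest_date=2016-03-07", "ingest_date=2016-03-09", "ingest_date=2016-03-08"]
print(latest_partition(lines))  # 2016-03-09
```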
{ "language": "en", "url": "https://stackoverflow.com/questions/36095790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Using max and other SQL aggregate functions Whenever I use SQL aggregate functions I find their utility hampered by the need for the Group By clause. I always end up having to use a bunch of nested selects to get what I want. I'm wondering if I'm just not using these functions correctly. For instance. If I have the following data: ID Fruit Color CreatedDate -- ----- ----- ----------- 1 Apple Red 2014-07-25 12:41:44.000 2 Apple Green 2014-07-31 10:01:01.000 3 Apple Blue 2014-07-10 07:05:51.317 4 Orange Orange 2014-06-26 13:42:35.360 I want to get the most recently created apple record. If I use this: SELECT [ID] ,[Fruit] ,[Color] ,max([CreatedDate]) FROM [CCM].[dbo].[tblFruit] WHERE Fruit = 'Apple' GROUP BY ID, Fruit, Color It gives me all three Apple entries, not just the latest one because I'm forced to include all the other columns in the group by clause. Really I just want it to group by fruit and give me the latest record (the whole record, not just a subset of the columns). To get what I want I have to use this: SELECT [ID] ,[Fruit] ,[Color] ,[CreatedDate] FROM [CCM].[dbo].[tblFruit] WHERE Fruit = 'Apple' AND CreatedDate IN (SELECT max([CreatedDate]) as [CreatedDate] FROM [CCM].[dbo].[tblFruit] WHERE Fruit = 'Apple') This is ugly to me and it would be easier to just forget about aggregates in SQL and do any min, max, count, etc in .NET. Is this the correct way to use aggregates (with nested selects) or am I doing it wrong? A: For this situation you may be better off using a windowing function like row_number() select id, fruit, color, createddate from ( select id, fruit, color, createddate, row_number() over(partition by fruit order by createddate desc) seq from tblFruit ) d where seq = 1; See Demo Using this allows you to partition the data by the fruit and order the rows within each fruit by the createddate. By placing your row_number() inside of a subquery, you will return the first row of each fruit - these are the items with a seq=1. 
If you are looking for items that are only Apple, then you can easily add a WHERE clause. You could also get the result by using a subquery to select the max(createddate) for each fruit: select f.id, f.fruit, f.color, f.createddate from tblFruit f inner join ( select fruit, max(createddate) CreatedDate from tblfruit group by fruit ) d on f.fruit = d.fruit and f.createddate = d.createddate; See Demo. You get the same result and you could still apply a WHERE filter to this. A: Based on your comment, you can use a CTE to build a list of max date for each fruit. Then you can join that back to your original table to get the full row that matches that max date. SQL Fiddle with MaxDates as (select fruit, max(createddate) as maxdate from table1 group by fruit) select t1.* from table1 t1 inner join maxdates md on t1.fruit = md.fruit and t1.createddate = md.maxdate BTW, you really don't want to try and push this kind of functionality to your application. Doing this kind of stuff is infinitely better in SQL. If nothing else, think about if you have millions of rows in your table. You certainly don't want to push those millions of rows from your db to your application to sum it up to a single row, etc. A: How about using TOP with an ORDER BY SELECT TOP(1) * FROM [CCM].[dbo].[tblFruit] WHERE Fruit = 'Apple' ORDER BY [CreatedDate] DESC
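The join-to-the-max pattern from the second answer is easy to verify on the question's toy data; here is an illustrative run against an in-memory SQLite database (SQLite rather than SQL Server, so `TOP(1)` would become `LIMIT 1`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblFruit (ID INTEGER, Fruit TEXT, Color TEXT, CreatedDate TEXT)")
con.executemany(
    "INSERT INTO tblFruit VALUES (?, ?, ?, ?)",
    [
        (1, "Apple", "Red", "2014-07-25 12:41:44"),
        (2, "Apple", "Green", "2014-07-31 10:01:01"),
        (3, "Apple", "Blue", "2014-07-10 07:05:51"),
        (4, "Orange", "Orange", "2014-06-26 13:42:35"),
    ],
)

# Join each row to its fruit's max CreatedDate; only the latest row
# per fruit survives the join, and the WHERE keeps just the Apple one.
row = con.execute("""
    SELECT f.ID, f.Fruit, f.Color, f.CreatedDate
    FROM tblFruit f
    JOIN (SELECT Fruit, MAX(CreatedDate) AS md
          FROM tblFruit GROUP BY Fruit) d
      ON f.Fruit = d.Fruit AND f.CreatedDate = d.md
    WHERE f.Fruit = 'Apple'
""").fetchone()
print(row)  # (2, 'Apple', 'Green', '2014-07-31 10:01:01')
```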
{ "language": "en", "url": "https://stackoverflow.com/questions/25062205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: unsupported operand type(s) for +: 'QuerySet' and 'QuerySet' I have a model that has two ManyToManyFields. How can I sum or combine these two fields? There is a number for each field, and I want the sum of the two fields. My model class Video(models.Model): viewers = models.ManyToManyField(Account, related_name='video_views') viewers_by_ip = models.ManyToManyField(UsersByIP, default='192.168.0.1', blank=True) My view video_viewers_ip = video.viewers_by_ip.all() video_viewers = video.viewers.all() video_views = video_viewers_ip + video_viewers Or how to get the resulting number in a new field num_viewers = models.IntegerField('viewers_by_ip' + 'viewers') A: In views.py, use the annotate function to generate the result (Count is imported from django.db.models): from django.db.models import Count video_views=Video.objects.all().annotate(vote_count=Count('viewers', distinct=True)) .annotate(likes_count=Count('viewers_by_ip', distinct=True)) context={'video_views':video_views} return render(request, template_to_display.html, context) A: I just converted this video_views = video_viewers_ip + video_viewers to video_views = len(list(chain(video_viewers_ip, video_viewers))) (with from itertools import chain), and it works fine for me.
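Outside of Django, the accepted `chain` workaround is plain itertools. Note that it counts every entry, so if the same viewer could appear in both lists you may want a set for a distinct count. A minimal illustration with made-up sample values:

```python
from itertools import chain

viewers = ["alice", "bob"]
viewers_by_ip = ["192.168.0.5", "10.0.0.9", "192.168.0.5"]

total_views = len(list(chain(viewers, viewers_by_ip)))    # every entry counted
distinct_views = len(set(chain(viewers, viewers_by_ip)))  # duplicates collapsed
print(total_views, distinct_views)  # 5 4
```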
{ "language": "en", "url": "https://stackoverflow.com/questions/67120899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: OpenLayers - Trying to load layers using a JSON file I will have a bunch of layers I need to load into my map, so instead of loading each one individually I was trying to load using a json file and Openlayers.Request.GET but do not know how to complete to code. json file: { "layers": [ { "title":"Client Manholes" , "url":"./mh_file.geojson" , "style":"mh_style"}, { "title":"Client Pipe" , "url":"./pipe_file.geojson" , "style":"pipe_style"}, { "title":"Client Parcels" , "url":"./parcel_file.geojson" , "style":"parcel_style"} ] } javascript: var request = OpenLayers.Request.GET({ url: "http://domain.com/layers.json", callback: handler }); function handler(request) { //alert (request); var response = json.read (request.responseText); //loop thru each layer for each (var layer in request) { //load layer layer = new OpenLayers.Layer.Vector(layer_title, { strategies: [new OpenLayers.Strategy.Fixed()], protocol: new OpenLayers.Protocol.HTTP({ url: layer_url, format: new OpenLayers.Format.GeoJSON({ }) }), //...load stylemap }); //turn layer off layer.setVisibility(false); } }; A: Fixed it! function handler (request) { var json = new OpenLayers.Format.JSON (); var response = json.read (request.responseText); //console.log(response); var layer_data = response.layers for (var i in layer_data) { var layer_name = layer_data[i].title; //alert(layer_name); var layer_url = layer_data[i].url; //alert(layer_url); var layer_style = layer_data[i].style; //alert (layer_style); layer = new OpenLayers.Layer.Vector(layer_name, { strategies: [new OpenLayers.Strategy.Fixed()], protocol: new OpenLayers.Protocol.HTTP({ url: layer_url, format: new OpenLayers.Format.GeoJSON({ }) }), //...load stylemap }); //turn layer off layer.styleMap = mh_style_map; layer.setVisibility(false); map.addLayer(layer); } //alert (response); };
{ "language": "en", "url": "https://stackoverflow.com/questions/18600427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I fix Webpack related error: Cannot assign to read only property 'exports' of object '#<Object>'? I'm using webpack, which, if I understand correctly, doesn't support native webworker syntax, so I'm trying to use the worker-loader npm module, but I'm getting a weird error. The module can be found here. My webpack config: module.exports = { entry: "./app", output: { path: __dirname + "/build", filename: "bundle.js" }, watch: true, module: { rules: [{ test: /\.worker\.js$/, use: { loader: 'worker-loader' } }] } } My code triggering the error: import Worker from '../workers/sim.js'; class Synapse { // ... } module.exports = Synapse; Error: Cannot assign to read only property 'exports' of object '#<Object>', which points to module.exports = Synapse A: You'll get this error when mixing the CommonJS module.exports with ES Modules. You'll have to change module.exports to its ES Module counterpart export default: import Worker from '../workers/sim.js'; class Synapse { // ... } export default Synapse; // ES Module syntax A: You are mixing an ES Module import and a CommonJS export in the same file, which webpack does not allow; it is better to do it like this: const {Worker} = require('../workers/sim.js'); class Synapse { // ... } module.exports = {Synapse};
{ "language": "en", "url": "https://stackoverflow.com/questions/47720354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: PHP code not updating the database I'm new to programming and I can't understand why this code is not working. <?php $host="localhost"; $username="ryan"; $password="s@ch!911"; $db_name="webservice"; $con=mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db("$db_name")or die("cannot select DB"); $ngno = '112'; $myArray = array("date"=> "Mon Apr 11 00:00:00 GMT+05:30 2016", "Thu Mar 31 00:00:00 GMT+05:30 2016"); foreach($myArray as $dateSelected => $dateValue){ $sql = "INSERT INTO datepicker(ngno, date) VALUES($ngno, $dateValue)"; $result = mysql_query($sql); } ?> The datepicker table has 3 columns: entry_id, ngno, date. entry_id gets auto-incremented. I have tried removing the entry_id column as well, but no luck. I have other PHP files using the same database and they are all working fine. Inserting, selecting, etc. work fine. But when I run this PHP, nothing happens. What am I doing wrong here? A: Try changing your INSERT INTO to this: $sql = "INSERT INTO datepicker(ngno, date) VALUES('$ngno', '$dateValue')"; Let me know if it works. A: You can use this. I hope it will work fine for you. <?php $host="localhost"; $username="root"; $password=""; $db_name="test"; $con=mysql_connect("$host", "$username", "$password")or die("cannot connect"); mysql_select_db($db_name, $con)or die("cannot select DB"); $ngno = '112'; $myArray = array("date"=> "Mon Apr 11 00:00:00 GMT+05:30 2016", "Thu Mar 31 00:00:00 GMT+05:30 2016"); foreach($myArray as $dateSelected => $dateValue){ $sql = "INSERT INTO datepicker (`ngno`, `date`) VALUES('$ngno', '$dateValue')"; $result = mysql_query($sql); } ?>
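Both answers fix the problem by quoting the string values inside the SQL; an even safer pattern is to let the driver do the quoting with placeholders, which also prevents SQL injection. As an illustration of the same insert done with placeholders, here is an equivalent sketch using Python's sqlite3 (table and column names mirror the question; this is not the PHP mysql_* API):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE datepicker (entry_id INTEGER PRIMARY KEY, ngno TEXT, date TEXT)"
)

ngno = "112"
dates = ["Mon Apr 11 00:00:00 GMT+05:30 2016", "Thu Mar 31 00:00:00 GMT+05:30 2016"]

# Placeholders handle the quoting, so values containing quotes or '+'
# cannot break the statement the way raw string concatenation can.
con.executemany(
    "INSERT INTO datepicker (ngno, date) VALUES (?, ?)",
    [(ngno, d) for d in dates],
)

rows = con.execute("SELECT ngno, date FROM datepicker ORDER BY entry_id").fetchall()
print(rows)
```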
{ "language": "en", "url": "https://stackoverflow.com/questions/36424956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to pass data using submit() in javascript? I want to post data from one page to another using the javascript post method. Below is the javascript I am using. In test1.asp page <script type="text/javascript"> function Service_Add(Data_ID,Data_Type) { var Data_ID=Data_ID; var Data_Type=Data_Type; document.miformulario.submit();// Here I want to pass data like "Submit(Data_ID,Data_Type)" } </script> I want to post "Data_ID" and "Data_Type" to test2.asp page A: To pass data when you submit a form, you have to include that data in a form input field (hidden, visible, doesn't matter). A: You can add hidden fields in your HTML form like this <input type="hidden" id="Data_ID"> <input type="hidden" id="Data_Type"> and then set the values in your javascript function before submitting: <script type="text/javascript"> function Service_Add(Data_ID,Data_Type) { document.getElementById("Data_ID").value=Data_ID; document.getElementById("Data_Type").value=Data_Type; document.miformulario.submit(); } </script>
{ "language": "en", "url": "https://stackoverflow.com/questions/1995368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Realtime Recording angular I use recordRTC to record videos and it's working fine. Since we are building a robot, the user interacts with the application via a webcam and a microphone; for example, when he says 'i want to buy a ticket' we analyse his voice in our back-end application (Java), then we redirect him to the 'page of tickets purchase'. Here is my code: mediaRecorder.onstop = (ev)=>{ let blob = new Blob(chunks, { 'type' : 'video/mp4;' }); chunks = []; let videoURL = window.URL.createObjectURL(blob); vidSave.src = videoURL; var file = new File([blob], 'video.mp4', { type: 'video/mp4' }); //send it to back-end via post // let req = new XMLHttpRequest(); let formData = new FormData(); formData.append("file", blob); req.open("POST", 'http://localhost:8081/avi/upload-file'); req.send(formData); // } Every second I send the video so that I can analyse it and interact with him while he speaks. So how can I frame the video (it's like streaming)? I tried to add this: const duration = 2000; setInterval(() => { mediaRecorder.requestData() }, duration); but it is sending empty videos to my Java application. (Sorry for my bad explanation.)
{ "language": "en", "url": "https://stackoverflow.com/questions/60486743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: type inference for std::initializer_list If I write this std::vector<std::string> v{"one","two","three"}; What is the type inferred for the associated std::initializer_list template? In other words, when are the char * string literals converted to std::string? Is it a better idea to declare it as std::vector<std::string> v{std::string("one"), std::string("two"), std::string("three")}; to avoid issues connected to the type-deduction mechanism of the templates involved? Will I keep the same optimizations with this? A: Update: To answer your question about type inference: The initializer list constructor of vector<string> takes an initializer_list<string>. It is not templated, so nothing happens in terms of type inference. Still, the type conversion and overload resolution rules applied here are of some interest, so I'll let my initial answer stand, since you have accepted it already: Original answer: At first, the compiler only sees the initializer list {"one","two","three"}, which is only a list of initializers, not yet an object of the type std::initializer_list. Then it tries to find an appropriate constructor of vector<string> to match that list. How it does that is a somewhat complicated process you would do best to look up in the standard itself if you are interested in the exact process. Therefore, the compiler decides to create an actual object of std::initializer_list<string> from the initializer list, since the implicit conversion from the char*'s to std::strings makes that possible. Another, maybe more interesting example: std::vector<long> vl1{3}; std::vector<string> vs1{3}; std::vector<string> vs2{0}; What do these do? * The first line is relatively easy. The initializer list {3} can be converted into a std::initializer_list<long> analogous to the {"one", "two", "three"} example above, so you get a vector with a single element, which has value 3. * The second line is different. It constructs a vector of 3 empty strings. Why? 
Because an initializer list {3} can by no means be converted into a std::initializer_list<string>, so the "normal" constructor std::vector<T>::vector(size_t, T = T()) kicks in and gives three default-constructed strings. * Well, this one should be roughly the same as the second, right? It should give an empty vector, in other words, with zero default-constructed strings. WRONG! The 0 can be treated as a null pointer constant, which makes the std::initializer_list<string> constructor viable. Only this time the single string in that list gets constructed from a null pointer, which is not allowed, so you get an exception. A: There is no type inference because vector provides only a fully specialized constructor with the initializer list. We could add a template indirection to play with type deduction. The example below shows that a std::initializer_list<const char*> is an invalid argument to the vector constructor. #include <string> #include <vector> std::string operator"" _s( const char* s, size_t sz ) { return {s, s+sz}; } template<typename T> std::vector<std::string> make_vector( std::initializer_list<T> il ) { return {il}; } int main() { auto compile = make_vector<std::string>( { "uie","uieui","ueueuieuie" } ); auto compile_too = make_vector<std::string>( { "uie"_s, "uieui", "ueueuieuie" } ); //auto do_not_compile = make_vector( { "uie","uieui","ueueuieuie" } ); } Live demo A: From http://en.cppreference.com/w/cpp/language/string_literal: The type of an unprefixed string literal is const char[] Thus things go this way: #include <iostream> #include <initializer_list> #include <vector> #include <typeinfo> #include <type_traits> using namespace std; int main() { std::cout << std::boolalpha; std::initializer_list<char*> v = {"one","two","three"}; // Takes string literal pointers (char*) auto var = v.begin(); char *myvar; cout << (typeid(decltype(*var)) == typeid(decltype(myvar))); // true std::string ea = "hello"; std::initializer_list<std::string> v2 = {"one","two","three"}; // Constructs 
3 std::string objects auto var2 = v2.begin(); cout << (typeid(decltype(*var2)) == typeid(decltype(ea))); // true std::vector<std::string> vec(v2); return 0; } http://ideone.com/UJ4a0i
{ "language": "en", "url": "https://stackoverflow.com/questions/22714859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: __puppeteer_evaluation_script__ empty in Chrome dev tools With Puppeteer version: "9.0.0" When placing a debugger statement into JavaScript code and launching Puppeteer, the source code is empty in Chrome dev tools. Running the script with Node: "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "node startscrape.js" }, startscrape.js ( async () => { const browser = await puppeteer.launch( { headless: false, defaultViewport:null, slowMo: 250, devtools:true, }); const page = await browser.newPage(); await page.goto('https://www.google.com'); await page.type('input', 'Here' ); await page.keyboard.press('Enter'); await page.waitForNavigation(); let xa = await page.evaluate(() => { console.log('Alive'); // Logging works in console but cannot breakpoint let elements = document.getElementsByClassName('someitem'); return elements; }); await page.goto('https://www.google.com'); })(); Result: the breakpoint triggers, however no source code can be seen: This is an issue in Puppeteer versions greater than "3.0.0"; unfortunately, version 3.0 is too old now.
{ "language": "en", "url": "https://stackoverflow.com/questions/67461958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What does the MongoClient version of find() actually return? I am using the native node.js mongodb driver, and just did a basic find() operation on a collection and it returned, well, something I don't understand. I don't expect to know everything about this object, just for starters I'd like to know where I should begin to parse this object for my desired collection. Other interesting facts: I'm using OsX. Thanks for your concern! I'll include this object below... Readable { connection: null, server: null, disconnectHandler: { s: { storedOps: [], storeOptions: [Object], topology: [Object] }, length: [Getter] }, bson: {}, ns: 'epc.customers', cmd: { find: 'epc.customers', limit: 0, skip: 0, query: {}, slaveOk: true, readPreference: { preference: 'primary', tags: undefined, options: undefined } }, options: { skip: 0, limit: 0, raw: undefined, hint: null, timeout: undefined, slaveOk: true, readPreference: { preference: 'primary', tags: undefined, options: undefined }, db: EventEmitter { domain: null, _events: {}, _eventsCount: 0, _maxListeners: undefined, s: [Object], serverConfig: [Getter], bufferMaxEntries: [Getter], databaseName: [Getter], options: [Getter], native_parser: [Getter], slaveOk: [Getter], writeConcern: [Getter] }, promiseLibrary: [Function: Promise], disconnectHandler: { s: [Object], length: [Getter] } }, topology: EventEmitter { domain: null, _events: { reconnect: [Function], timeout: [Object], error: [Object], close: [Function], destroy: [Object] }, _eventsCount: 5, _maxListeners: undefined, s: { options: [Object], callbacks: [Object], logger: [Object], state: 'connected', reconnect: true, reconnectTries: 30, reconnectInterval: 1000, emitError: true, currentReconnectRetry: 30, ismaster: [Object], readPreferenceStrategies: undefined, authProviders: [Object], id: 1, tag: undefined, disconnectHandler: [Object], wireProtocolHandler: {}, Cursor: [Object], bsonInstance: {}, bson: {}, pool: [Object], isMasterLatencyMS: 2, serverDetails: [Object] }, 
name: [Getter], bson: [Getter], wireProtocolHandler: [Getter], id: [Getter] }, cursorState: { cursorId: null, cmd: { find: 'epc.customers', limit: 0, skip: 0, query: {}, slaveOk: true, readPreference: [Object] }, documents: [], cursorIndex: 0, dead: false, killed: false, init: false, notified: false, limit: 0, skip: 0, batchSize: 1000, currentLimit: 0, transforms: undefined }, callbacks: null, logger: { className: 'Cursor' }, _readableState: ReadableState { objectMode: true, highWaterMark: 16, buffer: [], length: 0, pipes: null, pipesCount: 0, flowing: null, ended: false, endEmitted: false, reading: false, sync: true, needReadable: false, emittedReadable: false, readableListening: false, defaultEncoding: 'utf8', ranOut: false, awaitDrain: 0, readingMore: false, decoder: null, encoding: null }, readable: true, domain: null, _events: {}, _eventsCount: 0, _maxListeners: undefined, s: { numberOfRetries: 5, tailableRetryInterval: 500, currentNumberOfRetries: 5, state: 0, streamOptions: {}, bson: {}, ns: 'epc.customers', cmd: { find: 'epc.customers', limit: 0, skip: 0, query: {}, slaveOk: true, readPreference: [Object] }, options: { skip: 0, limit: 0, raw: undefined, hint: null, timeout: undefined, slaveOk: true, readPreference: [Object], db: [Object], promiseLibrary: [Function: Promise], disconnectHandler: [Object] }, topology: EventEmitter { domain: null, _events: [Object], _eventsCount: 5, _maxListeners: undefined, s: [Object], name: [Getter], bson: [Getter], wireProtocolHandler: [Getter], id: [Getter] }, topologyOptions: { socketOptions: {}, auto_reconnect: true, host: 'localhost', port: 27017, cursorFactory: [Object], reconnect: true, emitError: true, size: 5, disconnectHandler: [Object], bson: {}, messageHandler: [Function], wireProtocolHandler: {} }, promiseLibrary: [Function: Promise], currentDoc: null }, Evening var MongoClient = require('mongodb').MongoClient , assert = require('assert') , ObjectId = require('mongodb').ObjectId; // Connection URL var url = 
'mongodb://localhost:27017/eProveCommons'; // Use connect method to connect to the Server var findAll = function(db, callback) { var collection = db.collection('all'); collection.find().toArray(function(err, docs){ assert.equal(err, null); console.log(docs); callback(docs); }) } MongoClient.connect(url, function(err, db) { assert.equal(null, err); console.log('You are connected correctly to the server.'); findAll(db, function(docs){ exports.getAll = function(){ return docs; } db.close(); }); }); A: This is a cursor object. With the cursor, you would do something like var cursor = collection.find({}); cursor.each(...); See this link for more details: https://mongodb.github.io/node-mongodb-native/markdown-docs/queries.html Note: If you know you have a small result set, you can use find({}).toArray() which will return a list of documents.
{ "language": "en", "url": "https://stackoverflow.com/questions/33659784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Writing my own SMTP server I am writing a simple SMTP server and client. The server is in two parts, a receiver-SMTP and a sender-SMTP, which will run as two different daemon services. The two modes this can run in are 'remote' and 'local'. Since I am new to network programming I am having difficulty even getting started. Any help in the form of text, or sample or skeleton code of an SMTP server, would help me a lot in clearing my doubts. A: If your primary motive is to learn network programming and writing daemons, then I would recommend reading Beej's Guide to Network Programming and Advanced Programming in the Unix Environment. These don't provide straight-up SMTP implementations but will give a good foundation to implement any protocol. A: If you're set on writing this in C, start with this guide on network programming and sockets. Writing such a server isn't simple and requires a lot of background knowledge. After you're a bit comfortable with sockets, install WireShark and some open-source SMTP server, and try sending it some of the standard SMTP requests to see how it responds. This type of "exploration" is extremely valuable when implementing protocols. A: The simple answer would be to google for open-source SMTP, try to find an existing project that is in the language you want to implement your own in, or in a language you can read and understand, and then work through the code to gain the understanding you need. Sites like SourceForge, Freshmeat, GitHub and Bitbucket will have projects that range from small to large. You can also try some of the other repositories like PHPClasses, CPAN etc. (again depending on your language of choice). You can also try an open-source code search such as Krugle. Another reference would be the SMTP RFC, RFC 821, which gives you the standard you are writing to regardless of language.
{ "language": "en", "url": "https://stackoverflow.com/questions/1692447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: HTACCESS with PHP value Hey, I need to filter out requests with a certain PHP value in HTACCESS and I can't find how to do that. The problem is there is someone spamming my site with a special PHP value and it keeps my server overloaded. The URL is www.site.com/?q=XXXXX. I need to filter out all requests like this (with ?q=XXXX) and redirect them to the homepage instead. I tried this but it doesn't work properly (there is a loop). RewriteCond %{QUERY_STRING} q=(.*) RewriteRule ^(.*) http://www.site.com Thanks A: Why don't you clean out whatever is being put into the _GET value (using PHP)? At the top of the PHP file put something like: if(isset($_GET['q'])){ header('Location: homepage.php'); exit; } A: If someone is spamming you hard enough to overload your server you should look at blocking their IP address/addresses or something along those lines if possible. Also I would suggest letting those requests die() rather than making them send you another request when they load your homepage. Or maybe keep them busy by redirecting to a domain that doesn't exist or something, but that may or may not have an impact on them. A: Have you thought about counting "X"? If ?q=X == true, continue; otherwise, if q>9 then you know someone is messing with it, and restrict them.
{ "language": "en", "url": "https://stackoverflow.com/questions/5384422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP if specific business day of week and time echo The following code works to echo "Open" or "Closed" if time is between 8:15am and 5:30pm. I am trying to make it day specific. How can I incorporate format character 'D' as example, Mon hours 8:15am - 5:30pm .. echo "Open", Sat hours 8:15am - 1:00pm "Open". I want to be able to control echo of Open/Closed by each day and time. current working code for hours only <?php date_default_timezone_set('America/New_York'); $hour = (int) date('Hi'); $open = "yah hoo, we are open"; $closed = "by golly, im closed"; if ($hour >= 0815 && $hour <=1735) { // between 8:15am and 5:35pm echo "$open"; } else { echo "$closed"; } ?> example of what I am trying to do: $hour = (int) date('D Hi'); if ($hours >= 0815 && $hour <=1735 && $hour === 'Mon') { echo "$open"; } else { echo "$closed"; } if ($hours >= 0815 && $hour <=1300 && $hour === 'Sat') { echo "$open"; } else { echo "$closed"; } another example per The One and Only's answer which looks close to what I am looking for, but this also does not work <?php $openDaysArray = array('Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat','Sun'); $thisDate = date('D Hi'); $explode = explode(" ", $thisDate); $day = $explode[0]; $time = $explode[1]; if (in_array($day, $openDaysArray)) if ($time < 815 || $time > 1730 && $day === 'Mon'); if ($time < 815 || $time > 1730 && $day === 'Tue'); if ($time < 815 || $time > 1730 && $day === 'Wed'); if ($time < 815 || $time > 1730 && $day === 'Thu'); if ($time < 815 || $time > 1730 && $day === 'Fri'); if ($time < 815 || $time > 1730 && $day === 'Sat'); if ($time < 815 || $time > 1730 && $day === 'Sun'); {echo 'Open';} else {echo 'Closed';} ?> A: I'd handle it this way. Set up an array of all your open times. If you know you're closed on Saturday and Sunday, there's really no need to proceed with with checking times at that point, so kill the process there first. 
Then simply find out what day of the week it is, look up the corresponding opening and closing times in your $hours array, create actual DateTime objects to compare (rather than integers). Then just return the appropriate message. function getStatus() { $hours = array( 'Mon' => ['open'=>'08:15', 'close'=>'17:35'], 'Tue' => ['open'=>'08:15', 'close'=>'17:35'], 'Wed' => ['open'=>'08:15', 'close'=>'17:35'], 'Thu' => ['open'=>'08:15', 'close'=>'22:35'], 'Fri' => ['open'=>'08:15', 'close'=>'17:35'] ); $now = new DateTime(); $day = date("D"); if ($day == "Sat" || $day == "Sun") { return "Sorry we're closed on weekends'."; } $openingTime = new DateTime(); $closingTime = new DateTime(); $oArray = explode(":",$hours[$day]['open']); $cArray = explode(":",$hours[$day]['close']); $openingTime->setTime($oArray[0],$oArray[1]); $closingTime->setTime($cArray[0],$cArray[1]); if ($now >= $openingTime && $now < $closingTime) { return "Hey We're Open!"; } return "Sorry folks, park's closed. The moose out front should have told ya."; } echo getStatus(); A: Use a switch statement: $thisDate = date('D Hi'); $hoursOfOpArray = array("Mon_Open" => "815", "Mon_Close" => "1730", "Tue_Open" => "815", "Tue_Close" => "1730"); //repeat for all days too fill this array $explode = explode(" ", $thisDate); $day = $explode[0]; $time = (int)$explode[1]; switch($day) { case "Sun": $status = "Closed"; break; case "Mon": $status = ($time < $hoursOfOpArray[$day . "_Open"] || $time > $hoursOfOpArray[$day . "_Close"]) ? "Closed" : "Open"; break; //same as Monday case for all other days } echo $status; This should also work: echo ($day === 'Sun' || ($time < $hoursOfOpArray[$day . "_Open"]) || ($time > $hoursOfOpArray[$day . "_Close"])) ? "Closed" : "Open"; A: This one works, added remarks to explain as much as possible. 
<?php date_default_timezone_set('America/New_York'); // Runs the function echo time_str(); function time_str() { if(IsHoliday()) { return ClosedHoliday(); } $dow = date('D'); // Your "now" parameter is implied // Time in HHMM $hm = (int)date("Gi"); switch(strtolower($dow)){ case 'mon': //MONDAY adjust hours - can adjust for lunch if needed if ($hm >= 0 && $hm < 830) return Closed(); if ($hm >= 830 && $hm < 1200) return Open(); if ($hm >= 1200 && $hm < 1300) return Lunch(); if ($hm >= 1300 && $hm < 1730) return Open(); if ($hm >= 1730 && $hm < 2359) return Closed(); break; case 'tue': //TUESDAY adjust hours if ($hm >= 830 && $hm < 1730) return Open(); else return Closed(); break; case 'wed': //WEDNESDAY adjust hours if ($hm >= 830 && $hm < 1730) return Open(); else return Closed(); break; case 'thu': //THURSDAY adjust hours if ($hm >= 830 && $hm < 1730) return Open(); else return Closed(); break; case 'fri': //FRIDAY adjust hours if ($hm >= 830 && $hm < 1730) return Open(); else return Closed(); break; case 'sat': //Saturday adjust hours return Closed(); break; case 'sun': //Saturday adjust hours return Closed(); break; } } // List of holidays function HolidayList() { // Format: 2009/05/11 (year/month/day comma seperated for days) return array("2016/11/24","2016/12/25"); } // Function to check if today is a holiday function IsHoliday() { // Retrieves the list of holidays $holidayList = HolidayList(); // Checks if the date is in the holidaylist - remove Y/ if Holidays are same day each year if(in_array(date("Y/m/d"),$holidayList)) { return true; }else { return false; } } // Returns the data when open function Open() { return 'Yes we are Open'; } // Return the data when closed function Closed() { return 'Sorry We are Closed'; } // Returns the data when closed due to holiday function ClosedHoliday() { return 'Closed for the Holiday'; } // Returns if closed for lunch // if not using hours like Monday - remove all this // and make 'mon' case hours look like 'tue' case 
hours function Lunch() { return 'Closed for Lunch'; } ?> A: $o = ['Mon' => [815, 1735], /* and all other days */ 'Sat' => [815, 1300]]; echo (date('Hi')>=$o[date('D')][0] && date('Hi')<=$o[date('D')][1]) ? "open" : "closed"; Done! And don't ask.
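The table-driven idea in the first answer carries over to any language; here is a minimal sketch of it in Python (the day keys and times come from the question, the helper name is ours):

```python
from datetime import datetime, time

# Opening hours per weekday; days missing from the table are closed.
HOURS = {
    "Mon": (time(8, 15), time(17, 35)),
    "Tue": (time(8, 15), time(17, 35)),
    "Wed": (time(8, 15), time(17, 35)),
    "Thu": (time(8, 15), time(17, 35)),
    "Fri": (time(8, 15), time(17, 35)),
    "Sat": (time(8, 15), time(13, 0)),
}

def status(now):
    day = now.strftime("%a")  # 'Mon', 'Tue', ... in the default C locale
    if day not in HOURS:
        return "Closed"
    opens, closes = HOURS[day]
    return "Open" if opens <= now.time() < closes else "Closed"

print(status(datetime(2016, 9, 12, 9, 0)))   # Monday 09:00 -> Open
print(status(datetime(2016, 9, 17, 14, 0)))  # Saturday 14:00 -> Closed
```

Comparing proper time objects rather than integers like 0815 avoids both the octal-literal pitfall and the need for one if per day.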
{ "language": "en", "url": "https://stackoverflow.com/questions/39520107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: File permission issues in android PFA the png; in this, please note the permissions for the images inside the app_themes directory. I am creating and adding files to this directory (app_themes) and during the process I do not specify any permissions at all. Now I can see that for some PNGs the permission is rwxrwxrwx and for some others it is just rw. Though I do not specify any permissions at all, why does this difference in permissions occur? I have no issues while I access a file with rwxrwxrwx from a different application, but I cannot access those files which have only rw permission from a second application. Please let me know how to resolve this by having the same permission - rwxrwxrwx - for all the files. This behaviour of Android seems slightly strange! Any help is much appreciated. In the first place, why does Android set different permissions for different files, though the files are created using the same code?
{ "language": "en", "url": "https://stackoverflow.com/questions/9903231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: what is placemark in flutter geocoding I'm a complete beginner with Flutter Google Maps. I just want to know what a placemark is in Flutter geocoding, and I need to understand the code below. Thank you so much for any help. _getAddress() async { try { List<Placemark> p = await placemarkFromCoordinates( _currentPosition.latitude, _currentPosition.longitude); Placemark place = p[0]; setState(() { _currentAddress = "${place.name}, ${place.locality}, ${place.postalCode}, ${place.country}"; startAddressController.text = _currentAddress; _startAddress = _currentAddress; }); } catch (e) { print(e); } } A: Placemark is a class that contains information like a place's name, locality, postalCode, country and other properties. See Properties in the documentation. placemarkFromCoordinates is a method that returns a list of Placemark instances found for the supplied coordinates. Placemark place = p[0] just gets the first Placemark from the list you got from the placemarkFromCoordinates method. The code inside the setState method just updates _currentAddress to the place info you got from the Placemark place and then passes its value to startAddressController.text and _startAddress. A: The Placemark() class helps you to get certain information like the city name, country name and local code, based on the Google Maps API. Before you use Placemark() in your app, you need to get the decoded string info from the Google Maps geocoding API: https://maps.googleapis.com/maps/api/geocode/json?latlng='.$request->lat.','.$request->lng.'&key='."your api key here" Your server-side code should return a JSON response, and then _placeMark = Placemark(name: _address) Now _placeMark gives you access to the city, country, local code etc. For more, go to https://www.dbestech.com/tutorials/flutter-google-map-geocoding-and-geolocator
{ "language": "en", "url": "https://stackoverflow.com/questions/70186396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: docker build how to run intermediate containers with centos:systemd I am trying to build a docker image that is based on centos:systemd. In my Dockerfile I am running a command that depends on systemd running, this fails with the following error: Failed to get D-Bus connection: Operation not permitted error: %pre(mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64) scriptlet failed, exit status 1 Error in PREIN scriptlet in rpm package mod-php-7.1-apache2-zend-server-7.1.7-16.x86_64 how can I get the intermediate containers to run with --privileged and mapping -v /sys/fs/cgroup:/sys/fs/cgroup:ro ? If I comment out the installer and just run the container and manually execute the installer it works fine. Here is the Dockerfile FROM centos/systemd COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/ RUN /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic A: If your installer needs systemd running, I think you will need to launch a container with the base centos/systemd image, manually run the commands, and then save the result using docker commit. The base image ENTRYPOINT and CMD are not run while child images are getting built, but they do run if you launch a container and make your changes. After manually executing the installer, run docker commit {my_intermediate_container} {my_image}:{my_version}, replacing the bits in curly braces with the container name/hash, your desired image name, and image version. You might also be able to change your Dockerfile to launch init before running your installer. 
I am not sure if that will work here in the context of building an image, but it would look like: FROM centos/systemd COPY ./ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz /opt RUN tar -xvf /opt/ZendServer-9.1.0-RepositoryInstaller-linux.tar.gz -C /opt/ \ && /usr/sbin/init \ && /opt/ZendServer-RepositoryInstaller-linux/install_zs.sh 7.1 java --automatic A: A LAMP stack inside a docker container does not need systemd - I have made it work with the docker-systemctl-replacement script. It is able to start and stop a service according to what's written in the *.service file. You could try it with what the ZendServer is normally doing outside a docker container.
{ "language": "en", "url": "https://stackoverflow.com/questions/46084775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Select Similar Titles From a Table With a Single Query I would like to query a single table and select any entries which have similar titles. A similar title is a title with exactly the same string (case insensitive) but with "(fresh)" appended. The result should return the id and title of any matches. For example if this was my table: ID TITLE 1 Bacon 2 Eggs 3 Eggs (fresh) 4 ketchup 5 Ketchup (Fresh) Then I'd like to extract: array( array( id => 2 title => Eggs ), array( id => 3 title => Eggs (fresh) ) ), array( array( id => 4 title => ketchup ), array( id => 5 title => Ketchup (Fresh) ) ) Currently I'm using two separate queries. First I select all titles and then I do a loop to find which entries have those titles with "(fresh)" appended. I'm sure there must be a more efficient way to extract these matching titles in a single query but I can't put my finger on it. A: Try this: select src.id ID_wo_fresh, tgt.id ID_w_fresh, src.title from tbl src inner join tbl tgt on src.title= replace(replace(tgt.title,' (fresh)',''),' (Fresh)','') and src.id <> tgt.id This will return the ID of product without 'fresh' or 'Fresh' in the name, the ID with 'fresh' or 'Fresh' in the name and the name itself. Note that this will exclude rows which don't find a match i.e. only have the name with or without suffix, but not both. To show for all rows, use left join instead.
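To see the self-join in action, here is a runnable sketch using Python's sqlite3 with the question's sample rows. The join condition uses lower() instead of the answer's two nested replaces, which gives full case-insensitivity rather than only handling '(fresh)' and '(Fresh)':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl (id INTEGER, title TEXT);
    INSERT INTO tbl VALUES
        (1, 'Bacon'), (2, 'Eggs'), (3, 'Eggs (fresh)'),
        (4, 'ketchup'), (5, 'Ketchup (Fresh)');
""")

# Self-join: match each plain title to the same title with ' (fresh)'
# appended, comparing case-insensitively via lower().
rows = conn.execute("""
    SELECT src.id, src.title, tgt.id, tgt.title
    FROM tbl src
    JOIN tbl tgt
      ON lower(src.title) = replace(lower(tgt.title), ' (fresh)', '')
     AND src.id <> tgt.id
""").fetchall()

for row in rows:
    print(row)
```

With the sample data this returns the (Eggs, Eggs (fresh)) and (ketchup, Ketchup (Fresh)) pairs in one query, and 'Bacon' drops out because it has no match.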
{ "language": "en", "url": "https://stackoverflow.com/questions/23073701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Directive C++ Implementation In C, if I use #include "someFile.h", the preprocessor does a textual import, meaning that the contents of someFile.h are "copy and pasted" onto the #include line. In C++, there is the using directive. Does this work in a similar way to the #include, ie: a textual import of the namespace? using namespace std; // are the contents of std physically inserted on this line? If this is not the case, then how is the using directive implemented`. A: The using namespace X will simply tell the compiler "when looking to find a name, look in X as well as the current namespace". It does not "import" anything. There are a lot of different ways you could actually implement this in a compiler, but the effect is "all the symbols in X appear as if they are available in the current namespace". Or put another way, it would appear as if the compiler adds X:: in front of symbols when searching for symbols (as well as searching for the name itself without namespace). [It gets rather complicated, and I generally avoid it, if you have a symbol X::a and local value a, or you use using namespace Y as well, and there is a further symbol Y::a. I'm sure the C++ standard DOES say which is used, but it's VERY easy to confuse yourself and others by using such constructs.] In general, I use explicit namespace qualifiers on "everything", so I rarely use using namespace ... at all in my own code. A: No, it does not. It means that you can, from this line on, use classes and functions from std namespace without std:: prefix. It's not an alternative to #include. Sadly, #include is still here in C++. Example: #include <iostream> int main() { std::cout << "Hello "; // No `std::` would give compile error! using namespace std; cout << "world!\n"; // Now it's okay to use just `cout`. return 0; } A: Nothing is "imported" into the file by a using directive. All it does is to provide shorter ways to write symbols that already exist in a namespace. 
For example, the following will generally not compile when they are the first two lines of a file: #include <string> static const string s("123"); The <string> header defines std::string, but string is not the same thing. You haven't defined string as a type, so this is an error. The next code snippet (at the top of a different file) will compile, because when you write using namespace std, you are telling the compiler that string is an acceptable way to write std::string: #include <string> using namespace std; static const string s("123"); But the following will generally not compile when it appears at the top of a file: using namespace std; static const string s("123"); and neither will this: using namespace std; static const std::string s("123"); That's because using namespace doesn't actually define any new symbols; it requires some other code (such as the code found in the <string> header) to define those symbols. By the way, many people will wisely tell you not to write using namespace std in any code. You can program very well in C++ without ever writing using namespace for any namespace. But that is the topic of another question that is answered at Why is "using namespace std" considered bad practice? A: No, #include still works exactly the same in C++. To understand using, you first need to understand namespaces. These are a way of avoiding the symbol conflicts which happen in large C projects, where it becomes hard to guarantee, for example, that two third-party libraries don't define functions with the same name. In principle everyone can choose a unique prefix, but I've encountered genuine problems with non-static C linker symbols in real projects (I'm looking at you, Oracle). So, namespace allows you to group things, including whole libraries, including the standard library. It both avoids linker conflicts, and avoids ambiguity about which version of a function you're getting.
For example, let's create a geometry library: // geo.hpp struct vector; struct matrix; int transform(matrix const &m, vector &v); // v -> m . v and use some STL headers too: // vector template <typename T, typename Alloc = std::allocator<T>> class vector; // algorithm template <typename Input, typename Output, typename Unary> void transform(Input, Input, Output, Unary); But now, if we use all three headers in the same program, we have two types called vector, two functions called transform (ok, one function and a function template), and it's hard to be sure the compiler gets the right one each time. Further, it's hard to tell the compiler which we want if it can't guess. So, we fix all our headers to put their symbols in namespaces: // geo.hpp namespace geo { struct vector; struct matrix; int transform(matrix const &m, vector &v); // v -> m . v } and use some STL headers too: // vector namespace std { template <typename T, typename Alloc = std::allocator<T>> class vector; } // algorithm namespace std { template <typename Input, typename Output, typename Unary> void transform(Input, Input, Output, Unary); } and our program can distinguish them easily: #include "geo.hpp" #include <algorithm> #include <vector> geo::vector origin = {0,0,0}; typedef std::vector<geo::vector> path; void transform_path(geo::matrix const &m, path &p) { std::transform(p.begin(), p.end(), p.begin(), [&m](geo::vector v) { geo::transform(m, v); return v; } ); } Now that you understand namespaces, you can also see that names can get pretty long. So, to save typing out the fully-qualified name everywhere, the using directive allows you to inject individual names, or a whole namespace, into the current scope.
For example, we could replace the lambda expression in transform_path like so: #include <functional> void transform_path(geo::matrix const &m, path &p) { using std::transform; // one function using namespace std::placeholders; // an entire (nested) namespace transform(p.begin(), p.end(), p.begin(), std::bind(geo::transform, m, _1)); // this ^ came from the // placeholders namespace // ^ note we don't have to qualify std::transform any more } and that only affects those symbols inside the scope of that function. If another function chooses to inject the geo::transform instead, we don't get the conflict back.
{ "language": "en", "url": "https://stackoverflow.com/questions/23585758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What does +Mat mean? Looking at the definition of the Flow class on https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Flow.html, it has the following signature: final class Flow[-In, +Out, +Mat] The question is, why is the third type parameter +Mat? I thought +Mat only makes sense on Sink, because Sink consumes the stream. But even Source has +Mat: final class Source[+Out, +Mat] A: Each stage materializes into some value; this is what gives you the ability to obtain a mechanism to push elements into the stream, for example via a SourceQueueWithComplete when you use a Source.queue. Even a Flow can materialize into some value, but this isn't common; in such cases you'll see that the materialized value is NotUsed.
{ "language": "en", "url": "https://stackoverflow.com/questions/55611836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Docker compose network doesn't bind ip address I'm trying to run a selenium grid from a docker compose file. In this docker compose I set an Ip address, because I want to run test on a jenkins from another docker image. Only when I try to run my tests with the setup ip address it gives following error org.openqa.selenium.remote.UnreachableBrowserException at RemoteWebDriver.java:573 Caused by: java.net.UnknownHostException at InetAddress.java:800 This is my docker compose file version: "3" services: hub: image: selenium/hub networks: testing_net: ipv4_address: 172.28.1.1 ports: - "4444:4444" environment: GRID_MAX_SESSION: 16 GRID_BROWSER_TIMEOUT: 3000 GRID_TIMEOUT: 3000 chrome: image: selenium/node-chrome container_name: web-automation_chrome depends_on: - hub environment: HUB_PORT_4444_TCP_ADDR: hub HUB_PORT_4444_TCP_PORT: 4444 NODE_MAX_SESSION: 4 NODE_MAX_INSTANCES: 4 volumes: - /dev/shm:/dev/shm ports: - "9001:5900" links: - hub networks: testing_net: ipv4_address: 172.28.1.2 firefox: image: selenium/node-firefox container_name: web-automation_firefox depends_on: - hub environment: HUB_PORT_4444_TCP_ADDR: hub HUB_PORT_4444_TCP_PORT: 4444 NODE_MAX_SESSION: 2 NODE_MAX_INSTANCES: 2 volumes: - /dev/shm:/dev/shm ports: - "9002:5900" links: - hub networks: testing_net: ipv4_address: 172.28.1.3 networks: testing_net: ipam: driver: default config: - subnet: 172.28.0.0/16 When I go to 172.28.1.1:4444 I don't reach the selenium hub
{ "language": "en", "url": "https://stackoverflow.com/questions/66226308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Math: Division Problems Generator I am making a program that asks if you want to do Addition, Subtraction, Multiplication, and Division. It then asks how many questions. I have everything working but Division. I want to make sure that there will be no remainder when doing the division problem. I don't know how to make this work. else if (player==4) { System.out.print("How many questions would you like?-->"); player = in.nextInt(); numQuestions= player; do{ //question do { do{ num1 = (int) (Math.random() * 100); num2 = (int) (Math.random() *10); }while (num2 > num1); } while (num1 % num2 == 0); compAnswer = num1 / num2; System.out.println(num1+" / " + num2 + "="); System.out.print("What's your answer? -->"); player = in.nextInt(); if (player == compAnswer) { System.out.println(" That's right, the answer is " + compAnswer); System.out.println(""); score++; } else { System.out.println("That's wrong! The answer was " + compAnswer); System.out.println(""); } //x++; }while( x < numQuestions + 1 ); System.out.println(""); System.out.println("Thanks for playing! Your score was " + score + "."); } A: You are specifically picking numbers where there is a remainder. You can just loop while there is a remainder instead: } while (num1 % num2 != 0); However, you won't need to loop at all. If you pick the second operand and the answer, you can calculate the first operand: num2 = 1 + (int) (Math.random() * 9); // 1-9, avoids a zero divisor compAnswer = (int) (Math.random() * 10); num1 = num2 * compAnswer;
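The answer's second suggestion, picking the divisor and the answer first and then multiplying, guarantees a remainder of zero by construction. A self-contained sketch (the class and method names are ours, the variable names mirror the question's):

```java
import java.util.Random;

public class DivisionProblem {
    // Pick divisor and answer first; derive the dividend by multiplying.
    static int[] next(Random rng) {
        int num2 = 1 + rng.nextInt(9);      // 1..9, never zero
        int compAnswer = rng.nextInt(10);   // 0..9
        int num1 = num2 * compAnswer;       // divides evenly by construction
        return new int[] { num1, num2, compAnswer };
    }

    public static void main(String[] args) {
        Random rng = new Random();
        for (int i = 0; i < 3; i++) {
            int[] p = next(rng);
            System.out.println(p[0] + " / " + p[1] + " = " + p[2]);
            if (p[0] % p[1] != 0) throw new AssertionError("remainder!");
        }
    }
}
```

No retry loop is needed, and keeping the divisor in 1..9 also sidesteps the division-by-zero the original (int) (Math.random() * 10) could produce.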
{ "language": "en", "url": "https://stackoverflow.com/questions/26048672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reporting plugin for Opensource chef Is there any plugin to add the reporting feature to open-source Chef, as there is for Enterprise Chef? I tried installing it using the command below: /opt/chef/embedded/bin/gem install knife-reporting The knife-reporting gem got installed, but no related commands are found when I run knife. A: The knife-reporting plugin is used to analyze the reports sent by clients to the server. Run [knife runs help] to find the new functionality provided by knife-reporting. This question seems to be similar. The Chef docs have the code for handling the report and sending it to the server. It has to be shipped with the chef_handler cookbook and enabled before the Chef run. To get JSON files which contain all the info about a node run as a report: get chef_handler from the community site, and add recipe[chef_handler::json_file] to your run list. This will save the report as a JSON file and store it in /var/chef/reports (can be changed in the chef_handler cookbook) on the client machine.
{ "language": "en", "url": "https://stackoverflow.com/questions/24668666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unable to edit any columns in my C# DataGridView - would anyone know why? I'm trying to enable editing for a number of columns in my DataGridView. Now, before anyone suggests I should read the MSDN article: How to: Specify the Edit Mode for the Windows Forms DataGridView Control, I already have. In summary (and I quote): * *The underlying data source supports editing. *The DataGridView control is enabled. *The EditMode property value is not EditProgrammatically. *The ReadOnly properties of the cell, row, column, and control are all set to false. All of that is simple and common sense. I've confirmed that Enabled = true. I've confirmed that the EditMode is EditOnKeystrokeOrF2. I've confirmed that all the columns (except one) are ReadOnly = false. What I find interesting is the first line: The underlying data source supports editing. Now, what I'm doing is the following, to bind the data to the DGV: // Grab all the Foos. var foos = (from x in MyRepository.Find() select new { x.Foo1, x.Foo2, ... x.FooN }).ToList(); // Now let's bind this result to the GridView. dataGridView2.DataSource = foos; Which I thought was the right way to do things. What I was planning on doing was, when the cell is changed and the user then leaves the cell, that's where I was planning on grabbing the data that was just changed (figuring this out manually) and then manually updating the DB. Is this the right way to do things? A: In this case, the underlying data source does not support editing, since the properties of anonymous types are read only. Per the C# language spec: The members of an anonymous type are a sequence of read-only properties inferred from the anonymous object initializer used to create an instance of the type. Instead, you might want to define a display class with editable properties for the values you want to display and create instances of that class. A: I had this problem with an unbound DGV. I fixed it by setting the ReadOnly property of each ROW.
Both the DGV and the Columns were not ReadOnly, but the Rows were somehow being set R/O perhaps when I set them using a collection. So: foreach (var r in Lst) { ... dgv.Rows[dgv.Rows.Count - 1].ReadOnly = false; }
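The read-only-anonymous-type problem above is not specific to C#. As a hedged, language-agnostic sketch, the same contrast between an immutable record and an explicit editable "display class" can be shown with Python's namedtuple (all names here are illustrative, not part of the original question):

```python
from collections import namedtuple

# Read-only, analogous to a C# anonymous type: field assignment fails.
RowReadOnly = namedtuple("RowReadOnly", ["foo", "bar"])

class RowEditable:
    """An explicit 'display class' with writable fields, analogous to
    the display class the accepted answer recommends."""
    def __init__(self, foo, bar):
        self.foo = foo
        self.bar = bar

frozen = RowReadOnly(foo=1, bar=2)
try:
    frozen.foo = 10              # namedtuples reject attribute assignment
    frozen_is_editable = True
except AttributeError:
    frozen_is_editable = False

row = RowEditable(foo=1, bar=2)
row.foo = 10                     # fine: a grid (or any caller) can write back
```

A data source built from RowEditable-style objects supports write-back; one built from immutable records does not, which is exactly the first bullet in the MSDN checklist.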
{ "language": "en", "url": "https://stackoverflow.com/questions/4004311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to clear the session using a button and javascript Session is client-side stuff, but is it possible to clear it using javascript code? I would like to have the idea like this and, can it be converted to jQuery format? Thank you. js
$(function(){
  $("#closeTab").click(function() {
    window.parent.$('#tt').tabs('close','Create List');
    $.post("clear.php",function(data){
    });
  });
});
php
<? if (isset($_SESSION['lname'])) unset($_SESSION['lname']); if (isset($_POST['creminder'])) unset($_SESSION['creminder']); ?>
Is this one ok? A: make an ajax call to the server and let your server page kill/end the session HTML <a href="#" id="aKill" > Kill Session</a> Script
$(function(){
  $("#aKill").click(function(){
    $.post("serverpage.php",function(data){
      // if you want you can show some message to user here
    });
  });
});
and in your serverpage.php, execute the PHP script to terminate the session. A: Create a separate file to clear the session only. clearsession.php session_start(); session_destroy(); Now, make a simple request
$("#aKill").click(function(){
  $.get("clearsession.php");
});
A: The below covers the basics more or less. The Function:
function destroySession(){
  var theForm = $("#yourForm"); //we don't need any ajax frame.
  theForm.each(function(){ this.reset() });
  $.ajax({
    url: 'destroysession.php',
    type: 'post',
    data: 'sure=1', //send a value to make sure we want to destroy it.
    success: function(data){
      alert(data);
    }
  });
}
The PHP (destroysession.php):
<?php
//whatever logic is necessary
if(!empty($_POST['sure'])){
  $sure = $_POST['sure'];
  if($sure == 1){
    //logic to destroy session
    echo 'Session Destroyed!';
  }else if($sure != 1){
    //logic to perform if we're being injected.
    echo 'That value is incorrect. What are you doing over there?';
  }
}else{
  //logic to perform if we're being injected.
  echo 'That value is incorrect. What are you doing over there?';
}
?>
The HTML: <input type='button' value='reset' onclick='destroySession()'>
{ "language": "en", "url": "https://stackoverflow.com/questions/9780583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Inheriting class definition from parent class I am building Grape Entities inside my Rails models as described here: https://github.com/ruby-grape/grape-entity#entity-organization Currently I am creating default values automatically, based on the column hash of the model itself. So I have a static get_entity method that exposes all the model's columns: class ApplicationRecord < ActiveRecord::Base def self.get_entity(target) self.columns_hash.each do |name, column| target.expose name, documentation: { desc: "Col #{name} of #{self.to_s}" } end end end And then I have here an example Book model using it inside the declared Entity subclass (the comment also shows how I can override the documentation of one of the model's column): class Book < ActiveRecord::Base class Entity < Grape::Entity Book::get_entity(self) # expose :some_column, documentation: {desc: "this is an override"} end end The downside with this approach is that I always need to copy and paste the class Entity declaration in each model I want the Entity for. Can anybody help me out generating the class Entity for all child of ApplicationRecord automagically? Then if I need overrides I will need to have the Entity declaration in the class, otherwise if the default declaration is enough and can leave it as it is. NOTE: I cannot add class Entity definition straight inside ApplicationRecord because, Entity class should call get_entity and get_entity depends on column_hash of Books. 
SOLUTION: ended up doing this thanks to brainbag: def self.inherited(subclass) super # definition of Entity entity = Class.new(Grape::Entity) entity.class_eval do subclass.get_entity(entity) end subclass.const_set "Entity", entity # definition of EntityList entity_list = Class.new(Grape::Entity) entity_list.class_eval do expose :items, with: subclass::Entity expose :meta, with: V1::Entities::Meta end subclass.const_set "EntityList", entity_list end def self.get_entity(entity) model = self model.columns_hash.each do |name, column| entity.expose name, documentation: { type: "#{V1::Base::get_grape_type(column.type)}", desc: "The column #{name} of the #{model.to_s.underscore.humanize.downcase}" } end end Thanks! A: I haven't used Grape so there may be some extra magic here that you need that I don't know about, but this is easy to do in Ruby/Rails. Based on your question "generating the class Entity for all child of ApplicationRecord automagically" you can do this: class ApplicationRecord < ActiveRecord::Base self.abstract_class = true class Entity < Grape::Entity # whatever shared stuff you want end end Book will then have access to the parent Entity: > Book::Entity => ApplicationRecord::Entity If you want to add extra code only to the Book::Entity, you can subclass it in Book, like this: class Book < ApplicationRecord class Entity < Entity # subclasses the parent Entity, don't forget this # whatever Book-specific stuff you want end end Then Book::Entity will be its own class. 
> Book::Entity => Book::Entity To combine this with your need for get_entity to be called on an inherited class, you can use the #inherited method to automatically call get_entity any time ApplicationRecord is subclassed: class ApplicationRecord < ActiveRecord::Base self.abstract_class = true def self.get_entity(target) target.columns_hash.each do |name, column| target.expose name, documentation: { desc: "Col #{name} of #{self.to_s}" } end end def self.inherited(subclass) super get_entity(subclass) end class Entity < Grape::Entity # whatever shared stuff you want end end
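Ruby's self.inherited hook, which the solution above relies on, has a direct counterpart in Python: __init_subclass__ runs once for every subclass, so a nested Entity class can be generated automatically in the same way. A minimal sketch under that analogy (the columns attribute stands in for Rails' columns_hash; all names are illustrative):

```python
class ApplicationRecord:
    # Like Ruby's `def self.inherited(subclass)`: called once per subclass,
    # after the subclass body has been evaluated.
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Generate a nested Entity class exposing the subclass's columns,
        # mirroring how get_entity walks columns_hash in the Rails version.
        exposed = list(getattr(cls, "columns", []))
        cls.Entity = type("Entity", (), {"model": cls, "exposed": exposed})

class Book(ApplicationRecord):
    columns = ["id", "title", "author"]
```

Each subclass then carries its own Entity (Book.Entity here), while the base class stays clean, which matches the Grape setup the asker wanted.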
{ "language": "en", "url": "https://stackoverflow.com/questions/47371243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Nested resource parameters with Rails and Backbone.js I've got this issue where I am seeing the incoming parameters nested incorrectly. Here's some of the setup: In routes.config I've defined: resources :events do resources :guests end As a result I end up with the right controller/action paths. The ones I'll call out here are specified as such: POST /events/:event_id/guests(.:format) guests#create no surprises here. Now in the Guest model I have an association with a Contact model such that A Guest belongs to a Contact and a Contact has many Guests. This works correctly. Nothing tricky here either. The backbone.js model for this is defined as such: define('GuestModel', [ 'underscore', 'backbone' ], function(_, Backbone) { var Guest = Backbone.Model.extend({ idAttribute: "id", url: function() { return '/events/' + this.get('event_id') + '/guests'; } }); return Guest; }); A quick caveat to this: I am providing the backend API support for the frontend and the guy who is implementing the front end (as my immediate client) doesn't use the rails asset pipeline at all. I have no control over this and as a result this is the bit of code I can't really affect w/o first convincing him it's the way to go. The problem is that in the GuestsController#create method I'm expecting to see params that look like this: { :event_id => 1, :guest => { :contact => { ...all the contact parameters... }, ...other guest params... }, ...other railsy parameters... } Instead I'm getting: { :event_id => 1, :contact => { ...all the contact parameters... }, ...other guest params..., ...other railsy parameters... } Because there's no :guest object in the params at the top level, none of this is picked up by the standard Rails create behavior and I find myself having to basically pull out the contact and other guest params and put them manually into a hash object. Or I can pull out the event id and other Railsy params and use the params as a whole. 
But I can't believe this hasn't ever been seen or addressed by someone before me so I'm hoping someone can either tell me what the standard approach to this is, or what I'm doing wrong at a more fundamental level. Thanks in advance, and let me know if I've left out some information or code you need to see... jd
{ "language": "en", "url": "https://stackoverflow.com/questions/16265402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to pass variable parameter into XPath expression? I want to pass a parameter into an XPath expression. (//a/b/c[x=?],myParamForXAttribute) Can I do this with XPath 1.0 ? (I tried string-join but it is not there in XPath 1.0) Then how can I do this ? My XML looks like <a> <b> <c> <x>val1</x> <y>abc</y> </c> <c> <x>val2</x> <y>abcd</y> </c> </b> </a> I want to get <y> element value where x element value is val1 I tried //a/b/c[x='val1']/y but it did not work. A: Given that you're using the Axiom XPath library, which in turn uses Jaxen, you'll need to follow the following three steps to do this in a thoroughly robust manner: * *Create a SimpleVariableContext, and call context.setVariableValue("val", "value1") to assign a value to that variable. *On your BaseXPath object, call .setVariableContext() to pass in the context you assigned. *Inside your expression, use /a/b/c[x=$val]/y to refer to that value. Consider the following: package com.example; import org.apache.axiom.om.OMElement; import org.apache.axiom.om.impl.common.AxiomText; import org.apache.axiom.om.util.AXIOMUtil; import org.apache.axiom.om.xpath.DocumentNavigator; import org.jaxen.*; import javax.xml.stream.XMLStreamException; public class Main { public static void main(String[] args) throws XMLStreamException, JaxenException { String xmlPayload="<parent><a><b><c><x>val1</x><y>abc</y></c>" + "<c><x>val2</x><y>abcd</y></c>" + "</b></a></parent>"; OMElement xmlOMOBject = AXIOMUtil.stringToOM(xmlPayload); SimpleVariableContext svc = new SimpleVariableContext(); svc.setVariableValue("val", "val2"); String xpartString = "//c[x=$val]/y/text()"; BaseXPath contextpath = new BaseXPath(xpartString, new DocumentNavigator()); contextpath.setVariableContext(svc); AxiomText selectedNode = (AxiomText) contextpath.selectSingleNode(xmlOMOBject); System.out.println(selectedNode.getText()); } } ...which emits as output: abcd A: It depends on the language in which you're using XPath. 
In XSLT: "//a/b/c[x=$myParamForXAttribute]" Note that, unlike the approach above, the three below are open to XPath injection attacks and should never be used with uncontrolled or untrusted inputs; to avoid this, use a mechanism provided by your language or library to pass in variables out-of-band. [Credit: Charles Duffy] In C#: String.Format("//a/b/c[x={0}]", myParamForXAttribute); In Java: String.format("//a/b/c[x=%s]", myParamForXAttribute); In Python: "//a/b/c[x={}]".format(myParamForXAttribute)
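When a real variable-binding mechanism is unavailable, the value must be rendered as a proper XPath 1.0 string literal before interpolation; since XPath 1.0 has no escape sequence for quotes inside literals, a value containing both quote kinds has to be stitched together with concat(). A hedged sketch using only Python's stdlib ElementTree (which supports the limited predicate form tag[child='text']; the helper name is made up for illustration):

```python
import xml.etree.ElementTree as ET

def xpath_string_literal(value):
    """Render an arbitrary string as a safe XPath 1.0 string literal."""
    if "'" not in value:
        return "'%s'" % value
    if '"' not in value:
        return '"%s"' % value
    # Contains both quote kinds: stitch the pieces together with concat().
    parts = value.split("'")
    quoted = ", \"'\", ".join("'%s'" % p for p in parts)
    return "concat(%s)" % quoted

doc = ET.fromstring(
    "<a><b><c><x>val1</x><y>abc</y></c>"
    "<c><x>val2</x><y>abcd</y></c></b></a>"
)
wanted = "val1"
match = doc.find(".//c[x=%s]/y" % xpath_string_literal(wanted))
```

This keeps a hostile value like a'b" from breaking out of the literal, which is the injection risk the answer warns about.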
{ "language": "en", "url": "https://stackoverflow.com/questions/30352671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: php Error: Resource id #4 for($i = 0;$i<10;$i++) { $query1 ="SELECT `id` FROM radcheck ORDER BY `id` DESC LIMIT 1;"; $lololo= mysql_query($query1) or die(mysql_error()); //echo $lololo; $query = "INSERT INTO radcheck (username,attribute,op,value) VALUES ('teleuser".$lololo."','Cleartext-Password',':=','$arr1[$i]')"; mysql_query($query) or die(mysql_error()); } I am trying to retrieve my latest id value to append to my username, but the value retrieved is always 'Resource id #4'. Is there any way to solve it? Thank you. A: MySQL doesn't support TOP; that is SQL Server syntax. Instead of using TOP you can use MySQL's LIMIT, so your query would be: SELECT `id` FROM radcheck ORDER BY `id` DESC LIMIT 1; A: You do not have to use single quotes around column names. If you need to escape one, use backticks. And TOP is not MySQL syntax; you have to use LIMIT: SELECT `id` FROM radcheck ORDER BY `id` DESC limit 1 Do not use the deprecated mysql_* API any longer. Use mysqli_* or PDO. A: Remove the 's around field names - "SELECT TOP 1 id FROM radcheck ORDER BY id DESC"; But as Daan told, it will not work for MySQL; then ORDER BY & LIMIT will do the trick: "SELECT id FROM radcheck ORDER BY id DESC LIMIT 1"; A: I hope you don't mind trying this pattern: Select id from radcheck WHERE id=1 ORDER BY 'id' DESC ;
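A note the answers above don't spell out: "Resource id #4" is the string form of the result handle that mysql_query() returns, not the queried value itself, so the row still has to be fetched from the handle. A hedged sketch of the same flow using Python's stdlib sqlite3 (table and column names taken from the question; the data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE radcheck (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO radcheck (username) VALUES ('teleuser1')")
conn.execute("INSERT INTO radcheck (username) VALUES ('teleuser2')")

cur = conn.execute("SELECT id FROM radcheck ORDER BY id DESC LIMIT 1")
row = cur.fetchone()      # fetch the row -- the cursor/result handle is not the value
last_id = row[0]
username = "teleuser%d" % last_id
```

In the deprecated mysql_* API the equivalent fetch step would be a mysql_fetch_* call on the result resource before concatenating it into the next query.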
{ "language": "en", "url": "https://stackoverflow.com/questions/29914294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Highcharts center on x and y axis offset First, for reference I'm trying to mimic this chart: Currently my chart looks like this: http://jsfiddle.net/eA4Df/ What's the option to center the (1, 1) coordinates of the chart but still display all of the bubble data? Current highcharts options are: chart: { type: 'bubble', zoomType: 'xy' }, legend: { enabled: false }, credits: { enabled: false }, title: { text: 'Highcharts Bubbles' }, xAxis: { plotLines: [{ color: '#000000', width: 2, value: 1 }] }, yAxis: { plotLines: [{ color: '#000000', width: 2, value: 1 }] } A: You can try to set min and max values for the axis: xAxis: { plotLines: [{ color: '#000000', width: 2, value: 1 }], max: 2, min: 0 }, yAxis: { plotLines: [{ color: '#000000', width: 2, value: 1 }], max: 2, min: 0 }, Example JSFiddle here A: Instead of plotLines, you can move your axis with the offset parameter or use a plugin
{ "language": "en", "url": "https://stackoverflow.com/questions/25082666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CS50 Caesar multiple errors I'm working on Caesar of CS50 and I don't know what's wrong with this code. I keep getting 5 errors and 1 warning (scroll down). I have done my best so far to fix it but I honestly don't know what to do. The program is essentially a Caesar cipher encoder. When I launch the program it should ask me for a key (which is a number) and the message I'm trying to cipher; it should then give me the output, which is the cipher.
#include <stdio.h>
#include <cs50.h>
#include <string.h>
#include <ctype.h>
#include <stdlib.h>
bool check_valid_key(string s);
int main(int argc, string argv[])
{
    if (argc != 2 || !check_valid_key(argv[1]))
    {
        printf("Usage: ./caesar key\n");
        return 1;
    }
    int key = atoi(argv[1]);
    string plaintext = get_string("plaintext: ");
    printf("ciphertext: ");
    for (int i = 0, len = strlen(plaintext); i < len; i++)
    {
        char c = plaintext[i];
        if (isalpha(c))
        {
            char m = 'A';
            if (islower(c))
                m = 'a';
            printf("%c", (c - m + key) % 26 + m);
        }
        else
            printf("%c", c);
    }
    printf("\n");
}
bool check_valid_key(string s);
{
    for (int i = 0, len = strlen(s); i < len; i++)
        if (!isdigit(s[i]))
            return false;
    return true;
}
A: It would be useful to use a better text editor to catch syntax errors before you even compile. Here are the syntax errors your program has. The len should be initialized in the loop header, like this: for (int i = 0, len = strlen(plaintext); i < len; i++) { Alternatively, compute the length once before the loop, which also avoids calling strlen() in the loop condition: int len = strlen(plaintext); // avoid calling strlen() on every iteration for (int i = 0; i < len; i++) { The condition of an if statement must be surrounded by parentheses: if (islower(c)) // if (cond) <-- parentheses are important Your function definition has a stray semicolon, which is also a syntax error: bool check_valid_key(string s) // ; <-- remove this semicolon { }
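The core of the C program above is the wrap-around arithmetic (c - base + key) % 26 + base. A short sketch of that same shifting logic in Python, for cross-checking expected ciphertexts (not part of the CS50 distribution code):

```python
def caesar(plaintext, key):
    """Shift each letter by `key` positions, preserving case,
    leaving non-letters untouched -- same arithmetic as the C answer."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)
```

For example, caesar("Hello, World!", 13) gives the familiar ROT13 result, and a key of 3 wraps "xyz" around to "abc".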
{ "language": "en", "url": "https://stackoverflow.com/questions/65963505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: merging result of one observable to another I have a situation which I haven't been able to solve via RxJava2 operators. I have a list of "Matches" I retrieve from Room. Here is my Match POJO. public class Match { @PrimaryKey private int id; @Ignore Team rivalTeam; } Here is my Dao for this table. @Dao public interface MatchDao { @Query("select * from `match` where homeTeamId= :homeTeamId") Single<List<Match>> getMatchesByTeamId(int homeTeamId); } My target is to create an observable which will pull out these matches and then merge it with an observable which would be returning the rivalTeam. Here is my Team Dao @Dao public interface TeamDao { @Query("select * from teams where id = :teamId") Single<Team> getTeamById(int teamId); } What I have achieved is this. private Single<List<Match>> fetchMatches(Team team) { return matchDao. getMatchesByTeamId(team.getId()).toObservable(). flatMapIterable(matches -> matches). map(match -> teamDao.getTeamById(match.getRivalId()).toObservable() // this doesn't work of course, since map can't return an observable.); } I know I have to perhaps zip both observables, but how would I go about doing that, since the second observable is dependent on the first? If zip worked on a single source, I would have been able to do a match.setRivalTeam(team) in the zipper method. Hints?
{ "language": "en", "url": "https://stackoverflow.com/questions/48953279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I dynamically make a field type number in Angular? I am generating an HTML form from JSON using some custom directives. Part of the JSON includes restrictions on the values of the fields. One of the restrictions the JSON puts on fields is a positive decimal type. The code I use in my link function to accomplish this: //...snip if (<this field should be number>) { elem.attr("type", "number"); } This successfully changes the field type to number. However, there are two problems: * *It does not allow negative numbers *Every time I type a key stroke (even for valid numbers), it throws a number format exception. For example, if I type "123" I will get the following 3 errors: * *Error: [ngModel:numfmt] Expected 1 to be a number *Error: [ngModel:numfmt] Expected 12 to be a number *Error: [ngModel:numfmt] Expected 123 to be a number I know that the model is successfully changed to a number type because when I print it out with a directive it displays without quotes. So my question is: * *How do I allow negative numbers? *How do I stop all of the exceptions from being thrown? A: How to Dynamically Change the Type of an Input Element The type of an input element can be changed dynamically simply with AngularJS interpolation. <input type="{{dynamicType}}" ng-model="inputValue"> Or from a directive angular.module("myApp").directive("typeVar", function() { return { link: linkFn }; function linkFn(scope,elem,attrs) { //elem.attr("type", "number"); attrs.$observe("typeVar", function(value) { elem.attr("type", value); //attrs.$set("type", value); }); } }); HTML <input type-var="{{dynamicType}}" ng-model="inputValue"> The DEMO on JSFiddle The type property is an intrinsic part of the input element. For more information, see MDN HTML Element Reference -- input.
{ "language": "en", "url": "https://stackoverflow.com/questions/35369084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Mercurial convert extension not able to pull from remote Git repository I'm trying to do a 'hg convert', to pull from a Git repo into an Hg one. Now, I don't have Git installed on my Windows 7 machine; is that necessary? I'm using the TortoiseHG commandline, and I have activated the convert extension ('hg help convert' works fine). Here's an example of the command I'm trying to use: hg convert -s git -d hg https://github.com/mysticbob/glm.git gittest That's a public repo on Github, so I should be able to convert from it. And the address is what Github says one should use for getting. What I get is the following message: initializing destination gittest repository https://github.com/mysticbob/glm.git does not look like a Git repository Any ideas? A: If I believe issue 1246, you need to have git installed for the hg convert extension to work. Even with Git installed, you might experience some other issues with the import, in which case you could consider other alternatives such as: * *converting the git repo to a svn one, and then importing that svn repo into a mercurial one *or trying the hg-git mercurial plugin, which specifically mentions: This plugin is implemented entirely in Python - there are no Git binary dependencies, you do not need to have Git installed on your system. (But I don't know if hg-git works with recent 1.7+ Mercurial versions)
{ "language": "en", "url": "https://stackoverflow.com/questions/4917589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Order of operations in for loops in C In the following for loop, how would sum += n-- be evaluated? I'm very, very confused... int sum; for(sum=0;n>0;sum+=n--); A: For sum += n-- the following operations are performed * *add n to sum *decrement n With sum += --n * *n is decremented *the new value of n is added to sum n-- is called postdecrement, and --n is called predecrement A: That has to do with post- and predecrement operations. Predecrement operations first decrease the value and then are used in other operations, while postdecrement ones are first used in operations (addition in this case) and the value is decremented only after this. All in all, the order will be as follows: * *sum is incremented by n *n is decremented A: A very simple example I would like to demonstrate. Let us consider two variables a=1 and b=4. The statement a=b will assign the value of b to a. In the statement a=b++, first the value of b is assigned to a and then the value of b is incremented. If the value of a was 1 and the value of b was 4, then after using a=b++, the value of a will become 4 and the value of b will become 5. The statement a=b++ can be visualised as a=b; then b=b+1; Similarly, in your case, you have sum+=n-- which can be broken down as sum=sum+(n--) or sum=sum+n then n=n-1 Here again, first the value of sum+n will be assigned to sum, then the value of n will be decremented by 1
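Python has no ++ or -- operators, but the C semantics above can be emulated with small helper functions to make the difference between the two loop totals concrete (a sketch; all names are made up for illustration):

```python
def post_decrement(state, key):
    """Emulate C's n--: return the old value, then decrement."""
    old = state[key]
    state[key] -= 1
    return old

def pre_decrement(state, key):
    """Emulate C's --n: decrement first, then return the new value."""
    state[key] -= 1
    return state[key]

def sum_with(decrement, n):
    """Run the question's loop: for(sum=0; n>0; sum+=n--);"""
    state = {"n": n}
    total = 0
    while state["n"] > 0:
        total += decrement(state, "n")   # sum += n-- (or --n)
    return total
```

Starting from n=3, the postdecrement loop adds 3+2+1 = 6, while the predecrement variant adds 2+1+0 = 3, which matches the ordering the answers describe.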
{ "language": "en", "url": "https://stackoverflow.com/questions/34239335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WPF FileDrop Event: just allow a specific file extension I have a WPF Control and I want to drop a specific file from my desktop to this control. This is not a heavy part but I would like to check the file extension to allow or disallow the dropping. What is the best way to solve this problem? A: I think this should work: <Grid> <ListBox AllowDrop="True" DragOver="lbx1_DragOver" Drop="lbx1_Drop"></ListBox> </Grid> Let's assume you want to allow only C# files: private void lbx1_DragOver(object sender, DragEventArgs e) { bool dropEnabled = true; if (e.Data.GetDataPresent(DataFormats.FileDrop, true)) { string[] filenames = e.Data.GetData(DataFormats.FileDrop, true) as string[]; foreach (string filename in filenames) { if(System.IO.Path.GetExtension(filename).ToUpperInvariant() != ".CS") { dropEnabled = false; break; } } } else { dropEnabled = false; } if (!dropEnabled) { e.Effects = DragDropEffects.None; e.Handled = true; } } private void lbx1_Drop(object sender, DragEventArgs e) { string[] droppedFilenames = e.Data.GetData(DataFormats.FileDrop, true) as string[]; }
{ "language": "en", "url": "https://stackoverflow.com/questions/724774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Cant obtain md5 Fingerprint for api key I am trying to develop the map element of my app and I am having trouble obtaining a md5 fingerprint, I am using Ubuntu and I have located the debug.keystore file, but when I enter the commands I get: No command 'store' found, did you mean: Command '0store' from package 'zeroinstall-injector' (universe) Command 'stone' from package 'stone' (universe) store: command not found or; Illegal option: -key Try keytool -help I think the first error maybe incorrect file path but I think the second error is when I got it right. Anyone Help with this Thanks In Advance A: The command is keytool, not store. Per the documentation, the command you need to run is: keytool -list -alias androiddebugkey -keystore <path_to_debug_keystore>.keystore -storepass android -keypass android In your specific case, for Ubuntu, this turns into: keytool -list -alias androiddebugkey -keystore ~/.android/debug.keystore -storepass android -keypass android
{ "language": "en", "url": "https://stackoverflow.com/questions/3804533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Angular 14 (Excel) File Download, Returning 502 (Bad Gateway) I have an app, that uses a standalone service for the back-end (Springboot) and a standalone front end app (Angular 14). I'm making a request from the front end, to the back end. The back-end generates an excel file (XSSFWorkbook) and sends it back to the front using HttpServletResponse. The front-end is expecting a blob, and uses an octet-stream to create a downloadable file. My issue is, this functionality works perfectly on a local machine, but returns a 502 error, when both apps are deployed. Both Apps are deployed to separate cloud environments. No issues with Http requests to other endpoints, except for this file download endpoint. I have looked at the backend app's logs, but it returns no errors & I've considered the idea that they're might be a timeout issue, but the error is returned within 3 seconds and the average file size is 9 bytes. Error shown in chrome developer tools console: main.4854ab3470858733.js:1 ERROR {error: 'Backend returned code 502, body was:[object Blob]', status: 502} error : "Backend returned code 502, body was:[object Blob]" status : 502 [[Prototype]] : Object I need some ideas, on what could be the cause.
{ "language": "en", "url": "https://stackoverflow.com/questions/75612008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Jackson deserialization into field from subnode? I have a class public class Foo { String foo; String bar; } and JSON { "foo": "foo", "bar": { "bar": "bar" } } Can I deserialize it using any json library without writing adapters/serializers to specific class. Maybe some of gson's adapters has needed behavior? But I didn't find adapter to field. I'd like to write annotation like @MyJsonExpandAnnotation("bar.bar") but I can't find adapter with access to Field. A: Basically what you would use without your given annotation example is public class Foo { String foo; Bar bar; } public class Bar{ String bar; } There is an open suggestion for it tho.
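Stepping outside Jackson for a moment, the transformation the asker wants is simply unwrapping a nested node into a flat field. A hedged, language-agnostic sketch of that mapping with Python's stdlib json module (the function name is made up; this is the data reshaping, not the Jackson annotation mechanism):

```python
import json

payload = '{"foo": "foo", "bar": {"bar": "bar"}}'

def load_foo(text):
    """Flatten the nested 'bar' node into a plain field,
    i.e. the effect a hypothetical @MyJsonExpandAnnotation("bar.bar")
    would have."""
    data = json.loads(text)
    return {"foo": data["foo"], "bar": data["bar"]["bar"]}

foo = load_foo(payload)
```

In Jackson terms this is exactly what the intermediate Bar class in the answer achieves declaratively: the nested object is parsed into its own type, and the field of interest is read from it.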
{ "language": "en", "url": "https://stackoverflow.com/questions/35436068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: is it possible to write a directive to match "any other query params"? To ensure people don't append random query parameters (e.g. appending &r=234522.123 or similar) to avoid hitting our cache, I want to have a way to reject any queries that are not handled explicitly. I can of course create one that contains a whitelist, but that would have to be separately maintained and I hate maintaining two things that need to stay in sync. (Though, it would aid in failing faster.) Is this possible with Spray routing? A: I ended up with this: // This contains a white-list of allowed query parameters. This is useful to // ensure people don't try to use &r=234234 to bust your caches. def allowedParameters(params: String*): Directive0 = parameterSeq.flatMap { case xs => val illegal = xs.collect { case (k, _) if !params.contains(k) => k } if (illegal.nonEmpty) reject(ValidationRejection("Illegal query parameters: " + illegal.mkString("", ", ", "\nAllowed ones are: ") + params.mkString(", "))) else pass } For usage, have a look at the unit tests: val allowedRoute = { allowedParameters("foo", "bar") { complete("OK") } } "Allowed Parameter Directive" should "reject parameters not in its whitelist" in { Get("/?foo&bar&quux") ~> allowedRoute ~> check { handled should equal(false) rejection should be(ValidationRejection("Illegal query parameters: quux\nAllowed ones are: foo, bar")) } } it should "allow properly sorted parameters through" in { Get("/?bar&foo") ~> allowedRoute ~> check { handled should equal(true) responseAs[String] should equal("OK") } } A: Actually, you have a good solution; I can only suggest a small refactoring: def only(params: String*): Directive0 = { def check: Map[String, String] => Boolean = _.keySet diff params.toSet isEmpty parameterMap.require(check, rejection) } You can write it as a one-liner, but it would be just longer A: With a route like this: val myRoute = { ... pathPrefix("test") { parameter("good") { good => complete("GOOD") } } ~ ...
} Spray will require first parameter to be good and to have value, i.e. ?good=value. No other parameters that have values are allowed.
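The whitelist check at the heart of the accepted answer is framework-independent: collect the parameter names that are not in the allowed set, and reject if any exist. A minimal Python sketch of the same logic, including the rejection message format from the Scala directive (function names are made up for illustration):

```python
def illegal_query_params(query_params, allowed):
    """Return the query-parameter names that are not in the whitelist,
    mirroring the `illegal` collect in the Spray directive."""
    return [name for name in query_params if name not in allowed]

def check(query_params, allowed=("foo", "bar")):
    """Accept the request only if every parameter is whitelisted."""
    bad = illegal_query_params(query_params, allowed)
    if bad:
        return "Illegal query parameters: %s\nAllowed ones are: %s" % (
            ", ".join(bad), ", ".join(allowed))
    return "OK"
```

A request carrying only foo and bar passes, while a cache-busting &quux=... is rejected with the same message the unit tests above assert on.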
{ "language": "en", "url": "https://stackoverflow.com/questions/22153900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Spawning new robot in running ROS Gazebo simulation The problem statement is in simulating both 'car' and a quadcopter in ROS Gazebo SITL as mentioned in this question. Two possibilities have been considered for the same which is as depicted in the image. (Option 1 uses 6 terminals with independent launch files and MAVproxy initiation terminals) While trying to search for Option 1, the documentation appeared to be sparse (The idea is to launch the simulation with ErleRover and then spawn ErleCopter on-the-go; I haven't found any official documentation mentioning either the possibility or the impossibility of this option). Can somebody be requested to let me know how option 1 can be achieved or why it is impossible by mentioning corresponding official documentation? Regarding option 2, additional options have been explored; The problem is apparently with two aspects: param vs rosparam and tf2 vs tf_prefix. Some of the attempts of simulation of multiple turtlebots have used tf_prefix which is deprecated. But, I have been unable to find any example which uses tf2 while simulating multiple (different) robots. But, tf2 works on ROS Hydro (and thus Indigo). Another possible option is the usage of rosparam instead of param (only). But, documentation on that is sparse regarding the usage of same on multi-robot simulation and I have been able to find only one example (for a single robot Husky). But, one thing is clearer: MAVproxy can support multiple robots through the usage of SYSID and component-ID parameters. (upto 255 robots with 0 being a broadcast ID) Thus, port numbers have to be modified (possibly 14000 and 15000 as each vehicle uses 4 consecutive ports) just like the UCTF simulation. (vehicle_base_port = VEHICLE_BASE_PORT + mav_sys_id*4) To summarise the question, the main concern is to simulate an independent car moving around and an independent quadcopter flying around in the ROS Gazebo SITL (maybe using Python nodes; C++ is fine too). 
Can somebody be requested to let me know the answers to the following sub-questions? * *Is this kind of simulation possible? (Either by the usage of ROS Indigo, Gazebo 7, MAVproxy 1.5.2 on Ubuntu 14.04 or by modifying the UCTF project to spawn a car like ErleRover if there is no other option) (You are kindly requested to let me know the examples if possible and official links if this is impossible) *If on-the-go launch is not possible with two launch files, is it possible to launch two different robots with a single launch file? *This is an optional question: How to modify the listener (subscriber) of the node? (Is it to be done in the Python node?) This simulation is taking a relatively long time, with system software crashing about 3 times (NVIDIA instead of Nouveau, broken packages etc) and any help will be whole-heartedly, gratefully and greatly appreciated. Thanks for your time and consideration. Prasad N R
{ "language": "en", "url": "https://stackoverflow.com/questions/40308891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Reset Bootstrap modal when clicking outside modal window I'm having some problems with resetting my Bootstrap modal whenever the user clicks outside the modal window to dismiss it instead of pressing the dismiss button. My datetimepicker stops working whenever the modal window doesn't reset properly. This is what my modal should do whenever it opens: But this is what happens if I open the window just after dismissing it by hitting the gray area outside the modal window: Whenever I dismiss using the Avbryt (dismiss) button it works just fine. I get no errors, and this is the code I use for my modal:

$scope.show = function() {
    ModalService.showModal({
        templateUrl: 'newProject.html',
        controller: "NewProjectModalController"
    }).then(function(modal) {
        modal.element.modal();
        modal.close.then(function(result) {
            $('#newProjectModal').modal('hide');
            $('body').removeClass('modal-open');
            $('.modal-backdrop').remove();
        });
    });
}

$scope.dismiss = function () {
    close(false, 500);
}

The modal service can be found on JSfiddle. I discovered that it's not really the backdrop that is triggering the dismiss but the edges of the modal window. If I hit the outside of the modal within the small square, it dismisses, but not if I hit the backdrop outside.

A: Try triggering a click event on the Avbryt button when a user clicks on the modal backdrop. Something like this:

$('.modal-backdrop').click(function(){
    $('.avbryt').trigger('click');
});

That should bypass whatever the underlying problem is. For reference: http://api.jquery.com/trigger/

A: According to the image in the question, you are using angular-modal-service along with bootstrap.js in your project.
So I searched its GitHub repo and found this issue: https://github.com/dwmkerr/angular-modal-service/issues/107 According to the comments there, you should change your code as below:

ModalService.showModal({
    templateUrl: 'newProject.html',
    controller: "NewProjectModalController"
}).then(function(modal) {
    modal.element.modal();
    modal.close.then(function(result) {
        $('#newProjectModal').modal('hide');
        $('body').removeClass('modal-open');
        $('.modal-backdrop').remove();
    });
    // added by me
    modal.element.on('hidden.bs.modal', function () { // bootstrap event
        modal.scope.close(false, 500);
    });
});
{ "language": "en", "url": "https://stackoverflow.com/questions/43027677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }