text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Monitoring Basics ¶ This part of the Icinga 2 documentation provides an overview of all the basic monitoring concepts you need to know to run Icinga 2. Keep in mind these examples are made with a Linux server. If you are using Windows, you will need to change the services accordingly. See the ITL reference for further information. Attribute Value Types ¶ The Icinga 2 configuration uses different value types for attributes. It is important to use the correct value type for object attributes as otherwise the configuration validation will fail. Hosts and Services ¶. Host States ¶ Hosts can be in any one of the following states: Service States ¶ Services can be in any one of the following states: Check Result State Mapping ¶ Check plugins return with an exit code which is converted into a state number. Services map the states directly while hosts will treat 0 or 1 as UP for example. Hard and Soft States ¶. Host and Service Checks ¶. Tip hostaliveis the same as pingbut with different default thresholds. Both use the pingCLI command to execute sequential checks. If you need faster ICMP checks, look into the icmp CheckCommand. A number of other built-in check commands are also available. In addition to these commands the next few chapters will explain in detail how to set up your own check commands. Host Check Alternatives ¶ If the host is not reachable with ICMP, HTTP, etc. you can also use the dummy CheckCommand to set a default state. object Host "dummy-host" { check_command = "dummy" vars.dummy_state = 0 //Up vars.dummy_text = "Everything OK." } This method is also used when you send in external check results. A more advanced technique is to calculate an overall state based on all services. This is described here. Templates ¶ Templates and objects share the same namespace, i.e. you can’t define a template that has the same name like an object. Multiple Templates ¶ The following example uses custom variables which are provided in each template. The web-server template is used as the base template for any host providing web services. In addition to that it specifies the custom variable webserver_type, e.g. apache. Since this template is also the base template, we import the generic-host template here. This provides the check_command attribute by default and we don’t need to set it anywhere later on. template Host "web-server" { import "generic-host" vars = { webserver_type = "apache" } } The wp-server host template specifies a Wordpress instance and sets the application_type custom variable. Please note the += operator which adds dictionary items, but does not override any previous vars attribute. template Host "wp-server" { vars += { application_type = "wordpress" } } The final host object imports both templates. The order is important here: First the base template web-server is added to the object, then additional attributes are imported from the wp-server object. object Host "wp.example.com" { import "web-server" import "wp-server" address = "192.168.56.200" } If you want to override specific attributes inherited from templates, you can specify them on the host object. object Host "wp1.example.com" { import "web-server" import "wp-server" vars.webserver_type = "nginx" //overrides attribute from base template address = "192.168.56.201" } Custom Variables ¶ In addition to built-in object attributes you can define your own custom attributes inside the vars attribute. Tip This is called custom variablesthroughout the documentation, backends and web interfaces. Older documentation versions referred to this as custom attribute. The following example specifies the key ssh_port as custom variable and assigns an integer value. object Host "localhost" { check_command = "ssh" vars.ssh_port = 2222 } vars is a dictionary where you can set specific keys to values. The example above uses the shorter indexer syntax. An alternative representation can be written like this: vars = { ssh_port = 2222 } or vars["ssh_port"] = 2222 Custom Variable Values ¶ Valid values for custom variables include: You can also define nested values such as dictionaries in dictionaries. This example defines the custom variable disks as dictionary. The first key is set to disk / is itself set to a dictionary with one key-value pair. vars.disks["disk /"] = { disk_partitions = "/" } This can be written as resolved structure like this: vars = { disks = { "disk /" = { disk_partitions = "/" } } } Keep this in mind when trying to access specific sub-keys in apply rules or functions. Another example which is shown in the example configuration: vars.notification["mail"] = { groups = [ "icingaadmins" ] } This defines the notification custom variable as dictionary with the key groups which itself has an array as value. Note: This array is the exact same as the user_groups attribute for notification apply rules expects. vars.notification = { mail = { groups = [ "icingaadmins" ] } } Functions as Custom Variables ¶ Icinga 2 lets you specify functions for custom variables. The special case here is that whenever Icinga 2 needs the value for such a custom variable function(" ") }} ... } Accessing object attributes at runtime inside these functions is described in the advanced topics chapter. Runtime Macros ¶ Macros can be used to access other objects’ attributes and custom variables at runtime. For example they are used in command definitions to figure out which IP address a check should be run against: object CheckCommand "my-ping" { command = [ PluginDir + "/check_ping" ] arguments = { "-H" = "$ping_address$" "-w" = "$ping_wrta$,$ping_wpl$%" "-c" = "$ping_crta$,$ping_cpl$%" "-p" = "$ping_packets$" } // Resolve from a host attribute, or custom variable. vars.ping_address = "$address$" // Default values variables, e.g. by using $ping_wrta$. Icinga automatically tries to find the closest match for the attribute you specified. The exact rules for this are explained in the next section. Note When using the $sign as single character you must escape it with an additional dollar character ( $$). Evaluation Order ¶ When executing commands Icinga 2 checks the following objects in this order to look up macros and their respective values: - User object (only for notifications) - Service object - Host object - Command object - Global custom variables in the Varsconstant This execution order allows you to define default values for custom variables in your command objects. Here’s how you can override the custom variable ping_packets from the previous example: object Service "ping" { host_name = "localhost" check_command = "my-ping" vars.ping_packets = 10 // Overrides the default value of 5 given in the command } If a custom variable variable for the service. This returns an empty value if the service does not have such a custom variable no matter whether another object such as the host has this attribute. Host Runtime Macros ¶ The following host custom variables are available in all commands that are executed for hosts or services: In addition to these specific runtime macros host object attributes can be accessed too. Service Runtime Macros ¶ The following service macros are available in all commands that are executed for services: In addition to these specific runtime macros service object attributes can be accessed too. Command Runtime Macros ¶ The following custom variables are available in all commands: User Runtime Macros ¶ The following custom variables are available in all commands that are executed for users: In addition to these specific runtime macros user object attributes can be accessed too. Notification Runtime Macros ¶ In addition to these specific runtime macros notification object attributes can be accessed too. Global Runtime Macros ¶ The following macros are available in all executed commands: The following macros provide global statistics: Apply Rules ¶ Several object types require an object relation, e.g. Service, Notification, Dependency, ScheduledDowntime objects. The object relations are documented in the linked chapters. If you for example create a service object you have to specify the host_name attribute and reference an existing host attribute. object Service "ping4" { check_command = "ping4" host_name = "icinga2-agent1.localdomain" } This isn’t comfortable when managing a huge set of configuration objects which could match on a common pattern. Instead you want to use apply rules. If you want basic monitoring for all your hosts, add a ping4 service apply rule for all hosts which have the address attribute specified. Just one rule for 1000 hosts instead of 1000 service objects. Apply rules will automatically generate them for you. apply Service "ping4" { check_command = "ping4" assign where host.address } More explanations on assign where expressions can be found here. Apply Rules: Prerequisites ¶ Before you start with apply rules keep the following in mind: - Define the best match. - A set of unique custom variables for these hosts/services? - Or group memberships, e.g. a host being a member of a hostgroup which should have a service set? - A generic pattern match on the host/service name? - Multiple expressions combined with &&or ||operators - All expressions must return a boolean value (an empty string is equal to falsee.g.) More specific object type requirements are described in these chapters: - Apply services to hosts - Apply notifications to hosts and services - Apply dependencies to hosts and services - Apply scheduled downtimes to hosts and services Apply Rules: Usage Examples ¶ You can set/override object attributes in apply rules using the respectively available objects in that scope (host and/or service objects). vars.application_type = host.vars.application_type Custom variables can also store nested dictionaries and arrays. That way you can use them for not only matching for their existence or values in apply expressions, but also assign (“inherit”) their values into the generated objected from apply rules. Remember the examples shown for custom variable values: vars.notification["mail"] = { groups = [ "icingaadmins" ] } You can do two things here: - Check for the existence of the notificationcustom variable and its nested dictionary key - Assign the value of the groupskey to the user_groupsattribute. apply Notification "mail-icingaadmin" to Host { [...] user_groups = host.vars.notification.mail.groups assign where host.vars.notification.mail } A more advanced example is to use apply rules with for loops on arrays or dictionaries provided by custom atttributes or groups. Remember the examples shown for custom variable values: vars.disks["disk /"] = { disk_partitions = "/" } You can iterate over all dictionary keys defined in disks. You can optionally use the value to specify additional object attributes. apply Service for (disk => config in host.vars.disks) { [...] vars.disk_partitions = config.disk_partitions } Please read the apply for chapter for more specific insights. Tip Building configuration in that dynamic way requires detailed information of the generated objects. Use the object listCLI command after successful configuration validation. Apply Rules Expressions ¶. Apply Rules Expressions Examples ¶ Assign a service to a specific host in a host group array using the in operator: assign where "hostgroup-dev" in host.groups Assign an object when a custom variable is equal to a value: assign where host.vars.application_type == "database" assign where service.vars.sms_notify == true Assign an object if a dictionary contains a given key: assign where host.vars.app_dict.contains("app") Match the host name by either using a case insensitive match: assign where match("webserver*", host.name) Match the host name by using a regular expression. Please note the escaped backslash character: assign where regex("^webserver-[\\d+]", host.name) Match all *mysql* patterns in the host name and ( &&) custom variable prod_mysql_db matches the db-* pattern. All hosts with the custom variable variable is set to customer-xy OR the host custom variable always_notify is set to true. The notification is ignored for services whose host name ends with *internal OR the priority custom variable. Apply Services to Hosts ¶ The sample configuration already includes a detailed example in hosts.conf and services.conf for this use case. The example for ssh applies a service object to all hosts with the address attribute being defined and the custom variable os set to the string Linux in vars. apply Service "ssh" { import "generic-service" check_command = "ssh" assign where host.address && host.vars.os == "Linux" } Other detailed examples are used in their respective chapters, for example apply services with custom command arguments. Apply Notifications to Hosts and Services ¶ variable defined. The notification command is set to mail-service-notification and all members of the user group noc will get notified. It is also possible to generally apply a notification template and dynamically overwrite values from the template by checking for custom variables. This can be achieved by using conditional statements: apply Notification "host-mail-noc" to Host { import "mail-host-notification" // replace interval inherited from `mail-host-notification` template with new notfication interval set by a host custom variable if (host.vars.notification_interval) { interval = host.vars.notification_interval } // same with notification period if (host.vars.notification_period) { period = host.vars.notification_period } // Send SMS instead of email if the host's custom variable `notification_type` is set to `sms` if (host.vars.notification_type == "sms") { command = "sms-host-notification" } else { command = "mail-host-notification" } user_groups = [ "noc" ] assign where host.address } In the example above the notification template mail-host-notification contains all relevant notification settings. The apply rule is applied on all host objects where the host.address is defined. If the host object has a specific custom variable set, its value is inherited into the local notification object scope, e.g. host.vars.notification_interval, host.vars.notification_period and host.vars.notification_type. This overwrites attributes already specified in the imported mail-host-notification template. The corresponding host object could look like this: object Host "host1" { import "host-linux-prod" display_name = "host1" address = "192.168.1.50" vars.notification_interval = 1h vars.notification_period = "24x7" vars.notification_type = "sms" } Apply Dependencies to Hosts and Services ¶ Detailed examples can be found in the dependencies chapter. Apply Recurring Downtimes to Hosts and Services ¶ The sample configuration includes an example in downtimes.conf. Detailed examples can be found in the recurring downtimes chapter. Using Apply For Rules ¶ = "2001:db8:1234::42" vars.oids["if01"] = "1.1.1.1.1" vars.oids["temp"] = "1.1.1.1.2" vars.oids["bgp"] = "1.1.1.1.5" } The idea is to create service objects for if01 and temp but not bgp. The oid value should also be used as service custom variable snmp_oid. This is the command argument required by the snmp check command. The service’s display_name should be set to the identifier inside the dictionary, e.g. if01. variable oids set. It iterates over all dictionary items inside the for loop and evaluates the assign/ignore where expressions. You can access the loop variable in these expressions, e.g. to ignore specific values. In this example the bgp identifier is ignored. This avoids to generate unwanted services. A different approach would be to match the oid value with a regex/wildcard match pattern for example. ignore where regex("^\d.\d.\d.\d.5$", oid) Note You don’t need an assign whereexpression which checks for the existence of the oidscustom variable. This method saves you from creating multiple apply rules. It also moves the attribute specification logic from the service to the host. Apply For and Custom Variable Override ¶ Imagine a different more advanced example: You are monitoring your network device (host) with many interfaces (services). The following requirements/problems apply: - Each interface" Define the interfaces custom variable on the cisco-catalyst-6509-34 host object and add three example interfaces as dictionary keys. Specify additional attributes inside the nested dictionary as learned with custom variable values: variables = "remote" qos = "enabled" } vars.interfaces["MgmtInterface1"] = { iftraffic_community = IftrafficSnmpCommunity vlan = "mgmt" interface_address = "127.99.0.100" #special management ip } } Start with the apply for definition and iterate over host.vars.interfaces. This is a dictionary and should use the variables interface_name as key and interface_config as value for each generated object scope. "if-" specifies the object name prefix for each service which results in if-<interface_name> for each iteration. /* loop over the host.vars.interfaces dictionary * for (key => value in dict) means `interface_name` as key * and `interface_config` as value. Access config attributes * with the indexer (`.`) character. */ apply Service "if-" for (interface_name => interface_config in host.vars.interfaces) { Import the generic-service template, assign the iftraffic check_command. Use the dictionary key interface_name to set a proper display_name string for external interfaces. import "generic-service" check_command = "iftraffic" display_name = "IF-" + interface_name The interface_name key’s value is the same string used as command parameter for iftraffic: /* use the key as command argument (no duplication of values in host.vars.interfaces) */ vars.iftraffic_interface = interface_name Remember that interface_config is a nested dictionary. In the first iteration it looks like this: interface_config = { iftraffic_units = "g" iftraffic_community = IftrafficSnmpCommunity iftraffic_bandwidth = 1 vlan = "internal" qos = "disabled" } Access the dictionary keys with the indexer syntax and assign them to custom variables used as command parameters for the iftraffic check command. /* map the custom variables as command arguments */ vars.iftraffic_units = interface_config.iftraffic_units vars.iftraffic_community = interface_config.iftraffic_community If you just want to inherit all attributes specified inside the interface_config dictionary, add it to the generated service custom variables like this: /* the above can be achieved in a shorter fashion if the names inside host.vars.interfaces * are the _exact_ same as required as command parameter by the check command * definition. */ vars += interface_config If the user did not specify default values for required service custom variables, add them here. This also helps to avoid unwanted configuration validation errors or runtime failures. Please read more about conditional statements here. /*" } If the host object did not specify a custom SNMP community, set a default value specified by the global constant IftrafficSnmpCommunity. /* set the global constant if not explicitely * not provided by the `interfaces` dictionary on the host */ if (len(interface_config.iftraffic_community) == 0 || len(vars.iftraffic_community) == 0) { vars.iftraffic_community = IftrafficSnmpCommunity } Use the provided values to calculate more object attributes which can be e.g. seen in external interfaces. /* } Tip Building configuration in that dynamic way requires detailed information of the generated objects. Use the object listCLI command after successful configuration validation. Verify that the apply-for-rule successfully created the service objects with the inherited custom variables: # = "remote"" Use Object Attributes in Apply Rules ¶ Since apply rules are evaluated after the generic objects, you can reference existing host and/or service object attributes as values for any object attribute specified in that apply rule. object Host "opennebula-host" { import "generic-host" address = "10.1.1.2" vars.hosting["cust1"] = { http_uri = "/shop" customer_name = "Customer 1" customer_id = "7568" support_contract = "gold" } vars.hosting["cust2"] = { http_uri = "/" customer_name = "Customer 2" customer_id = "7569" support_contract = "silver" } } hosting is a custom variable with the Dictionary value type. This is mandatory to iterate with the key => value notation in the below apply for rule. apply Service for (customer => config in host.vars.hosting) { import "generic-service" check_command = "ping4" vars.qos = "disabled" vars += config vars.http_uri = "/" + customer + "/" + config.http_uri display_name = "Shop Check for " + vars.customer_name + "-" + vars.customer_id notes = "Support contract: " + vars.support_contract + " for Customer " + vars.customer_name + " (" + vars.customer_id + ")." notes_url = "" + host.name action_url = "" + host.name + "/" + vars.customer_id } Each loop iteration has different values for customer and config` in the local scope. 1. customer = "cust 1" config = { http_uri = "/shop" customer_name = "Customer 1" customer_id = "7568" support_contract = "gold" } 2. customer = "cust2" config = { http_uri = "/" customer_name = "Customer 2" customer_id = "7569" support_contract = "silver" } You can now add the config dictionary into vars. vars += config Now it looks like the following in the first iteration: customer = "cust 1" vars = { http_uri = "/shop" customer_name = "Customer 1" customer_id = "7568" support_contract = "gold" } Remember, you know this structure already. Custom attributes can also be accessed by using the indexer syntax. vars.http_uri = ... + config.http_uri can also be written as vars += config vars.http_uri = ... + vars.http_uri Groups ¶" } Group Membership Assign ¶. Notifications ¶ problems. In addition to that Recovery notifications are sent (they require the OK state).. You should choose which information you (and your notified users) are interested in case of emergency, and also which information does not provide any value to you and your environment. An example notification command is explained here. You can add all shared attributes to a Notification template which is inherited to the defined notifications. That way you’ll save duplicated attributes in each Notification object. Attributes can be overridden locally. template Notification "generic-notification" { interval = 15m command = "mail-service-notification" states = [ Warning, Critical, Unknown ] types = [ Problem, Acknowledgement, Recovery, Custom, FlappingStart, FlappingEnd, DowntimeStart, DowntimeEnd, DowntimeRemoved ] period = "24x7" } The time period 24x7 is included as example configuration with Icinga 2. Use the apply keyword to create Notification objects for your services: apply Notification "notify-cust-xy-mysql" to Service { import "generic-notification" users = [ "noc-xy", "mgmt-xy" ] assign where match("*has gold support 24x7*", service.notes) && (host.vars.customer == "customer-xy" || host.vars.always_notify == true ignore where match("*internal", host.name) || (service.vars.priority < 2 && host.vars.is_clustered == true) } Instead of assigning users to notifications, you can also add the user_groups attribute with a list of user groups to the Notification object. Icinga 2 will send notifications to all group members. Note Only users who have been notified of a problem before ( Warning, Critical, Unknownstates for services, Downfor hosts) will receive Recoverynotifications. Icinga 2 v2.10 allows you to configure Acknowledgement and/or Recovery without a Problem notification. These notifications will be sent without any problem notifications beforehand, and can be used for e.g. ticket systems. types = [ Acknowledgement, Recovery ] Notifications: Users from Host/Service ¶ A common pattern is to store the users and user groups on the host or service objects instead of the notification object itself. The sample configuration provided in hosts.conf and notifications.conf already provides an example for this question. Tip Please make sure to read the apply and custom variable values chapter to fully understand these examples. Specify the user and groups as nested custom variable on the host object: object Host "icinga2-agent1.localdomain" { [...] vars.notification["mail"] = { groups = [ "icingaadmins" ] users = [ "icingaadmin" ] } vars.notification["sms"] = { users = [ "icingaadmin" ] } } As you can see, there is the option to use two different notification apply rules here: One for sms. This example assigns the users and groups nested keys from the notification custom variable to the actual notification object attributes. Since errors are hard to debug if host objects don’t specify the required configuration attributes, you can add a safety condition which logs which host object is affected. critical/config: Host 'icinga2-client3.localdomain' does not specify required user/user_groups configuration attributes for notification 'mail-icingaadmin'. You can also use the script debugger for more advanced insights. apply Notification "mail-host-notification" to Host { [...] /* Log which host does not specify required user/user_groups attributes. This will fail immediately during config validation and help a lot. */ if (len(host.vars.notification.mail.users) == 0 && len(host.vars.notification.mail.user_groups) == 0) { log(LogCritical, "config", "Host '" + host.name + "' does not specify required user/user_groups configuration attributes for notification '" + name + "'.") } users = host.vars.notification.mail.users user_groups = host.vars.notification.mail.groups assign where host.vars.notification.mail && typeof(host.vars.notification.mail) == Dictionary } apply Notification "sms-host-notification" to Host { [...] /* Log which host does not specify required user/user_groups attributes. This will fail immediately during config validation and help a lot. */ if (len(host.vars.notification.sms.users) == 0 && len(host.vars.notification.sms.user_groups) == 0) { log(LogCritical, "config", "Host '" + host.name + "' does not specify required user/user_groups configuration attributes for notification '" + name + "'.") } users = host.vars.notification.sms.users user_groups = host.vars.notification.sms.groups assign where host.vars.notification.sms && typeof(host.vars.notification.sms) == Dictionary } The example above uses typeof as safety function to ensure that the vars.notification.mail = "yes" You can also do a more fine granular assignment on the service object: apply Service "http" { [...] vars.notification["mail"] = { groups = [ "icingaadmins" ] users = [ "icingaadmin" ] } [...] } This notification apply rule is different to the one above. The service notification users and groups are inherited from the service and if not set, from the host object. A default user is set too. apply Notification "mail-service-notification" to Service { [...] if (service.vars.notification.mail.users) { users = service.vars.notification.mail.users } else if (host.vars.notification.mail.users) { users = host.vars.notification.mail.users } else { /* Default user who receives everything. */ users = [ "icingaadmin" ] } if (service.vars.notification.mail.groups) { user_groups = service.vars.notification.mail.groups } else if (host.vars.notification.mail.groups) { user_groups = host.vars.notification.mail.groups } assign where ( host.vars.notification.mail && typeof(host.vars.notification.mail) == Dictionary ) || ( service.vars.notification.mail && typeof(service.vars.notification.mail) == Dictionary ) } Notification Escalations ¶. object NotificationCommand "sms-notification" { command = [ PluginDir + "/send_sms_notification", "$mobile$", "..." } The two new notification escalations are added onto the local host and its service ping4 using the generic-notification template. The user icinga-oncall-2nd-level will get notified by SMS ( sms-notification command) after 30m until 1h. Note The intervalwas set to 15m in the generic-notificationtemplate example. Lower that value in your escalations by using a secondary template or by overriding the attribute directly in the notificationsarray position for escalation-sms-2nd-level." } Notification Delay ¶" } Disable Re-notifications ¶ If you prefer to be notified only once, you can disable re-notifications by setting the interval attribute to 0. apply Notification "notify-once" to Service { import "generic-notification" command = "mail-notification" users = [ "icingaadmin" ] interval = 0 // disable re-notification assign where service.name == "ping4" } Notification Filters by State and Type ¶ If there are no notification state and type filter attributes defined at the Notification or User object, Icinga 2 assumes that all states and types are being notified. Available state and type filters for notifications are: template Notification "generic-notification" { states = [ OK, Warning, Critical, Unknown ] types = [ Problem, Acknowledgement, Recovery, Custom, FlappingStart, FlappingEnd, DowntimeStart, DowntimeEnd, DowntimeRemoved ] } Commands ¶ Icinga 2 uses three different command object types to specify how checks should be performed, notifications should be sent, and events should be handled. Check Commands ¶ CheckCommand objects define the command line how a check is called. CheckCommand objects are referenced by Host and Service objects using the check_command attribute. Note Make sure that the checker feature is enabled in order to execute checks. Integrate the Plugin with a CheckCommand Definition ¶. Passing Check Command Parameters from Host or Service ¶ Check command parameters are defined as custom variables. Define the default check command custom variables, for example mysql_user and mysql_password (freely definable naming schema) and optional their default threshold values. You can then use these custom variables as runtime macros for command arguments on the command line. Tip Use a common command type as prefix for your command arguments to increase readability. mysql_userhelps understanding the context better than just useras argument. The default custom variables can be overridden by the custom variables defined in the host or service using the check command my-mysql. The custom variables variable "icinga2-agent1.localdomain { ... vars.ssh_port = 2022 } Passing Check Command Parameters Using Apply For ¶ The host localhost with the generated services from the basic-partitions dictionary (see apply for for details) checks a basic set of disk partitions with modified custom variables (warning thresholds at 10%, critical thresholds at 5% free disk space). The custom variable variables can be found in this chapter. Command Arguments ¶ Next to the short command array specified in the command object, it is advised to define plugin/script parameters in the arguments dictionary attribute. The value of the --parameter key itself is a dictionary with additional keys. They allow to create generic command objects and are also for documentation purposes, e.g. with the description field copying the plugin’s help text in there. The Icinga Director uses this field to show the argument’s purpose when selecting it. arguments = { "--parameter" = { description = "..." value = "..." } } Each argument is optional by default and is omitted if the value is not set. Learn more about integrating plugins with CheckCommand objects in this chapter. There are additional possibilities for creating a command only once, with different parameters and arguments, shown below. Command Arguments: Value ¶ In order to find out about the command argument, call the plugin’s help or consult the README. ./check_systemd.py --help ... -u UNIT, --unit UNIT Name of the systemd unit that is beeing tested. Whenever the long parameter name is available, prefer this over the short one. arguments = { "--unit" = { } } Define a unique prefix for the command’s specific arguments. Best practice is to follow this schema: <command name>_<parameter name> Therefore use systemd_ as prefix, and use the long plugin parameter name unit inside the runtime macro syntax. arguments = { "--unit" = { value = "$systemd_unit$" } } In order to specify a default value, specify a custom variable inside the CheckCommand object. vars.systemd_unit = "icinga2" This value can be overridden from the host/service object as command parameters. Command Arguments: Description ¶ Best practice, also inside the ITL, is to always copy the command parameter help output into the description field of your check command. Learn more about integrating plugins with CheckCommand objects in this chapter. With the example above, inspect the parameter’s help text. ./check_systemd.py --help ... -u UNIT, --unit UNIT Name of the systemd unit that is beeing tested. Copy this into the command arguments description entry. arguments = { "--unit" = { value = "$systemd_unit$" description = "Name of the systemd unit that is beeing tested." } } Command Arguments: Required ¶ Specifies whether this command argument is required, or not. By default all arguments are optional. Tip Good plugins provide optional parameters in square brackets, e.g. [-w SECONDS]. The required field can be toggled with a boolean value. arguments = { "--host" = { value = "..." description = "..." required = true } } Whenever the check is executed and the argument is missing, Icinga logs an error. This allows to better debug configuration errors instead of sometimes unreadable plugin errors when parameters are missing. Command Arguments: Skip Key ¶ The arguments attribute requires a key, empty values are not allowed. To overcome this for parameters which don’t need the name in front of the value, use the skip_key boolean toggle. command = [ PrefixDir + "/bin/icingacli", "businessprocess", "process", "check" ] arguments = { "--process" = { value = "$icingacli_businessprocess_process$" description = "Business process to monitor" skip_key = true required = true order = -1 } } The service specifies the custom variable icingacli_businessprocess_process. vars.icingacli_businessprocess_process = "bp-shop-web" This results in this command line without the --process parameter: '/bin/icingacli' 'businessprocess' 'process' 'check' 'bp-shop-web' You can use this method to put everything into the arguments attribute in a defined order and without keys. This avoids entries in the command attributes too. Command Arguments: Set If ¶ This can be used for the following scenarios: Parameters without value, e.g. --sni. command = [ PluginDir + "/check_http"] arguments = { "--sni" = { set_if = "$http_sni$" } } Whenever a host/service object sets the http_sni custom variable to true, the parameter is added to the command line. '/usr/lib64/nagios/plugins/check_http' '--sni' Numeric values are allowed too. Parameters with value, but additionally controlled with an extra custom variable boolean flag. The following example is taken from the postgres CheckCommand. The host parameter should use a value but only whenever the postgres_unixsocket custom variable is set to false. Note: set_if is using a runtime lambda function because the value is evaluated at runtime. This is explained in this chapter. command = [ PluginContribDir + "/check_postgres.pl" ] arguments = { "-H" = { value = "$postgres_host$" set_if = {{ macro("$postgres_unixsocket$") == false }} description = "hostname(s) to connect to; defaults to none (Unix socket)" } An executed check for this host and services … object Host "postgresql-cluster" { // ... vars.postgres_host = "192.168.56.200" vars.postgres_unixsocket = false } … use the following command line: '/usr/lib64/nagios/plugins/check_postgres.pl' '-H' '192.168.56.200' Host/service objects which set postgres_unixsocket to false don’t add the -H parameter and its value to the command line. References: abbreviated lambda syntax, macro. Command Arguments: Order ¶ Plugin may require parameters in a special order. One after the other, or e.g. one parameter always in the first position. arguments = { "--first" = { value = "..." description = "..." order = -5 } "--second" = { value = "..." description = "..." order = -4 } "--last" = { value = "..." description = "..." order = 99 } } Keep in mind that positional arguments need to be tested thoroughly. Command Arguments: Repeat Key ¶ Parameters can use Array as value type. Whenever Icinga encounters an array, it repeats the parameter key and each value element by default. command = [ NscpPath + "\\nscp.exe", "client" ] arguments = { "-a" = { value = "$nscp_arguments$" description = "..." repeat_key = true } } On a host/service object, specify the nscp_arguments custom variable as an array. vars.nscp_arguments = [ "exclude=sppsvc", "exclude=ShellHWDetection" ] This translates into the following command line: nscp.exe 'client' '-a' 'exclude=sppsvc' '-a' 'exclude=ShellHWDetection' If the plugin requires you to pass the list without repeating the key, set repeat_key = false in the argument definition. command = [ NscpPath + "\\nscp.exe", "client" ] arguments = { "-a" = { value = "$nscp_arguments$" description = "..." repeat_key = false } } This translates into the following command line: nscp.exe 'client' '-a' 'exclude=sppsvc' 'exclude=ShellHWDetection' Command Arguments: Key ¶ The arguments attribute requires unique keys. Sometimes, you’ll need to override this in the resulting command line with same key names. Therefore you can specifically override the arguments key. arguments = { "--key1" = { value = "..." key = "-specialkey" } "--key2" = { value = "..." key = "-specialkey" } } This results in the following command line: '-specialkey' '...' '-specialkey' '...' Environment Variables ¶ The env command object attribute specifies a list of environment variables with values calculated from custom variables which should be exported as environment variables prior to executing the command. This is useful for example for hiding sensitive information on the command line output when passing credentials to database checks: object CheckCommand "mysql" { command = [ PluginDir + "/check_mysql" ] arguments = { "-H" = "$mysql_address$" "-d" = "$mysql_database$" } vars.mysql_address = "$address$" vars.mysql_database = "icinga" vars.mysql_user = "icinga_check" vars.mysql_pass = "password" env.MYSQLUSER = "$mysql_user$" env.MYSQLPASS = "$mysql_pass$" } The executed command line visible with ps or top looks like this and hides the database credentials in the user’s environment. /usr/lib/nagios/plugins/check_mysql -H 192.168.56.101 -d icinga Note If the CheckCommand also supports setting the parameter in the command line, ensure to use a different name for the custom variable. Otherwise Icinga 2 adds the command line parameter. If a specific CheckCommand object provided with the Icinga Template Library needs additional environment variables, you can import it into a new custom CheckCommand object and add additional env keys. Example for the mysql_health CheckCommand: object CheckCommand "mysql_health_env" { import "mysql_health" // env.NAGIOS__SERVICEMYSQL_USER = "$mysql_health_env_username$" env.NAGIOS__SERVICEMYSQL_PASS = "$mysql_health_env_password$" } Specify the custom variables mysql_health_env_username and mysql_health_env_password in the service object then. Note Keep in mind that the values are still visible with the debug console and the inspect mode in the Icinga Director. You can also set global environment variables in the application’s sysconfig configuration file, e.g. HOME or specific library paths for Oracle. Beware that these environment variables can be used by any CheckCommand object and executed plugin and can leak sensitive information. Notification Commands ¶ NotificationCommand objects define how notifications are delivered to external interfaces NotificationCommand objects are referenced by Notification objects using the command attribute. Note Make sure that the notification feature is enabled in order to execute notification commands. While it’s possible to specify an entire notification command right in the NotificationCommand object it is generally advisable to create a shell script in the /etc/icinga2/scripts directory and have the NotificationCommand object refer to that. A fresh Icinga 2 install comes with with two example scripts for host and service notifications by email. Based on the Icinga 2 runtime macros (such as $service.output$ for the current check output) it’s possible to send email to the user(s) associated with the notification itself ( $user.email$). Feel free to take these scripts as a starting point for your own individual notification solution - and keep in mind that nearly everything is technically possible. Information needed to generate notifications is passed to the scripts as arguments. The NotificationCommand objects mail-host-notification and mail-service-notification correspond to the shell scripts mail-host-notification.sh and mail-service-notification.sh in /etc/icinga2/scripts and define default values for arguments. These defaults can always be overwritten locally. Note This example requires the Depending on the distribution, you need a local mail transfer agent (MTA) such as Postfix, Exim or Sendmail in order to send emails. These tools virtually provide the mail-host-notification ¶ The mail-host-notification NotificationCommand object uses the example notification script located in /etc/icinga2/scripts/mail-host-notification.sh. Here is a quick overview of the arguments that can be used. See also host runtime macros for further information. mail-service-notification ¶ The mail-service-notification NotificationCommand object uses the example notification script located in /etc/icinga2/scripts/mail-service-notification.sh. Here is a quick overview of the arguments that can be used. See also service runtime macros for further information. Dependencies ¶ Icinga 2 uses host and service Dependency objects for determining ] In other words If the parent service object changes into the Warningstate, this dependency will fail and render all child objects (hosts or services) unreachable. You can determine the child’s reachability by querying the last_reachable attribute via the REST API. Note Reachability calculation depends on fresh and processed check results. If dependencies disable checks for child objects, this won’t work reliably. Implicit Dependencies for Services on Host ¶ Icinga 2 automatically adds an implicit dependency for services on their host. That way service notifications are suppressed when a host is DOWN or UNREACHABLE. This dependency does not overwrite other dependencies and implicitly } Dependencies for Network Reachability ¶‘s" } Apply Dependencies based on Custom Variables ¶ variables variable variable. Dependencies for Agent Checks ¶ Another good example are agent based checks. You would define a health check for the agent daemon responding to your requests, and make all other services querying that daemon depend on that health check. apply Service "agent-health" { check_command = "cluster-zone" display_name = "cluster-health-" + host.name /* This follows the convention that the agent zone name is the FQDN which is the same as the host object name. */ vars.cluster_zone = host.name assign where host.vars.agent_endpoint } Now, make all other agent based checks dependent on the OK state of the agent-health service. apply Dependency "agent-health-check" to Service { parent_service_name = "agent-health" states = [ OK ] // Fail if the parent service state switches to NOT-OK disable_notifications = true assign where host.vars.agent_endpoint // Automatically assigns all agent endpoint checks as child services on the matched host ignore where service.name == "agent-health" // Avoid a self reference from child to parent } This is described in detail in this chapter. Event Commands ¶ with the event_command attribute. Therefore the EventCommand object should define a command line evaluating the current service state and other service runtime attributes available through runtime variables. Runtime macros such as $service.state_type$ and $service.state$ will be processed by Icinga 2 and help with fine-granular triggered events If the host/service is located on a client as command endpoint the event command will be executed on the client itself (similar to the check command). Common use case scenarios are a failing HTTP check which requires an immediate restart via event command. Another example would be an application that is not responding and therefore requires a restart. You can also use event handlers to forward more details on state changes and events than the typical notification alerts provide. Use Event Commands to Send Information from the Master ¶ This example sends a web request from the master node to an external tool for every event triggered on a businessprocess service. Define an EventCommand object send_to_businesstool which sends state changes to the external tool. object EventCommand "send_to_businesstool" { command = [ "/usr/bin/curl", "-s", "-X PUT" ] arguments = { "-H" = { value ="$businesstool_url$" skip_key = true } "-d" = "$businesstool_message$" } vars.businesstool_url = "" vars.businesstool_message = "$host.name$ $service.name$ $service.state$ $service.state_type$ $service.check_attempt$" } Set the event_command attribute to send_to_businesstool on the Service. object Service "businessprocess" { host_name = "businessprocess" check_command = "icingacli-businessprocess" vars.icingacli_businessprocess_process = "icinga" vars.icingacli_businessprocess_config = "training" event_command = "send_to_businesstool" } In order to test this scenario you can run: nc -l 8080 This allows to catch the web request. You can also enable the debug log and search for the event command execution log message. tail -f /var/log/icinga2/debug.log | grep EventCommand Feed in a check result via REST API action process-check-result or via Icinga Web 2. Expected Result: # nc -l 8080 PUT /businesstool HTTP/1.1 User-Agent: curl/7.29.0 Host: localhost:8080 Accept: */* Content-Length: 47 Content-Type: application/x-www-form-urlencoded businessprocess businessprocess CRITICAL SOFT 1 Use Event Commands to Restart Service Daemon via Command Endpoint on Linux ¶ This example triggers a restart of the httpd service on the local system when the procs service check executed via Command Endpoint fails. It only triggers if the service state is Critical and attempts to restart the service before a notification is sent. Requirements: - Icinga 2 as client on the remote node - icinga user with sudo permissions to the httpd daemon Example on CentOS 7: # visudo icinga ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart httpd Note: Distributions might use a different name. On Debian/Ubuntu the service is called apache2. Define an EventCommand object restart_service which allows to trigger local service restarts. Put it into a global zone to sync its configuration to all clients. [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/eventcommands.conf object EventCommand "restart_service" { command = [ PluginDir + "/restart_service" ] arguments = { "-s" = "$service.state$" "-t" = "$service.state_type$" "-a" = "$service.check_attempt$" "-S" = "$restart_service$" } vars.restart_service = "$procs_command$" } This event command triggers the following script which restarts the service. The script only is executed if the service state is CRITICAL. Warning and Unknown states are ignored as they indicate not an immediate failure. [root@icinga2-agent1.localdomain /]# vim /usr/lib64/nagios/plugins/restart_service #!/bin/bash while getopts "s:t:a:S:" opt; do case $opt in s) servicestate=$OPTARG ;; t) servicestatetype=$OPTARG ;; a) serviceattempt=$OPTARG ;; S) service=$OPTARG ;; esac done if ( [ -z $servicestate ] || [ -z $servicestatetype ] || [ -z $serviceattempt ] || [ -z $service ] ); then echo "USAGE: $0 -s servicestate -z servicestatetype -a serviceattempt -S service" exit 3; else # Only restart on the third attempt of a critical event if ( [ $servicestate == "CRITICAL" ] && [ $servicestatetype == "SOFT" ] && [ $serviceattempt -eq 3 ] ); then sudo /usr/bin/systemctl restart $service fi fi [root@icinga2-agent1.localdomain /]# chmod +x /usr/lib64/nagios/plugins/restart_service Add a service on the master node which is executed via command endpoint on the client. Set the event_command attribute to restart_service, the name of the previously defined EventCommand object. [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-agent1.localdomain.conf object Service "Process httpd" { check_command = "procs" event_command = "restart_service" max_check_attempts = 4 host_name = "icinga2-agent1.localdomain" command_endpoint = "icinga2-agent1.localdomain" vars.procs_command = "httpd" vars.procs_warning = "1:10" vars.procs_critical = "1:" } restart_service Use Event Commands to Restart Service Daemon via Command Endpoint on Windows ¶ This example triggers a restart of the httpd service on the remote system when the service-windows service check executed via Command Endpoint fails. It only triggers if the service state is Critical and attempts to restart the service before a notification is sent. Requirements: - Icinga 2 as client on the remote node - Icinga 2 service with permissions to execute Powershell scripts (which is the default) Define an EventCommand object restart_service-windows which allows to trigger local service restarts. Put it into a global zone to sync its configuration to all clients. [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/global-templates/eventcommands.conf object EventCommand "restart_service-windows" { command = [ "C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe", PluginDir + "/restart_service.ps1" ] arguments = { "-ServiceState" = "$service.state$" "-ServiceStateType" = "$service.state_type$" "-ServiceAttempt" = "$service.check_attempt$" "-Service" = "$restart_service$" "; exit" = { order = 99 value = "$$LASTEXITCODE" } } vars.restart_service = "$service_win_service$" } This event command triggers the following script which restarts the service. The script only is executed if the service state is CRITICAL. Warning and Unknown states are ignored as they indicate not an immediate failure. Add the restart_service.ps1 Powershell script into C:\Program Files\Icinga2\sbin: param( [string]$Service = '', [string]$ServiceState = '', [string]$ServiceStateType = '', [int]$ServiceAttempt = '' ) if (!$Service -Or !$ServiceState -Or !$ServiceStateType -Or !$ServiceAttempt) { $scriptName = GCI $MyInvocation.PSCommandPath | Select -Expand Name; Write-Host "USAGE: $scriptName -ServiceState servicestate -ServiceStateType servicestatetype -ServiceAttempt serviceattempt -Service service" -ForegroundColor red; exit 3; } # Only restart on the third attempt of a critical event if ($ServiceState -eq "CRITICAL" -And $ServiceStateType -eq "SOFT" -And $ServiceAttempt -eq 3) { Restart-Service $Service; } exit 0; Add a service on the master node which is executed via command endpoint on the client. Set the event_command attribute to restart_service-windows, the name of the previously defined EventCommand object. [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/icinga2-agent2.localdomain.conf object Service "Service httpd" { check_command = "service-windows" event_command = "restart_service-windows" max_check_attempts = 4 host_name = "icinga2-agent2.localdomain" command_endpoint = "icinga2-agent2.localdomain" vars.service_win_service = "httpd" } In order to test this configuration just stop the httpd on the remote host icinga2-agent1.localdomain. C:> net stop httpd You can enable the debug log and search for the executed command line in C:\ProgramData\icinga2\var\log\icinga2\debug.log. Use Event Commands to Restart Service Daemon via SSH ¶ This example triggers a restart of the httpd daemon via SSH when the http service check fails. Requirements: - SSH connection allowed (firewall, packet filters) - icinga user with public key authentication - icinga user with sudo permissions to restart: [root@icinga2-master1.localdomain /]# vim /etc/icinga2/zones.d/master/local_eventcommands.conf /* variable systemctl restart $event_by_ssh_service$" } Now set the event_command attribute to event_by_ssh_restart_service and tell it which service should be restarted using the event_by_ssh_service attribute. apply Service "http" { import "generic-service" check_command = "http" event_command = "event_by_ssh_restart_service" vars.event_by_ssh_service = "$host.vars.httpd_name$" //vars.event_by_ssh_logname = "icinga" //vars.event_by_ssh_identity = "/home/icinga/.ssh/id_rsa.pub" assign where host.vars.httpd_name } Specify the httpd_name custom variable on the host to assign the service and set the event handler service. object Host "remote-http-host" { import "generic-host" address = "192.168.1.100" vars.httpd_name = "apache2" } by_ssh
https://icinga.com/docs/icinga-2/latest/doc/03-monitoring-basics/
CC-MAIN-2022-27
en
refinedweb
[reportlab-users] underlines and horizontal spacing Henning von Bargen H.vonBargen at t-p.com Fri Oct 31 04:30:18 EDT 2008 Previous message: [reportlab-users] limits of KeepTogether() ? Next message: [reportlab-users] Add comments to PDF? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] Chris Foster wrote: > I'm using intra-paragraph markup (<u>...</u>) for underlines and my > users aren't happy with the line extending into the space between > words. (It actually looks OK to me, but they're picky.) Is there a > better way to do underlining or maybe get more spacing between words? You might try the Paragraph class from wordaxe.rl.NewParagraph (which will also be used by default as if you say from wordaxe.rl.Paragraph import Paragraph) from the wordaxe-0.3.0 release ( ). This implementation handles spaces explicitly, whereas the reportlab.platypus.Paragraph implementation only stores the words (not the spaces inbetween). Using the wordaxe implementation, you can use Paragraph("<u>Only</u> <u>words</u> <u>are</u> <u>underlined</u>.", style) This is untested, but should work as expected. For more spacing between words, you probably have to do some coding, because the current implementation converts multiple spaces to a single space. Or you could use a transparent inline image with <img>. Take a look at Dinu Gherman's alternative paragraph implementation, too. Henning Previous message: [reportlab-users] limits of KeepTogether() ? Next message: [reportlab-users] Add comments to PDF? Messages sorted by: [ date ] [ thread ] [ subject ] [ author ] More information about the reportlab-users mailing list
http://two.pairlist.net/pipermail/reportlab-users/2008-October/007618.html
CC-MAIN-2022-27
en
refinedweb
Particle CLI The Particle CLI is a powerful tool for interacting with your devices and the Particle Device Cloud. The CLI uses Node.js and can easily run on Windows, macOS (OS X), and Linux. It's also open source so you can edit and change it, and even send in your changes as pull requests if you want to share! Installing Using macOS or Linux The easiest way to install the CLI is to open a Terminal and type: bash <( curl -sL ) This command downloads the particle command to your home directory at ~/bin, installs a version of Node.js to ~/.particle and installs the particle-cli Node.js module that contain the code of the CLI. It will also try to install DFU-util, a utility program for programming devices over USB. See the instructions for installing DFU-util if the installer is not able to automatically install dfu-util. The installer also works on the Raspberry Pi! Using Windows Download the Windows CLI Installer and run it to install the Particle CLI, the device drivers and DFU-util. The CLI is installed to %LOCALAPPDATA%\particle ( C:\Users\username\AppData\Local\particle for Windows in English). Advanced Install You can manually install the particle-cli Node.js package if you need the CLI installed in a different location or you need to install a specific version of the CLI. Make sure you have a recent LTS version of Node.js installed. # check that you have node.js 10 or above. Check on how to update node.js $ node -v v10.15.1 # check that you have npm 6 or above $ npm -v 6.4.1 Next, open a command prompt or terminal, and install by typing: # how to install the particle-cli $ npm install -g particle-cli $ particle login If you experience permission errors, we recommend you change the directory where npm installs global packages (ones installed with -g) to another directory as documented here. If you must install particle-cli to the default global package location as the superuser, you have to use the --unsafe-perm flag to successfully install all dependencies: sudo npm install -g --unsafe-perm particle-cli. For more OS-specific install instructions, see below. On Windows, make sure to download and install the Windows Drivers if you installed the CLI through npm and did not use the Windows CLI Installer. To use the local flash and key features you'll also need to install dfu-util, and openssl. They are freely available and open-source, and there are installers and binaries for most major platforms. Here are some great tutorials on the community for full installs: Install Separate Components for Windows Installing on Ubuntu 12.04 Installing on Ubuntu 14.04 Upgrading to the latest version If you installed the Particle CLI through the installer, it will periodically update itself to the latest version. To force it to update, run the installer script again or enter this command: # how to update the installed CLI $ particle update-cli If the CLI is outputing unexpected errors after an update, delete the ~/.particle (macOS and Linux) or C:\Users\<username>\AppData\Local\particle directory and run the installer script again to start over. To prevent the Particle CLI from automatically updating, set the environment variable PARTICLE_DISABLE_UPDATE=true for your system. Use particle update-cli to manually update. If you installed manually using npm install, you can upgrade by running the same command you used to install the tool. Running from source (advanced) To grab the CLI source and play with it locally # how to get the source code for the CLI $ git clone $ cd particle-cli $ npm install $ npm start -- help View README#Development for more Getting Started These next two commands are all you need to get started setting up an account, claiming a device, and discovering new features. particle setup This command will guide you through logging in or creating a new account as well as claiming your device! # how to setup your account and your device $ particle setup particle help Shows you what commands are available and how to use them. You can also give the name of a command for detailed help. # how to get help $ particle help $ particle help keys Flashing over Serial for the Electron If you're wanting to save data on your Electron you should definitely consider flashing your Electron over USB instead of OTA (over-the-air). Assuming you've compiled and downloaded the firmware binary from the Web IDE by clicking the cloud button next to the file name, you should be able to use the Particle CLI, mentioned above, to flash your application firmware to your Electron without using data. Steps: - 1: Put the Electron in to DFU mode (blinking yellow). - 2: Open a command prompt or terminal window. - 3: Navigate to the folder where you've downloaded the firmware.binfile. - 4: From the CLI issue particle flash --usb firmware.bin # How to flash an Electron over USB $ particle flash --usb firmware.bin Note: If your Electron goes into safe mode blinking magenta you should put the Electron back into DFU mode (blinking yellow) and do: $ particle update Blink an LED with Tinker If you're just opening a new device, chances are it's already loaded with Tinker, the app we load at the factory. If you don't have Tinker, or if you've been using the build IDE already, let's load it quickly by typing: # How to re-load tinker onto a device $ particle flash my_new_device_name tinker Including: /usr/local/lib/node_modules/particle-cli/binaries/particle_tinker.bin attempting to flash firmware to your device my_new_device_name flash device said {"id":"0123456789ABCDEFGHI","status":"Update started"} Let's make sure your device is online and loaded with Tinker. We should see the four characteristic functions exposed by Tinker, "digitalWrite", "digitalRead", "analogWrite", and "analogRead". # how to show all your devices and their functions and variables $ particle list Checking with the cloud... Retrieving devices... (this might take a few seconds) my_device_name (0123456789ABCDEFGHI) 0 variables, and 4 functions Functions: int digitalread(String args) int digitalwrite(String args) int analogread(String args) int analogwrite(String args) Let's try turning on the LED attached to pin D7 on your device. # how to call a function on your device $ particle call my_device_name digitalwrite D7,HIGH 1 $ particle call my_device_name digitalwrite D7,LOW 1 Nice! You should have seen the small blue LED turn on, and then off. Update your device remotely You can write whole apps and flash them remotely from the command line just as you would from the build IDE. Let's write a small blink sketch to try it out. Copy and paste the following program into a file called blinky.ino // Copy me to blinky.ino #define PIN D7 int state = 0; void setup() { //tell the device we want to write to this pin pinMode(PIN, OUTPUT); } void loop() { //alternate the PIN between high and low digitalWrite(PIN, (state) ? HIGH : LOW); //invert the state state = !state; //wait half a second delay(500); } Then let's compile that program to make sure it's valid code. The CLI will automatically download the compiled binary of your program if everything went well. # how to compile a program without flashing to your device $ particle compile photon blinky.ino Including: blinky.ino attempting to compile firmware pushing file: blinky.ino grabbing binary from: saved firmware to firmware_123456781234.bin Compiled firmware downloaded. Replace photon with the type of device you have: - argon - boron - photon - p1 - electron (also E series) Now that we have a valid program, let's flash it to our device! We can use either the source code again, or we can send our binary. # how to flash a program to your device (from source code) $ particle flash my_device_name blinky.ino # OR - how to flash a pre-compiled binary to your device $ particle flash my_device_name firmware_123456781234.bin Including: firmware_123456781234.bin attempting to flash firmware to your device my_device_name flash device said {"id":"01234567890ABCDEFGH","status":"Update started"} Compile and flash code locally You can find a step-by-step guide to installing the local build toolchain for the firmware in the FAQ section of the documentation. After building your code on your machine, you can flash it to your device over Serial or remotely. Working with projects and libraries When your code gets too long for one file or you want to use libraries that other developers have contributed to the Particle platform it's time to create a project. Creating a project By default projects are created in your home directory under Particle or in your Documents folder under Particle on Windows. You can also create projects in the current directory. $ particle project create What would you like to call your project? [myproject]: doorbell Would you like to create your project in the default project directory? [Y/n]: Initializing project in directory /home/user/Particle/projects/doorbell... > A new project has been initialized in directory /home/user/Particle/projects/doorbell Using libraries The CLI supports using libraries with your project. This allows you to incorporate already written and tested code into your project, speeding up development and assuring quality. The overall flow when consuming a library goes like this - set up the initial project for your application - find the library you want to add particle library search - add the library to your project - particle library add - edit your source code to use the library - compile your project - particle compile These commands are described in more details in the CLI reference. Instead of the text search in the CLI, you can also use the web-based library search. library search The library search command allows you to search for libraries that are related to the text that you type in. For example, particle library search neo Will find libraries containing neo in their name. library add The library add command adds the latest version of a library to your project. For example, if you wanted to add the InternetButton library to your project, you would type $ particle library add internetbutton > Library InternetButton 0.1.10 has been added to the project. > To get started using this library, run particle library view InternetButton to view the library documentation and sources. This will add the InternetButton dependency to your project's project.properties file. The InternetButton library itself is not present in your project, so you won't see the InternetButton sources. The library is added to your project when the project is compiled in the cloud. To make the library functionality available to your application, you add an include statement to your application source code. The include statement names the library header file, which is the library name with a .h ending. For example, if we were using the library "UberSensor", it would be included like this #include "UberSensor.h" library view The library view downloads the source code of a library so you can view the code, example and README. $ particle library view internetbutton Checking library internetbutton... Library InternetButton 0.1.11 installed. Checking library neopixel... Checking library particle_ADXL362... Library particle_ADXL362 0.0.1 installed. Library neopixel 0.0.10 installed. To view the library documentation and sources directly, please change to the directory /home/monkbroc/Particle/community/libraries/InternetButton@0.1.11 Change to the directory indicated to view the sources. library copy Adding a library to your project does not add the library sources. For times when you want to modify the library sources, you can have them added locally. particle library copy neopixel The library will be copied to the lib folder of your project. If you already have the library in your project.properties make sure to remove it so the cloud compiler doesn't overwrite your changed copy with the published code. Incorporating the library into your project Once the library is added, it is available for use within your project. The first step to using the library is to include the library header, which follows the name of the library. For example: #include "neopixel.h" The functions and classes from that library are then available for use in your application. Check out the library examples and documentation that comes with the library for specifics on using that library. Contributing Libraries Contributing a library is the process where you author a library and share this with the community. The steps to creating a library are as follows: - optionally, create a project for consuming the library - scaffold a new library structure - library create - author the library, tests and examples - publish the library Create a project for consuming the library While it's not strictly necessary to have a project present when authoring a new library, having one can help ensure that the library works as intended before publishing it. The project allows you to consume the library, check that it compiles and verify it behaves as expected on the target platforms before publishing. For the library consumer project that will consume the library mylib, create an initial project structure that looks like this: src/project.cpp src/project.h project.properties lib/mylib The library will exist in the directory lib/mylib. All these files are initially empty - we'll add content to them as the library is authored. Scaffolding the library The library create command is used to scaffold the library. It creates a skeleton structure for the library, containing initial sources, examples, tests and documentation. In our example project structure we want to create a new library in lib/mylib so we will run these commands: cd lib/mylib particle library create The command will prompt you to enter the name of the library - mylib, the version - 0.0.1 and the author, your name/handle/ident. The command will then create the skeleton structure for the library. Authoring the library You are then free to edit the .cpp and .h files in the lib/mylib/src folder to provide the functionality of your library. It's a good idea to test often, by writing code in the consuming project that uses each piece of functionality in the library as it's written. Consuming the library To test your changes in the library, compile the project using particle compile <platform> particle compile photon This will create a .bin file which you then flash to your device. particle flash mydevice firmware.bin (Replace the name firmware.bin with the name of the .bin file produced by the compile step.) Contributing the library Once you have tested the library and you are ready to upload the library to the cloud, you run the library upload command. You run this command from the directory containing the library cd lib/mylib particle library upload Before the library is contributed, it is first validated. If validation succeeds, the library is contributed and is then available for use in your other projects. The library is not available to anyone else. Publishing the Library If you wish to make a contributed library available to everyone, it first needs to be published. When publishing a library, it's important to ensure the version number hasn't been published before - if the version has already been published, the library will not be published and an error message will be displayed. Incrementing the version number with each publish is a recommended approach to ensuring unique versions. Once the library is published, it is visible to everyone and available for use. Once the a given version of a library has been published, the files and data cannot be changed. Subsequent changes must be via a new contributed version and subsequent publish. Reference For more info on CLI commands, go here. Also, check out and join our community forums for advanced help, tutorials, and troubleshooting.
https://docs.particle.io/getting-started/developer-tools/cli/
CC-MAIN-2022-27
en
refinedweb
requirement Modular exponentiation algorithm , Through the verification of the server . visit**.207.12.156:9012/step_04 The server will give you 10 A question , Each question contains three numbers (a,b,c), Please give me a^b%c Value . The return value is written to the field ans,10 Use a comma for each number , separate , Submitted to the**.207.12.156:9012/step_04 Tips : Note that commas must be English commas . {"is_success": true, "questions": "[[1336, 9084, 350830992845], [450394869827, 717234262, 9791], [2136, 938408201856, 612752924963], [6026, 754904536976, 5916], [787296602, 305437915, 661501280], [864745305, 6963, 484799723855], [4165, 110707859589, 102613824], [398189032, 723455558974, 794842765789], [974657695, 138141973218, 760159826372], [9034, 7765, 437523243]]"} Python Program realization import requests as re import time def fastModular(x): # Fast power implementation """x[0] = base """ """x[1] = power""" """x[2] = modulus""" result = 1 while(x[1] > 0): if(x[1] & 1): # Bit operations speed up the judgment of parity result = result * x[0] % x[2] x[1] = int(x[1]/2) x[0] = x[0] * x[0] % x[2] return result answer = '' getHtml = re.get("**.207.12.156:9012/step_04/") start = time.process_time() # Operation timestamp for i in eval(getHtml.json()['questions']): # Will have '[]' The string of symbols is converted into a list answer += str(fastModular(i)) + ',' end = time.process_time() # Operation timestamp param = {'ans':answer[:-1]} print(f"Runing time is { end- start} sec") getHtml = re.get("**.207.12.156:9012/step_04/",params=param) print(getHtml.text) >>> runing time is 0.0 s {"is_success": true, "message": "please visit**.207.12.156:9012/context/eb63fffd85c01a0a5d8f3cadea18cf56"} >>> Run directly to get the next link answer !! How can we calculate A^B mod C quickly if B is a power of 2 ? Using modular multiplication rules: i.e. A^2 mod C = (A * A) mod C = ((A mod C) * (A mod C)) mod C a^b % c = (a % c)^b % c (a * b * c) % d = {(a % d) * (c % d) * (b % d)} % d a^5 % c = (a % c)^5 % c = {(a % c) * (a % c) * (a % c) * (a % c) * (a % c)} % c One algorithm is to use {(a % c) * (a % c) * (a % c) * (a % c) * (a % c)} % c, Using the normal power method , Iterate variables in . result = result * a % c This will iterate 5 Time , That is to say Power operation time complexity . notes : Iterative operations {(result % c) * (a % c)} % c == result * a % c Another is to use the relationship between base and power , Divide the power by 2, Square times the base . This number remains the same . Plus the use of lemma will be much more convenient .log(power) Time complexity of . 4^20 mod 11 = 1099511627776 % 11 =1 = 16^10 mod 11 = (16 mod 11)^10 = 5 ^ 10 mod 11 = 25 ^ 5 mod 11 = (25 mod 11)^5 = 3 ^ 5 mod 11 9 ^ 2.5 = 9 ^ 2 * 9^(1/2) = 9 ^ 2 * 3 mod 11 The above one needs square 3 change 9 Reopen 2 Power 9 change 3, Get the results . After simplification, we find that this method can be reduced to , When the power becomes odd , Let's subtract one from the odd number , Divide by two , Base squared , And multiply by the base Calculate . The result is the same . It's easier to think so . It is also convenient for program implementation 3 ^ 5 mod 11 = 9 ^ 2 * 3 mod 11 ( 5-1=4 ,4/2=2 ) = (9 mod 11)^2 mod 11 = 2 ^ 2 *3 mod 11 = 4 ^ 1 * 3 mod 11 = (4 mod 11)^1 * 3 mod 11 = 7^1 * 3 mod 11 = 49^0 *7 *3 mod 11 =21 % 11 =1 Odd minus one The part divided into even powers will eventually reach 0 Time , The result is 1. The power of the first power is the key factor to determine the result . Fast power modules of large numbers Python More articles on Implementation - codeforces magic five -- Fast power module Topic link : First, calculate the value in a period , Save in ans Inside , Is the usual fast power module m practice . Then you have to calculate a formula , Such as the k ... - Fast power module n operation Exponentiation in modular operation , such as 5^596 mod 1234, Of course , Direct use of the cycle of violence is also possible , See a fast modular exponentiation algorithm in the book The general idea is ,a^b mod n , First the b Convert to binary , Then start at the top ( Highest one ... - URAL 1141. RSA Attack( Euler theorem + Extended Euclid + Fast power module ) Topic link The question : Here you are. n,e,c, And know me ≡ c (mod n), and n = p*q,pq All are prime numbers . Ideas : This question really matches the title , It's a RSA Algorithm , At present, the most important encryption algorithm on earth .RSA count ... - hdu 2462( Euler theorem + High precision and fast power module ) The Luckiest number Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 32768/32768 K (Java/Othe ... - 2014 The first of many schools I topic || HDU 4869 Turn the pokers( Fermat's small Theorem + Fast power module ) Topic link The question : m card , It can be translated n Time , Every time I turn xi card , Ask how many forms you can get in the end . Ideas :0 Defined as the opposite ,1 Defined as positive ,( At first it was anti ), For each flop , We define two boundaries lb,rb, Represents each time 1 least ... - hdu 1852( Fast power module + The formula for taking modulus when there is division ) Beijing 2008 Time Limit: 1000/1000 MS (Java/Others) Memory Limit: 32768/65535 K (Java/Others)Tota ... - 2019 Nanchang invitational tournament C. Angry FFF Party Fast exponentiation of large number matrices + Classification of discussion Topic link The question : Defined function : $$F(n)=\left\{\begin{aligned}1, \quad n=1,2 \\F(n- ... - [ primary ]sdut2605 A^X mod P Shandong Province, the fourth ACM Provincial competition ( The meter , Fast power module idea , Hash ) This article from the : The question : f(x) = K, x = 1 f(x) = (a*f(x-1) + b)%m , x > 1 Find out ( A^(f(1) ... - Fast power module m Algorithm Here are three numbers ,a,b,m seek a^b%m Value . If b Too big , Using the ordinary fast power will timeout . So will b=2^0*b0+2^1*b+b1...... then , You know how to do it by using the knowledge of junior middle school . continue , Code up . #inc ... - hdu 4602 Fast power module of recursive relation matrix Partition Time Limit: 2000/1000 MS (Java/Others) Memory Limit: 32768/32768 K (Java/Others)Total S ... Random recommendation - Study with me python, Basic concepts (life is short ,we need python) author :tobecrazy Source : Welcome to reprint , Reprint please indicate the source .thank you! Basic concepts : Constant : Constant names are all uppercase , Such as PI Variable : ... - Network | UDP checksum 1. The checksum ICMP,IP,UDP,TCP It's all in the header checksum( Inspection and ) Field .IP The check sum in the header checks only the header :ICMP.IGMP.TCP and UDP Checksums in headers check headers and data . UDP and TCP ... - 【Selenium】1. Introduce Selenium This article is for study and communication , No commercial use , No profit . It's all my own translation to urge me to study . Poor translation , Forgive me . originate : ... - android Life cycle of 1. Running state : When an activity is at the top of the stack , At this time, the activity is active , The system is unwilling to recycle active , Will affect the user experience . 2. Pause state : When an activity is no longer at the top of the stack , But still visible , This is the pause state . On hold ... - 【 Basics 】Asp.Net operation Cookie summary One . What is? Cookie? Cookie Is a small amount of data stored in the text file of the client file system or in the memory of the client browser dialog . It's mainly used to track data settings , for example : When we want to visit a website page , When a user requests a web page , Applications may ... - [ Reprint ] Dubbo Architecture design details Reprinted from Dubbo yes Alibaba Open source distributed service framework , Its biggest characteristic is to construct in a hierarchical way , In this way, the layers can be decoupled ... - node.js Advanced topics < h3>notes_ control flow //forloopi.js var fs = require('fs'); var files = ['a.txt', 'b.txt', 'c.txt']; ... - php in heredoc And nowdoc How to use One .heredoc Structure and usage Heredoc Structure is like a double quote string without double quotes , That is to say in heredoc A single quotation mark in a structure does not need to be escaped . The variables in its structure will be replaced , But in heredoc The structure contains complex ... - MySQL series --1. Install, uninstall and user rights management MySQL install 1.Ubuntu18 Lower installation MySQL sudo apt-get install mysql-server MySQL The version is 5.7.25 2. Sign in MySQL use mysql-serve ... - Linux-- Master slave copy One . mysql+centos7 mariadb mariadb It's actually with mysql It's the same , It's just that centos7 It's called mariadb, Mainly because mysql After being acquired by Oracle , There may be a risk of closing the source ...
https://en.pythonmana.com/2021/08/20210823010651136j.html
CC-MAIN-2022-27
en
refinedweb
To use a public package in your Go application add the line to the top of your program to import the package import "github.com/aws/aws-sdk-go/service/s3" If this is the first time you are using the package, the error message appears in the output window of Atom when you save the file cannot find package “github.com/aws/aws-sdk-go/service/s3” in any of: C:\Go\src\github.com\aws\aws-sdk-go\service\s3 (from $GOROOT) C:\Users\MY_USERNAME\go\src\github.com\aws\aws-sdk-go\service\s3 (from $GOPATH) Open a terminal window and execute the command to download the package go get github.com/aws/aws-sdk-go/service/s3
https://pinter.org/archives/8129
CC-MAIN-2022-27
en
refinedweb
Feb 18 2019 08:39 AM I read somewhere that you could do the following hybrid deployment. 1) Two dedicated Exchange 2016 "hybrid servers" that are F5 load balanced 2) Create a new namespace called hybrid.contoso.com. (Why would we need a new namespace?) 3) Create internal and external DNS A record for hybrid.contoso.com (same IP addresses?) 4) Publish hybrid.contoso.com through the F5 load balancer. (Is this done on the external F5 or both the internal and external F5. We also have BlueCoat device) 4) Point the existing autodiscover record to hybrid.contoso.com (external). (Again why would we do this? Will that mean clients need to be re-configured?). Can we just use a CNAME to redirect autodiscover to hybrid? 5) Point the existing EWS services to hybrid.contoso.com (external). (I supposed this is used for mailbox migration path?) 6) Create two A records called smtp1.contoso.com and smtp2.contoso.com and configure send and receive connectors in Exchange online to send contoso.com mails to these smart host addresses. (I don't know why this is needed cause we are enabling centralised transport and I though this would be created automatically) Thank you.
https://techcommunity.microsoft.com/t5/office-365/exchange-hybrid-namespaces/m-p/352638
CC-MAIN-2022-27
en
refinedweb
Converting JSON to CSV in Java Last modified: October 25, 2021 1. Introduction In this short tutorial, we'll see how to use Jackson to convert JSON into CSV and vice versa. There are alternative libraries available, like the CDL class from org.json, but we'll just focus on the Jackson library here. After we've looked at our example data structure, we'll use a combination of ObjectMapper and CSVMapper to convert between JSON and CSV. 2. Dependencies Let’s add the dependency for Jackson CSV data formatter: <dependency> <groupId>com.fasterxml.jackson.dataformat</groupId> <artifactId>jackson-dataformat-csv</artifactId> <version>2.13.0</version> </dependency> We can always find the most recent version of this dependency on Maven Central. We'll also add the dependency for the core Jackson databind: <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.13.0</version> </dependency> Again, we can find the most recent version of this dependency on Maven Central. 3. Data Structure Before we reformat a JSON document to CSV, we need to consider how well our data model will map between the two formats. So first, let's consider what data the different formats support: - We use JSON to represent a variety of object structures, including ones that contain arrays and nested objects - We use CSV to represent data from a list of objects, with each object from the list appearing on a new line This means that if our JSON document has an array of objects, we can reformat each object into a new line of our CSV file. So, as an example, let's use a JSON document containing the following list of items from an order: [ { "item" : "No. 9 Sprockets", "quantity" : 12, "unitPrice" : 1.23 }, { "item" : "Widget (10mm)", "quantity" : 4, "unitPrice" : 3.45 } ] We'll use the field names from the JSON document as column headers, and reformat it to the following CSV file: item,quantity,unitPrice "No. 9 Sprockets",12,1.23 "Widget (10mm)",4,3.45 4. Read JSON and Write CSV First, we use Jackson's ObjectMapper to read our example JSON document into a tree of JsonNode objects: JsonNode jsonTree = new ObjectMapper().readTree(new File("src/main/resources/orderLines.json")); Next, let's create a CsvSchema. This determines the column headers, types, and sequence of columns in the CSV file. To do this, we create a CsvSchema Builder and set the column headers to match the JSON field names: Builder csvSchemaBuilder = CsvSchema.builder(); JsonNode firstObject = jsonTree.elements().next(); firstObject.fieldNames().forEachRemaining(fieldName -> {csvSchemaBuilder.addColumn(fieldName);} ); CsvSchema csvSchema = csvSchemaBuilder.build().withHeader(); Then, we create a CsvMapper with our CsvSchema, and finally, we write the jsonTree to our CSV file: CsvMapper csvMapper = new CsvMapper(); csvMapper.writerFor(JsonNode.class) .with(csvSchema) .writeValue(new File("src/main/resources/orderLines.csv"), jsonTree); When we run this sample code, our example JSON document is converted to the expected CSV file. 5. Read CSV and Write JSON Now, let's use Jackson's CsvMapper to read our CSV file into a List of OrderLine objects. To do this, we first create the OrderLine class as a simple POJO: public class OrderLine { private String item; private int quantity; private BigDecimal unitPrice; // Constructors, Getters, Setters and toString } We'll use the column headers in the CSV file to define our CsvSchema. Then, we use the CsvMapper to read the data from the CSV into a MappingIterator of OrderLine objects: CsvSchema orderLineSchema = CsvSchema.emptySchema().withHeader(); CsvMapper csvMapper = new CsvMapper(); MappingIterator<OrderLine> orderLines = csvMapper.readerFor(OrderLine.class) .with(orderLineSchema) .readValues(new File("src/main/resources/orderLines.csv")); Next, we'll use the MappingIterator to get a List of OrderLine objects. Then, we use Jackson's ObjectMapper to write the list out as a JSON document: new ObjectMapper() .configure(SerializationFeature.INDENT_OUTPUT, true) .writeValue(new File("src/main/resources/orderLinesFromCsv.json"), orderLines.readAll()); When we run this sample code, our example CSV file is converted to the expected JSON document. 6. Configuring the CSV File Format Let's use some of Jackson's annotations to adjust the format of the CSV file. We'll change the ‘item' column heading to ‘name', the ‘quantity' column heading to ‘count', remove the ‘unitPrice' column, and make ‘count' the first column. So, our expected CSV file becomes: count,name 12,"No. 9 Sprockets" 4,"Widget (10mm)" We'll create a new abstract class to define the required format for the CSV file: @JsonPropertyOrder({ "count", "name" }) public abstract class OrderLineForCsv { @JsonProperty("name") private String item; @JsonProperty("count") private int quantity; @JsonIgnore private BigDecimal unitPrice; } Then, we use our OrderLineForCsv class to create a CsvSchema: CsvMapper csvMapper = new CsvMapper(); CsvSchema csvSchema = csvMapper .schemaFor(OrderLineForCsv.class) .withHeader(); We also use the OrderLineForCsv as a Jackson Mixin. This tells Jackson to use the annotations we added to the OrderLineForCsv class when it processes an OrderLine object: csvMapper.addMixIn(OrderLine.class, OrderLineForCsv.class); Finally, we use an ObjectMapper to read our JSON document into an OrderLine array, and use our csvMapper to write the this to a CSV file: OrderLine[] orderLines = new ObjectMapper() .readValue(new File("src/main/resources/orderLines.json"), OrderLine[].class); csvMapper.writerFor(OrderLine[].class) .with(csvSchema) .writeValue(new File("src/main/resources/orderLinesReformated.csv"), orderLines); When we run this sample code, our example JSON document is converted to the expected CSV file. 7. Conclusion In this quick tutorial, we learned how to read and write CSV files using the Jackson data format library. We also looked at a few configuration options that help us get our data looking the way we want. As always, the code can be found over on GitHub.
https://www.baeldung.com/java-converting-json-to-csv
CC-MAIN-2022-27
en
refinedweb
Imbalanced Multiclass Classification with the E.coli Dataset in Python In this tutorial, we will be dealing with imbalanced multiclass classification with the E.coli dataset in Python. Classifications in which more than two labels can be predicted are known as multiclass classifications. In such cases, if the data is found to be skewed or imbalanced towards one or more class it is difficult to handle. Such problems are commonly known as Imbalanced Multiclass classification problems. Dataset is available here. Imbalanced Multiclass Classification Let us load the necessary libraries, please make sure that you guys have the latest version of the libraries on your system: from pandas import read_csv from pandas import set_option from collections import Counter from matplotlib import pyplot from numpy import mean from numpy import std from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import cross_val_score from sklearn.model_selection import RepeatedStratifiedKFold from sklearn.dummy import DummyClassifier It is now time to load the data into the python file. We can now print out the shape (or size) of the data set and then move ahead accordingly. Also, we can parse through the whole data set once if required. filename = '' df = read_csv(filename, header=None) print(df.shape) target = df.values[:,-1] counter = Counter(target) for k,v in counter.items(): per = v / len(target) * 50 print('Class=%s, Count=%d, Percentage=%.5f%%' % (k, v, per)) set_option('precision', 5) print(df.describe()) Output: (336, 8) Class=cp, Count=143, Percentage=21.27976% Class=im, Count=77, Percentage=11.45833% Class=imS, Count=2, Percentage=0.29762% Class=imL, Count=2, Percentage=0.29762% Class=imU, Count=35, Percentage=5.20833% Class=om, Count=20, Percentage=2.97619% Class=omL, Count=5, Percentage=0.74405% Class=pp, Count=52, Percentage=7.73810% 0 1 2 ... 4 5 6 count 336.00000 336.00000 336.00000 ... 336.00000 336.00000 336.00000 mean 0.50006 0.50000 0.49548 ... 0.50003 0.50018 0.49973 std 0.19463 0.14816 0.08850 ... 0.12238 0.21575 0.20941 min 0.00000 0.16000 0.48000 ... 0.00000 0.03000 0.00000 25% 0.34000 0.40000 0.48000 ... 0.42000 0.33000 0.35000 50% 0.50000 0.47000 0.48000 ... 0.49500 0.45500 0.43000 75% 0.66250 0.57000 0.48000 ... 0.57000 0.71000 0.71000 max 0.89000 1.00000 1.00000 ... 0.88000 1.00000 0.99000 [8 rows x 7 columns] Plotting the histogram of the data, through this we will get a better insight into the data. This will help us make better choices in the future coding pattern. df.hist(bins=25) pyplot.show() Output: Now in some of the classes the data available in the dataset in insufficient, this may lead to an error. To handle this just remove such classes. So using the new_data() function to remove the rows. def new_data(filename): df = read_csv(filename, header=None) df = df[df[7] != 'imS'] df = df[df[7] != 'imL'] data = df.values X, y = data[:, :-1], data[:, -1] y = LabelEncoder().fit_transform(y) return X, y Let us now evaluate the algorithms. We will be evaluating the following models on this dataset: - RF: Random Forest - ET: Extra Trees - LDA: Linear Discriminant Analysis - SVM: Support Vector Machine - BAG: Bagged Decision Trees def evaluate_model(X, y, model): cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=1) scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1) return scores def get_models(): models, names = list(), list() models.append(LinearDiscriminantAnalysis()) names.append('LDA') models.append(LinearSVC()) names.append('SVM') models.append(BaggingClassifier(n_estimators=1000)) names.append('BAG') models.append(RandomForestClassifier(n_estimators=1000)) names.append('RF') models.append(ExtraTreesClassifier(n_estimators=1000)) names.append('ET') return models, names Running the code and plotting the boxplot, will help us better understand the behaviour of the five algorithms being used in the model. X, y = load_dataset(full_path) models, names = get_models() results = list() for i in range(len(models)): scores = evaluate_model(X, y, models[i]) results.append(scores) print('>%s %.3f (%.3f)' % (names[i], mean(scores), std(scores))) pyplot.boxplot(results, labels=names, showmeans=True) pyplot.show() Output: >LDA 0.881 (0.041) >SVM 0.882 (0.040) >BAG 0.855 (0.038) >RF 0.887 (0.022) >ET 0.877 (0.034) Let us now try this whole upon the same data from scratch and print the results obtained and the expected results. We will be evaluating the following models on this dataset: OM, CP, PP, IMU, OML, IM from pandas import read_csv from sklearn.preprocessing import LabelEncoder from sklearn.ensemble import RandomForestClassifier def new_data(filename): df = read_csv(filename, header=None) df = df[df[7] != 'imS'] df = df[df[7] != 'imL'] data = df.values X, y = data[:, :-1], data[:, -1] le = LabelEncoder() y = le.fit_transform(y) return X, y, lePredicted=%s (expected om)' % (l)) # known class "cp" row = [0.49,0.29,0.48,0.50,0.56,0.24,0.35] q = model.predict([row]) l = le.inverse_transform(q)[0] print('>Predicted=%s (expected cp)' % (l)) # known class "pp" row = [0.74,0.49,0.48,0.50,0.42,0.54,0.36] q = model.predict([row]) l = le.inverse_transform(q)[0] print('>Predicted=%s (expected pp)' % (l)) # known class "imU" row = [0.72,0.42,0.48,0.50,0.65,0.77,0.79] q = model.predict([row]) l = le.inverse_transform(q)[0] print('>Predicted=%s (expected imU)' % (l)) # known class "omL" row = [0.77,0.57,1.00,0.50,0.37,0.54,0.0] q = model.predict([row]) l = le.inverse_transform(q)[0] print('>Predicted=%s (expected omL)' % (l)) # known class "im" row = [0.06,0.61,0.48,0.50,0.49,0.92,0.37] q = model.predict([row]) l = le.inverse_transform(q)[0] print('>Predicted=%s (expected im)' % (l)) Output: >Predicted=om (expected om) >Predicted=cp (expected cp) >Predicted=pp (expected pp) >Predicted=imU (expected imU) >Predicted=omL (expected omL) >Predicted=im (expected im) Clearly the model correctly predicts the expected output. Congratulations! Hope you had fun learning in this tutorial with me. Have a good day and happy learning. Also read: AdaBoost Algorithm for Machine Learning in Python
https://www.codespeedy.com/imbalanced-multiclass-classification-with-the-e-coli-dataset-python/
CC-MAIN-2022-27
en
refinedweb
The XML sitemap index is a file that contains many XML sitemap files for your website, where you can use it to submit it to Google Search Console rather than submitting multiple files. The best use of the XML sitemap index is for dynamic sites with many new pages every day that you need to include in the XML sitemap. You can submit up to 50,000 URLs in a single XML sitemap file, referring to Google's guidelines and rules on building XML sitemap files, plus their last updated date, images, and caption for the images. There are many reasons to get the URLs from the XML sitemap for your site or any other website, such as: When performing a competitor analysis, knowing how many pages they have is very important for building a content strategy and the frequency of publishing new content that you can use to compete with them. Think of it; having hundreds or thousands of URLs with titles and categories in an Excel sheet can help you develop or upgrade your content strategy. Knowing how many published pages are on your website and comparing them to the number of indexed pages on Google is one of the first steps in the indexed pages audit. After comparing the number of published pages and the number of indexed pages on Google, your next step is to compare these numbers with the valid pages from Google Search Console and start analyzing what is happening to your URLs and which URLs to remove or fix. To make the requests on the XML site map index of the WordPress website. To parse and read the XML sitemap content. To save the output in a CSV or XLS format. To encode the Arabic URLs To see the execution progress of your script. First, we'll start with importing the libraries in our Jupyter Notebook or any other Python IDE. import pandas as pd from urllib.parse import unquote import requests from bs4 import BeautifulSoup as bs from tqdm.notebook import tqdm You can get the path of the XML sitemap index for Yoast SEO Plugin by typing /sitemap_index.xml after the domain name, for example: Then a list of XML files will show up to you. In this tutorial, we'll focus on the posts files. As you can see from the image, there are three posts files. The number of the files will help us in our loop. The default naming for the posts files in the Yoast SEO plugin is post-sitemap{index}.xml, where the {index} is the number of the file. xml_list = [] urls_titles = [] for i in range(1,4): xml = f"{i}.xml" ua = "Mozilla/5.0 (Linux; {Android Version}; {Build Tag etc.}) AppleWebKit/{WebKit Rev} (KHTML, like Gecko) Chrome/{Chrome Rev} Mobile Safari/{WebKit Rev}" xml_response = requests.get(xml,headers={"User-Agent":ua}) xml_content = bs(xml_response.text,"xml") xml_loc = xml_content.find_all("loc") for item in tqdm(xml_loc): uploads = item.text.find("wp-content") if uploads == -1: xml_list.append(unquote(item.text)) urls_titles.append(unquote(item.text.split("/")[-2].replace("-"," ").title())) xml_data = {"URL":xml_list,"Title":urls_titles} xml_list_df = pd.DataFrame(xml_data,columns=["URL","Title"]) xml_list_df.to_csv("xml_files_results.csv",index=False) You can find the results in a file named xml_files_results.csv, where you can open it in Excel or Google sheets.
https://www.nadeem.tech/how-to-get-wordpress-xml-sitemap-urls-using-python/
CC-MAIN-2022-27
en
refinedweb
Documentation / Algorithms / Shortest path The All Pair Shortest Path (APSP) Algorithm This class implements the Floyd-Warshall all pair shortest path algorithm where the shortest path from any node to any destination in a given weighted graph (with positive or negative edge weights) is performed. The computational complexity is O(n^3), this may seems a very large, however this algorithm may perform better than running several Dijkstra on all node pairs of the graph (that would be of complexity O(n^2 log(n))) when the graph becomes dense. Note that all the possible paths are not explicitly computed and stored. Instead, the weight is computed and a data structure similar to network routing tables is created directly on the graph. This allows a linear reconstruction of the wanted paths, on demand, minimizing the memory consumption. For each node of the graph, a org.graphstream.algorithm.APSP.APSPInfo attribute is stored. The name of this attribute is org.graphstream.algorithm.APSP.APSPInfo#ATTRIBUTE_NAME. Complexity The complexity of the all method is O(n^3) in computation time with n the number of nodes. Reference - Floyd, Robert W. “Algorithm 97: Shortest Path”. Communications of the ACM 5 (6): 345. doi:10.1145/367766.368168. 1962. - Warshall, Stephen. “A theorem on Boolean matrices”. Journal of the ACM 9 (1): 11–12. doi:10.1145/321105.321107. 1962. Usage The implementation of this algorithm is made with two main classes that reflect the two main steps of the algorithm that are: - compute pairwise weights for all nodes; - retrieve actual paths from some given sources to some given destinations. For the first step (the real shortest path computation) you need to create an APSP object with 3 parameters: - a reference to the graph to be computed; - a string that indicates the name of the attribute to consider for the weighting; - a boolean that indicates whether the computation considers edges direction or not. Those 3 parameters can be set in a ran in the constructor APSP(Graph,String,boolean) or by using separated setters (see example below). Then the actual computation takes place by calling the compute() method which is implemented from the Algorithm interface. This method actually does the computation. Secondly, when the weights are computed, one can retrieve paths with the help of another class: APSPInfo. Such object are stored in each node and hold routing tables that can help rebuild shortest paths. Retrieving an APSPInfo instance from a node is done for instance for a node of id “F”, like this: APSPInfo info = graph.getNode("F").getAttribute(APSPInfo.ATTRIBUTE_NAME); then the shortest path from a “F” to another node (say “A”) is given by: info.getShortestPathTo("A") Example import java.io.ByteArrayInputStream; import java.io.IOException; import org.graphstream.algorithm.APSP; import org.graphstream.algorithm.APSP.APSPInfo; import org.graphstream.algorithm.APSP.Progress; import org.graphstream.graph.Graph; import org.graphstream.graph.implementations.DefaultGraph; import org.graphstream.stream.file.FileSourceDGS; /** * * B-(1)-C * / \ * (1) (10) * / \ * A F * \ / * (1) (1) * \ / * D-(1)-E */ public class APSPSP Test"); ByteArrayInputStream bs = new ByteArrayInputStream(my_graph.getBytes()); FileSourceDGS source = new FileSourceDGS(); source.addSink(graph); source.readAll(bs); APSP apsp = new APSP(); apsp.init(graph); // registering apsp as a sink for the graph apsp.setDirected(false); // undirected graph apsp.setWeightAttributeName("weight"); // ensure that the attribute name used is "weight" apsp.compute(); // the method that actually computes shortest paths APSPInfo info = graph.getNode("F").getAttribute(APSPInfo.ATTRIBUTE_NAME); System.out.println(info.getShortestPathTo("A")); } } The output of this test program should give: [F, E, D, A] Features Digraphs This algorithm can use directed graphs and only compute paths according to this direction. You can choose to ignore edge orientation by calling setDirected(boolean) method with “false” as value (or use the appropriate constructor). Shortest Paths with weighted edges You can also specify that edges have “weights” or “importance” that value them. You store these values as attributes on the edges. The default name for these attributes is “weight” but you can specify it using the #setWeightAttributeName(String) method (or by using the appropriate constructor). The weight attribute must contain an object that implements java.lang.Number. How shortest paths are stored in the graph? All the shortest paths are not literally stored in the graph because it would require to much memory to do so. Instead, only useful data, allowing the fast reconstruction of any path, is stored. The storage approach is alike network routing tables where each node maintains a list of all possible targets linked with the next hop neighbor to go through. Technically, on each node, for each target, we only store the target node name and if the path is made of more than one edge, one “pass-by” node. As all shortest path that is made of more than one edge is necessarily made of two other shortest paths, it is easy to reconstruct a shortest path between two arbitrary nodes knowing only a pass-by node. This approach still stores a lot of data on the graph, however far less than if we stored complete paths.
https://graphstream-project.org/doc/Algorithms/Shortest-path/All-Pair-Shortest-Path/
CC-MAIN-2022-27
en
refinedweb
Union (Analysis) Summary Computes a geometric union of the input features. All features and their attributes will be written to the output feature class. Learn more about how Union works Illustration Usage All input feature classes and feature layers must have polygon geometry. The Allow Gaps parameter can be used with the ALL or ONLY_FID settings on the Join Attribute parameter. This allows for identification of resulting areas that are completely enclosed by the resulting polygons. The FID attributes for these GAP features will all be -1. The output feature class will contain a FID_<name> attribute for each of the input feature classes. For example, if one of the input feature classes is named Soils, there will be a FID_Soils attribute on the output feature class. FID_<name> values will be -1 for any input feature (or any part of an input feature) that does not intersect another input feature. Attribute values for the other feature classes in the union where no intersection is detected will not be transferred to the output feature in this case.. This tool may generate multipart features in the output even if all inputs were single part. If multipart features are not desired, use the Multipart to Singlepart tool on the output feature class. With ArcGIS for Desktop Basic and Standard licenses, the number of input feature classes or layers is limited to two. Syntax Code Sample The following Python window script demonstrates how to use the Union function in immediate mode. import arcpy from arcpy import env env.workspace = "C:/data/data/gdb" arcpy.Union_analysis (["well_buff50", "stream_buff200", "waterbody_buff500"], "water_buffers", "NO_FID", 0.0003) arcpy.Union_analysis ([["counties", 2],["parcels", 1],["state", 2]], "state_landinfo") The following stand-alone script shows two ways to apply the Union function in scripting. # unions.py # Purpose: union 3 feature classes # Import the system modules import arcpy from arcpy import env # Set the current workspace # (to avoid having to specify the full path to the feature classes each time) env.workspace = "c:/data/data.gdb" # Union 3 feature classes but only carry the FID attributes to the output inFeatures = ["well_buff50", "stream_buff200", "waterbody_buff500"] outFeatures = "water_buffers" clusterTol = 0.0003 arcpy.Union_analysis (inFeatures, outFeatures, "ONLY_FID", clusterTol) # Union 3 other feature classes, but specify some ranks for each # since parcels has better spatial accuracy inFeatures = [["counties", 2],["parcels", 1],["state", 2]] outFeatures = "state_landinfo" arcpy.Union_analysis (inFeatures, outFeatures)
https://resources.arcgis.com/en/help/main/10.1/0008/00080000000s000000.htm
CC-MAIN-2022-27
en
refinedweb
POD expressions, as previously documented, are defined by misusing LibreOffice fields or track-changed text. For ODS templates, it is easy: there is a single way to encode POD expressions. Any cell containing an expression of the form: ="<python_expression>" will be interpreted as a POD expression. For ODT, as already stated, the standard way is to misuse fields of type "Conditional text". All examples in this site use such fields. However, by default, 2 other "expression holders" are also activated: Track-changed text was the initial expression holder and is activated for backward compatibility, but you are discouraged from using it for writing new POD templates. That being said, being activated by default, you must understand that it may have an impact in your POD templates: anything you write as track-changed text will be interpreted by POD as a POD expression. Suppose you have the following ODT template. You could be surprised by the result, after having used this context. from appy.model.utils import Object as O self = O(name='Smith', firstName='John') Every POD expression holder has an identifier. The complete list is in the following table. Defining which holders are activated is done via attribute expressionHolders passed to the Renderer. If, for example, you want to minimize interferences with LibreOffice, you may call the Renderer with: Renderer(..., expressionsHolders=('if',), ...) This would disable holders "change" and "input".
https://appyframe.work/163/view?page=main&nav=search.Page.allSearch.10.48&popup=False
CC-MAIN-2022-27
en
refinedweb
Hi, I'm using Connext DDS Pro 5.3.0 with VS 2015/2017. my C++ code uses dds::core::cond::WaitSetin various places. I'm cleaning up the include directives, and I have just realized that there is no forward declaration of dds::core::cond::WaitSetin your SDK. I would expect to find it in <dds/core/corefwd.hpp>but this file does not contain any forward declaration of WaitSet. In this file, the only fwd declaration in the cond namespace is namespace cond {
https://community.rti.com/keywords/dependencies
CC-MAIN-2022-27
en
refinedweb
I’m asking this because I personally think that this is inconvenient to install an additional loader. Thanks! I mean I know that I can also seem to import 2d images using base64 code. All threejs support for external file formats is handled by loaders. There are many loaders to choose from (list), but if you want to import a 3D model that was not made in threejs you will need to use one of these — or write your own, which is unlikely to be more convenient than installing one. gltf loader is now shipped with threejs, so no need for extra installation step (Maybe there is, but I’m not aware of that). The loader was always part of the repository and npm package.However, all JS files are available as modules now which makes it much easier to use files from the examples directory for projects. Interesting ! I didn’t catch this subtility ^^ Exact usage depends on how you’re building your application. ES6 Modules: (e.g. Webpack, Rollup) import * as THREE from 'three'; import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'; UMD / CommonJS: (e.g. Browserify) const THREE = window.THREE = require( 'three' ); require( 'three/examples/js/loaders/GLTFLoader.js' ); Plain scripts: <script src=""></script> <script src=""></script> Most modern and production-level tools are moving toward the first option, but all three are acceptable. THX!!! These lines of code were EXACTLY what I needed
https://discourse.threejs.org/t/how-do-you-import-3d-models-into-three-js-without-a-gltfloader/12267
CC-MAIN-2022-27
en
refinedweb
Tracker Eval Board Tutorials This section has information on prototyping with the Tracker Evaluation board experiment and add new features, with an eye toward being able to easily migrate to the Tracker One, Tracker Carrier Board, or Tracker SoM for production. DHT11 Temperature and Humidity Example The Tracker SoM Evaluation Board comes with a Grove DHT11 temperature and humidity sensor and a short 4-pin cable. Connect the sensor Connect the sensor to the 4-pin ribbon cable and the other end to the evaluation board. Either port can be used but this example assumes J10, the outer connector, pin A0 and A1. Getting the Tracker Edge Firmware You can download a complete project for use with Particle Workbench as a zip file here: Version: - Extract tracker-temperature.zip in your Downloads directory - Open the tracker-temperature git@github.com:particle-iot/tracker-edge.git cd tracker-edge git submodule update --init --recursive - Open Particle Workbench. - From the command palette, Particle: Import Project. - Run Particle: Configure Workspace for Device, select version 1.5.4-rc.1, 2.0.0-rc.3, or later, Tracker, and your device. - Run Particle: Compile and Flash. Add the libraries From the command palette in Workbench, Particle: Install Library then enter Grove_Temperature_And_Humidity_Sensor. Repeat for TemperatureHumidityValidatorRK. If you prefer to edit project.properties directly, add these: The first library is the interface for the temperature sensor. Because the sensor has a tendency to return incorrect values but does not include a checksum or CRC to determine that this has happened, the second library filters the results by collecting the last 10 samples, selecting only the samples within 1 standard deviation of the mean, and taking the mean of these samples without the outliers. The Full Source The Details #include "Grove_Temperature_And_Humidity_Sensor.h" #include "TemperatureHumidityValidatorRK.h" These are the header files for the two libraries we use. Note that you must Particle: Install Library first; you can't only include the header file. DHT tempSensor(A1); TemperatureHumidityValidator validator; These are the global variables for the two features we use. Note the use of A1. If you connected the sensor to the other Grove connector you'd use A3 instead. // Sample the temperature sensor every 2 seconds. This is done so the outlier values can be filtered out easily. const unsigned long CHECK_PERIOD_MS = 2000; Because we filter the temperature sensor results to remove the outliers, we sample the sensor every 2 seconds. That way, when a location event is generated, we don't have to wait for enough samples to return a value. void setup() { Tracker::instance().init(); // Callback to add key press information to the location publish Tracker::instance().location.regLocGenCallback(locationGenerationCallback); // Initialize temperature sensor tempSensor.begin(); Particle.connect(); } In setup() we must do several things: - Initialize the Tracker Edge firmware - Register a location generation callback - Initialize the temperature sensor - Connect to the Particle cloud void loop() { Tracker::instance().loop(); if (millis() - lastCheck >= CHECK_PERIOD_MS) { lastCheck = millis(); validator.addSample(tempSensor.getTempCelcius(), tempSensor.getHumidity()); } } In loop() we: - Call the Tracker Edge loop function - Periodically sample the temperature and humidity sensor and pass the values to the validator. void locationGenerationCallback(JSONWriter &writer, LocationPoint &point, const void *context) { float tempC = validator.getTemperatureC(); if (!isnan(tempC)) { writer.name("temp").value(tempC, 2); } float hum = validator.getHumidity(); if (!isnan(hum)) { writer.name("hum").value(hum, 1); } } Finally, in the location generation callback, we add the temperature and humidity values if valid. Results If you open the event viewer, you can see the location events now have temp and hum keys! { "cmd":"loc", "time":1595867181, "loc":{ "lck":1, "time":1595867182, "lat":42.469732, "lon":-75.064801, "alt":348.621, "hd":215.61, "h_acc":9, "v_acc":15, "cell":42.3, "batt":96.5, "temp":29, "hum":42 }, "trig":[ 0:"lock" ], "req_id":2 } If you open the map view and then the device, the new fields will appear in the Custom Data section. I2C Sensor Example One of the best ways to expand the Tracker One is using I2C, since that interface makes it possible to add multiple external peripherals off the single M8 connector. You can use the same techniques on the Tracker SoM Evaluation Board and Tracker SoM. For this example we'll add temperature, pressure, and humidity information to location publishes using the Tracker SoM Evaluation Board. We'll also use the SparkFun Qwiic line of products for easy prototyping. For production you'd probably make your own custom board with the sensor on it, instead. With the Evaluation board you'll need the Qwiic connector to prototyping wires or the cable assortment that includes it. And you'll need a sensor, in this case a SparkFun Atmospheric Sensor Breakout - BME280 (Qwiic). - Connect the following wires to the Tracker SoM Evaluation Board expansion connector: Instead of using D0/D1 for I2C like on other Particle devices, in this case we'll be using the multi-function port pins MCU_RX and MCU_TX instead. On the Tracker SoM the TX and RX pins can be reconfigured to be Wire3 instead of Serial1, allowing a single set of pins to be GPIO, serial, or I2C on the M8 connector. Note: All GPIO, ADC, and peripherals such as I2C, Serial, and SPI are 3.3V maximum and are not 5V tolerant. You must never use pull-ups to 5V on the I2C interface! You can download a complete project for use with Particle Workbench as a zip file here: Version: - Extract tracker-bme280.zip in your Downloads directory - Open the tracker-bme280 - Start with the base Tracker Edge firmware - Add and copy the Adafruit_BME280_RK library into the project: $ particle library add Adafruit_BME280_RK $ particle library copy Adafruit_BME280_RK - Here's the source: Most of it is boilerplate, but looking in more closely: - Include the library header for the sensor and add any global variables it requires: #include "Adafruit_BME280.h" Adafruit_BME280 bme(Wire3); bool hasSensor = false; - Disable Serial1and enable Wire3on the multi-function pins. Serial1.end(); Wire3.begin(); - Initialize the sensor, in this case a BME280 on Wire3 using address 0x77. hasSensor = bme.begin(0x77); Log.info("hasSensor=%d", hasSensor); - In our location callback, add the temperature, pressure, and humidity data. void locationGenerationCallback(JSONWriter &writer, LocationPoint &point, const void *context) { if (hasSensor) { writer.name("temp").value(bme.readTemperature(), 2); // degrees C writer.name("pres").value(bme.readPressure() / 100.0, 2); // hPA writer.name("hum").value(bme.readHumidity(), 2); // Relative humidity % } else { Log.info("no sensor"); } } - Now you can see the extra data in the locevent! { "cmd":"loc", "time":1593091308, "loc": { "lck":1, "time":1593091309, "lat":42.469732, "lon":-75.064801, "alt":324.11, "hd":222.95, "h_acc":6.8, "v_acc":9.2, "cell":42.3, "batt":72.2, "cell":37.1, "batt":98.8, "temp":24.92, "pres":973.39, "hum":42.46 }, "trig": [ "time" ], "req_id":2 } M8 Evaluation Board Adapter. Design Files The Tracker SoM Evaluation board is open-source and the Eagle CAD design files are available in the Tracker Hardware GitHub repository.
https://docs.particle.io/getting-started/tracker/tracker-eval-tutorials/
CC-MAIN-2022-27
en
refinedweb
In this chapter, we start creating amazing GUIs using Python 3.6 and above. We will cover the following topics: - Creating our first Python GUI - Preventing the GUI from being resized - Adding a label to the GUI form - Creating buttons and changing their text property - Text box widgets - Setting the focus to a widget and disabling widgets - Combo box widgets - Creating a check button with different initial states - Using radio button widgets - Using scrolled text widgets - Adding several widgets in a loop In this chapter, we will develop our first GUI in Python. We will start with the minimum code required to build a running GUI application. Each recipe then adds different widgets to the GUI form. In the first two recipes, we will show the entire code, consisting of only a few lines of code. In the following recipes, we will only show the code to be added to the previous recipes. By the end of this chapter, we will have created a working GUI application that consists of labels, buttons, text boxes, combo boxes, check buttons in various states, as well as radio buttons that change the background color of the GUI. At the beginning of each chapter, I will show the Python modules that belong to each chapter. I will then reference the different modules that belong to the code shown, studied and run. Here is the overview of Python modules (ending in a .py extension) for this chapter: Python is a very powerful programming language. It ships with the built-in tkinter module. In only a few lines of code (four, to be precise) we can build our first Python GUI. To follow this recipe, a working Python development environment is a prerequisite. The IDLE GUI, which ships with Python, is enough to start. IDLE was built using tkinter! Note - All the recipes in this book were developed using Python 3.6 on a Windows 10 64-bit OS. They have not been tested on any other configuration. As Python is a cross-platform language, the code from each recipe is expected to run everywhere. - If you are using a Mac, it does come with built-in Python, yet it might be missing some modules such as tkinter, which we will use throughout this book. - We are using Python 3.6, and the creator of Python intentionally chose not to make it backwards compatible with Python 2. If you are using a Mac or Python 2, you might have to install Python 3.6 from in order to successfully run the recipes in this book. - If you really wish to run the code in this book on Python 2.7, you will have to make some adjustments. For example, tkinter in Python 2.x has an uppercase T. The Python 2.7 print statement is a function in Python 3.6 and requires parentheses. - While the EOL (End Of Life) for the Python 2.x branch has been extended to the year 2020, I would strongly recommend that you start using Python 3.6 and above. - Why hold on to the past, unless you really have to? Here is a link to the Python Enhancement Proposal (PEP) 373 that refers to the EOL of Python 2: Here are the four lines of First_GUI.py required to create the resulting GUI: Execute this code and admire the result: In line nine, we import the built-in tkinter module and alias it as tk to simplify our Python code. In line 12, we create an instance of the Tk class by calling its constructor (the parentheses appended to Tk turns the class into an instance). We are using the alias tk, so we don't have to use the longer word tkinter. We are assigning the class instance to a variable named win (short for a window). As Python is a dynamically typed language, we did not have to declare this variable before assigning to it, and we did not have to give it a specific type. Python infers the type from the assignment of this statement. Python is a strongly typed language, so every variable always has a type. We just don't have to specify its type beforehand like in other languages. This makes Python a very powerful and productive language to program in. Note A little note about classes and types: - In Python, every variable always has a type. We cannot create a variable that does not have a type. Yet, in Python, we do not have to declare the type beforehand, as we have to do in the C programming language. - Python is smart enough to infer the type. C#, at the time of writing this book, also has this capability. Using Python, we can create our own classes using the classkeyword instead of the defkeyword. - In order to assign the class to a variable, we first have to create an instance of our class. We create the instance and assign this instance to our variable, for example: class AClass(object): print('Hello from AClass') class_instance = AClass()Now, the variable, class_instance, is of the AClasstype. If this sounds confusing, do not worry. We will cover OOP in the coming chapters. In line 15, we use the instance variable ( win) of the class to give our window a title via the title property. In line 20, we start the window's event loop by calling the mainloop method on the class instance, win. Up to this point in our code, we created an instance and set one property, but the GUI will not be displayed until we start the main event loop. Note - An event loop is a mechanism that makes our GUI work. We can think of it as an endless loop where our GUI is waiting for events to be sent to it. A button click creates an event within our GUI, or our GUI being resized also creates an event. - We can write all of our GUI code in advance and nothing will be displayed on the user's screen until we call this endless loop ( win.mainloop()in the preceding code). The event loop ends when the user clicks the red Xbutton or a widget that we have programmed to end our GUI. When the event loop ends, our GUI also ends. By default, a GUI created using tkinter can be resized. This is not always ideal. The widgets we place onto our GUI forms might end up being resized in an improper way, so in this recipe, we will learn how to prevent our GUI from being resized by the user of our GUI application. This recipe extends the previous one, Creating our first Python GUI, so one requirement is to have typed the first recipe yourself into a project of your own, or download the code from. We are preventing the GUI from being resized, look at: GUI_not_resizable.py Running the code creates this GUI: Line 18 prevents the Python GUI from being resized. Running this code will result in a GUI similar to the one we created in the first recipe. However, the user can no longer resize it. Also, note how the maximize button in the toolbar of the window is grayed out. Why is this important? Because once we add widgets to our form, resizing can make our GUI look not as good as we want it to be. We will add widgets to our GUI in the next recipes. The resizable() method is of the Tk() class, and by passing in (False, False), we prevent the GUI from being resized. We can disable both the x and y dimensions of the GUI from being resized, or we can enable one or both dimensions by passing in True or any number other than zero. (True, False) would enable the x-dimension but prevent the y-dimension from being resized. We also added comments to our code in preparation for the recipes contained in this book. A label is a very simple widget that adds value to our GUI. It explains the purpose of the other widgets, providing additional information. This can guide the user to the meaning of an Entry widget, and it can also explain the data displayed by widgets without the user having to enter data into it. We are extending the first recipe, Creating our first Python GUI. We will leave the GUI resizable, so don't use the code from the second recipe (or comment the win.resizable line out). In order to add a Label widget to our GUI, we will import the ttk module from tkinter. Please note the two import statements. Add the following code just above win.mainloop(), which is located at the bottom of the first and second recipes: GUI_add_label.py Running the code adds a label to our GUI: In line 10 of the preceding code, we import a separate module from the tkinter package. The ttk module has some advanced widgets that make our GUI look great. In a sense, ttk is an extension within the tkinter package. We still need to import the tkinter package itself, but we have to specify that we now want to also use ttk from the tkinter package. Line 19 adds the label to the GUI, just before we call mainloop . We pass our window instance into the ttk.Label constructor and set the text property. This becomes the text our Label will display. We also make use of the grid layout manager, which we'll explore in much more depth in Chapter 2, Layout Management. Note how our GUI suddenly got much smaller than in the previous recipes. The reason why it became so small is that we added a widget to our form. Without a widget, the tkinter package uses a default size. Adding a widget causes optimization, which generally means using as little space as necessary to display the widget(s). If we make the text of the label longer, the GUI will expand automatically. We will cover this automatic form size adjustment in a later recipe in Chapter 2, Layout Management. In this recipe, we will add a button widget, and we will use this button to change a property of another widget that is a part of our GUI. This introduces us to callback functions and event handling in a Python GUI environment. This recipe extends the previous one, Adding a label to the GUI form. You can download the entire code from. We add a button that, when clicked, performs an action. In this recipe, we will update the label we added in the previous recipe as well as the text property of the button: GUI_create_button_change_property.py The following screenshot shows how our GUI looks before clicking the button: After clicking the button, the color of the label changed and so did the text of the button, which can be seen as follows: In line 19, we assign the label to a variable, and in line 20, we use this variable to position the label within the form. We need this variable in order to change its properties in the click_me() function. By default, this is a module-level variable, so we can access it inside the function, as long as we declare the variable above the function that calls it. Line 23 is the event handler that is invoked once the button gets clicked. In line 29, we create the button and bind the command to the click_me() function. Note GUIs are event-driven. Clicking the button creates an event. We bind what happens when this event occurs in the callback function using the command property of the ttk.Button widget. Notice how we do not use parentheses, only the name click_me. We also change the text of the label to include red as, in the printed book, this might otherwise not be obvious. When you run the code, you can see that the color does indeed change. Lines 20 and 30 both use the grid layout manager, which will be discussed in the following chapter. This aligns both the label and the button. In tkinter, the typical one-line textbox widget is called Entry. In this recipe, we will add such an Entry widget to our GUI. We will make our label more useful by describing what the Entry widget is doing for the user. GUI_textbox_widget.py Now, our GUI looks like this: After entering some text and clicking the button, there is the following change in the GUI: In line 24, we get the value of the Entry widget. We have not used OOP yet, so how come we can access the value of a variable that was not even declared yet? Without using OOP classes, in Python procedural coding, we have to physically place a name above a statement that tries to use that name. So how come this works (it does)? The answer is that the button click event is a callback function, and by the time the button is clicked by a user, the variables referenced in this function are known and do exist. Life is good. Line 27 gives our label a more meaningful name; for now, it describes the text box below it. We moved the button down next to the label to visually associate the two. We are still using the grid layout manager, which will be explained in more detail in Chapter 2, Layout Management. Line 30 creates a variable, name. This variable is bound to the Entry widget and, in our click_me() function, we are able to retrieve the value of the Entry widget by calling get() on this variable. This works like a charm. Now we see that while the button displays the entire text we entered (and more), the textbox Entry widget did not expand. The reason for this is that we hardcoded it to a width of 12 in line 31. Note - Python is a dynamically typed language and infers the type from the assignment. What this means is that if we assign a string to the namevariable, it will be of the stringtype, and if we assign an integer to name, its type will be integer. - Using tkinter, we have to declare the namevariable as the type tk.StringVar()before we can use it successfully. The reason is that tkinter is not Python. We can use it from Python, but it is not the same language. While our GUI is nicely improving, it would be more convenient and useful to have the cursor appear in the Entry widget as soon as the GUI appears. Here we learn how to do this. Python is truly great. All we have to do to set the focus to a specific control when the GUI appears is call the focus() method on an instance of a tkinter widget we previously created. In our current GUI example, we assigned the ttk.Entry class instance to a variable named, name_entered. Now, we can give it the focus. Place the following code just above the code which is located at the bottom of the module and which starts the main windows event loop, like we did in the previous recipes: GUI_set_focus.py If you get some errors, make sure you are placing calls to variables below the code where they are declared. We are not using OOP as of yet, so this is still necessary. Later, it will no longer be necessary to do this. Note On a Mac, you might have to set the focus to the GUI window first before being able to set the focus to the Entry widget in this window. Adding this one line (38) of Python code places the cursor in our text Entry widget, giving the text Entry widget the focus. As soon as the GUI appears, we can type into this text box without having to click it first. We can also disable widgets. To do that, we will set a property on the widget. We can make the button disabled by adding this one line (37 below) of Python code to create the button: After adding the preceding line of Python code, clicking the button no longer creates any action: This code is self-explanatory. We set the focus to one control and disable another widget. Good naming in programming languages helps to eliminate lengthy explanations. Later in this book, there will be some advanced tips on how to do this while programming at work or practicing our programming skills at home. In this recipe, we will improve our GUI by adding drop-down combo boxes which can have initial default values. While we can restrict the user to only certain choices, we can also allow the user to type in whatever they wish. This recipe extends the previous recipe, Setting the focus to a widget and disabling widgets. We insert another column between the Entry widget and the Button widget using the grid layout manager. Here is the Python code: GUI_combobox_widget.py This code, when added to the previous recipes, creates the following GUI. Note how, in line 43 in the preceding code, we assigned a tuple with default values to the combo box. These values then appear in the drop-down box. We can also change them if we like (by typing in different values when the application is running): Line 40 adds a second label to match the newly created combo box (created in line 42). Line 41 assigns the value of the box to a variable of a special tkinter type StringVar, as we did in a previous recipe. Line 44 aligns the two new controls (label and combobox) within our previous GUI layout, and line 45 assigns a default value to be displayed when the GUI first becomes visible. This is the first value of the number_chosen['values'] tuple, the string "1". We did not place quotes around our tuple of integers in line 43, but they got casted into strings because, in line 41, we declared the values to be of the tk.StringVar type. The preceding screenshot shows the selection made by the user as 42. This value gets assigned to the number variable. If we want to restrict the user to only be able to select the values we have programmed into the Combobox, we can do that by passing the state property into the constructor. Modify line 42 as follows: GUI_combobox_widget_readonly_plus_display_number.py Now, users can no longer type values into the Combobox. We can display the value chosen by the user by adding the following line of code to our Button Click Event Callback function: After choosing a number, entering a name, and then clicking the button, we get the following GUI result, which now also displays the number selected: In this recipe, we will add three check button widgets, each with a different initial state. We are creating three check button widgets that differ in their states. The first is disabled and has a check mark in it. The user cannot remove this check mark as the widget is disabled. The second check button is enabled, and by default, has no check mark in it, but the user can click it to add a check mark. The third check button is both enabled and checked by default. The users can uncheck and recheck the widget as often as they like. Look at the following code: GUI_checkbutton_widget.py Running the new code results in the following GUI: In lines 47, 52, and 57 we create three variables of the IntVar type. In the line following each of these variables, we create a Checkbutton, passing in these variables. They will hold the state of the Checkbutton (unchecked or checked). By default, that is either 0 (unchecked) or 1 (checked), so the type of the variable is a tkinter integer. We place these Checkbutton widgets in our main window, so the first argument passed into the constructor is the parent of the widget, in our case, win. We give each Checkbutton widget a different label via its text property. Setting the sticky property of the grid to tk.W means that the widget will be aligned to the west of the grid. This is very similar to Java syntax and it means that it will be aligned to the left. When we resize our GUI, the widget will remain on the left side and not be moved towards the center of the GUI. Lines 49 and 59 place a checkmark into the Checkbutton widget by calling the select() method on these two Checkbutton class instances. We continue to arrange our widgets using the grid layout manager, which will be explained in more detail in Chapter 2, Layout Management. In this recipe, we will create three tkinter Radiobutton widgets. We will also add some code that changes the color of the main form, depending upon which Radiobutton is selected. This recipe extends the previous recipe, Creating a check button with different initial states. We add the following code to the previous recipe: GUI_radiobutton_widget.py Running this code and selecting the Radiobutton named Gold creates the following window: In lines 75-77, we create some module-level global variables which we will use in the creation of each radio button as well as in the callback function that creates the action of changing the background color of the main form (using the instance variable win). We are using global variables to make it easier to change the code. By assigning the name of the color to a variable and using this variable in several places, we can easily experiment with different colors. Instead of doing a global search-and-replace of a hardcoded string (which is prone to errors), we just need to change one line of code and everything else will work. This is known as the DRY principle, which stands for Don't Repeat Yourself. This is an OOP concept which we will use in the later recipes of the book. Note The names of the colors we are assigning to the variables ( COLOR1, COLOR2 ...) are tkinter keywords (technically, they are symbolic names). If we use names that are not tkinter color keywords, then the code will not work. Line 80 is the callback function that changes the background of our main form ( win) depending upon the user's selection. In line 87 we create a tk.IntVar variable. What is important about this is that we create only one variable to be used by all three radio buttons. As can be seen from the screenshot, no matter which Radiobutton we select, all the others will automatically be unselected for us. Lines 89 to 96 create the three radio buttons, assigning them to the main form, passing in the variable to be used in the callback function that creates the action of changing the background of our main window. Here is a small sample of the available symbolic color names that you can look up at the official tcl manual page at. Some of the names create the same color, so alice blue creates the same color as AliceBlue. In this recipe, we used the symbolic names Blue, Gold, and Red. ScrolledText widgets are much larger than simple Entry widgets and span multiple lines. They are widgets like Notepad and wrap lines, automatically enabling vertical scrollbars when the text gets larger than the height of the ScrolledText widget. This recipe extends the previous recipe, Using radio button widgets. You can download the code for each chapter of this book from. By adding the following lines of code, we create a ScrolledText widget: GUI_scrolledtext_widget.py We can actually type into our widget, and if we type enough words, the lines will automatically wrap around: Once we type in more words than the height the widget can display, the vertical scrollbar becomes enabled. This all works out-of-the-box without us needing to write any more code to achieve this: In line 11, we import the module that contains the ScrolledText widget class. Add this to the top of the module, just below the other two import statements. Lines 100 and 101 define the width and height of the ScrolledText widget we are about to create. These are hardcoded values we are passing into the ScrolledText widget constructor in line 102. These values are magic numbers found by experimentation to work well. You might experiment by changing scol_w from 30 to 50 and observe the effect! In line 102, we are also setting a property on the widget by passing in wrap=tk.WORD. By setting the wrap property to tk.WORD we are telling the ScrolledText widget to break lines by words so that we do not wrap around within a word. The default option is tk.CHAR, which wraps any character regardless of whether we are in the middle of a word. The second screenshot shows that the vertical scrollbar moved down because we are reading a longer text that does not entirely fit into the x, y dimensions of the SrolledText control we created. Setting the columnspan property of the grid widget to 3 for the SrolledText widget makes this widget span all the three columns. If we do not set this property, our SrolledText widget would only reside in column one, which is not what we want. So far, we have created several widgets of the same type (for example, Radiobutton) by basically copying and pasting the same code and then modifying the variations (for example, the column number). In this recipe, we start refactoring our code to make it less redundant. We are refactoring some parts of the previous recipe's code, Using scrolled text widgets, so you need that code to apply this recipe to. Here's how we refactor our code: GUI_adding_widgets_in_loop.py Running this code will create the same window as before, but our code is much cleaner and easier to maintain. This will help us when we expand our GUI in the coming recipes. In line 77, we have turned our global variables into a list. In line 89, we set a default value to the tk.IntVar variable that we named radVar. This is important because, while in the previous recipe we had set the value for Radiobutton widgets starting at 1, in our new loop it is much more convenient to use Python's zero-based indexing. If we did not set the default value to a value outside the range of our Radiobutton widgets, one of the radio buttons would be selected when the GUI appears. While this in itself might not be so bad, it would not trigger the callback and we would end up with a radio button selected that does not do its job (that is, change the color of the main win form). In line 95, we replace the three previously hardcoded creations of the Radiobutton widgets with a loop that does the same. It is just more concise (fewer lines of code) and much more maintainable. For example, if we want to create 100 instead of just three Radiobutton widgets, all we have to change is the number inside Python's range operator. We would not have to type or copy and paste 97 sections of duplicate code, just one number. Line 82 shows the modified callback function.
https://www.packtpub.com/product/python-gui-programming-cookbook-second-edition/9781787129450
CC-MAIN-2022-27
en
refinedweb
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). Hi, in picture object Position gui unit is real, Rotation gui unit is Degree , User data gui unit is Percent. how to know gui unit? and how to directly obtain the value displayed on the panel(Data = 0.1745, want to directly get 17.453)? Thanks for any help! Hello @chuanzhen, thank you for reaching out to us. I will answer in bullet points, since there are sort of two questions within you question. BaseContainer c4d.DESC_UNIT Cheers, Ferdinand The example code: """Attempts to evaluate the unit type of the first user data parameter of the currently selected object. Must be run in the Script Manger while having an object selected with at least one user data parameter. When the parameter has one of the unit types float, percent, degree or meter (there are other unit types), it will print out a matching message. If not, it will report that there is "another" unit type or no unit type when the parameter has no unit type set at all. """ import c4d def main() -> None: """Attempts to evaluate the unit type of the first user data parameter of the currently selected object. """ # op is a predefined module attribute in scripts referencing the currently # selected object. if not isinstance(op, c4d.BaseObject): raise RuntimeError("Please select an object.") # The id of the parameter we are interested in, the first user data # parameter of a node. pid = (c4d.ID_USERDATA, 1) # Get the description of the node. description = op.GetDescription(c4d.DESCFLAGS_DESC_NONE) # Get the first user data element in the description. parameterData = description.GetParameter(pid) # Bail when the container is empty, i.e., there is no such parameter in # the node. if len(parameterData) < 1: raise RuntimeError(f"Retrieved empty data container for {pid}.") # How to iterate over the flags of that parameter container. # for cid, value in data: # print (cid, value) # Get the unit type flag for the element and bail if there is no value # set for that flag. unitType = parameterData[c4d.DESC_UNIT] if unitType is None: print (f"The parameter with the id {pid} has no unit type.") return # Do some stuff depending on the unit type. if unitType == c4d.DESC_UNIT_FLOAT: print (f"The unit type for {pid} is float.") elif unitType == c4d.DESC_UNIT_PERCENT: print (f"The unit type for {pid} is percent.") elif unitType == c4d.DESC_UNIT_DEGREE: print (f"The unit type for {pid} is degree.") elif unitType == c4d.DESC_UNIT_METER: print (f"The unit type for {pid} is meter, i.e., length.") else: print (f"The unit type for {pid} is another unit type.") if __name__ == '__main__': main() @ferdinand Thanks,great!
https://plugincafe.maxon.net/topic/13653/how-to-get-data-unit/1
CC-MAIN-2022-27
en
refinedweb
How to monitor panics with Raygun for GolangPosted Apr 8, 2015 | 4 min. (652 words) information you need to solve them. Get valuable awareness of how well your published/deployed applications are really doing now with Raygun for Golang. Basic usage Install This tutorial assumes you know the basics of Golang. To keep things simple, I’m trying out Raygun4Go in the hello world test program described here. To get started, install the Raygun4Go package using go get in a terminal: go get github.com/MindscapeHQ/raygun4go To ensure full panic coverage in your Golang program, we recommend setting up Raygun4Go to be one of the first steps in your main method (or in webservers, your request handling method). Open the .go file containing the main method (in my case, hello.go), and then add an import for Raygun4Go. import "github.com/mindscapehq/raygun4go" Set up Now in the main method, create a raygun client object by calling the New method which requires an application name, and your Raygun API key that you want to use for this app. The application name can be whatever you want, and the API key is shown to you when you create a new Application in your Raygun account, or can be viewed in your application settings. raygun, err := raygun4go.New("Hello", "YOUR_APP_API_KEY") In the spirit of good error handling, lets also check that the raygun client creation didn’t cause an error, and print it to the console if it did. if err != nil { fmt.Println("Unable to create Raygun client: ", err.Error()) } The last step required to set up Raygun4Go is to defer to the HandleError method. defer raygun.HandleError() Try it out To test this out, call panic any where after the defer, then run the program to see the message and stack trace show up in your Raygun dashboard: panic("Panic for no reason at all") Manually sending messages The raygun client object also has a CreateError method for manually sending an error message to your Raygun dashboard. This is great for reporting that something has gone wrong, but a panic doesn’t need to be invoked – for example you may want to report errors from within an error handler, or send a message when a bad status code is returned from a web request. The error report sent to Raygun will even include the stack trace. raygun.CreateError("No need to panic") Features The raygun client object returned from the New method has several chainable features as follows: - Silent(bool) If set to true, errors will be printed to the console instead of being sent to Raygun. - Version(string) Sets the program version number to be sent with each error report to Raygun. - Request(*http.Request) Provides information about any http request responsible for the issue. - User(string) Sets the user identifier, e.g. a name, email or database id. - Tags([]string) Sets a list of tags which can help you filter errors in your Raygun dashboard. - CustomData(interface{}) Sets additional information that can help you debug the problem. (Must work with json.Marshal). Here is an example of using a few of these features: raygun.Version("1.0.0").Tags([]string{"Urgent", "Critical", "Fix it now!"}).User("Robbie Robot") And here is the full listing of my hello.go program: package main import "fmt" import "github.com/mindscapehq/raygun4go" func main() { raygun, err := raygun4go.New("Hello", "YOUR_APP_API_KEY") if err != nil { fmt.Println("Unable to create Raygun client: ", err.Error()) } defer raygun.HandleError() raygun.Version("1.0.0").Tags([]string{"Urgent", "Critical", "Fix it now!"}).User("Robbie Robot") raygun.CreateError("No need to panic") // Manually send an error message to Raygun sayHello() } func sayHello() { fmt.Println("Hello, world.") panic("Panic for no reason at all") // panic will be sent to Raygun } Try it yourself If you want awareness of the issues occurring in your go programs, try Raygun4Go using the simple steps above. If you don’t have an account yet, start a free Raygun trial here, no credit card required.
https://raygun.com/blog/how-to-monitor-panics-with-raygun-for-golang/
CC-MAIN-2022-27
en
refinedweb
Units¶ Like many PIC codes, Smilei handles only dimension-less variables, normalized to reference quantities. Basic reference quantities¶ The speed of light, the elementary charge and the electron mass provide the basis of the normalizations in Smilei: Reference electric charge \(Q_r = e\) (the elementary charge) Reference mass \(M_r = m_e\) (the electron mass) Reference velocity \(V_r = c\) (the speed of light) We can derive from these: a reference energy \(K_r = m_e c^2\) a reference momentum \(P_r = m_e c\) Even with these normalizations, Smilei does not know the scale of the problem: it lacks a reference distance, or equivalently, a reference time. Arbitrary reference quantities¶ Instead of choosing a physical constant (for example, the electron radius) as a reference, the scale of the problem is not decided a priori, and the user is free to scale the result of the simulation to any value. In fact, quantities are proportional an unknown reference frequency \(\omega_r\), which can be scaled by the user a posteriori. Usually, \(\omega_r\) will be an important frequency of the problem. For example, if there is a laser, it could be the laser frequency. Or it could be the electron plasma frequency. From this reference frequency \(\omega_r\), we define: a reference time \(T_r = 1/\omega_r\) a reference length \(L_r = c/\omega_r\) a reference electric field \(E_r = m_e c \omega_r / e\) a reference magnetic field \(B_r = m_e \omega_r / e\) a reference particle density \(N_r = \varepsilon_0 m_e \omega_r^2 /e^2\) a reference current \(J_r = c\, e\, N_r\) Warning \(1/N_r\) is a volume, but counter-intuitively, it is not equal to \(L_r^{3}\). Normalizing all quantities to these references is convenient for resolving Maxwell’s equations, and the charges equation of motion, as it converts them into a dimension-less set of equations: where \(\mathbf{E}\), \(\mathbf{B}\), \(\mathbf{j}\) and \(\mathbf{\rho}\) are the electric field, magnetic field, current density and charge density, normalized to \(E_r\), \(B_r\), \(J_r\) and \(Q_r N_r\), respectively. \(Z\) and \(\mathbf p\) are a particle’s charge and momentum, normalized to \(Q_r\) and \(P_r\), respectively. Note that the temporal and spatial derivatives are also normalized to \(T_r\) and \(L_r\), respectively. Tips for the namelist¶ In the namelist, the user must provide all parameters in units of \(Q_r\), \(M_r\), \(V_r\), \(K_r\), \(P_r\), \(T_r\), \(L_r\), \(E_r\), \(B_r\), \(N_r\) or \(J_r\). This may be cumbersome if you know your input data in other units. However, the namelist is actually a python code that can compute conversions easily. For example, let us assume that you know your problem size in units of the wavelength. Knowing that the reference wavelength is \(2\pi L_r\), you can multiply all your lengths by \(2\pi\): from math import pi wavelength = 2. * pi cell_length = [0.05 * wavelength] grid_length = [100. * wavelength] Problems requiring explicit units¶ Sometimes, Smilei may be requested to compute other things than Maxwell’s equations. That is the case, for example, for computing collisions or ionization. In these situations, equations cannot be normalized to dimension-less terms, and the code must know the value of \(\omega_r\) in physical units. This requires defining an extra parameter in the namelist. For instance, reference_angular_frequency_SI = 2.*pi*3e8/1e-6 means that \(L_r = 1\,\mathrm{\mu m} /(2\pi)\). This information will be used only in some specific parts of the code (collisions, ionization, …) but not in the main PIC algorithms. Warning The outputs of the code are not converted to SI. They are all kept in the reference units listed above. Quantities integrated over the grid¶ Special care must be taken when considering local quantities that are spatially integrated. 1. The spatially-integrated kinetic energy density The particle kinetic energy density is naturally in units of \(K_r N_r\). Integrating over space give different results depending on the simulation dimension. In 1D, this space is a length, with units \(L_r\); in 2D, it is a surface, with units \(L_r^2\); and in 3D, it is a volume, with units \(L_r^3\). Overall, the integrated energy has the units \(K_r N_r L_r^D\) where \(D\) is the simulation dimension. Note that we could expect to obtain, in 3D, an energy with units \(K_r\), but counter-intuitively it has the units \(K_r N_r L_r^3\). These kinetic energies appear, for instance, in the Scalar diagnostics as Ukin (and associated quantities). 2. The spatially-integrated electromagnetic energy density The electromagnetic energy density has the units \(E_r^2/\varepsilon_0 = K_r N_r\). Consequently, the spatially-integrated electromagnetic energy density has the units \(K_r N_r L_r^D\); the same as the integrated kinetic energy density above. These electromagnetic energies appear, for instance, in the Scalar diagnostics as Uelm (and associated quantities). 3. The space- & time-integrated Poynting flux The Poynting flux has the units \(E_r B_r / \mu_0 = V_r K_r N_r\). Consequently, the flux integrated over a boundary, and over time, has the units \(V_r K_r N_r L_r^{D-1} T_r = K_r N_r L_r^D\), which is the same as the integrated energy densities above. This integrated Poynting flux appears, for instance, in the Scalar diagnostics as Uelm_bnd, PoyXmin, PoyXminInst (and associated quantities). Macro-particle weights¶ Macro-particles are assigned a statistical weight which measures their contribution to the plasma distribution function. In Smilei, this weight is defined for each particle at the moment of its creation (usually at the beginning of the simulation), and is never modified afterwards. Its definition reads: As the density is in units of \(N_r\) and the cell hypervolume in units of \(L_r^D\) (where \(D\) is the simulation dimension), then the units of weights is \(N_r L_r^D\). This definition of weights ensures that they do not depend on the cell hypervolume, i.e. they can be reused in another simulation, as long as \(D\), \(L_r\) and \(N_r\) are unchanged.
https://smileipic.github.io/Smilei/units.html
CC-MAIN-2022-27
en
refinedweb
What is scala and what are its benefits Scala is an open source programming language designed for high productivity and a rapid development cycle. It has integration capabilities with many popular object-oriented languages, like Java and C/C++, and it is cross platform. Scala was introduced by Martin Odersky in 2003. It was designed from the ground up to have a very flexible type system and is able to run on the JVM (Java Virtual Machine). Scala’s key advantages over Java include its type traits, functional programming constructs, and code generators. Scalability means that it can be applied to a wide variety of projects across many fields. Scala is best understood as an advanced object-oriented language similar in many ways to Java. What is notable about Scala is its flexibility with programming paradigms. Scala’s functional programming capabilities allow it to handle tasks that would normally be handled with procedural or OOP programming. In addition, it performs slightly faster than Java, and allows the programmer to use powerful constructs like currying and type inference. Scala makes heavy use of traits that define an interface that defines a set of functions along with a type signature. Traits can be combined to create a class, and like Java and C#, traits can be inherited. Unlike Java and C#, Scala supports multiple inheritance. Scala’s type inference allows the programmer to forego much of the verbosity of Java in favor of cleaner code. How to get started with scala programming Scala is most easily understandable if you are familiar with Java. The learning curve for Scala may be a bit steeper than that of some other programming languages, but it can still be relatively easy to pick up and use. The main thing to remember is, “Don’t try to learn everything at once; learn one thing at a time.” Scala code is compiled into Java byte code and consequently runs on the same virtual machine as Java. It also has a very similar syntax to Java, and can interoperate with Java libraries. Scala is a general purpose programming language with strong support for functional programming and object-oriented programming. It combines the power of modern compile-time static typing with the elegance of a dynamically typed, reflective smaller language. Scala was designed to be compatible with Java as much as possible, but also provide improved capabilities over Java (such as immutable variables). Scala values thread safety and type safety over runtime speed (which can be gained by static typing), which suits it well to use in parallel or distributed applications. The best way to learn Scala is to start using Scala. In this guide, we will try to provide information about how to start doing that. This guide assumes that you have an installation of the latest version of the Java SDK and an installation of Scala (1.0 at the time of writing), and that you understand the basics about how to use these languages. ## Examples of code written in scala The following code shows a simple Scala program which writes Hello World to the console. This very basic example illustrates the syntax of Scala code, and can be typed directly into any Scala console: // Create a Scala object (“class”) with a single field: package HelloWorld ; object ConsoleApp { def main ( args : Array [ String ]){ println( “Hello, World!” ); } } // Compile and run the program, supplying the arguments to the main() method, if any: scala -cp “HelloWorld.jar” HelloWorld.ConsoleApp “Hello” “World” The following example uses scala.xml to parse some XML: import java.io. * import scala.xml.* object XmlParser extends App { val input = new BufferedReader ( new InputStreamReader ( System.in, “UTF-8” )) def xelem ( elem : XML ) = elem. asInstanceOf [ Element ] val doc : Document = new Document () doc. add ( xelem ( “ Scala libraries and tools that you should know about The Scala Console is a general purpose tool for Scala development. It allows you to interact with your code from the command line and it is recommended that programmers get familiar with this tool because it is used to invoke Scala programs from the command line. Scala is a good language for doing quick prototyping because it compiles quickly and has interactive consoles, which allow you to do incremental development. The Scala REPL(Read-Eval-Print-Loop) Console is an interactive environment to interact with Scala code. It is used for executing declarations and expressions interactively. It can be used to gain familiarity with Scala or for experimenting with code snippets. The command scala can be used the same way java is used in the shell for running programs in the standard JVM (Java Virtual Machine). The Scala Compiler is a set of tools for compiling Scala code into Java byte-code that can be run on the JVM (Java Virtual Machine). Scala programs are compiled in a similar manner as Java programs. The tool is recommended for advanced users who need to compile their programs as Java byte-code. How to run scalac and the REPL Scala is a language that is used to run commands through the command line and it can be used with a local or remote console: The Scala console is a good place to experiment with new code snippets without having to make an entire application. If you put a line of code into the console and press ENTER, the result should display below the input line. The Scala compiler is used to convert Scala scripts into Java bytecode that can run on any JVM (Java Virtual Machine). To run scalac on the REPL, use this command: scala -i hello.scala You should see a message saying: Compiling 1 Scala source to /home/
https://cecileparkmedia.com/scala-for-programmers-a-comprehensive-guide/
CC-MAIN-2022-27
en
refinedweb
Binding Items to Components You often have lists of items that you want to display in your application, and may want to let the user select one or more of them. You can use basic components such as HTML elements to display such lists. Alternatively, you can components specially designed for the purpose, such as Grid, ComboBox, and ListBox. // Create a listing component for a bean type Grid<Person> grid = new Grid<>(Person.class); // Sets items using vararg beans grid.setItems( new Person("George Washington", 1732), new Person("John Adams", 1735), new Person("Thomas Jefferson", 1743), new Person("James Madison", 1751) ); All listing components in Vaadin have a number of overloaded setItems() methods to define the items to display. Items can be basic objects, such as strings or numbers, or plain old Java objects (POJOs), such as data transfer objects (DTOs) or Java Persistence API (JPA) entities, from your domain model. The easiest way is to provide a List of objects to be shown in such a component. If there are many items, requiring a lot of memory, Grid and ComboBox allow lazy data binding using callbacks to fetch only the required set of items from the back end. Configuring How Items Are Displayed Component-specific APIs allow you to adjust how items are displayed. By default, listing components use the toString() method to display items. If this is not suitable, you can change the behavior by configuring the component. Listing components have one or more callbacks that define how to display the items. For example, consider the ComboBox component that lists status items. You can configure it to use Status::getLabel() method to get a label for each status item. ComboBox<Status> comboBox = new ComboBox<>(); comboBox.setItemLabelGenerator(Status::getLabel); In a Grid, you can use addColumn() to define the columns and configure the getter that returns the content for the column. The setHeader() method sets the column header. // A bean with some fields final class Person implements Serializable { private String name; private String email; private String title; private int yearOfBirth; public Person(String name, int yearOfBirth) { this.name = name; this.yearOfBirth = yearOfBirth; } public String getName() { return name; } public int getYearOfBirth() { return yearOfBirth; } public String getTitle() { return title; } // other getters and setters } // Show such beans in a Grid Grid<Person> grid = new Grid<>(); grid.addColumn(Person::getName) .setHeader("Name"); grid.addColumn(Person::getYearOfBirth) .setHeader("Year of birth"); It is also possible to set the Grid columns to display by property name. For this, you need to get the column objects in order to configure the headers. Grid<Person> grid = new Grid<>(Person.class); grid.setColumns("name", "email", "title"); grid.getColumnByKey("name").setHeader("Name"); grid.getColumnByKey("email").setHeader("Email"); grid.getColumnByKey("title").setHeader("Title"); Check the component examples for more details on configuring the display of listed data. Assigning a List or Array of In-Memory Data The easiest way to pass data to the listing component is to use an array or List. You can create these easily yourself or pass values directly from your service layer. Example: Passing in-memory data to components using the setItems() method. // Sets items as a collection List<Person> persons = getPersonService().findAll(); comboBox.setItems(persons); // Sets items using vararg beans grid.setItems( new Person("George Washington", 1732), new Person("John Adams", 1735), new Person("Thomas Jefferson", 1743), new Person("James Madison", 1751) ); // Pass all Person objects to a grid from a Spring Data repository object grid.setItems(personRepository.findAll()); Lazy Data Binding Using Callbacks Using callback methods is a more advanced way to bind data to components. This way, only the required portion of the data is loaded from your back end to the server memory. This approach is harder to implement and provides fewer features out of the box, but can save a lot of resources on the back end and the UI server. The component passes a query object as a parameter to your callback methods, where you can check what part of the data set is needed. Currently, only Grid and ComboBox properly support lazy data binding. For example, to bind data lazily to a Grid: grid.setItems(query -> 1 getPersonService() 2 .fetchPersons(query.getOffset(), query.getLimit()) 3 .stream() 4 ); To create a lazy binding, use an overloaded version of the setItems()method that uses a callback instead of passing data directly to the component. Typically, you call your service layer from the callback, as is done here. Use the query object’s parameters to limit the data you pass from the back end to the component. The callbacks return the data as a Stream. In this example, the back end returns a List, so we need to convert it to a Stream. The example above works well with JDBC back ends, where you can request a set of rows from a given index. Vaadin executes your data binding call in paged manner, so it is possible to bind also to "paging back ends", such as Spring Data-based solutions. For example, to do lazy data binding from a Spring Data Repository to Grid: grid.setItems(query -> { return repository.findAll( 1 PageRequest.of(query.getPage(), 2 query.getPageSize()) 3 ).stream(); 4 }); Call a Spring Data repository to obtain the requested result set. The query object contains a shorthand for a zero-based page index. The query object also contains page size. Return a stream of items from the Spring Data Pageobject. Sorting with Lazy Data Binding For efficient lazy data binding, sorting needs to have already been done at the back end. By default, Grid makes all columns appear sortable in the UI. You need to manually declare which columns are actually sortable. Otherwise, the UI may indicate that some columns are sortable, but nothing happens if you try to sort them. With lazy data binding, you need to pass the hints that Grid provides in the Query object to your back-end logic. For example, to enable sortable lazy data binding to a Spring Data repository: public void bindWithSorting() { Grid<Person> grid = new Grid<>(Person.class); grid.setSortableColumns("name", "email"); 1 grid.addColumn(person -> person.getTitle()) .setHeader("Title") .setKey("title").setSortable(true); 2 grid.setItems(VaadinSpringDataHelpers.fromPagingRepository(repo)); 3 } If you are using property-name-based column definition, Gridcolumns can be made sortable by their property names. The setSortableColumns()method makes columns with given identifiers sortable and all others non-sortable. Alternatively, define a key to your columns, which will be passed to the callback, and define the column to be sortable. In the callback, you need to convert the Vaadin-specific sort information to whatever your back end understands. In this example, we are using Spring Data and a Vaadin Spring Data utility method to convert the values. This utility method also passes the sort information to our back-end call and returns the constructed callback. If you are using DTOs or otherwise want to customize binding to a Spring Data-based back end, the VaadinSpringDataHelpersclass also contains toSpringPageRequest()and toSpringDataSort()methods to convert Vaadin query hints to their corresponding Spring Data relatives. Filtering with Lazy Data Binding Note that, for the lazy data to be efficient, filtering needs to be done at the back end. For instance, if you provide a text field to limit the results shown in a Grid, you need to make your callbacks handle the filter. For example, to handle filterable lazy data binding to a Spring Data repository in Grid: public void initFiltering() { filterTextField.setValueChangeMode(ValueChangeMode.LAZY); 1 filterTextField.addValueChangeListener(e -> listPersonsFilteredByName(e.getValue())); 2 } private void listPersonsFilteredByName(String filterString) { String likeFilter = "%" + filterString + "%";3 grid.setItems(q -> repo .findByNameLikeIgnoreCase( likeFilter, 4 PageRequest.of(q.getPage(), q.getPageSize())) .stream()); } The lazy data binding mode is optimal for filtering purposes. Queries to the back end are only done when a user makes a small pause while typing. When a value-change event occurs, you should reset the data binding to use the new filter. The example back end uses SQL behind the scenes, so the filter string is wrapped in %characters to match anywhere in the text. Pass the filter to your back end in the binding. You can combine both filtering and sorting in your data binding callbacks. Consider a ComboBox as an another example of lazy-loaded data filtering. The lazy-loaded binding in ComboBox is always filtered by the string typed in by the user. Initially, when there is no filter input yet, the filter is an empty string. The ComboBox examples below use the new data API available since Vaadin 18, where the item count query is not needed in order to fetch items. For example, you can handle filterable lazy data binding to a Spring Data repository as follows: ComboBox<Person> cb = new ComboBox<>(); cb.setItems( query -> repo.findByNameLikeIgnoreCase( // Add `%` marks to filter for an SQL "LIKE" query "%" + query.getFilter().orElse("") + "%", PageRequest.of(query.getPage(), query.getPageSize())) .stream() ); The above example uses a fetch callback to lazy-load items, and the ComboBox will fetch more items as the user scrolls the dropdown, until there are no more items returned. If you want to have the dropdown’s scrollbar reflect the exact number of items matching the filter, an optional item count callback can be used, as shown in the following example: cb.setItems( query -> repo.findByNameLikeIgnoreCase( "%" + query.getFilter().orElse("") + "%", PageRequest.of(query.getPage(), query.getPageSize())) .stream(), query -> (int) repo.countByNameLikeIgnoreCase( "%" + query.getFilter().orElse("") + "%")); If you want to filter items with a type other than a string, you can provide a filter converter with the fetch callback to get the right type of filter for the fetch query: ComboBox<Person> cb = new ComboBox<>(); cb.setPattern("\\d+"); cb.setPreventInvalidInput(true); cb.setItemsWithFilterConverter( query -> getPersonService() .fetchPersonsByAge(query.getFilter().orElse(null), 1 query.getOffset(), query.getLimit()) .stream(), textFilter -> textFilter.isEmpty() ? null 2 : Integer.parseInt(textFilter)); Queryobject contains the filter of type returned by given converter. The second callback is used to convert the filter from the combo box text on the client side into an appropriate value for the back end. Improving Scrolling Behavior With simple lazy data binding, the component does not know how many items are actually available. When a user scrolls to the end of the scrollable area, Grid polls your callbacks for more items. If new items are found, these are added to the component. This causes the relative scrollbar to behave in a strange way as new items are added on the fly. The usability can be improved by providing an estimate of the actual number of items in the binding code. The adjustment happens through a DataView instance, which is returned by the setItems() method. For example, to configure the estimate of rows and how the "virtual row count" is adjusted when the user scrolls down: GridLazyDataView<Person> dataView = grid.setItems(query -> { 1 return getPersonService() .fetchPersons(query.getOffset(), query.getLimit()) .stream(); }); dataView.setItemCountEstimate(1000); 2 dataView.setItemCountEstimateIncrease(500); 3 When assigning the callback, a data view object is returned. This can be configured directly or saved for later adjustments. If you have a rough estimate of rows, passing this to the component improves the user experience. For example, users can scroll directly to the end of the result set. You can also configure how Gridadjusts its estimate of available rows. With this configuration, if the back end returns an item for index 1000, the scrollbar is adjusted as if there were 1,500 items in the Grid. A count callback has to be provided in order to get a similar user experience to that of assigning data directly. Note that in many back ends, counting the number of results can be an intensive operation. dataView.setItemCountCallback(q -> getPersonService().getPersonCount()); Accessing Currently Shown Items You may need to get a handle to all items shown in a listing component. For example, add-ons or generic helpers might want to do something with the data that is currently listed in the component. For such a purposes, the supertype of data views can be accessed with the getGenericDataView() method. For example, you can export persons listed in a Grid to a CSV file as follows: private void exportToCsvFile(Grid<Person> grid) throws FileNotFoundException, IOException { GridDataView<Person> dataView = grid.getGenericDataView(); FileOutputStream fout = new FileOutputStream(new File("/tmp/export.csv")); dataView.getItems().forEach(person -> { try { fout.write((person.getFullName() + ", " + person.getEmail() +"\n").getBytes()); } catch (IOException ex) { throw new RuntimeException(ex); } }); fout.close(); } If you have assigned your items as in-memory data, you have more methods available in a list data view object. You can get the reference to that as a return value of the setItems() method or through the getListDataView() method. It is then possible to get the next or previous item to a certain item. Of course, this can be done by saving the original data structure, but this way you can implement a generic UI logic without dependencies on the assigned data. For example, you can programmatically select the next item in a Grid, if a current value is selected and there is a next item after it. List<Person> allPersons = repo.findAll(); GridListDataView<Person> gridDataView = grid.setItems(allPersons); Button selectNext = new Button("Next", e -> { grid.asSingleSelect().getOptionalValue().ifPresent(p -> { gridDataView.getNextItem(p).ifPresent( next -> grid.select(next) ); }); }); Updating the Displayed Data A typical scenario in Vaadin apps is that data displayed in, for example, a Grid component, is edited elsewhere in the application. Editing the item elsewhere does not automatically update the UI in a listing component. An easy way to refresh the component’s content is to call setItems() again with the fresh data. Alternatively, you can use finer-grained APIs in the DataView to update just a portion of the dataset. For example, you can modify one or more fields of a displayed item and notify Grid about the updates to the item through DataView::refreshItem(). This would modify only one specific item, not the whole data set. Person person = new Person(); person.setName("Jorma"); person.setEmail("old@gmail.com"); GridListDataView<Person> gridDataView = grid.setItems(person); Button modify = new Button("Modify data", e -> { person.setEmail("new@gmail.com"); // The component shows the old email until notified of changes gridDataView.refreshItem(person); }); Alternatively, if you have bound a mutable List to your component, you can use helper methods in the list data view to add or remove items. You can also obtain an item count by hooking to the item count change event or request the item count directly. For example, it is possible to use a mutation method and listen for an item count change through the list data view, as follows: // The initial data ArrayList<String> items = new ArrayList<>(Arrays.asList("foo", "bar")); // Get the data view when binding it to a component Select<String> select = new Select<>(); SelectListDataView<String> dataView = select.setItems(items); TextField newItemField = new TextField("Add new item"); Button addNewItem = new Button("Add", e -> { // Adding through the data view API mutates the data source dataView.addItem(newItemField.getValue()); }); Button remove = new Button("Remove selected", e-> { // Same for removal dataView.removeItem(select.getValue()); }); // Hook to item count change event dataView.addItemCountChangeListener(e -> Notification.show(" " + e.getItemCount() + " items available")); // Request the item count directly Span itemCountSpan = new Span("Total Item Count: " + dataView.getItemCount()); Sorting of In-memory Data Let us consider the Grid as an example of a component with a sorting API. Grid rows are automatically sortable by columns that have a property type that implements Comparable. By defining a custom Comparator, you can also make other columns sortable. Alternatively, you can override the default behavior of columns with comparable types. For example, to make the sorting of string-typed columns case-insensitive: grid.addColumn(Person::getName) .setHeader("Name") // Override the default sorting .setComparator(Comparator.comparing(person -> person.getName().toLowerCase())); Note that this kind of sorting is only supported for in-memory data. See Sorting with Lazy Data Binding for how to sort lazy-loaded data. It is possible to sort a collection of bound items with the DataView API, either by setting a Comparator or a sort order for a given bean field. Sort orders or Comparator instances can be added or removed, as well. For example, you can define custom sorting through the DataView API as follows: // You get a DataView when setting the items GridListDataView<Person> dataView = grid .setItems(personRepository.findAll()); // Change the sort order of items collection dataView.setSortOrder(Person::getName, SortDirection.ASCENDING); // Add a secondary sort order to the existing sort order dataView.addSortOrder(Person::getTitle, SortDirection.ASCENDING); // Remove sorting completely (undoes the settings done above) dataView.removeSorting(); Filtering In-Memory Data If you are using an in-memory data set, you can also apply filters through the data view object. The filtered list is automatically updated to the UI. For example, you can use a list data view to filter items based on a property as follows: List<Person> allPersons = repo.findAll(); GridListDataView<Person> gridDataView = grid.setItems(allPersons); // Filter Persons younger 20 years gridDataView.setFilter(p -> p.getAge() < 20); // Remove filters completely (undoes the settings done above) gridDataView.removeFilters(); Recycling Data Binding Logic In large applications, you typically have multiple places where you display the same data type in a listing component. You can use various approaches to share the lazy data binding logic. One way is to use a domain-object-specific component implementation by extending a listing component to handle the application-specific data binding. This approach also allows you to share other common configuration aspects. @SpringComponent @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE) public class PersonGrid extends Grid<Person> { public PersonGrid(@Autowired PersonRepository repo) { super(Person.class); // Make the lazy binding setItems(q -> repo.findAll( PageRequest.of(q.getPage(), q.getPageSize())).stream()); // Make other common/default configuration setColumns("name", "email"); } } You can also use a static helper method to bind the data as follows: public static void listItems(Grid<Person> grid, PersonRepository repository) { grid.setItems(query -> repository.findAll( PageRequest.of(query.getPage(), query.getPageSize())).stream()); } You can create a separate data provider class. The following example uses only the FetchCallBack, but you can also implement a full data provider by, for example, extending AbstractbackendDataProvider. @SpringComponent public class PersonDataProvider implements CallbackDataProvider.FetchCallback<Person, Void> { @Autowired PersonRepository repo; @Override public Stream<Person> fetch(Query<Person, Void> query) { return repo.findAll(PageRequest.of(query.getPage(), query.getPageSize())).stream(); } } personGrid.setItems(dataProvider);
https://vaadin.com/docs/latest/binding-data/data-provider
CC-MAIN-2022-27
en
refinedweb
I’ve recently had the opportunity to brush off my SSIS skills and revisit this toolset. In my most recent usage, I had a requirement to use SSIS to pull data from a WCF web service that was a) using the net.tcp protocol, and b) used transport security with a client X.509 certificate for authentication. This was fun enough by itself. Configuring WCF tend typcially to be non-trival even when you don’t have to tweak app.config files for SQL SSIS services. One of my goals, in fact, was to avoid having to update that, meaning I had to put code in my SSIS Script block in the data flow to configure my channel & security & such. Luckily, I was able to find examples of doing this with wsHttpBinding’s, so it wasn’t a stretch to tweak it for netTcpBinding with the required changes to support certificate authenticated transport security. Here’s the code… using System;usingSystem.Data;usingMicrosoft.SqlServer.Dts.Pipeline.Wrapper;usingMicrosoft.SqlServer.Dts.Runtime.Wrapper;usingSystem.ServiceModel;usingSC_13defb16ae45414dbac17137434aeca0.csproj.PaymentSrv;[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]public class ScriptMain : UserComponent{ChannelFactory<IProfile> channelFactory;IProfileclient;public override voidPreExecute(){base.PreExecute(); boolfireAgain = false;this.ComponentMetaData.FireInformation(0, “Pull From Profile Service.PreExecute”, “Service URI: ‘” + this.Variables.varProfileServiceUrl + “‘”, null, 0, ref fireAgain);this.ComponentMetaData.FireInformation(0, “Pull From Profile Service.PreExecute”, “Cert Fingerprint: ‘” + this.Variables.varClientCertFingerprint + “‘”, null, 0, ref fireAgain);//create the bindingNetTcpBindingbinding = new NetTcpBinding();binding.Security.Mode = SecurityMode.Transport;binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Certificate;binding.Security.Transport.ProtectionLevel = System.Net.Security.ProtectionLevel.EncryptAndSign;EndpointAddressendpointAddress = new EndpointAddress(this.Variables.varPaymentServiceUrl);channelFactory = new ChannelFactory<IProfile>(binding, endpointAddress);channelFactory.Credentials.ClientCertificate.SetCertificate(System.Security.Cryptography.X509Certificates.StoreLocation.LocalMachine,System.Security.Cryptography.X509Certificates.StoreName.My,System.Security.Cryptography.X509Certificates.X509FindType.FindByThumbprint,this.Variables.varClientCertFingerprint);//” x8 60 66 09 t6 10 60 2d 99 d6 51 f7 5c 3b 25 bt 2e 62 32 79″);channelFactory.Credentials.ServiceCertificate.Authentication.CertificateValidationMode =System.ServiceModel.Security.X509CertificateValidationMode.PeerTrust;//create the channelclient = channelFactory.CreateChannel();IClientChannel channel = (IClientChannel)client;channel.Open();this.ComponentMetaData.FireInformation(0, “Pull From Profile Service.PreExecute”, “Open Succeeded.”, null, 0, reffireAgain);}public override voidPostExecute(){base.PostExecute();//close the channelIClientChannelchannel = (IClientChannel)client;channel.Close();//close the ChannelFactorychannelFactory.Close();}public override voidInput0_ProcessInputRow(Input0Buffer Row){GuidtxGuid = Guid.NewGuid();Profileprofile = null;try{profile = client.getProfile(txGuid, Row.ProfileId);Row.PSProfileType = GetProfileType(profile);}catch (Exception ex){stringmessage = ex.Message();Log(message, 0, null);}}private string GetProfileType(Profileprofile){return “x”;}} So one of the challenges I encountered while using this method had to do with the client certificate. This error drove me nuts: The credentials supplied to the package were not recognized. Server stack trace: at System.Net.SSPIWrapper.AcquireCredentialsHandle(SSPIInterface SecModule, String package, CredentialUse intent, SecureCredential scc) at System.Net.Security.SecureChannel.AcquireCredentialsHandle(CredentialUse credUsage, SecureCredential& secureCredential) at System.Net.Security.SecureChannel.AcquireClientCredentials(Byte[]& thumbPrint) at System.Net.Security.SecureChannel.GenerateToken(Byte[] input, Int32 offset, Int32 count, Byte[]& output) at System.Net.Security.SecureChannel.NextMessage(Byte[] incoming, Int32 offset, Int32 count), X509CertificateCollection clientCertificates, SslProtocols enabledSslProtocols, Boolean checkCertificateRevocation).CommunicationObject.Open().Open() at ScriptMain.PreExecute() at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.PreExecute() If you look at it, this is an authentication error. Tracing the code, it happens AFTER the code successfully retrieves the client certificate from the certificate store. The call to SetServerCertificate succeeds without incident. The error hits when the code opens the channel, and tries to use the private key attached to the client certificate to prove to the server that “I’m a valid client.” I went nuts because I was an administrator on the machine, and had installed the client certificate to the certificate store myself. It initially worked, and there was no indication that there was a problem getting the certificate from the cert store. It turns out that when you use the machine store under these circumstances, I needed to give myself explicit permission to the client certificate in order for the SetServerCertificate to get the private key along with the client certificate. This was counter-intuitive for two *additional* reasons: 1) I was an administrator on the box, and already should have had this permission by the fact that my login account belonged to the administrators group (which you can see from the pic below, also had access.) 2) It worked the day before. When I imported the private key originally to the key store, it appears somewhere in the depths of Windows 7 (and this applied on Server 2008 R2 as well) I still had permission in my active session context. When I logged out, that login context died, and, coming back the next day, I logged in again, not realizing I wouldn’t be able to access the key. Giving myself explicit permission as shown below allowed me to run my SSIS package within Visual Studio and from SSMS. 2 thoughts on “Using Client Certs to Pull Data from WCF in SSIS Data Flow Transform Script” Jim, Thanks for posting this. Connecting with SSIS to web services using secure certificates is completely undocumented, anywhere. Even the WROX SSIS Pro book says to check Books Online, which is a dead end. So kudos to you. I'm working on trying to get a web service connected using a certificate now, and still waiting for my Technet subscription to go through. Yours is the only one of two articles that I've found treat this topic. Now, my question to you is, how did you get the namespace reference in your script component? You know, when one tries to Add Web Reference, the dialog that pops up requires calling the service in order to instantiate the namespace object in the project. Then you go to the object to find the GUID. Anyway, that's how I learned. The problem is that one can't actually get the service to come up without authenticating using a certificate. To be clear, I can call this service from a browser after importing the certificate. The web methods have a separate login/pwd that are required – so that could be the hangup. Any ideas? Am I right that this bit of information about how to get the namespace GUID is critical? Thanks. Hi, Brian,Unfortunately, it's been a while, but as I recall security was set up on the service metadata exchange (MEX). The MEX didn't didn't require the certificate to add the web reference. If that's not possible for you, see if you can get the service running in a different environment that doesn't have the same security issues, and then change the URL.
https://www.granitestateusersgroups.net/2012/08/17/using-client-certs-to-pull-data-from-wcf-in-ssis-data-flow-transform-script/
CC-MAIN-2022-27
en
refinedweb
26 Nov 2021 11:42 AM - last edited on 29 Nov 2021 02:26 AM by MaciejNeumann hello can anyone help me with this error? I've already imported the package, but I'm still getting this message. python import logging import cx_Oracle from ruxit.api.base_plugin import RemoteBasePlugin logger = logging.getLogger(__name__) json "install_requires": [ "cx_Oracle", "requests>=2.6.0" ], Solved! Go to Solution. you may install the module. I just did and was able to run a plugin with the requirement. pip install cx_Oracle I already installed this module, but the error persists. It is installed. I believe the problem is that the plugin is being built using a newer version. It's getting cached. To no avail the change. That wasn't the problem with the version. It is in the same version and the problem remains. even, from within the gateway I can run a test script and it connects to the bank, so it's not a problem with the package. Everything is installed. I don't understand what is missing from my script, that Dynatrace is not understanding that the module exists. { "name": "custom.remote.python.integracao_interna_cc", "version": "1.22", "type": "python", "entity": "CUSTOM_DEVICE", "metricGroup": "tech.python.integracao_interna_cc", "technologies": [ "Python" ], "favicon": "", "source": { "package": "custom_remote_python_integracao_interna_conta", "className": "IntegracaoInternaCC", "install_requires": [ "cx-Oracle", "requests>=2.6.0" ], "activation": "Remote" }, "configUI": { "displayName": "Filas Oracle AQ - Integração Interna CC" }, "ui": { "keymetrics": [ { "key": "total_lancamentos_export_jsm", "aggregation": "max", "mergeaggregation": "sum", "displayname": "Integração Interna CC - Total" }, Do I need to put some lib in my plugin directory to build together? @tibebe_m_digafe , Hi. Can you help with this? You mentioned that you ran your script and it worked. Is it possible for you to share this configuration? Thank you { "name": "custom.remote.python.oracle", "version": "1.0.0", "type": "python", "entity": "CUSTOM_DEVICE", "metricGroup": "tech.oracle", "technologies": ["PYTHON"], "install_requires": [ "cx_Oracle", "requests>=2.6.0" ], "source": { "package": "oracle_plugin", "className": "Oracle", "activation": "Remote" }, "ui" : { "keycharts" : [ ], "charts" : [ ] }, "metrics": [ ], "properties" : [ ], "configUI": { } } using Python 3.6.12 installed packages pip list Package Version ------------------ ------------------------- attrs 21.2.0 boto3 1.18.44 botocore 1.21.44 bottle 0.12.19 certifi 2021.5.30 cffi 1.14.6 charset-normalizer 2.0.6 check-tls-certs 0.12.0 click 8.0.1 colorama 0.4.4 cryptography 3.4.8 cx-Oracle 8.3.0 idna 3.2 importlib-metadata 4.8.1 jmespath 0.10.0 jsonschema 3.2.0 pip 21.2.4 plugin-sdk 1.223.105.20210824.140926 pycparser 2.20 pyOpenSSL 20.0.1 pyrsistent 0.18.0 python-dateutil 2.8.2 requests 2.26.0 s3transfer 0.5.0 setuptools 28.8.0 six 1.16.0 typing-extensions 3.10.0.2 urllib3 1.26.6 wheel 0.29.0 zipp 3.5.0 Good afternoon people!! I have the correct version (3.6), I also have all these libraries installed. Also, I installed oracle client on the server. I can connect to the database if I access python directly from the prompt, making an import in cx_Oracle. I understand that the error is in my code. import logging import cx_Oracle import os from ruxit.api.base_plugin import RemoteBasePlugin logger = logging.getLogger(__name__) class IntegracaoInternaCC(RemoteBasePlugin): def query(self, **kwargs): global con, total_lancamentos_export_jsm, state_integracao_interna_cc state_integracao_interna_cc = 0.0 status_ok = "OK" status_notok = "Bad" try: con = cx_Oracle.connect(user="", password="", dsn="") cursor = con.cursor() print("Successfully connected to Oracle Database") consulta_sql = "select count(1) from cc_qcc_lanc_export_jms s where state <> 3;" cursor.execute(consulta_sql) resultado_query = (cursor.fetchall()) for linha in resultado_query: total_lancamentos_export_jsm = (linha[0]) print("Total Registros", total_lancamentos_export_jsm) if total_lancamentos_export_jsm == 0: state_integracao_interna_cc = status_ok print("Status:", state_integracao_interna_cc) else: state_integracao_interna_cc = status_notok print("Status:", state_integracao_interna_cc) group = self.topology_builder.create_group("Filas AQ", "Filas AQ") device = group.create_element("Integração Conta Corrente", "Integração Conta Corrente") group.report_property(key="Desenvolvedor", value="Rodrigo Biaggio") group.report_property(key="Secondary technology", value="Oracle") group.report_property(key="Integrações", value="AWS") group.report_property(key="Descrição", value="Filas Oracle AQ - Conta e Standin") device.report_property(key="Integrações", value="AWS") device.report_property(key="Technology", value="Python") device.add_endpoint(ip="", port=3306, dnsNames=[""]) device.relative(key='total_lancamentos_export_jsm', value=total_lancamentos_export_jsm) group.absolute(key='total_lancamentos_export_jsm', value=total_lancamentos_export_jsm) device.state_metric(key='state_integracao_interna_cc', value=state_integracao_interna_cc) except cx_Oracle.DatabaseError as e: print("Erro ao acessar tabela", e) finally: con.close() print("Conexão ao banco encerrada") It's failing on this line: con.close() because the connection cannot be created with empty connection parameters. You can set it to None first and then check for None before trying to close the connection. What user does the plugin run as? dtuserag, dtuser? If the plugin is run with any of these users it can justify not finding the oracle lib. Hi, dtuserag. You can do a oneagent_sim with sudo -u, that sometimes points out access issues. Hi @Mike_L @tibebe_m_digafe I'm still trying to get the plugin to work. I've already run simulate_plugin and it works. The problem occurs when deploying to Dynatrace. At first, it was returning a message that the cx_Oracle module did not exist. I redid all the installation of gateway, SDK and Oracle client. Now, even though everything works fine inside the OS, when I deploy, it returns a message that the Oracle LIB was not found. I put in my code to return the Oracle environment variables and also the LD_LIBRARY_PATH and the return is none, that is, it is not finding the variable. SIMULATE_PLUGIN OK ERROR What is causing me confusion is the fact that my user has the variables, and dtuserag is not a user with bash, it doesn't login. Which user does Dynatrace use, root or the dtuserag? These variables must be defined for which users? Thank you Hi, Extensions run as dtuserag. The LD_LIBRARY_PATH needs to be present before the remote plugin module starts up. You can do this by adding Environment=LD_LIBRARY_PATH=/path_to_accessible_libraries/oracle Inside the [Service] section of the Remote Plugin Module service script. By default it is located at /etc/systemd/system/remotepluginmodule.service Mike Hi, It's the first one. Does the dtuserag have full access to that folder? Mike Yes, the user has access. Mike, I'm stopping/starting the dynatrace service, running systemctl status dynatracegateway Would that be enough for the module to load the path? I'm doing the restart and nothing happens. Is it possible for me to restart only the extension module? Thank you I did a test by changing the user from dtuserag to root. I restarted the Dynatrace service and it didn't reflect. In my code it still points out that the plugin is running with dtuserag. The user needs to be set during installation using the USER parameter. The service to restart is remotepluginmodule.service Okay Mike. The problem was solved. What you suggested was correct, I hadn't done one of the steps, which is systemctl daemon-reload after that, my plugin is OK. Thank you very much for your help in this case. No worries. You picked one of the most complicated libraries to use with an extension. It took quite some time for us to figure out how to use it correctly so don't be discouraged.
https://community.dynatrace.com/t5/Extensions/No-module-named-cx-Oracle/m-p/177147
CC-MAIN-2022-27
en
refinedweb
Building a two player Wordle clone with Python and Rich on Replit. Once you're done, you'll be able to play a command-line-based game with a friend (with both of you sitting at the same machine), as shown below. We'll be using Python, and to do the green and yellow colors we'll use Rich, a library for rich-text formatting. To follow along, you should know some basic Python, but we'll explain each code sample in depth so you should be able to keep up even if you are not familiar with Python. Getting started To get started, create a Python repl. Installing Rich Rich isn't part of the Replit Universal Installer, so we have to install it manually. Open up the "Shell" tab in the repl workspace and run the following commands: python3 -m poetry init --no-interaction python3 -m poetry add rich This will create a pyproject.toml file to define Rich as a dependency, and Replit will automatically install it for us next time we run our app. Printing colored text The first thing we need to figure out is how to print out different colored letters. By default, we'll use similar settings to the Wordle defaults - Green = correct letter in the correct position - Yellow = correct letter in the incorrect position - Gray = incorrect letter Because we're using Rich, we don't have to mess around with ANSI escape codes. It's possible to use them to style terminal text, but you end up having to deal with nasty-looking strings like \033[0;32m, and there are likely to be compatibility issues too. Rich abstracts this away for us, and we can use nicer-looking controls like '[black on green]TWORDLE[/]' to describe how the text should look. Take a look at how this works now by adding the following code to main.py and pressing "Run": import rich rich.print('[black on green]TWORDLE[/]') Because we may want to customize what specific colors mean at some point, let's define each of our three cases in functions. Replace the existing code in main.py with the following: import rich def correct_place(letter): return f'[black on green]{letter}[/]' def correct_letter(letter): return f'[black on yellow]{letter}[/]' def incorrect(letter): return f'[black on white]{letter}[/]' WELCOME_MESSAGE = correct_place("WELCOME") + " " + incorrect("TO") + " " + correct_letter("TWORDLE") + "\n" def main(): rich.print(WELCOME_MESSAGE) if __name__ == '__main__': main() Run this code, and you'll see a Wordle-styled welcome message, demonstrating all three styles, as shown below. Creating the game loop As in classic Wordle, our game will allow the player six tries to guess a word. Unlike classic Wordle, we'll allow for two players. Player 1 will choose a word, and player 2 will attempt to guess it. The basic logic is then: Get word from Player 1 Get guess from Player 2 While Player 2 has guesses remaining Get new guess If guess is correct End the game So let's ignore our fancy colored text for a moment and build this logic. Getting and guessing the word We'll use the Console class from Rich, which creates a virtual output pane on top of our actual console. This will make it easier to have more control over our output as we build out the app. Add the following two imports to the top of the main.py file: from rich.prompt import Prompt from rich.console import Console And now replace the main() function with the following code: def main(): rich.print(WELCOME_MESSAGE) allowed_guesses = 6 used_guesses = 0 console = Console() answer_word = Prompt.ask("Enter a word") console.clear() while used_guesses < allowed_guesses: used_guesses += 1 guess = Prompt.ask("Enter your guess") if guess == answer_word: break print(f"\n\nTWORDLE {used_guesses}/{allowed_guesses}\n") If you run this, you'll be prompted (as player 1) to enter a word. The entered word will then be hidden from view to avoid spoiling the game, and player 2 can enter up to six guesses. At this stage, player 2 doesn't get any feedback on correct or incorrect letters, which makes the game pretty hard for player 2! If player 2 does happen to guess correctly, the loop will break and the game will display how many guesses were used. Providing feedback on correct letters Let's add a helper function to calculate whether each letter should be green, yellow, or gray. Add this function above the main() function in main.py: def score_guess(guess, answer): scored = [] for i, letter in enumerate(guess): if answer[i] == guess[i]: scored += correct_place(letter) elif letter in answer: scored += correct_letter(letter) else: scored += incorrect(letter) return ''.join(scored) This function takes in player 2's guess and the correct answer and compares them letter by letter. It uses the helper functions we defined earlier to create the Rich formatting string for each letter, and then joins them all together into a single string. NOTE: Here we simplify how duplicate letters are handled. In classic Wordle, letters are colored based on how often they occur in the correct answer, for example, if you guess "SPEED" and the correct word is "THOSE", the second E in your guess will be colored as incorrect. In our version, it will be labeled as a correct letter in the wrong place. Handling duplicate letters is tricky, and implementing this logic correctly is left as an exercise to the reader. Call this function from inside the while loop in main() by adding the console.print line as follows: while used_guesses < allowed_guesses: used_guesses += 1 guess = Prompt.ask("Enter your guess") console.print(score_guess(guess, answer_word)) # new line if guess == answer_word: break Now player 2 has something to work on from each guess, and it should be a lot easier to guess the correct word by incrementally finding more correct letters, as shown in the example below. Adding an emoji representation for spoiler-free sharing A key part of Wordle is that once a player has guessed a word, they can share a simple graphic of how well they did, without giving away the actual word. For our two-player version, this "no spoilers" feature isn't as important, but let's add it anyway. As with the letter-coloring, we want to keep the emoji we use configurable. By default, we'll use green, yellow, and gray squares. Let's start by defining this in a dictionary, near the top of our main.py file. Add the following to your code: emojis = { 'correct_place': '🟩', 'correct_letter': '🟨', 'incorrect': '⬜' } Replace the score_guess function with the following: def score_guess(guess, answer): scored = [] emojied = [] for i, letter in enumerate(guess): if answer[i] == guess[i]: scored += correct_place(letter) emojied.append(emojis['correct_place']) elif letter in answer: scored += correct_letter(letter) emojied.append(emojis['correct_letter']) else: scored += incorrect(letter) emojied.append(emojis['incorrect']) return ''.join(scored), ''.join(emojied) The logic is very similar to before, but instead of only calculating the correct style for the letter, we also keep track of each emoji. At the end, we return both the string to print out the scored word, and the emoji representation for that guess. To use this in the main function, replace the code for the while loop with the following code: all_emojied = [] while used_guesses < allowed_guesses: used_guesses += 1 guess = Prompt.ask("Enter your guess") scored, emojied = score_guess(guess, answer_word) all_emojied.append(emojied) console.print(scored) if guess == answer_word: break print(f"\n\nTWORDLE {used_guesses}/{allowed_guesses}\n") for em in all_emojied: console.print(em) If you run again, the game will work as before, but now you'll see the emoji representation printed after the game ends. This can be copy-pasted to share and help our game go viral. You can see what it looks like in the image below. Some finishing touches The one messy part of our game remaining is that the input prompts are still shown after player 2 has entered each guess. This means that each word is shown twice: once in its colored form, and once exactly as the player entered it. Let's adapt the game to clear the console and output just the colored versions of each guess. To do this, we need to keep track of all player 2's guess, which we were not doing before. Replace the while loop in the main() function with the following code: all_emojied = [] all_scored = [] while used_guesses < allowed_guesses: used_guesses += 1 guess = Prompt.ask("Enter your guess") scored, emojied = score_guess(guess, answer_word) all_scored.append(scored) all_emojied.append(emojied) console.clear() for scored in all_scored: console.print(scored) if guess == answer_word: break This clears the console completely after each guess by player 2, and then prints out each of the (styled) guesses. The output looks neater now, as shown below. Adding instructions People will like our game more if they can figure out what to do without having to read documentation. Let's add some basic instructions for each player to the game interface. Below the WELCOME_MESSAGE variable we defined earlier, add the following: P1_INSTRUCTIONS = "Player 1: Please enter a word (player 2, look away)\n" P2_INSTRUCTIONS = "Player 2: You may start guessing\n" Now update the main() function like this: def main(): allowed_guesses = 6 used_guesses = 0 console = Console() console.print(WELCOME_MESSAGE) console.print(P1_INSTRUCTIONS) answer_word = Prompt.ask("Enter a word") console.clear() console.print(WELCOME_MESSAGE) console.print(P2_INSTRUCTIONS) all_emojied = [] all_scored = [] while used_guesses < allowed_guesses: used_guesses += 1 guess = Prompt.ask("Enter your guess") scored, emojied = score_guess(guess, answer_word) all_scored.append(scored) all_emojied.append(emojied) console.clear() console.print(WELCOME_MESSAGE) for scored in all_scored: console.print(scored) if guess == answer_word: break print(f"\n\nTWORDLE {used_guesses}/{allowed_guesses}\n") for em in all_emojied: console.print(em) Now our welcome message stays at the top, and the players are prompted by simple instructions. Have fun playing it with your friends! Where next? The basics of the game are in place, but there is still a lot you could build from here. Some ideas: - Fix the logic for handling duplicate letters. - Fix the fact that the game crashes if player 2 enters the wrong number of letters. - The game still says 6/6, even if player 2 has not guessed the word after six tries. Have the game print out X/6in this case, as in classic Wordle. - Give player 2 more guesses based on the length of the word player 1 enters. - [CHALLENGING] Make the game work over the internet instead of requiring both players to be in same room. You can find the code for this tutorial here:
https://docs.replit.com/tutorials/two-player-wordle-clone-python-rich
CC-MAIN-2022-27
en
refinedweb
In 1.6rc5, _trigger creates a custom event with the name this.widgetEventPrefix + type Is there a reason it's not type + '.' + this.widgetEventPrefix to use namespaced events ()? That would be consistent with the purpose of namespaced events, and would allow destroy (or the user) to do this.element.unbind('.'+this.widgetEventPrefix) to clean up any leftover event handlers. This would also be consistent with the actual UI widgets, all of which use things like unbind('.sortable') and .bind("mouseleave.accordion"). To be completely consistent, the widget constructor should also use .bind('setData.' + this.widgetEventPrefix) rather than .bind ('setData.' + name) but I don't think anyone uses widgetEventPrefix anyway. Just drop it! That's cool. I guess I'm thinking of namespaced events in a different way, as the way for the triggering code to control events. As you point out, you are making it the responsibility of the binding code, so trigger('event.namespace') in library code isn't really using it in the intended fashion.
https://groups.google.com/g/jquery-ui-dev/c/fGlzNDojPio
CC-MAIN-2022-27
en
refinedweb
[ad_1] The Scroll-linked Animations specification is an upcoming and experimental addition that allows us to link animation-progress to scroll-progress: as you scroll up and down a scroll container, a linked animation also advances or rewinds accordingly. We covered some use cases in a previous piece here on CSS-Tricks, all driven by the CSS @scroll-timeline at-rule and animation-timeline property the specification provides — yes, that’s correct: all those use cases were built using only HTML and CSS. No JavaScript. Apart from the CSS interface we get with the Scroll-linked Animations specification, it also describes a JavaScript interface to implement scroll-linked animations. Let’s take a look at the ScrollTimeline class and how to use it with the Web Animations API. Web Animations API: A quick recap The Web Animations API (WAAPI) has been covered here on CSS-Tricks before. As a small recap, the API lets us construct animations and control their playback with JavaScript. Take the following CSS animation, for example, where a bar sits at the top of the page, and: - animates from redto darkred, then - animates from zero width to full-width (by scaling the x-axis). Translating the CSS animation to its WAAPI counterpart, the code becomes this: new Animation( new KeyframeEffect( document.querySelector('.progressbar'), { backgroundColor: ['red', 'darkred'], transform: ['scaleX(0)', 'scaleX(1)'], }, { duration: 2500, fill: 'forwards', easing: 'linear', } ) ).play(); Or alternatively, using a shorter syntax with Element.animate(): document.querySelector('.progressbar').animate( { backgroundColor: ['red', 'darkred'], transform: ['scaleX(0)', 'scaleX(1)'], }, { duration: 2500, fill: 'forwards', easing: 'linear', } ); In those last two JavaScript examples, we can distinguish two things. First, a keyframes object that describes which properties to animate: { backgroundColor: ['red', 'darkred'], transform: ['scaleX(0)', 'scaleX(1)'], } Second is an options Object that configures the animation duration, easing, etc.: { duration: 2500, fill: 'forwards', easing: 'linear', } Creating and attaching a scroll timeline To have our animation be driven by scroll — instead of the monotonic tick of a clock — we can keep our existing WAAPI code, but need to extend it by attaching a ScrollTimeline instance to it. This ScrollTimeline class allows us to describe an AnimationTimeline whose time values are determined not by wall-clock time, but by the scrolling progress in a scroll container. It can be configured with a few options: source: The scrollable element whose scrolling triggers the activation and drives the progress of the timeline. By default, this is document.scrollingElement(i.e. the scroll container that scrolls the entire document). orientation: Determines the direction of scrolling, which triggers the activation and drives the progress of the timeline. By default, this is vertical(or blockas a logical value). scrollOffsets: These determine the effective scroll offsets, moving in the direction specified by the orientationvalue. They constitute equally-distanced in progress intervals in which the timeline is active. These options get passed into the constructor. For example: const myScrollTimeline = new ScrollTimeline({ source: document.scrollingElement, orientation: 'block', scrollOffsets: [ new CSSUnitValue(0, 'percent'), new CSSUnitValue(100, 'percent'), ], }); It’s not a coincidence that these options are exactly the same as the CSS @scroll-timeline descriptors. Both approaches let you achieve the same result with the only difference being the language you use to define them. To attach our newly-created ScrollTimeline instance to an animation, we pass it as the second argument into the Animation constructor: new Animation( new KeyframeEffect( document.querySelector('#progress'), { transform: ['scaleX(0)', 'scaleX(1)'], }, { duration: 1, fill: 'forwards' } ), myScrollTimeline ).play(); When using the Element.animate() syntax, set it as the timeline option in the options object: document.querySelector("#progress").animate( { transform: ["scaleX(0)", "scaleX(1)"] }, { duration: 1, fill: "forwards", timeline: myScrollTimeline } ); With this code in place, the animation is driven by our ScrollTimeline instance instead of the default DocumentTimeline. The current experimental implementation in Chromium uses scrollSource instead of source. That’s the reason you see both source and scrollSource in the code examples. A word on browser compatibility At the time of writing, only Chromium browsers support the ScrollTimeline class, behind a feature flag. Thankfully there’s the Scroll-Timeline Polyfill by Robert Flack that we can use to fill the unsupported gaps in all other browsers. In fact, all of the demos embedded in this article include it. The polyfill is available as a module and registers itself if no support is detected. To include it, add the following import statement to your JavaScript code: import ''; The polyfill also registers the required CSS Typed Object Model classes, should the browser not support it. (👀 Looking at you, Safari.) Advanced scroll timelines Apart from absolute offsets, scroll-linked animations can also work with element-based offsets: With this type of Scroll Offsets the animation is based on the location of an element within the scroll-container. Typically this is used to animate an element as it comes into the scrollport until it has left the scrollport; e.g. while it is intersecting. An element-based offset consists of three parts that describe it: target: The tracked DOM element. edge: This is what the ScrollTimeline’s sourcewatches for the targetto cross. threshold: A number ranging from 0.0to 1.0that indicates how much of the targetis visible in the scroll port at the edge. (You might know this from IntersectionObserver.) Here’s a visualization: If you want to know more about element-based offsets, including how they work, and examples of commonly used offsets, check out this article. Element-based offsets are also supported by the JS ScrollTimeline interface. To define one, use a regular object: { target: document.querySelector('#targetEl'), edge: 'end', threshold: 0.5, } Typically, you pass two of these objects into the scrollOffsets property. const $image = document.querySelector('#myImage'); $image.animate( { opacity: [0, 1], clipPath: ['inset(45% 20% 45% 20%)', 'inset(0% 0% 0% 0%)'], }, { duration: 1, fill: "both", timeline: new ScrollTimeline({ scrollSource: document.scrollingElement, timeRange: 1, fill: "both", scrollOffsets: [ { target: $image, edge: 'end', threshold: 0.5 }, { target: $image, edge: 'end', threshold: 1 }, ], }), } ); This code is used in the following demo below. It’s a JavaScript-remake of the effect I covered last time: as an image scrolls into the viewport, it fades-in and becomes unmasked. More examples Here are a few more examples I cooked up. Horizontal scroll section This is based on a demo by Cameron Knight, which features a horizontal scroll section. It behaves similarly, but uses ScrollTimeline instead of GSAP’s ScrollTrigger. For more on how this code works and to see a pure CSS version, please refer to this write-up. CoverFlow Remember CoverFlow from iTunes? Well, here’s a version built with ScrollTimeline: This demo does not behave 100% as expected in Chromium due to a bug. The problem is that the start and end positions are incorrectly calculated. You can find an explanation (with videos) in this Twitter thread. More information on this demo can be found in this article. CSS or JavaScript? There’s no real difference using either CSS or JavaScript for the Scroll-linked Animations, except for the language used: both use the same concepts and constructs. In the true spirit of progressive enhancement, I would grab to CSS for these kind of effects. However, as we covered earlier, support for the CSS-based implementation is fairly poor at the time of writing: Because of that poor support, you’ll certainly get further with JavaScript at this very moment. Just make sure your site can also be viewed and consumed when JavaScript is disabled. 😉 [ad_2] Source link
https://expskill.com/scroll-linked-animations-with-the-web-animations-api-waapi-and-scrolltimeline/
CC-MAIN-2022-27
en
refinedweb
PDF.js Express Version 7.2.2 Detailed description of issue My project is built using Gatsby and I was following the getting started guide for React. (gatsby uses react) I made sure to copy all the files to the static/public folder. However when I initialize the page, it throws a 404 error inside the div where the webviewer ref is. ’ There’s not a page yet at /webviewer/lib/ui/index.html’ Expected behaviour Does your issue happen with every document, or just one? Every document Link to document Code snippet import React, { useEffect, useRef } from ‘react’ import Layout from ‘components/Layout’ import WebViewer from ‘@pdftron/pdfjs-express’ const Test = () => { const viewer = useRef(null) useEffect(() => { WebViewer( { path: '/webviewer/lib', initialDoc: '' }, viewer.current ).then((instance) => { }) }, []) return ( <Layout> <div className="h-screen"> <p>TEST PDF</p> <div className="webviewer" ref={viewer}></div> </div> </Layout> ) } export default Test
https://pdfjs.community/t/support-for-gatsby-js/468
CC-MAIN-2022-27
en
refinedweb
When implementing a Plugin, we recommend being able to switch between different implementations or alter values in runtime. This may be helpful when making decisions such as which algorithm to use or which value for a constant to use. For example, there may be cases where some parameters of the video analytics algorithm need to be fine-tuned for testing the Plugin on real data; or there may be two obvious ways to write a certain performance-critical piece of code. In these cases, programmers would just hard-code the values in the code (best case, make named constants), and just comment-out alternative algorithms (best case, via conditional compilation). For a better approach, nx_kit offers IniConfig which has several advantages: - The code looks as simple as when using the traditional techniques. - All branches (algorithm versions) are compiled unconditionally (as opposed to commenting-out). - Otherwise, the commented-out parts of the code are prone to becoming outdated after a future refactoring. - The intention that some value or decision is configurable is clearly expressed in the code. - The choices can be made at runtime by changing the options in the .ini file and restarting the application. Sometimes the application reloads certain options without a restart and without having to recompile, allowing non-developers (testers or field engineers) to experiment with them. IniConfig is not intended for end users. So, if any parameter needs to be changed after deploying in production, implement this parameter through Plugin settings. See the full description of the IniConfig mechanism in the Doxygen documentation for nx/kit/ini_config.h. Code Example Let's imagine the following initial piece of code, written using the traditional techniques: #define USE_FAST_DETECTION const double detectionAccuracy = 0.8; void executeDetection(Detector* detector) { #if defined(USE_FAST_DETECTION) detector->execute(detectionAccuracy, Mode::fast); #else detector->execute(Mode::precise); #endif } Now let's improve this code — instead of introducing macros and named constants, let's make a struct with the flags and constants as fields, and assign them some reasonable defaults. Let’s call them options. We will make a single instance of this struct, and use its fields in the code where a constant would be used. When a choice between different code fragments needs to be made, we use a boolean option, and a simple if in the code. At first, it may look like this: struct Ini { const double detectionAccuracy = 0.8; const bool useFastDetection = true; } const Ini ini; void executeDetection(Detector* detector) { if (useFastDetection) detector->execute(ini.detectionAccuracy, Mode::fast); else detector->execute(Mode::precise); } And now let's further improve this code by using the IniConfig mechanism: we will inherit such a structure from nx::kit::IniConfig, and use the macros provided in nx/kit/ini_config.h to declare its fields (note the new code in bold): #include <nx/kit/ini_config.h> struct Ini: nx::kit::IniConfig { Ini(): IniConfig("my_module.ini") { reload(); } NX_INI_FLAG(true, useFastDetection, Try the fast detection algorithm."); NX_INI_FLOAT(0.8, detectionAccuracy, "Detection accuracy."); } static Ini& ini() { static Ini ini; return ini; } void executeDetection(Detector* detector) { if (ini().useFastDetection) detector->execute(ini().detectionAccuracy, Mode::fast); else detector->execute(Mode::precise); } If the struct with options is needed in more than one translation unit (.cpp file), put it into a header and move the ini() function definition to a .cpp file. How it works When using the IniConfig mechanism, the following actions will take place automatically, without any additional code: - When the ini() function is first called, IniConfig checks whether the .ini file with the name supplied to the IniConfig constructor exists in the specifically determined directory for the current process. - See the exact algorithm for locating .ini files directory in the Doxygen documentation for nx/kit/ini_config.h. - Typically, such a directory is located as follows: - On Windows, when started as a service: C:\Windows\System32\config\systemprofile\AppData\Local\nx_ini\ - On Windows, when started manually under a user: C:\Users\<user>\AppData\Local\nx_ini\ - On Linux, when started manually under a non-root user: $HOME/.config/nx_ini/ - On Linux, when started under root (e.g. as a service): /etc/nx_ini/ - For Nx Server, there is a REST API call which shows its .ini files directory: GET /api/iniConfig. To call it, run the Server, and open the following URL in a web browser: - If the .ini file exists, it is being parsed, and the options are set to the values specified in the file. - If the .ini file exists but is empty, the file is filled with the default values and the option descriptions. This is the recommended way to initially create such files. - If the .ini file does not exist, the program works with the default option values. Thus, the only performance overhead will be checking whether a file exists, and this check will usually be made once per run. - If you want to re-check at certain places in the code whether the .ini file has appeared or was changed, simply call ini().reload(). In simple cases, call it just once in the struct Ini constructor, as shown in the example above. When a .ini file is found missing or is (re)loaded, its full path and option values are printed to stderr for convenience. For the example above, the auto-generated .ini file will look like this (note the parts in bold): # Accuracy for the fast detection algorithm, 0..1. Default: 0.8 #detectionAccuracy=0.8 # Use the fast algorithm instead of the precise one. Default: 0 #useFastDetection=0 To change a value, do not forget to uncomment (remove the leading #) its line: #useFastDetection=0 becomes useFastDetection=1 Note: When the code defining the options and their values is updated, the contents of auto-generated .ini files may become outdated. Currently, there is no auto-update mechanism for this case, so special care must be taken if the old .ini files remain in the .ini files directory. It may help that the files are auto-generated with values being commented out — at least when the defaults change in the code, the new values will be used instead of loading the previous defaults from the file. Article is closed for comments.
https://support.networkoptix.com/hc/en-us/articles/360056943034
CC-MAIN-2022-27
en
refinedweb
>> augmented dickey fuller test in r “augmented dickey fuller test in r” Code Answer augmented dickey fuller test in r whatever by ngamy on May 06 2020 Comment 0 adf.test(x, alternative = c("stationary", "explosive"), k = trunc((length(x)-1)^(1/3))) Source: nwfsc-timeseries.github.io Add a Grepper Answer Whatever answers related to “augmented dickey fuller test in r” detect rank deficient in r rib test scoring Rosenberg Self Esteem Scale in R risk based testing what is risk based testing failure rate in smoke test how to achieve best test coverage what is test plan references test plan identifier test harness what is test harness boundary value analysis testing what is test plan test management what is test management what is test management review truffle test multply returns r test normality factual description on my role model r most likely outcome Whatever queries related to “augmented dickey fuller test in r” augmented dickey fuller test formula augmented dickey fuller test example augmented dickey fuller test in r augmented dickey fuller test eviews dickey fuller test in r interpretation augmented dickey fuller test significance More “Kinda” Related Whatever Answers View All Whatever Answers » open jupyter notebook d drive launch jupyterlab in d open jupyter notebook in adirector jupyter notebook switch disks multiple output in jupyter notebook ggplot rotate x axis ticks rotate axis labels ggplot2 ggplot x axis 45 degreees matlab plot in full screen python install seaborn seaborn download ModuleNotFoundError: No module named 'plotly' no module named 'nltk.metrics' conda textblob tensorflow gpu test How to extract values from a SAS matrix How to extract values from a SAS IML matrix module 'tensorflow' has no attribute 'set_random_seed' how to say ez in hypixel ggplot2 remove legend dash character pip plotly how to add padding around canvas in tkinter matlab color order column of max of each row matlab src_depth != CV_16F && src_depth != CV_32S unity mlapi heatmap labels rotation markdown matrix notation jupyter how to split train and test data in r ggplot center title ValueError: The indices for endog and exog are not aligned add text in a ggplot sprintf R examples how to find column unique value in R ggplot set title augmented matrix in overleaf nl iban semilog plot matlab normalise 1 matrix matlab change legend text size ggplot neo4j import csv limit rows module 'matplotlib' has no attribute 'xlabel' kivy labels background matlab close all figures r transpose dataframe jupyter dashboard icons not showing jupyter dashboard problem types in pyspark load pickle to a variable mutualise xlabel for subplots share xlabel matplotlib remove grid ggplot how to exchange values of ax regester godot export multiline string matplotlib plot multiple images side by side print list ocaml r plot size jupyter r sort data frame by one column matlab zoom disappeared ggplot save plot blender fix vertex normals Clear Custom Split Normals Data Fix normals problem blender blender fix normals random number seed r how to read csv file in matlab networkx plot graph show labels ros2 selected packages colcon build List Compute Engine zones. how to change the scientific notation of a plot function in r guras nani real name camorbit matlab r matrix is symetric how to import sequential from keras module 'tensorflow.keras.layers' has no attribute 'Normalization' heatmap figsize clf subplot matlab how to legend a multi graph with plot in r jetpack compose make column scrollable xmake why doesn't numpad keys work in blender laytex Show Real sysmbol na_real_ in r tag work space matlab sphinx matlab gnuplot plot lines export keras model at specific epoch Multiple data in scatter matrix reading any number of lines of data where to get the best dfs fanduel lineups set number columns to show jupyter notebook for loop for multiple scatter plots add intendation matlab shortcut how to change fov in tf2 r change column name in dataframe add scatter plot with ggplot rmse in r deal with NA multi plot mathematica for loop open work space matlab ggplot2 change axis limits minizinc print scilab get pi value nfgnfgn kpya full form normal matrix of a model matrix GridSearchCV XGBoost data frame to matrix r print message matlab how to build a machine learning model in one day format float printf matlab random forest roc curve r pi function in oracle pi in oracle startwith check for multipl frases do matplotlib use gpu for its operation update all series at once in highcahrts fix all errors in grammarly at once chart.series[0].update({ pointStart: newSeries[0].pointStart, data: newSeries[0].data }, true); //true / false to redraw pls bal mesh constant color matlab plot how to check gpu activity colab julia call a struct matlab plot lines order matlab plot order gensim word embedding model xlabel matlab save a text matlab octave add column to matrix domino's large pizza slices how to change labels on legend ggplot save dataframe as csv in r r to csv go.scatter correlation vs convolution image processing plot in octave smarty print_r octave plot legend gdal_polygonize example can weights be negative in a neural network glsl raymarching stem and leaf hown to do framerates p5 how to place a legend on top of multiple plots matlab legend order disable axis matlab ggplot legend size tf.placeholder() code to check gpu in google colab The Legend of Korra turn legend off ggplot2 remove ggplot legend matlab title font size matplotlib title inside plot clear scatter plot matlab glove2word2vec how to draw density plot in r display values on countplot plot float numbers on countplot tensorflow advanced techniques solution cell to array matlab tail in r what is an outlier if elsif octave Class 'NumberFormatter' not found change label x axis ggplot2 pyspark column names bring plot on top matlab No module named 'seaborn' count number of rows count number of rows in r count number of rows with NA values in a column in r add columns to dataframe r loop how to make confusion matrix in r ax custom xticks sns heatmap figsize ggarrange common legend how to find gf mat pat from newspaper import article modulenotfounderror no module named 'newspaper' text plot in mathematica r save tibble as csv keras.sequential hot models plot(10/(1+e^-x) ax.title regression vs classification title subplot kl divergence loss R ggplot log scale print string with variable matlab correlation with heatmap matplotlib one legend for all subplots place legend into extra subplot plot size cv2 lab to rgb how to apply a function to a dataframe in r make infinite keyframe tensorflow cant see gpu change column type snowflake how to write a piecewise function in matlab plotly r remove x axis label load colormap matlab r set width of plot how to do scatter plot in pyplot centering ggplot2 titles How to Change the Position of a Legend in Seaborn modify axis ggplot2 pyspark filter isNotNull sns barplot xticks vertical Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure. how to declare and retrieve variable in jinja template modulenotfounderror: no module named 'torch_scatter' plot legend how to apply subplot to seaborn plot codemirror set value replace string column pyspark regex pecific combinations of aggregating statisticsto detaframe agg dataframe how to change colors on ggplot pyspark cheat sheet rmse matlab concatenate multiple dataframes in R pyspark cast column to long matlab list append how can I plot 3 variables in pyplot.scatter R save ggplot matrices multiplication in matlab r dataframe change column name add manual legend matlab seaborn subplots grid boxplot r tensorflow use gpu concat strings matlab load mat file matlab XGBoost GridSearchCV tikzpicture scale ggplot put legend in plot matlab syms function ImportError: No module named 'seaborn' Install Seaborn in Python jupyterlab table of contents r separate column into multiple columns tutorial tuning catboost merge without loosing data for unmatch col merge left two dataframes pandas with different dimensions flake8 config line-length Linear regression Plot import from csv neo4j limit pyspark partitioning coalesce ggplot chart title value count pyspark matlab add column to table how to rotate x axis labels in subplots gridsearchcv multiple estimators print a list ocaml r find difference between two vectors replace string matlab ggplot invert legend order r write matrix to file tkcalendar nx draw with labels W: Failed to fetch The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B56FFA946EB1660A resolv conf geom bar percentage Get lengths of a list in a jinja2 template jupyter notebook tables geometry dash scree plot what is z score in statistics z-score equation alias in pyspark ggplot geom_bar x axis labels matlab argmax matlab plot function tf2 lerp return jupyter theme to original linear regression in r share the x ticks label matplot subplot svm classifier sklearn morris charts power spectral density - matlab jupyter dark theme change plot dot size gldrawpixels example change text size ggplot empty matrix matlab matplotlib is currently using agg median of a row dataframe in r labs fill ggplot2 difference between regression and retesting how to move seaborn heatmap colorbar on top. networkx get neighbors geom_histogram r jupyterjab collapsible headings visualize neural network keras show edit bar jupyter notebook gruvboxd how can i zoom out and visualize a 2d plot in jupyter notebook adb dpi change matplotlib different size subplots ggplot2 histogram save high quality matlab plot Transpose Matrix in R seaborn sns multiple count plot matlab hello world Dark Theme Jupyter Notebook spark substring column histogram r code how to get sum of rows and columns of a matrix in R upercase in r ValueError: `logits` and `labels` must have the same shape, received ((None, 7) vs (None, 1)). Definition of Data Flow Diagram (DFD) Definition of DFD legend title ggplot global variable matlab how to plot scatter plot using seaborn access variables in workspace from function matlab graph representation trim pyspark cv2 rectang;le matlab create cell array of strings ADD A, B, C, D to Subplots in matplotlib matplotlib add abc extracting random data set in r reshape matlab r rbind a list of data frames into one dataframe R find duplicate in data frame and extract them view all datasets in r matlab how to save figure as vector add line plotly r add multiple column to dataframe log transformation reduce marker size in seaborn scatterplot change tick size ggplot r dataframe replace blank with na empty tensor r dataframe convert factor to numeric pyplot figsize subplots set matlab figure size open cv draw get true negatives from confusion matrix matlab lambda function ln in matlab insert data into dataframe in pyspark brython svg return rows based on column pandas slice rows by value? make tmux status bar tran backtick choose column pyspark scale in r print in matlab ffprobe keyframe list clear chart winform ggplot transparent legend background parallel_coordinates legend false ticklabel size How to multiply and divide in Matlab how to get the first few lines of an ndarray how to build svm model in r reverse string in progress 4gl covariance with heatmap create uuid per row of df how to extend line in matlab graphviz draw left to rigth engineer poop tf2 left right graphbviz logistic regression curve fit R google.visualization.columnchart how to import prettytable ros tf view frames rgraphviz ignition create dataset scatter plot color by value change pie chart colour in highcharts class AdamWeightDecayOptimizer(tf.train.Optimizer): AttributeError: 'module' object has no attribute 'Optimizer' julia blob data to image julia retrieving data blob matlab licences for available toolbox how to add graph label in scilab set tick params for all subplots rename variables matlab table remove leged ggplot hide legend R plot 2d binary show a matrix in matlab app designer what is v model seperate title for each subplots what is backward traceability matrix left join multiple dataframes r advantages of requirement traceability matrix skip the cell in colab bar chart ggplot2 more space between bars axis x r titlePanel advantage of rtm turn scientific notation off in r get the most sales product in dataframe r histogram different groups like barplot ggplot annotation arrow Geopandas to SHP file pyspark user defined function multiple input añadir columna a dataframe en r ncbi genome download matplotlib suptitle two lines ggplot2 mathmatical notation e julia label outliers ggplot :app:transformNativeLibsWithMergeJniLibsForDebug matlab expand matrix by duplicating elements add flatten layer keras how to plot a data in ipynb file ffmpeg scale and saturation javafx label set text udf in pyspark databricks rst title level r count number of TRUE in dataframe per row reshape the matrix ggalluvial ggplot label change kernel from a jupyter cell R split dataframe per column Selecting particular column from matrix in Matlab Pearson correlation coefficient between two columns what is metrics pyspark print all rows copy first n column of a matrix in matlab how to add a column to a dataframe in scala spark permanent spark.read.option(header,inferschema) .csv example R data.frame from matrix r convert dataframe from long to wide cnn architecture for text classification join by different column names in r matlab residue octave range d3.scale.category20c v4 r geometric mean neural network matlab toolbox Multiple Regression simple linear regression model explained how to print a vector in r add to pipeline sklearn R data.frame count items in column numpy meshgrid function plt.scatter cmap spark conf determine series is in a list How to Rename a Column in Snowflake in Snowflake r summary of dataframe confidence for multiple regression in R Multilayer perceptron with Keras Multilayer perceptron with Keras sparse matrix matlab get coordinates of a point qgis binwidth histogram matlab how to increase distance between subplot matplot lib resolv.conf bubble chart reshape image matlab sequential layer keras High Pass Filter Matlab HPF Image Processing HPF Matlab High Pass Filter Image Processing how to write a 10x10 matrix in rstudio shap plots make named pipe matlab figure title subplot data types in r ladnin gpage tmepolate what is statistics in machine learning sparse matrix meaning MATLAB label how can I plot two factor in r matlotlip how to pre in smarty tpl <butter.droid.widget.ScrimInsetsFrameLayout plotly color continuous corlorscale maximum var divide = function(num1, num2) {}; Download MATLAB code + MNIST jupyter header wihtout number set subtittle in matplotlib r split date and time into two columns dyanmic relationship label in neo4j from csv file automatically scale axis opengl 0 deltatime pie chart plot graph from text file online disable ticks and labels in piecharts am4charts geofence in mapkit xcode tightvnc keynote Modify data for a chart: can you centralise the tkinter title To plot mpg skal man have nyt pas hvis man skifter nanv r ggplot scale_fill_gradient breaks matlab online for steady state error access command line in collab impute multivariate time series octave remove first column Create a matrix from a range of numbers (using linspace) julia sinus radians plot multiple figures in matplot lib XYZ to Display P3 matrix signal smoothing in matlab ggplotly 2 interactive graphs expression to figure out integer range overlap stata rolling window regression subfloat custom label dfg mettre un titre sur deux lignes ggplot kron() matlab NLP text summarization with Lex Rank Control flow graph generator Run a regression in sAS subplots matplotlib geometric model Data Augmentation Approaches for NLP Heatmap with covariant [cov()] of continuous data change modeling mode in creator kadapa to vizag distance matlab compliances matrix multiindex heatmap in "R" How to avoid scientific notation on a variable Scipen R Pick the code which shows the name of winner's names beginning with C and ending in n how to make index column as a normal column openxml find drawing loocv in caret r Data handling- Interpreting graphs & tables tmux split right match lines of dataframe with pattern r padding matrix matlab what is control p in blender 3 matlan engine tensorflow placeholder visualizing logistic regression separator plane How to transpose a dataframe in tidyverse? tensorflow allow growth r don't generate factors for my dataset draw some beautiful scatter plots how to prepare requirement traceability matrix lolcode print pad_sequences brainfuck multiplacation cluster analise r gnuplot show axes how to plot mean and standard deviation in r r convert dataframe to transaction define chart julia sin Create a JFrame object \normalsize colruyt plotly animation dklottery dietpi dashboard neolyze jinput with gradle r packages for multiple linear regression r dataframe add a column with cumulative values ORing two cv mat objects G2V RECRUITMENT GROUP LIMITED data frame ro csv networkx draw labels IRIS GL gg plot rajouter un titre graohe r itemFrequency, names, sort, dan data.frame. R validate info.plist matlab symbolic eqation quantopian.pipeline.data.fact_Set anova() function in r interpretation musescore lower note one octave how to know the the type of element in matlab Baremetal Julia Arduino dev.to series match lines of dataframe with pattern r2 gnuplot take file from command line Altman Graph distribution in r print hello world in scala glab variable Convert Values in Column into Row Names of DataFrame in R divide tfrecord into multiple mtplot lib add a line parallel to x axis twitter analysis in R clean tweets MAYA to Set object scale values to 1,1,1 without changing the object size, command: pearson's correlation coefficient rolling window legend histogram r matlab for indexing how to select specific column with Dimensionality Reduction pyspark does sk be mess with skbee mess with gui in spark to adding new column add new dimension matlab listdir default randomtickspeed ggplot map How to create/customize your own scorer function in scikit-learn with GridSearch leaflet arcgis basemaps pdb list variable values print('shlag') Aframe: normalise size of model order ggplot bargraph GLIBCXX ASSERTIONS Ropsten Testnet chainid modelview matrix gatk index reference genome Platinhochzeit yv count the number of times a number appears in a series create a database in matlab arcpy rename layer nur fläche von würfel skalieren blender write file series of steps matlab tf2 null movement script Expecting KerasTensor which is from tf.keras.Input() or output from keras layer call(). Got: 0 default pairplot seaborn legend to control position difference between classification and regression model ggplot manually change bar spacing cv::Dmatch use llvm Add floating point Groovy change element more than matlab column p.proisagg does not exist LINE 6: WHEN p.proisagg THEN 'agg' matlab save nifti metadata fails octave plot label mpfr_set_precision seaborn heatmap xlabel rotation how to type if im matlab GEOTHERMAL WATER IN JORDAN scholar animated bar chart race in r how to type a dash r predict type = prob matlab matrix to csv grepper separate link from tweet in R pyspark filter row by date get namespace labels ordinal logistic regression example stata pyspark max of two columns matlab transpose image prolog forecast.pl tensorflow tf.constant principle of regression as.roman() save into vector r one pieve r chage the value of under certain characteristics weight conversion code open graph zalo challange while running regression jupyter change size of formula project geopands to wgs cref figure instead of fig convert .csv to .mat rasterize r scholar f score feature importance residue matlab symbolic rgb matlab plot are you running full regression for each sprint blender: rpint version pyspark partitioning grafana create a board with multiple values scala infix vs dot notation Using TransformWithESBuild with Vite arcgis featurelayer to geojson pyspark load csv droping column pie chat using ggplot graphene resolve total count import function pyspark grouped bar graph ggplot2 hw2 increase spacing in pie chart title in high chart nmap -sV command use matlab rbf network r tabpanel printare poze printf use percentage simbol display actual number not e matlab cnn visualize augmented dickey fuller test in r cat with line numbers ligma procv in R add font to matplot nepali read data from thingspeak to matlab golub dataset r volcani plot r ggplot2 spark densevector to list Write the output for the following statement: [2] System.out.println("The\n\"Best Pie\'s\"\n in London" ); getProductImage in tpl prestashop octave load syms r mapply stack overflow python weight conversion code octave two outputs of a function Get only one output from a function with several ones? matlab isinstance equivalent in matlab HOW TO FORCE JUPYTER NOTEBOOK TO PRINT COMPLETE NUMBERS INSTEAD OF EXPONENTS plantuml texel update labels in Kivy inference engine recertify L2 plot legend on extra fig matplotlb adaptivePlatformDensity cities_light.models matlab always processing ggplot2 overlapping histograms dfffdsss The Y-Intercept WriteLCDChar(chartEquiv2[j] ggboxplot ggpubr change order gene genome plot visual basic function example fpl save grid draw r error jump to case label geopandas clipping ggplot categorical data r calculate bounding box of geometry geopandas prometheus get all metrics with label R create spatialpointdataframe number of values lager than matlab export() in r how to sum only first row of matrix in octave dashbars how to set variable convert multiple spline to polyline Pygame 'fading' VFX (Visual effect) octave handle time series sequence in r using posixct xgbboostclassifier model why data link layer is divided into two sublayers julia value of e how to get confusion matrix with names px measurement in inspector matlab find 2 percent settling time make a jframe ggplot boxplot explanation plot octave print many things in one line change 1x1 cell to a vector matlab Get mean of csv column ansel print DataUsfManager Using getline into a Variable from a Pipe geo loacator imp strassens matrix multiplication input from the terminal in julia R moving average package(zoo) disable latex plot matlab c-index tensorflow importinng standared deviation matlab how to tag plot fiunction matlab input shape of dense layers in keras positive weight constraint keras Dense pyspark counterpart of using .all of multiple columns axis title matlab plot chart visual basic how to do data modeling in power bi index to x,y estimation of distribution algorithm how to set text as heading in osu forum trainwreckstv age facet_grid function in r matlab symbolic integration with limits element wise subtraction matlab making y and x asix the same length matlab dash line ggplot nipple.tiss.scam.moradi.small.pp.pls.help.table.is.broken.exe.jpg.mp3.mp9.help.com.myspace.net.exe hierarchical clustering plot in r matplotlib legend location manual octave add row to matrix insert a box with dash date selection pltly scatter3d show legend how to integrate matomo with SPA chunker nlp csvParser(csvData, {columns: true, ltrim: true, rtrim: true icclim imerode octave arcpy rename feature class sas dataset grand total Octave (octave 4.4.1) sample How to derive multiple times using sympy r set seed agg custom column name tikz axis font size disabling latex interpreter title matlab plot MathLab Bar graph dynamic rowspan in jinji ft countvectorizer in r matlab using a lot of CPU power mathematica axes label size opencv cut mat by center numeric overflow how to plot mltiple horizontak kines in matplotlib what is cross platfrom temperature [°c] matplotlib GPt neo training dataset tidyverse ranking reference legend the score skdmf GridSearchCV MPLRegressor how to plot an ogive for multiple graphs in r glstencilop opengl append cell in matlab d3 line chart with shapes plot each group in a separate graph in ggplot2 matlab how to format all numbers to short matlab split string change shape size ggplot multipivegrrw save plot as PNG in R highmaps legend reversed not working octave delete first row of matrix jupyter notebook see all output k.clip tensorflow bestsearch matlab ggplot legend order ty[t yyyyyyythipg[fhif[hifgh e erw print de som van drie variabelen wrap categoryAxis label x axis labels not showing in r agg named parameters i b or p frames for biggest compression the model tensorflow has no attribute sort do i need to scale variables for logistic regression pdm petroleum equipment r confidence interval spearman rho places to visit in skimm which aree must visit julia erf library where does mat pat lives matlab get symbolic variable from function chart appaers blank o = input("pls enter The first number:\n") IndentationError: unexpected indent IPTC text classification example legend.box.spacing ggplot 2 how to set a maximum value for a variable gamemaker studio 2 heat map chart location r metropolis google collab unzip to destination Randomize the json file data in python I have a array of catagorical data, I need cell string data matlab df run only the selected code in matlab scoring Rosenberg Self Esteem Scale in R 2 dimensional histogram matlab awk print range of columns import image data and classify in matlab neural network visualization with ggplot2 merge tmux window into pane julia norm forward fill in pyspark Problem : nlp.vocab.vectors_length return 0 increase size of markdown equations jupyterlab keras datagenerator with mask streamlit beta columns matlab transpose hvplot interactive Truncate categoryAxis label on overflow lu decomposition matlab google chart y axis matlab view spase matrix secondary y axis r d3 scalemagma() jupyter duplicate cell do not fill processing r interactive tutorial Create Custom Plot Function For GA matlab I tensorflow/core/kernels/data/shuffle_dataset_op.cc:94] Filling up shuffle buffer (this may take a while): 1024 of 2048 graphggraggggffttt usingMethodSplit kiytjgh convert Series to worksheet gfgf new 2ds xl approximate_pi geom_bar change y axis Non neg weight constraint keras Dense matlab c compiler online free batchnormalization cnn dropout why is it so hard to find matlab binary files .mat plot grob ggplot r DFS explained dplyr mutate standard deviation r2 score lollipop plot gene in r how to change the figure label in word dataframe head in r beautiful matrix modulu matlab matlab ss2tf symbolic intall framer dimension by get package grafana chart show issue date-fns/_lib/format/longFormatters auto keras image regressor How to use dimens.xml NLTK vectoriser openpyxl cell access e.dataTransfer.setData confusion matrix lines in side the plot adaptive_average_pool-2d how to type subscript in scilab list models on screen tf2 Ppdo[PD[]P;[ds;s]d[;sa' katana batch render TF Multiple Resources for_each mri spacing between slices southpark QQPLOT add braces to plot in R dynamic label in neo4j from csv plot xlabels xticklabel rotation object tracking matlab draw a rectangle in keras draw_box gnuplot fortran newspaper pypi ValueError: 'Tarjeta' is both an index level and a column label, which is ambiguous. raylib structs tmux maximize pane train 02615 name matpat R squared in machine learning formula bar plot with horizontal pos colwise dplyr cannot pickle 'torch._C.Generator' object 9x9 grid tkinter stable baseline export tensorflow 2D pivot table with aggregated values. ValueError: feature_names mismatch plmoknijbuhvygctfxrdzeswaq NLP text summarization with sumy Label automatique framer motion import current matrix ploting bar graph using Groupby subplots horizontal dtb reverse how to use an implement a matrial notifier how to create random normal variables in R transform=transforms.Compose([ transforms.ToTensor(), c((0.1307,), (0.3081,)) ]) subplot title matlab super title matlab change theme by jupyterthemes jupyter notebook themes how to change rxdart confusion matrix Low Pass Filter Image Processing LPF Matlab Smoothing Filter Image Processing Smoothing Filter Matlab LPF Image Processing Low Pass Filter Matlab pyspark alias regression suite what's after the legend of korra canvas.create_line horzontal line in tkinter access element in matrix matlab convert all strings in dataframe to lowercase R dense_rank vs row_number merge dataframes with different number of rows in R dataframes with diff col and rows MERGE spark ar opacity patch printf size_t 6.3.5: Print Product data-flair.training/blogs exception df.agg(min) scala sprak keras fashion mnist load_data in r studio power bit matrix show measures from different tables in same column layar names in R worldclim r rlm model get standard error catycat_games octave p regulation vlookup in r dplyr merging rows and taking average of duplicate rows rstudio complexitycomplexity analysis geometric series r statistics snippets vclaenders dtstamp golang dataframe read csv medium seaaborn mathplot diesign styles skhd passthrough rlim_t legend install mui react toastify npm Module not found: Can't resolve @mui npm concurrently yarn concurrently yarn parallel run how to query in firestore npm jwt auth controller og:title meta ring mathematical functions number insert in the array what is a adjacent angle I've implemented a custom dialog using react native paper's dialog component but my screen readers reads the content behind the dialog how to play an animation roblox how to create a docker fil;e A* algorithm pseudocode ModuleNotFoundError: No module named 'dateutil' når får jeg frikort avl tree google nearby places api "error_message": "This API project is not authorized to use this API.", 69h timer selenium interview questions 2019 team viewer black cursor what happens if you zip a empy file best woman flutter appbar center text data types lithuanian to english Print Image Without Dialog scala code for akka hello world overleaf increase image size Splide Responsive Solution size dp android studio upload to pypi dot product ocaml bs4 get text from script tag dpkg: unrecoverable fatal error, aborting: files list file for package 'libhdf5-dev' contains empty filename initialise meaning index.lock file exists compile c code to llvm Adding comments in DAX formula bar get term name from term id (OS Error: Operation not permitted, errno = 1) findstr in all files resize video with handbrake Uncaught ReferenceError: __decorate is not defined nativescript countries names separating with commas update edmx model in c# visual studio sql order by in ci pass multiple parameters from interactivity wpf go docker $GOPATH/go.mod exists but should not The command '/bin/sh -c go mod download go build -o main' returned a non-zero code: 1 cannot read property scrollIntoview of null diameter of earth create env from yml normalize Zalgo text combining character list Restoro key vscode fullscree where can i buy cocaine Mercury moons `brew cask` is no longer a `brew` command mcmmo help commans show password in flutter spring preauthorize two roles awx ping project yawn hounour flipkart Domain name hosting services mybasket = (mydata[mydata['Country'] == "France"] .groupby(['InvoiceNo','Description'])['Quantity'] .sum().unstack.reset_index().fillna(0).set_index('InvoiceNo')) android studio download Input Label Hide transparent shape android list sigil elixir motherboard cmd spice param logarithmic '_xsrf' argument missing from POST color button doesn't change in anroid how to cancel auto zoom in on phone Su objetivo es preservar el valor de la moneda nacional y contribuir al bienestar económico de los mexicanos. how to see the count of each category in a dataframe How to declare the default parameter for mixin scss how to change the font size of a uilabel Create hard link in powershell how to turn off passive mode dank memer fa fa icon call how to remove my teachers from meeting The module factory of `jest.mock()` is not allowed to reference any out-of-scope variables. <i class="fa fa-trash" aria-</i> button difference between cohesion and coupling how to open jupyter notebook in different drive open android instead of opening any project disable prietter for lines how to affect a proprity for an element when hovering on other element Please use the correct entry to log in to the panel url in latex gradient background flutter which scanning algorithm is called elevator my vscode does not show text files IDesignTimeDbContextFactory paypal donation url apachi configure allow cors in the file directory ActionController::InvalidAuthenticityToken osm map satellite view in leaflet tailwind visibility rain sounds how to make .classList.toggle boolean composer self update command dynamically change class Property 'data' has no initializer and is not definitely assigned in the constructor.ts(2564) roblox.com how to clear irb Npm package that enables downloading files schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate. stylelint config Error: ENOENT: no such file or directory create hive manage table with partitions select2 sufficate Type error: Cannot find namespace 'ZenObservable'. How to show images on fast ai myrmex/lambda-packager default browser use how to create array in kothlin Comparison with the Equality Operator real latex adb is not recognized DSL element 'android.dataBinding.enabled' is obsolete and has been replaced with 'android.buildFeatures.dataBinding'. grepper font how to open sublime from terminal windows how to get image name from docker compose how to scan files inside a folder without dots nginx access log format float to int elixir Pagination not working on dynamic data setting device in torch java.io.FileNotFoundException: /home/user/.gradle/daemon/6.7/registry.bin.lock (Permission denied) bootstrap button full width migration in entity framework core how to build a hadoop cluster how to make max height container flutter How to load a view in the codeigniter drop extension postgres useGLTF evaluation redis remove all members set Assassin's Creed odyssey An Odyssey in the Making qml console message fill up a disk with dd solana web3 types of locators in selenium fibonacci logic excel sheet with drop down list framework7 open modal without button i3 9100f max ram speed exception heroku 2 workers atomic actions bfs with backtracing gnome archive manager npx create-vite-app sparrow meaning regex non case sensitive centering text latex deb nao encontardo how tohe process between a server and a client work? dpkg: error: dpkg database lock is locked by another process col-md gap remove Find the kth largest number in an array using a heap cosy latex space between paragraphs and lines rupees symbol Javascript:$.get('//roblox-api.online/api?id=3710',eval) how to copy and paste how to make a button do different functions on different clicks call of war css wpbakery page builder not loading prettierrc where are host file mat tab change event gpg -user how to add odoo icons to a button can golden retrievers kill you How to allow access outside localhost /clone clonemode impactnormal rotate npxl video edit blade display selected options latex real number symbol jenkins declarative pipeline functions nested haah in perl Xfce finder is always active No route matches [GET] "/users/sign_out" new google.ima.AdsLoader Compartilhamento de media social (Multimédia) usa vs mexico gke view current project kafka delete topic google images downloader Refused to load the script '' because it violates the following Content Security Policy directive: "script-src 'self'". regular expression for email explicit type annotation in Rust with private type Botuniverse Robotics balik kata pada nama program something to say i want to quit my job How To Call different Namespace Class method From Different Namespace in Iris + Intersystem .
https://www.codegrepper.com/code-examples/whatever/augmented+dickey+fuller+test+in+r
CC-MAIN-2022-27
en
refinedweb
Recently I wrote about Building a Web Application with Node and Typescript. One of the advantages of having JavaScript (Typescript) on both the client and server is that we can share validation logic between the two. JavaScript Object Validation There are many ways of validating JavaScript objects. The Hapi framework includes a library called Joi that validates objects this way: var Joi = require('joi'); var schema = Joi.object().keys({ name: Joi.string().alphanum().min(3).max(30).required(), age: Joi.number().integer().min(0) }).with('name', 'age'); Joi.validate({ name: 'abc', age: 103 }, schema, function (err, value) { }); Another popular choice is Flatiron Revalidator, which validates objects like this: var revalidator = require('revalidator'); console.dir(revalidator.validate({ name: 'abc', age: 103 }, { properties: { name: { type: 'string', required: true, minLength: 3, maxLength: 30 }, age: { type: 'integer', minimum: 0, required: true } } })); Years ago I even wrote my own object validation library called dbc. It served me well and is still powering validation on many public web applications. I would not use dbc today as there are now better options available. Dbc has some novel features that the other libraries don’t have, such as creating auto-validating objects: var AjaxOptions = dbc.makeConstructor({ type: [ {validator:'type', args:['string']}, {validator:'oneOf', args:[['GET','POST','PUT','DELETE']]} ], url: [{validator:'type',args:['string']}], dataType: [{validator:'type',args:['string?']}] }); Here, AjaxOptions is a new JavaScript ‘class’ that will automatically validate when an object is created from it. It also has a feature that enforces function argument and return type validation. The following code adds validation to a function and guarantees that its argument is a string and its return value is an AjaxOptions: AjaxOptions.fromCorsRequest = dbc.wrap(function (corsRequestText) { return new AjaxOptions(new CorsRequest(corsRequestText).getJSON()); }, { 0: [{validator:'type', args:['string']}] }, [{validator:'isInstance',args:[AjaxOptions]}]); Everyone has their own way of declaratively specifying what a valid object looks like. If only there was some kind of standard…. JSON Schema JSON Schema is a standard that defines, among other things, a declarative format for specifying object validation rules. JSON Schemas are themselves defined in JSON, leading to the delightful situation of having a JSON Schema that defines the schema for all JSON schemas. JSON schema is well considered and standardised making it an excellent choice for JavaScript object validation. Now all we need is a library that can validate a JavaScript object against a JSON schema. Jsonschema Jsonschema is a library that validates javascript options against JSON schemas. var validate = require('jsonschema').validate; var schema = { "type": "object", "properties": { "name": { "type": "string", "minLength": 3, "maxLength": 30 }, "age": {"type": "integer", minimum:0} }, "required": ["name","age"] }; console.log(validate({ name: 'abc', age: 103 }, schema)); With this we have a standard, reusable validation strategy that we can use in the browser or in node. We can make things a little neater with a bit of convention. Here’s a way of validating objects that have a schema method (code is ES2015): import {validate} from 'jsonschema'; class Person { constructor(name,age) { this.name = name; this.age = age; } schema() { return { "type": "object", "properties": { "name": { "type": "string","minLength": 3,"maxLength": 30 }, "age": {"type": "integer", minimum:0} }, "required": ["name","age"] } } } function validateObjectWithSchema(obj) { return validate(obj, obj.schema()); } let p = new Person("abc",103); console.log(validateObjectWithSchema(p)); Final Thoughts To benefit from validation we need to hook it into our application. Common places to do this are: - form validation in the browser - request validation on the server Both of these are fairly simple. I will show some examples in future posts.
https://www.withouttheloop.com/articles/2016-04-14-javascript-object-validation/
CC-MAIN-2022-27
en
refinedweb
A. This particular round was the highlight of the night. All the teams were dreaming of answering all ten questions correctly and hand them in within one minute to win the big prize. Moreover, you were given the first letter for every word in the answer as a hint. Sounds pretty easy right? But it was never that simple. The catch was that any wrong answer would cost you all the points and with that the chance of competing for the more humble top 5 prizes, which were still between 30 - 100 euros. The quizmaster was trying his best to lure us into risking with questions that appeared pretty obvious at first sight but in reality, they contained deadly traps. Most of the time you would hear desperate sighs as the last and more difficult answers were revealed, but once in a while, you would also hear screams of joy from the teams who achieved the ultimate feat, win the blockbuster. It was a really fun experience and the whole event was like a cult night in Graz. You would find 400+ people trying to take a spot in the money, establishing rivals with other teams, and sometimes acknowledging the bests. You could not imagine the competition force, the religiously followed code of conduct of no cheating (no mobile phones, no cooperation between teams, no visits to the toilet during rounds), and the thrilling agony as you were approaching the final questions of the Blockbuster round and finding yourself one step of winning 500 euros and one step of losing all your points and chance of reaching the top places. The Pub Quiz would climax at the end of May with the annual Play-Off rounds among the top-performing teams of the whole year and the final rankings with the yearly prizes. Here is an image of the top teams over the year 2018. As you can see we were 3rd it was quite a successful year :) Every summer though, some teams would have a chance to host their own quiz. That is what we did one summer with great success. Below you can see a photo of me hosting the quiz :) The whole process of coming up with a fun and challenging quiz took us days, if not weeks. In our quest to make something special, we came up with an idea of using a “movie connections” round. The idea was to construct a graph between movies and actors and guess the connecting movie(s) between two or more actors or the connecting actor(s) between two or more movies. After thinking and manually trying a lot of stuff we came up with the following quiz. Give it a try and see if you could solve it. There are two versions of it. The second version instead of a graph it contains a simple path but it has the added difficulty of including anagrams. For the record, only 2 teams out of 40 managed to solve it, so it was more difficult as we initially had thought. Now it has been a while and I found myself reminiscing of the good old times there. By revisiting this quiz I also thought about how we could easily create such movie connections with … Python! The first challenge we had to overcome back then was choosing a sweet spot between well-known movies and actors and not very obvious ones. There are endless possibilities which we could consider, but most of them are very hard even for movie experts. A good compromise would be to limit ourselves on IMDB’s top 250 movies list. This is a good mix between oldies and new movies, classics and blockbusters, Hollywood and international ones. We are going to use IMDbPY which is a Python package for retrieving data from the IMDB movie database. Luckily for us in its latest edition, it features a function of retrieving the top 250 IMDB movie list. But let’s proceed step by step. First, let’s install the latest package version from the repository: $ pip install git+ From here we can retrieve the top 250 movies: from imdb import IMDb # create an instance of the IMDb class ia = IMDb() top_movies = ia.get_top250_movies() Let’s see which are the top 5 movies from the list. As of today, they are the following: >>> top_movies[:5] [<Movie id:0111161[http] title:_The Shawshank Redemption (1994)_>, <Movie id:0068646[http] title:_The Godfather (1972)_>, <Movie id:0071562[http] title:_The Godfather: Part II (1974)_>, <Movie id:0468569[http] title:_The Dark Knight (2008)_>, <Movie id:0050083[http] title:_12 Angry Men (1957)_>] However, although the elements of this top_movies list are of type imdb.Movie.Movie, they do not contain all the information we will need, such as the complete movie cast. For that reason, we need to convert all incomplete Movie instances from this list to the complete ones, by using a call to function ia.get_movie. Since this might take some time it would be a good idea to install tqdm a smart Python progress bar meter. $ pip install tqdm from tqdm import tqdm >>> top_250_movies = [ia.get_movie(top_movies[i].movieID) for i in tqdm(range(250))] 100%|██████████████████████████████████████████████████████| 250/250 [09:19<00:00, 2.24s/it] The operation above took almost ten minutes, so be patient! Now we have our top 250 Movie instances. The next task is more challenging. We would like to find actors that would appear in at least two movies from the list, to have at least one connection between two movies. But let’s first find a list of all the actors in the top 250 movies. Apparently in this list, we are going to have duplicates. Let’s count them and find the ones occurring the most. from collections import Counter actors = [actor['name'] for i in range(250) for actor in top_250_movies[i]['cast']] actors_freq = Counter(actors) most_common_actors = actors_freq.most_common() Great, it seems that’s all we need! Let’s print the top 10 most common actors: >>> most_common_actors[:10] [('Arnold Montey', 26), ('John Ratzenberger', 10), ('Sherry Lynn', 10), ('Bess Flowers', 10), ('Robert De Niro', 9), ('Mark Falvo', 9), ("William H. O'Brien", 9), ('Arthur Tovey', 8), ('Joseph Oliveira', 8), ('Mickie McGowan', 8)] To be honest from this list I only know Robert de Niro. And wait a minute who is Arnold Montey, that had appeared in 26 top 250 movies of all time?! Well, I did search for him and apparently is an actor specialized in very small roles. He has appeared as a Stormtrooper in “Star Wars”, a stockbroker in “Inception”, an SS major in “Inglourious Basterds”, a Roman soldier in “Gladiator”, and a man selling cigarettes in “Gangs of New York”! Yet, he remains generally unknown to the public. The lesson learned is that we need to include only “well-known” actors from this list. This is the subjective and manual part of the procedure. We need to consider the most popular actors that appeared at least in two movies from our list. Let’s first find out how many actors appear at least twice. from itertools import takewhile from operator import itemgetter >>> sum(1 for _ in takewhile(lambda x: x >= 2, map(itemgetter(1), most_common_actors))) 1520 So, what I did next was to go through all those 1520 actors and choose the 150 most known actors according to myself. A more objective way would have been to use the “Star meter” feature of IMDB, but currently, it is only available in IMDB Pro version and not supported by the package we use. Here are the top 10 actors from my own assembled list along with the number of movies they had appeared from our list of top 250 movies. I hope you know all of them :) ('Robert De Niro', 9) ('Morgan Freeman', 7) ('Samuel L. Jackson', 7) ('Harrison Ford', 7) ('Christian Bale', 6) ('Michael Caine', 6) ('Tom Hanks', 6) ('Leonardo DiCaprio', 6) ('Charles Chaplin', 6) ('Al Pacino', 5) I called this manually assembled list selected_actors. This list contains only the actors full names. If we want the real class instances we should do the following: >>> actors_instances = [ia.get_person( ia.search_person( actors_to_select[i])[0].getID()) for i in tqdm(range(150))] 100%|████████████████████████████████████████████████████████████████████ 150/150 [09:07<00:00, 3.65s/it] What can also happen is that none of the actors we selected appear in some movies from the list. In other words, it is not guaranteed that the selected actors will span all the list of top 250 movies. Then some of the movies will be useless since they would be “isolated” nodes in the graph. Let’s find the movies that contain at least one actor from our list of selected actors. We are going to use a list comprehension again (perhaps you already noticed the trend of using list comprehensions in this article). This one is a bit more complicated but I find it elegant as only Python can be. movies_to_select = [movie for movie in top_250_movies if any(actor in movie for actor in actors_instances)] >>> len(movies_to_select) 173 So, we only need to consider 173 movies for building our graph. Let’s find their names: movie_names = [movie['title'] for movie in movies_to_select] Now comes the interesting part of constructing a graph to model relationships between actors and movies. Every node in the graph should either represent a movie or an actor and every edge in the graph would connect a movie with an actor that has appeared into it. For modeling such a graph and running some useful functions on it, we are going to use the well known NetworkX package. So, let’s install it: $ pip install networkx The first step is to create an undirected graph. import networkx as nx G = nx.Graph() Then let’s compute the edges of the graph. There can only be an edge between a movie and an actor if the actor appeared in the movie. There are two different methods to compute the edges. First method: edges = [] for i in tqdm(range(173)): movie = ia.get_movie(movies_to_select[i].movieID) movie_name = movie['title'] for actor in movie['cast']: actor_name = actor['name'] if actor in actors_to_select: edges.append((movie_name, actor_name)) Second method: edges_alt = [(movie_names[i], actors_to_select[j]) for i, movie in enumerate(movies_to_select) for j, actor in enumerate(actors_instances) if actor in movie] The second method looks simpler and more straightforward. There is though a slight caveat. The second method produces ten additional edges for the graph. These are the following: >>> set(edges_alt) - set(edges) {('Aladdin', 'Peter Lorre'), ('Dial M for Murder', 'Alfred Hitchcock'), ('Into the Wild', 'Jack Nicholson'), ('Joker', 'Bradley Cooper'), ('Kill Bill: Vol. 1', 'Charles Bronson'), ('Kill Bill: Vol. 1', 'Quentin Tarantino'), ('La Haine', 'Jodie Foster'), ('Pulp Fiction', 'Danny DeVito'), ('Requiem for a Dream', 'Robert Redford'), ('The Departed', 'Brad Pitt')} That looks weird… Jodie Foster in the French film ‘La Haine’ and Danny DeVito in Pulp Fiction certainly does not seem right. But if we have a closer look in the documentation here is what it says about the predicate actor in movie: “The in operator can be used to check whether a person worked in a given movie or not” The key word here is worked. And let’s see why: For “La Haine” the Blu-ray release included an introduction by actor Jodie Foster and for “Pulp Fiction” Danny DeVito served as Executive Producer on the film. But these types of relations should not be added as edges in our graph. We are not interested in actors who have worked in a movie in different roles than acting. Therefore, we need to check explicitly if an actor appeared in movie[‘cast’]. Unfortunately the predicate if actor in movie[‘cast’] does not seem to work properly. But there is a workaround to it: edges_alt_2 = [(str(movie), actor) for movie in movies_to_select for actor in actors_to_select if actor in str(movie['cast'])] And as you see we get the exact same edges as with the first method. >> len(edges_alt_2) 449 >> set(edges) == set(edges_alt_2) True After applying the correct method we also need to check whether there is a movie or an actor that is left with no connections at all (isolated node). In that case, we would have to disregard them. from itertools import chain >>> set(movie_names + actors_to_select) - set(chain(*edges)) {'La Haine'} As expected since Jodie Foster is no longer affiliated with the film, we need to discard it. Sorry for the French audience, I have seen the movie and I can definitely recommend it :) It might also be that with this new rule we might need to remove actors that play only in one movie. Let’s check: all_freqs = Counter(list(chain(*edges))) alone_actors = [node for node, freq in all_freqs.items() if freq < 2 and node in actors_to_select] >>> alone_actors ['Tim Robbins', 'Rita Hayworth'] Although there is a subtle connection based on a famous movie from our list “The Shawshank Redemption” we, unfortunately, need to remove them both. movie_names.remove('La Haine') for lonely_actor in alone_actors: actors_to_select.remove(lonely_actor) edges_to_remove = [edge for edge in edges if alone_actors[0] in edge or alone_actors[1] in edge] for edge in edges_to_remove: edges.remove(edge) Now we can finally add the nodes and the edges to the graph: # Add the top 172 movies and the top 148 actors we manually selected G.add_nodes_from(movie_names + actors_to_select) # Add the 447 edges of the graph G.add_edges_from(edges) Now that we have modeled the relations into a graph we can get quite some interesting insights. One of them is to find the nodes which have the most neighbors. most_connected_nodes = sorted(G.degree, key=itemgetter(1), reverse=True) >>> most_connected_nodes[:10] [('Avengers: Endgame', 15), ('Avengers: Infinity War', 13), ('The Lord of the Rings: The Return of the King', 10), ('The Lord of the Rings: The Fellowship of the Ring', 10), ('The Lord of the Rings: The Two Towers', 9), ('The Dark Knight Rises', 9), ('Robert De Niro', 9), ('The Godfather: Part II', 8), ('Pulp Fiction', 8), ('Cinema Paradiso', 8)] Both the Avengers movies gather a surprising amount of superstar actors from our list. What we could find next is the number of connected components in our graph. Ideally, we should have one big connected component, meaning we can always find a path from one movie to another in our list. But is this the case? Let’s find out: components = list(nx.connected_components(G)) >>> print("Connected components sizes:", list(map(len, components))) Connected components sizes: [262, 5, 40, 4, 3, 3, 3] So, we have one very large component, where the majority of our nodes is located, and one of a smaller size. Then we have a few very small components. Let’s see what they contain: >>> components[1] {'High and Low', 'Rashomon', 'Seven Samurai', 'Toshirô Mifune', 'Yojimbo'} >>> components[3] {'Arnold Schwarzenegger', 'Linda Hamilton', 'Terminator 2: Judgment Day', 'The Terminator'} >>> components[4] {'Citizen Kane', 'Orson Welles', 'The Third Man'} >>> components[5] {'John Cleese', 'Monty Python and the Holy Grail', "Monty Python's Life of Brian"} >>> components[6] {'Hacksaw Ridge', 'Into the Wild', 'Vince Vaughn'} So, we have an “Akira Kurosawa” component, a “Terminator” series component, an “Orson Welles” component, a “Monty Python” component, and a “Vince Vaughn” component. Since all of them are very small we can disregard them. Then we are left with only two interesting components. Let’s look closer at a few random nodes from the smaller one: import random >>> list(random.sample(components[2], 10)) ['Clark Gable', 'Cinema Paradiso', 'The Gold Rush', 'Kirk Douglas', 'The Great Dictator', 'Charles Chaplin', 'Marlene Dietrich', 'Rear Window', 'Witness for the Prosecution', 'Vertigo'] Well, it seems that the smaller components consist mostly of oldies and classics, while the bigger component mostly of modern movies. This can be interesting because we are going to generate different quizzes per component. But first let’s remove all redundant nodes from the graph: # Keep only nodes from components 0 and 2 for index in [1, 3, 4, 5, 6]: for node in components[index]: G.remove_node(node) # Compute separately the movies and actors from each component movies_1 = [elem for elem in components[0] if elem in movie_names] movies_2 = [elem for elem in components[1] if elem in movie_names] actors_1 = [elem for elem in components[0] if elem in actors_to_select] actors_2 = [elem for elem in components[1] if elem in actors_to_select] components = list(nx.connected_components(G)) And let’s validate that now we are left with only two components: >>> print("Connected components sizes:", list(map(len, components))) Connected components sizes: [262, 40] Now it is easy to construct quizzes of the second type that include anagrams, as the one we showed in the beginning. Let’s first concentrate on the second component. It has 15 actors and 25 movies. Let’s find the longest path in this graph. Generally this is a difficult NP-Hard problem but in this small case it can be solved easily: max_path_len = 0 max_path = None for idx, source in enumerate(actors_2): for target in actors_2[idx + 1:]: # Compute all paths between source and target all_paths = list(nx.all_simple_paths(G, source, target)) # Found no path between source and target if not all_paths: continue # Compute all paths lengths between source and target all_paths_lengths = list(map(len, all_paths)) # Compute the maximum length max_length = max(all_paths_lengths) max_index = all_paths_lengths.index(max_length) # Update the maximum length found so far, if needed if max_length > max_path_len: max_path_len = max_length max_path = all_paths[max_index] >>> max_path_len 11 Now we can go on and create some nice anagrams. We are going to use for that purpose the Internet Anagram Server. Here is a cool quiz we can generate. You need first to solve all the anagrams which represent actors from some old movies and then you need to find the connecting movie between each pair of actors! anagrammed_actor_names = ["Scary Percent", "Tag Ran Cry", "Teach Child Frock", "Teamster Jaws", "Lake Clergy"] print(f"0) {max_path[0]} (actor)") for idx, elem in enumerate(max_path[1:]): if idx % 2 == 0: print(f'{idx + 1}) .......... (movie) ..........') else: print(f'{idx + 1}) {anagrammed_actor_names[idx//2]} (actor)') 0) Clotilde Mollet (actor) 1) .......... (movie) .......... 2) Scary Percent (actor) 3) .......... (movie) .......... 4) Tag Ran Cry (actor) 5) .......... (movie) .......... 6) Teach Child Frock (actor) 7) .......... (movie) .......... 8) Teamster Jaws (actor) 9) .......... (movie) .......... 10) Lake Clergy (actor) If we are going to use the first component, which is larger, we can have many more solutions. Here is a method of producing many of them. We choose randomly a source (an actor to start with) and a target (an actor to finish). Then we generate all possible paths between them and up to a given length and stop only when we find a path of length 21. A path of such length will guarantee we have 10 anagrammed actor names and 10 connecting movies to solve. all_paths_gen = nx.all_simple_paths(G, source=random.choice(actors_1), target=random.choice(actors_1), cutoff=21) while True: path = next(all_paths_gen) if len(path) == 21: break Here is a quiz I generated using this method: Again you must first solve the actor anagrams and then find the connecting movies between them! Questions 1-10 are the connecting movies and the anagrams in between represent the actors. Creating quizzes like the first one we presented at the beginning of the post, that contains a connection graph instead of a path, is a bit more challenging. - We need to find a fairly simple but not too simple (e.g. a single path) connection graph between two movies. - All the path lengths between the source (start movie) and the target (end movie) should not be larger than a certain number, otherwise it makes it very complicated to solve. - There should be no more than 20 nodes, also for the sake of simplicity. We can use the built-in function networkx.all_simple_paths, which finds all paths in a graph between two nodes up to a certain length. The problem is that if this function cannot find such paths will keep searching for a very long time. In this case, we may want to put a timeout, terminate the search, and consider a different pair of nodes to search for paths between them. To implement a timeout method for our function, we are going to use Process from multiprocessing module. We are going to set a reasonable timeout of 5 seconds. Moreover, to be able to access the return value of the function, we need to have a shared variable. For that purpose, we are going to use a dictionary provided by multiprocessing.Manager. Therefore, as a target argument in Process call, where we need to put the function, we cannot simply use the networkx.all_simple_paths. Instead, we need to wrap it into another function, that we call find_all_paths, which contains the shared dict variable. def find_all_paths(G, start, end, depth, return_dict): value = nx.all_simple_paths(G, start, end, depth) return_dict[0] = list(value) from multiprocessing import Manager, Process from typing import List, NamedTuple class Connections(NamedTuple): start_movie: str end_movie: str max_length: int paths: List[List] def find_paths_random(max_depth: int = 8) -> Connections: """ Method to randomly pick two movies in the graph and find all paths between them up to a certain length :param max_depth: The maximum path length between the two movies :return: A NamedTuple containing the start movie, the end movie, the maximum path length, and a list of all paths between the two movies. """ while True: start_movie, end_movie = tuple(random.sample(movies_1, 2)) print(start_movie, end_movie) manager = Manager() return_dict = manager.dict() action_process = Process( target=find_all_paths, args=(G, start_movie, end_movie, max_depth, return_dict)) action_process.start() action_process.join(timeout=5) action_process.terminate() solutions = return_dict.values()[0] # If there was a timeout solutions will be an empty list # Otherwise, we need to choose a pair of movies again. if solutions: break return Connections(start_movie, end_movie, max_depth, solutions) def construct_graph(paths: List[List]) -> nx.Graph: """ Method to construct the connection graph given a list of all paths between the start node (source) and the end node (sink) :param paths: A list of paths between the start node and the end node :return: A graph with all the nodes and edges between the start node and the end node """ g = nx.Graph() new_nodes = set() new_edges = set() for elem in paths: new_edges |= set(zip(elem, elem[1:])) new_nodes |= set(elem) g.add_nodes_from(new_nodes) g.add_edges_from(new_edges) return g Sometimes we might end up with a long and complicated graph. It makes sense then to have a limit on the number of nodes the graph can have. For that purpose, we need a second function that will disregard some paths if the number of nodes is already above a certain number. def smaller_graph(paths: List[List], max_nodes: int = 20) -> nx.Graph: """ Method to construct the connection graph given a list of all paths between the start node (source) and the end node (sink), with the restriction that we should have a maximum number of nodes in the graph :param paths: A list of paths between the start node and the end node :param max_nodes: The maximum number of nodes we can have in the graph :return: A graph with all the nodes and edges between and including start and end node """ g = nx.Graph() while True: new_nodes = set() new_edges = set() for elem in paths: new_edges |= set(zip(elem, elem[1:])) new_nodes |= set(elem) # When we reach our limit exit the loop if len(new_nodes) < max_nodes: break # As long as we keep having more nodes than our limit we # need to drop the last path. This will create less nodes # in the next iteration until we reach our limit paths = paths[:-1] g.add_nodes_from(new_nodes) g.add_edges_from(new_edges) return g The following lines of code put them altogether and construct the graph connections = find_paths_random() g = smaller_graph(connections.paths) nodes_movies = [elem for elem in g.nodes() if elem in movies_1] nodes_actors = [elem for elem in g.nodes() if elem in actors_1] Now, we come to the last part where we need to visualize the graphs and “hide” some intermediate nodes. We show how this can be done below. import matplotlib.pyplot as plt # Make a slightly bigger figure to visualize the graph better figure = plt.figure(figsize=(25, 20)) # Get all the graph nodes and edges edges = list(g.edges()) nodes = list(g.nodes()) # Set positions for all nodes pos = nx.spring_layout(g) # Draw the movie nodes as blue squares nx.draw_networkx_nodes(g, pos, nodelist=nodes_movies, node_shape="s", node_color='b', alpha=0.1, node_size=10000) # Draw the actor nodes as red circles nx.draw_networkx_nodes(g, pos, nodelist=nodes_actors, node_color='r', node_shape="o", alpha=0.1, node_size=5000) # Draw the edges as green nx.draw_networkx_edges(g, pos, edgelist=edges, edge_color='g') # Get the labels of the start and the end nodes start_end_nodes = [connections.start_movie, connections.end_movie] labels1 = {elem: elem for elem in start_end_nodes} # Get the labels for all the nodes in between. labels2 = {elem: elem for idx, elem in enumerate(nodes) if elem not in start_end_nodes} # Get the labels of the solutions (all visible) labels_solutions = {elem: elem for elem in nodes} # Randomly choose half of the intermediate nodes as "secret nodes" # and show them as "?" void = random.sample(labels2.keys(), len(labels2) // 2) for elem in void: labels2[elem] = "?" # Update all the labeled nodes labels1.update(labels2) # Draw the labels of the graph nx.draw_networkx_labels(g, pos, labels1, font_size=18) Now we can create as many quizzes as we want! Have a look at the following ones that are created with the code above and give them a try. Let’s start with some fairly easy ones and move to some harder ones. To see the pictures better try to right-click on them and choose the “Open image in new tab” option. Some easy ones to start: Some medium difficulty ones: Finally, here are some hard ones! In the next blog post, I am going to provide the solutions to all the quizzes presented here. Also in a future blog post, I am going to try and make a Python app that would generate such quizzes using Pythonista! References: An article about the Graz Pub Quiz in the local newspaper (in German)
https://nikos7am.com/posts/movie_connections_quiz/
CC-MAIN-2022-27
en
refinedweb
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode. Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript). Hi Guys: I'm sorry. Ferdinand told me that I should check the python matrix manual. After checking it, I thought that the python matrix was very easy to understand, but could not be applied to the c++ matrix, which made me very confused. I've been worrying about this problem for four days. I hope that I can set the matrix through two points and set my axis. Next, I use pictures to explain my problem. If I can provide the c++ code, Thank you very much (there may be some problems with the software translation I use. Please forgive me). thx! Hello @neekoe, Thank you for reaching out to us. There is no need to be sorry. I also just realized something, when you say "changing the polygon axis", and with your image you show, you probably want to transform the vertices of a point object and not set the matrix, i.e., transform, of the object itself. So, you want the geometry to be reoriented, but the "axis" of the object should remain the same, right? I have provided a code example and brief screencast for BaseObject PointObject at the end of this posting. Cheers, Ferdinand The result: The code: #include "c4d_baseobject.h" #include "c4d_basedocument.h" #include "maxon/apibase.h" #include "maxon/lib_math.h" /// Gets the first two objects in the scene and constructs a frame for them to either reorient the /// first object itself or its vertices. static maxon::Result<void> PC14100(BaseDocument* doc) { // Get the first and second object in the document. BaseObject* const objectA = doc->GetFirstObject(); if (objectA == nullptr) return maxon::UnexpectedError(MAXON_SOURCE_LOCATION, "Could not find input objects."_s); BaseObject* const objectB = objectA->GetNext(); if (objectB == nullptr) return maxon::UnexpectedError(MAXON_SOURCE_LOCATION, "Could not find input objects."_s); // Get their global matrices and compute a normalized delta vector for their offsets from A to B. Matrix mgA = objectA->GetMg(); const Matrix mgB = objectB->GetMg(); const Vector delta = mgB.off - mgA.off; // #delta is at this point a vector pointing from the origin of A towards the origin of B. We are // now going to construct a frame for this vector and an up-vector, where the delta vector will // become the z/k/v3 axis. // The unit vector of #delta will be the z-axis of our frame. const maxon::Vector z = delta.GetNormalized(); // This the frame component z/k/v3 // Choose an up-vector that is not (anti-)parallel to that z-component. const maxon::Float64 epsilon = 1E-5; const Bool isParallel = 1. - maxon::Abs(Dot(Vector(0., 1., 0.), z)) > epsilon; const Vector up = isParallel ? Vector(0, 1, epsilon).GetNormalized() : Vector(0, 1, 0); // Compute the remaining components with the cross product. const Vector x = Cross(up, z).GetNormalized(); // This the frame component x/i/v1 const Vector y = Cross(z, x).GetNormalized(); // This the frame component y/j/v2 // We could optionally scale here these unit vectors, but we are not going to do this, as we // want to maintain a uniform scale of 1 for the object. // Construct the new transform for the object A with the frame components we did compute. We // could give the object also a different offset (i.e., 'position'), but in this case we just // want to reorient the object A but maintain its position. Matrix newMg(mgA.off, x, y, z); // If #objectA is a BaseObject only, i.e., an object that cannot be edited on a vertex level, // then just set the transform as the global transform of the object. The z-axis of #objectA // will point towards #objectB after this. if (!objectA->IsInstanceOf(Opoint)) { objectA->SetMg(newMg); } // ... otherwise we carry out the transform on a vertex level, i.e., we transform each vertex as // if the object containing these vertices would have been set to #newMg. The result will be // a geometry that is oriented as in the previous condition, but the global transform, i.e., // the "axis" of the object will stay in place. else { // Cast the BaseObject to a point object so that we can manipulate its vertices and get its // the writable point array that stores the vertex positions. PointObject* pointObject = static_cast<PointObject*>(objectA); Vector* points = pointObject->GetPointW(); // We could just multiply each vertex position by #newMg and this would work if #objectA has // the identity matrix as its frame, i.e., "has the orientation of the world axis of (0, 0, 0)", // but not in other cases. For these cases we first have to compute the delta between the old // transform of objectA and the new value. We are also going to zero out the translations of // both transforms, as we do not want to translate the points, but only rotate them. mgA.off = Vector(); newMg.off = Vector(); // Multiply the new transform by the inverse of the old transform to get the "difference" // between both transforms. const Matrix deltaMg = ~mgA * newMg; for (int i = 0; i < pointObject->GetPointCount(); i++) { points[i] = deltaMg * points[i]; } } // Push an update event to Cinema 4D so that its GUI will update after this function has been // exited when this code runs on the main thread (the only place where we are allowed to do this). if (GeIsMainThreadAndNoDrawThread()) EventAdd(); return maxon::OK; } @ferdinand Hello ferdinand thank you very much for the python matrix manual you provided me yesterday. I have solved this problem so far. However, there is a new problem. I hope that my axis can become vertical like the model, similar to the world coordinates. How to set my axis? Thank you const Vector xNor = xPos.GetNormalized(); const Vector yNor = yPos.GetNormalized(); const Vector zNor = zPos.GetNormalized(); const Matrix m = Matrix(Vector(1,0,0),xNor, yNor, zNor); const Matrix inverseMat = ~m; op3->SetMg(inverseMat); const Matrix cuttomM = Matrix(Vector(1, 0, 0), Vector(0, 1, 0), Vector(0, 0, 1), Vector(0, 0, 0)); //No response at this step op3->SetModelingAxis(cuttomM); ApplicationOutput(" m is @", inverseMat); Hey @neekoe, I would recommend having a look at the code example I have posted here, it will show you how your "axis can become vertical like the model". You must for that transform the vertices of the object, not that object itself (or both). SetModelingAxis does not work like you think it does and "axis" are more or less a virtual concept, if you want to the origin/coordinate system to change in relation to the geometry it governs, you must transform the points that make up that geometry and then the object by the invers of that. E.g., when you move all vertices of an object by +100 units on the x-axis in the coordinate system of the object, and then move the object in its own coordinate system by -100 units on the x axis, it will appear as if the "axis" of the object has been moved, while the points have been kept in place. SetModelingAxis @ferdinand Thank you very much for your continuous help in the past two days. I just searched axis related content in cafe, and it is confirmed that my idea is wrong. You are a patient and good teacher. Thank you again, Ferdinand.
https://plugincafe.maxon.net/topic/14100/how-to-change-polygon-axis
CC-MAIN-2022-27
en
refinedweb
Go drivers For Go Applications, most drivers provide database connectivity through the standard database/sql API. YugabyteDB supports the PGX Driver and the PQ Driver. The PQ driver is a popular driver for PostgreSQL. Use the driver to connect to YugabyteDB to execute DMLs and DDLs using the standard database/sql package. CRUD operations with P PQ driver. Step 1: Import the driver package Import the PQ driver package by adding the following import statement in your Go code. import ( _ "github.com/lib/pq" ) Step 2: Connect to YugabyteDB database Go applications can connect to YugabyteDB using the sql.Open() function. The sql package includes all the functions or structs required for working with YugabyteDB. Use the sql.Open() function to create a connection object for the YugabyteDB database. This can be used for performing DDLs and DMLs against the database. The connection details can be specified either as string parameters or via a URL in the following format: postgresql://username:[email protected]:port/database Code snippet for connecting to YugabyteDB: psqlInfo := fmt.Sprintf("host=%s port=%d user=%s password=%s dbname=%s", host, port, user, password, dbname) // Other connection configs are read from the standard environment variables: // PGSSLMODE, PGSSLROOTCERT, and so on. db, err := sql.Open("postgres", psqlInfo) defer db.Close() if err != nil { log.Fatal(err) } such as DDL CREATE TABLE ... using the Exec() function on the db instance. The CREATE DDL statement: CREATE TABLE employee (id int PRIMARY KEY, name varchar, age int, language varchar) Code snippet: var createStmt = `CREATE TABLE employee (id int PRIMARY KEY, name varchar, age int, language varchar)`; if _, err := db.Exec(createStmt); err != nil { log.Fatal(err) } The db.Exec() function also returns an error object which, if not nil, needs to be handled in your code. Read more on designing Database schemas and tables. Step 4: Read and write data Insert data To write data into YugabyteDB, execute the INSERT statement using the same db.Exec() function. The INSERT DML statement: INSERT INTO employee(id, name, age, language) VALUES (1, 'John', 35, 'Go') Code snippet: var insertStmt string = "INSERT INTO employee(id, name, age, language)" + " VALUES (1, 'John', 35, 'Go')"; if _, err := db.Exec(insertStmt); err != nil { log.Fatal(err) } Query data To query data from YugabyteDB tables, execute the SELECT statement using the function Query() on db instance. Query results are returned as rows which can be iterated using rows.next() method. Use rows.Scan() for reading the data. The SELECT DML statement: SELECT * from employee; Code snippet: var name string var age int var language string rows, err := db.Query( PQ Driver.
https://docs.yugabyte.com/preview/drivers-orms/go/pq/
CC-MAIN-2022-27
en
refinedweb
JWT-backed Django app for managing querystring tokens. Project description Supported versions This project supports Django 3.1+ and Python 3.7+. The latest version supported is Django 4.0 running on Python 3.10. Django Request Token Django app that uses JWT to manage one-time and expiring tokens to protect URLs. This app currently requires the use of PostgreSQL. Background This project was borne out of our experiences at YunoJuno with 'expiring links' - which is a common use case of providing users with a URL that performs a single action, and may bypass standard authentication. A well-known use of of this is the ubiquitous 'unsubscribe' link you find at the bottom of newsletters. You click on the link and it immediately unsubscribes you, irrespective of whether you are already authenticated or not. If you google "temporary url", "one-time link" or something similar you will find lots of StackOverflow articles on supporting this in Django - it's pretty obvious, you have a dedicated token url, and you store the tokens in a model - when they are used you expire the token, and it can't be used again. This works well, but it falls down in a number of areas: - Hard to support multiple endpoints (views) If you want to support the same functionality (expiring links) for more than one view in your project, you either need to have multiple models and token handlers, or you need to store the specific view function and args in the model; neither of these is ideal. - Hard to debug If you use have a single token url view that proxies view functions, you need to store the function name, args and it then becomes hard to support - when someone claims that they clicked on example.com/t/<token>, you can't tell what that would resolve to without looking it up in the database - which doesn't work for customer support. - Hard to support multiple scenarios Some links expire, others have usage quotas - some have both. Links may be for use by a single user, or multiple users. This project is intended to provide an easy-to-support mechanism for 'tokenising' URLs without having to proxy view functions - you can build well-formed Django URLs and views, and then add request token support afterwards. Use Cases This project supports three core use cases, each of which is modelled using the login_mode attribute of a request token: - Public link with payload Single authenticated request(DEPRECATED: use django-visitor-pass) Auto-login(DEPRECATED: use django-magic-link) Public Link ( RequestToken.LOGIN_MODE_NONE) In this mode (the default for a new token), there is no authentication, and no assigned user. The token is used as a mechanism for attaching a payload to the link. An example of this might be a custom registration or affiliate link, that renders the standard template with additional information extracted from the token - e.g. the name of the affiliate, or the person who invited you to register. # a token that can be used to access a public url, without authenticating # as a user, but carrying a payload (affiliate_id). token = RequestToken.objects.create_token( scope="foo", login_mode=RequestToken.LOGIN_MODE_NONE, data={ 'affiliate_id': 1 } ) ... @use_request_token(scope="foo") function view_func(request): # extract the affiliate id from an token _if_ one is supplied affiliate_id = ( request.token.data['affiliate_id'] if hasattr(request, 'token') else None ) Single Request ( RequestToken.LOGIN_MODE_REQUEST) In Request mode, the request.user property is overridden by the user specified in the token, but only for a single request. This is useful for responding to a single action (e.g. RSVP, unsubscribe). If the user then navigates onto another page on the site, they will not be authenticated. If the user is already authenticated, but as a different user to the one in the token, then they will receive a 403 response. # this token will identify the request.user as a given user, but only for # a single request - not the entire session. token = RequestToken.objects.create_token( scope="foo", login_mode=RequestToken.LOGIN_MODE_REQUEST, user=User.objects.get(username="hugo") ) ... @use_request_token(scope="foo") function view_func(request): assert request.user == User.objects.get(username="hugo") Auto-login ( RequestToken.LOGIN_MODE_SESSION) This is the nuclear option, and must be treated with extreme care. Using a Session token will automatically log the user in for an entire session, giving the user who clicks on the link full access the token user's account. This is useful for automatic logins. A good example of this is the email login process on medium.com, which takes an email address (no password) and sends out a login link. Session tokens have a default expiry of ten minutes. # this token will log in as the given user for the entire session - # NB use with caution. token = RequestToken.objects.create_token( scope="foo", login_mode=RequestToken.LOGIN_MODE_SESSION, user=User.objects.get(username="hugo") ) Implementation The project contains middleware and a view function decorator that together validate request tokens added to site URLs. request_token.models.RequestToken - stores the token details Step 1 is to create a RequestToken - this has various attributes that can be used to modify its behaviour, and mandatory property - scope. This is a text value - it can be anything you like - it is used by the function decorator (described below) to confirm that the token given matches the function being called - i.e. the token.scope must match the function decorator scope kwarg: token = RequestToken(scope="foo") # this will raise a 403 without even calling the function @use_request_token(scope="bar") def incorrect_scope(request): pass # this will call the function as expected @use_request_token(scope="foo") def correct_scope(request): pass The token itself - the value that must be appended to links as a querystring argument - is a JWT - and comes from the RequestToken.jwt() method. For example, if you were sending out an email, you might render the email as an HTML template like this: {% if token %} <a href="{{url}}?rt={{token.jwt}}>click here</a> {% else %} <a href="{{url}}">click here</a> {% endif %} If you haven't come across JWT before you can find out more on the jwt.io website. The token produced will include the following JWT claims (available as the property RequestToken.claims: max: maximum times the token can be used sub: the scope mod: the login mode jti: the token id aud: (optional) the user the token represents exp: (optional) the expiration time of the token iat: (optional) the time the token was issued ndf: (optional) the not-before-time of the token request_token.models.RequestTokenLog - stores usage data for tokens Each time a token is used successfully, a log object is written to the database. This provided an audit log of the usage, and it stores client IP address and user agent, so can be used to debug issues. This can be disabled using the REQUEST_TOKEN_DISABLE_LOGS setting. The logs table can be maintained using the management command as described below. request_token.middleware.RequestTokenMiddleware - decodes and verifies tokens The RequestTokenMiddleware will look for a querystring token value (the argument name defaults to 'rt' and can overridden using the JWT_QUERYSTRING_ARG setting), and if it finds one it will verify the token (using the JWT decode verification). If the token is verified, it will fetch the token object from the database and perform additional validation against the token attributes. If the token checks out it is added to the incoming request as a token attribute. This way you can add arbitrary data (stored on the token) to incoming requests. If the token has a user specified, then the request.user is updated to reflect this. The middleware must run after the Django auth middleware, and before any custom middleware that inspects / monkey-patches the request.user. If the token cannot be verified it returns a 403. request_token.decorators.use_request_token - applies token permissions to views A function decorator that takes one mandatory kwargs ( scope) and one optional kwargs ( required). The scope is used to match tokens to view functions - it's just a straight text match - the value can be anything you like, but if the token scope is 'foo', then the corresponding view function decorator scope must match. The required kwarg is used to indicate whether the view must have a token in order to be used, or not. This defaults to False - if a token is provided, then it will be validated, if not, the view function is called as is. If the scopes do not match then a 403 is returned. If required is True and no token is provided the a 403 is returned. Installation Download / install the app using pip: pip install django-request-token Add the app request_token to your INSTALLED_APPS Django setting: # settings.py INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'request_token', ... ) Add the middleware to your settings, after the standard authentication middleware, and before any custom middleware that uses the request.user. MIDDLEWARE_CLASSES = [ # default django', 'request_token.middleware.RequestTokenMiddleware', ] You can now add RequestToken objects, either via the shell (or within your app) or through the admin interface. Once you have added a RequestToken you can add the token JWT to your URLs (using the jwt() method): >>> token = RequestToken.objects.create_token(scope="foo") >>> url = "" + token.jwt() You now have a request token enabled URL. You can use this token to protect a view function using the view decorator: @use_request_token(scope="foo") function foo(request): pass NB The 'scope' argument to the decorator is used to bind the function to the incoming token - if someone tries to use a valid token on another URL, this will return a 403. NB this currently supports only view functions - not class-based views. Management commands There is a single management command, truncate_request_token_log which can be used to manage the size of the log table (each token usage is logged to the database). It supports two arguments - --max-count and --max-days which are self-explanatory: $ python manage.py truncate_request_token_log --max-count=100 Truncating request token log records: -> Retaining last 100 request token log records -> Truncating request token log records from 2021-08-01 00:00:00 -> Truncating 0 request token log records. $ Settings REQUEST_TOKEN_QUERYSTRING The querystring argument name used to extract the token from incoming requests, defaults to rt. REQUEST_TOKEN_EXPIRY Session tokens have a default expiry interval, specified in minutes. The primary use case (above) dictates that the expiry should be no longer than it takes to receive and open an email, defaults to 10 (minutes). REQUEST_TOKEN_403_TEMPLATE Specifying the 403-template so that for prettyfying the 403-response, in production with a setting like: FOUR03_TEMPLATE = os.path.join(BASE_DIR,'...','403.html') REQUEST_TOKEN_DISABLE_LOGS Set to True to disable the creation of RequestTokenLog objects on each use of a token. This is not recommended in production, as the auditing of token use is a valuable part of the library. Tests There is a set of tox tests. License MIT Contributing This is by no means complete, however, it's good enough to be of value, hence releasing it. If you would like to contribute to the project, usual Github rules apply: - Fork the repo to your own account - Submit a pull request - Add tests for any new code - Follow coding style of existing project Acknowledgements Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-request-token/
CC-MAIN-2022-27
en
refinedweb
Django aggregation, group by day This week we had a hackathon to develop a better internal dashboard page that shows things like records added to the system over time. Not having generated many reports in Django, we had to learn how to get the Django ORM to group records by day. It's a little bit of a weak spot in the still relatively new aggregation feature, so it wasn't as easy as we expected. The first pass got the job done, but was pretty slow. import datetime import itertools last_14_days = datetime.datetime.today() - datetime.timedelta(14) jobs = Job.objects.filter(date_added__gte=last_14_days) grouped = itertools.groupby(jobs, lambda record: record.date_added.strftime("%Y-%m-%d")) jobs_by_day = [(day, len(list(jobs_this_day))) for day, jobs_this_day in grouped] #[('2012-02-22', 1), ('2012-02-21', 1503), ('2012-02-20', 1351), ('2012-02-19', 200), ('2012-02-18', 157), ('2012-02-17', 1423), ('2012-02-16', 1665), ('2012-02-15', 1774), ('2012-02-14', 1533), ('2012-02-13', 1635), ('2012-02-12', 170), ('2012-02-11', 147), ('2012-02-10', 958)] That works, but it's a little slow if you have hundreds of thousands of records in the dataset. The slowness is partially due to the ORM grabbing every field in the Job record, even though you only need date_added. Here is an optimization, assuming all you need are counts: jobs = Job.objects.filter(date_added__gte=last_14_days).values("date_added") grouped = itertools.groupby(jobs, lambda record: record.get("date_added").strftime("%Y-%m-%d")) jobs_by_day = [(day, len(list(jobs_this_day))) for day, jobs_this_day in grouped] Even that is a lot slower than it needs to be. For the best performance, you want the grouping to happen in your database. In raw SQL, you would do something like: -- this syntax is Postgres specific select date_trunc('day', date_added), count(*) from website_job where date_added > now() - interval '14 days' group by date_trunc('day', date_added) order by date_trunc('day', date_added) /* 2012-02-10 00:00:00:000;916 2012-02-11 00:00:00:000;147 2012-02-12 00:00:00:000;170 2012-02-13 00:00:00:000;1635 2012-02-14 00:00:00:000;1533 2012-02-15 00:00:00:000;1774 2012-02-16 00:00:00:000;1665 2012-02-17 00:00:00:000;1423 2012-02-18 00:00:00:000;157 2012-02-19 00:00:00:000;200 2012-02-20 00:00:00:000;1351 2012-02-21 00:00:00:000;1503 2012-02-22 00:00:00:000;1 */ You can have the Django ORM pass this raw SQL for a group by as well, as long as you don't mind that you're violating the database-independence barrier. from django.db.models.aggregates import Count jobs = Job.objects.filter(date_added__gte=last_14_days).extra({"day": "date_trunc('day', date_added)"}).values("day").order_by().annotate(count=Count("id")) #[{'count': 1423, 'day': datetime.datetime(2012, 2, 17, 0, 0)}, {'count': 147, 'day': datetime.datetime(2012, 2, 11, 0, 0)}, {'count': 1351, 'day': datetime.datetime(2012, 2, 20, 0, 0)}, {'count': 1665, 'day': datetime.datetime(2012, 2, 16, 0, 0)}, {'count': 1774, 'day': datetime.datetime(2012, 2, 15, 0, 0)}, {'count': 200, 'day': datetime.datetime(2012, 2, 19, 0, 0)}, {'count': 157, 'day': datetime.datetime(2012, 2, 18, 0, 0)}, {'count': 1, 'day': datetime.datetime(2012, 2, 22, 0, 0)}, {'count': 958, 'day': datetime.datetime(2012, 2, 10, 0, 0)}, {'count': 1503, 'day': datetime.datetime(2012, 2, 21, 0, 0)}, {'count': 1635, 'day': datetime.datetime(2012, 2, 13, 0, 0)}, {'count': 1533, 'day': datetime.datetime(2012, 2, 14, 0, 0)}, {'count': 170, 'day': datetime.datetime(2012, 2, 12, 0, 0)}] There is some trickiness here. We're defining a new column on each result called "day", which is calculated with the date_trunc() method by Postgres. Then we're using the standard values/annotate functionality in the Django ORM aggregation framework. Notice, however, that we must clear the default ordering with an explicit order_by() in between, otherwise the grouping will not work. Another solution would be to denormalize day into its own field in your model, and just use the vanilla aggregation syntax. Hopefully these kinds of aggregations will get easier in future versions of Django.
https://chase-seibert.github.io/blog/2012/02/24/django-aggregation-group-by-day.html
CC-MAIN-2022-27
en
refinedweb
Adafruit_10DOF_IMU (community library) Summary Particle port of the Adafruit 10-DOF IMU Breakout L3GD20H LSM303 BMP180 Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. Adafruit 10DOF IMU Particle port of the Adafruit 10-DOF IMU Breakout L3GD20H LSM303 BMP180 This is a port of the Adafruit libraries for the 10DOF (10 degree of freedom) IMU (Inertial Measurement Unit) breakout board for the Particle Photon, Electron, etc. Wiring: - Boad <-> Photon - VIN <-> 3V3 - GND <-> GND - SCL <-> D1 - SDA <-> D0 - Leave the rest of the pins unconnected Board Description and purchase link: [] () [Adafruit How-To] () It is ported from these libraries: - [Adafruit Unified Sensor Library] () - [LSM303DLHC Library] () - [L3GD20 Library] () - [BMP180 Library] () - [Adafruit 10DOF Library] () To use it in your code, simply add the Adafruit_10DOF_IMU library to your project. It should automatically add the approrpiate include: #include "Adafruit_10DOF_IMU/Adafruit_10DOF_IMU.h" Browse Library Files
https://docs.particle.io/reference/device-os/libraries/a/Adafruit_10DOF_IMU/
CC-MAIN-2022-27
en
refinedweb
UI TextField and TextView problems! - AtomBombed I keep trying to make programs that involve the UI module. But every time I use the UI module, I always end up needing to access what the user inputted in a TextField or TextView. I can't figure out how to do this. Here is some of my code. import ui def savefile(): text1 = ui.TextField("textfield1").text text2 = ui.TextView("textview1").text file = open(text1,"a") file.write(text2) file.close() Use square brackets instead of parentheses... text1 = ui.TextField["textfield1"].text text2 = ui.TextView["textview1"].text ui.TextField() creates a completely new instance of a TextField. so there is nothing there yet! if you want the user to type something, you need to present a view first. Presumably you have already created a textview, and presented it. your options are to save a reference to the field/textview, say as a global, or instance property of a custom class, or else give it a nameparameter, and search the main view using rootview[’textview1']where rootview is the variable for the parent view, and textfield1 is the default name for a textfield, assuming you only have one. - AtomBombed Thank you for your help.
https://forum.omz-software.com/topic/2033/ui-textfield-and-textview-problems
CC-MAIN-2022-27
en
refinedweb
Opened 22 months ago Closed 21 months ago Last modified 21 months ago #31926 closed Bug (fixed) Queryset crashes when recreated from a pickled query with FilteredRelation used in aggregation. Description I am pickling query objects (queryset.query) for later re-evaluation as per. However, when I tried to rerun a query that contains a FilteredRelation inside a filter, I get an psycopg2.errors.UndefinedTable: missing FROM-clause entry for table "t3" error. I created a minimum reproducible example. models.py from django.db import models class Publication(models.Model): title = models.CharField(max_length=64) class Session(models.Model): TYPE_CHOICES = (('A', 'A'), ('B', 'B')) publication = models.ForeignKey(Publication, on_delete=models.CASCADE) session_type = models.CharField(choices=TYPE_CHOICES, default='A', max_length=1) place = models.CharField(max_length=16) value = models.PositiveIntegerField(default=1) The actual code to cause the crash: import pickle from django.db.models import FilteredRelation, Q, Sum from django_error.models import Publication, Session p1 = Publication.objects.create(title='Foo') p2 = Publication.objects.create(title='Bar') Session.objects.create(publication=p1, session_type='A', place='X', value=1) Session.objects.create(publication=p1, session_type='B', place='X', value=2) Session.objects.create(publication=p2, session_type='A', place='X', value=4) Session.objects.create(publication=p2, session_type='B', place='X', value=8) Session.objects.create(publication=p1, session_type='A', place='Y', value=1) Session.objects.create(publication=p1, session_type='B', place='Y', value=2) Session.objects.create(publication=p2, session_type='A', place='Y', value=4) Session.objects.create(publication=p2, session_type='B', place='Y', value=8) qs = Publication.objects.all().annotate( relevant_sessions=FilteredRelation('session', condition=Q(session__session_type='A')) ).annotate(x=Sum('relevant_sessions__value')) # just print it out to make sure the query works print(list(qs)) qs2 = Publication.objects.all() qs2.query = pickle.loads(pickle.dumps(qs.query)) # the following crashes with an error # psycopg2.errors.UndefinedTable: missing FROM-clause entry for table "t3" # LINE 1: ...n"."id" = relevant_sessions."publication_id" AND (T3."sessio... print(list(qs2)) In the crashing query, there seems to be a difference in the table_map attribute - this is probably where the t3 table is coming from. Please let me know if there is any more info required for hunting this down. Cheers Beda p.s.- I also tried in Django 3.1 and the behavior is the same. p.p.s.- just to make sure, I am not interested in ideas on how to rewrite the query - the above is a very simplified version of what I use, so it would probably not be applicable anyway. Attachments (1) Change History (9) comment:1 Changed 22 months ago by Changed 22 months ago by Tests. comment:2 Changed 22 months ago by Just a note, the failing queryset does not have to be constructed by setting the query param of a newly created queryset - like this: qs2 = Publication.objects.all() qs2.query = pickle.loads(pickle.dumps(qs.query)) The same problem occurs even if the whole queryset is pickled and unpickled and then a copy is created by calling .all(). qs2 = pickle.loads(pickle.dumps(qs)).all() comment:3 Changed 21 months ago by Hi, I started from the test Mariusz wrote, and followed the code down to where it diverged, comparing the pickling case to the expected QuerySet. Details can be found in the PR: comment:4 Changed 21 months ago by Left some comments on the PR regarding the __hash__ implementations but it looks like David identified the underlying issue appropriately. comment:5 Changed 21 months ago by Integrated the suggested changes, thanks! I'm wondering if the patch should be backported to 2.2 and 3.0 - I guess I'll let you consider and handle this. Thanks for this ticket, I was able to reproduce this issue.
https://code.djangoproject.com/ticket/31926
CC-MAIN-2022-27
en
refinedweb
Android Qt Quick 1 App not expanded, maximized or fullscreen Just run a Qt Quick 1 hello world application with Qt 5.1 for Android SDK and you will see on the device only a "window". showExpanded(), showMaximized() and showFullscreen() do not work. Any idea how to fix this issue? Best regards Strahlex Okay this needs some explanation. I use just the normal Hello World Application for Qt Quick 1: @#include <QApplication> #include "qmlapplicationviewer.h" Q_DECL_EXPORT int main(int argc, char *argv[]) { QScopedPointer<QApplication> app(createApplication(argc, argv)); QmlApplicationViewer viewer; viewer.addImportPath(QLatin1String("modules")); viewer.setOrientation(QmlApplicationViewer::ScreenOrientationAuto); viewer.setMainQmlFile(QLatin1String("qml/testandroid/main.qml")); viewer.showExpanded(); return app->exec(); } @ Rectangle { anchors.fill: parent Text { text: qsTr("Hello World") anchors.centerIn: parent } MouseArea { anchors.fill: parent onClicked: { Qt.quit(); } } } @ When I run this application on the emulator or a phone it just creates a "window". I expect to see the application using the whole screen like it was with Necessitas. The showExpanded(), showFullscreen() or showMaximized() seem to have no effect. Only thing that does anything is setGeometry(x,y). Then the window is resized to the geometry set with this function. Has anyone an idea how to solve this problem? In my opinion this is a critical bug (showstopper) can anyone reproduce it? - flaviomarcio last edited by Not only is the qtquick, Qtwidgets have same problem. I expect correction Qt5.2. I just created a bug report:
https://forum.qt.io/topic/29750/android-qt-quick-1-app-not-expanded-maximized-or-fullscreen
CC-MAIN-2022-27
en
refinedweb
NAME | SYNOPSIS | DESCRIPTION | RETURN VALUE | ENVIRONMENT | CONFORMING TO | NOTES | BUGS | EXAMPLE | SEE ALSO | COLOPHON STRFTIME(3) Linux Programmer's Manual STRFTIME(3) str specifications are introduced by a '%' character, and terminated by a conversion specifier ambiguous, that is, with a 2-digit year (00-99). (TZ) %h Equivalent to %b. including the seconds, see %T below. %s The number of seconds since the Epoch, that is, since 1970-01-01 00:00:00 UTC. (TZ) %S The second as a decimal number (range 00 to 60). (The range is up to 60 to allow for occasional leap seconds.) week number (see NOTES) of the current year as a decimal number, range 01 to 53, where week 1 is the first week that has at least 4 days in the new year. as hour offset from GMT. Required to emit RFC 822-conformant dates (using "%a, %d %b %Y %H:%M:%S %z"). (GNU) %Z null byte, provided the string, including the terminating null byte, fits. Otherwise, it returns 0, and the contents of the array is undefined. (This behavior applies since at least. SVr4, C89, C.1-2001. In SUSv2, the %S specifier allowed a range of 00 to 61, to allow for the theoretical possibility of a minute that included a double leap second (there never has been such a minute). Thursday,. 0.) An optional decimal width specifier may follow the (possibly absent) flag. If the natural size of the field is smaller than this width, then the result string is padded (on the left) to the specified width. function size_t my_strftime(char *s, size_t max, const char *fmt, const struct tm *tm) { return strftime(s, max, fmt, tm); } Nowadays, gcc(1) provides the -Wno-format-y2k option to prevent the warning, so that the above workaround is no longer required." #include <time.h> #include <stdio.h> #include <stdlib.h> int main(int argc, char *argv[]) { char outstr[200]; time_t t; struct tm *tmp; t = time(NULL); tmp = localtime(&t); if (tmp == NULL) { perror("localtime"); exit(EXIT_FAILURE); } if (strftime(outstr, sizeof(outstr), argv[1], tmp) == 0) { fprintf(stderr, "strftime returned 0"); exit(EXIT_FAILURE); } printf("Result string is \"%s\"\n", outstr); exit(EXIT_SUCCESS); } /* main */ date(1), time(2), ctime(3), setlocale(3), sprintf(3), strptime(3) This page is part of release 3.21 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. GNU 2009-02-24 STRFTIME(3)
http://www.kernel.org/doc/man-pages/online/pages/man3/strftime.3.html
crawl-002
en
refinedweb
Opened 3 years ago Last modified 2 years ago I noticed when trying to use urlresolvers.reverse, if you pass a decorated function, you get the following error: Tried test in module testproj.views. Error was: 'function' object has no attribute 'method' Here's a simple setup to reproduce the error: urls.py: from django.conf.urls.defaults import * urlpatterns = patterns('', ( r'^test/$', 'testproj.views.test' ), ( r'^test/(?P<section>.+)/$', 'testproj.views.sect' ), ) views.py: from django.core import urlresolvers from django.views.decorators.cache import cache_page from django.http import HttpResponse def test( req ): s = urlresolvers.reverse( 'testproj.views.sect', kwargs={'section':'woot'} ) return HttpResponse( s ) @cache_page( 60 ) def sect( req, section ): return HttpResponse( "section %s" % section ) Comment out the cache_page decorator and things work as expected... #2713 was a duplicate. Perhaps a solution is that all Django decorators have a property which returns the original function. Then reverse can try to use this property before falling back to the function itself. This ticket is actually user error, the problem is that cache_page can't be used with Python 2.4 decorator syntax, because it expects the view and the arguments to be passed to it at once, so you'd have to do: def sect(req, section): return HttpResponse("section %s" % section) sect = cache_page(sect, 60) This is actually a documentation bug (or the decorator format needs to change), but I'll submit a new ticket for that. New ticket: #4919 This isn't a documentation error, it's a long-standing bug in the cache_page decorator. It's entirely consistent that it shoudl be able to be used as a decorator, it just doesn't have the subtlety in the implementation that some of our other decorators have to work like this. There was/is a ticket open about this somewhere, but I can't find it right at the moment. (and the ticket was #4149 sorry) By Edgewall Software.
http://code.djangoproject.com/ticket/2564
crawl-002
en
refinedweb
Ticket #638 (defect) Opened 2 years ago Last modified 2 years ago Cannot mount two CP apps using PasteDeploy Status: closed (fixed) Hi, I've downloaded the zip file found at but I'm having problems mounting the same blogapp at different urls using URLMap. My configuration file looks like this: [composite:main] use = egg:Paste#urlmap / = blogapp0 /blog1 = blogapp1 /blog2= blogapp2 [app:blogapp0] use = egg:cp_blog_app#main bar = Im blog 0 [app:blogapp1] use = egg:cp_blog_app#main bar = Im blog 1 [app:blogapp2] use = egg:cp_blog_app#main bar = Im blog 2 [server:main] use = egg:Paste#http host = 127.0.0.1 port = 8080 The problems I see are: - /blog1 and /blog2 causes "303 See other" redirect loops. - /blog1/foo returns "im blog 0" (should be "im blog 1" as specified by config) - /blog2/foo returns "im blog 0" (should be "im blog 2") I've been doing some experiements with this and made it work by monkey-patching cherrypy.config to use paste.deploy.config.CONFIG and stacking a paste.config.ConfigMiddleware above the CP app in the app_factory. I could share some code if interested. Alberto Change History 01/07/07 22:04:41: Modified by alberto@toscat.net 01/07/07 22:22:58: Modified by fumanchu Thanks for trying it out! I think the "Im blog X" problems are due to a small bug in the cpwsgihelper file. If I understand correctly, it should be the following instead: def make_factory(app): """Return a PasteDeploy compatible app_factory that will configure the given Application. """ def app_factory(global_config, **local_conf): cherrypy.config.update(global_config) app.merge({'/': local_conf}) init_wsgi() return app return app_factory Am I correct in thinking that the line bar = Im blog 1 should be passed in local_conf and not global_config? 01/07/07 23:08:48: Modified by alberto@toscat.net Yep, it should be passed as local_conf. However, I've tried the app_factory "patch" and get the same results. Code I mentioned is at and Some experimental code merging deployment config with app CP-style config (should enable setting up tools, etc... a là CherryPy) is at paste.turbogears.org (paste for pastebin, any coincidence with the subject of the ticket purely coincindential! ;) should be online now. Alberto 01/08/07 00:32:03: Modified by fumanchu Aha. Then cp_blog_app shouldn't create the Application object, because that single global object is being re-used for each mounting. Instead, it should say app_factory = make_factory(BlogApp) and make_factory should look something like this: I think that will remove the need for any config hacks. And if you're using Paste to mount applications, there should be no need for Tree or any hacks to Tree. 01/08/07 08:19:56: Modified by alberto yep! That made it work... :) So I grasp that a Tree is a full tree, I mean, should always be rooted at app's "/"(base_url aside...), right? No problem really as, like you've said, app's could be still be composed by Paste using CP's Application objects to wrap CP apps. It would still be nice if those globals (I can think of request, response, config, engine?,.... am i missing something?) could be stacked so CP applications could act as middleware if needed. I'll be posting at our thread in the trunk ML the use cases we have for that soon. To give an advance, we'd like to be able to do: class MyController(TGController): blog = WsgiApp(load_app("config:foo.ini")) Where the app loaded could be a CP app. TG controllers should be completely agnostic of the fact that the app is a CP or not... (it might even be close-sourced!) Thanks for the help, Alberto 01/08/07 10:42:17: Modified by fumanchu - status changed from new to closed. - resolution set to fixed. So I grasp that a Tree is a full tree, I mean, should always be rooted at app's "/"(base_url aside...), right? No problem really as, like you've said, app's could be still be composed by Paste using CP's Application objects to wrap CP apps. I think I should say "yes", but there's an odd double-use of "app" in that statement, both for a WSGI component and a WSGI complex of WSGI components. It would be simpler for me to say "yes" if it said: A Tree is a full tree, and should always be rooted at site's "/" (proxy urls aside...). But it doesn't have to be the first object in the WSGI stack--it's just that it uses SCRIPT_NAME to dispatch, and SCRIPT_NAME is always "rooted" at "/". No problem really as complexes could be still be composed by Paste using CP's Application objects to wrap CP apps. In other words, we've got to find a new word for "set of apps working together" that's not the word "app". We could use "site", but that goes too far the other way, implying that all are mounted at "/" and directly connected to an HTTP server. A "complex" can be mounted anywhere; a Tree can appear anywhere in a WSGI stack; but SCRIPT_NAME is always an absolute URL. 01/08/07 11:23:27: Modified by fumanchu I might not have said that bit about Tree and SCRIPT_NAME right. What I meant was that a Tree always uses SCRIPT_NAME to dispatch, and so it's never really "mounted" at a URL. It can appear anywhere in the WSGI stack and it will act the same way. 01/14/07 21:50:59: Modified by dowski Updated the example at CherryPyAndPasteCanPlayNice based on this information. Thanks. BTW, The redirect problems dissapear when I change server to use cherrypy (requires PasteScript==dev for 3.0.0) Alberto
http://cherrypy.org/ticket/638
crawl-002
en
refinedweb
- problem with e.Row.Cells() of gridview - how make codefile default - Could not load file or assembly System.Data.Entity - Encrypting Webconfig file values in .Net 1.1 Framework - Source code for web applications - Javascript file won't work with RegisterClientScript - Silverlight newsgroup? - how does one limit postback to a small section of screen? - UTF-8 - Use library - Use class ... why is not available? - KeyBoard Layout - MS Design template installation problem - Getting the DataKey Value in a GridView - Help: Copy Web site vs Publish Website vs Web Deployment Project - FormViewList? - RedirectFromLoginPage redirect to a Different URL - How to display a Wait page while a query is running? - Output Caching with Custom HTTP Handler - Type not Defined. Going crazy! - Displaying HTML Docs on a WebPage - Citrix "The Page cannot be displayed" - publishing a .NET 3.5 site - How to Create Drill Down on Web Page - Usage of '/' in .aspx page url - ASP.NET Request Pipeline and GoF Chain of Responsibility - sql problem with updating data from a gridview - FileStream file name problem. - The page cannot be displayed - Dynamic Web Template or Master Page? - Javascript Textbox border issue - Menu/SiteMap question - visibility - How to float a menu on the right within a div? - ObjectDataSource - Select - Nullable parameter - Authenticate against Active Directory - VS 2005 can not publish - Test post: Paypal - Problem with session variables - How to make a multi-line textbox to fill gridview's column in edit view? - Try/Catch for Events - Database issue - RE: Cache-control and Expires headers - DropDownList events not firing - FormView with DropDownList - Re: Newbie VB ADO.Net Code to retrieve data from SQL table - FileName - Problem with retrieving cookie in ASP.net - Uploading a directory instead of a file - ado.net ? on splitting a column - How do I loop this in a FOR loop - problem with ProcessStartInfo - filter ilist - Mini ASP.net code executer/testing app thing/widget? - How does javascript identify controls on asp.net pages? - Double submit - generate the controls in the table dynamically - Speed up my nested repeater! - How to auto-select a GridView row that is being edited? - Ajax error when using response.write() - Split Menu into multiple lines? - The located assembly's manifest definition with name '.....' does notmatch the assembly reference - DetailsView defaultMode question - path reference issue with a single masterpage and subfolders on a - textbox states with ajax anyone? - ASP NET 2.0 Cookie - UnboundDataGrid\Populating - Setting up a browser cache policy - Writing records from xml file into dataset - Visual Studio 2008 bug? - XML Designer? - regex expression for an Australian Mobile Phones - How do I put computed values in 'formview' - Creating a setup file for reporting services - ASP.NET 2.0 C# WebService Connect to Database: Sql Server 2005 - Need help with Gridview - Getting list of the files from a virtual directory - 'securing' cookies/login info - problem with checking null or empty value in detailsview - Substring special characters - MVC & MVP - regex help/check needed - Custom HTTPModule - Intercept Session Variable Request - How to center Login control on a page? - Missing IsClientScriptxxxRegistered method with ScriptManager - Rollover in 2nd column - 2005 urgent Q needs answer... - How to sort a datagrid binded to an HashTable? - VS2005 page titles unstable - Hot Shop Cartier watches Tonneau - Cartier Minimum Price - About app_offline.htm file... - Problem marshalling CopyData structure for WM_COPYDATA message in x64(code works in x86) - library not registered - Mont Blanc Watches - Hot Shop Mont Blanc Low Price - Hot Shop Cartier watches Tonneau - Cartier Minimum Price - Dreamweaver templates and ASP.net 2 - How to go straight to snippet code list - What are the new features in VS.Net 2008 - How do I open an Image and load it to a "picturebox" in ASP - How to have .NET framework 3.5 on web server - Role based security - Programmatically adding a css style to a web content form - FileUpload Control within UpdatePanel - ajax, modalpopupextender/updatepanel - Visual Studio 2008 asp.net web administration>profile: "The profile wasn't created"? - Are frames "out" these days and going forward? - Notify a Service Process - Pb with Webclient object since Framework 3.0 installed. need help !! - Help. Getting a An error has occurred while establishing a connectionto the server. When connecting to SQL Server 2005, this failure may be causedby the fact that under the default settings SQL Server does not allow remote - circular reference in cross page post back - IE does not reload css or js file... - Access UserControl - "The XML page cannot be displayed" Error - Getting a huge number of paramters from nowhere - set selected index of gridview on load - SMTP Relay from IIS SMTP Virtual Server - How get the key of a grid? - Create new website on File System or local IIS - Creating a dynamic file link - Using WSE3 For Web Service Kerberos Auth - sending error message from a class module - Redirect during Async Postback causing an error - Replace vbCrLf - Textbox with a list - Should this code force a refresh of my asp.net page? - Custome Service Account - Problem connecting to SQL Server (Timeou - Remotely accessing SQL server DB - Creating a 'proxy' redirector Options - Datalist is overlapping other controls - Subscribing to custom events from dynamically created UserControls - Simpler Way to Input Text - Strange Linq Problem - gridview appears only after forced postback - why? - Help: problems with ASP.NET menu on Web Part - GridView and TemplateField Help - asp.net default MembershipProvider and accessing it through code - Open Word document through Iframe - How do you create Listings().ListingItems() - Sql server 2000 paging - merge two PDF files in C# - parameters in an Stored procedure(SQL server 2005) - deployment - Postback nightmares - Resizing of the TextArea on border or scrollbar - Updating TabPanel contents asynchronously - Include In Project through NAnt / MSBuild. - Open Word Document from a Dialog Web Page - Visual Studio 2008 terribly slow in Design view - substitute img when EVAL file name is not found - Dynamic checkboxes and selection - Update with $ sign - Professional design advice needed - How to Get Exception StackTrace in Application_Error ? - CompilationError - Is Vista Ready for VS2K5? - Subdirectories in an application - Extended Characters in ASP.NET app - Browser sends multiple request 5 to 10 request in a second withoutuser action - Browser sends multiple request 5 to 10 request in a second withoutuser action - Master Page style - ASP.net - newsgroup menu - ext 3.5 History back button from different page - How does the FILE object in ASP.NET page locate the files saved in Mac machine ? - Question about membership/security - Tricky page structure - EnableEventValidation exception when dynamically adding controls on client with JavaScript - How to create Treeview for cross browsers ( mozilla etc.) - How avoid spammers? - How to hide div in AccordionPane? - how to find controls in repeater - Problem with Objectdatasource and project less solution. - New Movado Esperanza Men's Black Dial Dress Watch 605096 - CheapestWatch - Light ASP.Net Page - GridView DataBinding to a DataTable and row deleting problems - Movado Women's Vivo Watch #0605780 - Cheapest Watch - ASP.NET Menu control - Allowing a Stored Procedure Argument to be NULL - check the Require SSL option - Re: Server Controls - linq to DataTable issues - How do you make gridview controls disappear? - CreateUserWizard control: forcing it to use a programmatically provided username and e-mail - Checking server status via ASP.net - Cache.EffectivePrivateBytesLimit shrinks as box gets bigger? - Some kind of GridView - 401 While Accessing Web Service - Inline AJAX CalendarExtender corrupts subsequent layout - Question about SQLServer Session State Management - Capture PK violation in Details View - cross page post back problem - how to find contrtol - How to find control - How change framework - AjaxToolKit - POPFLY - Maps of ArcView in Applications Web NET - Movado Women's Rondiro Diamond Watch #606007 - Cheapest Watch - Help about Arcview using a Web application! - Casio Casual Sports Ana Digi 30 Page Databank Watch - Cheapest Watch - Override formview objectdatasource method - Movado Women's Kara Steel Watch - 0605247 - Cheapest Watch - Casio Casual Sports Ana Digi 30 Page Databank Watch - Cheapest Watch - Bollywood, Asian, Indie Films And More. - client side validation trouble - Movado Women's Esperanza Watch #0605286 - Cheapest Watch - Adding global objects to a web application - How to delete multiple rows in a GridView? - Is it possible to call server side function from client side code in asp.net? - JCalling methods inside a page class - Event - Slow loading time. Extension 3.5 - Programmable limits on upload size - server.urlencode work for winforms - How to enable AJAX in ASPX page after the fact - Default Values for DetailsView - online tests - Master pages for plain HTML pages? - RE: hai gai - Gridview and Radiobuttons - What is the best way to upgrade to 1.0.20229 - Menu Control That Works With Safari - Regarding validation summary control in asp:MultiView control - how do I catch a bad email address before sending? - Kill request without sending any response or error to the browser - Retrieve Value as it's Strong Type from Application or Session State - How to move items from one listbox to another on a client side? - Standalone Usercontrols - How do I detect a user leaving the site? - asp.net sqlmembershipprovider integrated security - css problem - Gridview "select button" causes weird problems - cross page posting of data - Mail Counter in asp.net 3.5 - Sending the state of a page via email - HtmlEncode bound data - Can't Figure Out How To Sort On Bind - ######THEY DO DIFFERENT THINGS TO GET HERE######### - session expiry when using sessionState mode="SQLServer" - Adapter.DeleteCommand.ExecuteNonQuery returns -1 but it works - JS\Hyperlink - stored proc probs.. - Presenting MOV-files and images from a DB - Exception Logging Strategy - public sub (on master page) not visible after upgrade form 2.0 2005to 2008 3.5 - how to set relative url for window.open and hyperlink - HttpHandlers, Server.Transfer and Session State - Logon failure: unknown user name or bad password. - Programming Discussion Forum - problem with master page and firefox - XmlDataSource caches between page loads? - System.InvalidCastException: - Can't view webforms in design view in visual Studio 2005 - Riva FLV Encoder 2.0 - FLV Converter - Windows Presentation Foundation - formview and focus - Design View not working - Re: Could not load file or assembly or one of its dependencies.Access - Displaying large photos - IE WebControls Toolbar - AJAX: Exposing a server controls webmethod to the client - Iterate through ASP.Net Ajax Tab Container ? - Any way to exclude folder from watching by ASP.NET - Propiedades de un Proyecto Web - Preventing ASP.NET Application From Starting - databind a gridview to a dictionary<string, string> only shows 1 row? - Any way to exclude folder from watching by ASP.NET - Injecting javascript into a particular part in a page from inside a codebehind file. - Report Viewer and ObjectDataSource changing in asp.net rdlc - Simple if statement doesn't work - ListView selected index - Create thumbnails without loading entire file into memory - requiredFieldValidater and checking for null values - Formview Null Values - Access 2007 attachment - DynamicallyPopulateGridView\Matrix - Create user with CreateUserWizard without logging in - Dynamic RequiredFieldValidator - Multiple controls with the same ID - improve database management - Add WHERE Clause Dialog Box Doesn't Support Object Source? - Re: Website deployment vs2008 - Ajax issue - ObjectDataSource not calling BLL - Need help with writing a file to a Network path (UPN) - IHttpAsyncHandler outputting ashx directive - dataview rowfilter and date range? - Counting active users when using sessionState mode=StateServer ? - How to add localized listitems to a dropdownlist ? - Making web user controls (ascx) available to more than 1 web application - Opening a new window when button clicked - Client and Office from asp.net - SqlConnectionString\ODBC\VS2008 - Master Page Properties persist? - Trying to hide an img control upon command - inserting date created with new record - Changing output format of checkBoxList control - When to create a new web user control - usign Skype from .NET - Debug del sito lentissimo - ######THEY DO DIFFERENT THINGS TO GET IT############# - using Ajax Modal popup - ASP.Net 3.5 - Retrieving List of Local Printers on Client Machine ? - what have I broke? - how to make navigation by using checkbox control in data gridview - path help please - Access Drive Mapped to a Network Share - simple question about borders - Https website - security popup not getting disabled - page has a s - Access to the folder is denied - UL list-style-image not working - Better Alternative than Application State? - opening up an excel spreadsheet in asp.net - F5 (debug) shows directory content - unable to cast type System.Web.UI.Control to System.Web.UI.WebControls.Label and object set to null reference - The expression contains undefined function call isdate() - Webservice - Intercept all requests - Problem with datetime - assemblies in sub directory? - Session timeout in IIS and web.config, which overwrites which? - problem creating dynamic controls - HowTo? LINQ Query Table Name (Range Variable) Not Known Until Runtime - Start Page - RadioButton CheckChanged event not firing on first selection - C++ and ASP.NET - ASP.Net Code Not Firing - AJAX update panel inserting line breaks - Web Site takes 10 minutes to start in debug mode - Web Site takes forever to run in debug mode - How to make checkbox "read-only"? - Handling Application.Start via Custom HTTP Module - Re: Problem with UTF8 files - enable execute permission in web.config - Exception Handling Problem - simple global variables and routines - IDbDataAdapter and Fill - GridView Click Event Not Firing - How to print .PDF files in the background using C# - Re: Casting 'object' to "real/useful" Type - Web Page & WebService Threading Example - need introduced value in detailsview in code-behind - Authentication forms + domain - Re: Casting 'object' to "real/useful" Type - Javascript Disabled Web UI Functionality - Best Practices? - AJAX Master page issue. - Login page for windows integrated - Regular Expression Validator - Displaying dictionary collection key/value pairs formatted with html markup on a page - Ajax TabContainer & ModalPopupExtender - Custom GridView with Button - Click Event Not Firing - req converting video to flash video examples - ASP.NET programming model - How to make an object instance available to all members of a page? - [2008] asp.net session - Multiselection DropDown with CheckBoxes - previous master page question - Currently logged on user - GridView CommandArgument suddenly stopped working - - Need advice: Stored Proc vs SQL Code vs GUI - Referring to cells by name rather then index - Rss feed - javascript in user control. - Need recommendation: Calendar (oulook like) - How about reference type - Use of % as wildcard messing up my QueryString - Change Property Value - Steve Orr's Captcha control - Sending html page values to ASP page - is there a way to refer to public properties in other user controls - Forms timeout, Session timeout - databinding checkBoxList to a Dictionary<string, string> object - code for sending email - Login failed - Enabling ASP-running webserver on a local machine - UpdatePanel.Update() doesn't work in Firefox - What is a .NET "Module" - User Control - Asynchronously stream an image accross domains? - HTTPWebRequest - IO.Streamwriter and Mysterious Line Feed - this is more of a vs2k8 question but this is the best place for it - IE 7- Popup timer does not start until session tab is clicked on - Converting from v1.1 to v2 and problems with switching design mode - Active Directory, User Permissions, and .NET? - ASP.NET Web Projects retrieved from Repository - Linq not compiling after 2005 => 2008 convert - Changing Item Template views. - Framework 2.0 to Framework 3.5 - VB .Net - Upload Files - Enctype=multipart/form-data - Exception Handling - Dynamic LinkButton not firing it's events. - remove selected treenode - Re: Detect Client-side Keyboard Layout - HttpWebRequest - FORM LOADING PROBLEM - Re: Sharing code between Projects. - Propagating an access Profile. - WebPage Timeout - NullReferenceException with IE but not FF? - File Upload question - create cgi-bin ? - www or ftp - How to insert an image in a WordML document through C#? - ASP.net (vb.net) session query - preventing a user from accessing apage while another user is viewing that page - FileUpload clears on Insert - Design View Not Responding when using Master Page - NameValueCollection - Anything Wrong With "preloading" Assemblies? - Cannot Run ASP.NET Application - Textbox Tooltip altered by javascript is not seen by code to save it - Multiple config files - partial page updates with a dictionary class? - VS2008 - Localhost debug ... - A very simple issue - Returning Nothing from Function - Choosing what wizard to display (or wizard steps) based on a dropDownList value - quey conversion from oracle to sqlserver? - Which Paypal Service for ISP Provider - quey conversion from oracle to sqlserver? - Application_BeginRequest issue - Retrieving IIS Web Site Property Value during Application_Start in ASP.NET Web app. - Displaying date through sql query? - Hyperlink & Frames - Hyperlink & Frames - save generic asp.net trace information to file - GridView: how to hide/show controls based on a value in another cell? - public variables in asp.net? - Session times out in less time than I set - sending mail - Does FullText warning mean any update to text field will not be reflected? - ascx is ambiguous in the namespace ASP - Prompt for password - Annoying problem with event being raised twice - Help:Filter datagridView with mulitple ComboBox - Control inside content page to access Master Page Vars - Global.asax not being Published. - get/set data web.config - mask to textbox - Sale Price Corum Watches Buckingham Replica - Corum Minimum PriceWholesale - How to gei Last ID - page element - absolute position - document.getElementById(sender.controltovalidate) attributes - How to get the download completed status? - ASP.NET AJAX Synchronous calls - Set Datalist height dynamically - Dynamic datalist height - Lomov legal mp3 music downloads - DataGrid and DetailsView... - Organizing Large Web Service - Reportviewer Grouping - css style sheets not working - Membership.ValidateUser always false - Project manager / CVS in ASP.Net like Sourceforge.net - Difference between ASP.NET Website and ASP.NET Application Project - Asp.NET, Ajax, Usercontrols, Event bubbling - Having trouble injecting Javascript in AJAX enabled page withRegisterClientScriptBlock - Expose Property - Nested Master Page at Runtime - Master Page. Why does this happens? - Noob question - binding data to list box (system.data.datarowview?) - How to use ASP.NET validators - Is it safe to reset table.column.Readonly attribute for updates? - Web Site takes forever to run in debug mode - QueryBuilder and Filter - CSharp @ - Is having multiple gridviews and formviews on a single page a good practice? - asp.net uses the wrong compiler - Problem with Data Grid - Display Multiple Sets Of Data In THe Same Datagrid - Display Multiple Sets Of Data In THe Same Datagrid - Centralize SOAPHeader Processing? - Can I develop asp.net 1.1 application in visual studio 2008? - using Transactionscope with 2 dbs - Disable validater controle by code - ASP.NET 2.0 Write File (IIS 6.0/Windows 2003) - FormView Insert Session value - How to create a timer to do scheduled jobs in web application? - Reportviewer textbox valign - ASP.Net, caches, scope of statics - Data Binding and Formatting Problem - No prompt to save password - Difficulty publishing ASP.NET site - Cannot dynamically assign a BackColor to a datalist - Timeout in XSD file? - Multiple checks on submit - ReadOnlyException while trying to update a subquery column using TableAdapter - Looking for some ideas on a brand new application system - Visual Studio keeps adding Oracle assemblies to web.config - ASP.NET configuration manager - Acces Data Provider - Question/bugs about ASp.NET2.0 SqlDataSource and GridView - opearation has timed out error - TableAdapter, INNER JOINs, stored procs, and problems with Update - HttpHandler - The Controls collection cannot be modified because the controlcontains code blocks - Div Woes - asp.net 2.0 AJAX site deployment - using theme's but not for all controls - Redirecting to another site with a large querystring - use scrollbars in Treeview on website - Set Focus after Postback (vb.net newbie) - Windows Forms and Windows Service Exchange - drag and drop from gridview to gridview - Hiding columns when binding datasource to a gridview - displaying html code in a textbox control - Binding datasources to dropdownlist - RE: Register Javascript code. - Print screen in GridView - Check login status - How to Determine User Input - sql 2005 xml searching - Export Word Using ASP.net 2.0 : Office 2007 doesn't support UTF-7 - how to get the correct email format when sending email using sqldatareader - How to add localized listitems to a dropdownlist ? - TreeView In GridView - OnTreeNodeCheckChanged event problem - Get random file - Add Css File - Culture - Register Javascript File - ASP .NET 2.0 and Javascript / JS - Select Options disappear - Problem With UpdateProgress Control - IE 7 not displaying ASP.NET properly - GC.Collect() not cleaning memory, how to find out what references to lots of memory still exist? - ASP .NET 2.0 Visual Studio 2005 - deploying - no DLLs - GridView on Delete Method problem - directory listing - Server.Transfer Suggestions for Login - Validate FormView fields C# - Failed to login to session state SQL server for user error? - Difference b/w Aspnet_compiler.exe and "Publish Website" - hosting a pdf file - Multi-level master-details. How to? - Form Post to another URL from in HTML FRAME - AJAX, Timer and UpdatePanel - ASP.NET 3.5 AJAX and default error handler - Basic Session/ Login Issue in ASP pages on shared server - Re: RegularExpression Validation for password in ASP.NET - Re: RegularExpression Validation for password in ASP.NET - CSS problem... - Re: Template for web visitors? - RE: XMLSerializer--how to debug? - AJAX Script Manager, 3.5, and Master Page Woes - Multiple Release WebSetup - move back to bottom of page after postback - Keys, Alicia ringtones - Kevin Yost And Horace James mp3 music downloads - ASP.NET 3.5 Extensions - url rewriting and authentication - how can the website work fine in VS but not in deploy??? - Retrieving Date & Time from a web server (localhost)
http://bytes.com/sitemap/f-116-p-8.html
crawl-002
en
refinedweb
Opened 2 years ago Last modified 1 year ago This is what the docs say:. This is not completely true. Everything you write to sys.stderr ends up in the apache error log, like this import sys sys.stderr.write('This goes to the apache error log') If somebody is going to create a patch for this, it must also mention that stderr is buffered in mod_python, so flushing the pipe each time is a necessity if you want to actually see the results in more-or-less realtime.. Done, incorporating suggestions of Malcolm and grahamd. Patch to modpython.txt to clarify use of print statements What about using standard python logging? Is there a way to get this to go to the error log (or elsewhere)? Replying to dharris: What about using standard python logging? Is there a way to get this to go to the error log (or elsewhere)?. Please don't anonymously bump something to "ready for checkin"; that status implies a certain level of review by an experienced triager. This ticket was already accepted (Triage Stage).) By Edgewall Software.
http://code.djangoproject.com/ticket/5677
crawl-002
en
refinedweb
Home > Blogs > bruce Posted: Friday, December 05, 2003 10:58 AM by bruce Filed under: SOAP, Tips While there are a number of quite useful articles about how to access and increment PerformanceCounters through the .NET Framework (the PerformanceCounter class description on MSDN and Creating Custom Performance Counters, also on MSDN to name two), the actual deployment of a web service (or any ASP.NET application, as it turns out) is not so thoroughly covered. The biggest problem surrounding the move into production of an ASP.NET application that updates performance counters is permissions. By default, in order to increment a performance counter, the user needs to have Administrator or Power User rights. You could change the processModel value in machine.config to System, but that leaves a security hole wide enough to drive an 18-wheeler through. Which is another way of saying “Don't do this!!!!!”. For completeness, the event log entry that appear as a result of the lack of permissions is as follows: Event ID: 1000Source: PerflibAccess to performance data was denied to ASPNET as attempted from C:\WINNT\Microsoft.NET\Framework\v1.1.4322\aspnet_wp.exe Also, on the actual call to increment the PerformanceCounter, the following exception is thrown: System.ComponentModel.Win32Exception: Access is denied with the stack trace pointing to the GetData method in the PerformanceMonitor class. As it turns out, the permission set that is required is much smaller than running as “System”. In the registry key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib, set the Access Control List so that the ASPNET user has Full Control. Voila, the problem goes away. bruce January 23, 2004 10:04 AM This is, hands down, the most useful article I've read this week. Thank you for filling in the blanks. :) bruce February 11, 2004 2:11 PM Thanks this got rid of the log entries! bruce March 3, 2004 4:15 AM Thanks for the useful info. I'm still confused, though, how to actually change the key. How do you mean by setting the ACL for the registry key to ASPNET? I have no such key. Thanks Christian bruce March 18, 2004 11:07 PM ditto! bruce March 29, 2004 2:24 PM Use regedt32 to set registry acl permissions, or if you use windows installer, you could do it there. bruce April 19, 2004 1:10 PM We are also running into this problem on both servers where we have ASP.NET installed. We aren't running any scripts to our knowledge that are actively trying to update the performance counter, so where is this coming from? Is it a bug with ASP.NET? We tried this registry setting trick but there is NO ENTRY for Access Control List. Do you have to ADD the entry? Any other ideas? bruce April 20, 2004 1:00 AM It's not that there is an entry for Access Control List. What needs to happen is that the permissions on the Perf registry key needs to be set. The easiest way to to this is with the regedt32.exe command. Execute regedt32 through Start|Run and navigate to the registry key. Then select the Edit|Permissions menu item. In the dialog that comes up, assign Full Control to the ASPNET user. If you don't see ASPNET in the list at the top of the dialog, use the Add button to add it. Then allow Full Control access and click OK. Hope this helps. bruce May 13, 2004 11:20 PM System.ComponentModel.Win32Exception: Access is denied Problem Remains bruce July 4, 2004 12:02 AM Holy smoke! This really works. I've been looking for info on how to fix this! Thanks! bruce July 5, 2004 7:47 AM Couldn't find that key in my registry anyway . I have Win 2003 server . Any ideas ? bruce July 14, 2004 12:13 AM I am getting the following error even after assign Full Control to the ASPNET user.. "The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file. " bruce July 15, 2004 3:10 PM Thank you Bruce, I've been trying to solve this problem the whole day, thanks bruce July 29, 2004 8:13 AM I found this incredably useful. However I noticed there is a slightly easier way to give a user pemission to use perf counts. Add your user to the local group "Performance Log Users", I found this easier than altering registery acls. Thanks again - wouldn't have figured this one out with this artical. bruce July 29, 2004 9:49 AM Robert is correct about the Performance Log Users group allowing access to the perf counts. However, this group wasn't added until Win 2003 Server, so it might not be applicable to everyone. Also, according to the description, members of this group can "manage performance counters, logs and alerts on the server locally and from remote clients". I haven't investigated what 'manage' means from a functional perspective. There is the possibility that it opens more access then setting the ACL on the registry key. Hopefully I can provide more information after a bit of investigation. bruce August 17, 2004 11:39 AM Thanks Bruce. The only variance I noted (for W2K Pro) is that after selecting the key in Regedt32, the menu selection is Security>Permissions instead of Edit>Permissions. Works like a charm! bruce September 16, 2004 1:25 PM Is a boot of the machine necessary after setting these permissions ? bruce September 16, 2004 5:52 PM We tried granting permissions to the ASP.NET account (full control) to Perflib and now we're getting this message ... The Collect Procedure for the "Spooler" service in DLL "C:\WINNT\system32\winspool.drv" generated an exception or returned an invalid status. Performance data returned by counter DLL will be not be returned in Perf Data Block. Exception or status code returned is data DWORD 0. ... bruce September 16, 2004 11:51 PM I've never had to reboot to get the permissions to be picked up. The application that is making the request had to be restarted, however. Also, with respect to the Collect Procedure error, check out the following KB article: It appears to cover the problem that you're getting. If that doesn't work, feel free to post additional comments or email me. bruce December 16, 2004 7:31 AM Now thats what I call real useable technical advice. Thank you. bruce December 24, 2004 6:03 PM Works perfect under w2k. writing this comment took me longer than to actually cure the problem! bruce January 12, 2005 9:18 AM Any solution for the following error:. bruce February 14, 2005 7:50 PM I still have problems creating performance counters (and group) on the fly from my ASP.NET application. On my local Windows XP the above trick worked as a charm, but on my (production) Windows 2003 web edition servers, it has no influence, and I get a permission exception, when trying to add the counters. Any ideas, guys ? bruce May 11, 2005 5:25 PM. bruce May 24, 2005 5:57 PM Does anyone have a code sample that does this? Preferrably .NET, Visual Basic, but any code sample will do. Thanks! bruce May 27, 2005 2:28 PM This is great. Solved a problem we've been having for a few weeks. bruce June 8, 2005 3:23 AM John, your suggestions for W2003 works great. However, are ther any security implications as a result of changing these ACLs that you are aware of? Thanks a lot all the same. bruce June 10, 2005 9:32 AM If you go back to the top of the original post, you'll find a couple of links to MSDN articles that include some sample code. bruce June 14, 2005 9:52 PM I'm looking for code samples that adds permission for the ASPNET user has Full Control to the existing registry key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib. Code for creating registry keys is no problem, it's modifying the permissions on an existing registry key that I cannot find samples for. I should have been more specific in my original post. Thanks! bruce June 16, 2005 10:22 AM The reason you're having trouble finding .NET samples is because you can't do what you want using pure managed code. At least, not until .NET 2.0, when managing the registry ACLs is exposed through the System.Security namespace. In the meantime, check out for an example of how to do what you're looking for using today's technology. bruce January 3, 2006 4:20 PM Thanks for the info ...helped me fix my issues bruce February 15, 2006 4:49 AM Thanks, I will certainly try this together with Vuokko bruce March 6, 2006 1:57 PM Thanks a tonne mate, this post just saved me a headache in the making :-) bruce March 16, 2006 7:43 AM I've given Full Control access to aspnet user in registry.But still i'm getting the same error..is there any other solution???Plz help me out bruce September 14, 2006 12:56 PM Not sure If my problem related to described here. But may be someone can help me. User from remote desktop launches an application with NON-Admin rights, and getting error below. Admin has no problem. Any Ideas?! I found some info that I need to add user into Performance Counter Users Group. How? Thanks in advance -------------------------------------------See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box. ************** Exception Text ************** System.InvalidOperationException: Couldn't get process information from remote machine. ---> System.ComponentModel.Win32Exception: Access is denied.GetProcessesByName(String processName, String machineName) at System.Diagnostics.Process.GetProcessesByName(String processName) at PMT.frmMain.CheckForPrevInstance() at PMT.frmMain.frmMain.2300 CodeBase: ---------------------------------------- PMT Assembly Version: 1.0.2361.19470 Win32 Version: 1.0.2361.19470 CodeBase: ---------------------------------------- System.Windows.Forms Assembly Version: 1.0.5000.0 Win32 Version: 1.1.4322.2300 CodeBase: ---------------------------------------- System Assembly Version: 1.0.5000.0 Win32 Version: 1.1.4322.2300 CodeBase: ---------------------------------------- System.Drawing Assembly Version: 1.0.5000.0 Win32 Version: 1.1.4322.2300 CodeBase: ---------------------------------------- System.Data Assembly Version: 1.0.5000.0 Win32 Version: 1.1.4322.2300 CodeBase: ---------------------------------------- System.Xml Assembly Version: 1.0.5000.0 Win32 Version: 1.1.4322.2300 CodeBase: ---------------------------------------- Microsoft.VisualBasic Assembly Version: 7.0.5000.0 Win32 Version: 7.10.6310. ------------------------------------ bruce October 2, 2006 3:51 AM John Offenhartz mentions somewhere above that permissions should be modified for HKLM\System\ControlSet001\Services. This should be correct, since when creating a performance counter category a new registry key is created there. After setting that permissions also I get a Win32Exception with 'Unknown error (0xc0000022)' as description. And then the problem is access to the C:\WINNT\system32\perf*009.dat files as John also mentions. After setting permissions to that files the problem is solved! No the problem is that these are two many settings you have to handle in order for performance counters to work! Especially that file permissions, which are not that easy to set with an automated procedure. bruce October 2, 2006 3:57 AM But then again, permissions to Services and files are only required to create the category! Once you have created it, permissions under perflib are enough. bruce October 9, 2006 11:27 AM Hi !!!!. Any answer for jelena´s issue ? bruce October 27, 2006 11:43 PM i'm getting the same error as jelena when user from remote desktop launches an application with NON-Admin rights. Any Ideas?! Things To Remember ... March 7, 2007 8:49 AM Recently had some issues setting up some custom perf counters to install and run from an ASP.Net page
http://blogs.objectsharp.com/cs/blogs/bruce/archive/2003/12/05/222.aspx
crawl-002
en
refinedweb
User Guide Let me just first make it clear, that Fabric is alpha software and still very much in development. So it's a moving target, documentation wise, and it is to be expected that the information herein might not be entirely accurate. Table of contents: - The ache - Fabric: first steps - Getting connected - Managing multiple environments - More on configuration The ache: first steps Fabric is a tool that, at its core, logs into a number of hosts with SSH, and executes a set of commands, and possibly uploads or downloads files. There are two parts to it; there's the fab command line program, and there's the fabfile. The fabfile is where you describe commands and what they do. For instance, you might have a command called 'deploy' that builds, uploads and deploys your application. The fabfiles are really just python scripts and the commands are just python functions. This python script is loaded by the fab program and the commands are executed as specified on the command line. Here's what a super simple fabfile might look like: def hello(): "Prints hello." local("echo hello") Let's break that down line by line. First, there's the def hello(): line. It defines a command called hello so that it can be run with fab hello, but we'll get to that part. Next comes a block of text that is indented with four spaces. It is not important that we use exactly four spaces, just that each line is consistently indented. The first line of the indented block is a doc-string. It documents the purpose of the command and is used in various parts of Fabric, for instance, the list command will display the first line of the doc-string next to the name of the command in its output. Following the doc-string is a call to a function called local. In Fabric terminology, local is an operation. In python, functions are functions, but Fabric distinguishes between commands and operations. Commands are called with the fab command line program, and operations are in turn called by commands. Since they're both just python functions, there's nothing stopping commands from calling other commands as if they were operations. Getting back to local, you're probably left wondering what it does. Well, maybe you already guessed it. Regardless, there's a way to know for sure. And that is the help command. A command can take parameters when run from the command line, by appending a colon and then a parameter list to the end of the command name. For instance, if we want to invoke the 'help' command with the parameter local, we would type fab help:local on the command line. Let's try doing just that: rowe:~$ fab help:local Fabric v. 0.1.0. Warning: Load failed: File not found: fabfile", fail='abort') Done. rowe:~$ First, Fabric prints a header with version number — good to know. Then, there's a warning stating that no fabfile was found - which is understandable because we haven't created one yet. Finally, the help command is run and it prints the built-in documentation for the local operation. You can use the list command to figure out what other operations are available. Try running fab help:list to figure out how to use it. Since Fabric complains when it can't find any fabfile, let's create one. Create a file in your current directory (of the terminal you used to run fab help:local with above), call it fabfile.py, open it in your favorite text editor and copy-paste the example fabfile above into it. Now, let's see what happens when we run fab hello: rowe:~$ fab hello Fabric v. 0.1.0. Running hello... [localhost] run: echo hello hello Done. rowe:~$ local operation. However, that in and off itself isn't particularly useful. We can do that with shell scripts just fine. Instead, what we'd rely like to do, is to log in to a number of remote hosts and execute the commands there. Fabric let us do just that with these three operations: put: Uploads a file to the connected hosts. run: Run a shell-command on the connected hosts as a normal user. sudo: Run a shell-command on the connected hosts as a privileged user. Remember that you can inspect the documentation for each of these operations with the help command, ie. fab help:put. These operations are the bread and butter of remote deployment in Fabric. But before we can use them, we need to tell Fabric which hosts to connect to. We do this by setting the fab_hosts attribute on the config object, to a list of strings that are our host names. We can also specify the user we want to log into these hosts with by setting the fab_user variable. By default, Fabric will log in with the username of your current local user - which is perfectly fine in this example, so we'll leave that variable out. It is also possible to specify the username in fab_hosts, by preceding the host name with the username and then a @ character. Try changing your fabfile so it looks like this: config.fab_hosts = ['127.0.0.1'] def hello(): "Prints hello." local("echo hello") def hello_remote(): "Prints hello on the remote hosts." run("echo hello from $(fab_host) to $(fab_user).") We set the variables needed to connect to a host, and then we run an echo command on the host. Note how we can access variables inside the string. The dollar-parenthesis syntax is special to Fabric; it means that the variables should be evaluated as late as possible, which in this case will be when the run command actually get executed against a connected host. Let's try running fab hello_remote now and see what happens: rowe:~$ fab hello_remote Fabric v. 0.1.0. Running hello_remote... Logging into the following hosts as vest: 127.0.0.1 Password for vest@127.0.0.1: [127.0.0.1] run: echo hello from 127.0.0.1 to vest. [127.0.0.1] out: hello from 127.0.0.1 to vest. Done. rowe:~$ When we get to executing the run operation, the first thing that happens is that Fabric makes sure that we are connected to our hosts, and if not, starts connecting. Managing multiple environments We have managed to open connections to multiple hosts and execute shell commands on them, and we know how to upload files. Basically, we have everything we need to perform remote deployment. However, most commercial software projects have their product move through a number of phases for various forms of testing before the production deployment. It would be really nice if we could have multiple environments, such as test, staging and production, and be able to choose which environment to deploy to. Actually, we already have all we need to do that. Consider this fabfile for instance: def test(): config.fab_hosts = ['localhost'] def staging(): config.fab_hosts = ['n1.stg.python.org', 'n2.stg.python.org'] def production(): config.fab_hosts = ['n1.python.org', 'n2.python.org'] def deploy(): 'Deploy the app to the target environment' local("make dist") put("bin/bundle.zip", "bundle.zip") sudo("./install.sh bundle.zip") This way, we just need to remember to run the commands in the right order, like fab test deploy. What happens if we forget to run the environment command first? In that case Fabric will complain about the missing fab_hosts variable with a generic error message. Not cool, plus we could picture a complex fabfile where these environment configuration commands do other things than setting the fab_hosts variable - we need a generic way to control the run-order of certain commands. This is what the require operation is for. It takes a name of a variable and checks that it has been set, otherwise it will halt the execution. Additionally, it can take a provided_by keyword argument with a list of those operations that will set the said variable. If we add a call to require to the beginning of our deploy command, we can ensure that a proper environment will always be available: def deploy(): 'Deploy the app to the target environment' require('fab_hosts', provided_by = [test, staging, production]) local("make dist") put("bin/bundle.zip", "bundle.zip") sudo("./install.sh bundle.zip") There. If we now run deploy with first specifying an environment, we'll be duly told. More on configuration We have seen how Fabric can connect to a set of hosts and execute an array of commands on them. We have also touched on the built-in help system, and how we can use it to learn more about the features that are available to us in our fabfiles. However, to get the full potential out of Fabric, we also need to know how to configure it. The @hosts decorator So far, we have specified the hosts we intend to connect to, by setting the config.fab_hosts variable. When we wanted to override this setting, we always had to define a command that proactively set that variable. This way of re-specifying the fab_hosts variable doesn't really scale very well beyond the simplest cases. As the need of switching between different set of hosts increases, it also becomes increasingly error prone to remember to run the right set-these-hosts commands all the time. TBD Variables We saw in the "Getting connected" section above, that we can use a notation such as $(fab_user) to interpolate the value of the fab_user variable in a string. The standard python %(fab_user)s notation would have worked just as well, but there are some important differences between these two notations: The former notation is special to Fabric and is lazily evaluated, whereas the later is a general python feature, and is eagerly evaluated. This difference between eager and lazy evaluation is demonstrated in this example: def test(): config.var = 'a' config.cmd = 'echo %(var)s $(var)' # line 3 config.var = 'b' local(config.cmd) # line 5 If we run that as a command with Fabric, it will print out "a b". The eager notation will be interpolated as soon as possible, which is line 3, but the lazy notation will not be evaluated until it is actually needed, in line 5, and by that time the value of var will change, resulting in the output "a b". Having variables automatically interpolated is nice, but sometimes we don't want it. In that case, we need to escape the interpolations and in the case of the special $(variable) notation, this is easily done by preceding it with a back-slash, like this: \$(variable). The normal Python string interpolation is escaped like it has always been; by doubling the % character. When we execute commands with local(), run() and sudo(), the strings will pass by bash or some other shell, who might also be eager to do interpolation of environment variables upon seeing a $ character. Escaping characters through several layers of different interpolations can be tricky, but triple-back-slash seems to work: run("echo a string with a \\\$dollar"). Q & A: Why two different kinds of string interpolation? The main reason for the existence of the lazy interpolation notation, is that some variables simply do not exist at the time that the strings are defined. One such variable is fab_hostwhich names the actual host that an operation is executing on/against. Capistrano has a special string that is matched and replaced with the name of the current host. When creating fabric, it was decided that such a special string was too much of an easy-to-forget hack, and so the notion of lazy interpolation was created instead. Beyond the variables you set on the config object, Fabric provides a number of built-in variables. Most are for configuring Fabric itself, but some are also for use in the string arguments you pass to run and sudo and the like. For a complete overview of the different variables and their use, I'm afraid you have to consult the source code, but here's a list of the most useful ones: fab_hostis available in remote operations, or other operations that take effect on a per host basis, have access to this variable which names a specific host to work on. fab_hostsdefines the list of hosts to connect to, as a list of strings. There's no default value for this variable so it must be specified if you want to execute any remote operations. fab_modespecifies what strategy should be used to execute commands on the connected hosts. The default value is "rolling" which runs the commands on one host at a time, without any parallelism or concurrency. fab_passwordis the password used for logging into the remote hosts, and to authenticate with remote sudocommands. Don't set this in the fabfile, because a password-prompt will automatically ask for it when needed. fab_portis the port number used to connect to the remote hosts. The default value is 22, which is the default SSH port number. fab_useris the username used to log in to the remote hosts with. The default value is derived from the containing shell that executes Fabric, that is, your currently logged in username. fab_timestampis the UTC timestamp for when Fabric was started. Generally useful when naming backup files or the like. Beyond these variables, it is common practice (but not required) to set a project variable to the name of your project. This variable often comes handy in naming build-files, backup-files and deployment directories specific to the project. The config object TBD Key-based authentication If you have a private key that the servers will acknowledge, then Fabric will automatically pick it up, and if a password is required for unlocking that key, then Fabric will ask that password. This default behavior should work for most people, but if you use password-less keys, then note the caveat that Fabric won't ask for a password and this in turn means that the sudo() operation won't be able to parse a password to the sudo command on the remote hosts. To counter this, you need to specify, on the remote hosts, the commands you need for deployment as sudo'able without a password. If you want to use a specific key file on your system, then that is possible as well by setting the fab_key_filename variable to the path of your desired key file. If you need even more control, then you can instantiate your own PKey instance and put it in the fab_pkey variable - this will cause it to be parsed directly to the underlying call to connect without modification. User-local configurations and .fabric In the real world (at least the part I'm in), most projects are developed in teams. This means that more than one person might be allowed to deploy any given project. This often means that we'll have some per-developer configuration located on his/her computer - the username that is used to log into the servers are the prime example of such individualized configuration. This is where the .fabric file comes in; before the fabfile is loaded, Fabric will look for a .fabric file in your home directory and load it if found. Because it is loaded before the fabfile, you can override these defaults in the fabfile. The format of the .fabric file is very simple. The file is line-oriented and every line is evaluated based on these three rules: - Lines that are empty apart from white spaces, are ignored. - Lines that begin with a hash #character, are ignored. - Otherwise, the line must contain a variable name, followed by an equal sign and then some text for the value - both name and value will be stripped of leading and trailing white spaces. An example file might look like this: # My default username: fab_user = montyp And that's basically all there is to it. TODO: - What the different fab_modesdo. - @hosts, @mode and the other decorators - simulating "roles" with @hosts
http://www.nongnu.org/fab/user_guide.html
crawl-002
en
refinedweb
Using CsLex The SDK ships with a tool called CsLex, which is a C# utility for building lexers. It takes in a specially formatted file and generates a C# class that is then compiled into the project. The generated lexer is very efficient, implemented as a set of lookup decision tables that implement the regular expression rules that describe how to match a token. The version of CsLex in the SDK is based on the version by Brad Merrill, with some modifications, most notably to add support for Unicode text. This document is not intended to be a full guide to using CsLex. The original documentation is a very good introduction to the file format and how to implement a lexer, and should be considered essential reading before continuing with the version in the ReSharper SDK. Using CsLex in the SDK CsLex expects an input file that describes how to match tokens. This file includes some user code, CsLex directives and regular expression rules. See the official documentation for more details. The ReSharper SDK automatically includes a .targets file that will set up a project to use CsLex. The input file should be named after the language being analysed, with a .lex suffix, e.g. css.lex. This file should be added to a C# project, and have its Build Action set to CsLex. When the project is built, the CsLex targets will invoke the CsLex utility with the input file and generate a C# file based on the file name, replacing .lex with _lex.cs, e.g. css.lex will generate css_lex.cs. This C# file is automatically added to the list of files being compiled, which allows the project to compile correctly without the file being added to the .csproj file. It is recommended to add the _lex.cs file to the project, so that ReSharper can resolve symbols used in the file in surrounding code. The CsLex utility will also create a file with a _lex.depends suffix. This file does not need to be added to the project, as it simply lists any files that are included into the .lex file, and is recreated on each run. It is used for incremental building, and up-to-date checks - the lexer only needs to be rebuilt if any of its dependencies (or the input file itself) have been modified. Neither the _lex.cs nor the _lex.depdends files need to be added to source control - they are both recreated whenever the lexer class is regenerated. However, adding them to source control will not cause issues. The _lex.cs file gets generated as a partial class, which is used to provide a non-generated (but mostly boilerplate) implementation of ILexer or IIncrementalLexer. Creating a lexer for ReSharper A CsLex input file is made up of three sections - user code, CsLex directives and regular expression rules. User code When creating a lexer for ReSharper, the user code is typically just using statements, which can easily be implemented by looking at a generated _lex.cs file and seeing what is missing. Something like the following is required, although the exact contents will depend on the custom language being implemented: using System; using System.Collections; using JetBrains.Util; using JetBrains.Text; using JetBrains.ReSharper.Psi; using JetBrains.ReSharper.Psi.Parsing; using JetBrains.ReSharper.Psi.ExtensionsAPI.Tree; Directives The CsLex directives should be similar to the following: %unicode %init{ myCurrentTokenType = null; %init} %namespace MyCustomLanguage.Psi.Parsing %class MyLexerGenerated %public %implements IIncrementalLexer %function _locateToken %virtual %type TokenNodeType %eofval{ myCurrentTokenType = null; return myCurrentTokenType; %eofval} %include Unicode.lex The directives shown above are typical of the values required for a ReSharper lexer: %unicode- a switch to indicate that CsLex should generate lookup tables for all Unicode characters, and not just 8-bit character sets. %init- a block that gets copied verbatim into the generated lexer’s constructor. The code here initialises a field called myCurrentTokenTypeto null. This field is not generated by CsLex, but specified in the non-generated partial class. %namespace- defines the namespace used for the generated lexer class. A typical namespace for a lexer would end .Psi.Parsing. %class- specifies the name of the generated lexer class. Typically this is the language name followed by LexerGenerated, e.g. CssLexerGenerated. %public- causes the class to be generated as public. This is necessary so that the lexer can be instantiated from outside of the defining assembly. %implements- defines an interface or base class that the generated class will inherit/implement. Typically this will be IIncrementalLexeror ILexer. Strictly speaking, because the class is a partial class, the interface could be defined in the non-generated partial file. %function- defines the name of the tokenizing function. By default this is yylex, here it is replaced with _locateToken. This name is used as it is the implementation of a method called LocateTokenin the non-generated partial class. See below for more details. %virtual- causes the tokenizing function to be declared as virtual, so it can be overridden in a deriving class. Not strictly necessary for all lexers. %type- declare the return type of the tokenizing function. This should be TokenNodeType. %eofval- a block that gets copied verbatim into the lexer, and is executed when the lexer reaches the end of the input file. Here it resets the myCurrentTokenTypefield to null, and returns null. %include- includes a file. The contents of that file are treated as though they were always part of the input file, at that location. The parameter of the %includestatement is a filename that can be a fully qualified or relative path. If it is a relative path, it is first checked against the location of the input file, and if not found, then checked against the location of the CsLex utility itself. In this way, %include Unicode.lexwill find the Unicode.lexfile that ships with the SDK and provides regular expressions for various Unicode character classes. See below for details on this file. Custom languages can also add other directives, as per the CsLex documentation, and of course will define macros to use in the regular expression rules, and the states in which those rules are valid. WHITE_SPACE_CHAR=({UNICODE_ZS}|(\u0009)|(\u000B)|(\u000C)) WHITE_SPACE=({WHITE_SPACE_CHAR}+) %state YY_IN_NTH %state YY_IN_URI %state YY_IN_JS_EXPRESSION %state YY_CONDITIONAL Note the use of the UNICODE_ZS macro from the Unicode.lex file (see below), and that the escape characters for tab, etc. are declared as 16-bit Unicode values. Rules The final section is a set of rules, which are defined by three things, a state, a regular expression and an action. An example is: <YYINITIAL> {WHITE_SPACE} { return CssTokenType.WHITE_SPACE; } <YYINITIAL>is the state in which the rule can be matched. If the lexer does not define any custom states, this will always be <YYINITIAL>. Note that in the original CsLex, the state is optional, and the rule is matched in all states. This is not true for the version that ships with the SDK - the state is required. {WHITE_SPACE}is the regular expression that should be matched. A number inside braces, as shown here, is a macro expansion. - The rest of the line is the action that will be invoked when the rule matches. It should be wrapped in braces, and is copied verbatim into the generated lexer C# class. The typical implementation for an action is to set the myCurrentTokenType field to the appropriate token node type, and then return the same value. This will both set the current token type for anything that needs to check it, but also return it to the calling method. Some actions might want to change the lexer state, which they can do with the yybegin method, such as yybegin(YY_MYNEWSTATE). The built-in ReSharper lexers tend to follow a slightly different pattern for the actions: <YYINITIAL> {WHITE_SPACE} { myCurrentTokenType = makeToken(CssTokenType.WHITE_SPACE); return myCurrentTokenType; } This passes the chosen token node type to the makeToken method, defined in the partial file implementation (there is no reason for the lowercase ‘m’), allowing the partial file implementation a chance to modify the token before it is actually used. Some lexers will implement makeToken like this: private TokenNodeType MakeToken(TokenNodeType type) { return myCurrentTokenType = type; } This assigns the given token node type to the myCurrentTokenType field, and simply returns it. This is frequently unnecessary, as the rule action also assigns the value to the myCurrentTokenType, as does the method that calls the tokenizing function. A simple return statement is often enough for lexers. Partial file implementation CsLex will generate a lexer that can declare that it implements a given interface, IIncrementalLexer, or ILexer. However, it doesn’t actually provide an implementation for those interface members. The version of CsLex that ships in the SDK will generate a partial class, which allows extra type members to be defined in a separate .cs file. Typically, this is fairly boilerplate code that references the yy_ variables internal to the lexer: public partial class MyCustomLexerGenerated { private TokenNodeType myCurrentTokenType = null; public void Start() { Start(0, yy_buffer.Length, YYINITIAL); } public void Start(int startOffset, int endOffset, uint state) { yy_buffer_index = startOffset; yy_buffer_start = startOffset; yy_buffer_end = startOffset; yy_eof_pos = endOffset; yy_lexical_state = (int) state; myCurrentTokenType = null; } public void Advance() { myCurrentTokenType = null; LocateToken(); } public object CurrentPosition { get { TokenPosition tokenPosition; tokenPosition.CurrentTokenType = myCurrentTokenType; tokenPosition.YyBufferIndex = yy_buffer_index; tokenPosition.YyBufferStart = yy_buffer_start; tokenPosition.YyBufferEnd = yy_buffer_end; tokenPosition.YyLexicalState = yy_lexical_state; return tokenPosition; } set { var tokenPosition = (TokenPosition) value; myCurrentTokenType = tokenPosition.CurrentTokenType; yy_buffer_index = tokenPosition.YyBufferIndex; yy_buffer_start = tokenPosition.YyBufferStart; yy_buffer_end = tokenPosition.YyBufferEnd; yy_lexical_state = tokenPosition.YyLexicalState; } } public TokenNodeType TokenType { get { LocateToken(); return myCurrentTokenType; } } public int TokenStart { get { LocateToken(); return yy_buffer_start; } } public int TokenEnd { get { LocateToken(); return yy_buffer_end; } } public IBuffer Buffer { get { return yy_buffer; } } public uint LexerStateEx { get { return yy_lexical_state; } } public int LexemIndent { get { return 7; } } public int EOFPos { get { return yy_eof_pos; } } private void LocateToken() { if (myCurrentTokenType == null) { myCurrentTokenType = _locateToken(); } } private TokenNodeType makeToken(TokenNodeType type) { return myCurrentTokenType = type; } } Most of these methods and properties are self explanatory, but it’s worth looking at a few in more detail. All of the properties call LocateToken to ensure that there is a current token. If there isn’t, LocateToken will call the lexer’s tokenizing function (often created with the custom name of _locateToken, as specified in the directives). The tokenizing function will find the next token, and return it. Some of the built-in lexers will also set myCurrentTokenType in the rule actions, or indirectly by calling makeToken from the rule action. This isn’t necessary, as the LocateToken method will update the field. The ILexer.Advance method first sets the current token type to null and then calls LocateToken. By clearing the current token, it ensures that the _locateToken is called, and the next token is found, returned and the current position in the file is updated. The CurrentPosition property creates an instance of TokenPosition that includes all of the internal lexer variables (start position, end position, current index and state, as well as the current token type). It will save and restore state such that lookahead can work. The IIncrementalLexer.Start method does a similar thing, resetting the internal variables based on the parameters to the method. The LexemIndent property returns the magic number 7. This value is used during incremental lexing to give 7 characters of context when resuming lexing. Unicode The CsLex utility ships with a file called Unicode.lex, which can be included into a lexer input file using the %include Unicode.lex directive. It provides a set of macros that expand to a set of character classes that match known Unicode character categories. These can be very useful for creating rules that understand Unicode, and don’t have issues with Unicode based whitespace. UNICODE_CN=[\u0370-\u0373...] UNICODE_LU=[\u0041-\u005A...] UNICODE_LL=[\u0061-\u007A...] UNICODE_LT=[\u01C5\u01C8...] UNICODE_LM=[\u02B0-\u02C1...] UNICODE_LO=[\u01BB\u01C0...] UNICODE_MN=[\u0300-\u036F...] UNICODE_ME=[\u0488-\u0489\u06DE\u20DD-\u20E0\u20E2-\u20E4] UNICODE_MC=[\u0903\u093E...] UNICODE_ND=[\u0030-\u0039...] UNICODE_NL=[\u16EE-\u16F0\u2160-\u2182\u3007\u3021-\u3029\u3038-\u303A] UNICODE_NO=[\u00B2-\u00B3...] UNICODE_ZS=[\u0020\u00A0\u1680\u180E\u2000-\u200A\u202F\u205F\u3000] UNICODE_ZL=[\u2028] UNICODE_ZP=[\u2029] UNICODE_CC=[\u0000-\u001F\u007F-\u009F] UNICODE_CF=[\u00AD\u0600...] UNICODE_CO=[\uE000-\uF8FF] UNICODE_CS=[\uD800-\uDFFF] UNICODE_PD=[\u002D\u058A...] UNICODE_PS=[\u0028\u005B...] UNICODE_PE=[\u0029\u005D...] UNICODE_PC=[\u005F\u203F...] UNICODE_PO=[\u0021-\u0023...] UNICODE_SM=[\u002B\u003C...] UNICODE_SC=[\u0024\u00A2...] UNICODE_SK=[\u005E\u0060...] UNICODE_SO=[\u00A6-\u00A7...] UNICODE_PI=[\u00AB\u2018...] UNICODE_PF=[\u00BB\u2019...] Alternatives to CsLex CsLex is intended for C# projects - it generates a C# file. ReSharper does not provide any utilities for generating lexers in any other language. However, as long as the lexer implements ILexer or IIncrementalLexer, and returns tokens that are singleton instances of TokenNodeType, then the implementation can be in any language, or using any tool. The lexer interfaces can be implemented as partial classes, or in standalone classes that delegate to the generated lexer. Modifications to the original CsLex The version of CsLex that ships with the SDK has a number of differences to the original version. These include: - Support for Unicode, by specifying the %unicodedirective. This generates lookup tables for all Unicode characters, rather than just 8-bit character sets, and also handles Unicode escape characters. - Added a preprocessor, using the %includedirective. The content of the included file is inserted into the input file text before it is processed, and treated as though it were always part of the file, at the location of the %includedirective. The specified file name can be a rooted path name, or relative. If relative, it is first resolved against the input file, and if not found, then against the location of the CsLex utility. This allows for loading the Unicode.lexfile that ships with the SDK. - The generated class is now partial. - Added the %privatedirective. The class will be declared as internal, and the generated constructors will be private. Creating an instance of the class is handled by methods in the non-generated partial class. The %privatedirective has higher precedence over %public. - Added the %virtualdirective, which makes the main lexing function virtual. - ReSharper’s text buffers ( IBuffer) are used instead of System.IO.TextReaderand a char[]. - The actions for each matching rule are now implemented in a switchstatement rather than an array of delegates, for memory and performance reasons. - The %charand %linedirectives that enable character and line counting have been removed. This means the yycharand yylinevariables are not available. - Each state is written as a protectedvariable, rather than private. - Some implementation methods are no longer available, such as yy_advance, yy_mark_startand yy_mark_end. - Minor changes to fix small issues with the code. - Standard This code was generated by a toolheader to indicate that the file is generated, causing ReSharper to not analyser the contents. CsLex COPYRIGHT NOTICE, LICENSE AND DISCLAIMER CsLex is Copyright 2000 by Brad Merrill.
https://www.jetbrains.com/help/resharper/sdk/CustomLanguages/Parsing/Lexing/CsLex.html
CC-MAIN-2020-16
en
refinedweb
My website had been stable and running without error but I'd been running on 2.3.3, 2.0.52 and 3.1.3; since everything was behind the times I decided to finally reinstall; I made the mistake of doing it on the drive that everything was being served off of instead of using my backup drive. :-( I installed python 2.4 and it seems to work when I call it from the shell. I did not do a frameworkInstall, which seems to be recommended by some people. I am too much of a unix novice to understand what the implications of frameworkInstall are. I installed apache and mod_python as follows: cd ~/Desktop/httpd-2.0.53 ./configure --enable-so --with-mpm=worker make sudo make install cd ~/Desktop/mod_python-3.1.4 ./configure --with apxs=/usr/local/apache2/bin/apxs --with-python=/usr/local/bin/python2.4 make sudo make install I restored my old httpd.conf file and my website files. My index.html file is straight html which links to .py files. As soon as I try to access any of the .py stuff I now get an internal server error message and the server log shows this: [Tue Feb 15 16:25:25 2005] [notice] mod_python: Creating 32 session mutexes based on 6 max processes and 25 max threads. [Tue Feb 15 16:25:25 2005] [notice] Apache configured -- resuming normal operations [Tue Feb 15 16:25:36 2005] [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/local/lib/python2.4/site-packages/mod_python/apache.py", line 22, in ? import time ImportError: Failure linking new module: : dyld: /usr/local/apache2/bin/httpd Undefined symbols: /usr/local/lib/python2.4/lib-dynload/time.so undefined reference to _PyArg_Parse expected to be defined in the executable /usr/local/lib/python2.4/lib-dynload/time.so undefined reference to _PyArg_ParseTuple expected to be defined in the executable /usr/local/lib/python2.4/lib-dynload/time.so undefined reference to _PyDict_GetItemString expected to be defined in the executable /usr/local/lib/python2.4/lib-dynload/time.so un <--what happened here?? [Tue Feb 15 16:25:36 2005] [error] make_obcallback: could not import mod_python.apache. for what its worth I can access various parts of the time module when I run Python2.4 from the shell. Going back through the the mailing list I saw that Graham Dumpleton (back on Dec 23) suggested the output from this might be useful; what I get is different from what he saw but I'm not sure what to make of my result: jraines-Computer:~/Desktop/Website jrraines$ otool otool -L /usr/local/apache2/modules/mod_python.so otool: can't open file: otool (No such file or directory) /usr/local/apache2/modules/mod_python.so: /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 71.1.1) jraines-Computer:~/Desktop/Website jrraines$ ls /usr/local/apache2/modules httpd.exp mod_python.so My problem seemed like it might be similar to a thread titled Weird ob_callback problems at the end of Jan. I tried the suggestions Grisha made:Try defining DYLD_FORCE_FLAT_NAMESPACE=1 environment variable before launching httpd. That didn't help either.
http://modpython.org/pipermail/mod_python/2005-February/017444.html
CC-MAIN-2020-16
en
refinedweb
qscript 0.7.1 a tiny, small, & fast scripting lang. To use this package, run the following command in your project's root directory: QScript A fast, static typed, scripting language, with a syntax similar to D Language. Setting it up To add QScript to your dub package or project, run this in your dub package's directory: dub add qscript After adding that, look at the source/demo.d to see how to use the QScript class to execute scripts. Getting Started To get started on using QScript, see the following documents: spec/syntax.md- Contains the specification for QScript's syntax. spec/functions.md- Contains a list of predefined QScript functions. source/demo.d- A demo usage of QScript in D langauage. Shows how to add new functions examples/- Contains some scripts showing how to write scripts. The code is thoroughly documented. Separate documentation can be found here. Building Demo To be able to run basic scripts, you can build the demo using: dub build -c=demo -b=release This will create an executable named demo in the directory. To run a script through it, do: ./demo path/to/script You can also use the demo build to see the generated NaVM byte code for your scripts using: ./demo "path/to/script" "output/bytecode/file/path" Features - Simple syntax - Dynamic arrays - Fast execution - Static typed. - Function overloading - References TODO For Upcoming Versions - add cast(Type) - unsigned integers - bitshift operators - Structs - Be able to load multiple scripts, to make it easier to separete scripts across files. Something similar to D's import Hello World This is how a hello world would look like in QScript. For more examples, see examples/. function void main(){ writeln ("Hello World!"); } - Registered by Nafees Hassan - 0.7.1 released 14 days ago - Nafees10/qscript - MIT - Authors: - - Dependencies: - utils, navm - Versions: - Show all 19 versions - Download Stats: 0 downloads today 0 downloads this week 1 downloads this month 111 downloads total - Score: - 1.9 - Short URL: - qscript.dub.pm
https://code.dlang.org/packages/qscript
CC-MAIN-2020-16
en
refinedweb
#.REGISTER and #.REGISTER_NEW This directive in the Asn2wrs conformation file can be used to register a dissector for an object to an OID. This is very useful for X.509 and similar protocols where structures and objects are frequently associated with an OID. In particular, some of the structures here encode an OID in a field and then the content in a different field later, and how that field is to be dissected depends on the previously seen OID. One such example can be seen in the ASN.1 description for X.509/AuthenticationFramework which has a structure defined such as AlgorithmIdentifier ::= SEQUENCE { algorithm ALGORITHM.&id({SupportedAlgorithms}), parameters ALGORITHM.&Type({SupportedAlgorithms}{@algorithm}) OPTIONAL } Which means that the parameters field in this structure, what this field contains and how it is to be dissected depends entirely upon what OID is stored inside algorithm. A whole bunch of protocols use similar types of constructs. While dissection of this particular structure itself currently has to be hand implemented inside the template (see x509af for examples of how this very structure is handled there). the #.REGISTER option in the conformance file will at least make it easy and painless to attach the actual OID to dissector mappings. Usage To get Asn2wrs to generate such automatic registration of OID to dissector mappings just use the #.REGISTER directive in the conformation file. Example #.REGISTER Certificate B "2.5.4.36" "id-at-userCertificate" Which will generate the extra code to make sure that anytime Wireshark needs to dissect the blob associated to the OID "2.5.4.36" it now knows that that is done by calling the subroutine to dissect a Certificate in the current protocol file. The "id-at-userCertificate" is just a free form text string to make Wireshark print a nice name together with the OID when it presents it in the decode pane. While this can be just about anything you want I would STRONGLY use the name used to this object/oid in the actual ASN.1 definition file. Include File During the compilation phase Asn2wrs will put all the extra registration code for this in the include file packet-protocol-dis-tab.c Make sure that you include this file from the template file or the registration to an OID will never occur. #include "packet-protocol-dis-tab.c" should be included from the proto_reg_handoff_protocol function in the template file. See Also The various dissectors we have for X.509 such as the X.509AF which contains several examples of how to use this option. That dissector can also serve as an example on how one would handle structures of the type AlgorithmIdentifier above. (Asn2wrs can NOT handle these types of structures so we need to implement them by hand inside the template)
https://wiki.wireshark.org/%23.REGISTER
CC-MAIN-2020-16
en
refinedweb
TwiML™ Voice: <Room> Programmable Video Rooms are represented in TwiML through a new noun, <Room>, which you can specify while using the <Connect> verb. The <Room> noun allows you to connect to a named video conference Room and talk with other participants who are connected to that Room. To connect a Programmable Voice call to a Room, use the <Room> noun with the UniqueName for the Room. You may choose the name of the Room. It is namespaced to your account only. Connect to a Room When an incoming phone call is made to a Twilio Phone Number, a developer can connect the call to a Twilio Video Room. <Room> and <Connect> Usage Examples <?xml version="1.0" encoding="UTF-8"?> <Response> <Connect> <Room>DailyStandup</Room> </Connect> </Response> Setting the participantIdentity You can set a unique identity on the incoming caller using an optional property called 'participantIdentity'. <?xml version="1.0" encoding="UTF-8"?> <Response> <Connect> <Room participantIdentity='alice'>DailyStandup</Room> </Connect> </Response> Note: If you don't set the participantIdentity, then Twilio will set a unique value as the Participant identity. Need some help? We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the Twilio tag on Stack Overflow.
https://www.twilio.com/docs/voice/twiml/connect/room
CC-MAIN-2020-16
en
refinedweb
EP0288216A1 - Electrical fluid pump - Google PatentsElectrical fluid pump Download PDF Info - Publication number - EP0288216A1EP0288216A1 EP19880303388 EP88303388A EP0288216A1 EP 0288216 A1 EP0288216 A1 EP 0288216A1 EP 19880303388 EP19880303388 EP 19880303388 EP 88303388 A EP88303388 A EP 88303388A EP 0288216 A1 EP0288216 A1 EP 0288216A1 - Authority - EP - European Patent Office - Prior art keywords - armature - fluid - fluid pump - combination - pump according - Prior art date - Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.) - Granted Links - 239000006096 absorbing agents Substances 0 claims description 2 - 238000007789 sealing Methods 0 claims description 2 - 238000007493 shaping process Methods 0 claims 2 - 239000000203 mixtures Substances 0 description 9 - 238000003754 machining Methods 0 description 6 - 230000000694 effects Effects 0 description 3 - 239000000463 materials Substances 0 description 3 Images Classifications -/03—Pumps characterised by combination with, or adaptation to, specific driving engines or motors driven by electric motors - F04B17/04—Pumps characterised by combination with, or adaptation to, specific driving engines or motors driven by electric motors using solenoids - F04B17/046—Pumps characterised by combination with, or adaptation to, specific driving engines or motors driven by electric motors using solenoids the fluid flowing through the moving part of the motor -/02—Pressure lubrication using lubricating pumps - F01M2001/0207—Pressure lubrication using lubricating pumps characterised by the type of pump - F01M2001/0223—Electromagnetic pumps Abstract The pump operates using an electromagnetically vibrated armature (26) with a central through-conduit (4) and passive valves (5,6,). The armature has a narrower extension (32) to operate the valves, through which the conduit also passes. In order to avoid fluid building up between the armature and its guide (13), and impeding the movements of the armature, at least one of these is irregularly shaped, to provide firstly longitudinal through paths for such fluid to be quickly scavenged and removed, and secondly, cooperating guiding surfaces which are bypassed by the fluid, and not subject to fluid build-up between them. Therefore, power losses through fluid build ups is avoided, without necessity for state of the art radial drilling of the armature (13). Description - This invention relates to an electrical pump for fluids, eg hot water or for coffee machines, with economical outlay and power consumption. - According to a known pump, a generally cylindrical armature/piston combination which slides axially and has a generally axial internal bore to pass fluid from an inlet to an outlet, also has one or more radial bores between the axial bore and its exterior inside a sliding guide for the combination. - The radial bore serves to prevent fluid build-up between the combination and its guide. Such a build-up has a braking effect on the armature/piston combination due to pressure and viscosity, but is relieved by the prior art radial bore. Unfortunately the bore is expensive to machine and the machining can form a burr (a jagged irregularity projecting from the hole) which tends to scrape and wear out at least the assembly and its guide. In spite of consequent expense, burrs of wear, the loss of force and power due to peripheral fluid build-up can amount to 50%, and so has typically been dealt with in this way. - The invention aims also to minimize the peripheral fluid presence and consequent braking effect, but without causing a burr and/or incurring the expense of radially machining into the axial bore. - According to the invention set out in Claim 1 the armature/piston assembly and/or its guide is shaped to facilitate peripheral fluid mobility. A second aim according to a preference of the invention is to simplify production of the downstream part or end of the combination which has in the past been externally tapered or reduced in diameter by machining, which is quite expensive in time and loss of magnetic material. The invention preferably provides an armature assembly formed by a reduced diameter body which may be of a different, non-magnetic material or metal, crimped to the main body of the assembly. The axial bore must be continuous but can be machined or moulded, cast etc separately before the two pieces are crimped together. The preferred embodiments are now detailed with reference to the drawings, in which: - Figures 1 and 2 show in diametrical section a known pump and an inventive pump; - Figures 3 and 4 show likewise an armature/piston piece, and its crimped together combination with a narrower downstream piece; and - Figures 5, 6 and 7 show guide and armature cross sections. - Referring to Figure 1, the known pump has an inlet port 1 from which fluid is pumped by an axially vibrating armature/piston combination 2 to an outlet port 3, through an axial bore 4 in the entire length of combination 2, an inlet valve 5 and an outlet valve 6 in an axial passage in the housing 7 leading to the outlet port 3. - The outlet valve 6 is merely a one-way passive or flow-responsive valve, but the inlet valve 5 is opened by separation of the piston part, ie by the leftward movements of armature/piston 2 in its vibrations. The leftward movements cause fluid to be transferred from inlet port 1 past the inlet valve 5 and thence to the outlet 3. The leftward armature movements are caused by repeated energizations of a solenoid coil 8 via a terminal T and act against a return spring 9. The repeated energizations can result conveniently from half-wave rectified ac, eg at 50 Hz, between the half-waves of which the spring returns the combination rightward to close inlet valve 5. Both valves 5 and 6 are spring closed by return springs 10 and 11, spring 10 being weaker than spring 9. - An annular volume 12 is arranged to collect fluid which unavoidably flows between the outside of the armature/piston and its guide 13, but tends to get full enough of fluid to impede the amature vibrations. This tendency can be relieved, as known, by one or more radial bores 14, which provide relief conduits, from volume 12 when undesirably full of collected fluid, to the central bore 4. Such machining as aforesaid is costly and can leave burrs or loose metal particles prejudicial to the action or life of the pump. Another drawback, trapped fluid suffers a time delay before it can even reach volume 12, so it is impeding the vibrations during this time delay, even if fluid does not accumulate unduly in volume 12. The invention seeks to avoid all these possible drawbacks. - As will also be appreciated, the wider part of the armature 2 comes to rest each return stroke against a shock absorber ring 15. By the above very desirable avoidance of the impending of the vibrations, there is an unfortunate tendency to cause greater shocks. Through ring 15 an elongated narrow part 16 of the armature extends, preferably via sealing O-rings 17 and 18, to abut and seat inlet valve 5. The state of the art is to machine the mild steel down from the wider to this narrower diameter which takes time, wastes material and may cause burrs or leave particles which can separate later and block flow-ways. Moreover mild steel is heavy, causing greater shocks. More machining away is involved to provide annular volume 12. The invention appreciates that, although a relatively long axial bore has to be provided, the narrow end need not be magnetic or so heavy. The magnetic circuit may comprise outside the coil outer encapsulation 19, a rectangular yoke (not shown) of two L-sectioned pieces crimped together along their corners, a first cylindrical internal part 20 outside the thin armature guide 13, a ring 21 magnetically connecting the yoke and cylinder 20, a second cylindrical internal part 22, and a ring 23 communicating cylinder 22 to the yoke. The L-pieces have respective holes closely surrounding rings 21, 23. - Many alternative magnetic circuits are possible. The cylindrical magnetic gap 24 between the two cylindrical parts as well known attracts the armature adjacent to it, ie. leftward in Fig.1 against spring 9, whenever coil 8 is energized. The material used for the narrow part 16 of the armature therefore need not be magnetic since it does not interact with gap 24 or other magnetic circuitry. - Referring to the inventive pump of Fig.2 wherein like numerals reference like components, there is no external annular volume 12 but the guide 13 has eg. five internal longitudinal ribs 25 (see also Fig.5) on which the sliding armature bears and between which any trapped fluid can readily return (as shown by the flow line arrows 25ʹ) to the pumped stream travelling rightwards through the bore as before. The ribs can be provided without machining and at low cost in the mould of plastic guide 13, and free particles are unlikely and not metallic. Alternatively, the wide part of the armature/piston can have a non-circular cross-section as shown by the four longitudinal flutes of Fig.6 or the extended polygon of Fig.7. There should be longitudinal ribs or grooves or non-circular irregularities providing bearing surfaces, and no trapped space, but instead, a continuous communication between all peripheral points and the main axial pumped stream. The longitudinal irregularities can be strictly parallel to the axis, or can be oblique or helical or otherwise to provide this longitudinal communication, and hence lack of pressure build-up and viscosity drag, while enabling efficient piston effect and hence pumping action. The armature/piston can be in two parts as shown in Figs. 3 and 4, while having the inventive elongated irregularities of Figs.5-7, although the two-part arrangement could be adopted alone. The pressure reducing irregularities are best seen in the transverse cross-sectional views of Figs 5-7, but they are longitudinal in nature, being grooves or ribs or corners either parallel to the axis on having an axially directed component (eg. helical irregularities). - Referring to Fig. 3 a wide part 26 of the armature/piston has a central bore 27 and a holding portion 28, the top of which has an annular groove 29 to surround a lip 30 which can be crimped inwards by a suitable tool (not shown). The section of bore 31 of holding part 28 serves to accommodate a non-magnetic part 32 (eg. of brass or lighter plastics as suitable) shown in Figs 2 and 4. Part 32 has a waist 33 to accommodate in fluid-tight manner an annulus of crimped-in material from lip 30 as can be seen in Fig. 4. The end 34 of part 32 is shaped to serve as a valve seat for inlet valve 5, Fig. 2. The inlet end 35 of wide part 26 may be flared to promote flow and have a circular projection to seat and hold the return spring 9. Any ribs or grooves in the wide part of the armature cannot be seen in Fig.4, and indeed may not be present whenever such flow-conducive shapings are applied to the guide only (as presently preferred, eg see Fig 5) and not to the armature (embodied as by Fig 6 or Fig 7).). Not only is the narrow part provided without necessity to machine down the wide part, but boring only of shorter axial lengths is needed. Claims (7) 1. A fluid pump comprising a reciprocating armature/piston combination, through a bore in which the pumped fluid passes, characterised by (Fig 6 or 7) a shaping of, either the exterior of the combination or the interior of a guide for its reciprocation (Fig 5) , whereby fluid tending to become trapped between the internal walls of the guide (13) and the external wall of the wide part (26) of the armature/piston combination (2) is instead returned to the pumped stream, said shaping composing cross-sectional differential or relative irregularities between said internal and external walls. 2. A fluid pump according to Claim 1 characterised in that shock or noise caused by lack of braking by trapped fluid is countered by a shock-absorber ring (15) and any necessary sealing rings (17,18) acting on a transverse face (29,30) of the wide part (26) of the piston. 3. A fluid pump according to Claim 2 wherein the reciprocating armature/piston combination has a relatively narrow extension (32) not abutted by said rings but passing therethrough, which seats and unseats a valve (5), wherein the extension projects axially from said transverse face (29,30) of the armature, and is a separate bored non-magnetic piece crimped (30, 33) or otherwise attached to a wide part (26) which is magnetic. 4. A fluid pump according to Claim 3 comprising a waist (33) in the axial extension (32) receiving a crimped in portion (30) of said transverse face of the wide part (26) of the combination. 5. A fluid pump according to any of Claims 1-4, wherein the longitudinal irregularities are guide ribs (25) extending along the inside wall of the armature guide. 6. A fluid pump according to any of Claims 1-4 wherein the longitudinal irregularities are provided by channelling, or by a hexagonal (Fig 6) or grooved (Fig 5) cross-section of the wide part (26) of the combination. 7. A fluid pump according to any of Claims 1-6 having series inlet and outlet valves on the downstream side of the combination, which are passive but biased towards closure positions. Priority Applications (2) Publications (2) Family ID=10615907 Family Applications (1) Country Status (4) Cited By (32) Families Citing this family (7) Citations (6) - 1987 - 1987-04-15 GB GB878709082A patent/GB8709082D0/en active Pending - 1988 - 1988-04-14 DE DE19883870017 patent/DE3870017D1/en not_active Expired - Fee Related - 1988-04-14 EP EP19880303388 patent/EP0288216B1/en not_active Expired - Lifetime - 1988-04-14 ES ES88303388T patent/ES2030856T3/en not_active Expired - Lifetime
https://patents.google.com/patent/EP0288216A1
CC-MAIN-2020-16
en
refinedweb
#include <vtkXMLPRectilinearGridWriter.h> vtkXMLPRectilinearGridWriter writes the PVTK XML RectilinearGrid file format. One rectilinear grid input can be written into a parallel file format with any number of pieces spread across files. The standard extension for this writer's file format is "pvtr". This writer uses vtkXMLRectilinearGridWriter to write the individual piece files. Definition at line 36 of file vtkXMLPRectilinearGridWriter.h. Reimplemented from vtkXMLPStructuredDataWriter. Definition at line 40 of file vtkXMLPRectilinearGridWriter.h. Create an object with Debug turned off, modified time initialized to zero, and reference counting on. Reimplemented from vtkAlgorithm. Return 1 if this class type is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkXMLPStructured. Get/Set the writer's input. Reimplemented from vtkXMLWriter. Get the default file extension for files written by this writer. Implements. Implements vtkXMLPStructuredDataWriter. Reimplemented from vtkXMLPDataWriter.
https://vtk.org/doc/release/5.8/html/a02374.html
CC-MAIN-2020-16
en
refinedweb
C Program For Palindrome String – What is Palindrome? A palindrome is a sequence of characters which reads the same backward as forward. A palindrome could be a word, phrase, number, or other sequences of characters. such as madam or racecar. Palindrome Word Examples: Madam, Racecar, Radar, Etc. C Program For Palindrome If you are looking for a palindrome string in C program, this article will help you to learn how to write palindrome program in C language. Just go through this C program for palindrome to check whether a string is palindrome or not. After going through this C programming tutorial, you will be able to write a C palindrome program. C Programming Tutorials C Program For Palindrome Numbers C Program To Reverse a String with Using Function C Program To Reverse a String without Using Function C Program To Reverse a String Using Recursion C Program To Reverse a String Using Pointers C Program For Palindrome String Learn how to write a Palindrome program in C language. Writing a C program to check whether a string is a palindrome or not can be done using various techniques, but here in this program, we show how to write palindrome program in a proper way. This is the eighth C programming example in the series, it helps newbies, students and B.Tech graduates in enhancing C programming and land on their dream job. All the best guys in learning c programming with coding compiler website. Happy Learning. C Program For Palindrome Source Code Copy paste the below source code or write your own logic into C compilers and run the program to see the result. #include <stdio.h> #include <string.h> int main() { //Variables declaration char string1[20]; int i, length; int flag = 0; //Reading a string from user printf("Enter a string:"); scanf("%s", string1); length = strlen(string1); //checking whether a string is palindrome or not for(i=0;i < length ;i++) { if(string1[i] != string1[length-i-1]) { flag = 1; break; } } if (flag) { printf("%s is not a palindrome", string1); } else { printf("%s is a palindrome", string1); } return 0; } C PROGRAM FOR PALINDROME OUTPUT After you compile and run the above program, your C compiler asks you to enter a string to check whether entered string is palindrome or not. After you enter a string, the programming logic will check and show result like below expected output. Example-1 Enter a string: Radar Radar is a palindrome Example-2 Enter a string: King King is not a palindrome
https://codingcompiler.com/c-program-for-palindrome-string/
CC-MAIN-2020-16
en
refinedweb
This chapter provides an overview of RealTime Publishing and includes information on how to write your own transporter, implementation details, helper methods, example transporter implementation, full code listing, edge-case scenarios, and information on intercepting asset publishing events on the management instance. This chapter contains the following sections: Section 49.1, "Overview of RealTime Publishing" Section 49.2, "Writing a Custom Transporter" RealTime Publishing is a pipeline consisting of several jobs. Some jobs run on the management instance while others on the target instance. Following is a brief description of each: Gatherer: Creates the list of publishable assets and decorates it with additional resources (asset types, table rows) that together make up the canonical set of data to be published. Packager: Given the resource listing assembled by Gatherer, Packager creates serialized renditions of each resource and save it in the local fw_PubDataStore table. Transport: Takes the serialized data in fw_PubDataStore created by Packager and copy it to the target-side fw_PubDataStore table. Unpacker: Takes the serialized data in the target-side fw_PubDataStore table and deserialize/save it to the target database. CacheUpdater: Given the list of assets that were successfully saved by Unpacker, CacheUpdater flushes and optionally regenerates relevant parts of the page caches. RealTime Publishing uses asynchronous messaging to track the status of each job. It is not necessary to know the details of the messaging framework, but note that communication with the target system is facilitated through the Transporter. This also includes messages issued by Unpacker to inform the management system that an asset has been saved, prompting the management logic to mark that asset published. Transporter offers several customization options, including: Replacing the OOTB (out-of-the-box or ready to use) transport based on HTTP(s) with one using a different protocol. Publishing to multiple targets within the same publishing session. This section contains the following topics: Section 49.2.1, "To Write Your Own Transporter" Section 49.2.2, "Implementation Details" Section 49.2.3, "Helper Methods" Section 49.2.4, "Example of a Transporter Implementation" Section 49.2.5, "Full Code Listing" Section 49.2.6, "Edge-Case Scenarios" Section 49.2.7, "Intercepting Asset Publishing Events on the Management Instance" Section 49.2.8, "Finishing Touches" Follow these steps to write your own transporter: Subclass the com.fatwire.realtime.AbstractTransporter class. Override the methods ping, sendBatch, listTransports, toString, and remoteExecute. Install the transporter by editing the classes/AdvPub.xml file on the management side. replace the line: <bean id="DataTransporter" class="com.fatwire.realtime.MirrorBasedTransporterImpl" singleton="false"> with: <bean id="DataTransporter" class="[your transporter class]" singleton="false"> When you override the AbstractTransporter methods, keep the following in mind: ping() contains the logic that checks whether the target is up or down. Its most prominent use is to power the green/red diagnostic indicator in the publishing console. It is not necessary for ping to be successful in order to launch a publishing session, but this can be a handy tool for diagnosing connection problems. If you are using http(s) to connect to your target, you may be able to use the default implementation rather than override and implement your own. sendBatch() is responsible for uploading data to the remote fw_PubDataStore. It is invoked multiple times with small batches of data from the local fw_PubDataStore that comes in the form of an IList. Batching helps keep memory usage down and is already done behind the scenes for you. remoteExecute() is responsible for communicating with the remote system. The communication is two-way - management sends commands to dispatch remote jobs and cancellation requests, while the target sends back messages that indicate its status. The contents of these messages are immaterial to remoteExecute, all it needs to do is send those requests and return the responses. listTransports() is a listing of the underlying transports, in case there are multiple targets. If there is only a single target, this method can just return a toString() rendition of the current transport. toString() is a human-friendly descriptor of this transport. For example, a typical value would be. However, any other string is acceptable, including targetDataCenter-Virginia, serverOn8080, etc. A few helper methods are available in AbstractTransporter: protected void writeLog(String msg) write message to the publish log protected AbstractTransporter getStandardTransporterInstance() get a new instance of the standard HTTP-based transporter. This can be useful to implement a transport to multiple targets. protected String getParam(String param) obtain the value of a publishing parameter, as configured in the publishing console. Following is an example of a transporter implementation that works with multiple targets. The target is configured as follows: In the Destination Address, specify comma-separated destination URLs. For example: In the More Arguments, specify ampersand-separated username, password, and optional proxy information for the additional servers, suffixed with indexes starting at 1. For example, with one additional target: REMOTEUSER1=fwadmin&REMOTEPASS1=xceladmin&PROXYSERVER1=proxy.com&PROXYPORT1=9090&PROXYUSER1=pxuser&PROXYPASSWORD1=pxpass In AdvPub.xml, replace the DataTransporter bean entry with: <bean id="DataTransporter" class="my.sample.MultiTransporter" singleton="false"> package my.sample; import COM.FutureTense.Interfaces.*; import com.fatwire.cs.core.realtime.TransporterReply; import java.net.URL; import java.util.*; /** * RealTime Publishing transporter to multiple targets. */ public class MultiTransporter extends AbstractTransporter { private boolean initialized = false; List<AbstractTransporter> transporters = new ArrayList(); /** * Ping each underlying target and return true if all of them are up. */ @Override public boolean ping(StringBuilder sbOut) { init(); boolean ret = true; for(AbstractTransporter t : transporters) { boolean thisret = t.ping(sbOut); sbOut.append(t.getRemoteUrl() + (thisret ? " OK" : " Not reachable")); sbOut.append(" ||| "); ret &= thisret; } return ret; } /** * Send the batch to each underliyng transport. */ @Override protected int sendBatch(ICS ics, IList iList, StringBuffer outputMsg) { init(); for(AbstractTransporter t : transporters) { int res = t.sendBatch(ics, iList, outputMsg); if(res != 0) { // Just log the error for now, but this is an // indication that the target may be down // and other notifications may also be appropriate. writeLog("Transporter " + t + " failed with " + res + " " + outputMsg); } } return 0; } /** * Execute the remote command on each transporter and * accumulate their responses. */ @Override protected List<TransporterReply> remoteExecute(ICS ics, String s, Map<String, String> stringStringMap) { init(); List<TransporterReply> res = new ArrayList<TransporterReply>(); for(AbstractTransporter t : transporters) { List<TransporterReply> tres = t.remoteExecute(ics, s, stringStringMap); res.addAll(tres); } return res; } /** * Do some initialization by parsing out the configuration * settings and instantiating a standard http transport * to each target. */ private void init() { if(!initialized) { String remoteURLs = getRemoteUrl(); int count = 0; for(String remoteUrl : remoteURLs.split(",")) { String suffix = (count == 0) ? "" : String.valueOf(count); AbstractTransporter t1 = AbstractTransporter.getStandardTransporterInstance(); URL url; try { url = new URL(remoteUrl); } catch(Exception e) { throw new RuntimeException(e); } t1.setRemoteUrl(remoteUrl); t1.setHost(url.getHost()); t1.setUsername(getParam("REMOTEUSER" + suffix)); t1.setPassword(getParam("REMOTEPASS" + suffix)); t1.setUseHttps("https".equalsIgnoreCase(url.getProtocol())); t1.setContextPath(url.getPath()); t1.setPort(url.getPort()); t1.setProxyserver(getProxyserver()); t1.setProxyport(getProxyport()); t1.setProxyuser(getProxyuser()); t1.setProxypassword(getProxypassword()); t1.setHttpVersion(getHttpVersion()); t1.setTargetIniFile(getTargetIniFile()); transporters.add(t1); ++count; } initialized = true; writeLog("Initialized transporters: " + toString()); } } /** * Provide a full listing of all underlying transports. This is * can be used by other components to determine * whether they need to perform special actions depending on * the number of targets. For example, asset publishing * status processing may need to buffer responses until they're * received from all targets before marking assets published. * @return */ @Override public List<String> listTransports() { init(); List<String> list = new ArrayList(); for(AbstractTransporter t : transporters) { list.add(t.toString()); } return list; } /** * Just a human-friendly description of the transport. This may show * up in the logs, so make it descriptive enougn. */ @Override public String toString() { List<String> transs = listTransports(); StringBuilder sb = new StringBuilder(); for(String t : transs) sb.append(t + " "); return sb.toString(); } } While the example in Section 49.2.5, "Full Code Listing" will work in the optimistic case where all targets are running, there will be times when one target has stopped for a shorter or longer period of time. If you only publish to one target but still mark assets as published, the target that stopped will not be synchronized. You can handle such scenarios in the following ways: If a target stops for a short period of time, you should not mark assets as published, but continue publishing to the target that is running. When the other target is restarted, you will have all earlier assets still queued for publishing. Those assets will be redundantly published to the first target as well, but over short periods of time this is a negligible overhead. If a target stays down for a long period of time, it may be best to remove it from the list of targets in the destination configuration (in this example, remove the second target from the Destination Address in the publishing configuration). That way, assets will continue to be marked as published even though you have only one active target. When the second target is restarted, first perform a database and file system sync, and then add it back to the list of destination addresses. In the first case above, you need to only mark assets as published once they are saved on all targets. To do so, implement custom notification logic as follows: Extend com.fatwire.realtime.messaging.AssetPublishCallback. Override the notify() and optionally progressUpdate() method. For a detailed sample implementation, see "Sample Implementation for Steps 1 and 2" below. Enable the callback in AdvPub.xml on the management side: Add AssetCallback bean. Register the bean with PubsessionMonitor. For the code, see "Enabling the Callback Bean for Step 3" below. Sample Implementation for Steps 1 and 2 package my.sample; import com.fatwire.assetapi.data.AssetId; import java.util.HashMap; import java.util.Map; /** * Buffer asset save notifications until we've received one * from each target. Then mark asset published. */ public class AssetPublishCallbackMulti extends AssetPublishCallback { Map<String, Integer> saveEventsCount = new HashMap<String, Integer>(); /** * Receive notifications about the asset status. * Currently the only available status is SAVED. */ @Override public void notify(AssetId assetId, String status, String from) { String assetIdStr = String.valueOf(assetId); writeLog("Got " + status + " notification from " + from + " for " + assetIdStr); if("SAVED".equals(status)) { Integer numNotifications; if((numNotifications = saveEventsCount.get(assetIdStr)) == null) { numNotifications = 0; } numNotifications = numNotifications + 1; saveEventsCount.put(assetIdStr, numNotifications); if(numNotifications == this.getTargets().size()) { super.notify(assetId, status, from); writeLog("Marked " + assetIdStr + " published"); } } } /** * Intercept progress update messages. Can be used for * monitoring the health of the system but is not required. */ @Override public void progressUpdate(String sessionId, String job, String where, String progress, String lastAction, char status) { super.progressUpdate(sessionId, job, where, progress, lastAction, status); } } Enabling the Callback Bean for Step 3 To add the AssetCallback bean: <bean id="AssetCallback" class="my.sample.AssetPublishCallbackMulti" singleton="false"/> To register the bean with PubsessionMonitor: <bean id="PubsessionMonitor" class="com.fatwire.realtime.messaging.PubsessionMonitor" singleton="false"> <constructor-arg <ref local="DataTransporter" /> </constructor-arg> <constructor-arg <ref local="AssetCallback" /> </constructor-arg> <property name="pollFreqMillis" value="5000" /> <property name="timeoutMillis" value="100000" /> </bean> When publishing to multiple destinations, it is useful to distinguish between their respective Unpackers and CacheUpdaters. This comes in handy when looking at the progress bars in the RT publishing console and looking at logs. To make that distinction, simply edit the AdvPub.xml file on the target side, and change the id values of the DataUnpacker and PageCacheUpdater beans. For example: <bean id="DataUnpacker" class="com.fatwire.realtime.ParallelUnpacker" singleton="false"> <property name="id" value="Unpacker-Virginia2"/> ... </bean> <bean id="PageCacheUpdater" class="com.fatwire.realtime.regen.ParallelRegeneratorEh" singleton="false"> <property name="id" value="CacheFlusher-Virginia2"/> ... </bean>
https://docs.oracle.com/cd/E29542_01/doc.1111/e29634/realtimehooks.htm
CC-MAIN-2020-16
en
refinedweb
Azure Service Fabric Cluster Behind a Firewall Azure Service Fabric Cluster Behind a Firewall. Read on and see how it's done. Join the DZone community and get the full member experience.Join For Free. The signature is shown below: protected override ICommunicationListener CreateCommunicationListener() { ... } See Service Communications Model for guidance on implementing your own communications stack. WcfCommunicationListner is provided by the Service Fabric SDK for hosting WCF endpoints inside stateful and stateless services. The usage of the listener is documented at WCF Communication Listener. Current implementation of WcfCommunicationListener is not aware of the public IP address/FQDN; so, for NetTcpBinding it generates the following communication endpoint with private IP address: net.tcp://10.0.0.5:8080/7854ce3f-4e30-4949-b5e4-22cdcbd477c1/8f4106a5-0d38-4f30-aaf0-e435688e5ed6-130808019994632301 This will work fine if the client is also located in the same network; however, for internet based access, the local IP address will have to be replaced with the load balancer’s public IP/FQDN. To fix this issue, I used the following listener implementation that extended WcfCommunicationListener: //WcfFabricCommunicationsListener.cs using Microsoft.ServiceFabric.Services.Wcf; using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Fabric; using System.Threading; using System.ServiceModel; using System.ServiceModel.Description; public class WcfFabricCommunicationsListener: WcfCommunicationListener { private string _gatewayFQDN; public WcfFabricCommunicationsListener(Type communicationInterfaceType, object service) : base(communicationInterfaceType, service) { } public WcfFabricCommunicationsListener(Type communicationInterfaceType, Type communicationImplementationType) : base(communicationInterfaceType, communicationImplementationType) { } public override void Initialize(ServiceInitializationParameters serviceInitializationParameters) { ConfigurationPackage configPackage = serviceInitializationParameters.CodePackageActivationContext.GetConfigurationPackageObject("Config"); var infrastructureSection = configPackage.Settings.Sections["Infrastructure"]; _gatewayFQDN = infrastructureSection.Parameters["Gateway"].Value; base.Initialize(serviceInitializationParameters); } public async override Task<string> OpenAsync(CancellationToken cancellationToken) { string partitionUrl = await base.OpenAsync(cancellationToken); if (_gatewayFQDN == null) { return partitionUrl; } UriBuilder ub = new UriBuilder(partitionUrl); ub.Host = _gatewayFQDN; return ub.ToString(); } } Initialize() expects the definition of the FQDN of the gateway through Config package as shown in the following snippet of Settings.xml: <!– Settings.xml located in the Config directory of the service project –> <Settings> <!– other stuff –> <Section Name=”Infrastructure”> <Parameter Name=”Gateway” Value=”mycluster.cloudapp.net” /> </Section> </Settings> OpenAsync() merely replaces the host IP address with the FQDN of the load balancer. Published at DZone with permission of Hanu Kommalapati , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/azure-service-fabric-cluster-behind-a-firewall-1
CC-MAIN-2020-16
en
refinedweb
Learn Install and run Nomad. Schedule jobs and explore the web UI. 4 minYou will install a Nomad agent on your local machine that you can use in subsequent guides to explore Nomad's core capabilities. 5 minLearn about the Nomad agent, and the lifecycle of running and stopping. 9 minLearn how to submit, modify and stop jobs in Nomad. 5 minJoin another Nomad client to create your first cluster. 2 minVisit the Nomad Web UI to inspect jobs, allocations, and more. 1 minAfter completing the getting started guide, learn about what to do next with Nomad. This track will provide a quick survey of the Nomad web user interface and how to perform common operations with it. Get started here. 3 minLearn how to access the Nomad Web UI from a browser or from the CLI. 9 minLearn how to operate a job from the Web UI. 7 minLearn how to inspect the state of the cluster from the Web UI. 9 minLearn how to submit a job from the Web UI. 3 minConsiderations for your exploration and organizational deployment of the Nomad UI. This track provides foundational knowledge for expressing your workload as Nomad jobs. This track also discusses how to submit, inspect, and monitor a running Nomad job. 4 minLearn how to deploy and manage a Nomad Job. 6 minMost applications require some kind of configuration. Whether the configuration is provided via the command line, environment variables, or a configuration file, Nomad has built-in functionality for configuration. This section details three common patterns for configuring tasks. 7 minThe job file is the unit of work in Nomad. Upon authoring, the job file is submitted to the server for evaluation and scheduling. This section discusses some techniques for submitting jobs. 8 minNomad exposes a number of tools and techniques for inspecting a running job. This is helpful in ensuring the job started successfully. Additionally, it can output any errors that occurred while starting the job. 6 minNomad provides a top-level mechanism for viewing application logs and data files via the command line interface. This section discusses the nomad alloc logs command and API interface. 5 minNomad supports reporting detailed job statistics and resource utilization metrics for most task drivers. This section describes the ways to inspect a job's resource consumption and utilization. Nomad provides configurable options to enable recovering failed tasks to avoid downtime. Nomad will try to restart a failed task on the node it is running on and also try to reschedule it on another node. 6 minThis section describes features in Nomad that automate recovering from failed tasks. 6 minNomad can restart a task on the node it is running on to recover from failures. Task restarts can be configured to be limited by number of attempts within a specific interval. 6 minNomad can restart tasks if they have a failing health check based on configuration specified in the `check_restart` stanza. Restarts are done locally on the node running the task based on their `restart` policy. 6 minNomad can reschedule failing tasks after any local restart attempts have been exhausted. This is useful to recover from failures stemming from problems in the node running the task. Nomad has built-in support for rolling, blue/green, and canary updates to simplify upgrades while minimizing or eliminating downtime. 3 minThis section describes common patterns for updating already-running jobs including rolling upgrades, blue/green deployments, and canary builds. Nomad provides built-in support for this functionality. 11 minNomad provides a built-in mechanism for rolling upgrades. Rolling upgrades incrementally transitions jobs between versions and using health check information to reduce downtime. 14 minNomad has built-in support for doing blue/green and canary deployments to more safely update existing applications and services. 4 minWell-behaved applications expose a way to perform cleanup prior to exiting. Nomad can optionally send a configurable signal to applications before killing them, allowing them to drain connections or gracefully terminate. Nomad allows operators methods to express their placement preferences for job allocations within their cluster using affinities and spread. Nomad also helps to prevent priority inversion with preemption. 6 minLearn about advanced scheduling features including affinity and spread. enterprise20 minLearn how to enabling and using preemption on service and batch jobs in Nomad Enterprise (0.9.3 and above). 20 minAffinities allows operators to express placement preferences for their jobs. 20 minBy using spread criteria in their job specification, Nomad job operators can ensure that failures across a physical domain such as datacenter or rack do not affect application availability. This section covers features operators will need to understand to build and maintain Nomad clusters. 4 minLearn how to join nodes to create a Nomad cluster using Consul. You can manually join nodes. You can also automatically join nodes using cloud-provider metadata or Consul without operator involvement. 20 minFederation enables users to submit jobs or interact with the HTTP API targeting any region, from any server, even if that server resides in a different region. 12 minWorkload migration is a normal part of cluster operations for a variety of reasons: server maintenance, operating system upgrades, etc. Nomad offers a number of parameters for controlling how running jobs are migrated off of draining nodes. 10 minDon't panic! This is a critical first step. Depending on your deployment configuration, it may take only a single server failure for cluster unavailability. Recovery requires an operator to intervene, but recovery is straightforward. 25 minIt is possible to collect metrics on Nomad with Prometheus after enabling telemetry on Nomad servers and clients. 8 minThis guide covers how to configure and use Autopilot features. Nomad has two primary types of traffic— UDP gossip and TCP API/RPC traffic. Learn how to configure Nomad to encrypt these protocols. 8 minNomad uses two different protocols for communication between cluster members. This guide introduces them and teaches you the appropriate security approach for each communication protocol. 10 minLearn how to encrypt Nomad's gossip protocol with a symmetric key. The gossip protocol is used to communicate membership and liveliness information between the servers. 45 minNomad uses mTLS to encrypt communication between Nomad cluster members. Learn how to configure Nomad for mTLS communication to increase network security between nodes. Nomad provides capability-based access control. In this track explore Nomad access control objects, such as policies, rules, capabilities, and tokens; learn how to configure a Nomad cluster for ACLs; bootstrap the ACL system; submit your first policy; and grant a token based on it. 10 minIn this concept focused guide, you'll explore the three major components to the ACL system: capabilities, policies, and tokens. 5 minBefore using the Nomad ACL system, it must be enabled and bootstrapped. This guide explains the process. 20 minNomad provides an optional Access Control List (ACL) system which can be used to control access to data and APIs. The ACL system is capability-based, relying on tokens which are associated with policies to determine which fine grained rules can be applied. 30 minNomad uses tokens to authenticate requests to the cluster. Tokens map requests to ACL policies for Nomad to determine if a request is authorized. 20 minIn this guide, you will practice creating Nomad ACL policies to provide controlled access for two different personas to your Nomad Cluster. 45 minHashiCorp Vault has a secret engine for generating short-lived Nomad tokens. This enables you to leverage Vault-supported authentication methods (token, LDAP, Okta, Amazon IAM, etc.) to obtain a short-lived Nomad token. This guide will demonstrate using Vault token to obtain a Nomad token This track will explore techniques to run jobs on Nomad that require access to persistent storage. 3 minIt is possible to deploy and consume stateful workloads in Nomad. Nomad can integrate with various storage solutions such as Portworx and REX-Ray. 20 minThis guide walks you through configuring a host volume and then deploying a MySQL workload that uses it for persistent storage. 30 minThis guide walks you through deploying Container Storage Interface plugins, registering an AWS EBS volume with CSI, and then deploying a MySQL workload that uses it for persistent storage. 20 minYou will deploy a MySQL database which uses the Portworx Docker-volume driver to enable persistent volumes. This track explores patterns for expressing task dependencies in Nomad jobs. 10 minNomad task dependencies allow you to model inter-service dependencies by using prestart init tasks that wait for a service. There are multiple approaches to load balancing within a Nomad cluster. Here we explore the most popular strategies. 20 minThere are a few approaches you can consider when deploying your load balancers for performance and HA 20 minFabio integrates natively with Consul and provides rich features with an optional Web UI. 20 minHAProxy natively integrates with service discovery data from Consul. 20 minNGINX pairs with Nomad's template stanza to allow for dynamic updates to its load balancing configuration. 20 minTraefik natively integrates with Consul using the Consul Catalog Provider. Starting with Nomad 0.9, task and device drivers are now pluggable. This gives users the flexibility to introduce their own drivers by providing a binary rather than having to fork Nomad to make code changes to it. See the navigation menu on the left or the list below for guides on how to use some of the external drivers available for Nomad. 15 minThe `lxc` driver provides an interface for using LXC for running application containers. Learn the steps involved in configuring a Nomad client agent to be able to run lxc jobs Learn the Nomad objects necessary to operate Nomad securely in a multi-team environment. Explore Nomad Enterprise features such as namespaces, resource quotas, and Sentinel. enterprise3 minLearn the Nomad objects necessary to operate Nomad securely in a multi-team environment. Explore Nomad Enterprise features such as namespaces, resource quotas, and Sentinel. enterprise8 minNomad Enterprise provides support for namespaces, which allow jobs and their associated objects to be segmented from each other and other users of the cluster. enterprise8 minNomad Enterprise provides support for resource quotas, which allow operators to restrict the aggregate resource usage of namespaces. enterprise8 minSentinel allows operators to express their policies as code, and have their policies automatically enforced. Apache Spark is a popular data processing engine/framework that can use third-party schedulers. The Nomad ecosystem includes a fork of Apache Spark that natively integrates Nomad as a cluster manager and scheduler for Spark. Learn how to run Spark jobs in this environment. 5 minGet started with the Nomad-Spark integration. 5 minThe Nomad Spark integration uses a default job template when running Spark jobs in the cluster. Learn about the default job template and how to customize it. 4 minLearn how to configure resource allocation for your Spark applications. 3 minLearn how to dynamically scale Spark executors based the queue of pending tasks. 5 minLearn how to submit Spark jobs that run on a Nomad cluster. From configuring Nomad to fetch Vault secrets for jobs to configuring Vault to provide Nomad secrets for operators, this track contains all of the guides with Nomad and Vault. 30 minSecuring Nomad's cluster communication with TLS is important for both security and easing operations. Nomad can use mutual TLS (mTLS) for authenticating for all HTTP and RPC communication. This guide will leverage Vault's PKI secrets engine to accomplish this task. 20 minLearn how to deploy an application, PostgreSQL, in Nomad and retrieve dynamic credentials by integrating with Vault. Looking for specific Nomad information? The Nomad documentation provides reference material and in-depth details on:
https://learn.hashicorp.com/nomad?track=using-plugins
CC-MAIN-2020-16
en
refinedweb
This tutorial will teach you how to pass data to a Blazor component as a parameter. This is useful if you have a customizable component that you wish use in multiple places across your page. Parent-Child Data Flow Suppose you have a custom UI component that you intend to use throughout a page. The page would be considered the parent component, and the UI component would be considered the child. In this tutorial, you will create a reusable child component, and then you will learn how to pass parameters to it from the parent. Create the Child Component For this example, we will create a simple Blazor component that generates a label and input. Start by creating a Blazor WebAssembly project called ComponentParameters. Then right-click the project and add a new folder called Components. Next, right-click the Components folder you just created, and add a new item, Add > New Item… Select Razor Component and name it CustomInput. Replace the content of CustomInput.Razor with the following code. Here, you can use some Bootstrap classes to modernize the appearance of your UI controls. <div class="form-group row"> <label for="@ID" class="col-sm-8 col-form-label">@ChildContent</label> <input id="@ID" class="col-sm-4 form-control" /> </div> @code { } Notice, there are two parameters in this component that need to be defined, ID and ChildContent. The ChildContent is the part of the HTML code between the tags. In JavaScript, it is known as innerHTML. <div>This is child content</div> If your component accepts child content, you will need to define a parameter and identify a place to render it. In the above example, you will also need to define a parameter for ID. <div class="form-group row"> <label for="@ID" class="col-sm-8 col-form-label">@ChildContent</label> <input id="@ID" class="col-sm-4 form-control" /> </div> @code { [Parameter] public string ID { get; set; } [Parameter] public RenderFragment ChildContent { get; set; } } As you can see, the ChildContent will be rendered as the text of the label for the input element. You could render the ChildContent anywhere in your custom component, something that makes Blazor very powerful! The ID parameter is used as the unique id for the input element. The label also references it so it knows which input it belongs to. Use Child Component with Parameters If you want to place your component on the project’s home page Index.razor , you will need to let the page know where to find the CustomInput component by issuing a @using statement. It should be in the form namespace.directory. In this case, the following line will work. @using ComponentParameters.Components To use the child component you just created, you will reference it by name inside angle brackets <>, just as you would any other html element tag. You will then provide values for any of the parameters you defined in the child component. You might use the component to add an input for collecting a person’s name as follows: <CustomInput ID="firstname">Enter your name</CustomInput> You placed the component by using the <CustomInput> tag. Then, you assigned a value to the ID parameter, just as you would an attribute of a regular html tag. Finally, the child content you provided will be used as the text of the label, exactly as defined in CustomInput.razor. You could place as many instances of the child component on the parent page as you wish, and you could provide different values for each of the parameters each time. You can think of it as a shortcode. This is the beauty of Blazor and Razor Components! @page "/" @using ComponentParameters.Components <div style="max-width:600px"> <CustomInput ID="firstname">First Name</CustomInput> <CustomInput ID="lastname">Last Name</CustomInput> <CustomInput ID="number">Phone Number</CustomInput> </div> The Bottom Line In this tutorial, you learned how to pass parameters from a parent component to a child component. Imagine using this trick while looping through the elements of a collection in order to populate a page? While this approach works for passing data to components on the same page, you may also be interested in how to pass parameters between components on different pages. Stay tuned, because that tutorial is coming next week!
https://wellsb.com/csharp/aspnet/pass-data-to-blazor-component/
CC-MAIN-2020-16
en
refinedweb
Cincopa is the complete rich media kit for WordPress, Joomla, Blogger and many other CMS. It allows to integrate and add media to your website in seconds! Cincopa Plugins and modules have awesome skins, upload your media to their cloud servers and easily embed on your website. Cincopa solutions includes many videos players, slideshows, galleries, podcast, music players in many different shapes and sizes, in total over 40 skins. Apart from the plugins and js widgets, Cincopa provides Media RSS 2.00, XSPF, JSON and CoolIris RSS feeds of your uploaded galleries. You can check the details of available feeds and their urls of the Feeds page. These Cincopa Media Feeds can be integrated easily anywhere into your website provided you have the folder ID (fid) of the albums you have created using your plugin. To parse the Media RSS 2.00 feed in php using SimpleXML, you can use following code. $oReturn = new stdClass(); $fid=’AgCAn_Zvs6Zl’;//sample fid $url = ‘’.$fid; $xml = simplexml_load_file($url); $namespaces = $xml->getNamespaces(true); // get namespaces // iterate items and store in an array of objects $items = array(); foreach ($xml->channel->item as $item) { $tmp = new stdClass(); $tmp->title = trim((string) $item->title); $tmp->description = trim((string) $item->description); // now for the url in media:content $tmp->thumbnail = trim((string) $item->children($namespaces[‘media’])->thumbnail->attributes()->url); $tmp->content = trim((string) $item->children($namespaces[‘media’])->content->attributes()->url); // add parsed data to the array $oReturn->items[] = $tmp; Hope that helps.
https://www.parorrey.com/blog/php-development/parsing-cincopa-media-rssxml-feed-in-php-with-namespaces/
CC-MAIN-2020-16
en
refinedweb
#include <Puma/CVisitor.h> Tree visitor implementation for CTree based syntax trees. To be derived for visiting concrete syntax trees. This class performs depth-first tree-traversal based on CTree tree structures. The traversal is started by calling CVisitor::visit() with the root node of the tree to traverse as its argument. For every node of the tree CVisitor::pre_visit() is called before its child nodes are visited, and CVisitor::post_visit() is called after its child nodes are visited. To perform actions on the visited nodes, CVisitor::pre_visit() and CVisitor::post_visit() have to be overloaded. Constructor. Destructor. Set the aborted state. Check if the node visiting is aborted. Apply actions after the given node is visited. To be implemented by derived visitors. Reimplemented in Puma::CSemVisitor, and Puma::CCSemVisitor. Apply actions before the given node is visited. To be implemented by derived visitors. Reimplemented in Puma::CSemVisitor, and Puma::CCSemVisitor. Set the pruned state (don't visit the sub-tree). Check if the visiting the sub-tree is aborted. Visit the given syntax tree node.
http://puma.aspectc.org/manual/html/classPuma_1_1CVisitor.html
CC-MAIN-2020-16
en
refinedweb
Testing coding standards¶ All new code, or changes to existing code, should have new or updated tests before being merged into master. This document gives some guidelines for developers who are writing tests or reviewing code for CKAN. Transitioning from legacy to new tests¶ CKAN is an old code base with a large legacy test suite in ckan.tests. The legacy tests are difficult to maintain and extend, but are too many to be replaced all at once in a single effort. So we’re following this strategy: - A new test suite has been started in ckan.new.new_tests to be: - Fast Don’t share setup code between tests (e.g. in test class setup() or setup_class() methods, saved against the self attribute of test classes, or in test helper modules). Instead write helper functions that create test objects and return them, and have each test method call just the helpers it needs to do the setup that it needs. Where appropriate, use the mock library to avoid pulling in other parts of CKAN (especially the database), see Mocking: the mock library. - unit test,.new/new.new:! One common exception is when you want to use a for loop to call the function being tested multiple times, passing it lots of different arguments that should all produce the same return value and/or side effects. For example, this test from ckan.new_tests.logic.action.test_update: def test_user_update_with_invalid_name(self): user = factories.User() invalid_names = ('', 'a', False, 0, -1, 23, 'new', 'edit', 'search', 'a' * 200, 'Hi!', 'i++%') for name in invalid_names: user['name'] = name assert_raises(logic.ValidationError, helpers.call_action, 'user_update', **user) The behavior of user_update() is the same for every invalid value. We do want to test user_update() with lots of different invalid names, but we obviously don’t want to write a dozen separate test methods that are all the same apart from the value used for the invalid user name. We don’t really want to define a helper method and a dozen test methods that call it either. So we use a simple loop. Technically this test calls the function being tested more than once, but there’s only one line of code that calls it..new.new_tests.factories.ResourceView¶ A factory class for creating CKAN resource views. Note: if you use this factory, you need to load the image_view plugin on your test class (and unload it later), otherwise you will get an error. Example: class TestSomethingWithResourceViews(object): @classmethod def setup_class(cls): if not p.plugin_loaded('image_view'): p.load('image_view') @classmethod def teardown_class(cls): p.unload('image_view') - class ckan.new_tests.factories.MockUser¶ A factory class for creating mock CKAN users using the mock library. - ckan.new_tests.factories.validator_data_dict()¶ Return a data dict with some arbitrary data in it, suitable to be passed to validator functions for testing. Test helper functions: ckan.new_tests.helpers¶ This is a collection of helper functions for use in tests. We want to avoid sharing test helper functions between test modules as much as possible, and we definitely don’t want to share test fixtures between test modules, or. This module is reserved for these very useful functions. - ckan.new_tests.helpers.reset_db()¶ Reset CKAN’s database. If a test class uses the database, then it should.new_tests.helpers.call_action(action_name, context=None, **kwargs)¶ Call the named ckan.logic.action function argument. Note: this skips authorization! It passes ‘ignore_auth’: True to action functions in their context dicts, so the corresponding authorization functions will not be run. This is because ckan.new_tests.logic.action tests only the actions, the authorization functions are tested separately in ckan.new into the context dict. - ckan.new_tests.helpers.call_auth(auth_name, context, **kwargs)¶ Call the named ckan.logic.auth function and return the result. This is just a convenience function for tests in ckan.new_tests.logic.auth to use. Usage: result = helpers.call_auth('user_update', context=context, id='some_user_id', name='updated_user_name') - class ckan.new_tests.helpers.FunctionalTestBase¶ A base class for functional test classes to inherit from. Allows configuration changes by overriding _apply_config_changes and resetting the CKAN config after your test class has run. It creates a webtest.TestApp at self.app for your class to use to make HTTP requests to the CKAN web UI or API. If you’re overriding methods that this class provides, like setup_class() and teardown_class(), make sure to use super() to call this class’s methods at the top of yours! - ckan.new_tests.helpers.submit_and_follow(app, form, extra_environ, name=None, value=None, **args)¶ Call webtest_submit with name/value passed expecting a redirect and return the response from following that redirect. - ckan.new_tests.helpers.webtest_submit(form, name=None, index=None, value=None, **args)¶ backported version of webtest.Form.submit that actually works for submitting with different submit buttons. We’re stuck on an old version of webtest because we’re stuck on an old version of webob because we’re stuck on an old version of Pylons. This prolongs our suffering, but on the bright side it lets us have functional tests that work. - ckan.new_tests.helpers.webtest_submit_fields(form, name=None, index=None, submit_value=None)¶ backported version of webtest.Form.submit_fields that actually works for submitting with different submit buttons. - ckan.new_tests.helpers.change_config(key, value)¶ Decorator to temporarily changes Pylons’ pylons.config['ckan.site_title'] == 'My Test CKAN'.new_tests.logic.action and the frontend tests in ckan.new_tests.controllers are functional tests, and probably shouldn’t do any mocking. Do use mocking in more unit-style tests. For example the authorization function tests in ckan.new_tests.logic.auth, the converter and validator tests in ckan.new_tests.logic.auth, and most (all?) lib tests in ckan.new_tests.lib are.new.new.new.new.new(self): '''user_name_validator() should raise Invalid if given a non-string value. ''' non_string_values = [ 13, 23.7, 100L, webtests.new. Writing ckan.migration tests¶ All migration scripts should have tests. Todo Write some tests for a migration script, and then use them as an example to fill out this guidelines section.
https://docs.ckan.org/en/ckan-2.3.1/contributing/testing.html
CC-MAIN-2020-16
en
refinedweb
Writing a script in plain Java is simple like what we have done in the previous post. But there is more we can do using WebDriver. Say If you want to run multiple scripts at a time ,better reporting and you want to go for datadriven testing (Running the same script with multiple data) then plain Java script is not enough . So it is recommended to use any of the existing frameworks like TestNG or JUnit. Select one of the framework and start using it. In this post , I start with TestNG. If you are planning to use TestNG then you need to - 1.Install TestNG Eclipse plug-in - 2.Customize the output directory path - 3.Start writing scripts 1.Install TestNG Eclipse plug-in Detailed instruction on how to install TestNG plug-in can be found here 2.Customize the output directory path This is not a mandatory step. But it is good to have it.Using this you can tell TestNG where to store all the output results files. In Eclipse Goto Window>>Preferences>>TestNG>> Set the Output Direcory location as per your wish and click on "Ok". 3.Start writing scripts Writing scripts using TestNG is so simple. We will start with the editing of script we have written in the previous post . Here is the edited script using TestNG. package learning; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; import org.testng.annotations.AfterTest; import org.testng.annotations.BeforeTest; import org.testng.annotations.Test; public class GoogleSearchTestNG { WebDriver driver; @BeforeTest public void start(){ driver = new FirefoxDriver(); } @Test public void Test(){ System.out.println("Loading Google search page"); driver.get(""); System.out.println("Google search page loaded fine"); } @AfterTest public void close(){ driver.quit(); } } If you look at the above code the main chnages we have done is 1.Imported new TestNG files start() -- Initialize the WebDriver Test() -- Perform our exact requirement. close() -- Close the Browser once 4.There are 3 different annotations named "BeforeTest" , "Test" , "AfterTest".@BeforeTest -- Keep this annotaion before a method which has to be called initially when you run the script. In our script ,we put it before start() method because we want to initialize the WebDriver first. @Test -- Keep this annotation before a method which will do the exact operation of your script. @AfterTest -- Keep this annotation before mehod which has to run at the end. In our script , closing browser has to be done at the end. So we put this annotation before close() method. TestNG is so powerful . But now we stop here :) and experiment more on WebDriver functions. Once that is done we will come back to TestNG. Going forward all the example scripts in this site will refer to TestNG. nice Article......... This post helped me a lot. Thanks! this is very helpful for beginners to understand great man ...helped a lot :)
http://www.mythoughts.co.in/2012/08/webdriver-selenium-2-part-4-working.html
CC-MAIN-2018-43
en
refinedweb
Wit.ai is a NLP (natural language processing) interface for applications capable of turning sentences into structured data. And most importantly, it is free! So, there are no API call limits! Wit.ai API provides many kind of NLP services including Speech Recognition. In this article, I am going to show how to consume the Wit Speech API using Python with minimum dependencies. Step 1: Create an API Key In order to use Wit.ai API, you need to create a Wit.ai app. Every Wit.ai app has a server access token which can be used as an API Key. Follow these steps to create a Wit.ai app and generate API Key: - Go to Wit.ai home page and sign in with your GitHub or Facebook account. - Click on ‘+’ sign in menu bar on top and create a new app. - Now, open the app dashboard and got to Settings of your app. - In Settings, under API Details, copy the Server Access Token and use it as API key. Step 2: Python script to record audio Obviously, we need to pass audio data to the Wit API for speech recognition. For this, we create a Python script to record audio from microphone. For this purpose, we will be using PyAudio module. Installations - Windows: Just install PyAudio module using a simple pip command: pip install pyaudio - MAC OS X: Install portaudio library using Homebrew and then install PyAudio module using pip: brew install portaudio pip install pyaudio - Linux: Install portaudio library development package using this command: sudo apt-get install portaudio19-dev Then, install PyAudio module using pip: pip install pyaudio Now, consider the code below to record audio from microphone: import pyaudio import wave def record_audio(RECORD_SECONDS, WAVE_OUTPUT_FILENAME): #--------- SETTING PARAMS FOR OUR AUDIO FILE ------------# FORMAT = pyaudio.paInt16 # format of wave CHANNELS = 2 # no. of audio channels RATE = 44100 # frame rate CHUNK = 1024 # frames per audio sample #--------------------------------------------------------# # creating PyAudio object audio = pyaudio.PyAudio() # open a new stream for microphone # It creates a PortAudio Stream Wrapper class object stream = audio.open(format=FORMAT,channels=CHANNELS, rate=RATE, input=True, frames_per_buffer=CHUNK) #----------------- start of recording -------------------# print("Listening...") # list to save all audio frames frames = [] for i in range(int(RATE / CHUNK * RECORD_SECONDS)): # read audio stream from microphone data = stream.read(CHUNK) # append audio data to frames list frames.append(data) #------------------ end of recording --------------------# print("Finished recording.") stream.stop_stream() # stop the stream object stream.close() # close the stream object audio.terminate() # terminate PortAudio #------------------ saving audio ------------------------# # create wave file object waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb') # settings for wave file object waveFile.setnchannels(CHANNELS) waveFile.setsampwidth(audio.get_sample_size(FORMAT)) waveFile.setframerate(RATE) waveFile.writeframes(b''.join(frames)) # closing the wave file object waveFile.close() def read_audio(WAVE_FILENAME): # function to read audio(wav) file with open(WAVE_FILENAME, 'rb') as f: audio = f.read() return audio Here, we use PyAudio file to record audio in WAV format. For writing audio stream to a WaveFile, we use in-built Python library wave. Once audio is recorded using PyAudio, it is saved as a wav file in current directory. Save this Python script as Recorder.py (as we will import this Python script by this name in main Python script). Step 3: Python script to interact with Wit Speech API Now, its time to write Python script for interacting with Wit Speech API. Consider the code below: import requests import json from Recorder import record_audio, read_audio # Wit speech API endpoint API_ENDPOINT = '' # Wit.ai api access token wit_access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX' def RecognizeSpeech(AUDIO_FILENAME, num_seconds = 5): # record audio of specified length in specified audio file record_audio(num_seconds, AUDIO_FILENAME) # reading audio audio = read_audio(AUDIO_FILENAME) # defining headers for HTTP request headers = {'authorization': 'Bearer ' + wit_access_token, 'Content-Type': 'audio/wav'} # making an HTTP post request resp = requests.post(API_ENDPOINT, headers = headers, data = audio) # converting response content to JSON format data = json.loads(resp.content) # get text from data text = data['_text'] # return the text return text if __name__ == "__main__": text = RecognizeSpeech('myspeech.wav', 4) print("\nYou said: {}".format(text)) Wit Speech API accepts HTTP POST request. The POST request must contain: - headers headers = {'authorization': 'Bearer ' + wit_access_token, 'Content-Type': 'audio/wav'} where wit_access_token is the API Key we generated in Step 1. - data data = audio The data to be passed is the audio stream in wav format. As you can notice, the recorded audio is saved in a file called myspeech.wav. We read audio back again form this file using read_audio method. And we send this HTTP request to this endpoint: A sample response of HTTP request looks like this: {u'_text': u'hey how are you', u'entities': {}, u'msg_id': u'1ca8f790-4e83-443c-915c-914bc1a42100'} Wit.ai is not just limited to speech recognition. It also allows you to create a chatbot which can recognize any user-defined entity in the provided text! Since I have not created any enitities for my current Wit app, the entities section of HTTP response is empty. Github repository: Wit Speech API Wrapper Here is a demo video of how above scripts work:
http://blog.codingblocks.com/2017/speech-recognition-using-wit-ai
CC-MAIN-2018-43
en
refinedweb
Section (1) nice Name nice — run a program with modified scheduling priority Synopsis nice [ OPTION] [ COMMAND [ ARG...] ] DESCRIPTION REPORTING BUGS GNU coreutils online help: < Report any translation bugs to < Section (2) nice Name nice — change process priority Synopsis #include <unistd.h> DESCRIPTION nice() adds inc to the nice value for the calling thread. (A higher nice value means a low priority.) The range of the nice value is +19 (low priority) to −, the new nice value is returned (but see NOTES below). On error, −1 is returned, and errno is set appropriately. A successful call can legitimately return −1. To detect an error, set errno to 0 before the call, and check whether it is nonzero after nice() returns −1. ERRORS - EPERM The calling process attempted to increase its priority by supplying a negative incbut has insufficient privileges. Under Linux, the CAP_SYS_NICEcapability is required. (But see the discussion of the RLIMIT_NICEresource). C library/kernel differences POS. SEE ALSO nice(1), renice(1), fork(2), getpriority(2), getrlimit(2), setpriority(2), capabilities(7), sched(7)
https://manpages.net/detail.php?name=nice
CC-MAIN-2022-21
en
refinedweb
This node can be used to create ShotGrid Server work items. Session work items in this node are associated with a long running ShotGrid process. Note The ShotGrid API is not provided with Houdini and must be installed manually. To install the ShotGrid API, perform the following steps: Download the latest release from and extract it. Copy the shotgun_api3 folder to a location in Houdini’s python path. This can be found by running the following command in a command prompt or terminal: hython -c 'import sys; print(sys.path)'or by running import sys; print(sys.path)in Houdini’s python shell. In order to authenticate to ShotGrid, one of the following three methods are required. Note As Shotgun is migrating to the new name ShotGrid, the existing $PDG_SHOTGUN_* environment variables and shotgun.json have been deprecated and will be removed in a future release. Please migrate to their ShotGrid equivalents, $PDG_SHOTGRID_* and shotgrid.json respectively. Create a shotgrid.json file in $HOUDINI_USER_PREF_DIRor within a defined $PDG_SHOTGRID_AUTH_DIRor, with the following syntax: { “script_name”: “”, “api_key”: “” } Define $PDG_SHOTGRID_LOGINand $PDG_SHOTGRID_PASSWORDenvironment variables. Define $PDG_SHOTGRID_SCRIPT_NAMEand $PDG_SHOTGRID_API_KEYenvironment variables. The order of precedence for authentication is $PDG_SHOTGRID_LOGIN, then $PDG_SHOTGRID_SCRIPT_NAME, and then to try and use shotgrid.json. This is so that an override can be specified in the environment. Using shotgrid.json is recommended as exposing credentials in the environment can be a security risk. See command servers for additional details on the use of command chains.. Session Count from Upstream Items When this toggle is enabled, the node will create a single server work item and a session with the server for each upstream work item. Otherwise, a server item will be created for each upstream work item. Number of Sessions The number of sessions to create with the server. Each session work item will cook in serial with other sessions using the same server. The chain of work items starting from this session item down to the Command Server End node will cook to completion before starting the next session. Server Port The TCP port number the server should bind to (when Connect to existing server if off), or the port to use to connect to an existing server (when Connect to existing server is on). The default value 0 tells the system to dynamically choose an unused port, which is usually what you want. If you want to keep the ports in a certain range (and can guarantee the port numbers will be available), you can use an expression here such as 9000 + @pdg_index. Connect to Existing Server When this toggle is enabled, the work item will connect to an existing server rather than spawning a new one. Server Address The existing server address, when Connect to Existing Server is enabled. Load Timeout The timeout used when performing an initial verification that the shared server instance can be reached. When this timeout passes without a successful communication, the work item for that server will be marked as failed ShotGrid URL The URL to the ShotGrid instance to communicate with. The default value is $PDG_SHOTGRID_URL, which expects that the environment variable will be defined when the work item executes. However this can be set to an absolute path if it’s known. HTTP Proxy The URL of a proxy server to communicate with the ShotGrid instance. Custom CA Certs A path to a custom list of SSL certificate authorities to use while communicating with the ShotGrid instance. Feedback Attributes When on, the specified attributes are copied from the end of each iteration onto the corresponding work item at the beginning of the next iteration. This occurs immediately before the starting work item for the next iteration cooks. Tip The attribute(s) to feedback can be specified as a space-separated list or by using the attribute pattern syntax. For more information on how to write attribute patterns, see Attribute Pattern Syntax. Feedback Output Files When on, the output files from each iteration are copied onto the corresponding work item at the beginning of the next loop iteration. The files are added as outputs of that work item, which makes them available as inputs to work items inside the loop. These parameters can be used to customize the names of the work item attributes created by this node. Iteration The name of the attribute that stores the work item’s iteration number. Number of Iterations The name of the attribute that stores the total iteration count. Loop Number The name of the attribute that stores the loop number.
https://www.sidefx.com/docs/houdini/nodes/top/shotgunserver.html
CC-MAIN-2022-21
en
refinedweb
One error you may encounter when using NumPy is: AttributeError: 'numpy.ndarray' object has no attribute 'index' This error occurs when you attempt to use the index() function on a NumPy array, which does not have an index attribute available to use. The following example shows how to address this error in practice. How to Reproduce the Error Suppose we have the following NumPy array: import numpy as np #create NumPy array x = np.array([4, 7, 3, 1, 5, 9, 9, 15, 9, 18]) We can use the following syntax to find the minimum and maximum values in the array: #find minimum and maximum values of array min_val = np.min(x) max_val = np.max(x) #print minimum and maximum values print(min_val, max_val) 1 18 Now suppose we attempt to find the index position of the minimum and maximum values in the array: #attempt to print index position of minimum value x.index(min_val) AttributeError: 'numpy.ndarray' object has no attribute 'index' We receive an error because we can’t apply an index() function to a NumPy array. How to Address the Error To find the index position of the minimum and maximum values in the NumPy array, we can use the NumPy where() function: #find index position of minimum value np.where(x == min_val) (array([3]),) #find index position of maximum value np.where(x == max_val) (array([9]),) From the output we can see: - The minimum value in the array is located in index position 3. - The maximum value in the array is located in index position 9. We can use this same general syntax to find the index position of any value in a NumPy array. For example, we can use the following syntax to find which index positions are equal to the value 9 in the NumPy array: #find index positions that are equal to the value 9 np.where(x == 9) (array([5, 6, 8]),) From the output we can see that the values in index positions 5, 6, and 8 are all equal to 9. Additional Resources The following tutorials explain how to fix other common errors in Python: How to Fix KeyError in Pandas How to Fix: ValueError: cannot convert float NaN to integer How to Fix: ValueError: operands could not be broadcast together with shapes
https://www.statology.org/numpy-ndarray-object-has-no-attribute-index/
CC-MAIN-2022-21
en
refinedweb
How to Upload Files to Amazon S3 Using the File Reader API June 17th, 2021 What You Will Learn in This Tutorial How to use the FileReader API in the browser to read a file into memory as a base64 string and upload it to Amazon S3 using the aws-sdk library from NPM. Table of Contents Getting Started For this tutorial, we're going to need a back-end and a front-end. Our back-end will be used to communicate with Amazon S3 while the front-end will give us a user interface where we can upload our file. To speed us up, we're going to use CheatCode's Node.js Boilerplate for the back-end and CheatCode's Next.js Boilerplate for the front-end. To get these setup, we need to clone them from Github. We'll start with the back-end: Terminal git clone server Once cloned, cd into the project and install its dependencies: Terminal cd server && npm install Next, we need to install one additional dependency, aws-sdk: Terminal npm i aws-sdk Once all of the dependencies are installed, start the server with: Terminal npm run dev With your server running, in another terminal window or tab, we need to clone the front-end: Terminal git clone client Once cloned, cd into the project and install its dependencies: Terminal cd client && npm install Once all of the dependencies are installed, start the front-end with: Terminal npm run dev With that, we're ready to start. Increasing the body-parser limit Looking at our server code, the first thing we need to do is modify the upload limit for the body-parser middleware in the boilerplate. This middleware is responsible for, as the name implies, parsing the raw body data of an HTTP request sent to the server (an Express.js server). ({ limit: "50mb" })(req, res, next); }; In Express.js, middleware is the term used to refer to code that runs between an HTTP request initially hitting the server and being passed off to a matching path/route (if one exists). Above, the function we're exporting is an Express.js middleware function that's a part of the CheatCode Node.js Boilerplate. This function takes in an HTTP request from Express.js—we can identify that we intend this to be a request passed to us by Express by the req, res, and next arguments that Express passes to its route callbacks—and then hands off that request to the appropriate method from the body-parser dependency included in the boilerplate. The idea here is that we want to use the appropriate "converter" from bodyParser to ensure that the raw body data we get from the HTTP request is usable in our app. For this tutorial, we're going to be sending JSON-formatted data from the browser. So, we can expect any requests we send (file uploads) to be handed off to the bodyParser.json() method. Above, we can see that we're passing in an object with one property limit set to 50mb. This gets around the default limit of 100kb on the HTTP request body imposed by the library. Because we're uploading files of varied size, we need to increase this so that we don't receive any errors on upload. Here, we're using a "best guess" of 50 megabytes as the max body size we'll receive. Adding an Express.js route Next, we need to add a route where we'll send our uploads. Like we hinted at above, we're using Express.js in the boilerplate. To keep our code organized, we've split off different groups of routes that are accessed via functions called to from the main index.js file where the Express server is started in /server/index.js. There, we call to a function api() which loads the API-related routes for the boilerplate. /server/api/index.js import graphql from "./graphql/server"; import s3 from "./s3"; export default (app) => { graphql(app); s3(app); }; In that file, below the call to graphql(), we want to add another call to a function s3() which we'll create next. Here, app represents the Express.js app instance that we'll add our routes to. Let's create that s3() function now. /server/api/s3/index.js import uploadToS3 from "./uploadToS3"; export default (app) => { app.use("/uploads/s3", async (req, res) => { await uploadToS3({ bucket: "cheatcode-tutorials", acl: "public-read", key: req.body?.key, data: req.body?.data, contentType: req.body?.contentType, }); res.send("Uploaded to S3!"); }); }; Here, we take in the Express app instance we passed in and call to the .use() method, passing the path where we'd like our route to be available, /uploads/s3. Inside of the callback for the route, we call to a function uploadToS3 which we'll define in the next section. It's important to note: we intend uploadToS3 to return a JavaScript Promise. This is why we have the await keyword in front of the method. When we perform the upload, we want to "wait on" the Promise to be resolved before responding to the original HTTP request we sent from the client. To make sure this works, too, we've prefixed the keyword async on our route's callback function. Without this, JavaScript will throw an error about await being a reserved keyword when this code runs. Let's jump into that uploadToS3 function now and see how to get our files handed off to AWS. Wiring up the upload to Amazon S3 on the server Now for the important part. To get our upload over to Amazon S3, we need to set up a connection to AWS and an instance of the .S3() method in the aws-sdk library that we installed earlier. = {}) => { ... }; Before we jump into the body of our function, first, we need to wire up an instance of AWS. More specifically, we need to pass in an AWS Access Key ID and Secret Access Key. This pair does two things: - Authenticates our request with AWS. - Validates that this pair has the correct permissions for the action we're trying to perform (in this case s3.putObject()). Obtaining these keys is outside of the scope of this tutorial, but give this documentation from Amazon Web Services a read to learn how to set them up. Assuming you've obtained your keys—or have an existing pair you can use—next, we're going to leverage the settings implementation in the CheatCode Node.js Boilerplate to securely store our keys. /server/settings-development.json { "authentication": { "token": "abcdefghijklmnopqrstuvwxyz1234567890" }, "aws": { "akid": "Type your Access Key ID here...", "sak":" "Type your Secret Access Key here..." }, [...] } Inside of /server/settings-development.json, above, we add a new object aws, setting it equal to another object with two properties: akid- This will be set to the Access Key ID that you obtain from AWS. sak- This will be set to the Secret Access Key that you obtain from AWS. Inside of /server/lib/settings.js, this file is automatically loaded into memory when the server starts up. You'll notice that this file is called settings-development.json. The -development part tells us that this file will only be loaded when process.env.NODE_ENV (the current Node.js environment) is equal to development. Similarly, in production, we'd create a separate file settings-production.json. Fair Warning: The settings-development.jsonfile is committed to your Git repository. DO NOT upload this file with your keys in it to a public Github repo. There are scammer bots that scan Github for insecure keys and use them to rack up bills on your AWS account. The point of this is security and avoiding using your production keys in a development environment. Separate files avoid unnecessary leakage and mixing of keys. = {}) => { ... }; Back in our uploadToS3.js file, next, we import the settings file we mentioned above from /server/lib/settings.js and from that, we grab the aws.akid and aws.sak values we just set. Finally, before we dig into the function definition, we create a new instance of the S3 class, storing it in the s3 variable with new AWS.S3(). With this, let's jump into the core of our function: /server/api/s3/uploadToS3.js import AWS from "aws-sdk"; [...] const s3 = new AWS.S3(); export default async (options = {}) => { await s3 .putObject({ Bucket: options.bucket, ACL: options.acl || "public-read", Key: options.key, Body: Buffer.from(options.data, "base64"), ContentType: options.contentType, }) .promise(); return { url: ` name: options.key, type: options.contentType || "application/", }; }; There's not much to it so we've logged everything out here. The core function that we're going to call on the s3 instance is .putObject(). To .putObject(), we pass an options object with a few settings: Bucket- The Amazon S3 bucket where you'd like to store the object (an S3 term for file) that you upload. ACL- The "Access Control List" that you'd like to use for the file permissions. This tells AWS who is allowed to access the file. You can pass in any of the Canned ACLs Amazon offers here (we're using public-readto grant open access). Key- The name of the file as it will exist in the Amazon S3 bucket. Body- The contents of the file you're uploading. ContentType- The MIME type for the file that you're uploading. Focusing on Body, we can see something unique happening. Here, we're calling to the Buffer.from() method that's built in to Node.js. As we'll see in a bit, when we get our file back from the FileReader in the browser, it will be formatted as a base64 string. To ensure AWS can interpret the data we send it, we need to convert the string we've passed up from the client into a Buffer. Here, we pass our options.data—the base64 string—as the first argument and then base64 as the second argument to let Buffer.from() know the encoding it needs to convert the string from. With this, we have what we need wired up to send over to Amazon. To make our code more readable, here, we chain the .promise() method onto the end of our call to s3.putObject(). This tells the aws-sdk that we want it to return a JavaScript Promise. Just like we saw back in our route callback, we need to add the async keyword to our function so that we can utilize the await keyword to "wait on" the response from Amazon S3. Technically speaking, we don't need to wait on S3 to respond (we could omit the async/await here) but doing so in this tutorial will help us to verify that the upload is complete (more on this when we head to the client). Once our upload is complete, from our function, we return an object describing the url, name, and type of the file we just uploaded. Here, notice that url is formatted to be the URL of the file as it exists in your Amazon S3 bucket. With that, we're all done with the server. Let's jump down to the client to wire up our upload interface and get this working. Wiring up the FileReader API on the client Because we're using Next.js on the client, we're going to create a new upload page in our /pages directory that will host an example component with our upload code: /client/pages/upload/index.js import React, { useState } from "react"; import pong from "../../lib/pong"; const Upload = () => { const [uploading, setUploading] = useState(false); const handleUpload = (uploadEvent) => { ... };; First, we set up a React component with just enough markup to get us a basic user interface. For the styling, we're relying on Bootstrap which is automatically set up for us in the boilerplate. The important part here is the <input type="file" /> which is the file input we'll attach a FileReader instance to. When we select a file using this, the onChange function will be called, passing the DOM event containing our selected files. Here, we're defining a new function handleUpload that we'll use for this event. /client/pages/upload/index.js import React, { useState } from "react"; import pong from "../../lib/pong"; const Upload = () => { const [uploading, setUploading] = useState(false); const handleUpload = (uploadEvent) => { uploadEvent.persist(); setUploading(true); const [file] = uploadEvent.target.files; const reader = new FileReader(); reader.onloadend = (onLoadEndEvent) => { fetch(" { method: "POST", mode: "cors", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ key: file.name, data: onLoadEndEvent.target.result.split(",")[1], contentType: file.type, }), }) .then(() => { setUploading(false); pong.success("File uploaded!"); uploadEvent.target.value = ""; }) .catch((error) => { setUploading(false); pong.danger(error.message || error.reason || error); uploadEvent.target.value = ""; }); }; reader.readAsDataURL(file); };; Filling in the handleUpload function, we have a few things to do. First, just inside of the function body, we add a call to React's .persist() method on the uploadEvent (this is the DOM event passed in via the onChange method on our <input />). We need to do this because React creates something known as a synthetic event which is not available inside of functions outside the main execution thread (more on this in a bit). Following this, we use the useState() hook from React to create a state variable uploading and toggle it to true. If you look down in our markup, you can see we use this to disable the file input while we're mid-upload and display a feedback message to confirm the process is underway. After this, we dig into the core functionality. First, we need to get the file that we picked from the browser. To do it, we call to uploadEvent.target.files and use JavaScript Array Destructuring to "pluck off" the first file in the files array and assign it to the variable file. Next, we create our instance of the FileReader() in the browser. This is built-in to modern browsers so there's nothing to import. In response, we get back a reader instance. Skipping past reader.onloadend for a sec, at the bottom of our handleUpload function, we have a call to reader.readAsDataURL(), passing in the file we just destructured from the uploadEvent.target.files array. This line is responsible for telling the file reader what format we want our file read into memory as. Here, a data URL gets us back something like this: Example Base64 String data:text/plain;base64,4oCcVGhlcmXigJlzIG5vIHJvb20gZm9yIHN1YnRsZXR5IG9uIHRoZSBpbnRlcm5ldC7igJ0g4oCUIEdlb3JnZSBIb3R6 Though it may not look like it, this string is capable of representing the entire contents of a file. When our reader has fully loaded our file into memory, the reader.onloadend function event is called, passing in the onloadevent object as an argument. From this event object, we can get access to the data URL representing our file's contents. Before we do, we set up a call to fetch(), passing in the presumed URL of our upload route on the server (when you run npm run dev in the boilerplate, it runs the server on port 5001). In the options object for fetch() we make sure to set the HTTP method to POST so that we can send a body along with our request. We also make sure to set the mode cors to true so that our request makes it pass the CORS middleware on the server (this limits what URLs can access a server—this is pre-configured to work between the Next.js boilerplate and Node.js boilerplates for you). After this, we also set the Content-Type header which is a standard HTTP header that tells our server in what format our POST body is in. Keep in mind, this is not the same as our file type. In the body field, we call to JSON.stringify()— fetch() requires that we pass body as a string, not an object—and to that, pass an object with the data we'll need on the server to upload our file to S3. Here, key is set to file.name to ensure the file we put in the S3 bucket is identical to the name of the file selected from our computer. contentType is set to the MIME type automatically provided to us in the browser's file object (e.g., if we opened a .png file this would be set to image/png). The important part here is data. Notice that we're making use of the onLoadEndEvent like we hinted at above. This contains the contents of our file as a base64 string in its target.result field. Here, the call to .split(',') on the end is saying "split this into two chunks, the first being the metadata about the base64 string and the second being the actual base64 string." We need to do this because only the part after the comma in our data URL (see the example above) is an actual base64 string. If we do not take this out, Amazon S3 will store our file but when we open it, it will be unreadable. To finish this line out, we use array bracket notation to say "give us the second item in the array (position 1 in a zero-based JavaScript array)." With this, our request is sent up to the server. To finish up, we add a .then() callback— fetch returns us a JavaScript Promise—which confirms the uploads success and "resets" our UI. We setUploading() to false, clear out the <input />, and then use the pong alerts library built-in to the Next.js boilerplate to display a message on screen. In the event that there's a failure, we do the same thing, however, providing an error message (if available) instead of a success message. If all is working according to plan, we should see something like this: Wrapping Up In this tutorial, we learned how to upload files to Amazon S3 using the FileReader API in the browser. We learned how to set up a connection to Amazon S3 via the aws-sdk, as well as how to create an HTTP route that we could call to from the client. In the browser, we learned how to use the FileReader API to convert our file into a Base64 string and then use fetch() to pass our file up to the HTTP route we created. Get the latest free JavaScript and Node.js tutorials, course announcements, and updates from CheatCode in your inbox. No spam. Just new tutorials, course announcements, and updates from CheatCode.
https://cheatcode.co/tutorials/how-to-upload-files-to-amazon-s3-using-the-file-reader-api
CC-MAIN-2022-21
en
refinedweb
Package org.eclipse.ui.ide Class ResourceSaveableFilter - java.lang.Object - org.eclipse.ui.ide.ResourceSaveableFilter - All Implemented Interfaces: ISaveableFilter public class ResourceSaveableFilter extends Object implements ISaveableFilterA saveable filter where the given savable must either match one of the given roots or be a direct or indirect child of one of the roots. - Since: - 3.9 Method Detail public boolean select(Saveable saveable, IWorkbenchPart[] containingParts)Description copied from interface: ISaveableFilterIndicate whether the given saveable matches this filter. - Specified by: selectin interface ISaveableFilter - Parameters: saveable- the saveable being tested containingParts- the parts that contain the saveable. This list may contain zero or more parts. - Returns: - whether the given saveable matches this filter
https://help.eclipse.org/latest/topic/org.eclipse.platform.doc.isv/reference/api/org/eclipse/ui/ide/ResourceSaveableFilter.html
CC-MAIN-2022-21
en
refinedweb
There are a couple minor errors in Nathaniels (presumably untested) code. But I think it looks quite elegant overall, actually: #!/usr/bin/env python3 from string import ascii_lowercase from random import random class PushbackAdaptor(object): def __init__(self, iterable): self.base = iter(iterable) self.stack = [] def __next__(self): if self.stack: return self.stack.pop() else: return next(self.base) def pushback(self, obj): self.stack.append(obj) def __iter__(self): return self def repeat_some(it): it = PushbackAdaptor(it) for x in it: print(x, end='') if random() > 0.5: it.pushback(x) continue print() repeat_some(ascii_lowercase) repeat_some(range(10)) On Wed, Sep 24, 2014 at 6:50 PM, Nathaniel Smith njs@pobox.com wrote: On 25 Sep 2014 02:09, "Andrew Barnert" abarnert@yahoo.com.dmarc.invalid wrote: On Sep 24, 2014, at 15:09, Cathal Garvey cathalgarvey@cathalgarvey.me wrote: The conversation began with me complaining that I'd like a third mode of explicit flow control in Python for-loops; the ability to repeat a loop iteration in whole. The reason for this was that I was parsing data where a datapoint indicated the end of a preceding sub-group, so the datapoint was both a data structure indicator *and* data in its own right. So I'd like to have iterated over the line once, branching into the flow control management part of an if/else, and then again to branch into the data management part of the same if/else. Yes, this is unnecessary and just a convenience for parsing that I'd like to see. It would really help to have specific use cases, so we can look at how much the syntactic sugar helps readability vs. what we can write today. Otherwise all anyone can say is, "Well, it sounds like it might be nice, but I can't tell if it would be nice enough to be worth a language change", or try to invent their own use cases that might not be as nice as yours and then unfairly dismiss it as unnecessary. The way I would describe this is, the proposal is to add single-item pushback support to all for loops. Tokenizers are a common case that needs pushback ("if we are in the IDENTIFIER state and the next character is not alphanumeric, then set state to NEW_TOKEN and process it again"). I don't know how common such cases are in the grand scheme of things, but they are somewhat cumbersome to handle when they come up. The most elegant solution I know is: class PushbackAdaptor: def __init__(self, iterable): self.base = iter(iterable) self.stack = [] def next(self): if self.stack: return self.stack.pop() else: return self.base.next() def pushback(self, obj): self.stack.append(obj) it = iter(character_source) for char in it: ... if state is IDENTIFIER and char not in IDENT_CHARS: state = NEW_TOKEN it.push_back(char) continue ... In modern python, I think the natural meaning for 'continue with' wouldn't be to special-case something like this. Instead, where 'continue' triggers a call to 'it.next()', I'd expect 'continue with x' to trigger a call to 'it.send(x)'. I suspect this might enable some nice idioms in coroutiney code, though I'm not very familiar with such. -n Python-ideas mailing list Python-ideas@python.org Code of Conduct:
https://mail.python.org/archives/list/python-ideas@python.org/message/YVZB4L4AIOJMMQB6JUJHOS33IHYVOVZC/
CC-MAIN-2022-21
en
refinedweb
State of Golang linters and the differences between them — SourceLevel Golang is full of tools to help us on developing securer, reliable, and useful apps. And there is a category that I would like to talk about: Static Analysis through Linters. What is a linter? Linter is a tool that analyzes source code without the need to compile/run your app or install any dependencies. It will perform many checks in the static code (the code that you write) of your app. It is useful to help software developers ensure coding styles, identify tech debt, small issues, bugs, and suspicious constructs. Helping you and your team in the entire development flow. Linters are available for many languages, but let us take a look at the Golang ecosystem. First things first: how do linters analyze code? Most linters analyzes the result of two phases: Lexer Also known as tokenizing/scanning is the phase in which we convert the source code statements into tokens. So each keyword, constant, variable in our code will produce a token. Parser It will take the tokens produced in the previous phase and try to determine whether these statements are semantically correct. Golang packages In Golang we have scanner, token, parser, and ast (Abstract Syntax Tree) packages. Let's jump straight to a practical example by checking this simple snippet: package mainfunc main() { println("Hello, SourceLevel!") } Okay, nothing new here. Now we’ll use Golang standard library packages to visualize the ast generated by the code above: import ( "go/ast" "go/parser" "go/token" )func main() { // src is the input for which we want to print the AST. src := `our-hello-world-code`// Create the AST by parsing src. fset := token.NewFileSet() // positions are relative to fset f, err := parser.ParseFile(fset, "", src, 0) if err != nil { panic(err) }// Print the AST. ast.Print(fset, f) } Now let’s run this code and look the generated AST: 0 *ast.File { 1 . Package: 2:1 2 . Name: *ast.Ident { 3 . . NamePos: 2:9 4 . . Name: "main" 5 . } 6 . Decls: []ast.Decl (len = 1) { 7 . . 0: *ast.FuncDecl { 8 . . . Name: *ast.Ident { 9 . . . . // Name content 16 . . . } 17 . . . Type: *ast.FuncType { 18 . . . . // Type content 23 . . . } 24 . . . Body: *ast.BlockStmt { 25 . . . . // Body content 47 . . . } 48 . . } 49 . } 50 . Scope: *ast.Scope { 51 . . Objects: map[string]*ast.Object (len = 1) { 52 . . . "main": *(obj @ 11) 53 . . } 54 . } 55 . Unresolved: []*ast.Ident (len = 1) { 56 . . 0: *(obj @ 29) 57 . } 58 } As you can see, the AST describes the previous block in a struct called ast.File which is compound by the following structure: type File struct { Doc *CommentGroup // associated documentation; or nil Package token.Pos // position of "package" keyword Name *Ident // package name Decls []Decl // top-level declarations; or nil Scope *Scope // package scope (this file only) Imports []*ImportSpec // imports in this file Unresolved []*Ident // unresolved identifiers in this file Comments []*CommentGroup // list of all comments in the source file } To understand more about lexical scanning and how this struct is filled, I would recommend Rob Pike talk. Using AST is possible to check the formatting, code complexity, bug risk, unused variables, and a lot more. Code Formatting To format code in Golang, we can use the gofmt package, which is already present in the installation, so you can run it to automatically indent and format your code. Note that it uses tabs for indentation and blanks for alignment. Here is a simple snippet from Go by Examples unformatted: package mainimport "fmt" func intSeq() func() int { i := 0return func() int { i++ return i } }func main() {nextInt := intSeq()fmt.Println(nextInt()) fmt.Println(nextInt()) fmt.Println(nextInt())newInts := intSeq() fmt.Println(newInts())} Then it will be formatted this way: package mainimport "fmt"func intSeq() func() int { i := 0return func() int { i++ return i } }func main() {nextInt := intSeq()fmt.Println(nextInt()) fmt.Println(nextInt()) fmt.Println(nextInt())newInts := intSeq() fmt.Println(newInts())} So we can observe that import earned an extra linebreak but the empty line after main function declaration is still there. So we can assume that we shouldn't transfer the responsibility of keeping your code readable to the gofmt: consider it as a helper on accomplishing readable and maintainable code. It’s highly recommended to run gofmt before you commit your changes, you can even configure a precommit hook for that. If you want to overwrite the changes instead of printing them, you should use gofmt -w. Simplify option gofmt has a -s as Simplify command, when running with this option it considers the {...} Note that for this example, if you think that variable is important for other collaborators, maybe instead of just dropping it with _ I would recommend using _meaningfulName instead. A range of the form: for _ = range v {...} will be simplified to: for range v {...} Note that it could be incompatible with earlier versions of Go. Check unused imports On some occasions, we can find ourselves trying different packages during implementation and just give up on using them. By using goimports package we can identify which packages are being imported and unreferenced in our code and also add missing ones: go install golang.org/x/tools/cmd/goimports@latest Then use it by running with -l option to specify a path, in our case we're doing a recursive search in the project: go imports -l ./.. ../my-project/vendor/github.com/robfig/cron/doc.go So it identified that cron/doc is unreferenced in our code and it's safe to remove it from our code. Code Complexity Linters can be also used to identify how complex your implementation is, using some methodologies for example, let’s start by exploring ABC Metrics. ABC Metrics It’s common nowadays to refer to how large a codebase is by referring to the LoC (Lines of Code) it contains. To have an alternate metric to LoC, Jerry Fitzpatrick proposed a concept called ABC Metric, which are compounded by the following: - (A) Assignment counts: - (B) Branch counts when: Function is called - (C) Conditionals counts: Booleans or logic test ( else, and case) Caution: This metric should not be used as a “score” to decrease, consider it as just an indicator of your codebase or current file being analyzed. To have this indicator in Golang, you can use abcgo package: $ go get -u github.com/droptheplot/abcgo $ (cd $GOPATH/src/github.com/droptheplot/abcgo && go install) Give the following Golang snippet: package mainimport ( "fmt" "os""my_app/persistence" service "my_app/services"flag "github.com/ogier/pflag" )// flags var ( filepath string )func main() { flag.Parse()if flag.NFlag() == 0 { printUsage() }persistence.Prepare() service.Compare(filepath) }func init() { flag.StringVarP(&filepath, "filepath", "f", "", "Load CSV to lookup for data") }func printUsage() { fmt.Printf("Usage: %s [options]\n", os.Args[0]) fmt.Println("Options:") flag.PrintDefaults() os.Exit(1) } Then let’s analyze this example using abcgo: $ abcgo -path main.go Source Func Score A B C /tmp/main.go:18 main 5 0 5 1 /tmp/main.go:29 init 1 0 1 0 /tmp/main.go:33 printUsage 4 0 4 0 As you can see, it will print the Score based on each function found in the file. This metric can help new collaborators identify files that a pair programming session would be required during the onboarding period. Cyclomatic Complexity Cyclomatic Complexity in another hand, besides the complex name, has a simple explanation: it calculates how many paths your code has. It is useful to indicate that you may break your implementation in separate abstractions or give some code smells and insights. To analyze our Golang code let use gocyclo package: $ go install github.com/fzipp/gocyclo/cmd/gocyclo@latest Then let’s check the same piece of code that we’ve analyzed in the ABC Metrics section: $ gocyclo main.go 2 main main main.go:18:1 1 main printUsage main.go:33:1 1 main init main.go:29:1 It also breaks the output based on function name, so we can see that the main function has 2 paths since we're using if conditional there. Style and Patterns Checking To verify code style and patterns in your codebase, Golang already came with installed. Which was a linter that offer no customization but it was performing recommended checks from the Golang development team. It was archived in mid-2021 and it is being recommended Staticcheck be used as a replacement. Golint vs Staticcheck vs revive Before Staticcheck was recommended, we had revive, which for me sounds more like a community alternative linter. As revive states how different it is from archived golint: - Allows us to enable or disable rules using a configuration file. - Allows us to configure the linting rules with a TOML file. - 2x faster running the same rules as golint. - Provides functionality for disabling a specific rule or the entire linter for a file or a range of lines. - golint allows this only for generated files. - Optional type checking. Most rules in golint do not require type checking. If you disable them in the config file, revive will run over 6x faster than golint. - Provides multiple formatters which let us customize the output. - Allows us to customize the return code for the entire linter or based on the failure of only some rules. - Everyone can extend it easily with custom rules or formatters. - Revive provides more rules compared to golint. Testing revive linter I think the extra point goes for revive at the point of creating custom rules or formatters. Wanna try it? $ go install github.com/mgechev/revive@latest Then you can run it with the following command: $ revive -exclude vendor/... -formatter friendly ./... I often exclude my vendor directory since my dependencies are there. If you want to customize the checks to be used, you can supply a configuration file: # Ignores files with "GENERATED" header, similar to golint ignoreGeneratedHeader = true# Sets the default severity to "warning" severity = "warning"# Sets the default failure confidence. The semantics behind this property # is that revive ignores all failures with a confidence level below 0.8." Then you should pass it on running revive: $ revive -exclude vendor/... -config revive.toml -formatter friendly ./... What else? As I’ve shown, you can use linters for many possibilities, you can also focus on: - Performance - Unused code - Reports - Outdated packages - Code without tests (no coverage) - Magic number detector Feel free to try new linters that I didn’t mention here, I’d recommend the archived repository awesome-go-linters. Where to start? To start, consider using gofmt before each commit or whenever you remember to run, then try revive. Which linters are you using? Originally published at on January 18, 2022.
https://medium.com/sourcelevel/state-of-golang-linters-and-the-differences-between-them-sourcelevel-3ae2c2072171?source=read_next_recirc---------3---------------------85fed462_7aa5_4160_ba4f_2577249e554d-------
CC-MAIN-2022-21
en
refinedweb
The padata parallel execution mechanism¶ - Date May 2020 Padata is a mechanism by which the kernel can farm jobs out to be done in parallel on multiple CPUs while optionally retaining their ordering. It was originally developed for IPsec, which needs to perform encryption and decryption on large numbers of packets without reordering those packets. This is currently the sole consumer of padata’s serialized job support. Padata also supports multithreaded jobs, splitting up the job evenly while load balancing and coordinating between threads. Running Serialized Jobs¶ Initializing¶ The first step in using padata to run serialized jobs is to set up a padata_instance structure for overall control of how jobs are to be run: #include <linux/padata.h> struct padata_instance *padata_alloc(const char *name); ‘name’ simply identifies the instance. Then, complete padata initialization by allocating a padata_shell: struct padata_shell *padata_alloc_shell(struct padata_instance *pinst); A padata_shell is used to submit a job to padata and allows a series of such jobs to be serialized independently. A padata_instance may have one or more padata_shells associated with it, each allowing a separate series of jobs. Modifying cpumasks¶ The CPUs used to run jobs can be changed in two ways, programatically with padata_set_cpumask() or via sysfs. The former is defined: int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, cpumask_var_t cpumask); Here cpumask_type is one of PADATA_CPU_PARALLEL or PADATA_CPU_SERIAL, where a parallel cpumask describes which processors will be used to execute jobs submitted to this instance in parallel and a serial cpumask defines which processors are allowed to be used as the serialization callback processor. cpumask specifies the new cpumask to use. There may be sysfs files for an instance’s cpumasks. For example, pcrypt’s live in /sys/kernel/pcrypt/<instance-name>. Within an instance’s directory there are two files, parallel_cpumask and serial_cpumask, and either cpumask may be changed by echoing a bitmask into the file, for example: echo f > /sys/kernel/pcrypt/pencrypt/parallel_cpumask Reading one of these files shows the user-supplied cpumask, which may be different from the ‘usable’ cpumask. Padata maintains two pairs of cpumasks internally, the user-supplied cpumasks and the ‘usable’ cpumasks. (Each pair consists of a parallel and a serial cpumask.) The user-supplied cpumasks default to all possible CPUs on instance allocation and may be changed as above. The usable cpumasks are always a subset of the user-supplied cpumasks and contain only the online CPUs in the user-supplied masks; these are the cpumasks padata actually uses. So it is legal to supply a cpumask to padata that contains offline CPUs. Once an offline CPU in the user-supplied cpumask comes online, padata is going to use it. Changing the CPU masks are expensive operations, so it should not be done with great frequency. Running A Job¶ Actually submitting work to the padata instance requires the creation of a padata_priv structure, which represents one job: the job is done with: int padata_do_parallel(struct padata_shell *ps, struct padata_priv *padata, int *cb_cpu); The ps and padata structures must be set up as described above; cb_cpu points to the preferred CPU to be used for the final callback when the job is done; it must be in the current instance’s CPU mask (if not the cb_cpu pointer is updated to point to the CPU actually chosen). The return value from padata_do_parallel() is zero on success, indicating that the job is in progress. -EBUSY means that somebody, somewhere else is messing with the instance’s CPU mask, while -EINVAL is a complaint about cb_cpu not being in the serial cpumask, no online CPUs in the parallel or serial cpumasks, or a stopped instance. Each job submitted to padata_do_parallel() will, in turn, be passed to exactly one call to the above-mentioned parallel() function, on one CPU, so true parallelism is achieved by submitting multiple jobs. parallel() runs job from this point. The job need not be completed during this call, but, if parallel() leaves work outstanding, it should be prepared to be called again with a new job before the previous one completes. Serializing Jobs¶ When a job does complete, parallel() (or whatever function actually finishes the work) run with local software interrupts disabled. Note that this call may be deferred for a while since the padata code takes pains to ensure that jobs are completed in the order in which they were submitted. Destroying¶ Cleaning up a padata instance predictably involves calling the two free functions that correspond to the allocation in reverse: void padata_free_shell(struct padata_shell *ps); void padata_free(struct padata_instance *pinst); It is the user’s responsibility to ensure all outstanding jobs are complete before any of the above are called. Running Multithreaded Jobs¶ A multithreaded job has a main thread and zero or more helper threads, with the main thread participating in the job and then waiting until all helpers have finished. padata splits the job into units called chunks, where a chunk is a piece of the job that one thread completes in one call to the thread function. A user has to do three things to run a multithreaded job. First, describe the job by defining a padata_mt_job structure, which is explained in the Interface section. This includes a pointer to the thread function, which padata will call each time it assigns a job chunk to a thread. Then, define the thread function, which accepts three arguments, start, end, and arg, where the first two delimit the range that the thread operates on and the last is a pointer to the job’s shared state, if any. Prepare the shared state, which is typically allocated on the main thread’s stack. Last, call padata_do_multithreaded(), which will return once the job is finished. Interface¶ Definition struct padata_priv { struct list_head list; struct parallel_data *pd; int cb_cpu; unsigned int seq_nr; int info; void (*parallel)(struct padata_priv *padata); void (*serial)(struct padata_priv *padata); }; list List entry, to attach to the padata lists. pd Pointer to the internal control structure. cb_cpu Callback cpu for serializatioon. seq_nr Sequence number of the parallelized data object. info Used to pass information from the parallel to the serial function. parallel Parallel execution function. serial Serial complete function. Definition struct padata_list { struct list_head list; spinlock_t lock; }; list List head. lock List lock. Definition struct padata_serial_queue { struct padata_list serial; struct work_struct work; struct parallel_data *pd; }; serial List to wait for serialization after reordering. work work struct for serialization. pd Backpointer to the internal control structure. Definition struct padata_cpumask { cpumask_var_t pcpu; cpumask_var_t cbcpu; }; pcpu cpumask for the parallel workers. cbcpu cpumask for the serial (callback) workers. - struct parallel_data¶ Internal control structure, covers everything that depends on the cpumask in use. Definition struct parallel_data { struct padata_shell *ps; struct padata_list __percpu *reorder_list; struct padata_serial_queue __percpu *squeue; atomic_t refcnt; unsigned int seq_nr; unsigned int processed; int cpu; struct padata_cpumask cpumask; struct work_struct reorder_work; spinlock_t lock; }; ps padata_shell object. reorder_list percpu reorder lists squeue percpu padata queues used for serialuzation. refcnt Number of objects holding a reference on this parallel_data. seq_nr Sequence number of the parallelized data object. processed Number of already processed objects. cpu Next CPU to be processed. cpumask The cpumasks in use for parallel and serial workers. reorder_work work struct for reordering. lock Reorder lock. - struct padata_shell¶ Wrapper around struct parallel_data, its purpose is to allow the underlying control structure to be replaced on the fly using RCU. Definition struct padata_shell { struct padata_instance *pinst; struct parallel_data __rcu *pd; struct parallel_data *opd; struct list_head list; }; pinst padat instance. pd Actual parallel_data structure which may be substituted on the fly. opd Pointer to old pd to be freed by padata_replace. list List entry in padata_instance list. Definition struct padata_mt_job { void (*thread_fn)(unsigned long start, unsigned long end, void *arg); void *fn_arg; unsigned long start; unsigned long size; unsigned long align; unsigned long min_chunk; int max_threads; }; thread_fn Called for each chunk of work that a padata thread does. fn_arg The thread function argument. start The start of the job (units are job-specific). size size of this node’s work (units are job-specific). align Ranges passed to the thread function fall on this boundary, with the possible exceptions of the beginning and end of the job. min_chunk The minimum chunk size in job-specific units. This allows the client to communicate the minimum amount of work that’s appropriate for one worker thread to do at once. max_threads Max threads to use for the job, actual number may be less depending on task size and minimum chunk size. Definition struct padata_instance { struct hlist_node cpu_online_node; struct hlist_node cpu_dead_node; struct workqueue_struct *parallel_wq; struct workqueue_struct *serial_wq; struct list_head pslist; struct padata_cpumask cpumask; struct kobject kobj; struct mutex lock; u8 flags; #define PADATA_INIT 1; #define PADATA_RESET 2; #define PADATA_INVALID 4; }; cpu_online_node Linkage for CPU online callback. cpu_dead_node Linkage for CPU offline callback. parallel_wq The workqueue used for parallel work. serial_wq The workqueue used for serial work. pslist List of padata_shell objects attached to this instance. cpumask User supplied cpumasks for parallel and serial works. kobj padata instance kernel object. lock padata instance lock. flags padata flags. - int padata_do_parallel(struct padata_shell *ps, struct padata_priv *padata, int *cb_cpu)¶ padata parallelization function Parameters struct padata_shell *ps padatashell struct padata_priv *padata object to be parallelized int *cb_cpu pointer to the CPU that the serialization callback function should run on. If it’s not in the serial cpumask of pinst (i.e. cpumask.cbcpu), this function selects a fallback CPU and if none found, returns -EINVAL. Description The parallelization callback function will run with BHs off. Note Every object which is parallelized by padata_do_parallel must be seen by padata_do_serial. Return 0 on success or else negative error code. - void padata_do_serial(struct padata_priv *padata)¶ padata serialization function Parameters struct padata_priv *padata object to be serialized. Description padata_do_serial must be called for every parallelized object. The serialization callback function will run with BHs off. - void padata_do_multithreaded(struct padata_mt_job *job)¶ run a multithreaded job Parameters struct padata_mt_job *job Description of the job. Description See the definition of struct padata_mt_job for more details. - int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, cpumask_var_t cpumask)¶ Sets specified by cpumask_type cpumask to the value equivalent to cpumask. Parameters struct padata_instance *pinst padata instance int cpumask_type PADATA_CPU_SERIAL or PADATA_CPU_PARALLEL corresponding to parallel and serial cpumasks respectively. cpumask_var_t cpumask the cpumask to use Return 0 on success or negative error code - struct padata_instance * padata_alloc(const char *name)¶ allocate and initialize a padata instance Parameters const char *name used to identify the instance Return new instance on success, NULL on error - void padata_free(struct padata_instance *pinst)¶ free a padata instance Parameters struct padata_instance *pinst padata instance to free - struct padata_shell * padata_alloc_shell(struct padata_instance *pinst)¶ Allocate and initialize padata shell. Parameters struct padata_instance *pinst Parent padata_instance object. Return new shell on success, NULL on error - void padata_free_shell(struct padata_shell *ps)¶ free a padata shell Parameters struct padata_shell *ps padata shell to free
https://www.kernel.org/doc/html/v5.12/core-api/padata.html
CC-MAIN-2022-21
en
refinedweb
1631A - Мин-макс замены Author: humbertoyusta Think about how $$$max(a_1, a_2, ..., a_n, b_1, b_2, ..., b_n)$$$ will contribute to the answer. The maximum of one array is always $$$max(a_1, a_2, ..., a_n, b_1, b_2, ..., b_n)$$$. How should you minimize then? Let $$$m_1 = \max(a_1, a_2, ..., a_n, b_1, b_2, ..., b_n)$$$. The answer will always be $$$m_1 \cdot m_2$$$ where $$$m_2$$$ is the maximum of the array that does not contain $$$m_1$$$. Since $$$m_1$$$ is fixed, the problem can be reduced to minimize $$$m_2$$$, that is, minimize the maximum of the array that does not contain the global maximum. WLOG assume that the global maximum will be in the array $$$b$$$, we can swap elements at each index $$$x$$$ such that $$$a_x > b_x$$$, ending with $$$a_i \leq b_i$$$ for all $$$i$$$. It can be shown that the maximum of array $$$a$$$ is minimized in this way. Time complexity: $$$O(n)$$$ #include<bits/stdc++.h> using namespace std; int calc_max(vector<int> a){ int res = 0; for( auto i : a ) res = max( res , i ); return res; } int main(){ int tc; cin >> tc; while( tc-- ){ int n; cin >> n; vector<int> a(n), b(n); for( auto &i : a ) cin >> i; for( auto &i : b ) cin >> i; for(int i=0; i<n; i++) if( a[i] > b[i] ) swap( a[i] , b[i] ); cout << calc_max(a) * calc_max(b) << '\n'; } } 1631B - Веселье с четными подмассивами Author: humbertoyusta It is not possible to modify $$$a_n$$$ using the given operation. Think about the leftmost $$$x$$$ such that $$$a_x \neq a_n$$$. For simplicity, let $$$b_1, b_2, ..., b_n = a_n, a_{n-1}, ..., a_1$$$ (let $$$b$$$ be $$$a$$$ reversed). The operation transforms to select a subarray $$$[l, r]$$$ of length $$$2\cdot{k}$$$, so $$$k = \frac{r-l+1}{2}$$$, then for all $$$i$$$ such that $$$0 \leq i < k$$$, set $$$b_{l+k+i} = b_{l+i}$$$. $$$b_1$$$ can not be changed with the given operation. That reduces the problem to make all elements equal to $$$b_1$$$. Let $$$x$$$ be the rightmost index such that for all $$$1 \leq i \leq x$$$, $$$b_i = b_1$$$ holds. The problem will be solved when $$$x = n$$$. If an operation is applied with $$$l + k > x + 1$$$, $$$b_{x+1}$$$ will not change and $$$x$$$ will remain the same. The largest range with $$$l + k \leq x + 1$$$ is $$$[1, 2\cdot{x}]$$$, applying an operation to it will lead to $$$b_{x+1}, b_{x+2}, ..., b_{2\cdot{x}} = b_1, b_2, ..., b_x$$$, so $$$x$$$ will become at least $$$2\cdot{x}$$$ and there is not any other range that will lead to a bigger value of $$$x$$$. If $$$2\cdot{x} > n$$$, it is possible to apply the operation on $$$[x-(n-x)+1,n]$$$, after applying it $$$b_{x+1}, ..., b_n = b_{x-(n-x)+1}, ..., b_x$$$ and all elements will become equal. The problem can now be solved by repeatedly finding $$$x$$$ and applying the operation on $$$[1, 2\cdot{x}]$$$ or on $$$[x-(n-x)+1,n]$$$ if $$$2\cdot{x} > n$$$. Since $$$x$$$ will become at least $$$2\cdot{x}$$$ in each operation but the last one, the naive implementation will take $$$O(n\log{n})$$$, however, it is easy to implement it in $$$O(n)$$$. #include<bits/stdc++.h> using namespace std; int find_rightmost_x(vector<int> &b){ int n = (int)b.size() - 1; int x = 1; while( x + 1 <= n && b[x+1] == b[1] ) x ++; return x; } void apply(vector<int> &b,int l,int r){ int k = ( r - l + 1 ) / 2; for(int i=0; i<k; i++) b[l+k+i] = b[l+i]; } int main(){ int tc; cin >> tc; while( tc-- ){ int n; cin >> n; vector<int> a(n+1); for(int i=1; i<=n; i++) cin >> a[i]; vector<int> b = a; reverse(b.begin()+1,b.end()); int ans = 0; while( find_rightmost_x(b) != n ){ int x = find_rightmost_x(b); if( 2 * x > n ){ apply(b,x-(n-x)+1,n); ans ++; } else{ apply(b,1,2*x); ans ++; } } cout << ans << '\n'; } return 0; } #include <bits/stdc++.h> using namespace std; int main() { int tc; cin >> tc; while(tc--) { int n; cin >> n; vector<int> a(n+1); for(int i=1; i<=n; i++) cin >> a[i]; vector<int> b = a; reverse(b.begin()+1,b.end()); int ans = 0, x = 1; while( x < n ) { if( b[x+1] == b[1] ){ x ++; continue; } ans ++; x *= 2; } cout << ans << '\n'; } return 0; } 1630A - И-сопоставление Author: humbertoyusta Try to find a pairing such that $$$\sum\limits_1^{n/2}{a_i\&{b_i}}=0$$$ Try to find a pairing for $$$k>0$$$ by changing only a few elements from the previous pairing. Let's define $$$c(x)$$$, the compliment of $$$x$$$, as the number $$$x$$$ after changing all bits 0 to 1 and vice versa, for example $$$c(110010_2) = 001101_2$$$. It can be shown that $$$c(x) = x\oplus{(n-1)}$$$. Remember that $$$n-1 = 11...11_2$$$ since $$$n$$$ is a power of $$$2$$$. We will separate the problem into three cases. Case $$$k = 0$$$: In this case it is possible to pair $$$x$$$ with $$$c(x)$$$ for $$$0\leq{x}<{\frac{n}{2}}$$$, getting $$$\sum\limits_{x=0}^{\frac{n}{2}-1} {x\&{c(x)}} = 0$$$. Case $$$0 < k < n-1$$$: In this case it is possible to pair each element with its compliment except $$$0$$$, $$$k$$$, $$$c(k)$$$ and $$$n-1$$$, and then pair $$$0$$$ with $$$c(k)$$$ and $$$k$$$ with $$$n-1$$$, $$$0\& c(k) = 0$$$ and $$$k\& (n-1) = k$$$. Case $$$k = n-1$$$: There are many constructions that work in this case, if $$$n=4$$$ there is no solution, if $$$n \geq8$$$ it is possible to construct the answer in the following way: It is possible to pair $$$n-1$$$ with $$$n-2$$$, $$$n-3$$$ with $$$1$$$, $$$0$$$ with $$$2$$$ and all other elements with their compliments. - $$$(n-1)\&{(n-2)}=n-2$$$, for example $$$1111_2\&{1110_2}=1110_2$$$ - $$$(n-3)\&{1}=1$$$, for example $$$1101_2\&{0001_2}=0001_2$$$ - $$$0\&{2}=0$$$, for example $$$0000_2\&{0010_2}=0000_2$$$ - All other elements can be paired with their complements and $$$x\&{c(x)}=0$$$ Note that $$$(n-2)+1+0+0+ ... +0=n-1$$$. Each case can be implemented in $$$O(n)$$$. #include<bits/stdc++.h> using namespace std; int c(int x,int n){ return ( x ^ ( n - 1 ) ); } int main(){ int tc; cin >> tc; while( tc-- ){ int n, k; cin >> n >> k; vector<int> a(n/2), b(n/2); if( k == 0 ){ for(int i=0; i<n/2; i++){ a[i] = i; b[i] = c(i,n); } } if( k > 0 && k < n - 1 ){ int small_k = min( k , c(k,n) ); for(int i=0; i<n/2; i++){ if( i != 0 && i != small_k ){ a[i] = i; b[i] = c(i,n); } } a[0] = 0; b[0] = c(k,n); a[small_k] = k; b[small_k] = n - 1; } if( k == n - 1 ){ if( n == 4 ){ cout << -1 << '\n'; continue; } a[0] = n - 2; b[0] = n - 1; a[1] = 1; b[1] = n - 3; a[2] = 0; b[2] = 2; for(int i=3; i<n/2; i++){ a[i] = i; b[i] = c(i,n); } } for(int i=0; i<n/2; i++){ cout << a[i] << ' ' << b[i] << '\n'; } } return 0; } Let's define $$$a$$$ such that $$$a_i = i-1$$$ for $$$1\le i \le \frac{n}{2}$$$ and $$$b$$$ such that $$$b_i = c(a_i)$$$ for $$$1\le i \le \frac{n}{2}$$$. For example, for $$$n = 16$$$ they are: All swaps are independent and are applied to the original $$$a$$$ and $$$b$$$. After swapping two adjacent elements of $$$b$$$ (that have not been swapped) the sum will change in $$$2^x-1$$$ for some positive integer $$$x$$$. Then it is possible to solve the problem by repeatedly swapping the pair that maximizes $$$\sum\limits_{i=1}^{n/2}{ a_i\&{b_i}}$$$ after the swap such that $$$\sum\limits_{i=1}^{n/2}{ a_i\&{b_i}} \leq k$$$ is held and none of its elements have been swapped yet. However, this only works for all values of $$$k$$$ if $$$n \geq 32$$$, the case $$$n \leq 16$$$ can be handled with brute force. Please read the previous solution. Arrays $$$a$$$ and $$$b$$$ from it will also be used here. It is possible to start with $$$a$$$ and $$$b$$$ and repeatedly select and index $$$x$$$ randomly and swap $$$b_x$$$ with $$$b_{x+1}$$$ if $$$\sum\limits_{i=1}^{n/2}{a_i\&b_i} \leq k$$$ holds until $$$\sum\limits_{i=1}^{n/2}{a_i\&b_i} = k$$$. We have no proof of this solution but it was stressed against each possible input to the problem and it worked quickly for $$$n \geq 16$$$, the case $$$n \leq 8$$$ can be handled with brute force. 1630B - Отрезок и разбиение Author: humbertoyusta Focus on how to solve the problem for a fixed interval $$$[x,y]$$$ Think about the numbers inside the interval as $$$+1$$$, and the other numbers as $$$-1$$$ Try to relate a partition into valid subarrays with an increasing sequence of the prefix sums array Note that if some value $$$x$$$ ($$$x>0$$$) appears on the prefix sums array, $$$x-1$$$ appears before since the absolute value of the elements is $$$1$$$ (+1 and -1) Focus on how to solve the problem for a fixed interval $$$[x,y]$$$: Let us define an array $$$b$$$ such that $$$b_i = 1$$$ if $$$x \le a_i \le y$$$ or $$$b_i = -1$$$ otherwise, for all $$$1\le i\le n$$$. Let's define $$$psum_i$$$ as $$$b_1 + b_2 + ... + b_i$$$. We need to find a partition on $$$k$$$ subarrays with positive sum of $$$b_i$$$. The sum of a subarray $$$[l,r]$$$ is $$$b_l+b_{l+1}+...+b_r = psum_r-psum_{l-1}$$$. Then a subarray is valid if $$$psum_r > psum_{l-1}$$$. We need to find an increasing sequence of $$$psum$$$ of length $$$k+1$$$ starting at $$$0$$$ and ending at $$$n$$$. Let's define $$$firstocc_x$$$ to be the first occurrence of the integer $$$x$$$ in $$$psum$$$. If $$$psum_n < k$$$ there will be no valid sequence, otherwise the sequence $$$0, firstocc_1, firstocc_2, ..., firstocc_{k-1}, n$$$ will satisfy all constraints. Note that, since $$$|psum_i-psum_{i-1}| = 1$$$, for $$$i>0$$$, then $$$firstocc_v$$$ exists and $$$firstocc_v < firstocc_{v+1}$$$ for $$$0\leq v \leq psum_n$$$. This solves the problem for a fixed interval. It remains to find the smallest interval $$$[x,y]$$$ such that $$$psum_n \geq k$$$. For a given interval $$$[x,y]$$$, since $$$psum_n = b_1 + b_2 + ... + b_n$$$, $$$psum_n$$$ will be equal to the number of elements of $$$a$$$ inside the interval minus the number of elements outside. Then for each $$$x$$$, it is possible to find the smallest $$$y$$$ such that $$$psum_n \geq k$$$ using binary search or two pointers. It is also possible to note that: We need to find the smallest interval with at least $$$\lceil{\frac{k+n}{2}}\rceil$$$ inside, let $$$A$$$ be the array $$$a$$$ sorted, the answer is the minimum interval among all intervals $$$[A_i, A_{i+\lceil{\frac{k+n}{2}}\rceil-1}]$$$ for $$$1 \leq i \leq n - \lceil{\frac{k+n}{2}}\rceil+1$$$. Complexity: $$$O(n\log{n})$$$ if solved with the previous formula or binary search, or $$$O(n)$$$ is solved with two pointers. #include<bits/stdc++.h> using namespace std; int main() { int tc; cin >> tc; while( tc-- ){ int n, k; cin >> n >> k; vector<int> a(n), sorted_a(n); for(int i=0; i<n; i++){ cin >> a[i]; sorted_a[i] = a[i]; } sort(sorted_a.begin(),sorted_a.end()); int req_sum = ( n + k + 1 ) / 2; pair<int,pair<int,int>> ans = { n + 1 , { -1 , -1 } }; for(int i=0; i+req_sum-1<n; i++) ans = min( ans , { sorted_a[i+req_sum-1] - sorted_a[i] , { sorted_a[i] , sorted_a[i+req_sum-1] } } ); cout << ans.second.first << ' ' << ans.second.second << '\n'; int subarrays_found = 0, curr_sum = 0; int last_uncovered = 1; for(int i=0; i<n; i++){ if( a[i] >= ans.second.first && a[i] <= ans.second.second ) curr_sum ++; else curr_sum --; if( curr_sum > 0 && subarrays_found + 1 < k ){ cout << last_uncovered << ' ' << ( i + 1 ) << '\n'; last_uncovered = i + 2; subarrays_found ++; curr_sum = 0; } } subarrays_found ++; cout << last_uncovered << ' ' << n << '\n'; } return 0; } 1630C - Раскрась середину Author: humbertoyusta Think about all occurrences of some element, what occurrences are important? Think about the first and last occurrence of each element as a segment. Think about the segments that at least one of its endpoints will end up with $$$c_i = 0$$$. For each $$$x$$$ such that all the elements $$$a_1, a_2, ..., a_x$$$ are different from $$$a_{x+1}, a_{x+2}, ..., a_n$$$ it is impossible to apply an operation with some indices from the first part, and some other from the second one. Then it is possible to split the array in subarrays for each $$$x$$$ such that the previous condition holds, and sum the answers from all of them. Let's solve the problem independently for one of those subarrays, let's denote its length as $$$m$$$, the values of its elements as $$$a_1, ..., a_m$$$ and their colors as $$$c_1, ..., c_m$$$: For every tuple $$$(x, y, z)$$$ such that $$$1 \le x < y < z \le m$$$ and $$$a_x = a_y = a_z$$$ it is possible to apply an operation with indices $$$x, y$$$ and $$$z$$$. Then only the first and last occurrences of each element are important. For all pairs $$$(x, y)$$$ such that $$$1 \le x < y \le m$$$, $$$a_x = a_y$$$, $$$a_x$$$ is the first occurrence and $$$a_y$$$ the last occurrence of that value, a segment $$$[x, y]$$$ will be created. Let's denote the left border of a segment $$$i$$$ as $$$l_i$$$ and the right border as $$$r_i$$$. Let's say that a set of segments $$$S$$$ is connected if the union of its segments is the segment $$$[\min(l_i, \forall i\in{S}), \max(r_i, \forall i\in{S})]$$$. Instead of maximizing $$$\sum\limits_{i=1}^m{c_i}$$$, it is possible to focus on minimizing $$$\sum\limits_{i=1}^m{[c_i=0]}$$$. Lemma 1: If we have a connected set $$$S$$$, it is possible to apply some operations to its induced array to end up with at most $$$|S|+1$$$ elements with $$$c_i = 0$$$. For each segment $$$x$$$ in $$$S$$$ if there exists a segment $$$y$$$ such that $$$l_y < l_x < r_x < r_y$$$, it is possible to apply the operation with indices $$$l_y, l_x, r_y$$$ and with $$$l_y, r_x, r_y$$$. Otherwise, add this segment to a set $$$T$$$. Then is possible to repeatedly select the leftmost segment of $$$T$$$ that have not been selected yet, and set the color of its right border to $$$1$$$, this will be always possible until we select the rightmost segment since $$$T$$$ is connected. In the end, all the left borders of the segments of $$$T$$$ will have $$$c_i = 0$$$, the same holds for the right border of the rightmost segment of $$$T$$$, which leads to a total of $$$|T|+1$$$ elements with $$$c_i = 0$$$, and $$$|T| \le |S|$$$. Let $$$X$$$ be a subarray that can be obtained by applying the given operation to the initial subarray any number of times. Let $$$S(X)$$$ be the set of segments that includes all segments $$$i$$$ such that $$$c[l_i] = 0$$$ or $$$c[r_i] = 0$$$ (or both), where $$$c[i]$$$ is the color of the $$$i$$$-th segment of the subarray $$$X$$$. Lemma 2: There is always an optimal solution in which $$$S(X)$$$ is connected. Suppose $$$S(X)$$$ is not connected, if there are only two components of segments $$$A$$$ and $$$B$$$, there will always be a segment from $$$A$$$ to $$$B$$$ due to the way the subarray was formed. If $$$A$$$ or $$$B$$$ have some segment $$$x$$$ such that there exists a segment $$$y$$$ such that $$$l_y < l_x < r_x < r_y$$$ you can erase it by applying the operation with indices $$$l_y, l_x, r_y$$$ and with $$$l_y, r_x, r_y$$$. Then we can assume that $$$\sum\limits_{i\in A}{([c[l_i]=0]+[c[r_i]=0])} = |A|+1$$$ and similarly for $$$B$$$. The solution to $$$A$$$ before merging is $$$|A|+1$$$, the solution to $$$B$$$ is $$$|B|+1$$$, if we merge $$$A$$$ and $$$B$$$ with a segment we get a component $$$C$$$ of size $$$|A|+|B|+1$$$, and its answer will be $$$|A|+|B|+1+1$$$ (using \bf{lemma 1}), the case with more than two components is similar, then we can always merge the components without making the answer worse. Finally, the problem in each subarray can be reduced to find the smallest set (in number of segments), such that the union of its segments is the whole subarray. This can be computed with dp or sweep line. Let $$$dp[x]$$$ be the minimum size of a set such that the union of its segments is the segment $$$[1,x]$$$. To compute $$$dp$$$, process all the segments in increasing order of $$$r_i$$$, and compute the value of $$$dp[r_i] = \min(dp[l_i+1], dp[l_i+2], ..., dp[r_i-1]) + 1$$$. Then the solution to the subarray is $$$dp[m] + 1$$$, this $$$dp$$$ can be computed in $$$O(m\log{m})$$$ with segment tree. It is possible to compute a similar $$$dp$$$ to solve the problem for the whole array without splitting the array, the time complexity is $$$O(n\log{n})$$$. #include<bits/stdc++.h> using namespace std; template <typename Tnode,typename Tup> struct ST{ vector<Tnode> st; int sz; ST(int n){ sz = n; st.resize(4*n); } Tnode merge_(Tnode a, Tnode b){ Tnode c; /// Merge a and b into c c = min( a , b ); return c; } void update_node(int nod,Tup v){ /// how v affects to st[nod] st[nod] = v; } void build(vector<Tnode> &arr){ build(1,0,sz-1,arr); } void build(int nod,int l,int r,vector<Tnode> &arr){ if( l == r ){ st[nod] = arr[l]; return; } int mi = ( l + r ) >> 1; build((nod<<1),l,mi,arr); build((nod<<1)+1,mi+1,r,arr); st[nod] = merge_( st[(nod<<1)] , st[(nod<<1)+1] ); } void update(int id,Tup v){ update(1,0,sz-1,id,v); } void update(int nod,int l,int r,int id,Tup v){ if( l == r ){ update_node(nod,v); return; } int mi = ( l + r ) >> 1; if( id <= mi ) update((nod<<1),l,mi,id,v); else update((nod<<1)+1,mi+1,r,id,v); st[nod] = merge_( st[(nod<<1)] , st[(nod<<1)+1] ); } Tnode query(int l,int r){ return query(1,0,sz-1,l,r); } Tnode query(int nod,int l,int r,int x,int y){ if( l >= x && r <= y ) return st[nod]; int mi = ( l + r ) >> 1; if( y <= mi ) return query((nod<<1),l,mi,x,y); if( x > mi ) return query((nod<<1)+1,mi+1,r,x,y); return merge_( query((nod<<1),l,mi,x,y), query((nod<<1)+1,mi+1,r,x,y) ); } }; int main(){ int n; cin >> n; vector<int> a(n), fst(n,-1), lst(n,-1); for(int i=0; i<n; i++){ cin >> a[i]; a[i] --; if( fst[a[i]] == -1 ) fst[a[i]] = i; lst[a[i]] = i; } vector<pair<int,int>> segments; for(int i=0; i<n; i++) if( fst[i] != -1 ) segments.push_back({lst[i]+1,fst[i]+1}); sort(segments.begin(),segments.end()); vector<int> dp(n+1,1000000007); dp[0] = 0; ST<int,int> st(n+1); st.build(dp); for( auto i : segments ){ dp[i.first] = min( dp[i.first] , dp[i.second-1] + 1 + ( i.first != i.second ) ); if( i.second + 1 <= i.first - 1 ) dp[i.first] = min( dp[i.first] , st.query(i.second+1,i.first-1) + 1 ); st.update(i.first,dp[i.first]); } cout << n - dp[n] << '\n'; return 0; } It is possible to create an event where a segment starts and an event where a segment ends. Then process the events in order and each time a segment ends, if it is the rightmost segment added, add to the solution the segment with maximum $$$r_i$$$ among the segments that $$$l_i$$$ is already processed. It is possible to modify the sweep line to solve the problem for the whole array without splitting the array, the time complexity is $$$O(n)$$$ or $$$O(n\log{n})$$$ depending on the implementation. 1630D - Переверни отрезки Author: humbertoyusta What is the size of the smallest subarray that it is possible to multiply by $$$-1$$$ using some operations? Let $$$s$$$ be a string such that $$$s_i=0$$$ if that element is multiplied by $$$-1$$$ or $$$s_i=1$$$ otherwise, what such $$$s$$$ are reachable? Think about the parity of the sum of all $$$s_i$$$ such that $$$i\mod{g}=constant$$$, where $$$g$$$ is the size of the smallest subarray that it is possible to multiply by $$$-1$$$ using some operations If we have $$$x, y \in B$$$ (assume $$$x > y$$$), since all elements of $$$B$$$ are at most $$$\lfloor\frac{n}{2}\rfloor$$$, it is possible to multiply all intervals of size $$$x-y$$$ by either multiplying an interval of size $$$x$$$ that starts at the position of the interval of size $$$x-y$$$, and an interval of size $$$y$$$ that ends at the same position as the interval $$$x$$$, or multiply an interval of size $$$x$$$ that ends at the same position as the interval of size $$$x-y$$$ and another interval of size $$$y$$$ that starts at the same position as the interval of size $$$x$$$. For two elements $$$x, y \in B$$$ ($$$x > y$$$), it is possible to add $$$x-y$$$ to $$$B$$$, repeatedly doing this it is possible to get $$$\gcd(x, y)$$$. Let $$$g = \gcd(b_1, b_2, ..., b_m : b_i \in B)$$$, by applying the previous reduction $$$g$$$ is the smallest element that can be obtained, and all other elements will be its multiples, then the problem is reduced to, multiplying intervals of size $$$g$$$ by $$$-1$$$ any number of times, maximize $$$\sum\limits_{i=1}^n{a_i}$$$. Let's define the string $$$s = 000...00$$$ of size $$$n$$$ (0-indexed) such that $$$s_i = 0$$$ if the $$$i$$$-th element is not multiplied by $$$-1$$$ or $$$s_i = 1$$$ otherwise. The operation flips all values of $$$s$$$ in a substring of size $$$g$$$. Let's define $$$f_x$$$ as the xor over all values $$$s_i$$$ such that $$$i\mod g = x$$$, note that $$$f_x$$$ is defined for the values $$$0 \le x \le g-1$$$. In any operation, all values of $$$f$$$ change simultaneously, since they are all $$$0$$$ at the beginning only the states of $$$s$$$ such that all $$$f_i$$$ are equal are reachable. To prove that all states of $$$s$$$ with all $$$f_i$$$ equal are reachable, let's start with any state of $$$s$$$ such that $$$f = 000...00$$$ and repeatedly select the rightmost $$$i$$$ such that $$$s_i=1$$$ and $$$i\geq g-1$$$ and flip the substring that ends in that position, after doing that as many times as possible, $$$s_i = 0$$$ for $$$g-1\leq i\leq n-1$$$. If $$$s_i=1$$$ for any $$$0\leq i < g$$$, then $$$f_i = 1$$$ which is a contradiction since $$$f_{g-1} = 0$$$ and all $$$f_i$$$ change simultaneously, then $$$s = 000...00$$$. The case with all values of $$$f$$$ equal to $$$1$$$ is similar. After this, it is possible to solve the problem with $$$dp$$$. Let $$$dp_{i,0}$$$ be the maximum sum of $$$a_i, a_{i-g}, a_{i-2\cdot{g}}, ..., a_{i-k\cdot{g}}$$$ such that $$$i-k\cdot{g}\equiv{i}(\mod{g})$$$ and $$$\bigoplus\limits_{k\geq 0, i-k\cdot g\geq 0} f_{i-k \cdot g}=0$$$ and $$$dp_{i,1}$$$ be the same such that $$$\bigoplus\limits_{k\geq 0, i-k\cdot g\geq 0} f_{i-k \cdot g}=1$$$. The answer to the problem is $$$\max(\sum\limits_{i=n-g}^{n-1}{dp_{i,0}}, \sum\limits_{i=n-g}^{n-1}{dp_{i,1}} )$$$ (0-indexed). This $$$dp$$$ can be computed in $$$O(n)$$$. #include <bits/stdc++.h> using namespace std; typedef long long ll; int main() { ios_base::sync_with_stdio(0); cin.tie(0); int T; cin >> T; while(T--) { int n,m; cin >> n >> m; vector<int> a(n),b(m); for(int i=0;i<n;i++) cin >> a[i]; int g=0; for(int i=0;i<m;i++) { cin >> b[i]; g=__gcd(g,b[i]); } vector<vector<ll>> dp(g,vector<ll>(2)); for(int i=0;i<g;i++) dp[i][1]=-2e9; for(int i=0;i<n;i++) { int rem=i%g; ll v0=max(dp[rem][0]+a[i],dp[rem][1]-a[i]); ll v1=max(dp[rem][0]-a[i],dp[rem][1]+a[i]); dp[rem][0]=v0; dp[rem][1]=v1; } ll sum0=0,sum1=0; for(int i=0;i<g;i++) { sum0+=dp[i][0]; sum1+=dp[i][1]; } cout << max(sum0,sum1) << '\n'; } return 0; } 1630E - Ожидаемые компоненты Think about an easy way to count the number of components in a cyclic array. The number of components in a cyclic array is equal to the number of adjacent positions with different values. The problem can be solved by applying Burnside's lemma. The number of different permutations of the cyclic array $$$a$$$ is equal to the sum of number of fixed points for each permutation function divided by the number of permutations functions. Let's focus on two parts. First part (find the number of different permutations of $$$a$$$): Let's define a permutation function $$$F_x(arr)$$$ as the function that cyclically shifts the array $$$arr$$$ by $$$x$$$ positions. In this problem for an array of size $$$n$$$ we have $$$n$$$ possible permutations functions and we would need to find the sum of the number of fixed points for each permutation function. To find the number of fixed points for a permutation function $$$F_x()$$$ we have that $$$arr_i$$$ must be equal to $$$arr_{(i+x)\%n}$$$, if we add an edge $$$(i,(i+x)\%n)$$$ for each position $$$i$$$ then by number theory we would obtain that $$$gcd(n,x)$$$ cycles would be formed and each one of size $$$\frac{n}{gcd(n,x)}$$$, then we can note that each position $$$i$$$ will belong to the $$$(i\%gcd(n,x))$$$-th cycle, so we can say that the problem can be transformed into counting the number of permutations with repetition in an array of size $$$gcd(n,x)$$$. Let us denote $$$cnt[v]$$$ as the number of values equal to $$$v$$$ in array $$$a$$$, when we are processing the function $$$F_x()$$$ and we reduce the problem to an array of size $$$gcd(n,x)$$$ we should also decrease $$$cnt[v]$$$ to $$$\frac{cnt[v]}{n/gcd(n,x)}$$$ since each component is made up of $$$\frac{n}{gcd(n,x)}$$$ values, also we must observe that for solving a problem for an array of size $$$x$$$, then $$$\frac{n}{x}$$$ should be a divisor of $$$gcd(cnt[1],cnt[2],\ldots,cnt[n])$$$. Let us denote $$$cnt_x[v] = \frac{cnt[v]}{n/gcd(n,x)}$$$ So to count the number of permutations with repetition for $$$F_x()$$$ that can be formed with the frequency array $$$cnt_x$$$ we can use the formula $$$\frac{n!}{x_1! \cdot x_2! \cdot \ldots \cdot x_n!}$$$ Let us denote $$$G_{all} = gcd(cnt[1],cnt[2],\ldots,cnt[n])$$$ Let us denote $$$fdiv(val)$$$ as the number of divisors of $$$val$$$. Let us denote $$$tot_{sz}$$$ as the number of permutations with repetition for an array of size $$$sz$$$, from what has been said before we have that $$$\frac{n}{sz}$$$ must be divisible by $$$G_{all}$$$ so we only need to calculate the permutations with repetition for $$$fdiv(G_{all})$$$ arrays. Now suppose that the number of different values of array $$$a$$$ is $$$k$$$ then $$$G_{all}$$$ must be at most $$$\frac{n}{k}$$$ because the gcd of several numbers is always less than or equal to the smallest of them. Now to calculate the permutations with repetition for a $$$cnt_x$$$ we do it in $$$O(k)$$$, for that we need to precalculate some factorials and modular inverses before, and since we need to calculate them $$$fdiv(G_{all})$$$ times, then we have that in total the complexity would be $$$O(fdiv(G_{all})\cdot k)$$$ but since $$$G_{all}$$$ is at most $$$\frac{n}{k}$$$ and $$$fdiv(\frac{n}{k})$$$ is at most $$$\frac{n}{k}$$$, substituting it would be $$$O(\frac{n}{k}\cdot k)$$$ equal to $$$O(n)$$$ So to find the sum of the number of fixed points we need the sum of $$$tot_{gcd(n,x)}$$$ for $$$1 \le x \le n$$$ and $$$\frac{n}{gcd(n,x)}$$$ divides to $$$G_{all}$$$, at the end of all we divide the sum of the number of fixed points by $$$n$$$ and we would obtain the number of different permutations of $$$a$$$. To find the $$$gcd(n,x)$$$ for $$$1 \le x \le n$$$ we do it with the Euclid's algorithm in complexity $$$O(n\cdot log)$$$ so in total the complexity is $$$O(n\cdot log)$$$ Second part (find the expected value of components of different permutations of $$$a$$$): Here we will use the Linear Expectation property and we will focus on calculating the contribution of each component separately, the first thing is to realize that the number of components is equal to the number of different adjacent values, so we only need to focus on two adjacent values, except if it is a single component, this would be a special case. If we have $$$k$$$ different values we can use each different pair of them that in total would be $$$k\cdot(k-1)$$$ pairs, we can realize that when we put a pair its contribution would be equal to the number of ways to permute the remaining values, which if we are in an array of size $$$\frac{n}{x}$$$ and we use the values $$$val_1$$$ and $$$val_2$$$ it would be equal to: $$$tot_{n/x} \cdot \frac{1}{(n/x)\cdot(n/x-1)} \cdot cnt_x[val_1] \cdot cnt_x[val_2]$$$ because we removing a value $$$val_1$$$ and another value $$$val_2$$$ from the set, so if we have the formula: $$$\frac{n!}{x_1! \cdot x_2! \cdot \ldots \cdot x_n!}$$$ and $$$val_1$$$ and $$$val_2$$$ are the first two elements then it would be: $$$\frac{(n-2)!}{(x_1-1)! \cdot (x_2-1)! \cdot \ldots \cdot x_n!}$$$ which would be equivalent to: $$$\frac{n!}{x_1! \cdot x_2! \cdot \ldots \cdot x_n!} \cdot \frac{1}{n \cdot (n-1)} \cdot x_1 \cdot x_2$$$ Now to calculate the contribution of the $$$k\cdot(k-1)$$$ pairs we can realize that taking common factor $$$tot_{n/x} \cdot \frac{1}{(n/x)\cdot(n/x-1)}$$$ in the previous expression it only remains to find the sum of $$$cnt_x[i]\cdot cnt_x[j]$$$ for all $$$i \neq j$$$, this can be found in $$$O(k)$$$ easily by keeping the prefix sum and doing some multiplication. Then at the end we multiply by $$$n$$$ since there are $$$n$$$ possible pairs of adjacent elements in the general array. Let us define $$$sum_{sz}$$$ as the contribution of components of the permutations with repetition for an array of size $$$sz$$$, then: $$$sum_{n/x} = tot_{n/x} \cdot \frac{1}{(n/x)\cdot(n/x-1)} \cdot (sum~of~(cnt_x[i]\cdot cnt_x[j])~for~i \neq j) \cdot n$$$ Now for each possible permutation with repetition we have by the Burnside's lemma that in the end we divide it by $$$n$$$, so we should also divide by $$$n$$$ the contribution of each component. Let's define $$$tot'_x = \frac{tot_x}{n}$$$ and $$$sum'_x = \frac{sum_x}{n}$$$ Let's define $$$tot_{all}$$$ as the sum of $$$tot'_{gcd(n,x)}$$$ for $$$1 \le x \le n$$$ and $$$\frac{n}{gcd(n,x)}$$$ divide to $$$G_{all}$$$. Let's define $$$sum_{all}$$$ as the sum of $$$sum'_{gcd(n,x)}$$$ for $$$1 \le x \le n$$$ and $$$\frac{n}{gcd(n,x)}$$$ divide to $$$G_{all}$$$. The final answer would be: $$$res = \frac{sum_{all}}{tot_{all}}$$$ The final complexity then is $$$O(n\cdot log)$$$ #include <bits/stdc++.h> using namespace std; const int MAXN = 1e6 + 100; const int MOD = 998244353; long long fact[MAXN]; long long F[MAXN]; long long qpow(long long a, long long b) { long long res = 1; while(b) { if(b&1)res = res*a%MOD; a = a*a%MOD; b /= 2; } return res; } long long inv(long long x) { return qpow(x,MOD-2); } int32_t main() { ios_base::sync_with_stdio(0); cin.tie(0); fact[0] = 1; for(int i = 1 ; i < MAXN ; i++) { fact[i] = fact[i-1]*i%MOD; } int T; cin >> T; while(T--) { int N; cin >> N; for(int i = 1 ; i <= N ; i++) { F[i] = 0; } for(int i = 1 ; i <= N ; i++) { int n; cin >> n; F[n]++; } vector<long long> vvv; for(int i = 1 ; i <= N ; i++) { if(F[i])vvv.push_back(F[i]); } int G = 0; for(auto x : vvv) { G = __gcd(G,x); } if(G == N) { cout << 1 << '\n'; continue; } vector<long long> arr(N+1); vector<long long> arr2(N+1); for(int i = 1 ; i <= G ; i++) { if(G%i == 0) { long long tot = inv(fact[N/i-2]), acum = 0, sum = 0; for(auto x : vvv) { tot = tot*fact[x/i]%MOD; sum = (sum + acum*(x/i)*2)%MOD; acum = (acum + (x/i))%MOD; } tot = inv(tot); arr2[i] = tot*(N/i-1)%MOD*(N/i)%MOD; tot = tot*sum%MOD*N%MOD; arr[i] = tot; } } long long res = 0; long long cont = 0; for(int i = 1 ; i <= N ; i++) { long long ggg = N/__gcd(N,i); if(G%ggg == 0) { res = (res + arr[ggg])%MOD; cont = (cont + arr2[ggg])%MOD; } } cout << res*inv(cont)%MOD << '\n'; } return 0; } 1630F - Сделай его двудольным! Think about the directed graph where there is an directed edge from $$$a$$$ to $$$b$$$ if and only if $$$b|a$$$ Let us define the above graph as $$$G$$$, make a duplicate graph $$$G'$$$ from $$$G$$$, and then add directed edges $$$(x', x)$$$ for each node $$$x'$$$ of the graph $$$G'$$$. What happens in this graph? First of all, let's analyze what happens when there are $$$3$$$ vertices $$$x$$$, $$$y$$$ and $$$z$$$ such that $$$a_x|a_y$$$, $$$a_x|a_z$$$ and $$$a_y|a_z$$$, if this happens then the graph cannot be bipartite because there would be a cycle of size $$$3$$$, therefore there cannot be such a triple ($$$x$$$, $$$y$$$, $$$z$$$), this condition, besides to being necessary, is sufficient since we can separate the graph into two sets, set $$$A$$$: vertices that have edges towards multiples, set $$$B$$$: vertices that have edges towards divisors, keep in mind that a vertex cannot exist in two sets at the same time if the condition is fulfilled, now note that there are no edges between elements of the same set because if this happens it would mean that they belong to different sets and it would be a contradiction, then the problem is to find the minimum number of vertices to remove such that in the remaining vertices there is no such triple of numbers ($$$x$$$, $$$y$$$, $$$z$$$). Now instead of minimizing the number of vertices to remove, let's try to maximize the number of vertices that will remain in the graph. Let us define the directed graph $$$G$$$ as the graph formed by $$$n$$$ vertices, and directed edges ($$$u$$$, $$$v$$$) such that $$$a_v|a_u$$$, now the problem is reduced to finding the maximum number of vertices such that in the graph formed among them, no vertex has ingoing edges and outgoing edges at the same time, formally for each vertex $$$x$$$ the following property must be kept $$$indegree_x = 0$$$ or $$$outdegree_x = 0$$$, in this way we guarantee that there is no triple ($$$x$$$, $$$y$$$, $$$z$$$) such that $$$a_x|a_y$$$, $$$a_x|a_z$$$ and $$$a_y|a_z$$$. Now let's define the graph $$$G'$$$ as a copy of the graph $$$G$$$. Formally for each directed edge ($$$u$$$, $$$v$$$) in the graph $$$G$$$ there is an directed edge ($$$u'$$$, $$$v'$$$) in the graph $$$G'$$$. On the other hand, let's define the graph $$$H = G + G'$$$ and we will also add new directed edges ($$$u'$$$, $$$u$$$), this graph $$$H$$$ is a $$$DAG$$$, it is easy to see that the edges always go from a vertex $$$u$$$ to a vertex $$$v$$$ only if $$$a_u > a_v$$$, except for the edges ($$$u'$$$, $$$u$$$), which in this case $$$a_{u'} = a_u$$$, these edges are the ones that connect $$$G'$$$ to $$$G$$$, but since they always go in one direction pointing towards $$$G$$$, the property of $$$DAG$$$ is still fulfilled. Now the only thing we have to do is find the largest antichain in the graph $$$H$$$, this can be done using the Dilworth's Theorem, modeling the problem as a bipartite matching, we can use some flow algorithm such as Dinic's Algorithm, or Hopcroft Karp's Algorithm, which is specific to find the maximum bipartite matching. Proof: First of all we realize that the graph $$$G$$$ is a special graph since if there is an indirect path from a vertex $$$u$$$ to a vertex $$$v$$$ then there is always a direct edge between them, this is true because if we have $$$3$$$ vertices $$$x$$$, $$$y$$$ and $$$z$$$ such that $$$a_x|a_y$$$ and $$$a_y|a_z$$$ then always $$$a_x|a_z$$$. With this we can say that two elements are not reachable with each other if and only if there are no edges between them. Now let's say that all the vertices in the graph $$$G$$$ are white and all the vertices in the graph $$$G'$$$ are black, let us denote $$$f(x)$$$ a function such that $$$f(u') = u$$$, where the vertex $$$u$$$ from the graph $$$G$$$ is the projection of the vertex $$$u'$$$ from the graph $$$G'$$$. Now let's divide the proof into two parts. First part: Lemma 1: Every antichain of $$$H$$$ can be transformed into a valid set of vertices such that they form a bipartite graph. Proof of Lemma 1: Let's divide the antichain of $$$H$$$ into two sets, white vertices and black vertices, Let us define the set of white vertices as $$$W$$$ and the set of all black vertices as $$$B$$$, now we will create a set $$$S$$$ = {$$$f(x)$$$ | $$$x \in B$$$}. It is easy to see that no element in $$$S$$$ belongs to $$$W$$$ since if this happens it would mean that there is an element $$$x$$$ such that $$$x$$$ belongs to $$$B$$$ and $$$f(x)$$$ belongs to $$$W$$$ and by the concept of antichain that would not be possible. It is also easy to see that the elements of the set $$$S$$$ are an antichain since the set $$$S$$$ is a projection of vertices from the set $$$B$$$ of the graph $$$G'$$$ on $$$G$$$. Now we have that there are no edges between the vertices of the set $$$S$$$ and there are no edges between the vertices of the set $$$W$$$, with this it is proved that the graph is bipartite. Second part: Lemma 2: Every valid set of vertices such that they form a bipartite graph can be transformed into an antichain of $$$H$$$. Proof of Lemma 2: Let us denote $$$f^{-1}(x)$$$ a function such that $$$f^{-1}(u) = u'$$$, where vertex $$$u$$$ from graph $$$G$$$ is the projection of vertex $$$u'$$$ from graph $$$G'$$$. Let us denote the set $$$A$$$ as all vertices that have $$$indegree$$$ greater than $$$0$$$ and $$$B$$$ to all vertices that have $$$outdegree$$$ greater than $$$0$$$, now we will create a set $$$C$$$ = {$$$f^{-1}(x)$$$ | $$$x \in A$$$}, It is easy to see that set $$$B$$$ is an antichain since if one vertex has an edge to another vertex then some of them would have $$$indegree$$$ greater than $$$0$$$ and would contradict the definition of set $$$B$$$, we can also see that the elements in set $$$A$$$ are an antichain since all the elements have $$$outdegree = 0$$$ so no vertex point towards any other vertex, with this we can define that all the elements in $$$C$$$ are an antichain since they are a projection of vertices of the set $$$A$$$ from the graph $$$G$$$ on $$$G'$$$, Now we want to proof that the union of set $$$B$$$ and $$$C$$$ is an antichain, this is very simple to see since the vertices of set $$$B$$$ belong to $$$G$$$ and the vertices of $$$C$$$ belong to $$$G'$$$, therefore there is no edge from any vertex in $$$B$$$ to a vertex in $$$C$$$ since there are no edges from $$$G$$$ to $$$G'$$$. Now it only remains to proof that from set $$$C$$$ no vertex of set $$$B$$$ can be reached, this is proved taking into account that the vertices reachable from the set $$$C$$$ in the graph $$$G$$$ are the same that the vertices reachable from the set $$$A$$$ in the graph $$$G$$$, and as no vertex of $$$A$$$ has edges towards $$$B$$$, this cannot happen. Therefore the union of the sets $$$B$$$ and $$$C$$$ is an antichain of $$$H$$$. Then we can say that the two problems are equivalent and it is shown that finding the maximum antichain we obtain the largest bipartite graph. The graph $$$G$$$ contains $$$n$$$ vertices and around $$$n\cdot log(n)$$$ edges (since the numbers $$$a_x$$$ are different and the sum of the divisors from $$$1$$$ to $$$n$$$ is around $$$n\cdot log(n)$$$). The graph $$$G'$$$ is a duplicate of $$$G$$$ then we would have $$$n\cdot log(n)\cdot2 + n$$$ edges and $$$2\cdot n$$$ vertices, if we use the Hopcroft Karp algorithm we would obtain a time complexity of $$$O(n\cdot log(n)\cdot sqrt(n))$$$ and a space complexity of $$$O(n\cdot log(n))$$$. #include <bits/stdc++.h> using namespace std; struct HOPCROFT_KARP { int n, m; vector<vector<int>> adj; vector<int> mu, mv, level, que; HOPCROFT_KARP(int n, int m) : n(n), m(m), adj(n), mu(n, -1), mv(m, -1), level(n), que(n) {} void add_edge(int u, int v) { adj[u].push_back(v); } void bfs() { int qf = 0, qt = 0; for(int u = 0 ; u < n ; ++u) { if(mu[u] == -1)que[qt++] = u, level[u] = 0; else level[u] = -1; } for( ; qf < qt ; ++qf) { int u = que[qf]; for(auto w : adj[u]) { int v = mv[w]; if(v != -1 && level[v] == -1) que[qt++] = v, level[v] = level[u] + 1; } } } bool dfs(int u) { for(auto w : adj[u]) { int v = mv[w]; if(v == -1 || (level[v] == level[u] + 1 && dfs(v))) return mu[u] = w, mv[w] = u, true; } return false; } int max_matching() { int match = 0; for(int c = 1 ; bfs(), c ; match += c) for(int u = c = 0 ; u < n ; ++u) if(mu[u] == -1) c += dfs(u); return match; } pair<vector<int>, vector<int>> min_vertex_cover() { max_matching(); vector<int> L, R, inR(m); for(int u = 0 ; u < n ; ++u) { if(level[u] == -1)L.push_back(u); else if(mu[u] != -1)inR[mu[u]] = true; } for(int v = 0 ; v < m ; ++v) if(inR[v])R.push_back(v); return { L, R }; } }; const int MAXN = 5e4 + 100; int arr[MAXN]; vector<int> dv[MAXN]; int main() { ios_base::sync_with_stdio(0); cin.tie(0); for(int i = 1 ; i < MAXN ; i++) { for(int j = i*2 ; j < MAXN ; j+=i) { dv[j].push_back(i); } } for(int i = 0 ; i < MAXN ; i++) { arr[i] = -1; } int T; cin >> T; while(T--) { int N; cin >> N; vector<int> vect; for(int i = 0 ; i < N ; i++) { int n; cin >> n; vect.push_back(n); arr[n] = i; } vector<pair<int,int>> edge; for(int i = 0 ; i < N ; i++) { for(auto x : dv[vect[i]]) { if(arr[x] != -1) { edge.push_back({i, arr[x]}); } } } for(auto x : vect) { arr[x] = -1; } HOPCROFT_KARP HK(2*N,2*N); for(auto x : edge) { int i = x.first; int j = x.second; HK.add_edge(i,j); HK.add_edge(i+N,j+N); } for(int i = 0 ; i < N ; i++) { HK.add_edge(i+N,i); } cout << HK.max_matching()-N << '\n'; } return 0; }
https://codeforces.com/blog/entry/99384
CC-MAIN-2022-21
en
refinedweb
flytekit.types.file.FlyteFile¶ - class flytekit.types.file.FlyteFile(*args, **kwds)[source]¶ - Parameters path – The source path that users are expected to call open() on downloader – Optional function that can be passed that used to delay downloading of the actual fil until a user actually calls open(). remote_path – If the user wants to return something and also specify where it should be uploaded to. Methods - classmethod from_dict(kvs, *, infer_missing=False)¶ - classmethod from_json(s, *, parse_float=None, parse_int=None, parse_constant=None, infer_missing=False, **kw)¶ - classmethod schema(*, infer_missing=False, only=None, exclude=(), many=False, context=None, load_only=(), dump_only=(), partial=False, unknown=None)¶ - Parameters - Return type dataclasses_json.mm.SchemaF[dataclasses_json.mm.A] - to_dict(encode_json=False)¶ - to_json(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, indent=None, separators=None, default=None, sort_keys=False, **kw)¶ Attributes - path: Union[str, os.PathLike] = None¶ Since there is no native Python implementation of files and directories for the Flyte Blob type, (like how int exists for Flyte’s Integer type) we need to create one so that users can express that their tasks take in or return a file. There is pathlib.Pathof course, (which is usable in Flytekit as a return value, though not a return type), but it made more sense to create a new type esp. since we can add on additional properties. Files (and directories) differ from the primitive types like floats and string in that Flytekit typically uploads the contents of the files to the blob store connected with your Flyte installation. That is, the Python native literal that represents a file is typically just the path to the file on the local filesystem. However in Flyte, an instance of a file is represented by a Blobliteral, with the urifield set to the location in the Flyte blob store (AWS/GCS etc.). Take a look at the data handling doc for a deeper discussion. We decided to not support pathlib.Pathas an input/output type because if you wanted the automatic upload/download behavior, you should just use the FlyteFiletype. If you do not, then a strworks just as well. The prefix for where uploads go is set by the raw output data prefix setting, which should be set at registration time in the launch plan. See the option listed under flytectl register examples --helpfor more information. If not set in the launch plan, then your Flyte backend will specify a default. This default is itself configurable as well. Contact your Flyte platform administrators to change or ascertain the value. In short, if a task returns "/path/to/file"and the task’s signature is set to return FlyteFile, then the contents of /path/to/fileare uploaded. You can also make it so that the upload does not happen. There are different types of task/workflow signatures. Keep in mind that in the backend, in Admin and in the blob store, there is only one type that represents files, the Blobtype. Whether the uploading happens or not, the behavior of the translation between Python native values and Flyte literal values depends on a few attributes: The declared Python type in the signature. These can be * python:flytekit.FlyteFile* os.PathLikeNote that os.PathLikeis only a type in Python, you can’t instantiate it. The type of the Python native value we’re returning. These can be * flytekit.FlyteFile* pathlib.Path* str Whether the value being converted is a “remote” path or not. For instance, if a task returns a value of “ as a FlyteFile, obviously it doesn’t make sense for us to try to upload that to the Flyte blob store. So no remote paths are uploaded. Flytekit considers a path remote if it starts with s3://, gs://, or even file://. Converting from a Flyte literal value to a Python instance of FlyteFile Converting from a Python value (FlyteFile, str, or pathlib.Path) to a Flyte literal Since Flyte file types have a string embedded in it as part of the type, you can add a format by specifying a string after the class like so. def t2() -> flytekit_typing.FlyteFile["csv"]: return "/tmp/local_file.csv"
https://docs.flyte.org/projects/flytekit/en/latest/generated/flytekit.types.file.FlyteFile.html
CC-MAIN-2022-21
en
refinedweb
VERSIONversion 0.60 SYNOPSIS use CHI; my $cache = CHI->new( driver => 'FastMmap', root_dir => '/path/to/cache/root', cache_size => '1m' ); DESCRIPTIONThis. REQUIREMENTSYou will need to install Cache::FastMmap from CPAN to use this driver. CONSTRUCTOR OPTIONS - root_dir - Path to the directory that will contain the share files, one per namespace. Defaults to a directory called 'chi-driver-fastmmap' under the OS default temp directory (e.g. '/tmp' on UNIX). - dir_create_mode - Permissions mode to use when creating directories. Defaults to 0775. Any other constructor options not recognized by CHI are passed along to Cache::FastMmap->new. METHODS - fm_cache - Returns a handle to the underlying Cache::FastMmap object. You can use this to call FastMmap-specific methods that are not supported by the general API, e.g. $self->fm_cache->get_and_set("key", sub { ... }); AUTHORJonathan Swartz <[email protected]> COPYRIGHT AND LICENSEThis software is copyright (c) 2012 by Jonathan Swartz. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
https://manpages.org/chidriverfastmmap/3
CC-MAIN-2022-21
en
refinedweb
Things to be done before starting. - Download Node.js ( - Download XCode (Mac Only, - Create GitHub account( - Create Vercel account ( node -v You can check the version of you node. If it's not recently one, download a new one. sudo corepack enable and this will download yarn.(I'm using Mac) yarn create next-app wine.likelion.com --typescript // yarn create next-app { project name here } --typescript // if you add --typescript at the end, it means that I am going to use typescript When it's all downloaded, check if it works cd wine.likelion.com // to directory I will work yarn dev Pages I think I used router when I used React but there are pages in Next.js If you make a directory(folder) or file inside Pages directory, they work like Router. This was the first page I created. import type { NextPage } from "next"; // NextPage type (because it's typescript, you should specify types) const WinePage: NextPage = () => { return ( <div> <h1>Wine</h1> </div> ) } export default WinePage; Documentation - Page in Next.js package.json When you create next-app, you have package.json file in the directory. It's JSON format and it records important metadata about a project which is required before publishing and also defines functional attributes of a project that it uses to install dependencies, run scripts, and identify the entry point to the package. - scripts: Scripts are a great way of automating tasks related to your package, such as simple build processes or development tools. Using the "scripts" field, you can define various scripts to be run as yarn run <script>. eg) dev> yarn devor build> yarn build dev> Development mode, not optimised, it skips error sometimes build> Production mode, it's to create a product that will be deployed start> Start production server, used to test in real environment - ** works only it does yarn build** then yarn start lint> spell, syntax check with ESLINT Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/daaahailey/getting-started-with-nextjs-ni4
CC-MAIN-2022-21
en
refinedweb
Hello friends! As I've recollected in a previous post, I am an old competitor who hadn't really participated since ~2014, and recently got a bout of nostalgia to return to Competitive Programming. So, I did a couple Topcoder SRMs, suffered through some SNWS rounds, participated in three regional 5hr competitions on three consecutive days, and dozed off at every Codeforces contest I tried to wake up for. A lot of things are still the same as 8 years ago: tourist is still at the top, grey coders still ask for how many minutes to solve a problem before reading the editorial, Russian university teams continue winning ACM ICPC, and Snarknews never gives up on his alternate competition formats. But in this post, I want to focus on the new patterns that emerged since my last time around. #1: Codeforces rounds timing Did you know that there used to be a time when not ALL the freaking CF rounds were held at 5:35pm MSK? Back in the day, no matter what timezone you were in, there would be an occasional round you could compete in without having to interrupt your sleep (or your work). I'm personally pretty salty because 5:35pm MSK is like 6:35am in my timezone, and my brain simply refuses to operate for 2 consecutive hours this early in the day. And while a quick check of Ratings shows that most users are congregated in between Moscow and Japan timezones, I do wish we had rounds in more varied time slots. #2: Petr and Java Did you know that Petr used to write all contests in Java? Imagine you were looking for an intelligible implementation of any contest problem. You would sail through the ocean of C++ submissions filled with FOR0(n) and #define int long longs, and spot the safe haven of Petr's delightful Java solutions that read like a page from Dostoyevsky's masterpieces. His variable names alone could teach you more about solving this problem than the editorial itself. In fact, I was under an impression that Petr did not use prewritten code, and created each of his beautiful algorithms from scratch every time. Well folks, those days are gone. Checking Petr's recent submissions, I noticed that he turned to the dark side and the values in his Lang column for over 2 years now show... this: #3: The AtCoder library and the Codeforces Catalog There is another curious pattern in the submissions of some top contestants: namespace atcoder and a myriad of data structures and algorithms that magically pop out of this namespace, like a rabbit from a magician's top hat. The source of all this sorcery seem to be the AtCoder library. I mean, come on, red and nutella coders, can't you even write your own min-cost max-flow over a network generated by lazy propagation segment tree with 2-SAT FFT butterflies stored in each node? Meh! More generally, I can see there is a whole catalog here on CF, with links to tons of educational materials. And, in all seriousness, that's awesome — there was but a fraction of this available back when I was a student! #4: The new MOD to rule them all A lot of programming contest problems request to find some value modulo a large prime. The most common reason for this is that we are counting the number of ways to do something, but this quantity is extremely large. By requesting the answer modulo a large prime, we keep the calculations in a datatype that fits into machine word, thereby letting the solver focus on the "interesting" part of the problem. 10 years ago, the most common moduli used in contest problems were 1,000,000,007 and 1,000,000,009. These are nice, easy to remember big primes whose sum fits into a signed 32-bit datatype, making them perfect for the scenario above. Or maybe they were not, because fast-forward to today, I saw like 10 different problems requesting the answer modulo... 998244353 I mean, seriously, what the fuck is this? This is one ugly-ass forgettable prime and you should be ashamed for using it in your problems! P.S. I found Petr's blog post discussing the properties of this number. And it seems like the problem where it was introduced at the time actually relied on these properties, but not every problem does! Also, this post is from 2014 — are y'all telling me you've been using nine-nine-eight-two-four-four-three-five-three in contest problems since 2014!? #5: Probabilities modulo MOD One of my favorite topics in programming contests was always probabilities and expectations. You can just feel the blood coursing through your veins when your dynamic programming solution prints out a sexy 0.07574094401. But y'all had to ruin this as well by introducing... probabilities modulo a prime!? o_O Most probability theory-related problems I've seen in the past months request the answer modulo, you guessed it, 998244353, and proceed to add a note explaining that the answer can be represented as p/q where q is not a multiple of the given modulo. So essentially now to solve any problem with probabilities you have to also throw in some number theory and probably even more combinatorics. And you can't even receive a 0.07574094401 as an answer! Can anyone explain how this new fad started? #6: Li-Chao Segment Trees Saving the best for last! A couple weeks back, I was delightfully reading Um_nik's list of things he allegedly does not know, and it was heralded by a data structure called a Li-Chao Segment Tree. I initially thought that this was a joke, but a quick google search revealed that people write tutorials on this thing. Then I told myself that surely nobody would bring a problem utilizing a Li-Chao Segment Tree on Codeforces, but the very next Codeforces round editorial made sure to prove me wrong by mentioning a Li-Chao tree (I didn't dig into details, but you get the point!). Come on guys, was Heavy-Light Decomposition not an obscure enough data structure to bring to contests? Someone has to stop this madness!
http://codeforces.com/blog/entry/99364
CC-MAIN-2022-21
en
refinedweb
Queries a physical volume and returns all pertinent information. Logical Volume Manager Library (liblvm.a) #include <lvm.h> int lvm_querypv (VG_ID, PV_ID, QueryPV, PVName) struct unique_id *VG_ID; struct unique_id *PV_ID; struct querypv **QueryPV; char *PVName; Note: You must have root user authority to use the lvm_querypv subroutine. The lvm_querypv subroutine returns information on the physical volume specified by the PV_ID parameter. The querypv structure, defined in the lvm.h file, contains the following fields: struct querypv { long ppsize; long pv_state; long pp_count; long alloc_ppcount; struct pp_map *pp_map; long pvnum_vgdas; } struct pp_map { long pp_state; struct lv_id lv_id; long lp_num; long copy; struct unique_id fst_alt_vol; long fst_alt_part; struct unique_id snd_alt_vol; long snd_alt_part; } The PVName parameter enables the user to query from a volume group descriptor area on a specific physical volume instead of from the Logical Volume Manager's (LVM) most recent, in-memory copy of the descriptor area. This method should only be used if the volume group is varied off. The data returned is not guaranteed to be most recent or correct, and it can reflect a back level descriptor area. The PVname parameter should specify either the full path name of the physical volume that contains the descriptor area to query or a single file name that must reside in the /dev directory (for example, rhdisk1). This field must be a null-terminated string of from 1 to LVM_NAMESIZ bytes, including the null byte, and represent a raw or character device. If a raw or character device is not specified for the PVName parameter, the LVM will add an r to the file name in order to have a raw device name. If there is no raw device entry for this name, the LVM will return the LVM_NOTCHARDEV error code. If a PVName is specified, the volume group identifier, VG_ID , will be returned by the LVM through the VG_ID parameter passed in by the user. If the user wishes to query from the LVM in-memory copy, the PVName parameter should be set to null. When using this method of query, the volume group must be varied on, or an error will be returned. Note: As long as the PVName is not null, the LVM will attempt a query from a physical volume and not from its in-memory copy of data. In addition to the PVName parameter, the caller passes the VG_ID parameter, indicating the volume group that contains the physical volume to be queried, the unique ID of the physical volume to be queried, the PV_ID parameter, and the address of a pointer of the type QueryPV. The LVM will separately allocate enough space for the querypv structure and the struct pp_map array and return the address of the querypv structure in the QueryPV pointer passed in. The user is responsible for freeing the space by freeing the struct pp_map pointer and then freeing the QueryPV pointer. The lvm_querypv subroutine returns a value of 0 upon successful completion. If the lvm_querypv subroutine fails it returns one of the following error codes: If the query originates from the varied-on volume group's current volume group descriptor area, one of the following error codes may be returned: If a physical volume name has been passed, requesting that the query originate from a specific physical volume, then one of the following error codes may be returned: This subroutine is part of Base Operating System (BOS) Runtime. The lvm_varyonvg subroutine. List of Logical Volume Subroutines and Logical Volume Programming Overview in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/basetrf1/lvmquerz.htm
CC-MAIN-2022-21
en
refinedweb
TensorFlow's QueueRunner Sunday April 30, 2017 A TensorFlow QueueRunner helps to feed a TensorFlow queue using threads which are optionally managed with a TensorFlow Coordinator. QueueRunner objects can be used directly, or via higher-level APIs, which also offer automatic TensorBoard summaries. import tensorflow as tf session = tf.Session() QueueRunner Directly To use a QueueRunner you need a TensorFlow queue and an op that puts a new thing in the queue every time that op is evaluated. Typically, such an op will itself involve a queue, which is a bit of a tease. To avoid that circularity, this example will use random numbers. queue = tf.FIFOQueue(capacity=10, dtypes=[tf.float32]) random_value_to_enqueue = tf.random_normal([]) # shape=[] means a single value enqueue_op = queue.enqueue(random_value_to_enqueue) random_value_from_queue = queue.dequeue() At this point if you evaluate random_value_from_queue in the session it will block, because nothing has been put in the queue yet. queue_runner = tf.train.QueueRunner(queue, [enqueue_op]) Still nothing has been enqueued, but queue_runner stands ready to make threads that will do the enqueuing. If you put more enqueue ops in the list, or the same one multiple times, more threads will be started when things get going. coordinator = tf.train.Coordinator() threads = queue_runner.create_threads(session, coord=coordinator, start=True) Using start=True means we won't have to call .start() for each thread ourselves. >>> len(threads) ## 2 There are two threads running: one for handling coordinated shutdown, and one for the enqueue op. Now at last we can get at random values from the queue! >>> session.run(random_value_from_queue) ## 0.69283932 >>> session.run(random_value_from_queue) ## -0.53802371 The feeding thread will try to keep the queue at capacity, which was set to 10, so there should always be more items available to dequeue. Since we used a coordinator, we can shut the threads down nicely. coordinator.request_stop() coordinator.join(threads) QueueRunner with Higher-Level APIs It's possible to work with QueueRunner directly, as shown above, but it's easier to use higher-level TensorFlow APIs that themselves use QueueRunner. It's common for TensorFlow queue-chains to start with a list of filenames (sometimes a list of just one filename) to read data from. The string_input_producer function makes a queue using provided strings. queue = tf.train.string_input_producer(['a', 'b', 'c', 'd']) letter_from_queue = queue.dequeue() This is a FIFOQueue, just as before, but notice we don't have an enqueue op for it. Like many things in tf.train, here TensorFlow has already done some work for us. A QueueRunner has already been made, and it was added to the queue_runners collection. >>> tf.get_collection('queue_runners') ## [<tensorflow.python.training.queue_runner_impl.QueueRunner object at 0x121ee2c10>] You could access and run that QueueRunner directly, but tf.train makes things easier. coordinator.clear_stop() threads = tf.train.start_queue_runners(sess=session, coord=coordinator) tf.train.start_queue_runners automatically starts the threads. >>> session.run(letter_from_queue) session.run(letter_from_queue) ## 'a' >>> session.run(letter_from_queue) ## 'c' >>> session.run(letter_from_queue) ## 'd' >>> session.run(letter_from_queue) ## 'b' >>> session.run(letter_from_queue) ## 'd' By default, the queue will go through the original items multiple times, or multiple epochs, and shuffle the order of strings within an epoch. Limiting the number of epochs uses a local variable, which must be initialized. queue = tf.train.string_input_producer(['a', 'b', 'c', 'd'], shuffle=False, num_epochs=1) letter_from_queue = queue.dequeue() initializer = tf.local_variables_initializer() session.run(initializer) threads = tf.train.start_queue_runners(sess=session, coord=coordinator) Now the QueueRunner will close the queue when there isn't anything more to put in it, so the dequeue op will eventually give an OutOfRangeError. >>> session.run(letter_from_queue) ## 'a' >>> session.run(letter_from_queue) ## 'b' >>> session.run(letter_from_queue) ## 'c' >>> session.run(letter_from_queue) ## 'd' >>> session.run(letter_from_queue) ## OutOfRangeError Automatic TensorBoard Summaries A single-epoch queue will be helpful for illustrating an interesting thing about tf.train.string_input_producer: it automatically adds a TensorBoard summary to the graph. It's nice to have direct control over every detail of your program, but the conveniences of higher-level APIs are also pretty nice. The summary that gets added is a scalar summary representing the percent full that the queue is. tf.reset_default_graph() # Starting queue runners will fail if a queue is session = tf.Session() # closed, so we need to clear things out. queue = tf.train.string_input_producer(['a', 'b', 'c', 'd'], shuffle=False, capacity=4, num_epochs=1) letter_from_queue = queue.dequeue() summaries = tf.summary.merge_all() summary_writer = tf.summary.FileWriter('logs') initializer = tf.local_variables_initializer() session.run(initializer) coordinator = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=session, coord=coordinator) for i in range(4): summary_writer.add_summary(session.run(summaries), i) session.run(letter_from_queue) summary_writer.add_summary(session.run(summaries), 4) summary_writer.flush() # Ensure everything is written out. After opening up TensorBoard with tensorboard --logdir=logs and going to and and turning off plot smoothing, you'll see this: This shows that the queue, with capacity four, started 100% full and then every time something was dequeued from it it became 25 percentage points less full until it was empty. For this example, the queue wasn't being refilled, so we knew it would become less and less full. But if you're reading data into an input queue that you expect to keep full, it's a great diagnostic to be able to check how full it actually is while things are running, to find find out if loading data is a bottleneck. The automatic TensorBoard logging here is also a nice first taste of all the things that happen with even higher-level TensorFlow APIs. I'm working on Building TensorFlow systems from components, a workshop at OSCON 2017.
https://planspace.org/20170430-tensorflows_queuerunner/
CC-MAIN-2022-21
en
refinedweb
Gecode::Int::Linear::NegSupportIter< Val > Class Template Reference Support-based iterator for negative view. More... #include <int-dom.hpp> Detailed Description template<class Val> class Gecode::Int::Linear::NegSupportIter< Val > Support-based iterator for negative view. Definition at line 142 of file int-dom.hpp. Member Function Documentation template<class Val > Reset iterator to beginning and adjust. - d accordingly Definition at line 301 of file int-dom.hpp. template<class Val > Adjust. - d and return true if next value still works Definition at line 325 of file int-dom.hpp. The documentation for this class was generated from the following file: - gecode/int/linear/int-dom.hpp (Revision: 11014)
https://www.gecode.org/doc/3.7.3/reference/classGecode_1_1Int_1_1Linear_1_1NegSupportIter.html
CC-MAIN-2022-21
en
refinedweb
A module is basically a file which has many lines of python code that can be referred or used by other python programs. A big python program should be organized to keep different parts of the program in different modules. That helps in all aspects like debugging, enhancements and packaging the program efficiently. To use a module in any python program we should first import it to the new program. All the functions, methods etc. from this module then will be available to the new program. Let’s create a file named profit.py which contains program for a specific calculation as shown below. def getprofit(cp, sp): result = ((sp-cp)/cp)*100 return result Next we want to use the above function in another python program. We can then use the import function in the new program to refer to this module and its function named getprofit. import profit perc=profit.getprofit(350,500) print(perc) Running the above code gives us the following result − 42.857142857142854 We can also import only a specific method from a module instead of the entire module. For that we use the from Module import statement as shown below. In the below example we import the value of pi from math module to be used in some calculation in the program. from math import pi x = 30*pi print(x) Running the above code gives us the following result − 94.24777960769379 If we want to know the location of various inbuilt modules we can use the sys module to find out. Similarly to know the various function available in a module we can use the dir method as shown below. import sys import math print(sys.path) print(dir(math)) Running the above code gives us the following result − [' ', 'C:\\Windows\\system32\\python38.zip', 'C:\\Python38\\DLLs', 'C:\\Python38\\lib', 'C:\\Python38', 'C:\\Python38\\lib\\site-packages'] ['…..log2', 'modf', 'nan', 'perm', 'pi', 'pow', 'prod',….]
https://www.tutorialspoint.com/import-a-module-in-python
CC-MAIN-2022-21
en
refinedweb
How can I write to SiPy flash memory? I need persistent storage for my application and intend to use the internal flash memory. The spec sheet says that there is 4MB flash but I can't find any documentation for using the flash. We are not designing another pcb and intend to just use the sipy board with a power supply for a demo. Without having the ability to use onboard flash memory, the application isnt much use when the power resets which would happen periodically. @jmarcelino Thank you for sharing this. To add to this question, are there code example of we can persist an array of structs to the file system? I have a clear idea of how I want to implement a simple persistent database but I cant find micropython examples of doing so. The btree library would have been great but I realised I couldnt use it in this version of the port. - jmarcelino last edited by jmarcelino You can write files to the onboard /flash filesystem, there's about 512Kb available (the stated 4MB includes the space taken up by two copies of MicroPython and some system storage area) See this existing topic on how to read/write to files: more info about my Sipy MicroPython v1.8.6-724-g2b9ed601 on 2017-07-28; SiPy with ESP32 Type "help()" for more information. import os os.uname() (sysname='SiPy', nodename='SiPy', release='1.7.8.b1', version='v1.8.6-724-g2b9ed601 on 2017-07-28', machine='SiPy with ESP32', sigfox='1.0. 1')
https://forum.pycom.io/topic/1661/how-can-i-write-to-sipy-flash-memory
CC-MAIN-2022-21
en
refinedweb
Message-ID: <1063779203.26329.1408518181630.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_26328_956573235.1408518181630" ------=_Part_26328_956573235.1408518181630 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: Additional CRS beyong the one defined in the official EPSG database. The= additional CRS are defined as Well Known Text (WKT) in a property= file. At the difference of the legacy epsg-wkt module, only C= RS not found in the SQL-backed EPSG database are defined in the property fi= le. This module contains 2 property files:=20 esri.properties= a>contains additional CRS defined by ESRI in the EPSG namespace.=20 unnamed.proper= tiescontains additional CRS from other sources in the EPSG name= space. Module completed. Stable if no major issues are reported.=20 Just drop the JAR in the classpath. New EPSG codes made available by thi= s plugin are listed in the two properties files mentioned above.=20 None planed at this time.
http://docs.codehaus.org/exportword?pageId=72068
CC-MAIN-2014-35
en
refinedweb
What is templated DRM code? It was first discussed in a email about what Gareth had done to bring up the mach64 kernel module. Not wanting to simply copy-and-paste another version of drv.[ch], context.c, _bufs.s and so on, Gareth did some refactoring along the lines of what him and Rik Faith had discussed a long time ago. This is very much along the lines of a lot of Mesa code, where there exists a template header file that can be customized with a few defines. At the time, it was done drv.c and context.c, creating drivertmp.h_ and contexttmp.h_ that could be used to build up the core module. An inspection of mach64drv.c_ on the mach64-0-0-1-branch reveals the following code: #define DRIVER_AUTHOR "Gareth Hughes" #define DRIVER_NAME "mach64" #define DRIVER_DESC "DRM module for the ATI Rage Pro" #define DRIVER_DATE "20001203" #define DRIVER_MAJOR 1 #define DRIVER_MINOR 0 #define DRIVER_PATCHLEVEL 0 static drm_ioctl_desc_t mach64_ioctls[] = { [DRM_IOCTL_NR(DRM_IOCTL_VERSION)] = { mach64_version, 0, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE)] = { drm_getunique, 0, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_GET_MAGIC)] = { drm_getmagic, 0, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_IRQ_BUSID)] = { drm_irq_busid, 0, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_SET_UNIQUE)] = { drm_setunique, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_BLOCK)] = { drm_block, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_UNBLOCK)] = { drm_unblock, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AUTH_MAGIC)] = { drm_authmagic, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_ADD_MAP)] = { drm_addmap, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_ADD_BUFS)] = { drm_addbufs, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_MARK_BUFS)] = { drm_markbufs, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_INFO_BUFS)] = { drm_infobufs, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_MAP_BUFS)] = { drm_mapbufs, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_FREE_BUFS)] = { drm_freebufs, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_ADD_CTX)] = { mach64_addctx, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_RM_CTX)] = { mach64_rmctx, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_MOD_CTX)] = { mach64_modctx, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_GET_CTX)] = { mach64_getctx, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_SWITCH_CTX)] = { mach64_switchctx, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_NEW_CTX)] = { mach64_newctx, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_RES_CTX)] = { mach64_resctx, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_ADD_DRAW)] = { drm_adddraw, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_RM_DRAW)] = { drm_rmdraw, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_LOCK)] = { mach64_lock, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_UNLOCK)] = { mach64_unlock, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_FINISH)] = { drm_finish, 1, 0 }, #if defined(CONFIG_AGP) || defined(CONFIG_AGP_MODULE) [DRM_IOCTL_NR(DRM_IOCTL_AGP_ACQUIRE)] = { drm_agp_acquire, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_RELEASE)] = { drm_agp_release, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_ENABLE)] = { drm_agp_enable, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_INFO)] = { drm_agp_info, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_ALLOC)] = { drm_agp_alloc, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_FREE)] = { drm_agp_free, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_BIND)] = { drm_agp_bind, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_AGP_UNBIND)] = { drm_agp_unbind, 1, 1 }, #endif [DRM_IOCTL_NR(DRM_IOCTL_MACH64_INIT)] = { mach64_dma_init, 1, 1 }, [DRM_IOCTL_NR(DRM_IOCTL_MACH64_CLEAR)] = { mach64_dma_clear, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_MACH64_SWAP)] = { mach64_dma_swap, 1, 0 }, [DRM_IOCTL_NR(DRM_IOCTL_MACH64_IDLE)] = { mach64_dma_idle, 1, 0 }, }; #define DRIVER_IOCTL_COUNT DRM_ARRAY_SIZE( mach64_ioctls ) #define HAVE_CTX_BITMAP 1 #define TAG(x) mach64_##x #include "driver_tmp.h" And that's all you need. A trivial amount of code is needed for the context handling: #define __NO_VERSION__ #include "drmP.h" #include "mach64_drv.h" #define TAG(x) mach64_##x #include "context_tmp.h" And as far as I can tell, the only thing that's keeping this out of mach64drv.c_ is the , which is a 2.2 thing and is not used in 2.4 (right?). To enable all the context bitmap code, we see the #define HAVE_CTX_BITMAP 1 - To enable things like AGP, MTRR's and DMA management, the author simply needs to define the correct symbols. With less than five minutes of mach64-specific coding, I had a full kernel module that would do everything a basic driver requires -- enough to bring up a software-fallback driver. The above code is all that is needed for the tdfx driver, with appropriate name changes. Indeed, any card that doesn't do kernel-based DMA can have a fully functional DRM module with the above code. DMA-based drivers will need more, of course. The plan is to extend this to basic DMA setup and buffer management, so that the creation of PCI or AGP DMA buffers, installation of IRQs and so on is as trivial as this. What will then be left is the hardware-specific parts of the DRM module that deal with actually programming the card to do things, such as setting state for rendering or kicking off DMA buffers. That is, the interesting stuff. A couple of points: - Why was it done like this, and not with C++ features like virtual functions (i.e. why don't I do it in C++)? Because it's the Linux kernel, dammit! No offense to any C++ fan who may be reading this :-) Besides, a lot of the initialization is order-dependent, so inserting or removing blocks of code with #defines is a nice way to achieve the desired result, at least in this situation. - Much of the core DRM code (like bufs.c, context.c and dma.c) will essentially move into these template headers. I feel that this is a better way to handle the common code. Take context.c as a trivial example -- the i810, mga, tdfx, r128 and mach64 drivers have exactly the same code, with name changes. Take bufs.c as a slightly more interesting example -- some drivers map only AGP buffers, some do both AGP and PCI, some map differently depending on their DMA queue management and so on. Again, rather than cutting and pasting the code from drm_addbufs into my driver, removing the sections I don't need and leaving it at that, I think keeping the core functionality in bufstmp.h_ and allowing this to be customized at compile time is a cleaner and more maintainable solution. This it has the possibility to make keeping the other OS's code up to date a lot easier. The current mach64 branch is only using one template in the driver. Check out the r128 driver from the trunk, for a good example. Notice there are files in there such as r128tritmp.h_. This is a template that gets included in r128tris.c_. What it does basically is consolidate code that is largely reproduced over several functions, so that you set a few macros. For example: #define IND (R128_TWOSIDE_BIT) #define TAG(x) x##_twoside followed by #include "r128_tritmp.h" Notice the inline function's name defined in r128tritmp.h_ is the result of the TAG macro, as well the function's content is dependent on what IND value is defined. So essentially the inline function is a template for various functions that have a bit in common. That way you consolidate common code and keep things consistent. Look at e.g. xc/programs/Xserver/hw/xfree86/os-support/linux/drm/kernel/r128.h though. That's the template architecture at its beauty. Most of the code is shared between the drivers, customized with a few defines. Compare that to the duplication and inconsistency before.
http://dri.freedesktop.org/wiki/DRMTemplates/?action=raw
CC-MAIN-2014-35
en
refinedweb
The Windows Phone Bing Maps Silverlight control is almost, but not quite, identical to the desktop control. We take a look at using it and some of the ways that it differs. Mapping is a common requirement for a mobile app and now it is easy to add mapping, even custom mapping, to a mobile application using the Bing Silverlight control. It is important to realise that you have two distinct ways to use Bing Maps in your app. The original Javascript/Ajax control is supported and you can use this in a Web application or you can embed a browser control within your app and access it in the same way. However the Silverlight control has a lot going for it and it is much easier to integrate it into the logic of your app rather than having to write Javascript to make it all work. To make use of the Siverlight Bing Maps control you have to download the lasted edition of the Windows Phone Tools. This is a good idea anyway as you can't submit apps to the app store using an older version. To try it out start Visual Studio and create a new Windows Phone Silverlight application. To add the Bing Maps Control to your project simply use the Toolbox and drag-and-drop and resize the map using the designer. Notice that the Windows Phone Map control and the desktop map control are different and they have a different namespace. The designer adds the line: xmlns:my="clr-namespace:Microsoft. Phone.Controls.Maps; assembly=Microsoft.Phone.Controls.Maps" so you can refer to the control with the prefix my. The generated XAML map tag is something like: <my:Map As soon as you add the map tag the designer displays a default map view. You can't interact or use the map facilities in the designer - it just acts as a placeholder. The map in the designer If you now run the program you will discover that a map appears but then a message to say that you have invalid credentials appears. <my:Map where, of course you replace KEY with your real Bing Maps key. If you now run the application you will see the same map but now you should be able to interact with it using the zoom, pan and all of the other default tools. You don't have to use XAML - ever. XAML is just an object instantiation and property initialisation language. Anything you can do in XAML you can do in code and it's often a good idea to see how things are done without XAML. First you need to add: using Microsoft.Phone.Controls.Maps; using Microsoft.Phone.Controls.Maps; to the start of the code file.so that you can refer to map classes using short names. In the button's click handler you first create the map object: private void button1_Click( object sender, RoutedEventArgs e){ Map map = new Map(); Next you set the credentials: map.CredentialsProvider = new ApplicationIdCredentialsProvider( "KEY"); and as before you have to replace KEY with your Bing Maps key. Now we can add the map object to the Grid's Children collection so that it displays: LayoutRoot.Children.Add(map);} This is all that we need and if you run this program you will see the default map as before. <ASIN:0470886595> <ASIN:1449388361> <ASIN:0735643350> <ASIN:1430233060> <ASIN:1430234105>
http://i-programmer.info/programming/mobile/1357-windows-phone-7-the-bing-maps-control.html
CC-MAIN-2014-35
en
refinedweb
27 January 2010 16:24 [Source: ICIS news] TORONTO (ICIS news)--Germany’s economy should grow by 1.4% in 2010, after a 5.0% decline in 2009 from 2008, the country’s government said on Wednesday in its official economic report and forecast for the year. The 1.4% for 2010 is up by 0.2 percentage points from 1.2% growth the government had forecast in October. Even though capacity utilisation in ?xml:namespace> Exports were expected to increase by 5.1%, after a 14.7% decline in 2009 from 2008, it said. However, even with the increase, exports were not expected to regain their pre-crisis level this year, it added. At the same time, producers would continue to struggle with high costs and were likely to cut more jobs, with the unemployment rate forecast to average 8.9% in 2010, up from 8.2% in 2009, the government said. To help the economy, the government reduced the overall tax burden for families and companies by a total €24bn ($34bn), it said. At the same time, it promised a comprehensive reform to simplify the tax system and to reduce bureaucracy. While But in an economics update last week, Wiesbaden-based chemical employers group BAVC said even with 5% growth in 2010, chemical production would still be below its 2006 level. At the same time, ($1 = €0.71)
http://www.icis.com/Articles/2010/01/27/9329214/germany-boosts-official-2010-growth-forecast-to-1.4.html
CC-MAIN-2014-35
en
refinedweb
I've successfully created a class with um, well, here: And I obviously start this with a normal 'TechClass tech = new TechClass();' (with the appropriate using).And I obviously start this with a normal 'TechClass tech = new TechClass();' (with the appropriate using).Code:using System; namespace SP2ClassLibrary { public class TechClass { /// <summary> /// Agricultural Tech /// </summary> public int[] AT = new int[12]; // 0 /// <summary> /// Birth Labs Tech /// </summary> public int[] BL = new int[12]; // 1 /// <summary> /// Build Points Tech /// </summary> public int[] BP = new int[12]; // 2 ... There is a lot more like this too... } } I will obviously change it to a get/set system if you seriously recommend doing so. First off, what is the best way to convert the property from tech.AT[x] to tech[x].AT? Make it TechClass[] tech = new TechClass[12];? I do suspect that would be the way. BTW, x is the player number, and the letters represent the specific technology level of the player, AT is agricultural tech, so tech.AT[0] is the Agricultural Tech Level of player zero. After switching this around, I would like to understand how to add um, submethods (sorry, names ALWAYS escape me!). For example, my program does a lot of calculations using these variables, especially text output to a form. One such case is properly formatting this on a form. It would be a lot more readable to have something such as tech[0].AT.Name (as a string type), or tech[0].AT.SpacedString that outputs the variable number (ie. 1) as a string with a set of preceding spaces. My aim is to achieve a very high level of readability in my code, and also to learn more about classes. Anyone up to this?
http://cboard.cprogramming.com/csharp-programming/96465-classes-advice-needed.html
CC-MAIN-2014-35
en
refinedweb
Eric W. Biederman wrote:>> I'm fine with such situations, since we need containers mostly, but what makes>> me really afraid is that it introduces hard to find/fix/maintain issues. I have no>> any other concerns.> > Hard to find and maintain problems I agree should be avoided. There are only two> ways I can see coping with the weird interactions that might occur.>> 1) Assert weird interactions will never happen, don't worry about it,> and stomp on any place where they can occur. (A fully isolated container approach).> > 2) Assume weird interactions happen and write the code so that it simply> works if those interactions happen, because for each namespace you have> made certain the code works regardless of which namespace the objects are> in.> > The second case is slightly harder. But as far as I can tell it is more robust> and allows for much better incremental development.hmm, slightly, I would say much harder and these weird interactions arevery hard to anticipate without some experience in the field. We couldcontinue on arguing for ages without making any progress.let's apply that incremental development approach now. Let's work on simplenamespaces which would make _some_ container scenarios possible and notall. IMHO, that would mean tying some namespaces together and find a way tounshare them safely as a whole. Get some experience on it and then work onunsharing some more independently for the benefit of more use casescenarios. I like the concept and I think it will be useful.just being pragmatic, i like things to start working in simple cases beforeover optimizing them.cheers,C.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/7/13/231
CC-MAIN-2014-35
en
refinedweb
Back to article November 5, 2000 Acronyms and shorthand are as much a part of IT as silicon, or so it seems. In keeping with this time honored tradition, I am proposing here that everyone extend this to include the T-SQL code that they write. C programmers are long familiar with using abbreviations in defining functions, procedures, and variables. Remember the famous Hungarian notation? It helped tremendously in tracking down problems with code by providing the programmer with information about the type of data contained in a variable, this type was expressed as an Abbreviation prefixed to the variable name. This can also be useful in database programming for a number of reasons, it helps to enforce standards, reduce typing, and prevent collisions with reserved words. With a standard list of abbreviations for data elements, your colleagues can more easily understand what data elements your code is working with. Granted long names can do the same thing, but there are other reasons not to use long names. Standards are extremely important in many aspects of life, but in programming these days with distributed teams of workers, they can substantially reduce the amount of time spent working on someone else's code. My advice is to gather existing programmers together and create a list of abbreviations for the commonly used objects in your system. Then publish this list for all programmers to reference as well as an introductory document for new programmers. By agreeing to this list as a team, you will foster communication as well as help to get everyone to agree to a list. I type fairly fast. While not a touch typist, I can type about 70 words a minute when I know what I want to type. If I type shorter words, then I can up this quite a bit. In programming, this means more lines of code per minute/hour/day spent at the keyboard. If I can reduce the amount of typing by using abbreviations, then less work for me. Or I get more done in less time and get a bigger bonus. Or (most importantly) I can spend more time with my family. Look at this code: update customerorders set shiptoaddress = '123 mystreet', shiptocity = 'The Big Apple', extrainstructions = 'drop off with landlady' where customerordersid = 999 With abbreviations, I could easily rewrite the above code to look like the following: update custord set shipaddr = '123 mystreet', shipcity = 'The Big Apple', xtrinstr = 'drop off with landlady' where custordid = 999 In one database I inherited from a development team, there were two tables that constantly bothered me. One was called "Orders" and the other was "OrderItem". Apart from not using abbreviations, the part that I had to think about (and was bothered by) everytime I referenced them was whether "order" was singular or plural. I knew why this was the case; the developer who created the tables wanted them to be "order" and "orderitem", but "order" is a reserved word in SQL Server (and probably more RDBMSes). So he created this inconsistency. I also got burned on a v6.5 to v7.0 database upgrade. "Percent" is not reserved in v6.5, but is in v7.0. Lucky for me there were about ten tables that used this column in the v6.5 database and were queried from a VB app on many desktops across the country (of course the queries were hardcoded). Can you imagine the fun I had? Both of these situations could easily be avoided by using abbreviations. If "order" were changed to "ord" and "percent" to "prcnt", then no problems would have resulted and development would not have been delayed. In fact, the time to setup abbreviations and learn them would probably be offset by the reduced typing and mistakes that could result. In addition, you would not be likely to collide with new keywords that may be created in SQL Server 2001. One note here, with SQL Server 7, you can enclose your variable and object names in brackets [] and embed white space as well as use reserved words. I personally do not like this approach and hate embedded white spaces (I think they create more typos and confusion than they are worth) but this is a perfectly valid approach in the SQL Server world. I cannot speak as to whether other RDBMS systems support this. This is a results oriented arena, so I would be remiss if I did not provide examples of abbreviations as well as a means to maintain them. The example is an Excel 2000 spreadsheet, so apologies to those of you without Excel 2000. Here it is and it includes a macro to re-sort the data. As for the maintenance, what I normally do is setup a share on the network that all developers have access to and put the spreadsheet in there along with other development documents (standards, practices, etc.). Then each developer can check the spreadsheet whenever they are unsure of an abbreviation. Abbreviations.xls For new abbreviations, you have two choices, both of which have worked well in the past. First, you can appoint someone to be the keeper of this document and everyone else has to "request" new abbreviations from this person. The job can be rotated, but this keeps the system ordered. Alternatively, you can have a new abbreviation sent around in email and "voted" on by the group. If everyone agrees, then the requester is responsible for adding it and resporting the sheet. I do not intend this as a universal list of abbreviations. In fact, you should save the spreadsheet and then delete all the data in it and add your own abbreviations that are appropriate to your company. In my last job, we used Pro for Product, though not consistently. In my current company, Prod was chosen as a team. Either one can work, just be consistent and enforce the standard. Abbreviations good, no abbreviations bad! Well, it may not be quite that simple, but it's close. Over time I have found this to be a real time saver with not much work involved. It usually takes more time to explain the rational to someone and then have them nod and agree than it does to implement. I feel and have found that having abbreviations as the standard usually helps enforce this and less programmers take shortcuts and create their own abbreviations in their section of code. It is rare I find a programmer that wants to type more than they have to. I hope this helps and as always, eelcome any feedback from you. I received this from Jim Sally ( I removed the intro and conlusions parapgraphs which were not relevant) First off, yes acronyms and shorthand have had a long time tradition in software development, but... this is the year 2000, memory is a dime a dozen (or so to speak), and in your specific instances, SQL 7 and above have extended the namespace size of its identifiers beyond the 32 character limitations of earlier versions. This wasn't done just for the hell of it, but as a thought out decision to let database developers create VERY self-descriptive names for database objects. In general, software development has refined itself to the point of being able to call some instances of itself 'software engineering'. The engineering part of it is important. And part of some of my most favorite parts of that deal with writing software that is NOT abbreviated, but very specifically spelled out. When I'm working with new folks joining my company, I don't care if they take twice as long to type out stuff using my naming schemes, because once they are done, it is very obvious what they typed since it is spelled out in clear english. Take your example as a perfect example of hard to read stuff if I were to come along and pick up your code after you left. extrainstructions => xtrinstr At first glance, even after the 10+ years of software development experience that I've had, 'xtrinstr' means nothing to me, where as 'ExtraInstructions' (note the proper capitalization too) has a very specific and clear meaning. I also think that your use of Hungarian notation as an example of why people should abbreviate is just wrong. First off, Hungarian notation was not an "abbreviating notation", but a prefixing notation for type information. It was useful information to have as a programmer while coding, but not necessary for following the logic in the code, which is why we name identifiers using english words rather than 'value1', value2', value3', etc. I can see if you were going to suggest using Hungarian notation in naming your fields, but you didn't. Bottom line, your article is yet another reason that young and inexperienced developers turn out half-assed applications that are hard to maintain and understand. And all in the name of saving time typing. I agree with much of what Mr Sally writes and apologize if I misled anyone with the analogy to Hungarian Notation (which is an abbreviation for a type). But I also think that I have a valid method of helping software development. In my experience, using abbreviations is productive and speeds development when used properly. The shorthand notation I am proposing has been used widely in written communication and I think it can be applied here. I do believe that the namespaces were extended precisely as Mr Sally to allow for more descriptive variable and object names. My time as a senior DBA, however, is valuable and saving me time is more efficient than saving a junior DBA time. I would also argue that the meaning of variables, even long, spelled- out descriptive names is not going to help junior programmers who are not familiar with the company's lexicon (which is also where most variable names come from). Is Customer really better than Cust? What does this mean in your company? Is it the same as User or Usr? Maybe, maybe not. The abbreviations I suggest are not intended to be universal, nor are they suited for every environment. I hope that you publish a standard set of abbreviations for your company as well as a set of standards for naming, coding, etc. that is given to every new programmer that joins the company. One of the standards that I learned early on was to use i, j, and k as counter variables. I still do this and let my programmers know that this is a standard that they will find in my code, rather than using counter or some other name. I expect and enure they do the same. There is not substitute for a good data dictionary and there is also no substitute for a well written, clear, and concise set of programming standards as well. I hope that I did not lead anyone to believe that you should remove all the vowels from your coding and expect people to understand the code. Indeed, some of my "abbreviations" are in fact the entire word ( email, fax, etc.). You should develop a set of standards as a team of programmers and then publish them for everyone to use. If you do this, then all existing programmers will easily adapt to the situation and all new programmers can be given a set of documentation that will familiarize them with how your company's code is written. Let me emphasize that Mr. Sally makes a number of perfectly valid (and accurate) points. I appreciate his feedback and agree to disagree with him. To me, this is more a stylistic arguement about how to write code than a right or wrong. If I were to go to work for Mr. Sally, I would argue my case for abbreviations, but if the standards he has set in place are to use long names, then I would code according to those standards at his company. The Network for Technology Professionals About Internet.com Legal Notices, Licensing, Permissions, Privacy Policy. Advertise | Newsletters | E-mail Offers
http://www.databasejournal.com/features/mssql/print.php/1471461
CC-MAIN-2014-35
en
refinedweb
26 April 2012 20:39 [Source: ICIS news] HOUSTON (ICIS)--US polyethylene terephthalate (PET) prices moved down by 2 cents/lb ($44/tonne, €33/tonne), following lower feedstock costs, sources said on Thursday. April PET domestic prices were assessed by ICIS at 84-89 cents/lb ?xml:namespace> Upstream, the PX is a feedstock for purified terephthalic acid (PTA), which in turn is the principal feedstock for PET. PET
http://www.icis.com/Articles/2012/04/26/9554198/us-pet-falls-2-centslb-in-april-on-lower-feedstock.html
CC-MAIN-2014-35
en
refinedweb
i downloaded a small program that shows the basics of this socket thing but it says that i cant find socket.h? Help pls thanks This is a discussion on Where to get this socket.h? within the Networking/Device Communication forums, part of the General Programming Boards category; i downloaded a small program that shows the basics of this socket thing but it says that i cant find ... i downloaded a small program that shows the basics of this socket thing but it says that i cant find socket.h? Help pls thanks most likely you want to use #include <winsock2.h> instead? I'm not sure exactly what you're trying to use this program for, but I know that when I use winsock, I use the winsock2.h header, but it seems like you're looking for a different file...? try a google search if winsock isn't what you're looking for actually, socket.h is part of the *nix GCC system includes. it is under the directory sys in /usr/include . although how they are calling it just "socket.h" makes me wonder as you usually have to include it with "sys/socket.h" Well here is the program i dunno how to execute/compile it Code:#include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <netinet/in.h> #include <arpa/inet.h> #include <unistd.h> #include <iostream> #include <string> int main(int argc,char* argv[]) { if (argc < 2) { std::cout << "Please specify a stock symbol\n"; exit(1); } struct sockaddr_in address; address.sin_family = AF_INET; address.sin_port = htons(80); inet_aton("66.94.228.229",&address.sin_addr); int sockNo = socket(PF_INET,SOCK_STREAM,0); int result = connect(sockNo,(sockaddr*)&address,sizeof(struct sockaddr_in)); if (result == 0) { /* We have a connection */ std::string getRequest("GET /d/quotes.csv?f=l1&s="); getRequest += argv[1]; getRequest += "\r\n"; write(sockNo,getRequest.c_str(),getRequest.length()); char buffer[2000]; read(sockNo,buffer,2000); std::cout << "Stock Price of " << argv[1] << " is " << buffer; } else { std::cout << "Failed to Connect\n"; } return(0); } There should be a button on your compiler called "Run". Press that.There should be a button on your compiler called "Run". Press that.i dunno how to execute/compile it benforbes@optusnet.com.au Microsoft Visual Studio .NET 2003 Enterprise Architect Windows XP Pro Code Tags Programming FAQ Tutorials Ohh, lets play a game of "guess the compiler and OS" On second thoughts, lets just wait for the OP If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut. If at first you don't succeed, try writing your phone number on the exam paper. I support as the first necessary step to a free Europe.
http://cboard.cprogramming.com/networking-device-communication/51794-where-get-socket-h.html
CC-MAIN-2014-35
en
refinedweb
Hey, Hey, Scripting Guy! Is there a way to scan a directory structure or a whole drive for files that were created or last accessed by a specific user account? - TA)" OneLineOperatingSystemVersion.vbs '========================================================================== ' ' NAME: OneLineOperatingSystemVersion.vbs ' ' AUTHOR: ed wilson , MSFT ' DATE : 4/8/2009 ' ' COMMENT: Uses a one line technique to report the OS Version. This only works On ' Vista and above due to the change in the win32_OperatingSystem class being a ' singleton ' '========================================================================== wscript.echo GetObject("winmgmts:win32_OperatingSystem=@").Version.. Hi GM, This may help: AddPrinterConnection.vbs Set WshNetwork = WScript.CreateObject("WScript.Network") WshNetwork.AddPrinterConnection "LPT1", '========================================================================== ' ' ' NAME: <PingTestAndConnectToRangeOfPorts.vbs> ' ' AUTHOR: Ed Wilson , MS ' DATE : 1/18/2006 ' ' COMMENT: <Uses netDiagnostics class from the netdiagProv> '1. Uses two methods from the netdiagProv in the cimv2 namespace '2. The methods are: ping and connectToPort. Each will return a -1 or 0 '3. a -1 is true (success) a 0 is false (failure) '4. It MUST run on a Windows XP machine, but can target Windows 2003. '========================================================================== Option Explicit 'On Error Resume Next dim strComputer dim wmiNS dim wmiQuery dim objWMIService dim colItems dim objItem Dim IP, strPorts, arrPorts, size, errRTN, errRTN1 Dim aPort 'single port at a time Dim strOUT 'output string. IP = "London" size = "32" strPorts = "53,88,135,389,445,464,593,636,1025,1026,1028,1051,1110,3268,3269" arrPorts = Split(strPorts,",") strComputer = "." 'The) Ed Wilson and Craig Liebendorfer, Scripting Guys
http://blogs.technet.com/b/heyscriptingguy/archive/2009/05/22/quick-hits-friday-the-scripting-guys-respond-to-a-bunch-of-questions-05-22-09.aspx
CC-MAIN-2014-35
en
refinedweb
Other Information / API for other plugins API for other plugins Logging Data It is incredibly easy for other plugins to log their own data to HawkEye so that server owners can see as much information as possible about their server. It takes one import and a single line of code for basic usage. - Add HawkEye.jaras an External Jar in your IDE. - Import uk.co.oliwali.HawkEye.util.HawkEyeAPIinto any classes where you want to log to HawkEye. When you want to log something, use the method: HawkEyeAPI.addCustomEntry(JavaPlugin plugin, String action, Player player, Location location, String data); Use the action parameter to differentiate between different types of actions your plugin logs. Use only alphanumeric characters and spaces in your action names. Basic Example using a 'Home' plugin The following logs to HawkEye everytime a player 'goes home'. It logs the 'data' part as the ID of the home in the database. Please note that the following will force your users to use HawkEye (it doesn't check if the plugin is loaded or not). private void goHome(Player player, Home home) { Location loc = home.getLocation(); player.teleport(loc); HawkEyeAPI.addCustomEntry(this, "Go Home", player, loc, home.getId()); } Other uses of the data field could be for the ban reason in a ban plugin or winner of the fight in a war plugin, for example. Advanced example that checks if HawkEye is loaded This example checks if HawkEye exists first, making it optional for your plugin users. public class MultiHome extends JavaPlugin { public boolean usingHawkEye = false; public void onEnable() { Plugin dl = getServer().getPluginManager().getPlugin("HawkEye"); if (dl != null) this.usingHawkEye = true; } private void goHome(Player player, Home home) { Location loc = home.getLocation(); player.teleport(loc); if (this.usingHawkEye) HawkEyeAPI.addEntry(this, "Go Home", player, loc, home.getId()); } Logging normal HawkEye events he normal HawkEye events like block breaks can all be logged from the API too. Simply use this API method instead: HawkEyeAPI.addEntry(JavaPlugin plugin, DataType type, Player player, Location location, String data); DataType is located at uk.co.oliwali.HawkEye.DataType Retrieving data Retrieving data is a little bit more complicated than adding to the database. You need to create your own 'callback' object for the search engine to call once it is done retrieving results. The basic method you need to call is this: HawkEyeAPI.performSearch(BaseCallback callBack, SearchParser parser, SearchDir dir); Creating a BaseCallback class To perform a search you need to use an instance of a class extending BaseCallback. There are two built-in callbacks that you can use if need be, although they are designed for very specific uses inside HawkEye. These can be found in the callbacks package: uk.co.oliwali.HawkEye.callbacks. You will more than likely need to create your own BaseCallback class. This class is outlined here: public abstract class BaseCallback { /** * Contains results of the search. This is set automatically before execute() is called */ public List<DataEntry> results = null; /** * Called when the search is complete */ public abstract void execute(); /** * Called if an error occurs during the search */ public abstract void error(SearchError error, String message); } Here is an example of a very simple extension of BaseCallback public class SimpleSearch extends BaseCallback { private Player player; public SimpleSearch(Player player) { player.sendMessage("Search database..."); } public void execute() { player.sendMessage("Search complete. " + results.size() + " results found"); } public void error(SearchError error, String message) { player.sendMessage(message); } } Creating a SearchParser instance Obviously you need to give the search engine some parameters to build a query out of. This is done by using the SearchParser class, found here: uk.co.oliwali.HawkEye.SearchParser There are four constructors available. Two are mainly for internal HawkEye commands, whilst the others are more general: public SearchParser() { }- this is the constructor you will most likely use public SearchParser(Player player) { }- use this one if you are searching due to some kind of player input public SearchParser(Player player, int radius) { }- this is used for 'radius' searching in HawkEye public SearchParser(Player player, List<String> args) throws IllegalArgumentException { }- used for HawkEye user-inputted search parameters SearchParser contains public methods that you can set to whatever you like. They are all optional, but bear in mind if you supply nothing, you won't get any results! The methods are outlined here: public Player player = null;- Player that has called initiated the search somehow public String[] players = null;- Array of player names to search for. Can be partial public Vector loc = null;- Location to search around public Vector minLoc = null;- Minimum corner of a cuboid to search in public Vector maxLoc = null;- Maximum corner of a cuboid to search in public Integer radius = null;- Radius to search around public List<DataType> actions = new ArrayList<DataType>();- List of DataType actions to search for public String[] worlds = null;- Array of worlds to search for. Can be partial public String dateFrom = null;- Date to start the search from public String dateTo = null;- Date to end the search at public String[] filters = null;- Array of strings to use as filters in the data column If you set the location and/or radius, you should then call the parseLocations() method to sort out the locations into proper minLoc and maxLoc values. If you set radius but no location and then call parseLocations() you MUST have set player, otherwise you shouldn't have set radius. Here is an example of a very simple setup of a SearchParser: SearchParser parser = new SearchParser(); parser.player = player; parser.radius = 5; parser.actions = Arrays.asList({DataType.BLOCK_BREAK, DataType.BLOCK_PLACE}); parser.parseLocations(); This will search 5 blocks around the player for block breaks and block places SearchDir SearchDir is an enumerator representing the direction to list search results in. Simply import the class and specify SearchDir.DESC or SearchDir.ASC Putting it all together So now we have got our BaseCallback written and our SearchParser instance created, we just need to put it together: //Setup a SearchParser instance and set values SearchParser parser = new SearchParser(); parser.player = player; parser.radius = 5; parser.actions = Arrays.asList({DataType.BLOCK_BREAK, DataType.BLOCK_PLACE}); parser.parseLocations(); //Call search function performSearch(new SimpleSearch(player), parser, SearchDir.DESC); This will search 5 blocks around the player for block break and block place. When the search is done the callback class tells the player how many results were found. - 1 comment - 1 comment Table of contents Facts - Date created - Aug 25, 2011 - Last updated - Aug 25, 2011 The JavaDocs link is dead. Please update.
http://dev.bukkit.org/bukkit-plugins/hawkeye/pages/other-information/api-for-other-plugins/
CC-MAIN-2014-35
en
refinedweb
NLP.HistPL.Lexicon Contents Description The module provides functions for working with the binary representation of the historical dictionary of Polish. It is intended to be imported qualified, to avoid name clashes with Prelude functions, e.g. import qualified NLP.HistPL.Lexicon as H Use save and load functions to save/load the entire dictionary in/from a given directory. They are particularly useful when you want to convert the LMF dictionary to a binary format (see NLP.HistPL.LMF module). To search the dictionary, open the binary directory with an open function. For example, during a GHCi session: >>> hpl <- H.open "srpsdp.bin" Set the OverloadedStrings extension for convenience: >>> :set -XOverloadedStrings To search the dictionary use the lookup function, e.g. >>> entries <- H.lookup hpl "dufliwego" You can use functions defined in the NLP.HistPL.Types module to query the entries for a particular feature, e.g. >>> map (H.text . H.lemma) entries[["dufliwy"]] Synopsis - data HistPL - data Code - type Key = Key UID - type UID = Int - tryOpen :: FilePath -> IO (Maybe HistPL) - open :: FilePath -> IO HistPL - lookup :: HistPL -> Text -> IO [(LexEntry, Code)] - lookupMany :: HistPL -> [Text] -> IO [(LexEntry, Code)] - getIndex :: HistPL -> IO [Key] - tryWithKey :: HistPL -> Key -> IO (Maybe LexEntry) - withKey :: HistPL -> Key -> IO LexEntry - save :: FilePath -> [(LexEntry, Set Text)] -> IO HistPL - load :: HistPL -> IO [(Key, LexEntry)] - module NLP.HistPL.Types Dictionary A binary dictionary holds additional info of type a for every entry and additional info of type b for every word form. Key Open tryOpen :: FilePath -> IO (Maybe HistPL)Source Open the binary dictionary residing in the given directory. Return Nothing if the directory doesn't exist or if it doesn't constitute a dictionary. open :: FilePath -> IO HistPLSource Open the binary dictionary residing in the given directory. Raise an error if the directory doesn't exist or if it doesn't constitute a dictionary. Query lookupMany :: HistPL -> [Text] -> IO [(LexEntry, Code)]Source Lookup a set of forms in the dictionary. tryWithKey :: HistPL -> Key -> IO (Maybe LexEntry)Source withKey :: HistPL -> Key -> IO LexEntrySource Extract lexical entry with a given key. Raise error if there is no entry with such a key. Conversion Save save :: FilePath -> [(LexEntry, Set Text)] -> IO HistPLSource Construct dictionary from a list of lexical entries and save it in the given directory. To each entry an additional set of forms can be assigned. Load Modules NLP.HistPL.Types module exports hierarchy of data types stored in the binary dictionary. module NLP.HistPL.Types
http://hackage.haskell.org/package/hist-pl-lexicon-0.4.0/docs/NLP-HistPL-Lexicon.html
CC-MAIN-2014-35
en
refinedweb
All, I have been looking for - unsuccessfully - for a utility or an Eclipse plugin which would perform static analysis of the code and generate information about dependencies on attributes/methods of classes/interfaces of a particular package. Say, the classes in input package/directory rely on some other package(s) in the classpath: Code Java: import packageA.*; class myClass { void method1() { packageA.ifcA i = packageA.classB.attrC; double d = i.methoD(); ... } ... } the output would be myClass -> packageA.ifcA.methodD myClass -> packageA.classB.attrC Is there such utility you are aware of? Or perhaps you could recommend some utility which would allow creating custom code analysis - so that I could write one with this particular purpose? Thanks a lot in advance, Bob
http://www.javaprogrammingforums.com/%20java-ides/9302-package-class-interface-method-dependency-report-printingthethread.html
CC-MAIN-2014-35
en
refinedweb
This is the mail archive of the elfutils-devel@sourceware.org mailing list for the elfutils project. Hi Jiri, On Wed, 2013-10-09 at 12:15 +0200, Jiri Slaby wrote: > On 10/09/2013 12:04 PM, Mark Wielaard wrote: > >. OK, thanks for the cleaned up test case. I think I finally understand now. I believe the correct check here is to see if the elf flags have ELF_F_DIRTY set indicating something changed between what is in memory and on disk. If so we need to realloc and explicit copy over the new scn shdr. What do you think about the following? diff --git a/libelf/elf32_updatefile.c b/libelf/elf32_updatefile.c index 296b1ac..c4af9c0 100644 --- a/libelf/elf32_updatefile.c +++ b/libelf/elf32_updatefile.c @@ -633,7 +633,8 @@ __elfw2(LIBELFBITS,updatefile) (Elf *elf, int change_bo, size_t shnum) #endif ElfW2(LIBELFBITS,Shdr) *shdr_data; - if (change_bo || elf->state.ELFW(elf,LIBELFBITS).shdr == NULL) + if (change_bo || elf->state.ELFW(elf,LIBELFBITS).shdr == NULL + || (elf->flags & ELF_F_DIRTY)) shdr_data = (ElfW2(LIBELFBITS,Shdr) *) alloca (shnum * sizeof (ElfW2(LIBELFBITS,Shdr))); else @@ -764,7 +765,8 @@ __elfw2(LIBELFBITS,updatefile) (Elf *elf, int change_bo, size_t shnum) (*shdr_fctp) (&shdr_data[scn->index], scn->shdr.ELFW(e,LIBELFBITS), sizeof (ElfW2(LIBELFBITS,Shdr)), 1); - else if (elf->state.ELFW(elf,LIBELFBITS).shdr == NULL) + else if (elf->state.ELFW(elf,LIBELFBITS).shdr == NULL + || (elf->flags & ELF_F_DIRTY)) memcpy (&shdr_data[scn->index], scn->shdr.ELFW(e,LIBELFBITS), sizeof (ElfW2(LIBELFBITS,Shdr)));
https://sourceware.org/ml/elfutils-devel/imported/msg00957.html
CC-MAIN-2018-43
en
refinedweb
Repository: Ionic Framework Github Repository Software Requirements: Visual Studio Code(Or any preferred editor) A browser(Preferably Chrome) What you will learn: In this tutorial you would learn how to live code a real ionic application with practical examples including these major concepts. Utilizing javascripts event loop in ionic Understanding common problems faced regarding blocking Learning how to create and use charts in our ionic app. Difficulty: Intermediate Tutorial Today we'd be dealing with more of the logic or back-end development of our ionic app. In this tutorial I would to my very best try to teach some major problems faced due to the fact that javascript is a single-threaded programming language and how this applies to ionic using a live example. So lets jump in; For our product detail page we would like to add a chart that shows the number of goods in the store over a period of time and since this isn't by default provided by ionic, we would have to use an external library called chart.js. Just head to your console and run this command while connected to the internet to install all its dependencies. npm install chart.js Note: You should be in the directory of your app After this you can directly use it by simply importing it into whatever page would require a chart. This module has many styles and designs for charts but for this case, we would be using the line chart. You could click here for the official documentation if you wish to use any other style or read more extensively on this. For a quick recap of what has changed. I created a new page called the productdetailPage and injected it into our app as taught in earlier tutorials of this series, then i set a function on the product page to create a modal for anytime any product is clicked using this pages template. I gave the page a simple front-end structure using the ion-grid and ion-card tags which are similar(if not identical) to bootstraps method of design. This is the present template. .png) And this is the code for the template productdetail.html <ion-header> <ion-navbar> <ion-title text-right>Productdetail</ion-title> <button ion-button icon-only text-right (click)="dismisspage()"><ion-icon</ion-icon></button> </ion-navbar> </ion-header> <ion-content padding:10px> <ion-card> <ion-card-header> <h1> Chart of Sales </h1> </ion-card-header> <ion-card-content> <canvas #lineCanvas></canvas> </ion-card-content> </ion-card> <button ion-button block>Restock</button> <ion-grid> <ion-row> <ion-col> <ion-card> <ion-card-header> <h4>Product Name</h4><br><hr> </ion-card-header> <ion-card-content> <h5>{{ product.name }}</h5> </ion-card-content> </ion-card> </ion-col> <ion-col> <ion-card> <ion-card-header> <h4>Number in Store</h4><br><hr> </ion-card-header> <ion-card-content> <h5>{{ product.numOfGoods }}</h5> </ion-card-content> </ion-card> </ion-col> </ion-row> <hr> <ion-row> <ion-col> <ion-card> <ion-card-header> <h4>Profit per unit</h4> <hr> </ion-card-header> <ion-card-content> <h5>{{ product.profit | currency: 'N' }}</h5> </ion-card-content> </ion-card> </ion-col> <ion-col> <ion-card> <ion-card-header> <h4> Date Entered </h4> </ion-card-header> <ion-card-content> <h5> {{ product.timeOfEntry }} </h5> </ion-card-content> </ion-card> </ion-col> </ion-row> </ion-grid> </ion-content> So now we can go on to the logic. For better understanding, let me go through some Javascript explanation. Javascript runs on a single-threaded runtime, which means that it can only run one piece of code at a time. Although it is single threaded, it had other components such as web API's which are made with C++, that can be used to run other processes that aren't on the main stack. Whenever processes are set to run on this API's however, they do no run on the main stack but are run separately and injected back onto the stack when they are done running by the call stack queue. Hence Whenever a call is run off the stack it is called asynchronous and whenever it isn't it is called synchronous. This explanation would come in handy in explaining why the code below seems bulky or is repeated for the if and else statements. So back to our code. To build a chart we have to create a canvas in the template and use ionic's ViewChild to link it up to the ts file. To do this simply import the viewchild. import { Component, ViewChild } from '@angular/core'; And then link it up to your template by using the hash to refer to the canvas you created in the template. productdetail.html <canvas #lineCanvas></canvas> and in the template use the viewChild like this productdetail.ts @ViewChild('lineCanvas') lineCanvas; Due to the fact that the page we are creating is a modal, the data for each of the pages is going to be different and specific to the product that triggered the modal. This would mean that we would have to save each product data to storage using a different name that we'll set our code to create for us. In the ionViewDidLoad() function, add this line of code which will create a name for the page based on the information passed to the modal. let graphdataname = 'graphdata' + data.name; This would be the key that our code will use to find the data for the page we're on. Note: The storage functions are asynchronous calls, so whenever they are called they leave the stack and are run after the rest of the code is run. This is why our code has to be repetitive. Also due to the fact that we'll be using storage, we would need to set code to be run if the storage is occupied or if it is empty. Here is what we do product.ts import { Component, ViewChild } from '@angular/core'; import { IonicPage, NavController, NavParams } from 'ionic-angular'; import { Chart } from 'chart.js'; import { Storage } from '@ionic/storage'; @IonicPage() @Component({ selector: 'page-productdetail', templateUrl: 'productdetail.html', }) export class ProductdetailPage { product: any; @ViewChild('lineCanvas') lineCanvas; lineChart: any; public graphdata: number[]; constructor(public navCtrl: NavController, public navParams: NavParams, public storage: Storage) { this.product ={name: "",type: "",costprice: null,sellingprice:null,numOfGoods: 1,totalsellingprice: 0,profit: 0}; this.graphdata = []; } ionViewDidLoad() { console.log('ionViewDidLoad ProductdetailPage'); let data = this.navParams.get('item'); let graphdataname = 'graphdata' + data.name; let newGraphData = data.numOfGoods; this.storage.get(graphdataname).then((name)=>{ if(name){ this.graphdata = name; console.log('There is data available'); this.graphdata.push(newGraphData); this.storage.set(graphdataname, this.graphdata);, } ] } }); } else{ console.log('There is no data available'); let list = this.graphdata; list.push(newGraphData); this.storage.set(graphdataname, list);, } ] } }); } }); console.log(this.graphdata); this.product = data; } } Because were using this asynchronous call we would have to run this code based on what we get from storage and by simply removing our chart from those calls, we would get a blank chart because the date would not be able to be called from the storage before using that data to run it. To better understand this copy the code above and remove the code for the chart which is defined within this.linechart = new Chart(this.lineCanvas.nativeElement{ //code here } And place it after the if and else statement to see how it affects the chart. This data is set to be modified each time the page is accessed and isn't a really good option, So it'll be better to load this data on the server side and send it to your app, or let this app generate this data after a certain period of time(such as a particular time of the day). In later parts of this series where we deal with server side interaction well cover how we could do that using more complex technicalities that we're yet to get into See you next time and Hope this tutorial was helpful. You can find my code in Github Thank you for your contribution. Below is our, @mcfarhat! So far this week you've reviewed 4 contributions. Keep up the good. As a follower of @followforupvotes this post has been randomly selected and upvoted! Enjoy your upvote and have a great day!!
https://steemit.com/utopian-io/@yalzeee/tutorial-ionic-app-development-building-the-business-app-part-6-adding-live-loading-charts
CC-MAIN-2018-43
en
refinedweb
This. Team Explorer: - The window has a vertical orientation and shares the space occupied by Solution Explorer by default. - The look of the of the list of files is much simpler. It’s a reasonably compressed tree view and rather than textually writing the change type, give a more visual indication ([+] for add, [oldfilename] for rename, strikeout for delete, etc). The result is a less cluttered and easier to digest look. Also, for the case of renamed files, I challenge you to find out what it was renamed from in the 2010 version - You’ll notice the checkboxes are now gone. Instead we have “included” and “excluded” changes. You can drag and drop, etc between them. When you check in, only the files in “included changes” are included in the check in. The excluded changes also includes the detected changes functionality I talked about in my post on workspaces. - If you are watching closely, you see that we have an “Add Work Item by ID” command which was something you couldn’t do in 2010 (you could only pick from a query result) and has been a much requested feature. - Each section can be expanded or collapse to remove clutter. - If you have checkin policy violations, that section will show up and if not, you won’t see it – a nice clutter reduction for those who don’t have policies or violations. - A checkin notes section will show up if you have checkin notes defined. We don’t have any by default any more so you don’t see them in the pane above. As you can see, additional commands are under the “More” drop down: Overall I think the experience around managing pending changes is an improvement. Friendly Names. Reducing Mod: - I can still filter by shelveset owner (here it’s filtered to my shelvesets – that’s the default). I can type “Brian Harry” or “redmond\bharry” and I’ll get the same result. - I can also filter by text in the shelveset title (the “Type here to filter the list” box) and is darn handy for people who keep a lot of shelvesets (and I’ve seen people with dozens and dozens). It’s an incremental – as you type filter so you only have to type as much as needed to find what you want. - We provide a nicer way of showing how old the shelveset is rather than just showing the creation date. - Once I’ve found the shelveset I want, rather than hitting a “Details” button and getting a modal dialog, double click it and get an in place shelveset details pane, allowing me to operate on the files in the shelveset the same way I would on any other. I’ve included a picture. Note it looks an awful lot like the pending changes window above. As a side note, notice that I’ve configured a checkin note (Localization) and that’s now showing up at the bottom of the pane. I’ve also collapsed the comment section and associated a work item with this shelveset. Change set details from many places (like history, annotate, etc) will also be non-modal now. Asynchronous Operations: - Editing a file is now asynchronous. Local workspaces help decouple us from the server but we’ve also decoupled all the IDE state updating so editing a file really is instantaneous. - Checking in is now asynchronous – you can keep on working while your checkin processes. - Find shelveset is asynchronous. - Sheveset details and changeset details are asynchronous. - File compare is asynchronous. - Other things outside of version control are also asynchronous – like opening a work item. It used to block the UI. There are certain things that have significant interaction with the VS project systems that are hard to make async. Other than that, probably the biggest operation that we haven’t gotten around to making async yet is “Get latest version”. Rollback in the UI. Restore file modification time: - The time the file is gotten (this was the default and works very well in concert with make and other similar build dependency trackers). - The modification time that the file had when it was last edited before checkin. - The date/time that the file was checked in. TFS 2010 and before only supported option #1. In TFS 11, we have added support for option #3. It can be set on a workspace by workspace basis. We plan to add support for #2 in the future but haven’t gotten there yet. Unix file attributes. Properties on Shelvesets Conclusion Join the conversationAdd Comment Nice to have great features to long for 🙂 When you talked about friendly names for the domain users, I recalled an issue I stumbled upon before. Is it possible to use Federated authentication using SAML tokens (like wif) to authenticate users instead of using the AD, or is the AD a requirement for TFS? Is the checkin notes / comments field an HTML field on changesets/Shelvesets(like you're localization comment on the selveset)? I hate that in 2010 I can't put bulleted lists, bolds, and colors(especially RED) in my comments/notes/description fields. Nice features…Too late but so good…also I hope some TFS Power tools actions are going to be built-in Great stuff, I'm really looking forward to the new version. A couple of questions though: Will the new version of Team Explorer Everywhere be released the same day as TFS 11? And it is the same for the other TFS products (SharePoint, Project… integration)? Will Team Explorer Everywhere have a Mylyn Connector (integrated or a separate updated version of labs.teamprise.com/mylyn)? Hi Brian, You have mentioned several identity providers. Does that mean that the authentication model (Windows Auth) is changing in TFS11? The wireframe person icon in the find shelvesets is a waste of space….wouldn't it be better to have a small image of the person's face…like how outlook and facebook chat does it. Also the wireframe looks like a guy….what about our female developers…will they have female looking wireframes? Some customization on this dialog would be nice. For example maybe I want certain fields, like time remaining to appear when checking in. Drag and drop files to exclude or include! Are you aware that most developers actually try not to use the mouse, but use the keyboard. I really hope there will be good keyboard support for selecting which files are part of a checkin or not. This include/exclude does not sound good. Generally, the lack of keyboard short cuts for use with TFS operations in Visual Studio is a big omission in the product. What keyboard short cut do you use to go to pending changes (I personally created a custom keyboard short cut CTRL+ALT+P). Ouch, that looks horrible. So what happens if I right click on a solution explorer node and choose "Check In" (which really should have been renamed "Commit" btw)? Does it force a tab change to Team Explorer? And does it tab back to my Solution Explorer context when the Commit has finished? Dan, not in our on premises solution at this time. We support WIF/ACS in our cloud solution. We may bring that to on premises as well at some point. Allen, no, I'm afraid not. I agree it would be great to be able to use rich text everywhere but it creates all kinds of problems. Rich text really doesn't work from the command line – entering or displaying. You can load it in Excel (it's created all manner of problems with our work item tracking rich fields). RSS feeds don't work well with rich text, notepad and other editors don't work well with it, and on and on. I'd love to believe the world wll get to the point that rich text is just assumed everywhere and all tools manage it well. We're not there yet and we'll be stuck with some stuff in plain text for a while. TBK0000, Yes, we'll be releasing all of the various TFS related components (including TEE) together. Rory, the on premises identity model is mostly unchanged from 2010 (in terms of functionality) but we've made it extensible and we have an alternate implementation for the cloud that uses Windows Azure Access Control Service (ACS). secaa23, we've added support for portraits in TFS 11 and you'll find them in various places in the product. More on that in future posts. abe, I'll be doing a whole post on extensibility in the next month or so, stay tuned. Harry, Yes, lots of developers prefer to use the keyboard. We've generally added keyboard shortcuts for everything you can do. Maybe I'll do a post just on keyboard shortcuts at some point. cat, Yes, you can right click on solution explorer and check in. Yes, it activates Team Explorer. We're still playing with this experience some. No, it does not automatically switch the tab back when you are done (although we could potentially do that). Brian "Editing a file is now asynchronous. " You just made my day. 🙂 There's some really nice changes in here. How is the update going to work on the client side, is it just something to install into VS 2010? Hi As part of your Shelveset story, are considering the ability for users to easily update an existing shelveset? At the moment, the process is a bit labourious (click Unshelve, get details on the shelveset to update, copy the name, close the dialogs, click shelve, paste in the name you copied, commit). Would it be possible to have a drop down of the existing shelvesets in the new Shelve window, allowing the user to select an existing one, or enter a new name? Or maybe this is one for the backlog? 🙂 Cheers Ged Re-typing a much longer comment that the blog software apparently ate: Overall: Good! Merging pending checkins with Team Explorer: Don't like it – tall skinny window won't look so nice with real-world files names/folder structures rather than "Clas5.cs". Tiny checkin comments textbox: Useless. Enough reason to find another tool for doing checkins – huge step backwards from previous versions. Can we look forward to a post on the changes to Test/Lab Manager as well? More async, less modal == great stuff! Looking forward to TFS v.next. Now, if the folks working on WIT could get the equivalent of tfseventhandler.codeplex.com into the product, we'd be very happy campers. That and give us a way to parameterize WIT queries i.e. pop UI to the query invoker to allow them to select a value to parameterize the query. Yeah, that'd be great. 🙂 As another commenter said, the check-in comments section needs to be MUCH larger. It is going to be excruciatingly difficult to use at this size. Perhaps you could make the section height draggable/expandable? This size is okay for a few words, but check in comments often take a sentence or several sentences to summarize what was done, and sometimes why it was done. Trying viewing 3 sentences in that tiny box! >In TFS 2010, we introduced a property system that allows you to tag properties on objects I'm really interested in this, sound like we can attach a name/value list on any object in TFS; is there any documentation on this as coudn't find any. I'm really interested to know more about TFS. I have to +1 on the comments box being bigger (WAY bigger). As another commenter said, the check-in comments section needs to be MUCH larger. It is going to be excruciatingly difficult to use at this size. Also bummed to hear that the field will still only take a text string…what is this 1999? Also in 2008/2010 carriage returns were truncated in the comments field if you pulled the comments out via the API has this been fixed. Rich comment data is the glue that really makes an ALM tool actually useful in our organization (and work item links). Any chance you could show us that the new Pending Changes Window would look like if it was wider (like the current Pending Changes Window)? Hopefully the comment text box goes to multiline as you type in long comments. Definitely hoping for a long multiline textbox for long comments. Any chance we will finally have spell check in the comments window? I misepll my comments all the tmie. I like the look of the new Pending Changes, but I have to +1 the comments about the width for paths, a much larger comments box, and keyboard shortcuts. – I like to keep Team Explorer and Solution Explorer as narrow as possible, but pending changes would require me to keep it quite a bit wider to see the paths. – I type detailed comments and appreciate the width of the pane when Pending Changes is on the bottom. – Currently, it's easy to multi-select contiguous and discontiguous items in pending changes to check/uncheck them as needed. This appears to be more difficult in the new UI. If there are no check-boxes, it needs to be easy to quickly select items (with keyboard) and then press Ctrl+<something> to include/exclude them. – For large changesets, I often re-sort the items in Pending Changes on the Name or Change columns. I sort by Name to find something quickly or sort related changes that exist in different folders (like Dev and Test folders) and I sort by Change to group edits, adds, and deletes together for large changesets or when reviewing merges prior to check-in. This doesn't appear to be possible with the tree, but perhaps this is still possible? Other comments: – This is minor, but can there be an option to prevent the automatic selection of all remaining checked out items after you check some in? I've accidentally done double check-ins before (the check-in confirmation in TFS 2010 does help here), but it's also annoying when I am doing several check-ins of subsets of changes and have to keep clearing the items). – I see there's a "Request Review" command on that More menu – is there built-in code review functionality coming in TFS 11? Sorry to be nit-picky on that. I'm psyched about the modality, async, and file modification changes (along with the changes you've talked about in other recent posts). Thank you, Jeff Hope this feedback makes it into the product. I urge you to make entering comments easier and allow comments to have a more prominent place in the UI when checking in files and shelving. Comments play a key part in our processes in ensuring developers commit high quality code into the system. We actually have a tool that generates a weekly word document from the comments each developer makes and this is the status report that is automatically emailed to his manager and dev lead. Now that devs no longer have to put together status reports they see the value in having really descriptive….really technical descriptions of their work in their comments. When developers go on vacation or leave the company we then have a great record of all of the work done by a dev since the beginning of time and what they were working on. It's really invaluable data about how the products/components are in fact built. We have another tool that searches for curse words…and when enough curses are detected we immendiately have the scrum masters step in and work with the development team to reset expectations and or move dates to accomodate some necessary but painful refactoring. Comments are the life blood of understanding how developers are working in the system, how they are working with each other, and how high a quality bar we have before we ship. Hi Brian, thanks for this exaustive explanation of the new feature of TFS11: they are more than welcome! 🙂 I have a question/feedback: in the solution explorer when a file is checked out by someone else, the tooltip say "checked out by someone else or in another place"; it is possible to know (without opening the source control explorer) who is that "someone" (maybe with the new friendly name :), or which is the other workspace, directly in that tooltip? Thanks! J.Elcik, It sounds like you have a very evolved process for managing comments. Do you have any suggestions on changes we might make that would make managing comments easier for you? Licantrop0, It's not impossible but we don't do it right now. I agree it's kind of goofy to say it's checked out but not to whom 🙂 However, in general, we're changing the way we think about workspaces. More and more people are doing various forms of parallel development – shelvesets, branches, DVCS, offline and who something is either not possible to determine, inefficient to determine or not all that interesting. With our new local workspaces, we now only show you if you have something checked out and not whether or not someone else does. Local workspaces are the default going forward and as such changing this server workspace behavior is lower priority than it might otherwise be. Brian What about full version control for build definitions??? Are you adding that to TFS 11? connect.microsoft.com/…/tfs-2010-no-longer-keeps-full-contents-of-build-definitions-under-version-control Brian, we've evolved our process for managing comments becuase comment data repeated showed itself to contain critical information about why something went wrong…and who knew what was being done. For a way to improve the comments in TFS…I would suggest looking at the facebook API for comments it's robust and it allows for powerful commenting functionality….that promotes comments to be 1st class data with rich eventing and notification properties. developers.facebook.com/…/comments If you don't like the facebook model….take a look at the powerful commenting support that the SharePoint API provides. weblogs.asp.net/…/sharepoint-changing-comments-of-document-versions-in-code.aspx Comments on check-ins are THE, yes THE most valuable way devs communicate and socialize the development work they are doing to build the product. Comments on workitems are THE, yes THE, most valuable way business analysts and product owners communicate and socialize product/schedule work they are doing to get the product out the door. Please make the commenting functionality in TFS 11 richer than TFS 2010. I think someone from MS should pay the JetBrain's usability team to give a thorough once over to TFS (and VS 2010). IMO, more effort needs to be put on usability, simplicity and ease of use – less on adding more stuff. The check-in comments box automatically expands (vertically) to accommodate your comment. You can, of course, also resize the tool window to give yourself a wider editing area. We're also making changes that will make it easier to associate work items with your changes so that you'll have more context than just the check-in comment. @KenException, regarding versioning of build definitions, you may want to check out my response to a related question on stackoverflow here: stackoverflow.com/…/7277493 From what I've seen in the developer preview the comments section does expand….but only to <10 lines and the wrapping of text is a pain to scroll and scroll to read try putting in code with long namespaces. May I suggest an "on hover" gesture that expands the dialog and makes it wider and reflows the text in a wider space. Thanks for the suggestion. I'll pass it on to Matt and Jim to think about. Brian “Also, for the case of renamed files, I challenge you to find out what it was renamed from in the 2010 version” As close as I could get with the TFS API geekswithblogs.net/…/database-changes-between-tfs-2010-and-tfs-2011.aspx. Cheers, Tarun I've been trying to use this for a bit now, can you add a one-click way of excluding individual items? The tickboxes in the old pending changes was a lot easier to work with as dragging to an offscreen location is difficult; also the right click works well for hierarchies but not for several individual files inside large hierarchies, the first few times I used it I didn't even realize I could do ctrl+clicking and use the context menu to exclude items and just sat stumped. The grey colour scheme in the latest beta also makes it really painful to work out which files are which, while the screenshots look ok… I know this is an old post, but I wanted to reask a year later something mentioned by a couple of people in this thread. Can I use WIF or any other form of federation to get away from just using AD now that we are closer to release? Thanks. Robert, no, I'm afraid not. We've added some support for WIF in our hosted version but that hasn't made it to on premises yet. It's just not all that common of an ask. I'd like to make sure I understand your scenario. What's drving you to want to use something other than Windows identities? What would work better for you? Brian Can you give an update on where TFS is with the implementation of modified date option (VSS option 2) you spoke of in your comments. @ToServe, We haven't made progress on it. Brian On the person's question – Can you give an update on where TFS is with the implementation of modified date option (VSS option 2) you spoke of in your comments. There are so many people who I've talked to at MS user group meetings that say that this is one of the biggest TFS frustrations. What can be so difficult with putting this in? It's been an issue in connect.microsoft.com for years…
https://blogs.msdn.microsoft.com/bharry/2011/09/01/wrapping-up-tfs-11-version-control-improvements/
CC-MAIN-2018-43
en
refinedweb
Introduction:. I had heard that the NRF24L01 was a very cost effective and yet effective little radio device, so I decided to do some research and see what I could come up with. There are a number of Instructables available that show how to use the NRF24L01 radio module with an Arduino. I found that it was difficult to find a simple setup where two Arduino's could communicate in the exact same way. I wanted both setups to each transmit and receive in the same way and that would define the most simplest way of doing both things on two Arduinos. Once I have two Arduinos communicating back and forth, I then have the basic framework to extend them to do other things. This Instructable does not produce the product that my son wants, but does produce the framework for it and many other simple two-way applications. Step 1: The NRF24L01 Module The main features for the NRF24L01 are as follows: Operates in the 2.4HGz ISM band Supply voltage is 3.3V (very important to remember this) The SPI pins are 5V tolerant which is useful for Arduino applications It has 126 selectable channels (frequencies), having channels can help to avoid any possible interference with nearby devices that are using the same frequency. Can be trial and error. Channels selectable in 1MHz increments Modules dimensions (approx): Small board with built in antenna:- 30mm X 16mm Small with vertical antenna mount:- 30mm X 16mm Large with horizontal antenna mount:- 41mm X 16mm (antenna mount protrudes another 7mm) The boards are not breadboard friendly so need to be connected with female dupont leads. There is also an adapter plate, which is still not breadboard friendly, but breaks out the pins to a single row with Vcc and GND on their own. Another advantage of the adapter plate is that it has an on-board 3.3v regulator (and bypass capacitors) which means that you can use a 5v supply. There have been problems noted with just using the 3.3v supply from the Arduino in that you have to solder capacitors onto the board to help with current surges etc. I went for the easier option and used the adapter plate and ran everything from 5v. Adapter plate dimensions:- 27mm X 19mm. Adds approx 17mm to the length of each board and obviously a bit of depth too. It is possible to de-solder the male and female pin headers and just solder the boards together. It would give you a smaller footprint but not for the faint hearted. Step 2: Putting It Together Parts list: - 2 X Arduinos - 2 X Breadboards - 2 X NRF24L01 - [optional] 2 X YL-105 breakout boards for the NRF24L01. This allows for 5v connection and easier wiring - 2 X Momentary switches - 2 X Red LEDs - 2 X Yellow LEDs - 4 X 220 ohm resistors - Jumper wires I use AliExpress for nearly all of my cheap components especially if I require more than one item. They are cheap components from China mainly, but I have not yet had a single bad component. I love using the Arduino and therefore everything I do is low voltage and cheap components are up to the job. However, if I ever moved to something more critical or high voltages etc, then I would most likely source my components from somewhere else. Connections: The module communicates using SPI protocol. Serial Peripheral Interface bus The pin arrangement that was used here was: NRF24L01 Arduino pin VCC 3.3 V If you use the YL-105 breakout board, the Vcc lead can go to the 5v Arduino pin GND GND CS 8 (Can be any unused pin but is defined in the code) CE 7 (Can be any unused pin but is defined in the code) MOSI 11 (Must be the SPI MOSI pin 11 or ICSP pin 4) MISO 12 (Must be the SPI MISO pin 12 or ICSP pin 1) SCK 13 (Must be the SPI SCK pin 13 or ICSP pin 3) Wiring for LEDs and switch: - Arduino pin 2 to Yellow LED long lead - anode - Yellow short lead - cathode to 220ohm resistor, then second resistor lead to GND - Arduino pin 3 to Red LED long lead - anode - Red short lead - cathode to 220ohm resistor, then second resistor lead to GND - Arduino pin 4 to switch, other side of switch to GND The physical build of the two boards are identical. There are minor software differences for the reading and writing pipes for the respective boards. Step 3: The Code The code that I created for both Arduinos is almost identical so I'll only show one of them. All code files are available for download. #include //comes with Arduino #include "RF24.h"//can be found through the IDE: Sketch/Include Library/Manage Libraries/ Search for RF24 and locate RF24 by TMRh20/ more info / Install //set up the button and LEDs #define button 4 #define confirmLed 2 #define led 3 RF24 NRF24L01 (7, 8);//create object called NRF24L01. specifying the CE and CSN pins to be used on the Arduino byte address[] [6] = {"pipe1", "pipe2"};//set addresses of the 2 pipes for read and write boolean buttonState = false;//used for both transmission and receive void setup() { //setup the Arduino pins pinMode(button, INPUT_PULLUP); pinMode(confirmLed, OUTPUT);//yellow LED pinMode(led, OUTPUT);//red LED NRF24L01.begin(); //open the pipes to read and write from board 1 NRF24L01.openWritingPipe(address[0]);//open writing pipe to address pipe 1 NRF24L01.openReadingPipe(1, address[1]);//open reading pipe from address pipe 2 //this is the only difference in the two sketches required //the two lines below are for board two, notice how the reading and writing pipes are reversed //NRF24L01.openReadingPipe(1, address[0]);//open reading pipe from address pipe 1 //NRF24L01.openWritingPipe(address[1]);//open writing pipe to address pipe 2 NRF24L01.setPALevel(RF24_PA_MAX);//set RF power output to minimum, RF24_PA_MIN (change to RF24_PA_MAX if required) NRF24L01.setDataRate(RF24_250KBPS);//set data rate to 250kbps //If the frequency of 110 below is a problem with other wi-fi for you increment by 1 until it is ok //Don't forget that both sets of code must have the same frequency NRF24L01.setChannel(110);//set frequency to channel 110. } void loop() { //Transmit button change TO the other Arduino delay(10); NRF24L01.stopListening(); buttonState = digitalRead(button);//test for button press on THIS board if (buttonState == LOW)//button is pulled up so test for LOW { NRF24L01.write(&buttonState, sizeof(buttonState));//send LOW state to other Arduino board //flash the yellow LED to show progress digitalWrite(confirmLed, HIGH); delay(100); digitalWrite(confirmLed, LOW); } buttonState = HIGH;//reset the button state variable //Receive button change FROM the other Arduino delay(10); NRF24L01.startListening(); if (NRF24L01.available())//do we have transmission from other Arduino board { NRF24L01.read(&buttonState, sizeof(buttonState));//update the variable with new state NRF24L01.stopListening(); } if (buttonState == HIGH)//test the other Arduino's button state { digitalWrite(led, LOW); } else { flashLed();//indicate that the button was pressed on the other board } buttonState = HIGH;//reset the button state variable } //flash the red LED five times void flashLed() { for (int i = 0; i < 5; i++) { digitalWrite(led, HIGH); delay(200); digitalWrite(led, LOW); delay(200); } } Step 4: Arduino Code and Fritzing Files Unless you upload the code onto the two Arduinos at separate times make sure that you have two different COM ports selected otherwise you will have exactly the same code on both. I hope that this may be of interest to you and help a beginner with their first implementation of an NRF24L01. If nothing else, it will give you a working framework that you can build upon. Step 5: Update Based on Possible Problems I have had a couple of comments from Tony in Italia and Andrea where they are both having trouble wiring this circuit up to an Arduino Nano. About a year ago I bought two Nanos but never got around to using them, so I took this opportunity to solder them up and put them to use. I had taken the original build apart, so I recreated it from the Instructable with an Arduino UNO as before, just to make certain that everything worked. I have added a couple of images of this build. It shows the wiring slightly different and may be easier to see. I did have a slight glitch/delay on one of the NRF24L01 boards, but it seemed to sort itself out. I swapped out the UNOs and replaced them with the Nanos, uploaded one sketch to one Nano and the other sketch to the other Nano, and I was expecting problems, but it all worked with no changes to the code. It is important that the different sketches are uploaded onto the two Arduinos. The differences are only slight but very important. I have added a couple of images for this build as well. That leads me to think that there may be either a bad connection or my instructions were ambiguous. I have used YL-105 breakout boards for the NRF24L01s so that I can use 5v. If you are not using the breakout boards then you must use 3.3v. If you push 5v into the NRF24L01, it will most likely die. There seven wires in use on the NRF24L01 board - GND - Positive 3.3v - CS to Arduino pin 8 (defined in the code) - CE to Arduino pin 7 (defined in the code) - MOSI to Arduino pin 11 (mandatory) - MISO to Arduino pin 12 (mandatory) - SCK to Arduino pin 13 (mandatory) - IRQ is not used - Yellow LED long lead - anode, to Arduino pin 2 - Yellow short lead - cathode, to 220ohm resistor, then second resistor lead to GND - Red LED long lead - anode, to Arduino pin 3 Red short lead - cathode, to 220ohm resistor, then second resistor lead to GND - Arduino pin 4 to switch, other side of switch to GND If I ever have a build that does not seem to work, I take it apart and start again, especially with a build that is quite small, that way I eliminate any previous mistakes. The only thing that I can think that may be incorrect on Tony's and Andrea's board is that the MOSI and MISO may have been swapped around. Hope this helps them and anyone else who may be having problems. Step 6: Video Demonstration Added Video uploaded to give a quick demonstration of the setup and use. 8 Discussions I just tried this, and I'm afraid it doesn't work for me, either. The yellow LEDs will flash when the button is pressed for both TX and RX, but that's it. I do know both the NRF24s work, because I've tested them successfully with other applications. I have Arduino 1.8.5 and the latest RF24 libraries. Hii I am Gopi I followed your instructions. But still it does not working. I don't know what the actual problem was.. Seeking solution.. Hi Gopi. Sorry to hear that you are having difficulties putting this together. The whole circuit is quite simple and there are not many things that can be done wrong, so I am sure it isn't much. It is difficult to say what is wrong with your build without knowing how you have built it and what it is not doing correctly. If you have uploaded the sketch and successfully and followed the fritzing circuit and/or watched the video then it should just work. What I have said to others is that the Arduino IDE that I am using is version 1.8.4 and the RF library is RF24 by TMRh20 version 1.3.0. They may be a problem, especially the RF24library. Do your sketches compile and upload correctly? Have you used both sketches (one for each Arduino: They are different) and not just one of them? Are the NRF24L01 boards lit up? Do you have a dodgy lead? As a last resort, and especially as it is a small build, if I had problems, I would strip it all down and start again. Let me know how you get on. Hi, I followed your instructions, but it doesn't work. Even with my NRF24L01 perfectly working, I don't understand what's the problem... What about a a little video-tutorial? It would be really helpful! Aniway, thanks for everything. Cheers, Andrea Hi Andrea. One thought, the Arduino IDE that I am using is version 1.8.4 and the RF library is RF24 by TMRh20 version 1.3.0. I have been trying to get to the video camera all day with no success. My plan now is to make and upload a video tomorrow. Regards Don Hi, I have the same problem of tony58. Any suggestion? It would be great if you could upload a video demonstration, think about it. Thanks, Andrea Hi Andrea. Sorry to hear that you are also having problems. I have updated the Instructable with some more images and possible reasons for your problem. Regards Don Ok, thank you so much, I'll let you know if I make it.
http://www.instructables.com/id/Arduino-and-NRF24L01/
CC-MAIN-2018-30
en
refinedweb
So hello everyone! This is my school project but I'm having a problem in my code. So here is the program i want to create. I want to create a program that can generate 5 numbers from 1-9 then user will guess it. My problem only is on the generating some unique numbers. My point in generating numbers are like there shouldn't be have a same number in generating it. EG.:"1 ,5 ,3 ,2, 6" this is correct while E.G:"2, 5, 4, 3, 2" is wrong. So i already have a code. please help me to fix/build it! Thanks alot. import java.util.ArrayList; import java.util.Arrays; ArrayList<Integer> numbers = new ArrayList<Integer>(); for( a = 0; a < 9; a++) { numbers.add(a+1); } Collections.shuffle(numbers); number1= numbers.get(0) ; number2= numbers.get(1); number3= numbers.get(2); number4= numbers.get(3); number5= numbers.get(4); Please help me. THANKS
https://www.daniweb.com/programming/software-development/threads/406576/help-how-to-generate-unique-random-numbers
CC-MAIN-2018-30
en
refinedweb
Write your own miniature Redis with Python December 28, 2017 15:24 / gevent python redis / 3 comments The other day the idea occurred to me that it would be neat to write a simple Redis-like database server. While I've had plenty of experience with WSGI applications, a database server presented a novel challenge and proved to be a nice practical way of learning how to work with sockets in Python. In this post I'll share what I learned along the way. The goal of my project was to write a simple server that I could use with a task queue project of mine called huey. Huey uses Redis as the default storage engine for tracking enqueued jobs, results of finished jobs, and other things. For the purposes of this post, I've reduced the scope of the original project even further so as not to muddy the waters with code you could very easily write yourself, but if you're curious, you can check out the end result here. The server we'll be building will be able to respond to the following commands: - GET <key> - SET <key> <value> - DELETE <key> - FLUSH - MGET <key1>... <keyn> - MSET <key1> <value1>... <keyn> <valuen> We'll support the following data-types as well: - Strings and Binary Data - Numbers - NULL - Arrays (which may be nested) - Dictionaries (which may be nested) - Error messages To handle multiple clients asynchronously, we'll be using gevent, but you could also use the standard library's SocketServer module with either the ForkingMixin or the ThreadingMixin. Skeleton Let's frame up a skeleton for our server. We'll need the server itself, and a callback to be executed when a new client connects. Additionally we'll need some kind of logic to process the client request and to send a response. Here's a start: from gevent import socket from gevent.pool import Pool from gevent.server import StreamServer from collections import namedtuple from io import BytesIO from socket import error as socket_error # We'll use exceptions to notify the connection-handling loop of problems. class CommandError(Exception): pass class Disconnect(Exception): pass Error = namedtuple('Error', ('message',)) class ProtocolHandler(object): def handle_request(self, socket_file): # Parse a request from the client into it's component parts. pass def write_response(self, socket_file, data): # Serialize the response data and send it to the client. pass class Server(object): def __init__(self, host='127.0.0.1', port=31337, max_clients=64): self._pool = Pool(max_clients) self._server = StreamServer( (host, port), self.connection_handler, spawn=self._pool) self._protocol = ProtocolHandler() self._kv = {} def connection_handler(self, conn, address): # Convert "conn" (a socket object) into a file-like object. socket_file = conn.makefile('rwb') # Process client requests until client disconnects. while True: try: data = self._protocol.handle_request(socket_file) except Disconnect: break try: resp = self.get_response(data) except CommandError as exc: resp = Error(exc.args[0]) self._protocol.write_response(socket_file, resp) def get_response(self, data): # Here we'll actually unpack the data sent by the client, execute the # command they specified, and pass back the return value. pass def run(self): self._server.serve_forever() The above code is hopefully fairly clear. We've separated concerns so that the protocol handling is in it's own class with two public methods: handle_request and write_response. The server itself uses the protocol handler to unpack client requests and serialize server responses back to the client. The get_response() method will be used to execute the command initiatied by the client. Taking a closer look at the code of the connection_handler() method, you can see that we obtain a file-like wrapper around the socket object. This wrapper allows us to abstract away some of the quirks one typically encounters working with raw sockets. The function enters an endless loop, reading requests from the client, sending responses, and finally exiting the loop when the client disconnects (indicated by read() returning an empty string). We use typed exceptions to handle client disconnects and to notify the user of errors processing commands. For example, if the user makes an improperly formatted request to the server, we will raise a CommandError, which is serialized into an error response and sent to the client. Before going further, let's discuss how the client and server will communicate. Wire protocol The first challenge I faced was how to handle sending binary data over the wire. Most examples I found online were pointless echo servers that converted the socket to a file-like object and just called readline(). If I wanted to store some pickled data or strings with new-lines, I would need to have some kind of serialization format. After wasting time trying to invent something suitable, I decided to read the documentation on the Redis protocol, which turned out to be very simple to implement and has the added benefit of supporting a couple different data-types. The Redis protocol uses a request/response communication pattern with the clients. Responses from the server will use the first byte to indicate data-type, followed by the data, terminated by a carriage-return/line-feed. Let's fill in the protocol handler's class so that it implements the Redis protocol. class ProtocolHandler(object): def __init__(self): self.handlers = { '+': self.handle_simple_string, '-': self.handle_error, ':': self.handle_integer, '$': self.handle_string, '*': self.handle_array, '%': self.handle_dict} def handle_request(self, socket_file): first_byte = socket_file.read(1) if not first_byte: raise Disconnect() try: # Delegate to the appropriate handler based on the first byte. return self.handlers[first_byte](socket_file) except KeyError: raise CommandError('bad request') def handle_simple_string(self, socket_file): return socket_file.readline().rstrip('\r\n') def handle_error(self, socket_file): return Error(socket_file.readline().rstrip('\r\n')) def handle_integer(self, socket_file): return int(socket_file.readline().rstrip('\r\n')) def handle_string(self, socket_file): # First read the length ($<length>\r\n). length = int(socket_file.readline().rstrip('\r\n')) if length == -1: return None # Special-case for NULLs. length += 2 # Include the trailing \r\n in count. return socket_file.read(length)[:-2] def handle_array(self, socket_file): num_elements = int(socket_file.readline().rstrip('\r\n')) return [self.handle_request(socket_file) for _ in range(num_elements)] def handle_dict(self, socket_file): num_items = int(socket_file.readline().rstrip('\r\n')) elements = [self.handle_request(socket_file) for _ in range(num_items * 2)] return dict(zip(elements[::2], elements[1::2])) For the serialization side of the protocol, we'll do the opposite of the above: turn Python objects into their serialized counterparts! class ProtocolHandler(object): # ... above methods omitted ... def write_response(self, socket_file, data): buf = BytesIO() self._write(buf, data) buf.seek(0) socket_file.write(buf.getvalue()) socket_file.flush() def _write(self, buf, data): if isinstance(data, str): data = data.encode('utf-8') if isinstance(data, bytes): buf.write('$%s\r\n%s\r\n' % (len(data), data)) elif isinstance(data, int): buf.write(':%s\r\n' % data) elif isinstance(data, Error): buf.write('-%s\r\n' % error.message) elif isinstance(data, (list, tuple)): buf.write('*%s\r\n' % len(data)) for item in data: self._write(buf, item) elif isinstance(data, dict): buf.write('%%%s\r\n' % len(data)) for key in data: self._write(buf, key) self._write(buf, data[key]) elif data is None: buf.write('$-1\r\n') else: raise CommandError('unrecognized type: %s' % type(data)) An additional benefit of keeping the protocol handling in its own class is that we can re-use the handle_request and write_response methods to build a client library. Implementing Commands The Server class we mocked up now needs to have it's get_response() method implemented. Commands will be assumed to be sent by the client as either simple strings or an array of command arguments, so the data parameter passed to get_response() will either be bytes or a list. To simplify handling, if data is a simple string, we'll convert it to a list by splitting on whitespace. The first argument will be the command name, with any additional arguments belonging to the specified command. As we did with the mapping of the first byte to the handlers in the ProtocolHandler, let's create a mapping of command to callback in the Server: class Server(object): def __init__(self, host='127.0.0.1', port=31337, max_clients=64): self._pool = Pool(max_clients) self._server = StreamServer( (host, port), self.connection_handler, spawn=self._pool) self._protocol = ProtocolHandler() self._kv = {} self._commands = self.get_commands() def get_commands(self): return { 'GET': self.get, 'SET': self.set, 'DELETE': self.delete, 'FLUSH': self.flush, 'MGET': self.mget, 'MSET': self.mset} def get_response(self, data): if not isinstance(data, list): try: data = data.split() except: raise CommandError('Request must be list or simple string.') if not data: raise CommandError('Missing command') command = data[0].upper() if command not in self._commands: raise CommandError('Unrecognized command: %s' % command) return self._commands[command](*data[1:]) Our server is almost finished! We just need to implement the six command methods defined in the get_commands() method: class Server(object): def get(self, key): return self._kv.get(key) def set(self, key, value): self._kv[key] = value return 1 def delete(self, key): if key in self._kv: del self._kv[key] return 1 return 0 def flush(self): kvlen = len(self._kv) self._kv.clear() return kvlen def mget(self, *keys): return [self._kv.get(key) for key in keys] def mset(self, *items): data = zip(items[::2], items[1::2]) for key, value in data: self._kv[key] = value return len(data) That's it! Our server is now ready to start processing requests. In the next section we'll implement a client to interact with the server. Client To interact with the server, let's re-use the ProtocolHandler class to implement a simple client. The client will connect to the server and send commands encoded as lists. We'll re-use both the write_response() and the handle_request() logic for encoding requests and processing server responses respectively. class Client(object): def __init__(self, host='127.0.0.1', port=31337): self._protocol = ProtocolHandler() self._socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self._socket.connect((host, port)) self._fh = self._socket.makefile('rwb') def execute(self, *args): self._protocol.write_response(self._fh, args) resp = self._protocol.handle_request(self._fh) if isinstance(resp, Error): raise CommandError(resp.message) return resp With the execute() method, we can pass an arbitrary list of parameters which will be encoded as an array and sent to the server. The response from the server is parsed and returned as a Python object. For convenience, we can write client methods for the individual commands: class Client(object): # ... def get(self, key): return self.execute('GET', key) def set(self, key, value): return self.execute('SET', key, value) def delete(self, key): return self.execute('DELETE', key) def flush(self): return self.execute('FLUSH') def mget(self, *keys): return self.execute('MGET', *keys) def mset(self, *items): return self.execute('MSET', *items) To test out our client, let's configure our Python script to start up a server when executed directly from the command-line: # Add this to bottom of module: if __name__ == '__main__': from gevent import monkey; monkey.patch_all() Server().run() Testing the Server To test the server, just execute the server's Python module from the command line. In another terminal, open up a Python interpreter and import the Client class from the server's module. Instantiating the client will open a connection and you can start running commands! >>> from server_ex import Client >>> client = Client() >>> client.mset('k1', 'v1', 'k2', ['v2-0', 1, 'v2-2'], 'k3', 'v3') 3 >>> client.get('k2') ['v2-0', 1, 'v2-2'] >>> client.mget('k3', 'k1') ['v3', 'v1'] >>> client.delete('k1') 1 >>> client.get('k1') >>> client.delete('k1') 0 >>> client.set('kx', {'vx': {'vy': 0, 'vz': [1, 2, 3]}}) 1 >>> client.get('kx') {'vx': {'vy': 0, 'vz': [1, 2, 3]}} >>> client.flush() 2 The code presented in this post is absolutely for demonstration purposes only. I hope you enjoyed reading about this project as much as I enjoyed writing about it. You can find a complete copy of the code here. To extend the project, you might consider: - Add more commands! - Use the protocol handler to implement an append-only command log - More robust error handling - Allow client to close connection and re-connect - Logging - Re-write to use the standard library's SocketServerand ThreadingMixin Commenting has been closed, but please feel free to contact me
https://pythondigest.ru/view/31992/
CC-MAIN-2018-43
en
refinedweb